Introduction To Networking
Introduction
A network is simply a group of two or more computers linked together. Many types of
networks exist, but the most common are Local-Area Networks (LANs) and Wide-Area
Networks (WANs).
In a LAN, computers are connected together within a "local" area (for example, an office or
home). In a WAN, computers are further apart and are connected via telephone/communication
lines, radio waves or other means of connection.
How are Networks Categorized?
Networks are usually classified using three properties: Topology, Protocol and Architecture.
Topology specifies the geometric arrangement of the network. Common topologies are the bus,
ring and star.
Protocol specifies a common set of rules and signals that the computers on the network use to
communicate. Most networks use Ethernet, but some networks may use IBM's Token Ring
protocol. We recommend Ethernet for both home and office networking.
Architecture refers to one of the two major types of network architecture: peer-to-peer or
client/server. In a peer-to-peer networking configuration, there is no server; computers
simply connect with each other in a workgroup to share files, printers and Internet access.
This is most commonly found in home configurations and is only practical for workgroups of a
dozen or fewer computers. In a client/server network there is usually an NT Domain Controller, to
which all of the computers log on. This server can provide various services, including centrally
routed Internet access, mail (including e-mail), file sharing and printer access, as well as
ensuring security across the network. This is most commonly found in corporate configurations,
where network security is essential.
Network Topologies
Introduction
Network topologies can take a bit of time to understand when you're new to this kind of cool
stuff, but it's very important to fully understand them. They are key elements to understanding
and troubleshooting networks and will help you decide what actions to take when you're faced
with network problems.
I will try to be as simple as possible and give some examples you can relate to, so let's get stuck
right into this stuff!
The Stuff :)
There are two types of topologies: physical and logical. The physical topology of a network
refers to the layout of cables, computers and other peripherals. Try to imagine yourself in a room
with a small network: you can see network cables coming out of every computer that is part of the
network, and those cables plug into a hub or switch. What you're looking at is the physical
topology of that network!
Logical topology is the method used to pass the information between the computers. In other
words, looking at that same room, if you were to try to see how the network works with all the
computers talking (think of the computers generating traffic and packets of data going
everywhere on the network) you would be looking at the logical part of the network. The way the
computers will be talking to each other and the direction of the traffic is controlled by the various
protocols (like Ethernet) or, if you like, rules.
If we used token ring, then the physical topology would have to change to meet the requirements
of the way the token ring protocol works (logically).
If it's all still confusing, consider this: the physical topology describes the layout of the network,
just like a map shows the layout of various roads, while the logical topology describes how the data
is sent across the network, or how the cars are able to travel (their direction and speed) on every
road on the map.
The most common types of physical topologies, which we are going to analyse, are: Bus,
Hub/Star and Ring
The Physical Bus Topology
Bus topology is fairly old news and you probably won't be seeing much of these around in any
modern office or home.
With the bus topology, all workstations are connected directly to the main backbone that carries the
data. Traffic generated by any computer travels across the backbone and is received by all
workstations. This works well in a small network of 2-5 computers, but as the number of
computers increases, so does the network traffic, and this can greatly decrease the performance and
available bandwidth of your network.
As you can see in the above example, all computers are attached to a continuous cable which
connects them in a straight line. The arrows clearly indicate that the packet generated by Node 1
is transmitted to all computers on the network, regardless of the destination of this packet.
Also, because of the way the electrical signals are transmitted over this cable, its ends must be
terminated by special terminators that work as "shock absorbers", absorbing the signal so it won't
reflect back to where it came from. The value of 50 Ohms was selected after carefully taking
into consideration all the electrical characteristics of the cable used, the voltage of the signal
that runs through the cable, the maximum and minimum length of the bus, and a few other factors.
If the bus (the long yellow cable) is damaged anywhere in its path, then it will most certainly
cause the network to stop working or, at the very least, cause big communication problems
between the workstations.
Thinnet (10Base2), also known as coax cable (black in colour), and Thicknet (10Base5, yellow
in colour) are used in this type of topology.
The Physical HUB or STAR Topology
The star or hub topology is one of the most common network topologies, found in most offices
and home networks. It has become very popular, in contrast to the bus type (which we just spoke
about), because of its low cost and the ease of troubleshooting.
The advantage of the star topology is that if one computer on the star topology fails, then only the
failed computer is unable to send or receive data. The remainder of the network functions
normally.
The disadvantage of using this topology is that because each computer is connected to a central
hub or switch, if this device fails, the entire network fails!
A classic example of this type of topology is UTP (10Base-T) cable, which normally has a blue
colour. Personally I find it boring, so I decided to go out and get myself green, red and yellow
colours :)
The Physical Ring Topology
In the ring topology, computers are connected on a single circle of cable. Unlike the bus
topology, there are no terminated ends. The signals travel around the loop in one direction and
pass through each computer, which acts as a repeater to boost the signal and send it to the next
computer. On a larger scale, multiple LANs can be connected to each other in a ring topology by
using Thicknet coaxial or fiber-optic cable.
The method by which the data is transmitted around the ring is called token passing. IBM's token
ring uses this method. A token is a special series of bits that contains control information.
Possession of the token allows a network device to transmit data to the network. Each network
has only one token.
The Physical Mesh Topology
In a mesh topology, each computer is connected to every other computer by a separate cable. This
configuration provides redundant paths through the network, so if one computer blows up, you
don't lose the network :) On a large scale, you can connect multiple LANs using mesh topology
with leased telephone lines, Thicknet coaxial cable or fiber-optic cable.
Again, the big advantage of this topology is its backup capabilities by providing multiple paths
through the network.
The Physical Hybrid Topology
With the hybrid topology, two or more topologies are combined to form a complete network. For
example, a hybrid topology could be the combination of a star and bus topology. These are also
the most common in use.
Star-Bus
In a star-bus topology, several star topology networks are linked to a bus connection. In this
topology, if a computer fails, it will not affect the rest of the network. However, if the central
component, or hub, that attaches all computers in a star, fails, then you have big problems since
no computer will be able to communicate.
Star-Ring
In the Star-Ring topology, the computers are connected to a central component as in a star
network. These components, however, are wired to form a ring network.
Like the star-bus topology, if a single computer fails, it will not affect the rest of the network. By
using token passing, each computer in a star-ring topology has an equal chance of
communicating. This allows for greater network traffic between segments than in a star-bus
topology.
Introduction To Data Transmission
Introduction
Routable protocols enable the transmission of data between computers in different segments of a
network. However, high volumes of certain kinds of network traffic can affect network efficiency
because they slow down transmission speed. The amount of network traffic generated varies with
the 3 types of data transmissions:

- Broadcast
- Multicast
- Unicast
We are going to have a look at each one of these data transmissions because it's very important to
know the type of traffic they generate, what they are used for and why they exist on the network.
Before we proceed, please note that understanding the OSI Model (especially Layer 2 and 3),
Ethernet and the way a packet is structured is fundamental to understanding a broadcast, multicast
or unicast.
Media Access Control - MAC Addresses
Introduction
Media Access Control (MAC) addresses are talked about in various sections on the site, such as
the OSI-Layer 2, Multicast, Broadcast and Unicast. We are going to analyse them in depth here
so we can get a firm understanding of them since they are part of the fundamentals of networking.
MAC addresses are physical addresses, unlike IP addresses which are logical addresses. Logical
addresses require you to load special drivers and protocols in order to be able to configure your
network card/computer with an IP Address, whereas a MAC address doesn't require any drivers
whatsoever. The reason for this is that the MAC address is actually "burnt" into your network
card's memory chipset.
The Reason for MAC
Each computer on a network needs to be identified in some way. If you're thinking of IP
addresses, then you're correct to some extent, because an IP address does identify one unique
machine on a network, but that is not enough. Got you mixed up?
Check the diagram and explanation below to see why :
You see, the IP address of a machine exists at Layer 3 of the OSI model and, when a packet
reaches the computer, it travels from Layer 1 upwards, so we need to be able to identify the
computer before Layer 3.
This is where the MAC address - Layer 2 - comes into the picture. All machines on a network
listen for packets that have their MAC address in the destination field of the packet (they also
listen for broadcasts and other stuff, but that's analysed in other sections). The Physical Layer
picks up the electrical signals on the network and passes the bits up to the Datalink Layer, where
the frame is reassembled. If the packet is destined for the computer, then the MAC address in the
destination field of the packet will match, so the computer accepts it and passes it to the layer
above (Layer 3) which, in turn, checks the network address of the packet (the IP Address) to make
sure it matches the network address with which the computer has been configured.
Looking at a MAC
Let's now have a look at a MAC address and see what it looks like! I have taken my workstation's
MAC address as an example:
When looking at a MAC address, you will always see it in hex format. It is very rare that a
MAC address is represented in binary format because it is simply too long, as we will see
further on.
When a vendor, e.g. Intel, creates network cards, they don't just give them any MAC address they
like; this would create big confusion in identifying who created a network card and could
possibly result in a clash with another MAC address from another vendor, e.g. D-Link, who
happened to choose the same MAC address for one of their network cards!
To make sure problems like this are not experienced, the IEEE split the MAC address in
half: the first half identifies the vendor (the Organizationally Unique Identifier, or OUI), and the
second half is for the vendor to allocate as serial numbers:
The vendor codes are listed in RFC 1700. You might find a particular vendor having more
than just one code; this is because of the wide range of products they might have. They just apply
for more as they need!
Keep in mind that even though the MAC address is "burnt" into the network card's memory, some
vendors will allow you to download special programs to change the second half of the MAC
address on the card. This is because the vendors actually reuse the same MAC addresses for their
network cards; they create so many that they run out of numbers! But at the same time, the
chances of you buying two network cards which have the same MAC address are so small
that it's almost impossible!
Let's start talking bits and bytes!
Now that we know what a MAC address looks like, we need to start analysing it. The MAC address
of any network card is always the same length: 6 bytes, or 48 bits. If you're scratching your head
wondering where these figures came from, then just have a look at the picture below, which makes
it a bit easier to understand:
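If it also helps to see this in code, here's a quick sketch of my own (not part of the original material; the MAC address used is made up) showing the 6-byte / 48-bit structure and the vendor/serial split we just discussed:

```python
# A sketch showing why a MAC address is 6 bytes / 48 bits long,
# and how it splits into a vendor half (OUI) and a serial half.
mac = "00-02-A5-44-9A-0D"          # hypothetical MAC address

octets = mac.split("-")            # 6 octets of 2 hex digits each
print(len(octets))                 # -> 6 bytes
print(len(octets) * 8)             # -> 48 bits (each byte is 8 bits)

vendor_half = octets[:3]           # first 3 bytes: vendor code (OUI)
serial_half = octets[3:]           # last 3 bytes: vendor-assigned serial
print(vendor_half, serial_half)    # -> ['00', '02', 'A5'] ['44', '9A', '0D']

# The same address written out in binary is 48 digits long,
# which is why MAC addresses are always shown in hex:
print(" ".join(f"{int(o, 16):08b}" for o in octets))
```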
So that completes the discussion regarding MAC addresses! I hope you have understood it all,
because it's very important and it will help you expand your knowledge and truly understand what
happens in a network!
Unicast
Introduction
Compared to broadcasts and multicasts, a unicast is very simple and one of the most common
data transmissions in a network.
The Reason for Unicast
Well, it's pretty obvious why they came up with unicasts. Imagine trying to send data between 2
computers on a network using broadcasts! All you would get would be a very slow transfer and
possibly a congested network with low bandwidth availability.
Data transfers are almost always unicasts. You have the sender, e.g. a web server, and the
receiver, e.g. a workstation. Data is transferred between these two hosts only, whereas a broadcast
or a multicast is destined for either everyone or just a group of computers.
In the example above, my workstation sends a request to the Windows 2000 Server. The request is a
simple unicast because it's directed to one machine (the server) and nothing else. You just need
to keep in mind that because we are talking about an Ethernet network, the traffic, hence the
packets, are seen by all machines (in this case the Linux server as well), but they will not process
them once they see that the destination MAC address in the packets does not match their own and
is also not set to FF:FF:FF:FF:FF:FF, which would indicate that the packet is a broadcast.
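To make that accept/drop decision clearer, here's a rough sketch in Python (my own illustration, with made-up MAC addresses) of the logic each network card applies to the frames it sees:

```python
# A sketch of the decision a network card makes for every frame
# it sees on a shared Ethernet segment.
BROADCAST = "FF:FF:FF:FF:FF:FF"

def should_process(destination_mac: str, my_mac: str) -> bool:
    """Accept the frame if it is addressed to us or to everyone."""
    destination_mac = destination_mac.upper()
    return destination_mac == my_mac.upper() or destination_mac == BROADCAST

# The Linux server in the example drops the unicast meant for the
# Windows 2000 server, but would accept a broadcast:
print(should_process("00:02:A5:44:9A:0D", "00:02:A5:44:9A:0D"))  # True  - it's for us
print(should_process("00:02:A5:44:9A:0D", "00:0C:29:11:22:33"))  # False - dropped
print(should_process(BROADCAST, "00:0C:29:11:22:33"))            # True  - broadcast
```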
There really isn't much more to say for unicasts... so I guess I'll stop right here :)
Introduction To Multicast
Introduction
To understand what we are going to talk about, you must be familiar with how MAC addresses
are structured and how they work. The MAC Addresses page is available to help you learn more
about them.
A multicast is similar to a broadcast in the sense that its target is a number of machines on a
network, but not all. Where a broadcast is directed to all hosts on the network, a multicast is
directed to a group of hosts. The hosts can choose whether they wish to participate in the
multicast group (often done with the Internet Group Management Protocol), whereas in a
broadcast, all hosts are part of the broadcast group whether they like it or not :).
As you are aware, each host on an Ethernet network has a unique MAC address, so here's the
million dollar question: how do you talk to a group of hosts (our multicast group), where each
host has a different MAC address, and at the same time ensure that the other hosts, which are not
part of the multicast group, don't process the information? You will soon know exactly how all
this works.
To keep things in perspective and make it easy to understand, we are going to concentrate only on
an Ethernet network using the IP protocol, which is what 80-90% of home networks and offices
use.
Breaking things down...
In order to explain multicasting the best I can and to make it easier for you to understand, I decided
to break it down into 3 sections:
1) Hardware/Ethernet Multicasting
2) IP Multicasting
3) Mapping IP Multicast to Ethernet Multicast
A typical multicast on an Ethernet network, using the TCP/IP protocol, consists of two parts:
Hardware/Ethernet multicast and IP Multicast. Later on I will talk about Mapping IP Multicast to
Ethernet Multicast which is really what happens with multicasting on our Ethernet network using
the TCP/IP protocol.
The brief diagram below shows you the relationship between the 3 and how they complete the
multicasting model:
Hardware/Ethernet Multicasting
When a computer joins a multicast group, it needs to be able to distinguish between normal
unicasts (which are packets directed to one computer or one MAC address) and multicasts. With
hardware multicasting, the network card is configured, via its drivers, to watch out for particular
MAC addresses (in this case, multicast MAC addresses) apart from its own. When the network
card picks up a packet which has a destination MAC that matches any of the multicast MAC
addresses, it will pass it to the upper layers for further processing.
And this is how it's done:
Ethernet uses the low-order bit of the high-order octet to distinguish conventional unicast
addresses from multicast addresses: a unicast address has this bit set to ZERO (0), whereas a
multicast address has it set to ONE (1).
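Here's a small Python sketch of my own (the unicast MAC is made up) that tests exactly this bit:

```python
# Test the rule above: the low-order bit of the first (high-order)
# octet is 0 for unicast, 1 for multicast.
def is_multicast_mac(mac: str) -> bool:
    first_octet = int(mac.replace(":", "-").split("-")[0], 16)
    return bool(first_octet & 0x01)   # low-order bit of the high-order octet

print(is_multicast_mac("00-02-A5-44-9A-0D"))  # False - 0x00 ends in binary 0 (unicast)
print(is_multicast_mac("01-00-5E-00-00-05"))  # True  - 0x01 ends in binary 1 (multicast)
print(is_multicast_mac("FF-FF-FF-FF-FF-FF"))  # True  - a broadcast also has this bit set
```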
To understand this, we need to analyse the destination MAC address of a unicast and multicast
packet, so you can see what we are talking about:
When a normal (unicast) packet is put on the network by a computer, it contains the Source and
Destination MAC address, found in the 2nd Layer of the OSI model. The following picture is an
example of my workstation (192.168.0.6) sending a packet to my network's gateway
(192.168.0.5):
Now let's analyse the destination MAC address:
When my gateway receives the packet, it knows it's a unicast packet as explained in the above
picture.
Let's now have a look at the MAC address of a multicast packet. Keep in mind, a multicast packet
is not directed to one host but to a number of hosts, so the destination MAC address will not match
the unique MAC address of any computer; rather, the computers which are part of the multicast
group will recognise the destination MAC address and accept it for processing.
The following multicast packet was sent from my NetWare server. Notice the destination MAC
address (it's a multicast):
Analysis of a multicast destination MAC address:
So now you should be able to understand how computers can differentiate between a normal
(unicast) packet and a multicast packet. Again, the destination MAC address 01-00-5E-00-00-05 is
not the MAC address of a particular host computer, but a MAC address that can be recognised
by computers that are part of the multicast group. I should also note that you will never find a
source address that is a multicast MAC address; the source address will always be a real one, to
identify which computer the packet came from.
The IEEE used a special rule to determine the various MAC addresses that are considered for
multicasting. This rule is covered in the last section of this page, but you don't need to know it
now in order to understand hardware multicasting. Using this special rule, it was determined that
MAC address 01:00:5E:00:00:05 would be used for the OSPF protocol, which happens to be a
routing protocol, and this MAC address also maps to an IP address, which is analysed in IP
Multicast.
IP Multicast
The IP Multicast is the second part of multicasting which, combined with the hardware
multicasting, gives us a multicasting model that works for our Ethernet network. If hardware
multicasting fails to work, then the packet will never arrive at the network layer upon which IP
multicasting is based, so the whole model fails.
With IP multicasting, the hardware multicasting MAC address is mapped to an IP address. Once
Layer 2 (Datalink) picks the multicast packet up from the network (because it recognises it, as the
destination MAC address is a multicast), it strips the MAC addresses off and sends the rest to the
layer above, which is the Network Layer. At that point, the Network Layer needs to be able to
understand it's dealing with a multicast, so the IP address is set in a way that allows the computer
to see it as a multicast datagram. A host may send multicast datagrams to a multicast group
without being a member.
Multicasts are used a lot between routers so they can discover each other on an IP network. For
example, an Open Shortest Path First (OSPF) router sends a "hello" packet to other OSPF routers
on the network. The OSPF router must send this "hello" packet to an assigned multicast address,
which is 224.0.0.5, and the other routers will respond.
IP Multicast uses Class D IP Addresses:
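A quick sketch of my own of the Class D check: an address is Class D when its first octet falls between 224 and 239, i.e. its first four bits are 1110.

```python
# Check whether an IP address falls in the Class D (multicast)
# range, 224.0.0.0 - 239.255.255.255.
def is_class_d(ip: str) -> bool:
    first_octet = int(ip.split(".")[0])
    return 224 <= first_octet <= 239   # 1110 0000 through 1110 1111

print(is_class_d("224.0.0.5"))    # True  - the OSPF multicast address
print(is_class_d("192.168.0.5"))  # False - a plain Class C unicast address
```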
Let's have a look at an example so we can understand that a bit better:
The picture below is a screenshot from my packet sniffer; it shows a multicast packet which was
sent from my NetWare server. Notice the destination IP address:
The screenshot above shows the packet which was captured; it simply displays a quick
summary of what was caught. But when we look on the left, we see the above packet in much
more detail.
You can clearly see the markings I have put at the bottom which show you that the destination IP
for this packet is IP Address 224.0.0.5. This corresponds to a multicast IP and therefore is a
multicast packet.
The MAC header also shows a destination MAC address of 01-00-5E-00-00-05 which we
analysed in the previous section to show you how this is identified as a multicast packet at Layer
2 (Datalink Layer).
Some examples of IP multicast addresses:
224.0.0.0 Base Address (Reserved) [RFC1112,JBP]
224.0.0.1 All Systems on this Subnet [RFC1112,JBP]
224.0.0.2 All Routers on this Subnet [JBP]
224.0.0.3 Unassigned [JBP]
224.0.0.4 DVMRP Routers [RFC1075,JBP]
224.0.0.5 OSPFIGP All Routers [RFC2328,JXM1]
Remember that these IP addresses have been assigned by IANA!
Now all that's left is to explain how the IP multicast and MAC multicast map between each
other...
Mapping IP Multicast to Ethernet Multicast
The last part of multicast which combines the Hardware Multicasting and IP Multicasting is the
Mapping between them. There is a rule for the mapping, and this is it:
To map an IP multicast address to the corresponding Hardware/Ethernet multicast address,
place the low-order 23 bits of the IP multicast address into the low-order 23 bits of the special
Ethernet multicast address. The rest of the high-order bits are defined by the IEEE (yellow colour
in the example).
The above rule basically determines the Hardware MAC address. Let's have a look at a real
example to understand this.
We are going to use Multicast IP Address 224.0.0.5 - a multicast for the OSPF routing protocol.
The picture below shows us the analysis of the IP address in binary so we can clearly see all the
bits:
It might seem a bit confusing at first, but let's break it down:
We have an IP address of 224.0.0.5. This is then converted into binary so we can clearly see the
mapping of the 23 bits into the MAC address. The part of the MAC address which is in
yellow has been defined by the IEEE. So the yellow and pink lines together make up the one MAC
address shown in binary mode; then we convert it from binary to hex and that's about it!
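If you prefer code to diagrams, here's a sketch of that exact rule (my own illustration, not part of the original article):

```python
# Map a multicast IP to its Ethernet MAC: take the low-order 23 bits
# of the IP address and drop them into the IEEE-defined Ethernet
# multicast prefix 01-00-5E.
def multicast_ip_to_mac(ip: str) -> str:
    _, b, c, d = (int(x) for x in ip.split("."))
    low23 = ((b & 0x7F) << 16) | (c << 8) | d   # keep only the low 23 bits
    return "01-00-5E-{:02X}-{:02X}-{:02X}".format(
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(multicast_ip_to_mac("224.0.0.5"))   # -> 01-00-5E-00-00-05 (OSPF)
print(multicast_ip_to_mac("224.0.0.9"))   # -> 01-00-5E-00-00-09 (RIP2)
```

Note that because only 23 of the 28 significant IP bits survive the mapping, several multicast IP addresses can share the same MAC address.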
NOTE
You should keep in mind that multicast routers should not forward any multicast datagram with a
destination address in the range 224.0.0.0 to 224.0.0.255. The next page (the multicast
list) gives a bit more information on this.
This just about does it for multicasting!
Multicast IP List
Introduction
This page contains the well-known multicast IP addresses and shows what each of them is mapped to.
Should you ever use a packet sniffer to try and see what's on the network and you capture a
packet with a destination IP address of 224.X.X.X, simply look it up in this list and you will
know what the purpose of that packet was :)
Internet Multicast Addresses
Host Extensions for IP Multicasting [RFC1112] specifies the extensions required of a host
implementation of the Internet Protocol (IP) to support multicasting. Current addresses are listed
below.
The range of addresses between 224.0.0.0 and 224.0.0.255, inclusive, is reserved for the use of
routing protocols and other low-level topology discovery or maintenance protocols, such as
gateway discovery and group membership reporting. Multicast routers should not forward
any multicast datagram with destination addresses in this range, regardless of its TTL.
224.0.0.0 Base Address (Reserved) [RFC1112,JBP]
224.0.0.1 All Systems on this Subnet [RFC1112,JBP]
224.0.0.2 All Routers on this Subnet [JBP]
224.0.0.3 Unassigned [JBP]
224.0.0.4 DVMRP Routers [RFC1075,JBP]
224.0.0.5 OSPFIGP All Routers [RFC1583,JXM1]
224.0.0.6 OSPFIGP Designated Routers [RFC1583,JXM1]
224.0.0.7 ST Routers [RFC1190,KS14]
224.0.0.8 ST Hosts [RFC1190,KS14]
224.0.0.9 RIP2 Routers [RFC1723,GSM11]
224.0.0.10 IGRP Routers [Dino Farinacci]
224.0.0.11 Mobile-Agents [Bill Simpson]
224.0.0.12 DHCP Server / Relay Agent [RFC1884]
224.0.0.13 - 224.0.0.255 Unassigned [JBP]
224.0.1.0 VMTP Managers Group [RFC1045,DRC3]
224.0.1.1 NTP Network Time Protocol [RFC1119,DLM1]
224.0.1.2 SGI-Dogfight [AXC]
224.0.1.3 Rwhod [SXD]
224.0.1.4 VNP [DRC3]
224.0.1.5 Artificial Horizons - Aviator [BXF]
224.0.1.6 NSS - Name Service Server [BXS2]
224.0.1.7 AUDIONEWS - Audio News Multicast [MXF2]
224.0.1.8 SUN NIS+ Information Service [CXM3]
224.0.1.9 MTP Multicast Transport Protocol [SXA]
224.0.1.10 IETF-1-LOW-AUDIO [SC3]
224.0.1.11 IETF-1-AUDIO [SC3]
224.0.1.12 IETF-1-VIDEO [SC3]
224.0.1.13 IETF-2-LOW-AUDIO [SC3]
224.0.1.14 IETF-2-AUDIO [SC3]
224.0.1.15 IETF-2-VIDEO [SC3]
224.0.1.16 MUSIC-SERVICE [Guido van Rossum]
224.0.1.17 SEANET-TELEMETRY [Andrew Maffei]
224.0.1.18 SEANET-IMAGE [Andrew Maffei]
224.0.1.19 MLOADD [Braden]
224.0.1.20 any private experiment [JBP]
224.0.1.21 DVMRP on MOSPF [John Moy]
224.0.1.22 SVRLOC [Veizades]
224.0.1.23 XINGTV <[email protected]>
224.0.1.24 microsoft-ds <[email protected]>
224.0.1.25 nbc-pro <[email protected]>
224.0.1.26 nbc-pfn <[email protected]>
224.0.1.27 lmsc-calren-1 [Uang]
224.0.1.28 lmsc-calren-2 [Uang]
224.0.1.29 lmsc-calren-3 [Uang]
224.0.1.30 lmsc-calren-4 [Uang]
224.0.1.31 ampr-info [Janssen]
224.0.1.32 mtrace [Casner]
224.0.1.33 RSVP-encap-1 [Braden]
224.0.1.34 RSVP-encap-2 [Braden]
224.0.1.35 SVRLOC-DA [Veizades]
224.0.1.36 rln-server [Kean]
224.0.1.37 proshare-mc [Lewis]
224.0.1.38 - 224.0.1.255 Unassigned [JBP]
224.0.2.1 "rwho" Group (BSD) (unofficial) [JBP]
224.0.2.2 SUN RPC PMAPPROC_CALLIT [BXE1]
224.0.3.000-224.0.3.255 RFE Generic Service [DXS3]
224.0.4.000-224.0.4.255 RFE Individual Conferences [DXS3]
224.0.5.000-224.0.5.127 CDPD Groups [Bob Brenner]
224.0.5.128-224.0.5.255 Unassigned [IANA]
224.0.6.000-224.0.6.127 Cornell ISIS Project [Tim Clark]
224.0.6.128-224.0.6.255 Unassigned [IANA]
224.0.7.000-224.0.7.255 Where-Are-You [Simpson]
224.0.8.000-224.0.8.255 INTV [Tynan]
224.0.9.000-224.0.9.255 Internet Railroad [Malamud]
224.1.0.0-224.1.255.255 ST Multicast Groups [RFC1190,KS14]
224.2.0.0-224.2.255.255 Multimedia Conference Calls [SC3]
224.252.0.0-224.255.255.255 DIS transient groups [Joel Snyder]
232.0.0.0-232.255.255.255 VMTP transient groups [RFC1045,DRC3]
These addresses are listed in the Domain Name Service under MCAST.NET
and 224.IN-ADDR.ARPA.
Note that when used on an Ethernet or IEEE 802 network, the 23
low-order bits of the IP Multicast address are placed in the low-order
23 bits of the Ethernet or IEEE 802 net multicast address
1.0.94.0.0.0 (01-00-5E-00-00-00 in hex). See the section on "IANA ETHERNET ADDRESS BLOCK".
REFERENCES
[RFC1045] Cheriton, D., "VMTP: Versatile Message Transaction
Protocol Specification", RFC 1045, Stanford University,
February 1988.
[RFC1075] Waitzman, D., C. Partridge, and S. Deering "Distance Vector
Multicast Routing Protocol", RFC-1075, BBN STC, Stanford
University, November 1988.
[RFC1112] Deering, S., "Host Extensions for IP Multicasting",
STD 5, RFC 1112, Stanford University, August 1989.
[RFC1119] Mills, D., "Network Time Protocol (Version 1), Specification
and Implementation", STD 12, RFC 1119, University of
Delaware, July 1988.
[RFC1190] Topolcic, C., Editor, "Experimental Internet Stream
Protocol, Version 2 (ST-II)", RFC 1190, CIP Working Group,
October 1990.
[RFC1583] Moy, J., "The OSPF Specification", RFC 1583, Proteon,
March 1994.
[RFC1723] Malkin, G., "RIP Version 2: Carrying Additional Information",
RFC 1723, Xylogics, November 1994.
[RFC1884] Hinden, R., and S. Deering, "IP Version 6 Addressing
Architecture", RFC 1884, Ipsilon Networks, Xerox PARC,
December 1995.
Data Transmission - Broadcast
Introduction
The term "Broadcast" is used very frequently in the networking world . You will see it in most
networking books and articles, or see it happening on your hub/switch when all the LED's start
flashing at the same time !
If you have been into networking for a while you most probably have come across the terms
"broadcast" and "subnet broadcast" . When I first dived into the networking world, I was
constantly confused between the two, because they both carried the "broadcast" term in them. We
will analyse both of them here, to help you understand exactly what they are and how they are
used !
Broadcast
A Broadcast means that the network delivers one copy of a packet to each destination. On bus
technologies like Ethernet, broadcast delivery can be accomplished with a single packet
transmission. On networks composed of switches with point-to-point connections, software must
implement broadcasting by forwarding copies of the packet across individual connections until all
switches have received a copy. We will be focusing only on Ethernet broadcasts.
The picture below illustrates a router which has sent a broadcast to all devices on its network:
Normally, when the computers on the network receive a packet, they first try to match the
destination MAC address of the packet with their own. If that is successful, they process the packet
and hand it to the OSI layer above (the Network Layer); if the MAC address is not matched, the
packet is discarded and not processed. However, when they see a MAC address of
FF:FF:FF:FF:FF:FF, they will process the packet because they recognise it as a broadcast.
But what does a "broadcast" look like?
Check out the image below, which is taken from my packet sniffer:
Let's now have a closer look at the above packet:
The image above shows a broadcast packet. You can clearly see that the destination MAC
address is set to FF:FF:FF:FF:FF:FF. The destination IP address is set to 255.255.255.255;
this is the IP broadcast address and it ensures that no matter what IP address the receiving
computer(s) have, they will not reject the data but process it.
Now you might ask yourself, "Why would a workstation want to create a broadcast packet?"
The answer to that lies within the various protocols used on our networks!
Let's take, for example, the Address Resolution Protocol, or ARP. ARP is used to find out which MAC
address (effectively, which network card or computer) has a particular IP address bound to it.
You will find a detailed example of the whole process in the IP Routing section.
For a network device such as a router to ask "Who has IP address 192.168.0.100?", it must
"shout" it out so it can grab everyone's attention, which is why it uses a broadcast to make
sure everyone on the network listens and processes the packet.
In the example image above, the particular machine was looking for a DHCP server (notice the
"bootps" protocol under the UDP Header - Layer 4 - which is basically DHCP).
Subnet Broadcast or Direct Broadcast
A subnet or direct broadcast is targeted not at all hosts on a network, but at all hosts on a
subnet. Since a physical network can contain different subnets/networks, e.g. 192.168.0.0 and
200.200.200.0, the purpose of this special broadcast is to send a message to all the hosts in a
particular subnet.
In the example below, Router A sends a subnet broadcast onto the network. Hosts A, B, C and the
Server are configured to be part of the 192.168.0.0 network, so they will receive and process the
data. Host D, however, is configured with a different IP address, so it's part of a different network;
it will accept the packet because of its broadcast MAC address, but will drop the packet when it
reaches its Network Layer, where it will see that this packet was for a different IP network.
It is very similar to the network broadcast we just talked about but varies slightly, in the sense that
its IP broadcast address is not set to 255.255.255.255 but to the subnet broadcast address. For
example, my home network is a Class C network: 192.168.0.0 with a subnet mask of
255.255.255.0 or, if you like to keep it simple, 192.168.0.0/24.
This means that the valid hosts available for this network range from 192.168.0.1 to 192.168.0.254.
In this Class C network, as in every other network, there are 2 addresses which I can't use: the
first one is reserved to identify the network (192.168.0.0) and the second one is the subnet
broadcast address (192.168.0.255).
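You can confirm these two reserved addresses with a few lines of Python's standard ipaddress module (a quick sketch of my own):

```python
# Confirm the reserved addresses of the 192.168.0.0/24 network above.
import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")
print(net.network_address)     # -> 192.168.0.0   (identifies the network)
print(net.broadcast_address)   # -> 192.168.0.255 (the subnet broadcast)
print(net.num_addresses - 2)   # -> 254 valid hosts: 192.168.0.1 - 192.168.0.254
```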
The above packet, captured with my packet sniffer, shows my workstation broadcasting to the
subnet 192.168.0.0. From the broadcast address you can tell that I am using a full Class C
network range, otherwise the destination IP wouldn't be 192.168.0.255.
The packet decoder on the right shows you the contents of each header from the above packet.
Looking at the MAC header (Datalink Layer), the destination MAC address is set to
FF:FF:FF:FF:FF:FF, and the IP header (Network Layer) has the destination IP set to
192.168.0.255 which is, as I said, the subnet broadcast address. Again, all computers on the
network which are part of the 192.168.0.0 subnet will process this packet; the rest will drop the
packet once they see it's for a network to which they do not belong.
In this example, I double-clicked on "My Network Places" and was searching for a computer; this
forced my workstation to send out a subnet broadcast on the network, asking if a particular
computer existed on the network.
And that just about does it for the broadcasting section!
Controlling Broadcasts and Multicasts
Introduction
The first step in controlling broadcast and multicast traffic is to identify which devices are
involved in a broadcast or multicast storm. The following protocols can send broadcast or
multicast packets:

- Address Resolution Protocol (ARP)
- Open Shortest Path First (OSPF)
- IP Routing Information Protocol Version 1 (RIP1)
- Service Advertising Protocol (SAP)
- IPX Routing Information Protocol (RIP)
- NetWare Link Services Protocol (NLSP)
- AppleTalk Address Resolution Protocol (AARP)
After identifying the source of the broadcast or multicast storm, you must examine the packets to
find out which protocol or application triggered it. If a single device is responsible for a broadcast
storm, you can examine that device's broadcast traffic to determine exactly what it was doing: for
example, what it was looking for or what it was announcing.
Broadcast or multicast storms are often caused by a fault that occurs during the device discovery
process. For example, if an IPX-based printing environment has been misconfigured, a print
driver client may continually send SAP packets to locate a specific print server. Unanswered
broadcast or multicast requests usually indicate that a device is missing or has been
misconfigured.
Examine the broadcast traffic on your company's network. Do you see numerous unanswered,
repeat queries? Do you see protocols (such as IP RIP1, SAP, and IPX RIP) that just "blab" all day
even when no other devices may be listening?
Or, is the majority of the broadcast and multicast traffic on your company's network purposeful?
That is, does the broadcast and multicast traffic have a request-reply communication pattern? For
example, are broadcast lookups answered?
Do broadcast packets contain meaningful information? For example, if a network has numerous
routers, do broadcast packets contain routing update information?
Is the broadcast rate acceptable? Does your company's network need RIP updates every 30
seconds, or can you increase the interval to one minute?
Broadcast/Multicast Domains
If your company's network is experiencing excessive broadcast or multicast traffic, you should
also check the scope of the broadcast or multicast domain. (A broadcast or multicast domain is
the range of devices that are affected by a broadcast or a multicast packet.) Understanding
broadcast and multicast domains can help you determine how harmful a broadcast storm can be
from any point on the network.
The scope of a broadcast and multicast domain depends, to some degree, on the network design.
For example, the picture below shows two networks, a switched network and a routed network:
On a switched network, Device 1 sends a broadcast or multicast packet that is propagated to all
ports of the switch. (A typical layer-2 switch does not filter either broadcast or multicast traffic.)
On a routed network, however, a router does not forward broadcast traffic. If Device 1 sends a
broadcast packet, only Device 2 and the router see the broadcast packet. If appropriate, the router
processes the broadcast packet and sends a reply. Because the broadcast packet is not forwarded,
it does not affect Devices 3 or 4.
Introduction To Subnetting
Introduction
So you have made it this far, hey? Well, you are in for an AWESOME ride.
Subnetting is one of my favourite subjects. It can be as simple as 1, 2, 3 or as complex as trying to
get free tech support from Microsoft :)
Getting serious now... Subnetting is a very interesting and important topic. I gather that most of
you have heard about it or have some idea what it's all about. For those who haven't dealt with
subnets before... hang in there, because you're not alone! Keep in mind we also have the website's
forum where you can post questions or read up on other people's questions and answers. It's an
excellent source of information and I recommend you use it!
For some reason a lot of people consider subnetting to be a difficult subject, which is true to
some extent, but I must say that I think most of them see it that way because they do not have
solid foundations in networking (essential!), and especially in the IP protocol. But for you guys
(and girls), the above doesn't apply, because we have covered IP in the best possible way and we
DO have solid foundations. Right?
Some Advice !
If you started reading about the IP protocol on this site from the beginning and have understood
everything, then you won't have any problem understanding subnetting... but (there is always a
darn "but"!) on the other hand, if you do not understand very well what we have been talking
about in the previous pages, then you're going to find this somewhat difficult. Whichever the
case, I'm going to try and explain subnetting as simply as possible and hope to answer all your
questions.
Now, because subnetting is a big topic to talk about and analyse in one page (yeah, right!), I've
split it into a few sections to break it down into smaller pieces. Logically, as you move on
to higher sections, the difficulty of the concepts and material will increase:

Section 1: Basic Subnetting Concepts. This section is to help you understand what a
subnet really is. The default subnet masks are introduced first, and then you
get to see and learn how the network is affected by changing the subnet mask. There are
plenty of cool diagrams (which you only find on this site!) to ensure that you get
the picture right :)

Section 2: Subnet Masks and Their Effect. Here we will look at the default subnet mask
in a bit more detail and introduce a few new concepts. Classless and classful IP
addresses are covered here and you get to learn how the subnet mask affects them.

Section 3: The Subnet Mask Bits. Detailed analysis of subnet mask bits. Learn to
recognise the number of bits in a subnet mask, followed by an introduction to complex
subnets.

Section 4: Routing and Communications between Subnets. Understand how routers deal
with subnets and how computers which are in different subnets can communicate with each
other, along with a few general notes on subnetting that you should know.

Section 5: Subnetting Guidelines. Some last information to help you plan your new
networks and a few things to keep in mind so you can avoid future problems with
subnets.
IP Subnetting - The Basic Concepts
Introduction
Introduction? We already did that on the previous page :)
Let's get stuck right into this cool topic !
What is Subnetting?
When we subnet a network, we basically split it into smaller networks. For example, when a set
of IP addresses is given to a company, e.g. 254 addresses, the company might want to "break" (the
correct term is "partition") that one network into smaller ones, one for each department. This way,
their Technical department and Management department can each have a small network of their own.
By subnetting the network we can partition it into as many smaller networks as we need; this
also helps reduce traffic and hides the complexity of the network.
By default, all Classes (A, B and C) have a subnet mask; we call it the "default subnet
mask". You need to have one because:
1) All computers need the subnet mask field filled in when configuring IP
2) You need to set some logical boundaries in your network
3) You should at least enter the default subnet mask for the Class you're using
In the previous pages I spoke about IP Classes, Network IDs and Host IDs; the fact is that the
subnet mask is what determines the Network ID and Host ID portion of an IP address.
The table below clearly shows the subnet mask that applies to each network Class.
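As a side note, here is a short sketch of my own showing what "determining the Network ID and Host ID" means in practice: the device simply ANDs the IP address with the mask.

```python
# Separate the Network ID from the Host ID with a bitwise AND.
import ipaddress

ip = int(ipaddress.IPv4Address("192.168.0.10"))
mask = int(ipaddress.IPv4Address("255.255.255.0"))  # default Class C mask

network_id = ip & mask                 # the bits "locked" by the ones in the mask
host_id = ip & ~mask & 0xFFFFFFFF      # whatever the mask's zeros leave over

print(ipaddress.IPv4Address(network_id))  # -> 192.168.0.0
print(host_id)                            # -> 10, the host portion
```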
When dealing with subnet masks in the real world, we are free in most cases to use any type of
subnet mask in order to meet our needs. If for example we require one network which can contain
up to 254 computers, then a Class C network with its default subnet mask will do fine, but if we
need more, then we might consider a Class B network with its default subnet mask.
Note that the default subnet masks are defined in the Internet standards (RFCs) published by the
IETF, the body that sets and approves the different standards and protocols.
We will have a closer look at this later on and see how we can achieve a Class C network with
more than 254 hosts.
Understanding the concept
Let's stop here for one moment and have a look at what I mean by partitioning one network into
smaller ones by using different subnet masks.
The picture below shows our example network (192.168.0.0). All computers here have been
configured with the default Class C subnet mask (255.255.255.0):
Because of the subnet mask we used, all these computers are part of the one network marked in
blue. This also means that any one of these hosts (computers, router and server) can communicate
with each other.
If we now wanted to partition this network into smaller segments, then we would need to change
the subnet mask appropriately so we can get the desired result. Let's say we needed to change the
subnet mask from 255.255.255.0 to 255.255.255.224 on each configured host.
The picture below shows us how the computers will see the network once the subnet mask has
changed:
In reality, we have just created 8 networks from the one large (blue) network we had, but I am
keeping things simple for now and showing only 2 of these smaller networks because I want you
to understand the concept of subnetting and see how important the subnet mask is.
In the pages that follow I will analyse in great depth the way subnetting works and
how to calculate it. It is very important that you understand the concepts introduced in this
section, so make sure you do before continuing!
Subnet Masks & Their Effect
Introduction
There are a few different ways to approach subnetting and it can get confusing because of the
complexity of some subnets and the flexibility they offer. For this reason I created this little
paragraph to let you know how we are going to approach and learn subnetting. So.....
We are going to analyse the common subnet masks for each Class, giving detailed examples for
most of them and allowing you to "see" how everything is calculated and understand the different
effects a subnet mask can have as you change it. Once you have mastered this, you can then go on
and create your custom subnet masks using any type of Class.
Default Subnet masks of each Class
By now you should have some idea what the subnet mask does and how it's used to partition a
network. What you need to keep in mind is that each Class has its DEFAULT subnet mask, which
we can change to suit our needs. I have already mentioned this in the previous page, but we need
to look into it in a bit more detail.
The picture below shows our 3 Network Classes with their respective default subnet mask:
The Effect of a Subnet Mask on an IP Address
On the IP Classes page we analysed and showed clearly how an IP address consists of two parts:
1) the Network ID and 2) the Host ID. This rule applies to all IP addresses that use the default
subnet mask, and we call them classful IP addresses.
We can see this once again in the picture below, where the IP address is analysed in binary,
because this is the way you should work when dealing with subnet masks:
We are looking at an IP address together with its subnet mask for the first time. What we have done
is take the decimal subnet mask and convert it to binary, along with the IP address. It is essential
to work in binary because it makes things clearer and we can avoid making silly mistakes. The
ones (1) in the subnet mask "lock" or, if you like, define the Network ID portion. If we change
any bit within the Network ID of the IP address, then we immediately move to a different
network. So in this example, we have a 24-bit subnet mask.
NOTE:
All Class C classful IP addresses have a 24-bit subnet mask (255.255.255.0).
All Class B classful IP addresses have a 16-bit subnet mask (255.255.0.0).
All Class A classful IP addresses have an 8-bit subnet mask (255.0.0.0).
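A quick sketch of mine that confirms the note above by counting the one-bits in each default mask:

```python
# Count the one-bits in each default mask to get its length in bits.
import ipaddress

for mask in ("255.255.255.0", "255.255.0.0", "255.0.0.0"):
    bits = bin(int(ipaddress.IPv4Address(mask))).count("1")
    print(mask, "->", bits, "bits")
# 255.255.255.0 -> 24 bits (Class C)
# 255.255.0.0   -> 16 bits (Class B)
# 255.0.0.0     ->  8 bits (Class A)
```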
On the other hand, using an IP address with a subnet mask other than the default results in
the standard host bits (the bits used to identify the Host ID) being divided into two parts: a
Subnet ID and a Host ID. These types of IP addresses are called classless IP addresses.
In order to understand what a "Classless IP Address" is without getting confused, we are going to
take the same IP Address as above, and make it a Classless IP Address by changing the default
subnet mask:
Looking at the picture above you will now notice that we have a Subnet ID, something that didn't
exist before. As the picture explains, we have borrowed 3 bits from the Host ID and used them to
create a Subnet ID. Effectively we partitioned our Class C network into smaller networks.
If you're wondering how many smaller networks, you'll find the answer on the next page. I prefer
that you understand everything here rather than blasting you with more Subnet IDs, bits and
all the rest :)
Summary
In this page we saw the default subnet mask of each Class and also introduced the Classful and
Classless IP Addresses, which are a result of using various subnet masks.
When we use IP addresses with their default subnet masks, e.g. 192.168.0.10 is a Class C IP
address so the default subnet mask would be 255.255.255.0, then these are "classful IP
addresses".
On the other hand, classless IP addresses have their subnet mask modified in a way that creates
a "Subnet ID". This Subnet ID is created by borrowing bits from the Host ID portion.
The picture below shows us both examples:
I hope that you have understood the new concepts and material on this page. Next we are going to
talk about subnet bits, learn how to calculate how many bits certain subnet masks are and see the
different and most used subnet masks available.
If you think you might not have understood a few sections throughout this page, I would suggest
you read it once more :)
Subnetting Analysis
Introduction
So we have covered the subnetting topic in some depth, but there is still much to learn! Here we
are going to explain the available subnet masks and analyse a Class C network using a specific
subnet mask. It's all pretty simple, as long as you understand the logic behind it.
Understanding the use, and analysing different subnet masks
Okay, so we know what a subnet mask is, but we haven't spoken (yet) about the different values
it can take, and the guidelines we need when using them. That's what we are going to do here!
The truth is that you cannot take any subnet mask you like and apply it to a computer or any other
device, because depending on the random subnet mask you choose, it will either create a lot of
routing and communication problems, or it won't be accepted at all by the device you're trying to
configure.
For this reason we are going to have a look at the various subnet masks, so you know exactly what
you need to use, and how to use it. Most importantly, we are going to make sure we understand
WHY you need to choose specific subnet masks, depending on your needs. Most people simply
use a standard subnet mask without understanding what it does. This is not the case for the
visitors to this site.
Let's first have a look at the most common subnet masks, and then I'll show you where these
numbers come from :)
Common Subnet Masks
In order to keep this place tidy, we are going to look at the common subnet masks for each Class.
Looking at each Class's subnet masks is possibly the best and easiest way to learn them.
Number of bits   Class A        Class B          Class C
0 (default)      255.0.0.0      255.255.0.0      255.255.255.0
1 (default+1)    255.128.0.0    255.255.128.0    255.255.255.128
2 (default+2)    255.192.0.0    255.255.192.0    255.255.255.192
3 (default+3)    255.224.0.0    255.255.224.0    255.255.255.224
4 (default+4)    255.240.0.0    255.255.240.0    255.255.255.240
5 (default+5)    255.248.0.0    255.255.248.0    255.255.255.248
6 (default+6)    255.252.0.0    255.255.252.0    255.255.255.252
7 (default+7)    255.254.0.0    255.255.254.0    255.255.255.254 *
8 (default+8)    255.255.0.0    255.255.255.0    255.255.255.255 **

* Only 1 host per subnet
** Reserved for broadcasts
The above table might seem confusing at first, but don't despair! It's simple really; you just need
to look at it in a different way!
The trick to understanding the pattern of the above table is to think of it in the following way:
each Class has its default subnet mask (the rows marked "default"), and all we are
doing is borrowing a bit at a time (starting from 1, all the way to 8) from the Host ID portion of
each Class. The table shows the decimal numbers that we get each time we borrow a bit from the
Host ID portion. If you can't understand how these decimal numbers work out, then you should
read up on the Binary & IP page.
Each time we borrow a bit from the Host ID, we split the network into a different number of
networks. For example, when we borrowed 3 bits in the Class C network, we ended up
partitioning the network into 8 smaller networks. Let's take a look at a detailed example (which
we will break into three parts) so we can fully understand all the above.
We are going to analyse a Class C network with 3 bits borrowed from the Host ID. The analysis
will take place once we convert our decimal numbers to binary, something that's essential for this
type of work. We will see how we get 8 networks from such a configuration, and their ranges!
In this first part, we can see clearly where the 8 networks come from. The rule applies to all types
of subnets, no matter what Class they are: simply raise 2 to the power of the number of subnet
bits and you get your number of networks.
Now, that was the easy part. The second part is slightly more complicated and I need you focused
so you don't get mixed up!
At first the diagram below seems quite complex, so try to follow me as we go through it:
The IP address and subnet mask are shown in binary format. We focus on the last octet, which
contains all the information we are after. Now, the last octet has 2 parts: the Subnet ID and the
Host ID. When we want to calculate the subnets and hosts, we deal with them one at a time. Once
that's done, we put the Subnet ID and Host ID portions together so we can get the last octet's
decimal number.
We know we have 8 networks (or subnets) and, by simply counting or incrementing our binary
value by one each time, we get to see all the networks available. So we start off with 000 and
finish at 111. On the right hand side I have also put the equivalent decimal number for each
network.
Next we take the Host ID portion, where the first available host is 0 0001 (1 in decimal), because
the value 0 0000 (0 in decimal) is reserved as the Network Address (see the IP Classes page),
and the last value, 1 1111 (31 in decimal), is used as the Broadcast Address for each subnet
(see the Broadcast page).
Note
I've given a formula on the IP Classes page that allows you to calculate the available hosts; that's
exactly what we are doing here for each subnet. The formula is: 2 to the power of X, minus 2
(2^X - 2), where X is the number of bits in the Host ID field, which for our example is 5. When
we apply this formula, we get 2^5 - 2 = 30 valid (usable) IP addresses. If you're wondering
why we subtract 2, it's because one address is used for the Network Address of that subnet and the
other for the Broadcast Address of that subnet. This shouldn't be news to anyone :)
Summing up, these are the ranges for each subnet in our new network:
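Rather than reading the ranges off a diagram, here's a sketch using Python's standard ipaddress module (my own illustration) that computes all 8 subnets and their host ranges:

```python
# Split 192.168.0.0/24 with mask 255.255.255.224 (/27): 8 subnets.
import ipaddress

network = ipaddress.ip_network("192.168.0.0/24")
for subnet in network.subnets(new_prefix=27):      # borrow 3 bits: 2^3 = 8 subnets
    hosts = list(subnet.hosts())                   # the 2^5 - 2 = 30 valid hosts
    print(subnet.network_address, "-", subnet.broadcast_address,
          "| hosts:", hosts[0], "-", hosts[-1])
# 192.168.0.0  - 192.168.0.31  | hosts: 192.168.0.1  - 192.168.0.30
# 192.168.0.32 - 192.168.0.63  | hosts: 192.168.0.33 - 192.168.0.62
# ... and so on, up to 192.168.0.224 - 192.168.0.255
```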
I hope the example didn't confuse you too much; it is one of the simplest types, which is why I
chose a Class C network, as they are the easiest to work with.
If you did find it somewhat difficult, try to read over it slowly. After a few times you will get to
understand it. These things do need time to sink in!
Subnet Routing & Communications
Introduction
So we understand everything (almost!) about subnetting, but there are a few questions/topics which
we haven't talked about yet. Experience shows you can never know everything 100%! Routing
and communication between subnets is the main topic here. We have analysed subnetting and
understood how it works, but haven't yet dealt with the "communication" side of things. These,
along with a few other things I would like to bring to your attention, are going to be analysed here!
It's an easy and very interesting page, so sit back and read through it comfortably.
Communication Between Subnets
So, after reading all the previous pages about subnetting, let me ask you the following:
do you think computers that are on the same physical network, but configured to be on separate
subnets, are able to communicate?
The answer is "no". Why? Simply because you must keep in mind that we are talking about
communication between 2 different networks!
Looking at our example of the Class C network on the previous page, the fact is that one
computer is part of the network 192.168.0.0 and the other one part of the network 192.168.0.32, and
these are two different networks. In our example, from the moment we modified the default
subnet mask from 255.255.255.0 to 255.255.255.224, we split that one network into 8 smaller ones.
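Here's a short sketch of my own (using the addresses from the experiment below) that shows why: with the /27 mask, the workstation and the server land in different networks.

```python
# With mask 255.255.255.224 the two machines fall in different subnets.
import ipaddress

workstation = ipaddress.ip_interface("192.168.0.35/255.255.255.224")
server = ipaddress.ip_interface("192.168.0.10/255.255.255.224")

print(workstation.network)   # -> 192.168.0.32/27
print(server.network)        # -> 192.168.0.0/27
print(workstation.network == server.network)  # -> False: different subnets
```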
Let's try it!
And because we just have to prove it... we are going to try it on my home network! In the worst
case, I'll have to spend all night trying to figure out what went wrong, but it will be worth it! :)
Without complicating things, here is a diagram of my home network (I've excluded any
computers we are not going to be using, in order to save space):
Well, that's the network we have to play with. I have put on the diagram the results of a few
simple pings from each host and as you can see, they all came out nice: PASS.
So in order to proceed to phase 2 of our experiment, I reconfigured my workstation to 192.168.0.35 / 255.255.255.224, my Slackware Linux Firewall to 192.168.0.1 / 255.255.255.224 (internal Network Interface Card) and my NetWare 6 Server to 192.168.0.10 / 255.255.255.224, as shown in the diagram below:
As you can see, the results for my workstation were devastating ... alone and totally unaware that the other two servers are still there ! When my workstation tries to actually ping the Linux Firewall, it will get no reply, because its Gateway is a host which now belongs to another network, something that we knew would never work.
So, we have concluded that there cannot be any sort of communication between the computers of
Network 1 and Network 2.
So how can two hosts in two different subnets talk to each other ? That's what we are going to
have a look at right now !
Building The Bridge
There is a way to allow communication between my workstation, my servers and the Internet. Actually, there are a few ways to achieve this and I'm going to show you several, even though some might seem silly or impractical. We are not interested in the best solution at the moment, we just want to know the ways in which we can establish communication between the two subnets.
Considering that subnets are simply smaller networks, you'll remember that we use routers to achieve communication between two networks. This example of my home network is no exception to the rule.
We need a router which will route packets from one network to the other. Let's have a look at the
different ways we can solve this problem:
Method 1: Using a Server with 2 Network Cards
Our first option is to use one of the Servers, or a new Server, which has at least 2 network cards installed. By connecting each network card to one of our networks, and configuring the network cards so that each one belongs to one subnet/network, we can route packets between them:
The above diagram shows pretty much everything that's needed. The 2nd network card has been installed and assigned an IP Address that falls within our Network 1 range, and can therefore communicate with my workstation. On the other hand, the NetWare server now acts as a Gateway for Network 1, so my workstation is reconfigured to use it as its Gateway. Any packets from Network 1 to Network 2 or the Internet will pass through the NetWare server.
Method 2: Binding 2 IP Addresses to the same network card
This method is possibly the best and easiest way around our problem. We use the same network
card on the NetWare server and bind another IP Address to it.
This second IP Address will obviously fall within the Network 1 IP range so that my workstation
can communicate with the server:
As noted on the diagram, the only problem we might encounter is the need for the operating
system of the server to support this type of configuration, but most modern operating systems
would comply.
Once configured, the Server takes care of any routing between the two networks.
Method 3: Installing a router
The third method is to install a router in the network.
This might seem a bit far-fetched, but remember that we are looking at all possible ways to establish communications between our networks ! If this was a large network, then a router could possibly be the ideal solution but, given the size of my network, well... let's just say it would be a silly idea :)
My workstation in this setup would forward all packets to its Gateway, which is the router's interface connected to Network 1, and it would be able to see all the other servers and access the Internet. It's a similar setup to Method 1, but instead of a Server we have a dedicated router. Oh, and by the way, if we were to use such a configuration in real life, the hub which both of the router's interfaces connect to would be replaced by some type of WAN link.
That completes our discussion on Subnet routing and communication.
Subnetting Guidelines
Introduction
There is always that day when you are called upon to provide a solution to a network problem. The problems that can occur in a network are numerous and, believe it or not, most of them can be avoided if the initial design and installation of the network are done properly.
When I say "done properly" I don't just mean connecting the correct wires into the wall sockets !
Looking at it from an Administrator's point of view, I'd say that a "properly done job" is one that
has had a lot of thought put into it to avoid silly routing problems and solve today's and any future
needs.
This page contains all the information you need to know in order to design a network that won't
suffer from any of the above problems. I've seen some network setups which suffered from all the
above, and you would be amazed how frequently I see them at large companies.
Guidelines - Plan for Growth
When creating subnets for your network, answer the following questions:
• How many subnets are needed today? (see the sketch after this list)
Calculate the maximum number of subnets required by rounding up
the maximum number to the nearest power of two.
For example, if an organization needs five subnets, 2 to the power of 2 or 4 will not
provide enough subnet addressing space, so you must round up to
2 to the power of 3 = 8 subnets.
• How many subnets are needed in the future?
You must plan for future growth. For example, if 9 subnets are
required today, and you choose to provide for 2 to the power of 4 = 16 subnets, this
might not be enough when the seventeenth subnet needs to be deployed.
In this example, it might be wise to provide for more growth and
select 2 to the power of 5 = 32 as the maximum number of subnets.
• What is the maximum number of hosts on a given segment?
You must ensure that there are enough bits available to assign host
addresses to the organization’s largest subnet.
If the largest subnet needs to support 40 host addresses today, 2 to the power of 5 =
32 will not provide enough host address space, so you would need
to round up to 2 to the power of 6 = 64.
• How many hosts will there be in the future?
Besides planning for additional subnets, you must also plan for more
hosts to be added to each subnet in the future.
Make sure the organization’s address allocation provides enough
bits to deploy the required subnet addressing plan.
When developing subnets, class C addresses present the greatest
challenge because fewer bits are available to divide between subnet
addresses and host addresses. If you accommodate too many
subnets, there may be no room for additional hosts and growth in
the future.
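Here's a small Python sketch that puts numbers on the questions above; the sample figures (9 subnets today, a largest segment of 40 hosts) are the ones from the examples in the list:

    import math

    def bits_needed(count):
        # Smallest number of bits whose power of two covers 'count'
        return math.ceil(math.log2(count))

    subnets_today   = 9    # from the second question's example
    largest_segment = 40   # from the third question's example

    subnet_bits = bits_needed(subnets_today)
    host_bits   = bits_needed(largest_segment + 2)   # +2: network & broadcast

    print(subnet_bits, 'subnet bits ->', 2 ** subnet_bits, 'subnets')    # 4 -> 16
    print(host_bits, 'host bits ->', 2 ** host_bits, 'addresses')        # 6 -> 64

Remember to then add your growth margin on top of these minimums, exactly as the guidelines suggest.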
All the above points will help you succeed in creating a well designed network which will have
the ability to cater for any additional future requirements. And if you do happen to have any
problems, well, there is always the website's forum where you can post your questions and
problems :)
Supernetting/CIDR Introduction
Introduction
Supernetting, also known as Classless InterDomain Routing (CIDR), is another awesome subject. It exists thanks to the wide adoption of the Internet, which led to the exhaustion of the available IP Addresses. More specifically, supernetting was invented in 1993 with the purpose of extending the lifetime of the 32 bit IP address space until the adoption of IPv6 was complete.
Putting it as simply as possible, supernets are used to combine multiple Class C networks into
groups, which the router, in turn, treats as one big network. It might not seem like a smart thing to
do, but if you look at the picture on a larger scale you will notice some of the really awesome
advantages this offers.
The creation of Supernets is also known as Address Aggregation.
The Big Picture
Consider this realistic example: you work for a large ISP which provides services like Internet access, e-mail etc. to a few hundred networks. These networks, which are basically your ISP's clients, consist of 254 host IPs each (one full Class C network per client), and each has a permanent connection to your headquarters via ISDN (represented by the yellow lines); from there your ISP has a direct connection to the Internet Backbone.
This diagram shows the example network we're talking about. Our main focus is the two routers
the ISP has, Router No.1 and Router No.2, because these will be affected when we supernet the
networks.
Routers No.1 & No.2 exchange information with each other and update their tables, which
contain the networks they know about. Router 2 connects directly to 10 networks and needs to let
Router 1 know about each one of them. Router 1 in turn will also advertise these networks to the
Internet Backbone Router so it too will know about these networks.
The above setup requires that Router No.1 and the Internet Backbone Router each have more than
13 separate entries in their routing tables to make sure that each network is accessible from them.
This is not so bad for this example, but try to imagine the problems and the complexity of a
similar setup where you have thousands of networks, where the routing tables would be enormous
! Also, you should keep in mind that the larger the routing table, the more work the router needs
to do because it has a huge table of routes to maintain and look through all the time.
By using Supernetting, we could supernet the whole network so it appears to the Internet as
follows:
You can clearly see that all the clients' networks have been combined into one big network. Even
though Router No.1 and the Internet Backbone router see only one big network, Router No.2
knows all about the smaller Class C networks since it is the one "hiding" them from the rest of the
world and makes sure it sends the correct data to each network.
We are going to look at a more detailed example later on so we can understand exactly how
supernetting works.
NOTE
There are some limitations with Supernetting - this is why there is a rule which we must follow so
we don't bump into big routing problems and upset the network. We will have a closer look at the
rule on the next page.
The reason for evolution
Supernetting has become very popular and there are a lot of reasons why:
• Class B network address space has nearly been exhausted
• A small percentage of Class C network addresses has been assigned to networks
• Routing tables in Internet routers have grown to a size beyond the ability of software and people to effectively manage
• The 32-bit IP address space will eventually be exhausted
How Supernets work
If you understand how Subnetting works, then you will surely understand Supernetting.
Supernets are the opposite of Subnets in that they combine multiple Class C networks into blocks
rather than dividing them into segments.
When Subnetting, we borrow bits from the Host ID portion, which increases the number of bits
used for the Network ID portion. With Supernetting we do exactly the opposite, meaning we take
the bits from the Network ID portion and give them to the Host ID portion, as illustrated in the
picture below:
The next page deals with a detailed example to give you an in-depth analysis of Supernetting. The
main concept you need to understand is that Supernetting is all about combining multiple Class C
networks into one or more groups and it does this by taking bits from the Network ID portion
and, by doing so, the bits assigned to the Host ID portion increase.
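As a quick taste of what's coming, here's a hedged Python sketch: given two consecutive, properly aligned Class C networks (a made-up pair, purely for illustration), the standard ipaddress module can collapse them into the single supernet a router would advertise:

    import ipaddress

    nets = [ipaddress.ip_network('192.168.0.0/24'),
            ipaddress.ip_network('192.168.1.0/24')]

    # collapse_addresses merges contiguous, aligned networks into one supernet
    print(list(ipaddress.collapse_addresses(nets)))
    # -> [IPv4Network('192.168.0.0/23')]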
I think that's a pretty good introduction to Supernetting :)
Let's take a look at an example to see and understand how exactly Supernetting works in practice.
Supernetting/CIDR Analysis
Introduction
We have had a good introduction to Supernetting (CIDR) and we are about to have a look at an
example to finally give answers to all those questions you have about the subject.
NOTE:This page requires you to have basic knowledge and understanding on Internet Protocol,
Subnetting and Binary notation. These are covered in great detail on other pages and I
recommend you have a quick look over these topics if you think you're not up to scratch.
Guideline - Rule to Supernetting / CIDR
Before we get into deep water, we must talk about the main rule that applies to creating Supernets. For our example, this rule dictates that, in order to create Supernets from Class C IP Addresses, the network addresses must be consecutive and the third octet of the first IP Address must be divisible by two.
If we had 8 networks we wanted to combine, then the third octet of the first IP Address would need to be divisible by eight and not two.
There is one more rule you should know and this rule has to do with the routers of the network,
which will need to work with the new changes. This rule dictates that all routers on the network
must be running static routing or using a classless routing protocol such as RIP2 or OSPF.
Classless routing protocols include the subnet mask information and can also pass supernetting
information. Routing protocols such as RIP1 do NOT include subnet mask information and
would just create problems!
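To make the first rule concrete, here's a tiny helper (purely illustrative, not part of any standard tool) that checks whether a block of consecutive Class C networks starts on a valid boundary:

    def valid_supernet_start(third_octet, network_count):
        # The third octet of the first network must be divisible by the
        # number of Class C networks being combined (2, 4, 8, ...)
        return third_octet % network_count == 0

    print(valid_supernet_start(218, 2))   # True  -> 203.31.218.0 can start a pair
    print(valid_supernet_start(219, 2))   # False -> an odd third octet cannot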
The Example
Here is an example involving two companies that want to use Supernetting to solve their network requirements. We are going to determine which company meets the criteria for a Supernet (we are assuming the routers are set up in a way that will support supernetting):
As you can see, Company No.1's network passes the test, therefore we can Supernet its two networks.
The Analysis of Company 1's Network & creation of its Supernet
Let's now take Company No.1's network, see how the Supernet will be created and determine
various important parameters like the new network's broadcast address, the identification of the
new supernets etc.
To begin, we must take our two networks and look at them in binary format (this is the only way to "see" exactly what we're doing when supernetting) and examine the Network and Host ID portions:
If you have problems understanding why we have no Subnet ID, please read up on the IP and
Subnetting sections on this site where everything is explained as simply as possible using cool 3D
diagrams.
Now we need to create the Supernet. This means that we are going to take one bit from the
Network ID of these networks and give it to the Host ID portion. This 1 Bit is our Supernet ID.
So our subnet mask will now be reduced from 24 bits to 23 bits. You might get confused, or ask why we call this extra Bit we are giving to the Host ID a "Supernet ID".
The answer is simple, the one Bit that we are taking from the Network ID is given to the Host ID
but, in order for us to clearly "see" where the supernet is created, we colour it Green and give it
the "Supernet ID" label:
So there you have it, a new supernet created!
Now I can point out something new; I waited to show you this because I didn't want to confuse
you :)
We have one Supernet made from two networks (203.31.218.0 and 203.31.219.0). In order to
identify these two networks we name the first one (203.31.218.0) Supernet 0 and the second one
(203.31.219.0) Supernet 1. This is to distinguish between the two networks and nothing more.
It actually makes more sense if you look at the values the Supernet ID field takes:
It's very important to understand that Supernet 0 and 1 are part of the same new network ! This
means that there is only one network address, one network broadcast address and not two as you
might expect.
Let's now have a look at some more important information regarding the new network:
ITEM                          VALUE
Supernet range                203.31.218.0 - 203.31.219.255
Subnet Mask                   255.255.254.0
Supernet Network Address      203.31.218.0
Supernet Broadcast Address    203.31.219.255
Supernet 0                    203.31.218.0
Supernet 1                    203.31.219.0
Valid IP Address range        203.31.218.1 - 203.31.219.254
Reserved IP Addresses         203.31.218.0, 203.31.219.255
The above table shows pretty much all the information someone would need about the new
network.
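If you want to verify the table yourself, a short Python sketch using the standard ipaddress module reproduces every value:

    import ipaddress

    supernet = ipaddress.ip_network('203.31.218.0/23')

    print(supernet.netmask)              # 255.255.254.0
    print(supernet.network_address)      # 203.31.218.0
    print(supernet.broadcast_address)    # 203.31.219.255

    hosts = list(supernet.hosts())       # the valid, usable addresses
    print(hosts[0], '-', hosts[-1])      # 203.31.218.1 - 203.31.219.254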
Let me also point out to you (in case you didn't ask yourself :> ) that IP Addresses 203.31.218.255 and 203.31.219.0, which would have been used as the broadcast address of our first old network and the network address of our second old network, are now usable addresses !
Yes, you can actually assign them to hosts, because we have a Supernet. Now, even though you
can use these addresses, I would probably not use them unless I really needed to. Not that it
makes a difference, but I always tend to reserve these types of addresses, it's just a habit of mine
:)
Also, every host that will be part of this Supernet will need to be configured with the new Subnet
mask, 255.255.254.0 as noted in the table above. Any host that isn't reconfigured will have big
problems trying to communicate with the rest of the network.
Well, that completes the analysis of our Supernet example. As I pointed out in the beginning, you must have your IP, Subnetting and Binary Notation knowledge up to date, otherwise you will have difficulties understanding a lot of the material, so make sure you read up on those sections before giving this page another shot :)
The Supernetting/CIDR Chart
Introduction
Because subnet masks can get very confusing, the creators of this wonderful network technology
also made available a few things to make life somewhat easier.
The following chart is really a summary of what we've seen so far. It gives you a good idea of the
networks we can combine and the result we'd see.
The Supernetting/CIDR chart
There are four columns available in our chart:
The CIDR Block, the Supernet Mask, Number of Class C Networks and the Number of Hosts
column.
CIDR Block   Supernet Mask     Number of Class C Networks   Number of Hosts
/14          255.252.0.0       1024                         262144
/15          255.254.0.0       512                          131072
/16          255.255.0.0       256                          65536
/17          255.255.128.0     128                          32768
/18          255.255.192.0     64                           16384
/19          255.255.224.0     32                           8192
/20          255.255.240.0     16                           4096
/21          255.255.248.0     8                            2048
/22          255.255.252.0     4                            1024
/23          255.255.254.0     2                            512
/24          255.255.255.0     1                            256
/25          255.255.255.128   1/2                          128
/26          255.255.255.192   1/4                          64
/27          255.255.255.224   1/8                          32
/28          255.255.255.240   1/16                         16
/29          255.255.255.248   1/32                         8
/30          255.255.255.252   1/64                         4
I am going to explain the meaning of each column, although you probably already know most of
them.
The CIDR Block
The CIDR Block simply represents the number of bits used for the subnet mask. For example, /14 means 14 bits assigned to the subnet mask; it is a lot easier telling someone you have a 14 bit subnet mask than a subnet mask of 255.252.0.0 :)
Note: In the above paragraph I called the 14 bits a subnet mask when, in fact, it's a supernet mask; but because the field in which you enter the value when configuring any network device is usually named 'subnet mask', I decided to call it a 'subnet mask' as well, in order to avoid confusion.
I'd like you to pay particular attention to CIDR Block /24, and to Blocks /25 through /30, because there is something special about them :)
When we use a CIDR Block of 24 (24 bit subnet mask) we are not Supernetting ! This is a default
subnet mask for a Class C network. With CIDR Blocks /25 to /30 we are actually Subnetting and
not Supernetting !
Now you might wonder why I have them in the chart. The fact is that those particular CIDR
Blocks are valid, regardless of whether applying them to a network means we are Subnetting and
not Supernetting. If you have dealt with any ISPs and IP Address assignments, chances are you
would have been given your IP Addresses in CIDR format.
A good example is if you wanted a permanent connection to your ISP and only required 2 IP
Addresses, one for your router and one for your Firewall, you would be assigned one /30 CIDR
Block. With such a subnet mask you will have 4 IP Addresses, from which 2 will be reserved
(one for the Network address and one for the Broadcast address) and you're left with 2 that you
can assign to your hosts (router and firewall).
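Here's what that /30 assignment looks like in a short Python sketch; the 198.51.100.0 network below is just a documentation-range placeholder, your ISP would of course give you a real block:

    import ipaddress

    link = ipaddress.ip_network('198.51.100.0/30')   # hypothetical /30 block

    print(link.num_addresses)       # 4 addresses in total
    print(list(link.hosts()))       # the 2 usable ones: router and firewall
    print(link.network_address)     # reserved: the network address
    print(link.broadcast_address)   # reserved: the broadcast address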
The Supernet Mask
Basically, this is your Subnet mask. When you configure the devices that will be attached to the
specified network, this is the value you will enter as the Subnet mask. It's also the decimal value the CIDR Block specifies. For example, a /24 CIDR Block means a 24 bit Subnet mask, which in turn translates to 255.255.255.0 :) Simple stuff !
Number of Class C Networks
This number shows us how many Class C Networks are combined by using a specific Supernet
mask or, if you like, CIDR Block. For example, the /24 CIDR Block, 255.255.255.0 Supernet
mask is 1 Class C Network, whereas a /20 CIDR Block, 255.255.240.0 Supernet mask is 16 Class
C networks.
Number Of Hosts
This value represents the number of hosts per Supernet. For example, when we use a /20 CIDR
Block, which means a Subnet (or Supernet) mask of 255.255.240.0, we can have up to 4096
hosts. Pretty straightforward stuff.
There is one thing you must be careful of though ! The value 4096 does not represent the valid, usable IP Addresses. If you want to find out how many of these IP Addresses you can actually use, in other words assign to hosts, then you simply subtract 2 IP Addresses from that number (the first and the last), leaving you with 4094 IP Addresses to play with :)
Why take 2 away ? You shouldn't be asking questions like that if you have read the IP and Subnetting sections, but I'll tell you anyway :) One is reserved for the Network Address and one for the Broadcast Address of that network !
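In fact, every row of the chart follows from the same arithmetic. Here's a minimal sketch that reproduces a row from just the CIDR prefix:

    def chart_row(prefix):
        # Total addresses, equivalent Class C networks, and usable addresses
        total   = 2 ** (32 - prefix)
        class_c = total / 256        # a fraction for the /25 - /30 blocks
        return class_c, total, total - 2

    print(chart_row(20))   # (16.0, 4096, 4094)
    print(chart_row(30))   # (0.015625, 4, 2)  i.e. 1/64 of a Class C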
Summary
That completes the explanation of the Supernetting/CIDR chart. You will see that Supernetting
and Subnetting have quite a few things in common, and this is simply because they work on the
same principle.
Again, if you find the whole topic, or certain sections, hard to understand, you should give yourself a small break and then come back for another round :)
Network Cabling
Introduction
This section talks about the cabling used in today's networks. There are a lot of different types of cabling in today's networks and I am not going to cover all of them, but I will be talking about the most common cables, which include UTP CAT5 straight through and crossover, Coax and a few more.
Cabling is very important if you want a network to work properly with minimum problems and bandwidth losses. There are certain rules which must never be broken when you're trying to design a network, otherwise you'll have problems when computers try to communicate. I have seen sites which suffer from enormous problems because the initial design of the network was not done properly !
In the near future, cabling will probably be something old and outdated, since wireless communication seems to be gaining more ground day by day. Even so, around 95% of companies still rely on cables, so don't worry about it too much :)
Let's have a quick look at the history of cabling which will allow us to appreciate what we have
today !
The Beginning
We tend to think of digital communication as a new idea but in 1844 a man called Samuel Morse
sent a message 37 miles from Washington D.C. to Baltimore, using his new invention ‘The
Telegraph’. This may seem a far cry from today's computer networks but the principles remain
the same.
Morse code is a type of binary system which uses dots and dashes in different sequences to represent letters and numbers. Modern data networks use 1s and 0s to achieve the same result. The big difference is that, while the telegraph operators of the mid 19th Century could perhaps transmit 4 or 5 dots and dashes per second, computers now communicate at speeds of up to 1 Gigabit per second, or to put it another way, 1,000,000,000 separate 1s and 0s every second.
Although the telegraph and the teletypewriter were the forerunners of data communications, it has only been in the last 35 years that things have really started to speed up. This was borne out of the necessity for computers to communicate at ever increasing speeds and has driven the development of faster and faster networking equipment, higher and higher specification cables and connecting hardware.
Development of new network technology
Ethernet was developed in the mid 1970's by the Xerox Corporation at its Palo Alto Research
Centre (PARC) in California and in 1979 DEC and Intel joined forces with Xerox to standardize
the Ethernet system for everyone to use. The first specification by the three companies, called the 'Ethernet Blue Book', was released in 1980; it was also known as the 'DIX standard' after their initials.
It was a 10 Mega bits per second system (10Mbps, = 10 million 1s and 0s per second) and used a
large coaxial backbone cable running throughout the building, with smaller coax cables tapped
off at 2.5m intervals to connect to the workstations. The large coax, which was usually yellow,
became known as 'Thick Ethernet' or 10Base5 - the '10' refers to the speed (10Mbps), the 'Base'
because it is a base band system (base band uses all of its bandwidth for each transmission, as
opposed to broad band which splits the bandwidth into separate channels to use concurrently) and
the '5' is short for the system's maximum cable length, in this case 500m.
The Institute of Electrical and Electronic Engineers (IEEE) released the official Ethernet standard in 1983, calling it IEEE 802.3 after the name of the working group responsible for its development and, in 1985, version 2 (IEEE 802.3a) was released. This second version is commonly known as 'Thin Ethernet' or 10Base2; in this case the maximum length is 185m, even though the '2' suggests that it should be 200m.
Since 1983, various standards have been introduced because of increased bandwidth requirements; so far we are up to the Gigabit rate !
Unshielded Twisted Pair
Introduction
Unshielded Twisted Pair cable is certainly the most popular cable around the world, by far. UTP cable is used not only for networking but also for the traditional telephone (UTP-Cat 1). There are 6 different UTP categories and, depending on what you want to achieve, you need the appropriate type of cable. UTP-CAT5 is the most popular UTP cable; it came to replace the good old coaxial cable, which was not able to keep up with the constantly growing need for faster and more reliable networks.
Characteristics
The characteristics of UTP are very good and make it easy to work with, install, expand and
troubleshoot and we are going to look at the different wiring schemes available for UTP, how to
create a straight through UTP cable, rules for safe operation and a lot of other cool stuff !
So let's have a quick look at each of the UTP categories available today:
Category 1/2/3/4/5/6 – a specification for the type of copper wire (most telephone and network
wire is copper) and jacks. The number (1, 3, 5, etc) refers to the revision of the specification and
in practical terms refers to the number of twists inside the wire (or the quality of connection in a
jack).
CAT1 is typically telephone wire. This type of wire is not capable of supporting computer
network traffic and is not twisted. It is also used by phone companies who provide ISDN, where
the wiring between the customer's site and the phone company's network uses CAT 1 cable.
CAT2, CAT3, CAT4, CAT5 and CAT6 are network wire specifications. This type of wire can
support computer network and telephone traffic. CAT2 is used mostly for token ring networks,
supporting speeds up to 4 Mbps. For higher network speeds (100Mbps plus) you must use CAT5
wire, but for 10Mbps CAT3 will suffice. CAT3, CAT4 and CAT5 cables actually contain 4 pairs of twisted copper wires, and CAT5 has more twists per inch than CAT3 and can therefore run at higher speeds and over greater lengths. The "twist" effect of each pair in the cables causes any interference picked up on one wire to be cancelled out by the wire's partner, which twists around it. CAT3 and CAT4 are both used for Token Ring and have a maximum length of 100 meters.
CAT6 wire was originally designed to support Gigabit Ethernet (although there are standards that allow gigabit transmission over enhanced CAT5 wire, known as CAT5e). It is similar to CAT5 wire, but contains a physical separator between the 4 pairs to further reduce electromagnetic interference.
The next pages (check menu) show you how UTP cable is wired and the different wiring
schemes. It's well worth visiting and reading about.
Straight Thru UTP Cables
Introduction
We will mainly be focusing on the wiring of CAT5 cables here because they are the most popular cables around ! You will find info on wiring the classic CAT1 phone cables as well. It is very important to know exactly how to wire UTP cables because it's the basis of a solid network and will help you avoid hours of frustration and troubleshooting if you do it right the first time :)
On the other hand, if you are dealing with a poorly cabled network, then you will be able to find
the problem and fix it more efficiently.
Wiring the UTP cables !
We are now going to look at how UTP cables are wired. There are 2 popular wiring schemes that most people use today: T-568A and T-568B. They differ only in which colour coded pairs are connected (pairs 2 and 3 are reversed). Both work equally well, as long as you don't mix them ! If you always use only one version, you're OK, but if you mix A and B in a cable run, you will get crossed pairs !
UTP cables are terminated with standard connectors, jacks and punchdowns. The jack/plug is
often referred to as an "RJ-45", but that is really a telco designation for the "modular 8 pin
connector" terminated with a USOC pinout used for telephones. The male connector on the end
of a patchcord is called a "plug" and the receptacle on the wall outlet is a "jack."
As I've already mentioned, UTP has 4 twisted pairs of wires, we'll now look at the pairs to see
what colour codes they have :
As you can see in the picture on the left, the 4 pairs are labeled. Pairs 2 & 3 are used for normal
10/100Mbit networks, while Pairs 1 & 4 are reserved. In Gigabit Ethernet, all 4 pairs are used.
CAT5 cable is the most common type of UTP around the world ! It's flexible, easy to install and
very reliable when wired properly :)
The left and center pictures show the end of a CAT5 cable with an RJ-45 connector; used by all
cables to connect to a hub or to your computer's network card. The picture to the right shows a
stripped CAT5 cable, indicating the 4 twisted pairs.
And to be a bit fancy, don't think that UTP CAT5 cable only comes in one boring colour... those
days are over ! You get a wide range of choices today :
T-568A & T-568B 4-pair Wiring
Ethernet is generally carried in 8-conductor cables with 8-pin modular plugs and jacks. The
connector standard is called "RJ-45" and is just like a standard RJ-11 modular telephone
connector, except it is a bit wider to carry more pins.
Note: Keep in mind that the wiring schemes we are going to talk about are all for straight through
cables only ! Cross over cables are examined on a separate page !
The eight-conductor data cable contains 4 pairs of wires. Each pair consists of a solid colored
wire and a white wire with a stripe of the same color. The pairs are twisted together. To maintain
reliability on Ethernet, you should not untwist them any more than necessary (like about 1 cm).
The pairs designated for 10 and 100 Mbit Ethernet are Orange and Green. The other two pairs,
Brown and Blue, can be used for a second Ethernet line or for phone connections.
There are two wiring standards for these cables, called "T568A" (also called "EIA") and "T568B"
(also called "AT&T" and "258A"). They differ only in connection sequence - that is, which color
is on which pin, not in the definition of what electrical signal is on a particular color.
T-568A is supposed to be the standard for new installations, while T-568B is an acceptable
alternative. However, most off-the-shelf data equipment and cables seem to be wired to T568B.
T568B is also the AT&T standard. In fact, I have seen very few people using T568A to wire their
network. It's important not to mix systems, as both you and your equipment will become
hopelessly confused.
Pin Number Designations for T568B
Note that the odd pin numbers are always the white with stripe color (1,3,5,7). The wires connect
to RJ-45 8-pin connectors as shown below:
Color Codes for T568B
Pin color - pair name
1 white/orange (pair 2) TxData+
2 orange (pair 2) TxData-
3 white/green (pair 3) RecvData+
4 blue (pair 1)
5 white/blue (pair 1)
6 green (pair 3) RecvData-
7 white/brown (pair 4)
8 brown (pair 4)
The wall jack may be wired in a different sequence because the wires are often crossed inside the
jack. The jack should either come with a wiring diagram or at least designate pin numbers.
Note that the blue pair is on the centre pins; this pair translates to the red/green pair for ordinary
telephone lines which is also in the centre pair of an RJ-11. (green=wh/blu; red=blu)
Pin Number Designations for T568A
The T568A specification reverses the orange and green connections so that pairs 1 and 2 are on
the centre 4 pins, which makes it more compatible with the telco voice connections. (Note that in
the RJ-11 plug at the top, pairs 1 and 2 are on the centre 4 pins.) T568A goes:
Color Codes for T568A
Pin color - pair name
1 white/green (pair 3) RecvData+
2 green (pair 3) RecvData-
3 white/orange (pair 2) TxData+
4 blue (pair 1)
5 white/blue (pair 1)
6 orange (pair 2) TxData-
7 white/brown (pair 4)
8 brown (pair 4)
8 brown (pair 4)
The diagram below shows the 568A and 568B in comparison:
Where are they used ?
The most common application for a straight through cable is a connection between a PC and a hub/switch. In this case the PC is connected directly to the hub/switch, which will automatically cross over the cable internally, using special circuits. In the case of a CAT1 cable, which is usually found in telephone lines, only 2 wires are used; these do not require any special cross over since the phones connect directly to the phone socket.
The picture above shows a standard CAT5 straight thru cable, used to connect a PC to a HUB. You might get a bit confused because you might expect the TX+ of one side to connect to the TX+ of the other side, but this is not the case. When you connect a PC to a HUB, the HUB will automatically x-over the cable for you using its internal circuits; this results in Pin 1 from the PC (which is TX+) connecting to Pin 1 of the HUB (which connects to RX+). This happens for the rest of the pinouts as well.
If the HUB didn't x-over the pinouts using its internal circuits (this happens when you use the Uplink port on the hub), then Pin 1 from the PC (which is TX+) would connect to Pin 1 of the HUB (which would be TX+ in this case). So you'll notice that, no matter what we do with the HUB port (uplink or normal), the signals assigned to the 8 Pins on the PC side will always remain the same; the HUB's pinouts, though, will change depending on whether the port is set to normal or uplink.
This pretty much concludes our discussion on straight thru UTP cables !
CAT5 UTP X-Over Cable
Introduction
The cross-over (x-over) CAT5 UTP cable has to be one of the most used cables after the classic
straight-thru cable. The x-over cable allows us to connect two computers without needing a hub
or switch. If you recall, the hub does the x-over for you internally, so you only need to use a
straight thru cable from the PC to the hub. Since now we don't have a hub, we need to manually
do the x-over.
Why do we need an x-over ?
When sending or receiving data between two devices, e.g. computers, one will be sending while the other receives. All this is done via the network cable and, if you look at a network cable, you will notice that it contains multiple wires. Some of these wires are used to send data, while others are used to receive data, and this is exactly what we take into account when creating an x-over cable. We basically connect the TX (transmit) of one end to the RX (receive) of the other !
The diagram below shows this in the simplest way possible:
CAT5 X-over
There is only one way to make a CAT5 x-over cable and it's pretty simple. Those who read the "wiring utp" section know an x-over cable is a 568A on one end and a 568B on the other. If you haven't read the wiring section, don't worry, because I'll be giving you enough information to understand what we are talking about.
As mentioned previously, an x-over cable is as simple as connecting the TX from one end to the
RX of the other and vice versa.
Let's now have a look at the pinouts of a typical x-over CAT5 cable:
As you can see, only 4 pins are needed for an x-over cable. When you buy an x-over cable, you might find that all 8 pins are used. These cables aren't any different from the above; it's just that there are wires running to the unused pins. This won't make any difference in performance, it's just a habit some people follow.
Here are the pinouts for a x-over cable which has all 8 pins connected:
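Since the pinouts are easier to remember as a simple mapping, here's a small Python sketch of a 10/100 x-over cable; wiring the unused pairs straight through is an assumption on my part that matches the "all 8 pins connected" cables described above:

    # Pin on end A -> pin on end B: TX+/TX- (1,2) cross to RX+/RX- (3,6)
    XOVER_PINS = {1: 3, 2: 6, 3: 1, 6: 2,   # the 4 pins that actually cross
                  4: 4, 5: 5, 7: 7, 8: 8}   # unused pairs, wired straight (assumed)

    for end_a, end_b in sorted(XOVER_PINS.items()):
        print(f'pin {end_a} <-> pin {end_b}')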
Where else can I use a x-over ?
X-over cables are not just used to connect computers, but a variety of other devices. Prime examples are switches and hubs. If you have two hubs and you need to connect them, you would usually use the special uplink port which, when activated through a little switch (in most cases), makes that particular port not cross the TX and RX, but leave them as if they were straight through.
What happens though if you haven't got any uplink ports or they are already used ?
The X-over cable will allow you to connect them and solve your problem. The diagram below
shows a few examples to make it simpler:
As you can see in the above diagram, thanks to the uplink port, there is no need for a x-over
cable.
Let's now have a look at how to cope when we don't have an uplink port to spare, in which case we must make an x-over cable to connect the two hubs:
All the above should explain the x-over cable: where we use it and why we need it. I thought it would be a good idea to include, as a last picture, the pinouts of a straight thru and an x-over cable so you can compare them side by side:
10Base-T/2/5/F/35 - Ethernet
Introduction
The 10Base-T UTP Ethernet and 10Base-2 Coax Ethernet were very popular around the early to mid 1990's, when 100Mbit network cards and hubs/switches were very expensive. Today's prices have dropped so much that most vendors don't focus on the 10Base networks but on the 100Base ones and, at the same time, still support the 10Base-T and 10Base-2 standards. We will also talk about 10Base-5, 10Base-F and 10Base-35 shortly.
So what does 10Base-T/2/5/F/35 mean ?
To make it simpler to distinguish cables they are categorised; that's how we got the CAT1, 2, 3
etc cables. Each category is specific for speed and type of network. But since one type of cable
can support various speeds, depending on its quality and wiring, the cables are named using the
"BaseT" to show exactly what type of networks the specific cable is made to handle.
We are going to break "10Base-T" (and the rest) into 3 parts so we can make it easier to understand:
10
The number 10 represents the speed in Mbps (Megabits per second) for which this cable is rated, in this case 10Mbit per second. The greater the rating, the greater the speeds the cable can handle. If you try to use this type of cable for greater speeds then it will either not work or become extremely unreliable. 10Mbit per second in theory translates to 1.25 MBytes per second; in practice though, you wouldn't get more than 800 KBytes per second.
Base
The word "Base" refers to Baseband. Baseband is the type of communication used by Ethernet
and it means that when a computer is transmitting, it uses all the available bandwith, whereas
Broadband (cable modems) shares the bandwidth available. This is the reason cable modem users
notice a slowdown in speed when they are connected on a busy node, or when their neighbour is
downloading all the time at maximum speed ! Of course with Ethernet you will notice a
slowdown in speed but it will be smaller in comparison to broadband.
T/2/5/F/35
The "T" refers to "Twisted Pair" physical medium that carries the signal. This shows the structure
of the cable and tells us it contains pairs which are twisted. For example, UTP has twisted pairs
and this is the cable used in such cases. For more information, see the "UTP -Unshielded Twisted
Pair" page where you can find information on pinouts for the cables.
10Base-T
A few years ago, 10Base-T installations used CAT3 cables, which are good for speeds up to 10Mbit, but today you will find mostly CAT5 cables, which are good for speeds up to 100Mbit; these cables are also used for 10Mbit networks. Only 2 pairs of the UTP cable are used with the 10Base-T specification and the maximum length is 100 meters. Minimum length between nodes is 2.5 meters.
10Base-2
This specification uses Coaxial cable which is usually black, sometimes also called "Thinwire coax", "Thin Ethernet" or "RG-58" cable. Maximum length is 185 meters while the minimum length between nodes is 0.5 meters. 10Base-2 uses BNC connectors which, depending on the configuration, require special terminators. The 10Base-2 specification is analysed here in great detail (it also contains pictures) if you wish to read more about it.
10Base-5
This specification uses what's called "Thickwire" coaxial cable, which is usually yellow. The
maximum length is 500 meters while the minimum length between nodes is 2.5 meters. Also,
special connectors are used to interface to the network card, these are called AUI (Attachment
Unit Interface) connectors and are similar to the DB-15 pin connectors most soundcards use for
their joystick/MIDI port.
Most networks use UTP cable and RJ-45 connectors or Coaxial cable with BNC "T" connectors,
for this reason special devices made their way to the market that allow you to connect an AUI
network card to these different cable networks.
The picture below shows you a few of these devices:
10Base-F
This specification uses fibre optic cable. Fibre optic cable is considered to be more secure than UTP or any other type of cabling because it is nearly impossible to tap into. It is also resistant to electromagnetic interference and attenuation. Even though the 10Base-F specification is for speeds up to 10Mbits per second, depending on the type of fibre and equipment you use, you can get speeds of up to 2 Gigabits per second !
10Base-35
The 10Base-35 specification uses broadband coaxial cable. It is able to carry multiple baseband channels for a maximum length of 3,600 meters, or 3.6 km.
Summary
To summarise, keep the following in mind:
• 10Base-T works for 10Mbit networks only and uses unshielded twisted pair cable with RJ-45 connectors at each end; maximum length is 100 meters. It also uses only 2 of the 4 pairs.
• 10Base-2 works for 10Mbit networks only and uses Coaxial cable. Maximum length is 185 meters and BNC "T" connectors are used to connect to the computers; there are special terminators at each end of the coaxial cable.
• 10Base-5 works for 10Mbit networks only and uses Thick Coaxial cable. Maximum length is 500 meters and special "AUI" connectors (DB-15) are used to interface with the network card.
• 10Base-F works for 10Mbit networks only and uses cool fibre optic cable :)
100Base-(T) TX/T4/FX - Ethernet
Introduction
The 100Base-TX (sometimes referred to as 100Base-T) cable is the most popular cable around since it has actually replaced the older 10Base-T and 10Base-2 (Coaxial). The 100Base-TX cable provides fast speeds of up to 100Mbits and is more reliable since it uses CAT5 cable (see the CAT 1/2/3/4/5 page). There are also 100Base-T4 and 100Base-FX available, which we discuss later.
So what does 100Base-TX/T4/FX mean ?
To make it simpler to distinguish cables they are categorised; that's how we got the CAT1, 2, 3
etc cables. Each category is specific for speed and type of network. But since one type of cable
can support various speeds, depending on its quality and wiring, the cables are named using the
"BaseT" to show exactly what type of networks the specific cable is made to handle.
We are going to break the "100Base-T?" into 3 parts so we can make it easier to understand:
100
The number 100 represents the speed in Mbps (Megabits per second) for which this cable is rated, in this case 100Mbit per second. The greater the rating, the greater the speeds the cable can handle. If you try to use this type of cable for greater speeds it will either not work or become extremely unreliable. 100Mbit per second in theory translates to 12.5 MBytes per second; in practice though, you wouldn't get more than 4 MBytes per second.
Base
The word "Base" refers to Baseband. Baseband is the type of communication used by Ethernet
and it means that when a computer is transmitting, it uses all the available bandwith, whereas
Broadband (cable modems) shares the bandwidth available. This is the reason cable modem users
notice a slowdown in speed when they are connected on a busy node, or when their neighbour is
downloading all the time at maximum speed ! Of course with Ethernet you will notice a
slowdown in speed but it will be smaller in comparison to broadband.
TX/T4/FX
The "T" refers to "Twisted Pair" physical medium that carries the signal. This shows the structure
of the cable and tells us it contains pairs which are twisted. For example, UTP has twisted pairs
and this is the cable used in such cases. 100Base-T is sometimes used to refer to the 100Base-TX cable specification. For more information, see the "UTP - Unshielded Twisted Pair" page where you can find information on pinouts for the cables. All 100Mbit rated cables, except the 100Base-FX, use CAT5 cable.
100Base-TX
The TX (sometimes referred to as "T" only) means it's a CAT5 UTP straight through cable using 2 of the 4 available pairs and supporting speeds up to 100Mbits. Maximum length is 100 meters and minimum length between nodes is 2.5 meters.
100Base-T4
The T4 means it's a CAT5 UTP straight through cable using all 4 available pairs and supports
speeds up to 100Mbits. Maximum length is 100 meters and minimum length between nodes is 2.5
meters.
100Base-FX
The FX means it's a 2 strand fibre cable that supports speeds up to 100Mbits. Maximum length is usually up to 2 km.
Summary
To summarise, keep the following in mind:
• 100Base-TX/T4 works for 100Mbit networks only and uses unshielded twisted pair cable with RJ-45 connectors at each end.
• All CAT5 UTP cables have 4 pairs of cables (8 wires).
• 100Base-TX (sometimes called 100Base-T) uses 2 of the 4 available pairs within the UTP cable, whereas 100Base-T4 uses all 4 pairs.
• 100Base-FX also works for speeds up to 100Mbits but uses fibre optic cable instead of UTP.
Fibre Optic Cable
Introduction
In the 1950s, research and development into the transmission of visible images through optical fibres led to some success in the medical world, where they were used in remote illumination and viewing instruments. In 1966 Charles Kao and George Hockham proposed the transmission of information over glass fibre and realised that, to make it a practical proposition, much lower losses in the cables were essential.
This was the driving force behind the developments to improve the optical losses in fibre
manufacturing and today optical losses are significantly lower than the original target set by
Charles Kao and George Hockham.
The advantages of using fibre optics
Because of the low loss, high bandwidth properties of fibre cables, they can be used over greater distances than copper cables. In data networks this can be as much as 2km without the use of
repeaters. Their light weight and small size also make them ideal for applications where running
copper cables would be impractical and, by using multiplexors, one fibre could replace hundreds
of copper cables. This is pretty impressive for a tiny glass filament, but the real benefit in the data
industry is its immunity to Electro Magnetic Interference (EMI), and the fact that glass is not an
electrical conductor.
Because fibre is non-conductive it can be used where electrical isolation is needed, for instance,
between buildings where copper cables would require cross bonding to eliminate differences in
earth potentials. Fibres also pose no threat in dangerous environments such as chemical plants
where a spark could trigger an explosion. Last but not least is the security aspect, it is very, very
difficult to tap into a fibre cable to read the data signals.
Fibre construction
There are many different types of fibre cable, but for the purposes of this explanation we will deal with one of the most common types: 62.5/125 micron loose tube. The numbers represent the diameters of the fibre core and cladding; these are measured in microns, which are millionths of a metre.
Loose tube fibre cable can be indoor or outdoor, or both, the outdoor cables usually have the tube
filled with gel to act as a moisture barrier to the ingress of water. The number of cores in one
cable can be anywhere from 4 to 144.
Over the years a variety of core sizes have been produced but these days there are three main
sizes that are used in data communications, these are 50/125, 62.5/125 and 8.3/125. The 50/125
and 62.5/125 micron multi-mode cables are the most widely used in data networks, although
recently the 62.5 has become the more popular choice. This is rather unfortunate because the
50/125 has been found to be the better option for Gigabit Ethernet applications.
The 8.3/125 micron is a single mode cable which, until now, hasn't been widely used in data networking due to the high cost of single mode hardware. Things are beginning to change because the length limit for Gigabit Ethernet over 62.5/125 fibre has been reduced to around 220m, and using 8.3/125 may now be the only choice for some campus size networks. Hopefully, this shift to single mode may start to bring the costs down.
What's the difference between single-mode and multi-mode?
With copper cables larger size means less resistance and therefore more current, but with fibre the
opposite is true. To explain this we first need to understand how the light propagates within the
fibre core.
Light propagation
Light travels along a fibre cable by a process called 'Total Internal Reflection' (TIR), this is made
possible by using two types of glass which have different refractive indexes. The inner core has a
high refractive index and the outer cladding has a low index. This is the same principle as the
reflection you see when you look into a pond. The water in the pond has a higher refractive index
than the air and if you look at it from a shallow angle you will see a reflection of the surrounding
area, however, if you look straight down at the water you can see the bottom of the pond.
At some specific angle between these two view points the light stops reflecting off the surface of
the water and passes through the air/water interface allowing you to see the bottom of the pond.
In multi-mode fibres, as the name suggests, there are multiple modes of propagation for the rays
of light. These range from low order modes, which take the most direct route straight down the
middle, to high order modes, which take the longest route as they bounce from one side to the
other all the way down the fibre.
This has the effect of scattering the signal because the rays from one pulse of light arrive at the
far end at different times; this is known as Intermodal Dispersion (sometimes referred to as
Differential Mode Delay, DMD). To ease the problem, graded index fibres were developed.
Unlike the examples above which have a definite barrier between core and cladding, these have a
high refractive index at the centre which gradually reduces to a low refractive index at the
circumference. This slows down the lower order modes allowing the rays to arrive at the far end
closer together, thereby reducing intermodal dispersion and improving the shape of the signal.
So what about the single-mode fibre?
Well, what's the best way to get rid of Intermodal Dispersion ? Easy: only allow one mode of propagation. So a smaller core size means higher bandwidth and greater distances. Simple as that ! :)
Direct Cable Connection
Introduction
From the early PC days, Direct Cable Connection (DCC) was the most popular way to transfer data from one PC to another. Of course, it might seem a bit of an "old fashioned" way to transfer data these days, but remember that back then most PCs were running DOS 6.22, or Windows for Workgroups 3.11 if you were lucky !
Today, most computers are equipped with a network card, and an x-over cable or hub will allow you to transfer data a lot faster than a serial or parallel cable. But still, there is always a time when you require a simple transfer via serial or parallel, and that's what this page is about.
There is a variety of programs which allow you to use the above mentioned cables to successfully transfer data between PCs, but you should know that you can achieve your goal without them as well, since Windows 95 and above support the direct cable connection method.
Installing Windows programs or components to transfer data is out of this section's scope, but I have included some notes on what you should check before attempting the Direct Connection via cable; this info is included in the "Important DCC Info" section. We will also be learning how to create the cables required to meet our goals and comparing the speed of the two (Serial and Parallel).
Because the page ended up being quite long, I decided to split it in order to make it easier to read.
Simply click on the subject you'd like to read about:
Serial Direct Connection
Parallel Direct Connection
Serial Direct Cable Connection
Introduction
The Serial Direct Connection is the one which utilizes the COM ports of your computers. Every computer has at least 2 COM ports, COM1 and COM2; the "COM" stands for "Communications". Its pinouts are a lot simpler when compared to the parallel port, but the speed is also a lot slower :)
To give you an idea of how fast (or slow) a serial port is, at its best you will get around 12 to 14
KB per second. That's pretty slow when you're used to a network connection, but let me show you
how serial data is transferred so you can also understand why it's a lot slower:
The above picture gives you an idea of how serial data is transferred. Each numbered coloured block is sent from PC 1 to PC 2. PC 2 will receive the data in the same order it was sent; in other words, it will receive data block 1 first and then 2, all the way to block 7. This is a pretty good representation of data flow in a serial cable. Serial ports transmit data sequentially over one pair of wires (the rest of the wires are used to control the transfer).
Another way you can think of it is as a one lane road, wide enough to fit only one car at a time (one data block at a time in our example above), so you can imagine that the road cannot process several cars at once.
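We can sanity-check that 12 to 14 KB per second figure with a little arithmetic, assuming a 115,200 bps port and 10 bits on the wire per byte (8 data bits plus a start and a stop bit):

    bits_per_second = 115200   # a typical top speed for a PC COM port (assumed)
    bits_per_byte   = 10       # 8 data bits + start bit + stop bit

    throughput = bits_per_second / bits_per_byte / 1024
    print(round(throughput, 2), 'KB per second')   # ~11.25

If you see a little more than this in practice, it's usually because the transfer software compresses the data on the fly.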
The Serial port
Most new computers have two COM ports with 9 pins each, these are DB-9 male connectors.
Older computers would have one DB-9 male connector and one DB-25 male connector. The 25
pin male connector is pretty much the same as the 9 pin, it's just bigger.
Let's have a look at a serial port to see what we are talking about:
Different pinouts are used for the DB-9 and DB-25 connectors and we will have a look at them in
a moment. Let's just have another quick look at the COM ports of a new computer:
Notice the COM ports, they are both DB-9 connectors, there is no more DB-25 ! The connector
above the two blue COM ports is an LPT or Parallel port.
The serial port of a computer is able to run at different speeds, thus allowing us to connect
different devices which communicate at different speeds with the computer. The following table
shows the speeds at which most computers' serial ports are able to run and how many KB/sec
they translate to:
Now we will have a look at the pin outs of both DB-9 and DB-25 connectors:
The Cable
All that's left now is the pinouts required to allow us to use the serial cable for direct connection. There is a special term for this type of cable: it's called a "null modem" cable, which basically means you need to have TX and RX crossed over. Because you can have different configurations, e.g. DB-9 to DB-9, DB-9 to DB-25, and DB-25 to DB-25, I have created different tables to show you the pinouts for each one:
1) DB-9 to DB-9. You use this configuration when you need a cable with a DB-9 connector on
each end:
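As a reference, here's one common full-handshake version of the DB-9 to DB-9 null modem pinout, expressed as a small Python table. This is an assumption on my part; simpler 3-wire variants (TX, RX and ground only) also work with most software:

    # (pin on end A, pin on end B) for a full-handshake DB-9 null modem (assumed)
    NULL_MODEM_DB9 = [
        (3, 2),   # TX  -> RX   (the essential cross-over)
        (2, 3),   # RX  <- TX
        (5, 5),   # signal ground, straight through
        (7, 8),   # RTS -> CTS
        (8, 7),   # CTS <- RTS
        (4, 6),   # DTR -> DSR
        (6, 4),   # DSR <- DTR
    ]

    for a, b in NULL_MODEM_DB9:
        print(f'pin {a} (end A) -> pin {b} (end B)')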
2) DB-9 to DB-25. You use this configuration when you need a cable with one DB-9 and one
DB-25 connector on either end:
3) DB-25 to DB-25. You use this configuration when you need a cable with a DB-25 connector
on each end:
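Since the pinout tables above are images, here is a commonly published DB-9 to DB-9 null modem wiring, written out as a small data structure. Treat it as a representative reference rather than a copy of the tables above; simpler 3-wire variants (TX, RX, GND only) also exist, and some full-handshake cables additionally loop DTR to DCD:

```python
# Commonly published DB-9 to DB-9 null modem wiring.
# TX/RX and the handshake lines are crossed over; ground runs straight through.
NULL_MODEM_DB9 = [
    ((3, "TX"),  (2, "RX")),    # transmit  -> receive
    ((2, "RX"),  (3, "TX")),    # receive  <-  transmit
    ((5, "GND"), (5, "GND")),   # signal ground, straight through
    ((7, "RTS"), (8, "CTS")),   # hardware flow control, crossed
    ((8, "CTS"), (7, "RTS")),
    ((4, "DTR"), (6, "DSR")),   # ready signals, crossed
    ((6, "DSR"), (4, "DTR")),
]
```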
Well, that pretty much covers everything about serial direct connection via a null modem cable.
If you're using third-party software to connect your computers, you probably won't run into big problems, but if you're using the Windows software, make sure each computer has a unique name, because Windows will treat the direct connection as a "network" connection. This means you will be able to see the other computer via Network Neighborhood.
Parallel Direct Cable Connection
Parallel Direct Connection
The Parallel Direct Connection is the second solution to transfer data from one computer to
another. The cable required is slightly more complicated as it has more wires that need to be
connected, but the speeds you will get from it will make it well worth the time and effort required
to make the cable.
Most people know parallel direct cables as "Laplink" cables. You get one when you buy the Laplink program or PCAnywhere; it's usually a yellow cable, but you'll be able to make your own by the time you finish reading this page.
There is a variety of parallel (LPT) ports, 4 to be precise, but we use the same cable for every one of them. We will have a look at them all to make sure we cover everything :)
Now, as far as speed's concerned, with a standard LPT port you're looking at around 40 to 60 KB
per second whereas with the faster LPT ports you should expect something around 1MB per
second ! Whichever way you see it, it's a huge improvement in comparison to the serial cable
(Null modem cable).
Let's have a quick look at the way data is transferred over a parallel link, this will help us
understand why it's also a lot faster than the serial method of transfer:
This diagram shows a parallel transfer. In a serial transfer, one block of data is moved at a time, whereas with parallel, and more specifically in our example, 4 data blocks are moved at a time. Parallel ports transmit data simultaneously over multiple lines and are therefore faster than serial ports.
If you're having difficulty understanding the diagram, just think of a 4-lane highway (our parallel cable) where 4 cars move at a time, whereas the serial cable is like a one-lane highway with only one car moving at a time. Hope that helps :)
What does the parallel port (LPT) look like ?
The picture below shows a parallel port, also known as LPT port, of a new computer.
With new computers, you will always find the LPT port right above the two COM ports, and it's usually colour coded purple. No matter what type of LPT port you have, they all look the same; it's the electrical characteristics that change amongst the 4 different types of LPT ports, and that's transparent to the user. All LPT ports are female DB-25 connectors.
So what are the different LPT ports ?
Before we get stuck into the pinouts of the LPT port, let's have a look at the different types of
LPT ports available. Again, depending on the LPT port, you would expect different speed rates:
Because it might seem a bit confusing at the beginning, I have included a bit more technical information on the various ports to help you understand more about them. To keep it simple, I have categorised and colour coded them to match the table above:
4 bit ports
The port can do 8 bit byte output and 4 bit nibble input. These ports are often called
"unidirectional" and are most commonly found on desktop bus cards (also called IO expansion
cards, serial/parallel cards, or even 2S+P cards) and older laptops. This is still the most common
type of port, especially on desktop systems. 4 bit ports are capable of effective transfer rates of
about 40-60 KBytes per second in typical devices but can be pushed upwards of 140 KBytes/sec
with certain design tricks.
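To picture why input through a 4-bit port is slower, here is a minimal illustration (my own sketch, not code from the original article) of how every byte must be split into two 4-bit nibbles before it can be read back through the port's status lines, doubling the number of reads and handshakes per byte:

```python
# Illustrative only: "nibble mode" moves one byte as two 4-bit halves.
def split_into_nibbles(byte):
    high = (byte >> 4) & 0x0F   # upper 4 bits, transferred first
    low = byte & 0x0F           # lower 4 bits, transferred second
    return high, low

def reassemble(high, low):
    return (high << 4) | low

assert reassemble(*split_into_nibbles(0xA7)) == 0xA7   # round-trips correctly
```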
8 bit ports
These ports can do both 8 bit input and output and are sometimes called "bidirectional ports" but
that term is often misused by vendors to refer to 4 bit ports as well. Most newer laptops have 8 bit
capability although it may need to be enabled with the laptop's vendor-specific CMOS setup
function. This is discussed below. A relatively smaller percentage of LPT bus cards have 8bit
capability that sometimes must be enabled with a hardware jumper on the board itself. True 8 bit
ports are preferable to 4 bit ports because they are considerably faster when used with external
devices that take advantage of the 8 bit capability. 8 bit ports are capable of speeds ranging from
80-300 KBytes per second, again depending on the speed of the attached device, the quality of the
driver software and the port's electrical characteristics.
EPP ports
These ports can do both 8-bit input and output at ISA bus speeds. They are as fast as 8-bit bus cards and can achieve transfer rates upwards of 600 KBytes per second. These ports are usually used by non-printer peripheral devices such as external CD-ROM drives, tape drives, hard drives, network adaptors and more.
ECP ports
These ports can do both 8-bit input and output at bus speeds. The specification for this port type was jointly developed by Microsoft and Hewlett-Packard. ECP ports are distinguished by having DMA capability, on-board FIFOs at least 16 bytes deep and some hardware data compression capability, and are generally more feature-rich than other ports. These ports are as fast as 8-bit bus cards and can achieve transfer rates upwards of 1 MByte per second, and faster on PCs whose buses support it. The design is capable of even faster transfer rates in the future.
A Laplink cable can also be used to link two PCs running MS-DOS 6.0 or later very effectively, using INTERSVR.EXE (on the host PC) and INTERLNK.EXE (on the guest PC). It can likewise be used for faster data transfers with the DCC feature of Win9x/Me/2000.
Let's now have a quick look at the pinouts of an LPT port:
The Cable
As explained, there are different LPT ports, but the same cable is used for all of them. Depending on your computer's BIOS LPT settings, you will be able to achieve the different transfer speeds outlined in the table above.
The picture below clearly shows the pinouts of the required cable:
One wire should be attached to the metal body of the male connector on both sides; this is shown as the "metal body" on the diagram.
Now, because I understand how much trouble someone can run into when trying to create a cable and get it to work properly, I have included the DirectParallel Connection Monitor Utility, which DCC users can run on both computers to troubleshoot and test the DCC connection and cable. It provides detailed information about the connection, the cable being used for the connection, the I/O mode (4-bit, 8-bit, ECP, EPP), the parallel port type, I/O address and IRQ.
And that pretty much finishes the discussion on Parallel Cable Connections !
USB Direct Cable Connection
Introduction
Serial and Parallel Direct Cable Connections are considered a bit "old fashioned" these days. USB Direct Cable Connection (DCC), on the other hand, belongs in the "new fashioned" category :) USB DCC has been around for a few years, but because most people use their network card to transfer data, DCC over the USB port hasn't become very well known; it does exist, though, and the catch is that you can't make the cable yourself, you must buy it! But don't be tempted to leave the page just yet, there is a lot of information on USB here which is always good to know. Keep reading .... :)
Let's have a closer look and see what it's all about !
About USB
USB stands for Universal Serial Bus. Most peripherals for computers these days come in a USB
version. The USB port was designed to be very flexible and for this reason you are able to
connect printers, external hard drives, cdroms, joysticks, scanners, digital cameras, modems, hubs
and a lot of other cool stuff to it.
The Universal Serial Bus gives you a single, standardised, easy-to-use way to connect up to 127 devices to a computer. The 127 figure is theoretical :) In practice it's a lot less! The devices you connect can even be powered through the USB port of your computer if they draw less than 500 mA, which is half an Ampere. A good example is my little Canon scanner: it has only one cable, which both powers the scanner and transfers the data to the computer!
Currently there are 2 versions of the USB port: the initial version, USB v1.1, and the newer version, USB v2.0, which hit the market at the end of 2001. Most people have computers and devices which use the first version, but all new computers now come with USB v2.0. This new version of the USB port is backwards compatible with the older version and is also a lot faster.
The table below compares the two USB ports so you can see the speed difference:
Keep in mind that when you're using a USB DCC cable you won't get such great speeds; expect somewhere around 500 KBytes/sec. This also depends on the type of CPU, the O/S, the quality of the cable and its electronic components, and the protocols running on your system.
Another thing to keep in mind is which Windows operating systems support the USB port:
The USB Cable
The USB standard uses A and B connectors to avoid confusion. "A" connectors head "upstream"
toward the computer, while "B" connectors head "downstream" and connect to individual
devices. This might seem confusing at first, but the scheme was designed precisely so that consumers never have to figure out which end goes where.
And this is what the USB cable and connectors actually look like:
As mentioned earlier, the USB port can power certain devices and transfer data at the same time. For this to happen, the USB cable must contain at least 4 wires, of which 2 are for power and 2 are for data.
The diagram below will help you understand what the cable contains:
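For reference, since the diagram is an image, the four wires of a standard USB 1.1/2.0 cable carry the following well-known signal assignment:

```python
# Standard USB 1.1/2.0 cable wiring: two wires for power, two for data.
USB_PINOUT = {
    1: ("VBUS", "+5 V power"),
    2: ("D-",   "data, one half of the differential pair"),
    3: ("D+",   "data, the other half of the differential pair"),
    4: ("GND",  "ground"),
}
```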
The USB DCC (Finally :) )
As I mentioned in the introduction of this page, the USB DCC cable cannot be home-made, because it requires special electronic circuitry built into the cable. Parallel Technologies manufactures USB DCC cables and calls its product the "NET-LinQ":
The USB DCC cable can also be used to connect a computer to your network. The way it works is pretty simple. Assume you have Computers A, B, C and D. Computers A, B and C are connected via an Ethernet LAN, and Computer D hasn't got a network card to connect to the network. Using the NET-LinQ or a similar cable, you can connect Computer D to any of the other 3 computers, as long as they have a USB port; then, by configuring the network protocols on Computer D, it will be able to see and connect to the rest of the network!
This completes the discussion of USB Direct Cable Connection.
Important Direct Cable Connection Notes
Important Points for DCC
This page was designed to provide some notes on Direct Cable Connection (File-transfer) of
Win9x/ME/2000 with LAPLINK (Printer port) Cable or Null-Modem (serial port) Cable.
I've successfully used a Laplink cable to link two PCs for file transfer only (not playing games), with Windows 95 and the Direct Cable Connection program, using the NetBEUI protocol on each computer. You can quickly check whether the protocol is installed by double-clicking on the "Network" icon in the Control Panel of your Windows operating system.
In addition to the above, you must have installed "Client for Microsoft Networks", "File and Printer Sharing for Microsoft Networks" and, optionally, the TCP/IP protocol, which will require some configuration. Providing a simple IP address and subnet mask will be enough for our purposes; the rest of the fields can be ignored. If you would like to allow users to access your files and printer, ensure both options under "File and Print Sharing" are selected.
Once you have completed the above steps, you should have the following listed in the "Network" window:

Client for Microsoft Networks
TCP/IP
NetBEUI
File and Printer Sharing for Microsoft Networks
Once your changes are complete, Windows might prompt you to reboot the system, so make sure
all work is saved before answering "yes"!
You should also share the disks on both computers by right-clicking on each disk installed in your system and selecting the "Sharing" option that appears in the menu. You can access the disks via the "My Computer" icon on your desktop.
After you complete these actions, you will see a blue hand "holding" your shared drives,
indicating that the drive is shared with the rest of the network!
Introduction To Protocols
Introduction
In the networking and communications area, a protocol is the formal specification that defines the
procedures that must be followed when transmitting or receiving data. Protocols define the
format, timing, sequence, and error checking used on the network.
In plain English, the above means that if you have 2 or more devices, e.g. computers, which want to communicate, then they need a common "protocol": a set of rules that guide the computers on how and when to talk to each other.
The way this "definition" happens in computer land is via the RFCs (Requests For Comments), where the IETF (a group of engineers with no life) makes up the new standards and protocols, and then the major vendors (IBM, Cisco, Microsoft, Novell) follow these standards and implement them in their products to make more money and try to take over the world !
There are hundreds of protocols out there and it is impossible to list them all here, so instead we
have included some of the most popular protocols around so you can read up on them and learn
more about them.
The table below shows the most popular TCP/IP protocols. The OSI model is there for you to see
which layer each of these protocols work at.
One thing which you should keep in mind is that as you move from the lower layers (Physical) to
the upper layers (Applications), more processing time is needed by the device that's dealing with
the protocol.
Please note: All routing protocols can be found under the "Networking/Routing" menu option.
TCP/IP Protocol Stack vs. The OSI Model

Currently available protocols to read about are:

Internet Protocol (IP)
TCP
UDP
ICMP
DNS
FTP
TFTP
Ethernet
RIP
OSPF
Transmission Control Protocol (TCP) Introduction
Introduction
The Transmission Control Protocol, or TCP as we will refer to it from now on, is one of the most important and well-known protocols in the networking world today. Used in every type of network world-wide, it enables millions of data transmissions to reach their destination and works
as a bridge, connecting hosts with one another and allowing them to use various programs in
order to exchange data.
The Need For Reliable Delivery
TCP is defined by RFC 793 and was introduced to the world towards the end of 1981. The motivation behind creating such a protocol was the fact that, back in the early 80s, computer communication systems were playing a very important role in military, educational and ordinary office environments. As such, there was a need for a mechanism that would be robust and reliable, and could complete data transmissions over various mediums without great losses.
TCP was designed to be able to deliver all of the above, and so it was adopted promptly by the
rest of the world.
Because TCP is not the type of protocol you can analyse in one page, it has been separated and
analysed over 13 pages in order to cover all its characteristics and help you gain a better
understanding of how it works and what its capabilities are.
Section 1: TCP, A Transport Protocol. This page is a brief introduction to TCP. It will show you
how TCP fits into the OSI Model, by using simple diagrams. It also helps you understand the
concept of a "Transport Protocol".
Section 2: Quick TCP Overview. This page is aimed for readers requiring a good and quick
overview of the protocol's features without getting into too much technical detail.
Section 3: The TCP Header/Segment. Find out what the "TCP Header" and "TCP Segment" refer
to. These two terms are used quite often when talking about the protocol, thus it is essential we
understand what these two terms are related to.
Section 4: In-Depth TCP Analysis. This subsection is a whole topic in itself and deals with the in-depth analysis of the TCP header. We examine each field step by step, using plenty of examples and our well-known cool 3D diagrams to make sure you understand all the material. This analysis is covered over 7 pages of hardcore information, so be prepared!
Section 4.1: TCP Analysis - [Section 1]: Source & Destination port number. Find out what ports
are and how they are used in a typical data transfer.
Section 4.2: TCP Analysis - [Section 2]: Sequence & Acknowledgement Numbers. At last,
everything you wanted to know about sequence and acknowledgment numbers. We will cover
them in much detail using plenty of diagrams to ensure you are not left with unanswered
questions.
Section 4.3: TCP Analysis - [Section 3]: Header Length. We examine the meaning of this field
and how it is calculated.
Section 4.4: TCP Analysis - [Section 4]: TCP Flag Options. This is one of the most important
pages in our in-depth analysis. Here you will learn what these flags are, how many flags the
protocol supports and lastly, how they are used. We will also examine how hackers can use
specific flags to gain vital information on remote systems.
Section 4.5: TCP Analysis - [Section 5]: Window Size, Checksum & Urgent Pointer. These fields
play one of the most important roles in bandwidth utilisation. Find out how you can increase data
throughput and minimise delays between WAN links by playing around with these fields! This
page is highly recommended for anyone seeking details about WAN link efficiency and data
throughput.
Section 4.6: TCP Analysis - [Section 6]: TCP Options. This page is considered an extension of the previous one. Here you will learn about selective acknowledgements, window scaling and several other options available to TCP that ensure data is handled in the best possible way as it transits to its destination.
Section 4.7: TCP Analysis - [Section 7]: Data. The reason for all of the above! Our last section
provides an overview of TCP protocol and concludes with several good notes.
You will surely agree that there is much to cover in this fantastic protocol. So, let us not lose any more time and begin our cool analysis! If you need to grab any brain food, now's your chance, because once we start .... there is no stopping till we're done!
TCP, A Transport Protocol
Introduction
Understanding how each protocol fits into the OSI Model is essential for any network engineer.
This page analyses how TCP is classified as a 'transport protocol' and gives you an insight into
what to expect from the protocol.
Fitting TCP into the OSI Model
As most of you are well aware, every protocol has its place within the OSI Model. A protocol's position within the model is an indication of its complexity and intelligence. As a general rule, the higher you move up the OSI Model, the more intelligent protocols become, and the more CPU intensive they are; the lower layers of the OSI Model are quite the opposite, that is, less CPU intensive and less intelligent.
TCP is placed at the 4th layer of the OSI Model, which is also known as the transport layer. If
you have read through the OSI model pages, you will recall that the transport layer is responsible
for establishing sessions, data transfer and tearing down virtual connections.
With this in mind, you would expect any protocol that's placed in the transport layer to implement
certain features and characteristics that would allow it to support the functionality the layer
provides.
So as we analyse TCP, you will surely agree that it fits right into the transport layer.
The diagram below shows you where the TCP header is located within a frame that's been
generated by a computer and sent to the network. If you rotate it 90 degrees to your left, you
would get something similar to the previous diagram. This of course is because each layer
appends its own information, or header if you like:
The frame is made up of six 3D blocks so you can see which piece is added by each OSI layer. You can see that the TCP header, containing all the options the protocol supports, is placed right after the IP header (Layer 3) and before the data section that contains upper-layer information (Layers 5, 6 and 7).
Note: For those wondering about the presence of the FCS block at the end, it contains a special checksum placed by the datalink layer to allow the receiving host to detect whether the current frame has been corrupted during transit.
Please refer to the Ethernet II Frame page for more information.
Where and why would we use TCP ?
TCP is used in almost every type of network. As a protocol, it is not restricted to any network topology, whether it be a local area network (LAN) or a wide area network (WAN). We call it a transport protocol because it is located in the transport layer of the OSI model, and its primary job is to get data from one location to another, regardless of the physical network and location.
As most of you already know, there are two types of transport protocols, TCP being one of them
and UDP (User Datagram Protocol) being the other. The difference between these two transport
protocols is that TCP offers an extremely reliable and robust method of transferring data,
ensuring that the data being transferred does not become corrupt in any way. UDP, on the other
hand, offers an unreliable way of transferring data, without being able to guarantee that the data has arrived at its destination, or its integrity when it does arrive.
The concept of a transport protocol
As we mentioned, TCP is a transport protocol, which means it is used to carry the data of other protocols. At first this might sound weird or confusing, but it is exactly what TCP was designed for, and it adds substantial functionality to the protocols it carries.
The diagram below is the simplest way to show the concept of a 'transport' protocol:
In the pages to follow, we will have a closer look at how TCP manages to provide its reliable data
transfer method and make sure packets get to their destination without errors. This whole process
is the work of many 'subsystems' within the TCP that work together to provide the reliability that
TCP gives us.
Before we dive in deeper though, let's have a quick overall view of the protocol. If you're not interested in too much technical detail, then the next page is for you! Even those looking for an in-depth analysis should read the quick-overview page first, to get an idea of what we will be analysing soon.
Quick Overview Of The Transmission Control Protocol - TCP
Introduction
To assist in making this process as painless and understandable as possible, we are going to
provide a quick overview of the protocol and then start analysing each component-field in the
pages to come, using examples and the cool 3D diagrams you all love:)
As previously mentioned on a number of occasions, TCP is one of the two protocols that live at the Transport layer and is used to carry data from one host to another. What makes TCP so popular is the way it works when sending and receiving data. Unlike UDP, TCP will check for errors in every packet it receives, in an endless struggle to avoid data corruption.
Some common protocols that use TCP are: FTP, Telnet, HTTP, HTTPS, DNS, SMTP and POP3.
Let's have a closer look at the main characteristics of this wonderful protocol.
When people refer to "TCP/IP", remember that they are talking about a suite of protocols, not just one protocol as most people think. Please see the Protocols section for more information.
Main Features
Here are the main features of TCP that we are going to analyse:

Reliable Transport
Connection-Oriented
Flow Control
Windowing
Acknowledgements
More Overhead
Reliable Transport
TCP is a reliable transport because of the different techniques it uses to ensure that the data received is error free. It is a robust protocol used for file transfers, where data errors are not an option. When you download a 50 MB file from a website, you wouldn't want to find out after the download is complete that the file has an error! In reality this does still happen occasionally, which just goes to show that nothing is perfect.
This picture shows the TCP header within an Ethernet II frame. Right below it you will find our second diagram, which zooms into the TCP header, displaying the fields the protocol contains:
The diagram on the left shows the individual breakdown of each field within the TCP header, along with its length in bits.
Remember that 8 bits equal 1 byte.
The most popular fields within the TCP header are the Source Port, Destination Port and Code
bits. These Code bits are also known as 'flags'.
The rest of the fields help make sure all TCP segments make it to their destination and are
reassembled in the correct order, while at the same time providing an error free mechanism
should a few segments go missing and never reach their destination.
Keep in mind that in the pages to follow we will have a detailed look into each available field, for
now we are just providing an overview of them.
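For a hands-on feel for those fields, the sketch below unpacks the fixed 20-byte part of a TCP header from raw bytes. The field layout follows RFC 793; the function and variable names are my own, and the sample values echo the port numbers and sequence number used later on these pages:

```python
import struct

def parse_tcp_header(raw):
    """Unpack the fixed 20-byte TCP header (RFC 793 layout)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHLLHHHH", raw[:20])
    return {
        "source_port": src_port,
        "destination_port": dst_port,
        "sequence": seq,
        "acknowledgement": ack,
        "header_length_bytes": (offset_flags >> 12) * 4,  # data offset in 32-bit words
        "code_bits": offset_flags & 0x3F,                 # the six flags
        "window_size": window,
        "checksum": checksum,
        "urgent_pointer": urgent,
    }

# A hand-crafted SYN segment: source port 3025, destination port 8080,
# sequence number 1293906975, header length 7 words, SYN flag (0x02) set.
sample = struct.pack("!HHLLHHHH", 3025, 8080, 1293906975, 0, (7 << 12) | 0x02, 8760, 0, 0)
print(parse_tcp_header(sample))
```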
Connection Oriented
What this basically means is that a connection is established between the two hosts, or rather the two computers, before any data is transferred. When the term "connection is established" is used, it means that both computers know about each other and have agreed on the exchange of data.
This is also where the famous 3-way handshake happens. You will find the SYN and ACK bits in
the Code bits field which are used to perform the 3-way handshake. Thanks to the 3-way
handshake, TCP is connection oriented.
The following diagram explains the procedure of the 3-way handshake:
STEP 1: Host A sends the initial packet to Host B. This packet has the "SYN" bit enabled. Host
B receives the packet and sees the "SYN" bit which has a value of "1" (in binary, this means ON)
so it knows that Host A is trying to establish a connection with it.
STEP 2: Assuming Host B has enough resources, it sends a packet back to Host A with the "SYN" and "ACK" bits enabled (1). The SYN that Host B sends at this step means 'I want to synchronise with you', and the ACK means 'I acknowledge your previous SYN request'.
STEP 3: So... after all that, Host A sends another packet to Host B with the "ACK" bit set (1), effectively telling Host B 'Yes, I acknowledge your previous request'.
Once the 3-way handshake is complete, the connection is established (virtual circuit) and the data
transfer begins.
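You rarely have to code any of this yourself; the operating system performs the 3-way handshake the moment a program asks for a TCP connection. In the minimal sketch below (host name and port are just placeholders), the handshake has already completed by the time the call returns:

```python
import socket

# create_connection() makes the kernel perform the SYN / SYN-ACK / ACK
# exchange; when it returns, the virtual circuit is established and we
# can start sending data straight away.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(sock.recv(1024))
```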
Flow Control
Flow control is used to regulate the flow of data across the connection. If for any reason one of the two hosts is unable to keep up with the data transfer, it can send special signals to the other end, asking it to either stop or slow down so it can keep up.
For example, if Host B was a webserver from which people could download games, then
obviously Host A is not going to be the only computer downloading from this webserver, so Host
B must regulate the data flow to every computer downloading from it. This means it might turn to
Host A and tell it to wait for a while until more resources are available because it has another 20
users trying to download at the same time.
Below is a diagram that illustrates a simple flow control session between two hosts. At this point,
we only need to understand the concept of flow control:
Generally speaking, when a machine receives a flood of data too quickly for it to process, it stores
it in a memory section called a buffer. This buffering action solves the problem only if the data
bursts are small and don't last long.
However, if the data burst continues it will eventually exhaust the memory of the receiving end
and that will result in the arriving data being discarded. So in this situation the receiving end will
simply issue a "Not ready" or "Stop" indicator to the sender, or source of the flood. After the
receiver processes the data it has in its memory, it sends out a "Ready" or "Go" transport indicator
and the sending machine receives the "Go" indicator and resumes its transmission.
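As a toy illustration of that Stop/Go behaviour (purely my own sketch, not part of the original article), a bounded queue plays the role of the receiver's buffer: once it fills up, the sender blocks, which is the moral equivalent of receiving a "Not ready" signal:

```python
import queue
import threading
import time

# The bounded queue models the receiver's buffer memory (4 segments here).
receive_buffer = queue.Queue(maxsize=4)

def receiver():
    for _ in range(10):
        segment = receive_buffer.get()   # taking a segment frees a slot: "Go"
        time.sleep(0.05)                 # a deliberately slow receiver
        print(f"processed segment {segment}")

threading.Thread(target=receiver).start()

for segment in range(10):
    receive_buffer.put(segment)          # blocks while the buffer is full: "Stop"
    print(f"sent segment {segment}")
```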
Windowing
Data throughput, or transfer efficiency, would be low if the transmitting machine had to wait for
an acknowledgment after sending each packet of data (the correct term is segment as we will see
on the next page). Because there is time available after the sender transmits the data segment and
before it finishes processing acknowledgments from the receiving machine, the sender uses the
break to transmit more data. If we wanted to briefly define Windowing we could do so by stating
that it is the number of data segments the transmitting machine is allowed to send without
receiving an acknowledgment for them.
Windowing controls how much information is transferred from one end to the other. While some
protocols quantify information by observing the number of packets, TCP/IP measures it by
counting the number of bytes.
Let's explain what is happening in the above diagram.
Host B is sending data to Host A, using a window size equal to one. This means that Host B is
expecting an "ACK" for each data segment it sends to Host A. Once the first data segment is sent,
Host A receives it and sends an "ACK 2" to Host B. You might be wondering why "ACK 2" and
not just "ACK"?
The "ACK 2" is translated by Host B to say: 'I acknowledge (ACK) the packet you just sent me
and I am ready to receive the second (2) segment'. So Host B gets the second data segment ready
and sends it off to Host A, expecting an "ACK 3" response from Host A so it can send the third
data segment for which, as the picture shows, it receives the "ACK 3".
However, if it received an "ACK 2" again, this would mean something went wrong with the
previous transmission and Host B will retransmit the lost segment. We will see how this works in
the Acknowledgments section later on. Let's now try a different window size to get a better understanding... let's say 3!
Keep in mind the way the "ACK's" work, otherwise you might find the following example a bit
confusing. If you can't understand it, read the previous example again where the Window size
was equal to one.
In the above example, we have a window size equal to 3, which means that Host B can send 3
data segments to Host A before expecting an "ACK" back. Host B sends the first 3 segments
(Send 1, Send 2 and Send 3), Host A receives them all in good condition and then sends the
"ACK 4" to Host B. This means that Host A acknowledged the 3 data segments Host B sent and
awaits the next data segments which, in this case, would be 4, 5 and 6.
Acknowledgments
Reliable data delivery ensures the integrity of a stream of data sent from one machine to the other
through a fully functional data link. This guarantees the data won't be duplicated or lost. The
method that achieves this is known as positive acknowledgment with retransmission. This
technique requires a receiving machine to communicate with the transmitting source by sending
an acknowledgment message back to the sender when it receives data. The sender documents
each segment it sends and waits for this acknowledgment before sending the next segment. When
it sends a segment, the transmitting machine starts a timer and retransmits if it expires before an
acknowledgment is returned from the receiving end.
This figure shows how the Acknowledgments work. If you examine the diagram closely you will
see the window size of this transfer which is equal to 3. At first, Host B sends 3 data segments to
Host A and they are received in perfect condition so, based on what we learned, Host A sends an
"ACK 4" acknowledging the 3 data segments and requesting the next 3 data segments which will
be 4, 5, 6. As a result, Host B sends data segments 4, 5, 6 but 5 gets lost somewhere along the
way and Host A doesn't receive it so, after a bit of waiting, it realises that 5 got lost and sends an
"ACK 5" to Host B, indicating that it would like data segment 5 retransmitted. Now you see why
this method is called "positive acknowledgment with retransmission".
At this point Host B sends data segment 5 and waits for Host A to send an "ACK" so it can
continue sending the rest of the data. Host A receives the 5th data segment and sends "ACK 7"
which means 'I received the previous data segment, now please send me the next 3'. The next step
is not shown on the diagram but it would be Host B sending data segments 7, 8 and 9.
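To tie windowing and acknowledgments together, here is a small, self-contained simulation (my own Go-Back-N flavoured simplification; note that in the diagram above, Host B retransmits only the missing segment, while this sketch resends the whole window). The receiver ACKs with the number of the next segment it expects, exactly as in the diagrams:

```python
def simulate_transfer(segments, window=3, lose_once=frozenset({5})):
    """Toy positive-acknowledgment-with-retransmission transfer."""
    pending_loss = set(lose_once)   # segment numbers that vanish on first send
    received = {}
    expected = 1                    # next segment number the receiver wants
    while expected <= len(segments):
        # Sender transmits up to 'window' segments starting at 'expected'.
        for number in range(expected, min(expected + window, len(segments) + 1)):
            if number in pending_loss:
                pending_loss.discard(number)   # lost in transit, first time only
                continue
            received[number] = segments[number - 1]
        # Receiver ACKs the next in-order segment it is still missing.
        while expected in received:
            expected += 1
        print(f"ACK {expected}")
    return [received[n] for n in sorted(received)]

# Nine segments, window size 3, segment 5 lost once:
# prints ACK 4, ACK 5 (5 went missing), ACK 8, ACK 10.
assert simulate_transfer(list("ABCDEFGHI")) == list("ABCDEFGHI")
```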
More Overhead
As you can see, there is quite a neat mechanism under the TCP hood that enables data to be
transferred error free. All the features the protocol supports come at a price, and this is the
overhead associated with TCP.
When we talk about overhead, we are referring to all the different fields contained within the TCP header, and to the error checking that takes place to ensure no portion of the data is corrupt. While for most this is a fair trade-off, some people simply can't spare the extra processing power, bandwidth and time that TCP transactions require; for this reason we have the alternative UDP protocol, which you can read about in the UDP protocol section.
At this point our quick overview of the TCP has reached its conclusion. From the next page
onwards, we start to dive in deeper, so take a deep breath and jump right into it!
The TCP Header/Segment
Introduction
This page will introduce several new concepts, nothing of great difficulty, but essential for you to
understand. We will learn what a TCP segment is, analyse it and start to explore the guts of TCP
:)
So buckle up and get ready. It's all really simple, you just need to clear your mind and try to see
things in the simplest form and you will discover how easy and friendly TCP really is. You can
only feel comfortable with something once you get to know it.
TCP Header and TCP Segment
If we wanted to be more accurate with the terms we use, then perhaps we would title this page as
"Analysing A TCP Segment". Why? Well, that's what it's called in the networking world so we
need to know it by the correct term.
This of course leads us to another new definition, the TCP segment:
The unit of transfer between the TCP software on two machines is called a TCP segment.
If your expression now resembles that of a confused person, then don't worry, just keep reading...
Understanding this term is easier than you thought 5 seconds ago; just take a good look at the diagram below:
Now you see that a TCP segment is basically the TCP header plus the data that's right behind it
and, of course, the data belongs to the upper layers (5,6,7).
The data contents could be part of a file transfer or the response to an http request; the fact is that we are not really interested in the data's contents, only in the fact that it's part of the TCP segment.
The screen shot below was taken from my packet sniffer, and it shows the DATA portion
belonging to the TCP Header:
If you captured a similar packet with any packet sniffer, it would most likely display the Data portion within the TCP header, just as in the screen shot on the left.
So the question is whether a TCP header and a TCP segment are basically the same thing.
Even though it might seem they are, in most cases, when referring to the TCP header, we are
talking about the header without the data, whereas a TCP segment includes the data.
Getting Ready To Analyse The TCP Header
We are now ready to begin examining the structure of the TCP header. Keep in mind, however, that in the analysis that follows we use the term 'TCP header' loosely to refer to the whole 'TCP segment', meaning the TCP header information plus the data, just as the diagrams above show.
The last screen shot certainly gives out a fair bit of information, but there is still much that hasn't
been revealed, not to mention nothing's really been analysed as yet :)
Analysing The TCP Header
Introduction
A fair amount of time was spent trying to figure out the best way to analyse the TCP header. Most websites and other resources mention the protocol's main characteristics with a bit of information attached, leaving the reader with a lot of questions and making it difficult to comprehend how certain aspects of the protocol work.
For this reason a different approach was selected. Our method certainly gets right into the
protocol's guts and contains a lot of information which some of you might choose to skip, but it is
guaranteed to satisfy you by giving a thorough understanding of what is going on.
Get Ready.... Here It Comes!
For those who skipped the protocol's first introduction page, you will be happy to find out that the TCP quick-overview page contains a brief summary of the protocol's main characteristics to help refresh your memory. If you need to review the basics at any point, simply return to that page!
The diagram below shows the TCP header of a packet captured with a packet sniffer running on the network. We'll be using it to guide our step-by-step analysis of TCP.
As you can see, the TCP header has been completely expanded to show us all the fields the
protocol contains. The numbers on the right are each field's length in bits. This is also shown in
the quick TCP overview page.
Since much time was spent to ensure our analysis was complete in all aspects, be sure that by the
end of it, you will understand each field's purpose and how it works.
We should also point out that when the packet in our example arrives to its destination, only
section 7 (the last one) is sent to the upper OSI layers because it contains the data it is waiting for.
The rest of the information (including the MAC header, IP Header and TCP header) is overhead
which serves the purpose of getting the packet to its destination and allowing the receiving end to
figure out what to do with the packet, e.g. send the data to the correct local application.
Now you're starting to understand the somewhat complex mechanism involved in determining how data gets from one point to another!
Since you have made it this far, you can select the section you want to read about by simply
clicking on the coloured area on the above packet, or by using the menu below. It is highly
recommended that you start from the first section and slowly progress to the final one. This will
avoid confusion and limit the case of you scratching your head halfway through any of the other
sections:
Section 1: Source & Destination Port Number
Section 2: Sequence & Acknowledgement Numbers
Section 3: Header Length
Section 4: TCP Flag Options
Section 5: Window Size, Checksum & Urgent Pointer
Section 6: TCP Options
Section 7: Data
TCP Analysis - Section 1: Source & Destination port number
Introduction
This section contains one of the most well-known fields in the TCP header, the Source and
Destination port numbers. These fields are used to specify the application or services offered on
local or remote hosts.
You will come to understand how important ports are and how they can be used to gain information on remote systems that have been targeted for attack. We will cover basic and advanced port communications using detailed examples and colourful diagrams, but for now we will start with some basics to help break down the topic and allow us to progress smoothly into more advanced and complex information.
When a host needs to generate a request or send data, it requires some information:
1) IP Address of the desired host to which it wants to send the data or request.
2) Port number to which the data or request should be sent to on the remote host. In the case of a
request, it allows the sender to specify the service it is intending to use. We will analyse this soon.
1) The IP Address is used to uniquely identify the desired host we need to contact. This
information is not shown in the above packet because it exists in the IP header section located
right above the TCP header we are analysing. If we were to expand the IP header, we would
(certainly) find the source and destination IP Address fields in there.
2) The 2nd important aspect, the port number, allows us to identify the service or application our
data or request must be sent to, as we have previously stated. When a host, whether it be a simple
computer or a dedicated server, offers various services such as http, ftp, telnet, all clients
connecting to it must use a port number to choose which particular service they would like to use.
The best way to understand the concept is through examples and there are plenty of them below,
so let's take a look at a few, starting from a simple one and then moving towards something
slightly more complicated.
Time To Dive Deeper!
Let's consider your web browser for a moment.
When you send a http request to download a webpage, it must be sent to the correct web server in
order for it to receive it, process it and allow you to view the page you want. This is achieved by
obtaining the correct IP address via DNS resolution and sending the request to the correct port
number at the remote machine (web server). The port value, in the case of an http request, is
usually 80.
Once your request arrives at the web server, it will check that the packet is indeed for itself. This
is done by observing the destination IP Address of the newly received packet. Keep in mind that
this particular step is a function of the Network layer.
Once it verifies that the packet is in fact for the local machine, it will process the packet and see that the destination port number is equal to 80. It then realises it should send the data (or request) to the http daemon that's waiting in the background to serve clients:
Using this neat method we are able to use the rest of the services offered by the server. So, to use
the FTP service, our workstation generates a packet that is directed to the server's IP address, that
is 200.0.0.1, but this time with a destination port of 21.
The diagram that follows illustrates this process:
By now you should understand the purpose of the destination port and how it allows us to select
the services we require from hosts that offer them.
For those who noticed, our captured packet at the beginning of this page also shows the existence
of another port, the source port, which we are going to take a look at below.
Understanding the Source Port
The source port serves a purpose analogous to that of the destination port, but it is used by the sending host to keep track of new incoming connections and existing data streams.
As most of you are well aware, in TCP/UDP data communications, a host will always provide a
destination and source port number. We have already analysed the destination port, and how it
allows the host to select the service it requires. The source port is provided to the remote machine (in our example, the Internet Server) so that it can reply to the correct session initiated by the other side.
This is achieved by reversing the destination and source ports. When the host (in our example,
Host A) receives this packet, it will identify the packet as a reply to the previous packet it sent:
As Host A receives the Internet Server's reply, the Transport layer will notice the reversed ports
and recognise it as a response to the previous packet it sent (the one with the green arrow).
The Transport and Session layers keep track of all new connections, established connections and
connections that are in the process of being torn down, which explains how Host A remembers
that it's expecting a reply from the Internet Server.
Of course the captured packet that's displayed at the beginning of the page shows different port
numbers than the ones in these diagrams. In that particular case, the workstation sends a request
to its local http proxy server that runs on port 8080, using port 3025 as its source port.
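You can watch your own machine pick an ephemeral source port, much like the 3025 mentioned above. In this sketch (the host name is just a placeholder), getsockname() reports the local address and the source port the OS chose, while getpeername() reports the destination:

```python
import socket

with socket.create_connection(("example.com", 80)) as sock:
    local_ip, source_port = sock.getsockname()   # ephemeral port picked by the OS
    remote_ip, dest_port = sock.getpeername()    # the service we asked for (http = 80)
    print(f"source port {source_port} -> destination port {dest_port}")
```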
We should also note that TCP uses a few more mechanisms to accurately keep track of these
connections. The pages to follow will analyse them as well, so don't worry about missing out on
any information, just grab some brain food (hhmmm chocolate...), sit back, relax and continue
reading!
TCP Analysis - Section 2: Sequence & Acknowledgement Numbers
Introduction
This page will closely examine the Sequence and Acknowledgement numbers. These fields exist because the Internet, and most networks in general, are packet switched (we will explain this shortly), and because we nearly always send and receive data that is larger than the maximum transmission unit (a.k.a. MTU, analysed in sections 5 and 6), which is 1500 bytes on most networks.
Let's take a look at the fields we are about to analyse:
As you can see, the Sequence number precedes the Acknowledgement number.
We are going to explain how these numbers increment and what they mean, how various operating systems handle them differently and, lastly, how these numbers can become a security hazard for those who require a solidly secure network.
TCP - Connection Oriented Protocol
The Sequence and Acknowledgement fields are two of the many features that help us classify
TCP as a connection oriented protocol. As such, when data is sent through a TCP connection,
they help the remote hosts keep track of the connection and ensure that no packet has been lost on
the way to its destination.
TCP utilizes positive acknowledgments, timeouts and retransmissions to ensure error-free,
sequenced delivery of user data. If the retransmission timer expires before an acknowledgment is
received, data is retransmitted starting at the byte after the last acknowledged byte in the stream.
A further point worth mentioning is that Sequence numbers are generated differently on each operating system. Using special algorithms (and sometimes weak ones), an operating system will generate these numbers, which are used to track the packets sent or received, and since both the Sequence and Acknowledgement fields are 32 bits long, there are 2^32 = 4,294,967,296 possible numbers!
Initial Sequence Number (ISN)
When two hosts need to transfer data using the TCP transport protocol, a new connection is created. The host that wishes to initiate the connection generates what is called an Initial Sequence Number (ISN), which is simply the first sequence number placed in the Sequence field we are looking at. The ISN has always been the subject of security issues, as it seems to be a favourite way for hackers to 'hijack' TCP connections.
Believe it or not, hijacking a new TCP connection is something an experienced hacker can
alarmingly achieve with very few attempts. The root of this security problem starts with the way
the ISN is generated.
Every operating system uses its own algorithm to generate an ISN for every new connection, so
all a hacker needs to do is figure out, or rather predict, which algorithm is used by the specific
operating system, generate the next predicted sequence number and place it inside a packet that is
sent to the other end. If the attacker is successful, the receiving end is fooled and thinks the packet
is a valid one coming from the host that initiated the connection.
At the same time, the attacker will launch a flood attack to the host that initiated the TCP
connection, keeping it busy so it won't send any packets to the remote host with which it tried to
initiate the connection.
Here is a brief illustration of the above-mentioned attack:
As described, the hacker must discover the ISN algorithm by sampling the Initial Sequence Numbers used in all new connections by Host A. Once this is complete and the hacker knows the algorithm, they are ready to initiate their attack:
Timing is critical for the hacker, so he sends his first fake packet to the Internet Banking Server
while at the same time starts flooding Host A with garbage data in order to consume the host's
bandwidth and resources. By doing so, Host A is unable to cope with the data it's receiving and
will not send any packets to the Internet Banking Server.
The fake packet sent to the Internet Banking Server will contain valid headers, meaning it will
seem like it originated from Host A's IP Address and will be sent to the correct port the Internet
Banking Server is listening to.
There have been numerous reports published online that talk about the method each operating
system uses to generate its ISN and how easy or difficult it is to predict. Do not be alarmed to
discover that the Windows operating system's ISN algorithm is by far the easiest to predict!
Programs such as 'nmap' will actually test to see how difficult it can be to discover the ISN
algorithm used in any operating system. In most cases, hackers will first sample TCP ISN's from
the host victim, looking for patterns in the initial sequence numbers chosen by TCP
implementations when responding to a connection request. Once a pattern is found it's only a
matter of minutes for connections initiated by the host to be hijacked.
Example of Sequence and Acknowledgment Numbers
To help us understand how these newly introduced fields are used to track a connection's packets,
an example is given below.
Before we proceed, we should note that you will come across the terms "ACK flag" or "SYN
flag"; these terms should not be confused with the Sequence and Acknowledgment numbers as
they are different fields within the TCP header. The screen shot below is to help you understand:
You can see the Sequence number and Acknowledgement number fields, followed by the TCP
Flags to which we're referring.
The TCP Flags (light purple section) will be covered on the pages to come in much greater depth,
but because we need to work with them now to help us examine how the Sequence and
Acknowledgement numbers work, we are forced to analyse a small portion of them.
To keep things simple, remember that when talking about Sequence and Acknowledgement
numbers we are referring to the blue section, while SYN and ACK flags refer to the light purple
section.
The next diagram shows the establishment of a new connection to a web server - the Gateway
Server. The first three packets are part of the 3-way handshake performed by TCP before any data
is transferred between the two hosts, while the small screen shot under the diagram is captured by
our packet sniffer:
To make sure we understand what is happening here, we will analyse the example step by step.
Step 1
Host A wishes to download a webpage from the Gateway Server. This requires a new connection
between the two to be established so Host A sends a packet to the Gateway Server. This packet
has the SYN flag set and also contains the ISN generated by Host A's operating system, that is
1293906975. Since Host A is initiating the connection and hasn't received a reply from the
Gateway Server, the Acknowledgment number is set to zero (0).
In short, Host A is telling the Gateway Server the following: "I'd like to initiate a new connection
with you. My Sequence number is 1293906975".
Step 2
The Gateway Server receives Host A's request and generates a reply containing its own generated
ISN, that is 3455719727, and the next Sequence number it is expecting from Host A which is
1293906976. The Server also has the SYN & ACK flags set, acknowledging the previous packet
it received and informing Host A of its own Sequence number.
In short, the Gateway Server is telling Host A the following: "I acknowledge your sequence number and expect your next packet to have sequence number 1293906976. My sequence number is 3455719727".
Step 3
Host A receives the reply and now knows Gateway's sequence number. It generates another
packet to complete the connection. This packet has the ACK flag set and also contains the
sequence number that it expects the Gateway Server to use next, that is 3455719728.
In short, Host A is telling the Gateway Server the following: "I acknowledge your last packet.
This packet's sequence number is 1293906976, which is what you're expecting. I'll also be
expecting the next packet you send me to have a sequence number of 3455719728".
Now, someone might be expecting the next packet to be sent from the Gateway Server, but this is
not the case. You might recall that Host A initiated the connection because it wanted to download
a web page from the Gateway Server. Since the 3-way TCP handshake has been completed, a
virtual connection between the two now exists and the Gateway Server is ready to listen to Host
A's request.
With this in mind, it's now time for Host A to ask for the webpage it wanted, which brings us to
step number 4.
Step 4
In this step, Host A generates a packet with some data and sends it to the Gateway Server. The
data tells the Gateway Server which webpage it would like sent.
Note that the sequence number of the segment in line 4 is the same as in line 3 because the ACK
does not occupy sequence number space.
So keep in mind that any packets generated, which are simply acknowledgments (in other words,
have only the ACK flag set and contain no data) to previously received packets, never increment
the sequence number.
Last Notes
There are other important roles that the Sequence and Acknowledgement numbers have during
the communication of two hosts. Because segments (or packets) travel in IP datagrams, they can
be lost or delivered out of order, so the receiver uses the sequence numbers to reorder the
segments. The receiver collects the data from arriving segments and reconstructs an exact copy of
the stream being sent.
If we have a closer look at the diagram above, we notice that the TCP Acknowledgement number
specifies the sequence number of the next segment expected by the receiver. Simply scroll back
to Step 2 and you will see what I mean.
Summary
This page has introduced the Sequence and Acknowledgement fields within the TCP header. We
have seen how hackers hijack connections by discovering the algorithms used to produce the
ISNs and we examined step by step the way Sequence and Acknowledgement numbers increase.
The next page examines the TCP Header length field, so take a quick break if needed, and let's continue!
TCP Analysis - Section 3: Header Length
Introduction
The third field under close examination is the TCP Header length. There really isn't that much to
say about the Header length other than to explain what it represents and how to interpret its
values, but this alone is very important as you will soon see.
Let's take a quick look at this field, noting its location within the TCP structure:
You might also see the Header length represented as "Data offset" in other packet sniffers or applications; it is virtually the same thing, only with a 'fancier' name.
Analysing the Header length
If you open any networking book that covers the TCP header, you will almost certainly find the following description of this particular field:
"An integer that specifies the length of the segment header measured in 32-bit multiples" (Internetworking with TCP/IP, Douglas E. Comer, p. 204, 1995). This description sounds impressive, but when you look at the packet, you're most likely to scratch your head thinking: what exactly does that mean?
Well, you can cease being confused because we are going to cover it step by step, giving answers
to all possible questions you might have. If we don't cover your questions completely, well...
there are always our forums to turn to!
Step 1 - What portion is the "Header length" ?
Before we dive into analysing the meaning of the values used in this field, which, by the way, change with every packet, we need to understand which portion of the packet counts towards the "Header length".
Looking at the screen shot on the left, the light blue highlighted section shows us the section
that's counted towards the Header length value. With this in mind, you can see that the total
length of the light blue section (header length) is 28 bytes.
The Header length field is required because of the TCP Options field, which contains various
options that might or might not be used. Logically, if no options are used then the header length
will be much smaller.
If you take a look at our example, you will notice the 'TCP Options' is equal to 'yes', meaning
there are options in this field that are used in this particular connection. We've expanded the
section to show the TCP options used, and these are 'Max Segment' and 'SACK OK'. These will be analysed in the pages which follow; for now, we are only interested in whether the TCP options are used or not.
As the packet in our screenshot reaches the receiving end, the receiver will read the header length
field and know exactly where the data portion starts.
This data will be carried to the layers above, while the TCP header will be stripped and
disregarded. In this example, we have no data, which is normal since the packet is initiating a 3-way handshake (Flags: SYN=1), but we will cover that in more depth on the next page.
The main issue requiring our attention deals with the values used for the header length field and
learning how to interpret them correctly.
Step 2 - Header Value Analysis
From the screen shot above, we can see our packet sniffer indicating that the field has a value of
7 (hex) and this is interpreted as 28 bytes. To calculate this, take the value of 7, multiply it by
32 bits, then divide the result by 8 to convert to bytes: 7 × 32 = 224 bits, and 224 / 8 = 28 bytes.
(Equivalently, each 32-bit multiple is 4 bytes, so 7 × 4 = 28 bytes.)
Do you recall the definition given at the beginning of this page? "An integer that specifies the
length of the segment header measured in 32-bit multiples". This was the formal way of
describing these calculations :)
The calculation given is automatically performed by our packet sniffer, which is quite thoughtful,
wouldn't you agree? This can be considered, if you like, as an additional 'feature' found on most
serious packet sniffers.
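If you'd like to check the sniffer's maths yourself, here's a tiny Python sketch that pulls the Data Offset nibble out of a raw TCP header. The 20 header bytes below are made-up values purely for illustration (a real header with a Data Offset of 7 would be followed by 8 bytes of options):

    # Extract the TCP Header length (Data Offset) from a raw TCP header.
    # The field is the HIGH nibble of byte 12 (0-indexed) of the header.
    raw_tcp_header = bytes.fromhex(
        "0050" "04d2"            # source port 80, destination port 1234 (invented)
        "00000001" "00000000"    # sequence and acknowledgement numbers
        "7002" "faf0" "0000" "0000"  # offset+flags, window, checksum, urgent ptr
    )

    data_offset_words = raw_tcp_header[12] >> 4   # high nibble: 0x70 >> 4 = 7
    header_len_bytes = data_offset_words * 4      # 7 x 4 = 28 bytes

    print(header_len_bytes)  # prints 28; the data portion starts at this offset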
Below you will find another screen shot from our packet sniffer that shows a portion of the TCP
header (left frame) containing the header length field. On the right frame, the packet sniffer
shows the packet's contents in hex:
By selecting the Header length field on the left, the program automatically highlights the
corresponding section and hex value on the right frame. According to the packet sniffer, the hex
value '70' is the value for the header length field.
If you recall at the beginning of the page, we mentioned the header length field being 4 bits long.
This means that when viewing the value in hex, we should only have one digit or character
highlighted, but this isn't the case here because the packet sniffer has incorrectly highlighted the
'7' and '0' together, giving us the impression that the field is 8 bits long!
Note: In hex, each character, e.g. '7', represents 4 bits. This means that on the right frame, only
the '7' should be highlighted, and not '70'. Furthermore, if we were to convert hex '7' to binary,
the result would be '0111' (notice the total number of bits is equal to 4).
Summary
The 'Header length' field is very simple as it contains only a number that allows the receiving end
to calculate the number of bytes in the TCP Header. At the same time, it is mandatory because
without it there is no way the receiver will know where the data portion begins!
Logically, wherever the TCP header ends, the data begins - this is clear in the screen shots
provided on this page. So, if you find yourself analysing packets and trying to figure out where
the data starts, all you need to do is find the TCP Header, read the "Header length" value and you
can find exactly where the data portion starts!
Next up are the TCP flags that most of us have come across when talking about the famous 3-way
handshake and virtual connections TCP creates before exchanging data.
TCP Analysis - Section 4: TCP Flag Options
Introduction
As we have seen in the previous pages, some TCP segments carry data while others are simple
acknowledgements for previously received data. The popular 3-way handshake utilises the SYNs
and ACKs available in the TCP to help complete the connection before data is transferred.
Our conclusion is that each TCP segment has a purpose, and this is determined with the help of
the TCP flag options, allowing the sender or receiver to specify which flags should be used so the
segment is handled correctly by the other end.
Let's take a look at the TCP flags field to begin our analysis:
You can see the 2 flags that are used during the 3-way handshake (SYN, ACK) and data transfers.
As with all flags, a value of '1' means that a particular flag is 'set' or, if you like, is 'on'. In this
example, only the "SYN" flag is set, indicating that this is the first segment of a new TCP
connection.
In addition to this, each flag is one bit long, and since there are 6 flags, this makes the Flags
section 6 bits in total.
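Because each flag is a single bit, the whole field can be decoded with simple bitmasks. Here's a minimal Python sketch, assuming the usual bit positions of the six classic flags (the example byte values are invented):

    # Decode the six classic TCP flags from the flags byte (low 6 bits).
    TCP_FLAGS = {
        0x20: "URG",  # Urgent Pointer
        0x10: "ACK",  # Acknowledgement
        0x08: "PSH",  # Push
        0x04: "RST",  # Reset
        0x02: "SYN",  # Synchronise
        0x01: "FIN",  # Finish
    }

    def decode_flags(flag_byte):
        """Return the names of all flags set in the given byte."""
        return [name for mask, name in TCP_FLAGS.items() if flag_byte & mask]

    print(decode_flags(0x02))  # ['SYN']        - first segment of a new connection
    print(decode_flags(0x12))  # ['ACK', 'SYN'] - the SYN/ACK reply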
You would have to agree that the most popular flags are the "SYN", "ACK" and "FIN", used to
establish connections, acknowledge successful segment transfers and, lastly, terminate
connections. While the rest of the flags are not as well known, their role and purpose make them,
in some cases, equally important.
We will begin our analysis by examining all six flags, starting from the top, that is, the Urgent
Pointer:
1st Flag - Urgent Pointer
The first flag is the Urgent Pointer flag, as shown in the previous screen shot. This flag is used to
identify incoming data as 'urgent'. Such incoming segments do not have to wait until the previous
segments are consumed by the receiving end but are sent directly and processed immediately.
An Urgent Pointer could be used during a stream of data transfer where a host is sending data to
an application running on a remote machine. If a problem appears, the host machine needs to
abort the data transfer and stop the data processing on the other end. Under normal circumstances,
the abort signal will be sent and queued at the remote machine until all previously sent data is
processed, however, in this case, we need the abort signal to be processed immediately.
By setting the abort signal's segment Urgent Pointer flag to '1', the remote machine will not wait
till all queued data is processed and then execute the abort. Instead, it will give that specific
segment priority, processing it immediately and stopping all further data processing.
If you're finding it hard to understand, consider this real-life example:
At your local post office, hundreds of trucks are unloading bags of letters from all over the world.
Because the number of trucks entering the post office building is so large, they line up one
behind the other, waiting for their turn to unload their bags.
As a result, the queue ends up being quite long. However, a truck with a big red flag suddenly
joins the queue and the security officer, whose job it is to make sure no truck skips the queue,
sees the red flag and knows it's carrying very important letters that need to get to their destination
urgently. Following the procedure for urgent deliveries, the security officer signals to the truck to
skip the queue and go all the way up to the front, giving it priority over the other trucks.
In this example, the trucks represent the segments that arrive at their destination and are queued
in the buffer waiting to be processed, while the truck with the red flag is the segment with the
Urgent Pointer flag set.
A further point to note is the existence of the Urgent Pointer field. This field is covered in section
5, but we can briefly mention that when the Urgent Pointer flag is set to '1' (that's the one we are
analysing here), then the Urgent Pointer field specifies the position in the segment where urgent
data ends.
2nd Flag - ACKnowledgement
The ACKnowledgement flag is used to acknowledge the successful receipt of packets.
If you run a packet sniffer while transferring data using the TCP, you will notice that, in most
cases, for every packet you send or receive, an ACKnowledgement follows. So if you received a
packet from a remote host, then your workstation will most probably send one back with the
ACK field set to "1".
In some cases, the sender requires only one ACKnowledgement for every 3 packets sent; here, the
receiving end sends the expected ACK once the 3rd sequential packet is received. This is
also called Windowing and is covered extensively in the pages that follow.
3rd Flag - PUSH
The Push flag, like the Urgent flag, exists to ensure that the data is given the priority it
deserves and is processed promptly at the sending and receiving ends. This particular flag is used quite
frequently at the beginning and end of a data transfer, affecting the way the data is handled at
both ends.
When developers create new applications, they must make sure they follow specific guidelines
given in the RFCs to ensure that their applications work properly and manage the flow of data in
and out of the application layer of the OSI model flawlessly. When used, the Push bit makes sure
the data segment is handled correctly and given the appropriate priority at both ends of a virtual
connection.
When a host sends its data, it is temporarily queued in the TCP buffer, a special area in the
memory, until the segment has reached a certain size, and is then sent to the receiver. This design
guarantees that the data transfer is as efficient as possible, without wasting time and bandwidth
by creating multiple small segments, instead combining them into one or more larger ones.
When the segment arrives at the receiving end, it is placed in the TCP incoming buffer before it is
passed onto the application layer. The data queued in the incoming buffer will remain there until
the other segments arrive and, once this is complete, the data is passed to the application layer
that's waiting for it.
While this procedure works well in most cases, there are a lot of instances where this 'queueing'
of data is undesirable because any delay during queuing can cause problems to the waiting
application. A simple example would be a TCP stream, e.g. RealPlayer, where data must be sent
and processed (by the receiver) immediately to ensure a smooth stream without any cut-offs.
A final point to mention here is that the Push flag is usually set on the last segment of a file to
prevent buffer deadlocks. It is also seen when used to send HTTP or other types of requests
through a proxy - ensuring the request is handled appropriately and effectively.
4th Flag - Reset (RST) Flag
The reset flag is used when a segment arrives that is not intended for the current connection. In
other words, if you were to send a packet to a host in order to establish a connection, and there
was no such service waiting to answer at the remote host, then the host would automatically reject
your request and then send you a reply with the RST flag set. This indicates that the remote host
has reset the connection.
While this might prove very simple and logical, the truth is that this 'feature' is often used
by hackers to scan hosts for 'open' ports. All modern port scanners are able to detect
'open' or 'listening' ports thanks to the 'reset' function.
The method used to detect these ports is very simple: when attempting to scan a remote host, a
valid TCP segment is constructed with the SYN flag set (1) and sent to the target host. If there is
no service listening for incoming connections on the specific port, then the remote host will reply
with the RST (and ACK) flags set (1). If, on the other hand, there is a service listening on the port,
the remote host will reply with a TCP segment that has the SYN and ACK flags set (1). This is, of
course, part of the standard 3-way handshake we have covered.
Once the host scanning for open ports receives this segment, it will complete the 3-way
handshake and then terminate it using the FIN (see below) flag, and mark the specific port as
"active".
5th Flag - SYNchronisation Flag
The fifth flag contained in the TCP Flag options is perhaps the best-known flag used in TCP
communications. As you might be aware, the SYN flag is initially sent when establishing the
classical 3-way handshake between two hosts:
In the above diagram, Host A needs to download data from Host B using TCP as its transport
protocol. The protocol requires the 3-way handshake to take place so a virtual connection can be
established by both ends in order to exchange data.
During the 3-way handshake we are able to count a total of 2 SYN flags transmitted, one by each
host. As files are exchanged and new connections created, we will see more SYN flags being sent
and received.
6th Flag - FIN Flag
The final flag available is the FIN flag, standing for the word FINished. This flag is used to tear
down the virtual connections created using the previous flag (SYN); for this reason, the FIN flag
always appears in the last packets exchanged over a connection.
It is important to note that when a host sends a FIN flag to close a connection, it may continue to
receive data until the remote host has also closed the connection, although this occurs only under
certain circumstances. Once the connection has been torn down by both sides, the buffers set aside on
each end for the connection are released.
A normal teardown procedure is depicted below:
The above diagram represents an existing connection between Hosts A and B, where the two hosts
are exchanging data. Once the data transfer is complete, Host A sends a packet with the FIN,
ACK flags set (STEP 1).
With this packet, Host A is ACKnowledging the previous stream while at the same time initiating
the TCP close procedure to kill this connection. At this point, Host A's application will stop
receiving any data and will close the connection from this side.
In response to Host A's request to close the connection, Host B will send an ACKnowledgement
(STEP 2) back, and also notify its application that the connection is no longer available. Once this
is complete, the host (B) will send its own FIN, ACK flags (STEP 3) to close their part of the
connection.
If you're wondering why this procedure is required, then you may need to recall that TCP is a Full
Duplex connection, meaning that there are two directions of data flow. In our example this is the
connection flow from Host A to Host B and vice versa. In addition, it requires both hosts to close
the connection from their side, hence the reason behind the fact that both hosts must send a FIN
flag and the other host must ACKnowledge it.
Lastly, at Step 4, Host A will acknowledge the request Host B sent at STEP 3 and the closedown
procedure for both sides is now complete!
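You can actually watch this half-close behaviour from ordinary application code. Here's a minimal Python sketch (the peer address is just an example): calling shutdown() typically makes the operating system's TCP stack send our FIN, while we can still read whatever the other end has left to send:

    # Half-closing a TCP connection: shutdown(SHUT_WR) asks the OS to send a FIN,
    # but we can still receive data until the peer closes its side too.
    import socket

    sock = socket.create_connection(("example.com", 80))  # example peer
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    sock.shutdown(socket.SHUT_WR)       # our side sends FIN: "no more data from me"

    while True:                         # keep reading until the peer closes too
        chunk = sock.recv(4096)
        if not chunk:                   # empty read: the peer's FIN arrived
            break
        print(chunk.decode(errors="replace"), end="")

    sock.close()                        # both directions closed; buffers released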
Summary
This page dealt with the TCP Flag Options available to make life either more difficult, or easy,
depending on how you look at the picture :)
Perhaps the most important information given on this page that is beneficial to remember is the
TCP handshake procedure and the fact that TCP is a Full Duplex connection.
The following section will examine the TCP Window size, Checksum and Urgent Pointer fields,
all of which are relevant and very important. For this reason we strongly suggest you read
through these topics, rather than skip over them.
TCP Analysis - Section 5: Window Size, Checksum & Urgent Pointer
Introduction
Our fifth section contains some very interesting fields that are used by the TCP transport protocol.
We see how TCP helps control how much data is transferred per segment, makes sure there are no
errors in the segment and, lastly, flags our data as urgent to ensure it gets the priority it requires
when leaving the sender and arriving at the recipient.
So let's not waste any time and get right into our analysis!
The fifth section we are analysing here occupies a total of 6 bytes in the TCP header. These
values, like most of the fields in the protocol's header, remain constant in size, regardless of the
amount of application data.
This means that while the values they contain will change, the total amount of space each field
occupies will not.
The Window Field
The Window size is considered to be one of the most important fields within the TCP header. This
field is used by the receiver to indicate to the sender the amount of data that it is able to accept.
Regardless of who the sender or receiver is, the field will always exist and be used.
You will notice that the largest portion of this page is dedicated to the Window size field. The
reason behind this is that the field is of great importance: the Window size is the key to
efficient data transfers and flow control. It truly is amazing, once you start to realise it, how
important this field is and how many functions it drives.
The Window size field uses bytes as its unit. So, in our example above, the number 64,240 is
equal to 64,240 bytes, or roughly 62.7 kbytes (64,240/1024).
The 62.7 kbytes reflects the amount of data the receiver is able to accept, before transmitting to
the sender (the server) a new Window value. When the amount of data transmitted is equal to the
current Window value, the sender will expect a new Window value from the receiver, along with
an acknowledgement for the Window just received.
The above process is required in order to maintain flawless data transmission and high efficiency.
We should however note that the Window size field selected is not in any case just a random
value, but one calculated using special formulas like the one in our example below:
In this example, Host A is connected to a Web server via a 10 Mbit link. According to our
formula, to calculate the best Window value we need the following information: Bandwidth and
Delay. We are aware of the link's bandwidth: 10,000,000 bits (10 Mbits), and we can easily find
out the delay by issuing a 'ping' from Host A to the Web server which gives us an average Round
Trip Time response (RTT) of 10 milliseconds or 0.01 seconds.
We are then able to use this information to calculate the most efficient Window size (WS), which
is simply the bandwidth-delay product of the link:
WS = Bandwidth × RTT = 10,000,000 bits/s × 0.01 s = 100,000 bits = 12,500 bytes ≈ 12.2 kbytes
For 10 Mbps bandwidth and a round-trip delay of 0.01 sec, this gives a window size of about 12
kbytes, or roughly nine 1460-byte segments:
This should yield maximum throughput on a 10 Mbps LAN, even if the delay is as high as 10 ms
because most LANs have round-trip delay of less than a few milliseconds. When bandwidth is
lower, more delay can be tolerated for the same fixed window size, so a window size of 12 kb
works well at lower speeds, too.
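If you'd like to experiment with other link speeds and delays, here's the same bandwidth-delay calculation as a tiny Python sketch (the numbers are the ones from our example):

    # Window size as the bandwidth-delay product of a link.
    def ideal_window_bytes(bandwidth_bps, rtt_seconds):
        """Bytes 'in flight' needed to keep the pipe full: bandwidth x RTT, in bytes."""
        return bandwidth_bps * rtt_seconds / 8

    ws = ideal_window_bytes(10_000_000, 0.01)               # 10 Mbit link, 10 ms RTT
    print(f"{ws:,.0f} bytes = {ws / 1024:.1f} kbytes")      # 12,500 bytes = 12.2 kbytes
    print(f"about {ws / 1460:.1f} segments of 1460 bytes")  # about 8.6 segments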
Windowing - A Form of Flow Control
Apart from the Windowing concept being a key factor for efficient data transmission, it is also a
form of flow control, where a host (the receiver) is able to indicate to the other (the sender) how
much data it can accept and then wait for further instructions.
The fact is that in almost all cases, the default value of 62 kbytes is used as a Window size. In
addition, even though a Window size of 62 kbytes might have been selected by the receiver, the
link is constantly monitored for packet losses and delays during the data transfer by both hosts,
resulting in small increases or decreases of the original Window size in order to optimise the
bandwidth utilisation and data throughput.
This automatic self-correcting mechanism ensures that the two hosts will try to make use of the
pipe linking them in the best possible way, but do keep in mind that this is not a guarantee that
they will always succeed. This is generally the reason why a user is able to manually modify the
Window size until the best value is found and this, as we explained, depends greatly on the link
between the hosts and its delay.
In the case where the Window size falls to zero, the remote TCP can send no more data. It must
wait until buffer space becomes available and it receives a packet announcing a non-zero Window
size.
Lastly, for those who deal with Cisco routers, you might be interested to know that you are able
to configure the Window size on Cisco routers running the Cisco IOS v9 and greater. Routers
with versions 12.2(8)T and above support Window Scaling, a feature that's automatically enabled
for Window sizes above 65,535, with a maximum value of 1,073,741,823 bytes!
Window Scaling will be dealt with in greater depth on the following page.
On the Server Side: Larger Window Size = More Memory
Most network administrators who have worked with very busy web servers would recall the
massive amounts of memory they require. Since we now understand the concept of a 'Window
size', we are able to quickly analyse how it affects busy web servers that have thousands of clients
connecting to them and requesting data.
When a client connects to a web server, the server is required to set aside a small amount of
memory (RAM) for the client's session. The amount of memory required equals the window size
and, as we have seen, this value depends on the bandwidth and delay between the client and
server.
To give you an idea how the window size affects the server's requirements in memory, let's take
an example:
If you had a web server that served 10,000 clients on a local area network (LAN) running at 100
Mbits with a 0.1 second round trip delay and wanted maximum performance/efficiency for your
file transfers, then, according to our formula, you would need to allocate a window of
(100,000,000 × 0.1) / 8 = 1.25 MB for each client, or 12.5 Gigs of memory for all your clients!
This assumes, of course, that all 10,000 clients are connected to your web server simultaneously.
To support large file transfers in both directions (server to client and vice versa), your server
would need [(100,000,000 × 0.1) / 8] × 10,000 × 2 = 25 Gigs of memory just for the socket
buffers!
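The same arithmetic in a few lines of Python, should you want to try other client counts or link speeds (all values are from the example above):

    # Socket-buffer memory needed for many clients at the ideal window size.
    bandwidth_bps, rtt, clients = 100_000_000, 0.1, 10_000

    window_bytes = bandwidth_bps * rtt / 8          # 1.25 MB per client
    total = window_bytes * clients * 2              # both directions
    print(f"{total / 1e9:.1f} GB of socket-buffer memory")  # 25.0 GB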
So you can see how important it is for clients not to use oversized window values! In fact, the
current TCP standard requires that the receiver must be capable of accepting a full window's
worth of data at all times. If the receiver over-subscribes its buffer space, it may have to drop an
incoming packet. The sender will discover this packet loss and invoke TCP congestion control
mechanisms even though the network is not congested.
It is clear that receivers should not over-subscribe buffer space (window size) if they wish to
maintain high performance and avoid packet loss.
Checksum Field
The TCP Checksum field was created to ensure that the data contained in the TCP segment
reaches the correct destination and is error-free. For those network gurus who are wondering how
TCP can ensure the segment arrives at the correct destination (IP Address), you will be happy
to know that there is a little bit more information used than just the TCP header to calculate the
checksum: naturally, it also includes a portion of the IP Header.
This 'extra' piece of information is called the pseudo-header and we will shortly analyse its
contents but, for now, let's view a visual representation of the sections used to calculate the TCP
checksum:
The above diagram shows you the pseudo header, followed by the TCP header and the data this
segment contains. However, once again, be sure to remember that the pseudo header is included
in the Checksum calculation to ensure the segment has arrived at the correct receiver.
Let's now take a look at how the receiver is able to verify that it is the right recipient for the
segment it just received by analysing the pseudo header.
The Pseudo Header
The pseudo header is a combination of 5 different fields, used during the calculation of the TCP
checksum. It is important to note (and remember!) that the pseudo header is not transmitted to the
receiver, but is simply involved in the checksum calculation.
Here are the 5 fields as they are defined by the TCP RFC:
When the segment arrives at its destination and is processed through the OSI layers, once the
transport layer (Layer 4) is reached, the receiver will recreate the pseudo header in order to
recalculate the TCP header checksum and compare the result with the value stored in the segment
it has received.
If we assume the segment somehow managed to find its way to a wrong machine, when the
pseudo header is recreated, the wrong IP Address will be inserted into the Destination IP Address
field and the result will be an incorrect calculated checksum. Therefore, the receiver that wasn't
supposed to receive the segment will drop it as it's obviously not meant to be there.
Now you know how the checksum field helps guarantee that the correct host receives the packet
and that it arrives free of errors!
However, be sure to keep in mind that even though these mechanisms exist and work wonderfully
in theory, when it comes to the practical part, there is a possibility that packets with errors might
make their way through to the application!
It's quite amazing, once you sit down and think about it for a minute, that this process happens for
every single packet sent and received between hosts that use TCP or UDP (UDP calculates its
checksum in the same way) as their transport protocol!
Lastly, during the TCP header checksum calculation, the field is set to zero (0) as shown below.
This action is performed only during the checksum calculation on either end because it is
unknown at the time. Once the value is calculated, it is then inserted into the field, replacing the
initial zero (0) value.
This is also illustrated in the screen shot below:
In summarising the procedure followed when calculating the checksum, the following process
occurs, from the sender all the way to the receiver:
The sender prepares the segment that is to be sent to the receiving end. The checksum is set to
zero, that is, 4 zeros in hex or 16 zeros (0000 0000 0000 0000) in binary, because the checksum
is a 16-bit field.
The checksum is then calculated using the pseudo header, the TCP header and, lastly, the data to
be attached to the specific segment. The result is stored in the checksum field and the segment
is sent!
The segment arrives at the receiver and is processed. When it reaches the 4th OSI layer where the
TCP lives, the checksum field is set once again to zero. The receiver will then create its own
pseudo header for the segment received by entering its own IP Address in the Destination IP
Address field (as shown in the previous diagrams) and makes use of the TCP header and data to
calculate the new checksum.
If all is successfully accomplished, the result should be identical with the one the checksum field
segment had when it arrived. When this occurs, the packet is then further processed and the data
is handed to the application awaiting it.
If, however, the checksum is different, the packet is considered corrupt and is discarded
(dropped). No error notification is sent back; the sender will simply retransmit the segment once
it times out waiting for an acknowledgement.
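To make the procedure more concrete, here's a small Python sketch of the Internet (ones' complement) checksum computed over an IPv4 pseudo header plus a TCP segment. The addresses and the all-zero dummy segment are made-up values, purely for illustration:

    # Internet checksum (ones' complement sum of 16-bit words), as used by TCP.
    import struct, socket

    def internet_checksum(data: bytes) -> int:
        if len(data) % 2:                 # pad odd-length data with a zero byte
            data += b"\x00"
        total = sum(struct.unpack(f"!{len(data) // 2}H", data))
        while total >> 16:                # fold carries back into 16 bits
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    def tcp_checksum(src_ip, dst_ip, tcp_segment: bytes) -> int:
        # Pseudo header: source IP, destination IP, zero, protocol (6), TCP length.
        pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
                  + struct.pack("!BBH", 0, 6, len(tcp_segment)))
        # The checksum field inside tcp_segment must already be zeroed.
        return internet_checksum(pseudo + tcp_segment)

    segment = bytes(20)                   # a dummy all-zero 20-byte TCP header
    print(hex(tcp_checksum("192.168.0.1", "192.168.0.2", segment)))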
The Urgent Pointer
In section 4, we analysed the TCP Flag options and, amongst them, we found the Urgent Pointer
flag. The Urgent Pointer flag in the TCP Flags field allows us to mark a segment of data as
'urgent', while the Urgent Pointer field analysed here specifies where exactly the urgent data ends
within the segment.
To help you understand this, take a look at the following diagram:
You may also be interested to know that the Urgent Pointer can also be used when attacking
remote hosts. From the case studies we have analysed, we see that certain applications, which
supposedly guard your system from attack attempts, do not properly log attacks when the URG
flag is set. One particular application happens to be the famous BlackIce Server v2.9, so beware!
As a final conclusion to this section: if you find yourself capturing thousands of packets in order
to view one with the URG bit set, don't be disappointed if you are unable to catch any such
packets! We found it nearly impossible to get our workstation to generate such packets using
telnet, http, ftp and other protocols. The best option, and by far the easiest way, is to look for
packet crafting programs that allow you to create packets with different flags and options set.
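That said, one simple way to coax the operating system into setting the URG flag is TCP's 'out-of-band' data interface. A minimal Python sketch (the peer address is a placeholder, and the exact behaviour varies between TCP stacks):

    # Sending TCP 'urgent' (out-of-band) data: most stacks set the URG flag
    # and the Urgent Pointer for the byte passed with MSG_OOB.
    import socket

    sock = socket.create_connection(("192.168.1.10", 9000))  # placeholder peer
    sock.sendall(b"normal data")
    sock.send(b"!", socket.MSG_OOB)   # this single byte is marked 'urgent'
    sock.close()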
Summary
While this section was fairly extensive, we have covered some very important parts of the
TCP protocol. You now know what a TCP Window is and how you can calculate it depending on
your bandwidth and delay.
We also examined the Checksum field, which is used by the receiver to verify that the segment it
received is not corrupt and, at the same time, to make sure it didn't receive the segment
accidentally!
Lastly, we examined in great detail the usage of the URG flag and Urgent Pointer field, which are
used to define an incoming segment that contains urgent data.
After enjoying such a thorough analysis, we're sure you're ready for more! The next section deals
with the TCP Options located at the end of the TCP header.
TCP Analysis - Section 6: TCP Options
Introduction
The TCP Options are located at the end of the TCP Header, which is also why they are covered
last. Thanks to the TCP Options field we have been able to enhance the TCP protocol by
introducing new features, or 'add-ons' as some people like to call them, each defined by its
respective RFC.
As data communication continues to become more complex and less tolerant of errors and
latency, it was clear that these new features had to be incorporated into the TCP transport to help
overcome the problems created by the new links and speeds available.
To give you an example, Window Scaling, mentioned in the previous pages and elaborated here,
is possible using the TCP Options field because the original Window field is only 16 bits long,
allowing a maximum decimal value of 65,535. Clearly this is far too small when we want to
express 'Window size' values in the range of hundreds of thousands to a million, e.g. 400,000 or
950,000.
Before we delve into any details, let's take a look at the TCP Options field:
As you can see, the TCP Options field is the sixth section of the TCP Header analysis.
Located at the end of the header and right before the Data section, it allows us to make use of the
new enhancements recommended by the engineers who help design the protocols we use in data
communications today.
TCP Options
Most of the TCP Options we will be analysing are required to appear only during the initial SYN
and SYN/ACK phase of the 3-way-handshake TCP performs to establish a virtual link before
transferring any data. Other options, however, can be used at will, during the TCP session.
It is also important to note that the TCP Options occupy space at the end of the TCP header and
are always a whole number of bytes (8 bits) in length. In addition, the header as a whole must end
on a 32-bit boundary, so if the options in use don't add up to a multiple of 32 bits, padding (see
the 'nop' option below) is inserted to comply with the TCP RFC.
Here's a brief view of the TCP Options we are going to analyse:
 Maximum Segment Size (MSS)
 Window Scaling
 Selective Acknowledgements (SACK)
 Timestamps
 Nop
Let's now take a look at the exciting options available and explain the purpose of each one.
Maximum Segment Size (MSS)
The Maximum Segment Size is used to define the maximum segment that will be used during a
connection between two hosts. As such, you should only see this option used during the SYN and
SYN/ACK phase of the 3-way-handshake. The MSS TCP Option occupies 4 bytes (32 bits) in
total.
If you have previously come across the term "MTU" which stands for Maximum Transfer Unit,
you will be pleased to know that the MSS helps define the MTU used on the network.
If you're scratching your head because the MSS and MTU fields don't make any sense to you, or
they're not quite clear, don't worry, the following diagram will help you get the big picture:
You can see the Maximum Segment Size consists of the TCP Header and Data, while the
Maximum Transfer Unit includes the MSS plus the IP Header.
It would also benefit us to recognise the correct terminology that corresponds to each level of the
OSI Model: The TCP Header and Data is called a Segment (Layer 4), while the IP Header and the
Segment is called an IP Datagram (Layer 3).
Furthermore, regardless of the size the MTU will have, there is an additional 18 bytes overhead
placed by the Datalink layer. This overhead includes the Source and Destination MAC Address,
the Protocol type, followed by the Frame Check Sequence placed at the end of the frame.
This is also the reason why we can only have a maximum MTU of 1500 bytes. Since the
maximum size of an Ethernet II frame is 1518 bytes, subtracting 18 bytes (Datalink overhead)
leaves us with 1500 bytes to play with.
TCP usually computes the Maximum Segment Size (MSS) that results in IP Datagrams that
match the network MTU. In practice, this means the MSS will have such a value that if we add
the IP Header as well, the IP Datagram (IP Header+TCP Header+DATA) would be equal to the
network MTU.
If the MSS option is omitted by one or both ends of the connection, then the value of 536 bytes
will be used. The MSS value of 536 bytes is defined by RFC 1122 and is calculated by taking the
default value of an IP Datagram, 576 bytes, minus the standard length of the IP and TCP Header
(40 bytes), which gives us 536 bytes.
In general, it is very important to use the best possible MSS value for your network because your
network performance could be extremely poor if this value is too large or too small. To help you
understand why, let's look at a simple example:
If you wanted to transfer 1 byte of data through the network, you would need to create a datagram
with 40 bytes of overhead: 20 for the IP Header and 20 for the TCP Header. This means that
you're using only 1/41 of your available network bandwidth for data. The rest is nothing but
overhead!
On the other hand, if the MSS is very large, your IP Datagrams will also be very large, meaning
they will most probably fail to fit into one packet should the MTU be too small. They will
therefore need to be fragmented, roughly doubling the overhead.
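Here's that efficiency argument as a quick Python sketch (assuming plain 20-byte IPv4 and TCP headers with no options):

    # Fraction of each IP datagram that is actual data, for a given payload size.
    HEADER_OVERHEAD = 40          # 20-byte IP header + 20-byte TCP header

    def data_efficiency(payload_bytes):
        return payload_bytes / (payload_bytes + HEADER_OVERHEAD)

    print(f"{data_efficiency(1):.1%}")     # 2.4%  - one byte per segment
    print(f"{data_efficiency(536):.1%}")   # 93.1% - the RFC 1122 default MSS
    print(f"{data_efficiency(1460):.1%}")  # 97.3% - typical Ethernet MSS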
Window Scaling
We briefly mentioned Window Scaling in the previous section of the TCP analysis, though you
will soon discover that this topic is quite broad and requires a great deal of attention.
Having gained a sound understanding of what the Window size field is used for, you can think of
Window Scaling as, in essence, an extension to the Window size field. Because the largest
possible value in the Window size field is only 65,535 bytes (64 kb), it was clear that a larger
field was required in order to increase the value to a whopping 1 Gig! Thus, Window Scaling was
born.
The Window Scaling option can be a maximum of 30 bits in size, which includes the original 16
bit Window size field covered in the previous section. So that's 16 (original window field) + 14
(TCP Options 'Window Scaling') = 30 bits in total.
If you're wondering where on earth someone would use such an extremely large Window size,
think again. Window Scaling was created for high-latency, high-bandwidth WAN links where a
limited Window size can cause severe performance problems.
To consolidate all these technological terms and numbers, an example would prove beneficial:
The above example assumes we are using the maximum Window size of 64 kb and, because the
WAN link has very high latency, the packets take some time to arrive at their destination, that is,
Host B. Due to the high latency, Host A has stopped transmitting data since there are 64 kb of
data sent and they have not yet been acknowledged.
At Time = 4, Host B has received the data and sends the long-awaited acknowledgement
to Host A so it can continue to send data, but the acknowledgement will not arrive until
somewhere around Time = 6.
So, from Time = 1 up until Time = 6, Host A is sitting and waiting. You can imagine how poor
the performance of this transfer would be in this situation. If we were to transfer a 10 Mb file, it
would take hours!
Let's now consider the same example, using Window Scaling:
As you can see, with the use of Window Scaling the window size has increased to 256 kb! Since
the value is quite large, which translates to more data in transit, Host B has already received the
first few packets while Host A is still sending the first 256 kb window.
At Time = 2, Host B sends an Acknowledgement to Host A, which is still busy sending data.
Host A will receive the Acknowledgement before it finishes the 256 kb window and will
therefore continue sending data without pause, since it will soon receive another
Acknowledgement from Host B.
Clearly the difference that a large window size makes is evident, increasing the network
performance and minimising the idle time of the sending host.
The Window Scale option is defined in RFC 1072, which lets a system advertise 30-bit (16 from
the original window + 14 from the TCP Options) Window size values, with a maximum buffer
size of 1 GB. This option has been clarified and redefined in RFC 1323, which is the specification
that all implementations employ today.
Lastly, for those who deal with Cisco routers, it may benefit you to know that you are also able to
configure the Window size on Cisco routers running the Cisco IOS v9 and greater. Also, routers
with versions 12.2(8)T and above support Window Scaling, which is automatically enabled for
Window sizes above 65,535 bytes (64 kb), with a maximum value of 1,073,741,823 bytes (1
GByte)!
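For the curious, the option itself carries a shift count: the real window is the advertised 16-bit value multiplied by 2 to the power of that shift. Here's a tiny Python sketch of how one might pick a shift for a desired window (the cap of 14 comes from RFC 1323):

    # Window Scaling: the real window = advertised 16-bit window << shift_count.
    import math

    def scale_shift_for(window_bytes):
        """Smallest shift count (0-14) whose scaled window covers window_bytes."""
        if window_bytes <= 65535:
            return 0
        return min(14, math.ceil(math.log2(window_bytes / 65535)))

    print(scale_shift_for(250_000))        # 2  -> max window 65535 << 2 = 262,140 bytes
    print(scale_shift_for(1_073_741_823))  # 14 -> the ~1 GB maximum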
Selective Acknowledgments (SACK)
TCP has been designed to be a fairly robust protocol but, despite this, it still has several
disadvantages, one of which concerns Acknowledgements; this also happens to be the reason
Selective Acknowledgements were introduced with RFC 1072.
The problem with the good old plain Acknowledgements is that there is no mechanism for a
receiver to state: "I'm still waiting for bytes 20 through 25, but have received bytes 30 through
35". And if you're wondering whether this is possible, then the answer is 'yes', it is!
If segments arrive out of order and there is a hole in the receiver's queue, then, using the 'classical'
Acknowledgements supported by TCP, the receiver can only say "I've received everything up to
byte 20". The sender then needs to recognise that something has gone wrong and continue
sending from that point onwards (byte 20).
As you may have concluded, the above situation is totally unacceptable, so a more robust service
had to be created, hence Selective Acknowledgments!
Firstly, when a virtual connection is established using the classic 3-way-handshake, the hosts
must send a "Selective Acknowledgments Permitted" option in the TCP Options to indicate that
they are able to use SACKs. From this point onwards, the SACK option is sent whenever a
selective acknowledgment is required.
For example, if we have a Windows 98 client that is waiting for byte 4,268, but the SACK option
shows that the client has also received bytes 7,080 through 8,486, it is obvious that it is missing
bytes 4,268 through 7,079, so the server should only resend those 2,812 missing bytes, rather
than restarting the entire transfer at byte number 4,268.
Lastly, we should note that the SACK data in the TCP Options is carried as pairs of fields: a left
edge and a right edge for each block of received bytes. There are two fields per block because the
receiver must be able to specify the range of bytes it has received, just like in the example we
used. (Each edge is a full 32-bit sequence number, as defined in RFC 2018, which clarified the
original RFC 1072 option.)
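Here's that little bit of SACK arithmetic as a Python sketch, using the numbers from the example above (sequence numbers are simplified to plain byte offsets):

    # Given the next byte the receiver expects and a SACKed block it already
    # holds, compute the range the sender actually needs to retransmit.
    def missing_range(next_expected, sack_block):
        sack_start, sack_end = sack_block
        return next_expected, sack_start - 1   # inclusive byte range

    start, end = missing_range(4268, (7080, 8486))
    print(start, end, end - start + 1)  # 4268 7079 2812 bytes to resend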
Timestamps
Another aspect of TCP's flow-control and reliability services is the round-trip delivery times that
a virtual circuit is experiencing. The round-trip delivery time will accurately determine how long
TCP will wait before attempting to retransmit a segment that has not been acknowledged.
Because every network has unique latency characteristics, TCP has to understand these
characteristics in order to set accurate acknowledgment timer threshold values. LANs typically
have very low latency times, and as such TCP can use low values for the acknowledgment timers.
If a segment is not acknowledged quickly, a sender can retransmit the questionable data quickly,
thus minimizing any lost bandwidth and delay.
On the other hand, using a low threshold value on a WAN is sure to cause problems simply
because the acknowledgment timers will expire before the data ever reaches the destination.
Therefore, in order for TCP to accurately set the timer threshold value for a virtual circuit, it has
to measure the round-trip delivery times for various segments. Finally, it has to monitor
additional segments throughout the connection's lifetime to keep up with the changes in the
network. This is where the Timestamp option comes into the picture.
Similarly to the majority of the other TCP Options covered here, the Timestamp option must be
sent during the 3-way-handshake in order to enable its use during any subsequent segments.
The Timestamp option consists of a Timestamp Value field and a Timestamp Echo Reply field.
The echo reply field is set to zero by the sender and filled in by the receiver, which echoes the
sender's timestamp back so the round trip can be measured. Both timestamp fields are 4 bytes
long!
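To illustrate how such measurements feed the retransmission timer, here's a sketch of the classic smoothed-RTT calculation in the style of RFC 6298 (the gains are the standard ones; the RTT samples are invented):

    # Smoothed RTT and retransmission timeout, in the style of RFC 6298.
    ALPHA, BETA = 1 / 8, 1 / 4      # standard smoothing gains

    def update_rto(srtt, rttvar, sample):
        """Fold one new RTT sample (seconds) into the running estimates."""
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample
        return srtt, rttvar, srtt + 4 * rttvar   # (srtt, rttvar, rto)

    srtt, rttvar = 0.100, 0.025     # initial estimates: 100 ms RTT
    for sample in (0.110, 0.095, 0.300):         # invented RTT samples
        srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
        print(f"sample={sample:.3f}s srtt={srtt:.3f}s rto={rto:.3f}s")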
Nop
The nop TCP Option stands for "No Operation" and is used as padding to separate or align the
different options used within the TCP Option field. How the nop option is used depends on the
operating system; for example, if the MSS and SACK options are used, Windows XP will usually
place two nops between them, as indicated in the first picture on this page.
Lastly, we should note that the nop option occupies 1 byte. In our example at the beginning of the
page, it would occupy 2 bytes since it's used twice. You should also be aware that this field is
often checked by hackers when trying to determine the remote host's operating system (OS
fingerprinting).
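To tie the options together, here's a Python sketch that walks the kind/length encoding of a TCP options area (the raw bytes below are a made-up but valid sequence containing MSS, two nops and SACK-permitted):

    # Walk the TCP options area: each option is a 'kind' byte, and all kinds
    # except End-of-Options (0) and nop (1) are followed by a length byte.
    OPTION_NAMES = {0: "EOL", 1: "nop", 2: "MSS", 3: "Window Scale",
                    4: "SACK permitted", 5: "SACK", 8: "Timestamps"}

    def parse_options(data: bytes):
        i = 0
        while i < len(data):
            kind = data[i]
            if kind == 0:                       # End of Options list
                break
            if kind == 1:                       # nop: single padding byte
                i += 1
                continue
            length = data[i + 1]                # total length, incl. kind+length
            if length < 2:                      # malformed option; stop walking
                break
            yield OPTION_NAMES.get(kind, f"kind {kind}"), data[i + 2 : i + length]
            i += length

    # MSS=1460, nop, nop, SACK-permitted (an invented but valid byte sequence)
    options = bytes([2, 4, 0x05, 0xB4, 1, 1, 4, 2])
    for name, value in parse_options(options):
        print(name, value.hex())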
Summary
This page covered the TCP Options that have been introduced to the TCP protocol in its effort to
extend its reliability and performance. While these options are critical in some cases, most users,
and even many network administrators, are totally unaware of their existence. The information
provided here is essential to help administrators deal with odd local and WAN network problems
that can't be solved by rebooting a server or router :)
The final page to this topic is a summary covering the previous six pages of TCP, as there is little
to analyse in the data section of the TCP Segment. It is highly suggested you read it as a recap to
help you remember the material covered.
TCP Analysis - Section 7: Data & Quick Summary
Introduction
Finally, the last page of our incredible TCP Analysis. As most of you would expect, this section is
dedicated to the DATA, which is also the reason all the previous pages exist!
The Data
The following diagram may have become tiresome by now, however, it is displayed one final
time to note the data portion of the packet:
We assume you're familiar with the procedure followed when the above packet arrives at its
destination. However, a summary is given below to refresh our understanding and to avoid
confusion.
When the above packet arrives at the receiver, a decapsulation process is required in order to
remove each OSI layer's overhead and pass the Data portion to the application that's waiting for
it. As such, when the packet is received in full by the network card, it is given to the 2nd OSI
layer (Datalink) which, after performing a quick check on the packet for errors, will strip the
overhead associated with that layer, meaning the yellow blocks will be removed.
The remaining portion, that is, the IP header, TCP header and Data, now called an IP Datagram,
will be passed to the 3rd OSI layer (Network) where another check will be performed and if
found to be error free, the IP header will be stripped and the rest (now called a Segment) is passed
to the 4th OSI layer.
The TCP protocol (4th OSI layer) will accept the segment and perform its own error check on it.
Assuming it is found error free, the TCP header is stripped off and the remaining data is given to
the upper layers, eventually arriving at the application waiting for it.
Summary
Our in-depth analysis of the TCP protocol has reached its conclusion. After reading all these
pages, we are sure you have a much better understanding of the TCP protocol's purpose and
operation, and are able to really appreciate its functions.
We hope you have enjoyed our analysis and we're sure you will be back for more!
User Datagram Protocol - UDP
Some common protocols which use UDP are: DNS, TFTP, SNMP, DHCP and RIP.
When people refer to "TCP/IP" remember that they are talking about a suite of protocols, and not
just one (as most people think). TCP/IP is NOT one protocol. Please see the Protocols section for
more information.
The User Datagram Protocol (UDP) is defined by IETF RFC 768.
UDP - User Datagram Protocol
The second protocol used at the Transport layer is UDP. Application developers can use UDP in
place of TCP. UDP is the scaled-down economy model and is considered a thin protocol. Like a
thin person in a car, a thin protocol doesn't take up a lot of room - or in this case, much bandwidth
on a network.
UDP, as mentioned, doesn't offer all the bells and whistles of TCP, but it does a fabulous job of
transporting information that doesn't require reliable delivery, and it does so using far fewer
network resources.
Unreliable Transport
UDP is considered to be an unreliable transport protocol. When UDP sends segments over a
network, it just sends them and forgets about them. It doesn't follow through, check on them, or
even allow for an acknowledgment of safe arrival, in other words .... complete abandonment!
This does not mean that UDP is ineffective, only that it doesn't handle issues of reliability.
The picture below shows us the UDP header within a data packet. This is to show you the
different fields a UDP header contains:
Connectionless
For those who have read about TCP, you would know it is a connection-oriented protocol, but
UDP isn't. This is because UDP doesn't create a virtual circuit (that is, establish a connection
before data transfer), nor does it contact the destination before delivering information to it. No
3-way handshake or anything like that here!
Since UDP assumes that the application will use its own reliability method, it doesn't use any,
which obviously makes things transfer faster.
Less Overhead
The very low overhead, compared to TCP, is a result of the lack of windowing or
acknowledgments. This certainly speeds things up but you get an unreliable (in comparison to
TCP) service. There really isn't much more to write about UDP so I'll finish here.
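Well, almost: just to show how little ceremony UDP involves, here's a minimal Python sketch of a sender and receiver (the loopback address and port are arbitrary). Notice there's no connect or handshake step, and no guarantee of delivery:

    # Minimal UDP exchange: no handshake, no acknowledgements, fire and forget.
    import socket

    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 5005))          # arbitrary local port

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello via UDP", ("127.0.0.1", 5005))  # just send and forget

    data, addr = receiver.recvfrom(1024)        # works on loopback; on a real
    print(data, "from", addr)                   # network the datagram could be lost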
Domain Name System (DNS) Introduction
Introduction
DNS is a very well known protocol. It is used for resolving host names and domain names to IP
addresses. The fact is that when you type www.firewall.cx it is translated into an IP address via
special queries that take place from your PC, but I'll explain how that works later on.
Because there is a fair bit of material to cover for the DNS protocol, and I don't want to confuse
you with too much information on one page, I have broken it down into 5 sections, each covering
a specific part of the protocol.
People who want specific information on the DNS protocol can go straight to the section they
need, the rest of us who just want to learn it all can start reading in the order presented:
Section 1: The DNS Protocol. How and why the DNS protocol was born. Page contains a bit of
historical information and also compares DNS with the OSI Reference model, where you will see
the layers on which DNS works. Internet DNS hierarchy is also analysed here, giving you the
chance to understand how domains on the Internet are structured.
Section 2: The DNS Resolution Process. What really happens when a host requests a DNS
resolution. Full analysis of the whole resolution process using a real life example. Understand
Name Servers and the role they play in the DNS system.
Section 3: The DNS Query Message Format. This section, along with the next one, gives you the
DNS packet format in all its glory. Learn how DNS queries are generated and formatted. See,
learn and understand the various fields within the packets as you're taken through a full, detailed
analysis of the packet structure using the cool 3D diagrams.
Section 4: The DNS Response Message Format. This is the continuation of the section above,
dealing with the DNS response that's received. You will learn how the response packet is
generated, formatted and sent to the resolver. Again, you're taken through a full detailed analysis
of the packet structure using the cool 3D diagrams.
Section 5: The DNS Server (BIND). Based on BIND for Linux, this section is broken into a
further 6 pages:
 Section 5.1: Introduction to the DNS Server. Learn how a DNS server is set up on a Linux machine. Over 85% of DNS servers on the Internet run on Linux and Unix based systems, while Microsoft and Novell DNS servers follow the same structure. DNS Zones and Domains are also covered on this page; this is essential for understanding how DNS Servers work.
 Section 5.2: The db.DOMAIN file. Complete analysis of the zone data file for a Primary DNS server. See what it contains and understand how it's structured.
 Section 5.3: The db.ADDR file. Complete analysis of the reverse zone data file for a Primary DNS server. See what it contains and understand how it's structured.
 Section 5.4: Other common files. Analysing the rest of the files which are common to all DNS servers.
 Section 5.5: Slave DNS Server. Instructions on setting up a secondary DNS server.
 Section 5.6: DNS Caching. The key to an efficient DNS server. This is a must for any DNS Administrator. Learn how DNS caching helps improve performance and reduce traffic. Includes analysis of specific parameters within the DNS packet which help make DNS caching a reality, and find out how to avoid problems that come with domain redelegation or website transfers.
As you can see, there's plenty of stuff to cover. But don't despair, because it's all cool stuff! Grab
something to drink and let's dive into the DNS waters! You will be amazed at the stuff you'll find
:)
The DNS Protocol
Introduction
If you ever wondered where DNS came from, this is your chance to find out ! The quick
summary on DNS's history will also help you understand why DNS servers are run mostly on
Linux and Unix-type systems. We then get to see the layers of the OSI Model on which DNS
works and, towards the end of the page, you will find out how the Domains (and DNS servers)
are structured on the Internet to ensure uptime and effectiveness.
The History
DNS began in the early days when the Internet was only a small network created by the
Department of Defence for research purposes. Host names (simple computer names) of
computers were manually entered into a file (called HOSTS) which was located on a central
server. Each site/computer that needed to resolve host names had to download this file. But as the
number of hosts grew, so did the HOSTS file (Linux, Unix, Windows and NetWare still use such
files) until it was far too large for computers to download and it was generating great amounts of
traffic! So they thought... stuff this... let's find a better solution... and in 1983 the Domain Name
System was introduced.
The Protocol
The Domain Name System is a 'hierarchically distributed database', which is a fancy way of
saying that its layers are arranged in a definite order and that its data is distributed across a wide
range of machines (just like the roots of a tree branch out from the main root).
Most companies today have their own little DNS server to ensure the computers can find each
other without problems. If you're using Windows 2000 and Active Directory, then you surely are
using DNS for the name resolutions of your computers. Microsoft has created its own version of a
"DNS" server, called a WINS server, which stands for Windows Internet Name Service, but this
is old technology and uses protocols that are nowhere near as efficient as DNS, so it was natural
for Microsoft to move away from WINS and towards DNS, after all, the whole Internet works on
DNS :)
The DNS protocol works when your computer sends out a DNS query to a name server to resolve
a domain. For example, you type "www.firewall.cx" in your web browser, this triggers a DNS
request, which your computer sends to a DNS server in order to get the website's IP Address !
There is a detailed example on the pages to follow so I won't get into too much detail for the
moment.
The DNS protocol normally uses the UDP protocol as a means of transport because of its small
overhead in comparison to TCP; the less overhead a protocol has, the faster it is !
In the case where there are constant errors and the computer trying to request a DNS resolution
can't get an error free answer, or any answer at all, it will switch to TCP to ensure the data arrives
without errors.
This process, though, depends on the operating system you're using. Some operating systems
might not allow DNS to use the TCP protocol, thus limiting it to UDP only. It is rare that you will
get so many errors that you can't resolve any hostname or domain name to an IP Address.
The DNS protocol utilises Port 53 for its service. This means that a DNS server listens on Port 53
and expects any client wishing to use the service to use the same port. There are, however, cases
where you might need to use a different port, something possible depending on the operating
system and DNS server you are running.
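Just to whet your appetite for those pages, here's a rough Python sketch that hand-builds a minimal DNS query for an A record and fires it at a public resolver over UDP port 53 (8.8.8.8 is used here purely as an example resolver; don't worry about the individual field values yet, as they're all explained shortly):

    # Hand-roll a minimal DNS query for an A record and send it over UDP port 53.
    import socket, struct, random

    def build_query(name: str) -> bytes:
        header = struct.pack("!6H", random.randint(0, 0xFFFF),  # transaction ID
                             0x0100,      # flags: standard query, recursion desired
                             1, 0, 0, 0)  # 1 question, no answer/authority/extra
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
        question = qname + b"\x00" + struct.pack("!2H", 1, 1)  # QTYPE=A, QCLASS=IN
        return header + question

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(build_query("www.firewall.cx"), ("8.8.8.8", 53))  # example resolver
    response, _ = sock.recvfrom(512)
    print(response.hex())   # the raw DNS response; we'll dissect this format shortly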
In the following pages we'll be looking at the actual DNS packet format, where you'll be able to
see exactly the contents of a DNS query, so we won't analyse the packet structure in depth here.
Next we'll take a close look at how the Internet domains and DNS servers are structured to make
sure the model works flawlessly and efficiently !
The Internet Domain Name Server Hierarchy
This interesting section will help you understand how domain names on the Internet are
structured and where DNS servers fit into the picture. When you think about the millions of
domain names registered today, you'd probably think you have to be superhuman to manage
such a structure of DNS servers!
Well, that's not the case. The DNS structure has been designed in such a way that no DNS server
needs to know about all possible domains, but only those immediately above and below it.
The picture below shows part of the Internet DNS hierarchical structure:
Let's explain how it works :
Internic controls the "root" domain, which includes all the top level domains. These are marked in
a green oval for clarity. Within the green oval you have the ROOT DNS servers, which know all
about the authoritative DNS servers for the domains immediately below them, e.g. firewall.cx,
cisco.com, microsoft.com etc. These ROOT DNS servers can tell you which DNS server takes
care of firewall.cx, cisco.com, microsoft.com and the rest.
Each domain, including the ones we are talking about (cisco, firewall, microsoft), has what we
call a "Primary DNS" and a "Secondary DNS" server. The Primary DNS is the one that holds all
the information about its domain. The Secondary acts as a backup in case the Primary DNS fails.
The process by which a Primary DNS server sends its copy to the Secondary DNS server is called
a Zone Transfer and is covered in the DNS Database section.
Today there are hundreds of websites at which you are able to register your own domain and,
once you've done that, you have the power to manage it yourself. In the example above, Cisco
bought the "Cisco.com" domain and then created its resource records. Some examples of
resource records for the Cisco domain in our example are: support, www and routers. These will
be analysed in depth on the next pages.
So here comes the million dollar question :)
How do you create subdomains and www's (known as resource records)?
The answer is pretty simple:
You use a special DNS administration interface (usually web based - provided by the guys with
whom you registered your domain) that allows you to create, change and delete the subdomains,
www's or whatever resource record you can come up with. When you're making changes to the
DNS settings of your domain, you're actually changing the contents of specific files that are
located on that server.
These changes then slowly propagate to the authoritative DNS servers, which are responsible for
your domain area and then the whole Internet will contact these DNS servers when they need to
access any section of your domain.
For example, if you need to resolve ftp.firewall.cx, your computer will locate and contact the
DNS Server responsible for the .CX domains, which will let you know the DNS server that's in
charge of the Firewall.cx domain. The DNS server of Firewall.cx in turn will let your computer
know the IP Address of ftp.firewall.cx because it holds all the information for the firewall.cx
domain.
That completes our first section. It's not that hard after all!
DNS Resolution Process
Introduction
This section will help you understand how the DNS queries work on the Internet and your home
network. There are two ways to use the domain name system in order to resolve a host or domain
name to an IP Address and we're going to look at them here. There is also a detailed example
later on this page to help you understand it better.
Queries and Resolution
As mentioned in the introduction section, there are two ways for a client to use the domain name
system to get an answer.
One of these involves the client contacting the name servers one at a time (this is also called a
non-recursive, or iterative, query) until it finds the authoritative server that contains the
information it requires, while the other way is to ask the name server system to perform the
complete translation (this is also called a Recursive query), in which case the client will send the
query and get a response that contains the IP Address of the domain it's looking for.
It's really exciting to see how DNS queries work. While we analyse the packets that are
sent to and received from the DNS server, I'm going to show you how the client chooses the method
by which it wants its query to be resolved, so you will truly understand how these cool features
work ! The DNS Query/Response Message Format pages contain all this packet analysis
information, so let's continue and prepare for it !
Our Example DNS Resolution
We will now look at what happens when your workstation requests a domain to be resolved. The
example that follows will show you the whole procedure step by step, so make sure you take your
time to read it and understand it !
When someone wants to visit the Cisco website (www.cisco.com), they go to their web browser
and type "http://www.cisco.com" or just "www.cisco.com" and, after a few seconds, the website
is displayed. But what happens in the background after they type the address and hit enter is
pretty much unknown to most users. That's what we are going to find out now !
The picture below shows us what would happen in the above example: (for simplicity we are not
illustrating both Primary and Secondary DNS servers, only the Primary)
Explanation :
1. You open your web browser and enter www.cisco.com in the address field. At that point, the
computer doesn't know the IP address for www.cisco.com, so it sends a DNS query to your ISP's
DNS server (It's querying the ISP's DNS because this has been set through the dial-up properties;
if you're on a permanent connection then it's set through your network card's TCP/IP properties).
2. Your ISP's DNS server doesn't know the IP for www.cisco.com, so it will ask one of the
ROOT DNS servers.
3. The ROOT DNS server checks its database and finds that the Primary DNS for Cisco.com is
198.133.219.25. It replies to your ISP's server with that answer.
4. Your ISP's DNS server now knows where to contact Cisco's DNS server and find out if
www.cisco.com exists and what its IP is. Your ISP's DNS server sends a query to Cisco.com's
DNS server and asks for the IP address of www.cisco.com.
5. Cisco's DNS server checks its database and finds an entry for "www.cisco.com". This entry has
an IP address of 198.133.219.25. In other words, the webserver is running on the same physical
server as the DNS ! If it wasn't running on the same server, then it would have a different IP. (Just
a note, you can actually make it look like it's on the same physical server, but actually run the
web server on a different box. This is achieved by using some neat tricks like port forwarding)
6. Your ISP's DNS server now knows the IP address for www.cisco.com and sends the result to
your computer.
7. Your computer now knows who it needs to contact to get to the website. So it sends an http
request directly to Cisco's webserver and downloads the webpage.
I hope you didn't find it too hard to follow. Remember that this (recursive) query is the most common type.
The other type of query (non-recursive) follows the same procedure; the difference is that the
client does all the running around trying to find the authoritative DNS server for the desired
domain. I like to think of it as "self service" :)
DNS Query Message Format
Introduction
This section will deal with the analysis of the DNS packets. This will allow us to see the way
DNS messages are formatted and the options and variables they contain. To understand a
protocol, you must understand the information the protocol carries from one host to another.
Because the DNS message format can vary, depending on the query and the answer, I've broken
this analysis into two parts. Part 1 analyses the DNS format of a query, in other words, it shows
how the packet looks when we ask a DNS server to resolve a domain. Part 2 analyses the DNS
format of an answer, where the DNS server is responding to our query.
I find this method more informative and easier to understand than combining the analysis of
queries and answers.
DNS Analysis - Host Query
As mentioned in the previous sections of the DNS Protocol, a DNS query is generated when the
client needs to resolve a domain name into an IP Address. This could be the result of entering
"www.firewall.cx" in the url field of your web browser, or simply by launching a program that
uses the Internet and therefore generates DNS queries in order to successfully communicate with
the host or server it needs.
Now, I've also included a live example (using my packet analyser), so you can compare theory
with practice for a better understanding. After this we will have a look at the meaning of each
field in the packet, so let's check out what a packet containing a DNS query would look like on
our network:
This is the captured packet we are going to deal with. To generate this packet, I typed "ping
www.firewall.cx" from my linux prompt. The command generated this packet, which was put on
my network with the destination being a name server in Australia. Notice the destination port,
which is set to 53, the port on which DNS works, and the protocol used for the DNS Query,
which is UDP.
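If you'd like to generate such a packet yourself, here is a rough sketch in Python that hand-builds a minimal DNS query along the lines of RFC 1035 and sends it over UDP to port 53 (the transaction ID is arbitrary, and 139.130.4.4 is the Australian name server used in these captures):

import socket
import struct

def build_query(name, qtype=1, qclass=1):          # qtype 1 = A record
    # Header: ID 0x1234, flags 0x0100 (only Recursion Desired set),
    # 1 question, 0 answer / authority / additional records.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated with a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, qclass)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(build_query("www.firewall.cx"), ("139.130.4.4", 53))  # port 53 = DNS
reply, server = sock.recvfrom(512)
print("received", len(reply), "bytes from", server)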
Ethernet II (Check Ethernet Frames for more info.) is the most common type of frame found on
LANs, in fact it probably is the only type you will find on 85% of all networks if you're only
running TCP/IP and Windows or Unix-like machines. This particular one contains a DNS section,
which could be either a Query or a Response. We are assuming a Query here, which fits nicely into our
example.
We are going to take the DNS Section above and analyse its contents, which are already shown in
the picture above (Right hand side, labeled "Capture") taken from my packet analyser.
Here they are again in a cool 3D diagram:
From this whole packet, the DNS Query Section is the part we're interested in (analysed shortly),
the rest is more or less overhead that lets the server know a bit more about our query.
The analysis of each 3D block (field) is shown in the picture below on the left, so you can understand the
function of each field; the DNS Query Section captured by my wonderful packet sniffer is on the right:
All fields in the DNS Query section, except the DNS Name field (underlined in red in the picture
above), have set lengths. The DNS Name field has no set length because it varies depending on
the domain name's length, as we are going to see soon.
For example, a query for www.cisco.com will require a smaller DNS Name field than a query
for support.novell.com, simply because the second domain name is longer.
The DNS Name Field
To prove this I captured a few packets that show different lengths for the domain names I just
mentioned but, because the DNS section in a packet provides no length field, we need to look one
level above, which is the UDP header, in order to calculate the DNS section length. By
subtracting the UDP header length (always 8 bytes - check UDP page for more information) from
the bytes in the Length field, we are left with the length of the DNS section:
The two examples clearly show that the Length Field in the UDP header varies depending on the
domain we are trying to resolve. The UDP header is 8 bytes in both examples and all fields in the
DNS Section, except for the DNS Name field, are always 2 bytes.
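You can check the arithmetic yourself; the sketch below uses the same length-prefixed label encoding to print how many bytes each example name occupies in the DNS Name field:

def encoded_len(name):
    # One length byte per label, the label bytes themselves, plus the
    # terminating zero byte.
    return sum(1 + len(label) for label in name.split(".")) + 1

print(encoded_len("www.cisco.com"))       # 15 bytes
print(encoded_len("support.novell.com"))  # 20 bytes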
The Flags/Parameters Field
The Parameter Field (labeled Flags) is one of the most important fields in DNS because it is
responsible for letting the server or client know a lot of important information about the DNS
packet. For example, it contains information as to whether the DNS packet is a query or response
and, in the case of a query, whether it should be a recursive or non-recursive type. This is most
important because, as we've already seen, it determines how the query is handled by the server.
Let's have a closer look at the flags and explain the meaning of each one. I've marked the bit
numbers in black on the left hand side of each flag parameter so you can see which ones are
used in this query. The picture on the right hand side explains the various bits. You won't see
all 16 bits used in a query, as the rest are either reserved or used only during a response:
As you can see, only bits 1, 2-5, 7, 8 and 12 are used in this query. The rest will be a combination
of reserved bits and bits that are used only in responses. When you read the DNS Response
message format page, you will find a similar captured packet, which is a response to the above
query, and the rest of the bits are analysed there.
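For a rough illustration of how these flags are packed into the 16-bit field, the sketch below extracts the standard RFC 1035 flag bits from 0x0100, the value our captured query carried (only Recursion Desired set):

flags = 0x0100  # the Flags/Parameters value from our captured query

qr     = (flags >> 15) & 0x1  # 0 = query, 1 = response
opcode = (flags >> 11) & 0xF  # 0 = standard query
aa     = (flags >> 10) & 0x1  # authoritative answer (responses only)
tc     = (flags >> 9) & 0x1   # message was truncated
rd     = (flags >> 8) & 0x1   # recursion desired
ra     = (flags >> 7) & 0x1   # recursion available (responses only)
rcode  = flags & 0xF          # response code (responses only)

print(qr, opcode, rd)  # 0 0 1 -> a standard query asking for recursion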
And that just about does it for the DNS Query message format page. Next up is the DNS
Response message format page which I'm sure you will find just as interesting!
DNS Response Message Format
Introduction
The previous page dealt with the DNS Query message formats. We analysed them in great detail
and showed how various options are selected by the host using the Flags/Parameters field.
On this page we will see and analyse the responses we get from the generated queries. These
responses, in the case of a recursive query, come directly from the DNS server to which we sent
the query and, in the case of a non-recursive query, will come from the last DNS server the client
contacts in order to get the required information.
Lastly, keep in mind that this page is the continuation of the previous page, so it's important to
understand the previous material ! If you have any doubts, read the previous section again.
Now that we have all that out of the way ....let's grab a few DNS responses and get our hands
dirty :)
DNS Analysis - Server Response
Here is the response (highlighted) to the previous DNS query sent to an Australian DNS server
(139.130.4.4), where I asked for the resolution of www.firewall.cx:
Something worth paying attention to is the time this query took to come back to my Linux file
server. The time taken, from the moment the packet was sent from the Linux file server, until it
received the answer, was only 0.991 seconds !
During this short period of time the packet travelled from Greece to Australia, reached the DNS
server, which sent its queries to other DNS servers until it found the answer and then generated a
DNS response that was sent back to Greece where my home network is !
There are a lot of factors that contribute to this fairly fast response: the transport protocol UDP,
which does not require any 3-way handshake; the load of the DNS server to which I sent the
query; the load of the DNS servers it then had to ask; the speed at which all these servers and I
are connected to the Internet; and the general load on the routers my packet had to travel through
in order to reach its various destinations !
As you can clearly see, there is a lot happening for just one DNS query and response. Try to
consider what happens when you have 20,000,000 DNS queries happening at once on the
Internet and you get a good idea of how well this protocol and the underlying technology have
been designed !
Following is the Ethernet II packet that runs on the local network. The structure is the same
regardless of whether it's a DNS Query or Response; only the size varies:
Now, to make the analysis of the DNS Section easier I have also included the DNS Query (left
hand side) and DNS Response (right hand side). This allows you to compare what we sent and
what we received :
By comparing the two packets, you can see that there are fields in the DNS Response packet
(marked with green arrows) that didn't exist in the Query. Let's see what each field means
and analyse them as we did on the previous page.
The DNS Section in a response packet is considerably larger and more complex than that of a
query. For this reason we are going to analyse it in parts rather than all together. The query had
only one section that required in-depth analysis, whereas the response has three new ones, since
its first section is simply the original query sent.
Here is the DNS Section of a DNS response in 3D:
You can clearly see that everything after the light green 3D block labeled "DNS Query Section"
is new. We are going to focus on these 3 new blocks, which are part of the DNS Response
Section, as the rest has been covered in the previous page.
DNS Response Section
The analysis of this section won't be too difficult because the format that is followed in each 3D
block of our DNS Response Section is identical. For this reason, I have not analysed all 3 3D
blocks, but only a few to help you get the idea.
The diagram below shows you the contents of the 3 3D blocks (sections) we are looking at:
Answers Section, Authoritative Name Servers Section and the Additional Records Sections:
What we need to understand is that each one of these three sections has identical fields.
Even though the information they contain might seem a bit different, the fields are exactly the
same and we will see this shortly.
In the picture above, I have only expanded the first part of the Answer section which is
underlined in green so you can compare the fields with the ones contained in the left hand picture.
This next picture shows you the expanded version from the first part of the Answers and
Authoritative sections. I have already marked and labeled the fields to prove to you that they are
all identical and vary only in the information they contain:
If you look carefully you will notice that the Resource Data field is presented first, whereas,
according to the analysis of the sections in the picture above (left side), you would expect it last.
The truth is that it is last; it's presented first just because my packet sniffer likes to make the
data more readable and less confusing.
This is also the reason the first line of each part in each section is used to give you a quick
summary of the information captured.
For example, looking at line 1, part 1 in the Answers Section (underlined in green), you get a
summary of what's to follow: www.firewall.cx, class INET, CNAME firewall.cx.
This proves that all fields in all of these 3 sections contained in the DNS Response Section are
identical, but contain different values/data.
You might also wonder why there are 2 parts in each section. Could there be more or fewer parts,
depending on the domain name, or are there always 2 parts in each section?
The answer is simple and logical: there are as many parts as needed, depending always on the
domain setup. For example, if I had more than two name servers for the Firewall.cx domain, you
would see more than two parts in the Authoritative nameserver section and the other sections.
Our example has only 2 parts per section whereas the one we see below has a lot more :
This DNS Response Section is based on a query generated for the IBM.COM domain:
As you can see, our query for IBM.COM gave us a response which has 4 parts per section !
Again, each part in every section has identical fields, but different data/values.
You might have noticed a pattern here as well: in every DNS Response you will find the same
number of parts per section.
For example, the picture on the left shows us 4 parts in each of the Answers, Authoritative and
Additional records sections, and this is no coincidence.
The reason the 3 sections (Answers, Authoritative and Additional records) match up like this is
the Type field, and I will explain why.
The Type Field
The Type field determines the type or part of information we require about a domain. To give you
the simplest example, when we have Type=A, we are given the IP Address of the domain or
host (look at Answers section above), whereas a Type=NS means we are given the Authoritative
Name Servers that are responsible for the domain (look at Authoritative Name Servers section
above).
Looking at the picture below, which is from our first example (query for firewall.cx) we can see
exactly how the Type field is responsible for the data we receive about a domain:
As you can see, the Type field in the first part of the Authoritative Name Servers section is set to
NS, which means this part contains information about the Authoritative name servers of the
queried domain.
Going to the first part of the Additional records, we can see that the Type field here is set to A,
which means the data contained in this part is an IP Address for a particular host.
So where is the logic to all this ?
Well, if I told you which servers are authoritative for a domain (Authoritative Name Server
Section), it would be useless if I answered you without giving you their IP Addresses (Additional
Records Section).
This is the reason in this example we have been told about the Name Servers for the firewall.cx
domain (Authoritative Name Server Section), but also given their IP Address (Additional Records
Section).
The same rule and logic explains why there are 2 parts for all three sections of this example.
Let's have a look at the different values the Type field can have; this will also give you an insight
into the information we can request and receive about any domain:
Type - Meaning - Contents
A - Host Address - 32-bit IP Address of host or domain
CNAME - Canonical Name (Alias) - Canonical domain name for an alias, e.g. www
HINFO - CPU & OS - Name of CPU and Operating System
MINFO - Mailbox - Info about a mailbox or mail list
MX - Mail Exchange - 16-bit preference and name of the host that acts as a mail exchange server for a domain, e.g. mail.firewall.cx
NS - Name Server - Authoritative name server for the domain
PTR - Pointer - Symbolic link for a domain, e.g. net.firewall.cx points to www.firewall.cx
SOA - Start Of Authority - Multiple fields that specify which parts of the naming hierarchy a server implements
TXT - Arbitrary Text - Uninterpreted string of ASCII text
The above values that the Type field can take are contained within the DNS database, which is
covered next.
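On the wire these types travel as numeric codes rather than names; here's a minimal sketch (codes from RFC 1035) of the lookup a parser performs when it reads the Type field:

# RFC 1035 numeric codes for the record types listed above.
RECORD_TYPES = {
    1: "A", 2: "NS", 5: "CNAME", 6: "SOA", 12: "PTR",
    13: "HINFO", 14: "MINFO", 15: "MX", 16: "TXT",
}

def type_name(code):
    # Translate the numeric Type field into its familiar name.
    return RECORD_TYPES.get(code, "UNKNOWN(%d)" % code)

print(type_name(1))   # A
print(type_name(15))  # MX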
Our discussion on the DNS Response message format is now complete !
Introduction To The DNS Database (BIND)
Introduction
This page is dedicated to the database that drives DNS servers all over the world. Because over
80% of the DNS servers are based on Unix, Linux and the BIND (Berkeley Internet Name
Domain) program, our example will be based on a DNS server running the latest version of Bind
at the time of writing, which is Bind v 9.2.1.
Before we start though, we're going to talk about the "Zone" concept and its relation to the
"Domain" concept. This will help you understand how the database is structured and set up.
Zones and Domains
The programs that store information about the domain name space are called name servers, as you
probably already know. Name Servers generally have complete information about some part of
the domain name space (a zone), which they load from a file. The name server is then said to
have authority for that zone.
The term zone is not one that you come across every day while you're surfing the Internet. We
tend to think that the "domain" concept is all there is when it comes to DNS, which makes life
easy for us, but when dealing with DNS servers that hold data for our domains (name servers),
we need to introduce the term zone, since it is essential to understanding the setup of a
DNS server.
The difference between a zone and a domain is important, but subtle. The best way to understand
the difference is by using a good example, which is coming up next.
The COM domain is divided into many zones, including the hp.com, sun.com and it.com zones. At the
top of the domain, there is also a com zone.
The diagram below shows you how a zone fits within a domain:
The trick to understanding how it works is to remember that a zone exists "inside" a domain.
Name servers load zone files, not domains. Zone files contain information about the portion of a
domain for which they are responsible. This could be the whole domain (sun.com, it.com) or
simply a portion of it (hp.com + pr.hp.com).
In our example, the hp.com domain has two subdomains, support.hp.com and pr.hp.com. The first
one, support.hp.com is controlled by its own name servers as it has its own zone, called the
support.hp.com zone. The second one though, pr.hp.com is controlled by the same name server
that takes care of the hp.com zone.
The hp.com zone has very little information about the support.hp.com zone; it simply knows it's
right below. Anyone requiring more information on support.hp.com will be directed to
contact the authoritative name servers for that subdomain, which are the name servers for that
zone.
So you see that even though support.hp.com is a subdomain just like pr.hp.com, it is not set up
and controlled the same way as pr.hp.com.
On the other hand, the Sun.com domain has one zone (the sun.com zone) that contains and controls
the whole domain. This zone is loaded by the authoritative name servers.
BIND ? Never heard of it !
As I mentioned in the beginning of this page, BIND stands for Berkeley Internet Name Domain.
Keeping things simple, it's a program you download (www.bind.org) and install on your Unix or
Linux server to give it the ability to become a DNS server for your private (lan) or public
(Internet) network.
The majority of DNS servers are based on BIND as it's a proven and reliable DNS server. The
download is approximately 4.8 MBytes. I'm not going to explain how to unzip, untar and compile
BIND since it's outside the scope of this page. If you follow the instructions provided you
shouldn't have any problems.
Setting up your Zone Data
No matter what Linux distribution you have, the file structure is pretty much the same. I have
BIND installed on my Linux server, which runs Slackware v8 with kernel 2.4.19. By following
the installation procedure found in the documentation provided with BIND, you will have the
server installed within 15 min at most.
Once the installation of BIND is complete you need to start creating your zone data files.
Remember, these are the files the DNS server will load in order to understand how your domain
is setup and the various hosts within it.
A DNS server has multiple files that contain information about the domain setup. Of these
files, one will map all host names to IP Addresses and others will map the IP Addresses back to
hostnames. The name-to-IP Address lookup is sometimes called forward mapping and the IP
Address-to-name lookup reverse mapping. Each network will have its own file for reverse mapping.
As a convention in this section, a file that maps hostnames to IP Addresses will be called
db.DOMAIN, where DOMAIN is the name of your domain, e.g. db.space.net (db is short for
DataBase). The files mapping IP Addresses to hostnames are called db.ADDR, where ADDR is the
network number without trailing zeros or the specification of a netmask, e.g. db.192.168.0 for the
192.168.0.0 network.
The collection of our db.DOMAIN and db.ADDR files are our Zone Data files. There are a few
other zone data files, some of which are created during the installation of BIND: named.ca,
localhost.zone and named.local.
Named.ca contains information about the root servers on the Internet, should your DNS server
need to contact one of them. Localhost.zone and Named.local are there to cover the loopback
network. The loopback address is a special address hosts use to direct traffic to themselves. This
is usually IP Address 127.0.0.1, which belongs to the 127.0.0.0/8 network.
These files must be present in each DNS server and are the same for every DNS server.
Quick Summary of files so far..
Let's have a quick look at the files we have covered so far to make sure we don't lose track:
1) The following files must be created by you and will contain the data for our zone:
- db.DOMAIN, e.g. db.space.net - Host to IP Address mapping
- db.ADDR, e.g. db.192.168.0 - IP Address to Host mapping
2) The following files are usually created by the BIND installation:
- named.ca - Contains the ROOT DNS servers
- named.local & localhost.zone - Special files so the server can direct traffic to itself.
You should also be aware that the file names can change; there is no standard for names, it's just
very convenient and tidy to keep some type of convention.
To tie all the zone data files together a name server needs a configuration file. BIND version 8
and above calls it named.conf and it can be found in your /etc dir once you install the BIND
package. Named.conf simply tells the name server where your zone files are located and we will
be analysing this file later on.
Most entries in the zone data files are called DNS resource records. Since DNS lookups are case
insensitive, you can enter names in your zone data files in uppercase, lowercase or mixed case. I
tend to use lowercase.
Resource records must start in the first column of a line. The DNS RFCs have samples that
present the order in which one should enter the resource records. Some people choose to follow
this order, while others don't. You are not required to follow this order, but I do :)
Here is the order of resource records in the zone data file:
SOA record - Indicates authority for this zone.
NS record - Lists a name server for this zone
MX record - Indicates the mail exchange server for the domain
A record - Name to IP Address mapping (gives the IP Address for a host)
CNAME record - Canonical name (used for aliases)
PTR record - Address to name mapping (used in db.ADDR)
The next page deals with the construction of our first zone data file, db.space.net for the space.net
domain.
The db.space.net Zone Data File
Introduction
It's time to start creating our zone files. We'll follow the standard format, which is given in the
DNS RFCs, in order to keep everything neat and less confusing.
First step is to decide on the domain we're using and I've already chosen space.net. This means
that the first zone file will be db.space.net. Note that this file is to be placed on the Master DNS
server for our domain. We will progressively build our database by populating it step by step and
explaining each step we take. At the end of the step by step example, we'll grab each step's data
and put it all together so we can see how the final version of our file will look. This, I believe, is
the best method of explaining how to create a zone file without confusing the hell out of everyone
!
Constructing db.space.net
Note: The actual entries for our file are marked with bold green characters using italic format.
Anything else, such as plain green coloured characters, is used to help you identify specific
commands I'm talking about that have been used in the zone file we are creating. Also, keep in
mind we are setting up a primary DNS server. For a simple DNS caching or secondary name
server, the setup is a lot simpler and covered on other pages.
The first entry for our file is the Default TTL - Time To Live. This is defined using the $TTL
control statement. $TTL specifies the time to live for all records in the file that follow the
statement and don't have an explicit TTL. We are going to set ours to 3 hours.
$TTL 3h
Next up is the SOA Record. The SOA (Start Of Authority) resource record indicates that this
name server is the best source of information for the data within this zone (this record is required
in each db.DOMAIN and db.ADDR file), which is the same as saying this name server is
Authoritative for this zone. There can be only one SOA record in every data zone file
(db.DOMAIN).
space.net. IN SOA voyager.space.net. admin.voyager.space.net. (
1 ; Serial Number
3h ; Refresh after 3 hours
1h ; Retry after 1 hour
1w ; Expire after 1 week
1h ) ; Negative caching TTL of 1 hour
Let's explain the meaning of this:
space.net. is the domain name and must always be stated in the first column of our line. Be sure
you include the trailing dot "." after the domain name; I'll explain later on why this is needed.
The IN stands for Internet. This is one class of data and while other classes exist, you won't see
them at all because they are not used :)
The SOA is an important resource record. What follows is the actual primary name server for
space.net. In our example, this is the server named "voyager" and its Fully Qualified Domain
Name (FQDN) is voyager.space.net. Notice the trailing "." is present here as well.
Next up is the entry admin.voyager.space.net., which is nothing but the email of the person
responsible for this domain. Take the dot "." after the admin entry and replace it with "@"
and you have a valid email address: admin@voyager.space.net. Most times you will see root,
postmaster or hostmaster where I have placed "admin".
The "(" parentheses allow the SOA record to span more than one line, while in most cases the
fields that follow are used by the secondary name servers and any other name server requesting
information about the domain.
The serial number "1 ; Serial Number" entry is used by the secondary name server to keep track
of changes that might have occurred in the master's zone file. When the secondary name server
contacts the primary name server, it will check to see if this value is the same. If the secondary's
serial number is lower than the primary's, then its data is out of date; when they are equal, the
data is up to date. This means that whenever you make any modifications to the primary's zone file,
you must increment the serial number by at least one.
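The secondary's check boils down to a simple comparison; here's a toy sketch of the idea (real servers use RFC 1982 serial-space arithmetic to cope with wrap-around, which this deliberately ignores):

def zone_is_stale(secondary_serial, primary_serial):
    # Toy comparison; real servers use RFC 1982 serial-space arithmetic.
    return secondary_serial < primary_serial

print(zone_is_stale(1, 2))  # True  -> request a zone transfer
print(zone_is_stale(2, 2))  # False -> data is up to date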
The refresh "3h ; Refresh after 3 hours" tells the secondary name server how often to check that
the data for this zone is up to date.
If the secondary name server tries to contact the primary and fails, the retry "1h ; Retry after 1
hour" value is used to tell the secondary name server how long to wait before it tries to contact the
primary again.
If the secondary name server fails to contact the primary for longer than the time specified in the
fourth entry "1w ; Expire after 1 week", then the zone data on the secondary name server is
considered too old and will expire.
The last line "1h ) ; Negative caching TTL of 1 hour" is how long a name server will cache
negative responses about the zone. These negative responses say that a particular domain, or type
of data sought for a particular domain name, doesn't exist. Notice the SOA section finishes with
the ")" parenthesis.
Next up are the name server (NS) records:
; Name Servers defined here
space.net. IN NS voyager.space.net.
space.net. IN NS gateway.space.net.
These entries define the two name servers (voyager and gateway) for our domain space.net. These
entries will also be in the db.ADDR file for this domain, as we will see later on.
It's time to enter our MX records. These records define the mail exchange servers for our domain,
and this is how any client, host or email server is able to find a domain's email server:
; Mail Exchange servers defined here
space.net. IN MX 10 voyager.space.net.
space.net. IN MX 20 gateway.space.net.
Let's explain what exactly these entries mean. The first line specifies that voyager.space.net is a
mail exchanger for space.net, just as the second line (...IN MX 20 gateway...) specifies that
gateway.space.net is also a mail exchanger for the domain. The MX record is what indicates that
the following hosts are mail exchangers and the numbers right after the MX record (10, and 20)
indicate the priority level.
These entries were introduced to prevent mail loops. When another email server (unlikely for a
private domain like mine, but the same rule applies on the Internet) wants to send mail to
space.net, it will first try to contact the mail exchanger with the smallest number, which in our
case is voyager.space.net. The smaller the number, the higher the priority when there is more than
one mail server.
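In effect, a sending mail server sorts the MX list by preference before trying each host; a minimal sketch of that ordering:

# Hypothetical MX list as received from the DNS, as (preference, host) pairs.
mx_records = [(20, "gateway.space.net"), (10, "voyager.space.net")]

# The lowest preference number is tried first.
for preference, host in sorted(mx_records):
    print("try", host, "with preference", preference)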
In our example, if we replaced:
space.net. IN MX 10 voyager.space.net.
space.net. IN MX 20 gateway.space.net.
with
space.net. IN MX 50 voyager.space.net.
space.net. IN MX 100 gateway.space.net.
the result, in terms of priority, would be the same. Let's now have a look at the next part of our zone
file: Host IP Addresses and Alias records:
; Host addresses defined here
localhost.space.net. IN A 127.0.0.1
voyager.space.net. IN A 192.168.0.15
enterprise.space.net. IN A 192.168.0.5
gateway.space.net. IN A 192.168.0.10
admin.space.net. IN A 192.168.0.1
; Aliases
www.space.net. IN CNAME voyager.space.net.
Most fields in this section are easy to understand. We start by defining our localhost (local
loopback) "localhost.space.net. IN A 127.0.0.1" and continue with the servers I have on my home
network; these include voyager, enterprise, gateway and admin (my workstation). The "A" record
stands for Address. So "voyager.space.net. IN A 192.168.0.15" translates to a host called
voyager, located in the space.net domain, with an INternet (IN) Address of 192.168.0.15.
The second block has the aliases table, where we created a Canonical Name (CNAME) record. A
CNAME record simply maps an alias to its canonical name; in our example, www is the alias and
voyager.space.net is the canonical name. When a name server looks up a name and finds
CNAME records, it replaces the name (alias - www) with its canonical name (voyager.space.net)
and looks up the canonical name (voyager.space.net). For example, when a name server looks up
www.space.net, it will replace the www with voyager and look up the IP Address for
voyager.space.net. This also explains the meaning of "www" in our web browser: it's nothing
more than an alias which, ultimately, is replaced with the canonical name defined by the CNAME record.
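Here's a toy sketch of that substitution, with hypothetical in-memory tables standing in for the zone's CNAME and A records:

# Hypothetical in-memory stand-ins for the zone's CNAME and A records.
cnames = {"www.space.net": "voyager.space.net"}
a_records = {"voyager.space.net": "192.168.0.15"}

def lookup(name):
    # Replace any alias with its canonical name, then fetch the A record.
    while name in cnames:
        name = cnames[name]
    return a_records.get(name)

print(lookup("www.space.net"))  # 192.168.0.15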
That completes a simple domain setup! We have now created a working zone file that looks like
this:
$TTL 3h
space.net. IN SOA voyager.space.net. admin.voyager.space.net. (
1 ; Serial Number
3h ; Refresh after 3 hours
1h ; Retry after 1 hour
1w ; Expire after 1 week
1h ) ; Negative caching TTL of 1 hour
; Name Servers defined here
space.net. IN NS voyager.space.net.
space.net. IN NS gateway.space.net.
; Mail Exchange servers defined here
space.net. IN MX 10 voyager.space.net.
space.net. IN MX 20 gateway.space.net.
; Host Addresses Defined Here
localhost.space.net. IN A 127.0.0.1
voyager.space.net. IN A 192.168.0.15
enterprise.space.net. IN A 192.168.0.5
gateway.space.net. IN A 192.168.0.10
admin.space.net. IN A 192.168.0.1
; Aliases
www.space.net. IN CNAME voyager.space.net.
A quick glance at this file tells you a lot about my home domain space.net! This is probably the
best time to explain why we should not omit the trailing dot at the end of the domain name:
If we took gateway.space.net as an example and omitted the dot "." at the end, that particular
entry would turn out like this: gateway.space.net.space.net, not what you intended at all! As you
see, the space.net is appended to the end of our Fully Qualified Domain Name for the particular
resource record (gateway). This is why it's so important to never forget that extra dot "." at the
end!
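BIND's behaviour here can be sketched in a few lines: any name that doesn't end in a dot is treated as relative and has the zone's origin appended:

def qualify(name, origin="space.net."):
    # Names ending in "." are fully qualified and left alone; anything
    # else is treated as relative and gets the origin appended.
    if name.endswith("."):
        return name
    return name + "." + origin

print(qualify("gateway.space.net."))  # gateway.space.net.
print(qualify("gateway.space.net"))   # gateway.space.net.space.net. - oops!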
Next is our db.ADDR file, which will take the name db.192.168.0.
The db.192.168.0 Zone Data File
Introduction
The db.192.168.0 zone data file is the second file we are creating for our DNS server. As outlined
in the DNS-BIND introduction, this file's purpose is to provide the IP Address-to-name
mappings. Note that this file is to be placed on the Master DNS server for our domain.
Constructing db.192.168.0
As we start to construct the file, you will notice many similarities with our previous file. Since
most resource records have already been covered and explained in detail, I will not repeat their
analysis again on this page.
The first line is our $TTL control statement:
$TTL 3h
Next up is the Start Of Authority resource record:
0.168.192.in-addr.arpa. IN SOA voyager.space.net. admin.space.net. (
1 ; Serial
3h ; Refresh after 3 hours
1h ; Retry after 1 hour
1w ; Expire after one week
1h ) ; Negative Caching TTL of 1 hour
As you can see, everything above, except the first column of the first line, is identical to the
db.space.net file. The "0.168.192.in-addr.arpa" entry is my network in reverse order. You simply
take your network address, reverse it, and add an ".in-addr.arpa." at the end.
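The reversal is mechanical enough to capture in one small helper (the dotted network string is an assumption of this toy sketch):

def reverse_zone(network):
    # "192.168.0" -> "0.168.192.in-addr.arpa."
    octets = network.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa."

print(reverse_zone("192.168.0"))  # 0.168.192.in-addr.arpa.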
Name server resource records are next. The syntax is nearly the same as in the previous file;
remember that we don't enter the full reversed IP Address for the name servers, but only the first 3
octets, which represent the network they belong to:
; Name Servers defined here
0.168.192.in-addr.arpa. IN NS voyager.space.net.
0.168.192.in-addr.arpa. IN NS gateway.space.net.
If you find it hard to remember this method, simply remind yourself that we replace the domain
name (previous file) with the network address reversed, followed by the ".in-addr.arpa.".
The PTR resource record follows since we use this to create our IP Address-to-name mappings:
; IP Address to Name mappings
1.0.168.192.in-addr.arpa. IN PTR admin.space.net.
5.0.168.192.in-addr.arpa. IN PTR enterprise.space.net.
10.0.168.192.in-addr.arpa. IN PTR gateway.space.net.
15.0.168.192.in-addr.arpa. IN PTR voyager.space.net.
And now we'll look at the whole file with all its entries:
$TTL 3h
0.168.192.in-addr.arpa. IN SOA voyager.space.net. admin.space.net. (
1 ; Serial
3h ; Refresh after 3 hours
1h ; Retry after 1 hour
1w ; Expire after one week
1h ) ; Negative Caching TTL of 1 hour
; Name Servers defined here
0.168.192.in-addr.arpa. IN NS voyager.space.net.
0.168.192.in-addr.arpa. IN NS gateway.space.net.
; IP Address to Name mappings
1.0.168.192.in-addr.arpa. IN PTR admin.space.net.
5.0.168.192.in-addr.arpa. IN PTR enterprise.space.net.
10.0.168.192.in-addr.arpa. IN PTR gateway.space.net.
15.0.168.192.in-addr.arpa. IN PTR voyager.space.net.
This completes the db.192.168.0 Zone data file. Remember, the whole purpose of this file is to
provide an IP Address-to-name mapping, which is why we do not use the domain name in front
of each entry, but the reversed IP Address followed by the in-addr.arpa. suffix.
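Once these PTR records are loaded, any host pointing at this DNS server can do a reverse lookup; a minimal sketch using Python's standard library (the address is our example host voyager, so this only works against a resolver carrying our zone):

import socket

# Ask the DNS for the PTR record of 192.168.0.15 (our host voyager).
# gethostbyaddr returns (hostname, alias list, address list).
name, aliases, addresses = socket.gethostbyaddr("192.168.0.15")
print(name)  # voyager.space.net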
Common BIND Files
Introduction
So far we have covered in great detail the main files required for the space.net domain. These
files, which we named db.space.net and db.192.168.0, define all the resource records and hosts
available in the space.net domain.
Now, while files like these exist for every domain, their contents vary from domain to domain
(e.g. the resource records defined for each domain). There are, however, some files that are
common and have pretty much the same contents for any type of domain !
We will be looking at these files on this page and analysing them to help you understand why
they exist and how they fit into the big picture :)
Our Common Files
There are 3 common files that we're going to look at. The first two files' contents change
slightly depending on the domain; this is because they must be aware of the various hosts and the
domain name for which they are created. The third file is always the same amongst all DNS
servers and I'll explain more about it later on.
So here are our files:
- named.local or db.127.0.0
- named.conf (analysed at the end)
- named.ca or db.cache
We are going to take each file and look at its contents; this will make them easier to understand.
The named.local File
The named.local file, or db.127.0.0 as some might call it, is used to cover the loopback network.
Since no one was given the responsibility for the 127.0.0.0 network, we need this file to make
sure there are no errors when the DNS server needs to direct traffic to itself (127.0.0.1 IP Address
- Loopback).
When installing BIND, you will find this file in your caching example directory:
/var/named/caching-example, so you can either create a new one or modify the existing one to
meet your requirements.
The file is no different from our example db.ADDR file we saw previously:
$TTL 3h
0.0.127.in-addr.arpa. IN SOA voyager.space.net. admin.space.net. (
1 ; Serial
3h ; Refresh after 3 hours
1h ; Retry after 1 hour
1w ; Expire after 1 week
1h ) ; Negative caching TTL of 1 hour
0.0.127.in-addr.arpa. IN NS voyager.space.net.
0.0.127.in-addr.arpa. IN NS gateway.space.net.
1.0.0.127.in-addr.arpa. IN PTR localhost.
That's all there is to the named.local file !
The named.ca File
The named.ca file (also known as the "root hints file") is created when you install BIND and
doesn't need to be modified unless you have an old version of BIND or it's been a while since you
installed it.
The purpose of this file is to let your DNS server know about the ROOT Servers on the Internet.
There is no point showing the whole content of this file because it's quite big, so I'll show you an
entry of a ROOT server so you can see what it looks like:
; last update: Aug 22, 1997
; related version of root zone: 1997082200
;
;
; formerly NS.INTERNIC.NET
;
. 3600000 IN NS A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET. 3600000 A 198.41.0.4
The domain name "." refers to the root zone and the "3600000" is an explicit time to live for the
records in the file, but it is generally ignored :) The rest are self explanatory. If you want to grab a
new copy of the root hints file, you can ftp to ftp.rs.internic.net (198.41.0.6) and log on
anonymously; there you will find the latest up-to-date version.
The named.conf File
The named.conf file is usually located in the /etc directory and is the key file that ties all the zone
data files together and lets the DNS server know where they are located in the system. This file is
automatically created during the installation but you must edit it in order to add new entries that
will point to any new zone files you have created.
Let's have a close look at the named.conf file and explain it:
options {
directory "/var/named";
};
// Root Servers
zone "." IN {
type hint;
file "named.ca";
};
// Entry for Space.net - name to ip mapping
zone "space.net" IN {
type master;
file "db.space.net";
};
// Entry for Space.net - ip to name mapping
zone "0.168.192.in-addr.arpa" IN {
type master;
file "db.192.168.0";
};
// Entry for Local Loopback
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
};
At first glance it might seem a maze, but it's a lot simpler than you think. Break down each
paragraph and you can clearly see the pattern it follows. Starting from the top, the options
section simply defines the directory where all the files that follow are located; the lines
starting with "//" are simply comments.
The root servers section tells the DNS server where to find the root hints file, which contains all
the root servers. Next up is the entry for our domain space.net, we let the DNS server know which
file contains all the zone entries for this domain and let it know that it will act as a master DNS
server for the domain. The same applies for the entry to follow, which contains the IP to Name
mappings, this is the 0.168.192.in-addr.arpa zone.
The last entry is required for the local loopback. We tell the DNS server which file contains the
local loopback entries.
Notice the "IN" class that is present in each section? If we accidentally forgot to include it in our
zone files, it wouldn't matter, because the DNS server will automatically figure out the class from
our named.conf file. So it's imperative not to forget the "IN" (Internet) class in named.conf,
whereas it really doesn't matter if you don't put it in the zone files. Still, it's good practice to enter it
in the zone files as we did, just to make sure you don't have any problems later on.
And that ends our discussion for the common DNS (BIND) files. Next up is the configuration of
our Slave/Secondary DNS server.
The Secondary (Slave) DNS Server
Introduction
Setting up a Secondary (or Slave) DNS server is much easier than you might think. All the hard
work is done when you set up the Master DNS server by creating your database zone files and
configuring named.conf.
If you are wondering how it is that the Slave DNS server is so easy to set up, you need to
remember that all the Slave DNS server does is update its database from the Master DNS server
(zone transfer), so almost all the files we configure on the Master DNS server are copied to the
Slave DNS server, which acts as a backup in case the Master DNS server fails.
Setting up the Slave DNS Server
Let's have a closer look at the requirements for getting our Slave DNS server up and running.
Keeping in mind that the Slave DNS server is on another machine, we are assuming that you have
downloaded and successfully installed the same BIND version on it. We need to copy 3 files from
the Master DNS server, make some minor modifications to one file and launch our Slave DNS
server.... the rest will happen automatically :)
So which files do we copy ?
The files required are the following:
- named.conf (our configuration file)
- named.ca or db.cache (the root hints file, contains all root servers)
- named.local (local loopback for the specific DNS server so it can direct traffic to itself)
The rest of the files, which are our db.DOMAIN (db.space.net for our example) and db.ADDR
(db.192.168.0 for our example), will be transferred automatically (zone transfer) as
soon as the newly brought up Slave DNS server contacts the Master DNS server to check for any
zone files.
How do I copy the files ?
There are plenty of ways to copy the files between servers. The method you will use depends on
where the servers are located. If, for example, they are right next to you, you can simply use a
floppy disk to copy them or use ftp to transfer them.
If you're going to try to transfer them over a network, and especially over a public one (Internet),
then you might consider something more secure than ftp. I would recommend you use SCP,
which stands for Secure CoPy and uses SSH (Secure SHell).
SCP can be used on its own as long as there is an SSH server on the other side. SCP
will allow you to transfer files over an encrypted connection and is therefore preferred for
sensitive files; plus you get to learn a new command :)
The command used is as follows: scp localfile-to-copy username@remotehost:destination-folder.
Here is the command line I used from my Gateway server (Master DNS): scp /etc/named.conf
root@voyager:/etc/
Keep in mind that the files we copy are placed in the same directory as on the Master DNS server.
Once we have copied all three files we need to modify the named.conf file. To make things
simple, I am going to show you the original file copied from the Master DNS and the modified
version which now sits on the Slave DNS server.
The Master named.conf file is a straight cut/paste from the "Common BIND Files" page, whereas
the Slave named.conf has been modified to suit our Slave DNS server. To help you see the
changes, I have marked them in red:
Master named.conf file
options {
directory "/var/named";
};
// Root Servers
zone "." IN {
type hint;
file "named.ca";
};
// Entry for Space.net - name to ip mapping
zone "space.net" IN {
type master;
file "db.space.net";
};
// Entry for Space.net - ip to name mapping
zone "0.168.192.in-addr.arpa" IN {
type master;
file "db.192.168.0";
};
// Entry for Local Loopback
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
};
Slave named.conf file
options {
directory "/var/named";
};
// Root Servers
zone "." IN {
type hint;
file "named.ca";
};
// Entry for Space.net - name to ip mapping
zone "space.net" IN {
type slave;
file "bak.space.net";
masters { 192.168.0.10 ; } ;
};
// Entry for Space.net - ip to name mapping
zone "0.168.192.in-addr.arpa" IN {
type slave;
file "bak.192.168.0";
masters { 192.168.0.10 ; } ;
};
// Entry for Local Loopback
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
};
As you can see, most of the slave's named.conf file is similar to the master's, except for a few
fields and values, which we are going to explain right now.
The type value is now slave, and that's pretty logical since it tells the DNS server whether it's a
master or a slave.
The file "bak.space.net"; entry basically tells the server what name to give the zone files once
they are transfered from the master dns server. I tend to follow the bak.domain format because
that's the way I see the slave server, a backup dns server. It is not imperative to use this name
scheme, you can change it to whatever you wish. Once the server is up and running, you will see
these files soon appear in the /var/named directory.
Lastly, the masters { 192.168.0.10; }; entry informs our slave server of the IP Address of
the master DNS server it needs to contact in order to retrieve the zone files.
That's all there is to setting up the slave DNS server ! As I told you, once the master is set up, the slave
is a piece of cake because it involves very few changes.
DNS Caching
Introduction
In the previous pages I spoke about the Internet Domain Hierarchy and explained how the ROOT
servers are the DNS servers that contain all the information about the authoritative DNS servers
for the domains immediately below them, e.g. firewall.cx, microsoft.com. In fact, when a request is passed
to any of the ROOT DNS servers, they will redirect the client to the appropriate authoritative
DNS server that is in charge of the domain the client is after.
For example, if you're trying to resolve firewall.cx and your machine contacts a ROOT DNS
server, the server will point your computer to the DNS server in charge of the .CX domain, which
in turn will point your computer to the DNS server in charge of firewall.cx.
The Big Picture
As you can see, successfully resolving a domain can become quite a task for a simple DNS
request. This also means that there's a fair bit of traffic generated in order to complete the
procedure. Whether you're paying a flat rate to your ISP or your company has a permanent
connection to the Internet, the truth is that someone ends up paying for all these DNS requests !
The above example was only for one computer trying to resolve one domain. Try to imagine a
company that has 500 computers connected to the Internet or an ISP with 150,000 subscribers.
Now you're starting to get the big picture !
All that traffic is going to end up on the Internet if something isn't done about it, not to mention
who will be paying for it !
This is where DNS Caching comes in. If we're able to cache all these requests, then we don't need
to ask the ROOT DNS or any other external DNS server as long as we are trying to resolve
previously visited sites or domains, because our caching system would "remember" all the
previous domains we visited (and therefore resolved) and would be able to give us the IP Address
we're looking for !
Note: You should keep in mind that when you install BIND, by default it's set up to be a DNS
Caching server, so all you need to do is start up the service, which is called 'named'.
Almost all Internet name servers use name caching to optimise search costs. Each of these servers
maintains a cache which contains all recently used names as well as a record of where the
mapping information for that name was obtained. When a client (e.g. your computer) asks the
server to resolve a domain, the server will first check to see whether it has authority (meaning if it
is in charge) for that domain. If not, the server checks its cache to see if the domain is in there and
it will find it if it's been recently resolved.
Assuming that the server does find it in the cache, it will take the information and pass it on to the
client but also mark the information as a nonauthoritative binding, which means the server tells
the client "Here is the information you required, but keep in mind, I am not in charge of this
domain".
The information can be out of date and, if it is critical that the client does not receive stale
information, it will then try to contact the authoritative DNS server for the domain and obtain the
up-to-date information it requires.
DNS Caching does come with its problems !
As you can clearly see, DNS caching can save you a lot of money, but it comes with its problems
!
Caching works well in the domain name system because name-to-address bindings change
infrequently. However, they do change. If the servers cached the information the first time it was
requested and never updated it, the entries in the cache could become incorrect.
The solution to the problem !
Fortunately there is a solution that will prevent DNS servers from giving out incorrect
information. To ensure that the information in the cache is correct, every DNS server will time
each entry and dispose of the ones that have exceeded a reasonable time. When a DNS server is
asked for the information after it has removed the entry from its cache, it must go back to the
authoritative source and obtain it again.
Whenever an authoritative DNS server responds to a request, it includes a Time To Live (TTL)
value in the response. This TTL value is set in the zone files as you've probably already seen in
the previous pages.
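The disposal logic described above fits in a few lines; here's a toy sketch of a TTL-honouring cache (this is just the idea, not how BIND actually implements it, and the cached data is made up):

import time

cache = {}  # name -> (ip, expiry timestamp)

def cache_put(name, ip, ttl):
    # Remember the answer together with the moment it becomes too old.
    cache[name] = (ip, time.time() + ttl)

def cache_get(name):
    entry = cache.get(name)
    if entry is None:
        return None
    ip, expiry = entry
    if time.time() > expiry:
        # Entry exceeded its TTL: dispose of it; the caller must now go
        # back to the authoritative source for fresh data.
        del cache[name]
        return None
    return ip

cache_put("www.firewall.cx", "203.0.113.10", ttl=3600)  # example data only
print(cache_get("www.firewall.cx"))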
If you manage a DNS server and are planning to introduce changes in the next couple of weeks,
such as redelegating (moving) your domain to another hosting company, changing the IP Address
your website currently has, or changing mail servers, then it's a good idea to lower your TTL to
a very small value well before the scheduled changes. The reason is that any DNS server
that queries your domain, website or any resource record belonging to your domain will
cache the data for the amount of time the TTL is set to.
By decreasing the $TTL value to, e.g., 1 hour, you ensure that all DNS data from your domain
will expire in the requester's cache 1 hour after it was received. If you didn't do this, the
servers and clients (simple home users) that access your site or domain would cache the DNS data
for the currently set time, which is normally around 3 days. Not a good thing when you make a big
change :)
So keep all the above in mind when you're about to perform a change in the DNS server zone
files. A couple of days before making the change, decrease the $TTL value to something reasonable,
not more than a few hours, and then, once you complete the change, be sure to set it back to
what it was.
I hope this has given you an insight into how you can save yourself or your company money, and
avoid the problems that occur when changing fields and values in the DNS zone files !
File Transfer Protocol - FTP
Introduction
File transfer is among the most frequently used TCP/IP applications and it accounts for a lot of
the network traffic on the Internet. Various standard file transfer protocols existed even before the
Internet was available to everyone and it was these early versions of the file transfer software that
helped create today's standard known as the File Transfer Protocol (FTP). The most recent
specification of the protocol is found in RFC 959.
The Protocol
FTP uses TCP as a transport protocol. This means that FTP inherits TCP's robustness and is very
reliable for transferring files. Chances are if you download files, you've probably used ftp a few
hundred times without realising it ! And if you have a huge warez collection, then make that a
couple of thousand times :)
The picture below shows where FTP stands in relation to the OSI model. As I have noted in other
sections, it's important to understand the concept of the OSI model, because it will greatly help
you understand all this too :)
Now, we mentioned that FTP uses TCP as a transport, but we didn't say which ports it uses! Port
numbers 21 and 20 are used for FTP. Port 21 is used to establish the connection between the 2
computers (or hosts) and port 20 to transfer data (via the Data channel).
But there are some instances where port 21 is used for both establishing the connection and
transferring data, and I will analyse them shortly.
The best thing you can do to "see" it yourself is to grab a packet sniffer which you will
conveniently find in our download section and try to capture a few packets while you're ftp'ing to
a site.
Both Ports - 20 and 21 - Active FTP Mode
I have included a screenshot from my workstation which clearly shows the 2 ports used. In the
example, I have ftp'ed into ftp.cdrom.com. Please click here to view the full picture
Only Port 21 - Passive FTP Mode
Now, in the next picture I ftp'ed into my NetWare server here at home and guess what .... Only
Port 21 was used ! Here is the screen shot:
Please click here to view the full picture.
Let me explain why this is happening:
FTP has two separate modes of operation: Active and Passive. You will use either one depending
on whether your PC is behind a firewall.
Active Mode FTP
Active mode is usually used when there isn't any firewall between you and the FTP server. In
such cases you have a direct connection to the Internet. When you (the client) try to establish a
connection to an FTP server, your workstation includes a second port number (using the PORT
command) that is used when data is to be exchanged; this is known as the Data Channel.
The FTP server then starts the exchange of data from its own port 20 to whatever port was
designated by your workstation (in the screen shot, my workstation used port 1086), and because
the server initiated the communication, it's not controlled by the workstation client. This can also
potentially allow uninvited data to arrive at your computer from anywhere, posing as a normal
FTP transfer. This is one of the reasons Passive FTP is more secure.
Passive Mode FTP
In passive FTP, a client begins a session by sending a request to communicate
through TCP port 21, the port that is conventionally assigned for this use at the FTP server. This
communication is known as the Control Channel connection.
At this point, a PASV command is sent instead of a PORT command. Instead of specifying a port
that the server can send to, the PASV command asks the server to specify a port it wishes to use
for the Data Channel connection. The server replies on the Control Channel with the port number
which the client then uses to initiate an exchange on the Data Channel. The server will thus
always be responding to client-initiated requests on the Data Channel and the firewall can
correlate these.
It's simple to configure your client FTP program to use either Active or Passive FTP. For
example, in Cute FTP, you can set your program to use Passive FTP by going to FTP--> Settings
--> Options and then selecting the "Firewall" tab :
If you remove the above options, then your workstation will be using (if possible) Active FTP
mode, and I say "if possible" because if you're already behind a firewall, there is probably no way
you will be using Active FTP, so the program will automatically change to Passive FTP mode.
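Most FTP libraries let you flip between the two modes with a single call. Here's a minimal sketch using Python's built-in ftplib (the server name is just an example; passive mode is the library's default):

from ftplib import FTP

ftp = FTP('ftp.example.com', timeout=10)  # Control Channel to port 21
ftp.login()                 # anonymous login
ftp.set_pasv(True)          # True = Passive FTP (PASV), False = Active FTP (PORT)
ftp.retrlines('LIST')       # the Data Channel opens using whichever mode was set
ftp.quit()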
So let's have a look at the process of a computer establishing an FTP connection with a server:
(diagram: the packet exchange when an FTP connection is established)
The above is assuming a direct connection to the FTP server. For simplicity's sake, we are
looking at the way the FTP connection is created and not worrying about whether it's a Passive or
Active FTP connection. Since FTP uses TCP as a transport, you would expect to see the 3-way
handshake. Once that is completed and the data connection is established, the client will send its
login name and then password. After the authentication sequence is finished and the user is
authenticated to the server, they are allowed access and are ready to leech the site dry :)
Finally, below are the most commonly used FTP commands:
ABOR: abort previous FTP command
LIST and NLST: list file and directories
DELE: delete a file
RMD: remove a directory
MKD: create a directory
PWD: print current working directory (shows you which directory you're in)
PASS: send password
PORT: specify the IP address/port number the server should connect to for data transfer
QUIT: log off from server
RETR: retrieve file
STOR: send or put file
SYST: identify system type
TYPE: specify type (A for ASCII, I for binary)
USER: send username
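If you'd like to see a few of these commands in action without an FTP client, you can talk to a server's Control Channel directly. A minimal sketch using Python's standard socket module (the server name is an example; any FTP server accepting anonymous logins will respond similarly):

import socket

# open the Control Channel to the FTP server on port 21
s = socket.create_connection(('ftp.example.com', 21), timeout=10)
print(s.recv(1024).decode())              # server banner, e.g. "220 ..."

s.sendall(b'USER anonymous\r\n')          # the USER command from the list above
print(s.recv(1024).decode())              # "331 ..." asking for a password

s.sendall(b'PASS guest@example.com\r\n')  # the PASS command
print(s.recv(1024).decode())              # "230 ..." if the login is accepted

s.sendall(b'QUIT\r\n')                    # log off from the server
s.close()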
And that just about completes our analysis of the FTP protocol !
Trivial File Transfer Protocol - TFTP
Introduction
TFTP is a file transfer protocol and its name suggests it's something close to the FTP protocol
(File Transfer Protocol), which is true .. to a degree. TFTP isn't very popular on the Internet
because of its limitations, which we'll explore next.
The Protocol
TFTP's main differences from FTP are the transport protocol it uses and the lack of any
authentication mechanism. Where FTP uses the robust TCP protocol to establish connections and
complete the file transfers, TFTP uses the UDP protocol, which is connectionless and offers no
delivery guarantees (TFTP adds its own simple lock-step acknowledgements on top of UDP, as we'll
see below). This also explains why you are more likely to find TFTP in a LAN,
rather than a WAN (Wide Area Network) or on the Internet.
The major limitations of TFTP are its lack of authentication and of directory visibility, meaning
you don't get to see the files and directories available on the TFTP server.
As mentioned, TFTP uses UDP as a transport, as opposed to TCP which FTP uses, and works on
port 69, you can clearly see that in the cool 3D diagram on the left.
Port 69 is the default port for TFTP, but if you like, you can modify the settings on your TFTP
server so it runs on a different port.
You will find some very good TFTP servers and clients in the download section.
Now, to make things a bit clearer I have included a screen shot of my workstation tftp'ing into a
TFTP server which I have set up in my little network.
You can see my workstation (192.168.0.100) contacting the TFTP server (192.168.0.1) on port 69
(destination port). In this first packet, my workstation is contacting the server and requesting the
file I entered before I connected to the server. Click here for the full picture.
Because you don't get a listing of the files and directories, you must know which file you want to
download ! In the response I received (2nd packet) the server gets straight into business and starts
sending the file. No authentication whatsoever !
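A TFTP read request is nothing more than a small UDP datagram. Here's a minimal sketch of building and sending one by hand (the packet format follows RFC 1350; the server IP and filename match my example above, but any TFTP server will do):

import socket, struct

# RRQ packet: opcode 1, then filename and transfer mode as zero-terminated strings
filename, mode = b'server.exe', b'octet'
rrq = struct.pack('!H', 1) + filename + b'\x00' + mode + b'\x00'

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(5)
s.sendto(rrq, ('192.168.0.1', 69))   # contact the TFTP server on port 69

data, server = s.recvfrom(4 + 512)   # first DATA block: opcode 3, block number 1
opcode, block = struct.unpack('!HH', data[:4])
print(opcode, block, len(data) - 4)  # opcode 5 instead means an ERROR packet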
Note: UDP, the transport protocol, never sends acknowledgements by nature, so any reliability has
to come from the protocol running on top of it. TFTP provides exactly that: it works in lock-step,
where the receiver acknowledges every data block before the sender transmits the next one. This
acknowledgement scheme is part of the TFTP protocol itself (defined in RFC 1350), not something
bolted on by individual software vendors.
In the example I provide, you can see my workstation sending small packets to the server after
each packet it receives from it. These small packets are TFTP's ACK packets, acknowledging each
data block.
Below is a screen shot of the program I used to TFTP (TFTP Client) to the server:
Notice how I entered the file I wanted to download (server.exe), and selected the name which
the file will be saved as on my local computer (Local File). If I didn't provide the Remote File
name, I would simply get an error popping up at the server side, complaining that no such file exists.
You can also send files using TFTP, as it's not just for downloading :)
So where is TFTP used ?
TFTP is used mostly for backing up router configuration files and IOS images on Cisco devices. It is
also used for diskless booting of PCs where, after the workstation has booted from the network
card's ROM, TFTP is used to download the program it needs to load and run from a central
server.
Below is a diagram which shows what takes place during a TFTP session:
(diagram: the packet exchange during a TFTP session)
In this diagram we are assuming that there is no error checking built into the software running at
both ends (client and server).
And that pretty much sums it all up for the TFTP protocol.
Introduction To The Internet Control Message Protocol
Introduction
The Internet Control Message Protocol, or ICMP as we will be calling it, is a very popular
protocol and actually part of an Internet Protocol (IP) implementation. Because IP wasn't
designed to be absolutely reliable, ICMP came into the scene to provide feedback on problems
which existed in the communication environment.
If I said the word 'Ping' most people who work with networks would recognise that a 'ping' is part
of ICMP and in case you didn't know that, now you do :)
ICMP is one of the most useful protocols provided to troubleshoot network problems like DNS
resolutions, routing, connectivity and a lot more. Personally, I use ICMP a lot, but you need to
keep its limits in mind because you might end up spending half a day trying to figure out why
you're not getting a 'ping reply' ('echo reply' is the correct term) from, for example,
www.firewall.cx when, in fact, the site's webserver is configured NOT to reply to 'pings' for
security reasons !
Cool Note
A few years ago there was a program released, which still circulates around the Internet, called
Click ( I got my hands on version 1.4). Click was designed to run on a Windows platform and
work against mIRC users. The program would utilise the different messages available within the
ICMP protocol to send special error messages to mIRC users, making the remote user's program
think it had lost connectivity with the IRC server, thus disconnecting them from the server ! The
magic is not what the program can do, but how it does it ! This is where a true networking guru
will be able to identify and fix any network security weakness.
The Protocol
ICMP is defined in RFC (Request For Comments) 792. Looking at its position in the OSI model
we can see that it's sitting in the Network layer (layer 3) alongside IP. There are no ports used
with ICMP; this is because of where the protocol sits in the OSI model. Ports are only used by
protocols which work at the Transport layer and above:
The ICMP protocol uses different 'messages' to identify the purpose of an ICMP packet, for
example, an 'echo' (ping) is one type of ICMP message.
I am going to break down the different message descriptions as they have been defined by the
RFC 792.
There is a lot of information to cover in ICMP so I have broken it down to multiple pages rather
than sticking everything into one huge page that would bore you!
Also, I haven't included all the messages which ICMP supports, rather I selected a few of the
more common ones that you're likely to come across. You can always refer to RFC 792 to get
the details on all messages.
We will start with a visual example of where the ICMP header and information are put in a
packet, to help you understand better what we are dealing with :)
The structure is pretty simple, not a lot involved, but the contents of the ICMP header will change
depending on the message it contains. For example, the header information for an 'echo' (ping)
message (this is the correct term) is different to that of a 'destination unreachable' message, also a
function of ICMP.
NOTE: If you were to run a packet sniffer on your LAN and catch a "ping" packet to see what it
looks like, you would get more than I am showing here. There will be an extra header, the
datalink header, which is not shown here because that header will change (or more likely be
removed) as the packet moves from your LAN to the Internet, but the 2 headers you see in this
picture will certainly remain the same until they reach their destination.
So, that now leaves us to analyse a few of the selected ICMP messages !
The table below shows all the ICMP messages the protocol supports. The messages that are in the
green colour are the ones covered. Please click on the ICMP message you wish to read about:
ICMP - Echo / Echo Reply (Ping) Message
Introduction
As mentioned in the previous page, an Echo is simply what most people call a 'ping'. The Echo
Reply is the 'ping reply'. ICMP Echos are used mostly for troubleshooting. When there are 2 hosts
which have communication problems, a few simple ICMP Echo requests will show if the 2 hosts
have their TCP/IP stacks configured correctly and if there are any problems with the routes
packets are taking in order to get to the other side.
The 'ping' command is very well known, but the results of it are very often misunderstood and for
that reason I have chosen to explain all those other parameters next to the ping reply, but we will
have a look at that later on.
Let's have a look at what an ICMP-Echo or Echo Reply packet looks like:
If the above packet was an ICMP Echo (ping), then the Type field takes a value of 8. If it's an
ICMP Echo Reply (ping reply) then it takes a value of 0.
The picture below is a screen shot I took when doing a simple ping from my workstation:
Okay, now looking at the screen shot above, you can see I 'pinged' www.firewall.cx. The first
thing my workstation did was to resolve that URL to an IP address. This was done using DNS.
Once the DNS server returned the IP address of www.firewall.cx, the workstation generated an
ICMP packet with the Type field set to 8.
Here is the proof:
The picture above is a screenshot from my packet sniffer taken at the same time this experiment was
taking place. The packet displayed is one of the 4 packets which were sent from my workstation
to the webserver of firewall.cx
Notice the ICMP type = 8 Echo field right under the ICMP Header section. This clearly shows
that this packet is being sent from the workstation and not received. If it was received, it would
have been an 'Echo Reply' and have a value of 0.
The next weird thing, if anyone noticed, is the data field. Look at the screen shot from command
prompt above and notice the value there and the value the packet sniffer is showing on the left.
One says 32 Bytes, and the other 40 Bytes !
The reason for this is that the packet sniffer is taking into account the ICMP header fields (ICMP
type, code, checksum, identifier and sequence number), and I'll prove it to you right now.
Look at the top of this page where we analysed the ICMP headers (the 3d picture), you will notice
that the lengths (in Bits) of the various fields are as follows: 8, 8, 16, 16, 16. These add up to a
total of 64 Bits. Now 8 Bits = 1 Byte, therefore 64 Bits = 8 Bytes. Take the 32 Bytes of data the
workstation's command prompt is showing and add 8 Bytes .... and you have 40 Bytes in total.
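You can reproduce this arithmetic by building the echo packet yourself. Here's a minimal sketch that packs the 8-byte ICMP header plus a 32-byte payload (the payload content is arbitrary; the checksum routine is the standard Internet checksum from RFC 1071):

import struct

def inet_checksum(data: bytes) -> int:
    # one's complement of the one's complement sum of all 16-bit words
    if len(data) % 2:
        data += b'\x00'
    total = sum(struct.unpack('!%dH' % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

payload = b'abcdefghijklmnopqrstuvwabcdefghi'         # 32 bytes of data
header = struct.pack('!BBHHH', 8, 0, 0, 1, 1)         # type=8 (Echo), code=0,
                                                      # checksum=0, id, sequence
checksum = inet_checksum(header + payload)
header = struct.pack('!BBHHH', 8, 0, checksum, 1, 1)  # re-pack with the checksum

packet = header + payload
print(len(header), len(payload), len(packet))         # 8 32 40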
If you want to view the full screen shot of the packet sniffer, please click here.
And that just about does it for these two ICMP messages !
ICMP - Destination Unreachable Message
Introduction
This ICMP message is quite interesting, because it doesn't actually contain one message, but six!
This means that the ICMP Destination unreachable further breaks down into 6 different messages.
We will be looking at them all and analysing a few of them to help you get the idea.
To make sure you don't get confused, keep one thing in mind: The ICMP Destination unreachable
is a generic ICMP message, the different code values or messages which are part of it are there to
clarify the type of "Destination unreachable" message that was received. It goes something like this:
ICMP Destination <Code value or message> unreachable.
The ICMP - Destination net unreachable message is one which a user would usually get from the
gateway when it doesn't know how to get to a particular network.
The ICMP - Destination host unreachable message is one which a user would usually get from the
remote gateway when the destination host is unreachable.
If, in the destination host, the IP module cannot deliver the packet because the indicated protocol
module or process port is not active, the destination host may send an ICMP destination protocol /
port unreachable message to the source host.
In another case, when a packet received must be fragmented to be forwarded by a gateway but the
"Don't Fragment" flag (DF) is on, the gateway must discard the packet and send an ICMP
destination fragmentation needed and DF set unreachable message to the source host.
These ICMP messages are most useful when trying to troubleshoot a network. You can check to
see if all routers and gateways are configured properly and have their routing tables updated and
synchronised.
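Decoding these on the wire is straightforward: a Destination unreachable packet is always ICMP type 3, and the code field selects one of the six messages. A small sketch (code values as defined in RFC 792):

# ICMP type 3 (Destination Unreachable) code values, per RFC 792
UNREACHABLE_CODES = {
    0: 'net unreachable',
    1: 'host unreachable',
    2: 'protocol unreachable',
    3: 'port unreachable',
    4: 'fragmentation needed and DF set',
    5: 'source route failed',
}

def describe(icmp_type: int, icmp_code: int) -> str:
    if icmp_type == 3:
        return 'ICMP Destination ' + UNREACHABLE_CODES.get(icmp_code, 'unknown')
    return 'not a Destination unreachable message'

print(describe(3, 0))  # ICMP Destination net unreachable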
Let's look at the packet structure of an ICMP destination unreachable packet:
Please read on as the following example will help you understand all the above.
The Analysis
When you open a DOS command prompt and type "ping 200.200.200.200", assuming that your
workstation is NOT part of that network, then it would forward the ICMP Echo request to the
gateway that's configured in your TCP/IP properties. At that point, the gateway should be able to
figure out where to forward the ICMP Echo request.
The gateway usually has a "default route" entry, this entry is used when the gateway doesn't know
where the network is. Now, if the gateway has no "default route" you would get an "ICMP
Destination net unreachable" message when you try to get to a network which the gateway doesn't
know about. When you're connected to the Internet via a modem, then your default gateway is the
modem.
In order for me to demonstrate this, I set up my network in a way that should make it easy for you
to see how everything works. I have provided a lot of pictures hoping to make it as easy as
possible to understand.
I will analyse why and how you get an "ICMP - Destination net unreachable" message.
In the example above, I've set up my workstation to use the Linux server as a default gateway,
which has an IP of 192.168.0.5. The Linux server also has a default gateway entry and this is IP:
192.168.0.1 (the Windows 2000 Server).
When my workstation attempts to ping (send an ICMP Echo request) to IP 200.200.200.200, it
realises it's on a different network, so it sends it to the Linux server, which in turn forwards it to
its default gateway (the Win2k server) so it can then be forwarded to the Internet and eventually I
should get a ping reply (ICMP Echo reply) if the host exists and has no firewall blocking ICMP
echo requests.
Here is the packet which I captured:
When looking at the decoded section (picture above) you can see in the ICMP header section that
the ICMP Type is equal to 8, so this confirms that it's an ICMP Echo (ping). As mentioned
earlier, we would expect to receive an ICMP echo reply.
Check out though what happens when I remove the default gateway entry from the Linux server:
Now what I did was to remove the default gateway entry from the Linux server. So when it gets a
packet from my workstation, it won't know what to do with it. This is how you get the
gateway to generate an "ICMP Destination net unreachable" message and send it back to the
source host (my workstation).
Here is a screen shot from the command prompt:
As you can see, the Linux server has returned an "ICMP Destination net unreachable". This is one
of the six possible 'ICMP Destination Unreachable' messages as listed at the beginning of this
page. The Linux server doesn't know what to do with the packet since it has no way of getting to
that 200.200.200.0 network, so it sends the "ICMP Destination net unreachable" message to my
workstation, notifying it that it doesn't know how to get to that network.
Let's now take a look what the packet sniffer caught :
The decoder on the left shows that the Linux server (192.168.0.5) sent back to my workstation
(192.168.0.100) an ICMP Destination unreachable message (look at the ICMP type field, right
under the ICMP header) but if you also check out the ICMP Code (highlighted field), it's equal to
0, which means "net unreachable". Scrolling right at the top of this page, the first table clearly
shows that when the code field has a value of 0, this is indeed a "net unreachable" message.
It is also worth noticing the "Returned IP header" which exists within the ICMP header. This is
the IP header of the packet my workstation sent to the Linux server when it attempted to ping
200.200.200.200, and following that is 64 bits (8 bytes) of the original data.
That completes our discussion of the ICMP 'Destination Unreachable' generated packets.
ICMP - Source Quench Message
Introduction
The ICMP - Source quench message is one that can be generated by either a gateway or host. You
won't see any such message pop up on your workstation screen unless you're working on a
gateway which will output to the screen all ICMP messages it gets. In short, an ICMP - Source
quench is generated by a gateway or the destination host and tells the sending end to ease up
because it cannot keep up with the speed at which it's receiving the data.
Analysis
Now let's get a bit more technical: A gateway may discard internet datagrams (or packets) if it
does not have the buffer space needed to queue the datagrams for output to the next network on
the route to the destination network. If a gateway discards a datagram, it may send an ICMP Source quench message to the internet source host of the datagram.
Let's have a look at the packet structure of the ICMP - Source quench message:
A destination host may also send an ICMP - Source quench message if datagrams arrive too fast
to be processed. The ICMP - Source quench message is a request to the host to cut back the rate
at which it is sending traffic to the internet destination. The gateway may send an ICMP - Source
quench for every message that it discards. On receipt of an ICMP - Source quench message, the
source host should cut back the rate at which it is sending traffic to the specified destination until
it no longer receives ICMP - Source quench messages from the gateway. The source host can
then gradually increase the rate at which it sends traffic to the destination until it again receives
ICMP - Source quench messages.
The gateway or host may also send the ICMP - Source quench message when it approaches its
capacity limit rather than waiting until the capacity is exceeded. This means that the
datagram which triggered the ICMP - Source quench message may still be delivered.
That pretty much does it for this ICMP message.
ICMP - Redirect Message
Introduction
The ICMP - Redirect message is always sent from a gateway to the host and the example below
will illustrate when this is used.
Putting it simply (before we have a look at the example) the ICMP - Redirect message occurs
when a host sends a datagram (or packet) to its gateway (destination of this datagram is a
different network), which in turn forwards the same datagram to the next gateway (next hop) and
this second gateway is on the same network as the host. The second gateway will generate this
ICMP message and send it to the host from which the datagram originated.
There are 4 different ICMP - Redirect message types and these are:
The format of this ICMP message is as follows: ICMP - Redirect (0, 1, 2 or 3) message.
Our example:
The gateway (Win2k Server) sends a redirect message (arrow No. 3) to the host in the following
situation:
Gateway 1 (the Linux server), receives an Internet datagram (arrow No. 1) from a host on the
same network. The gateway checks its routing table and obtains the address of the next gateway
(hop) on the route to the datagram's Internet destination network and sends the datagram to it
(arrow No. 2).
Now, gateway 2 receives the datagram and, if the host identified by the Internet source address of
the datagram (in other words, it checks the source IP of the datagram, which will still be
192.168.0.100), is on the same network, a redirect message (arrow No. 3) is sent to the host. The
redirect message advises the host to send its traffic for the Internet network directly to gateway 2
as this is a shorter path to the destination. The gateway then forwards the original datagram's data
(arrow No. 1) to its Internet destination (arrow No.4).
For datagrams (or packets) carrying the IP source route option with the gateway address in the
destination address field, a redirect message is not sent, even if there is a better route to the
ultimate destination than the next address in the source route.
Analysis
Let's have a look at the structure of an ICMP - Redirect message:
That's all about ICMP - Redirect messages !
ICMP - Time Exceeded Message
Introduction
The ICMP - Time exceeded message is one which is usually created by gateways or routers. In
order to fully understand this ICMP message, you must be familiar with the IP header within a
packet. If you like you can go to the Download - Documents section and grab a copy of the
TCP/IP in an Ethernet II Frame file which breaks down the IP header nicely.
When looking at an IP header, you will see the TTL and Fragment Flag fields which play a big
part in how this ICMP message works. Please make sure you check them out before attempting to
continue !
The ICMP - Time exceeded message is generated when the gateway processing the datagram (or
packet, depending on how you look at it) finds that the Time To Live field (this field is in the IP
header of all packets) is equal to zero, meaning the datagram must be discarded. The same gateway
may also notify the source host via the time exceeded message.
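As a toy illustration of that rule, here's a minimal sketch of the gateway-side logic, with a plain dict standing in for a real packet (type 11, code 0 is the 'time to live exceeded in transit' message):

def forward(packet: dict):
    # a gateway decrements the TTL before forwarding
    packet['ttl'] -= 1
    if packet['ttl'] <= 0:
        # discard the datagram and notify the source host why
        return {'icmp_type': 11, 'icmp_code': 0, 'notify': packet['src']}
    return packet  # otherwise, pass it along to the next hop

print(forward({'src': '192.168.0.100', 'dst': '200.200.200.200', 'ttl': 1}))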
The term 'fragment' means to 'cut to pieces'. When the data is too large to fit into one packet, it is
cut into smaller pieces and sent to the destination. On the other end, the destination host will
receive the fragmented pieces and put them back together to create the original large data packet
which was fragmented at the source.
Analysis
Let's have a look at the structure of an ICMP - Time exceeded message:
If a host reassembling a fragmented datagram (or packet) cannot complete the reassembly within
its time limit due to missing fragments, it discards the datagram and may send an ICMP - time
exceeded message.
If fragment zero is not available then no ICMP - time exceeded message needs to be sent at
all. Code 0 may be received from a gateway and Code 1 from a host.
So, summing it up, an ICMP - Time exceeded message can be generated because the Time to live
field in the IP header has reached a value of zero (0) or because a host reassembling a fragmented
datagram cannot complete the reassembly within its time limit because there are missing
fragments (Fragment reassembly time exceeded the allocated time).
IPSec - Internet Protocol Security
Introduction
IPSec is one of the new buzz words these days in the networking security area. It's becoming very
popular and also a standard in most operating systems. Windows 2000 fully supports IPSec and
that's most probably where you are likely to find it. Routers these days also support IPSec to
establish secure links and to ensure that no-one can view or read the data they are exchanging.
When the original IP (Internet Protocol) specification was created, it didn't really include much of
a security mechanism to protect it from potential hackers. There were 2 reasons they didn't give
IP some kind of security. The first was that back then (we are talking around 30 years ago) most
people thought that users and administrators would continue to behave fairly well and not make
any serious attempts to compromise other people's traffic. The second reason was that the
cryptographic technology needed to provide adequate security simply wasn't widely available and
in most cases not even known about !
How IPSec works
The Internet Security Association and Key Management Protocol (ISAKMP) and Oakley
ISAKMP provides a way for two computers to agree on security settings and exchange a security
key that they can use to communicate securely. A Security Association (SA) provides all the
information needed for two computers to communicate securely. The SA contains a policy
agreement that controls which algorithms and key lengths the two machines will use, plus the
actual security keys used to securely exchange information.
There are two steps in this process. First, the two computers must agree on the following three
things:
1) The encryption algorithm to be used (DES, triple DES)
2) Which algorithm they'll use for verifying message integrity (MD5 or SHA-1)
3) How connections will be authenticated: using a public-key certificate, a shared secret key or
Kerberos.
Once all that has been sorted out, they start another round of negotiations which cover the
following:
1) Whether the Authentication Header (AH) protocol will be used
2) Whether the Encapsulating Security Payload (ESP) protocol will be used
3) Which encryption algorithm will be used for ESP
4) Which authentication protocol will be used for AH
IPSec has 2 mechanisms which work together to give you the end result, which is a secure way to
send data over public networks. Keep in mind that you can use these mechanisms together or just
one of them on its own.
These mechanisms are:
1) Authentication Header
2) Encapsulating Security Payload - ESP
The Authentication Header (AH) Mechanism
The Authentication Header information is added into the packet which is generated by the sender,
right between the Network (Layer 3) and Transport (Layer 4) layers (see picture below).
Authentication protects your network, and the data it carries, from tampering. Tampering might
be a hacker sitting between the client and server, altering the contents of the packets sent between
the client and server, or someone trying to impersonate either the client or server, thus fooling the
other side and gaining access to sensitive data.
To overcome this problem, IPSec uses an Authentication Header (AH) to digitally sign the entire
contents of each packet. This signature provides 3 benefits:
1) Protects against replay attacks. If an attacker can capture packets, save them and modify them,
and then send them to the destination, then they can impersonate a machine when that machine is
not on the network. This is what we call a replay attack. IPSec will prevent this from happening
by including the sender's signature on all packets.
2) Protection against tampering. The signatures added to each packet by IPSec means that you
can't alter any part of a packet undetected.
3) Protection against spoofing. Each end of a connection (e.g client-server) verifies the other's
identity with the authentication headers used by IPSec.
The AH is computed on the entire packet, including payload (upper layers - 4,5,6,7) and headers
of each layer. The following picture shows us a packet using AH :
On the left you are seeing the analysis of the Authentication Header.
AH Algorithms
For point-to-point communication (e.g client to server), suitable authentication algorithms include
keyed Message Authentication Codes (MACs) based on symmetric encryption algorithms (e.g
DES) or on one-way hash functions (e.g MD5 or SHA-1).
For multicast communication (e.g between a group of routers), one-way hash algorithms
combined with asymmetric signature algorithms are usually used, but they are also more CPU
intensive.
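To get a feel for what a keyed MAC is, you can compute one with Python's standard library. This is just the generic HMAC construction with a made-up key and message, purely for illustration, not a full AH implementation:

import hmac, hashlib

key = b'shared-secret-key'   # known only to the two endpoints
message = b'the packet contents to be authenticated'

# keyed MAC using SHA-1, one of the hash functions mentioned above
mac = hmac.new(key, message, hashlib.sha1).hexdigest()
print(mac)

# the receiver recomputes the MAC and compares; any tampering changes the result
assert hmac.compare_digest(mac, hmac.new(key, message, hashlib.sha1).hexdigest())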
The Encapsulating Security Payload - ESP
The Authentication Header (AH) we spoke about will protect your data from tampering, but it
will not stop people from seeing it. For that, IPSec uses encryption in the form of the
Encapsulating Security Payload (ESP). ESP is used to encrypt the entire payload of an IPSec
packet (the payload is the portion of the packet which contains the upper layer data).
ESP is a bit more complex than AH because on its own it can provide authentication, replay-proofing
and integrity checking. It accomplishes this by adding 3 separate components:
1) An ESP header
2) An ESP trailer and
3) An ESP authentication block.
Each of these components contains some of the data needed to provide the necessary
authentication and integrity checking. To prevent tampering, an ESP client has to sign the ESP
header, application data and ESP trailer into one unit, while ESP encrypts the application data
and the ESP trailer to provide confidentiality. The combination of this overlapping signature and
encryption operation provides good security.
Let's have a look at a packet using IPSec - ESP:
IPSec can get very complicated and messy. I have tried keeping everything as simple as possible,
but you should keep in mind that this topic can be studied in far greater depth than is presented
here!
Introduction To The Internet Protocol
Introduction
Perhaps one of the most important and well known protocols is the Internet Protocol or, if you
like, IP. IP gives us the ability to uniquely identify each computer in a network or on the Internet.
When a computer is connected to a network or the Internet, it is assigned a unique IP address. If
you're connecting to the Internet, chances are you're given an IP automatically by your ISP, if
you're connecting to your LAN then you're either given the IP automatically or you manually
configure the workstation with an assigned IP.
I can't overemphasise the importance of fully understanding IP if you really want to understand
how network communications work, especially when it comes to an IP network, like the Internet.
DNS, FTP, SNMP, SMTP, HTTP and a lot of other protocols and services rely heavily on the IP
protocol in order to function correctly, so you can immediately see that IP is more than just an IP
Address on your workstation.
Now, because IP is a HUGE subject and it's impossible to cover in one or two pages, I decided to
split it into a few different sections in order to make it easy to read and learn about.
Here is a summary of what's covered:
Section 1: Binary and the Internet Protocol. Here we cover a few basic Binary concepts and get to
see how Binary and IP fit together.
Section 2: Internet Protocol Header. Find out how the Internet Protocol fits in the OSI Model.
Also includes a detailed 3D diagram of the IP Header which shows the fields that exist in the IP
Header.
Section 3: Internet Protocol Classes. We get to see the 5 different IP Classes and analyse them in
Binary. You also get to learn about the Network ID and Host ID in an IP Address.
Section 4: Subnetting. One of the most important things you should know. Detailed explanation of
how subnetting works. Includes simple to complicated examples. You should be comfortable with
the first 3 sections in order to understand this section. For more information, please see the
Subnetting Introduction page.
So, what are you waiting for ? Let's discover and learn all about one of the most important
protocols in the networking world !
Binary & The Internet Protocol
Introduction
To understand the Internet Protocol, we need to learn and understand Binary. It is very important
to know and understand Binary because part of the IP protocol is the "Subnetting" section, which
can only be explained and understood when an IP Address is converted to Binary!
Those who are experienced in Binary can skim this section quickly, but do have a look through.
A lot of people are not aware that computers do not understand words, pictures or sounds when
we interact with them by playing a game, or reading or drawing something on the screen. The truth
is that all computers can understand is zeros (0) and ones (1) !
What we see on the screen is just an interpretation of what the computer understands, so the
information displayed is useful and meaningful to us.
Binary: Bits and Bytes
Everyone who uses the Internet will have, at one stage or another, come across the terms "Byte" or
"Bit"; usually when you're downloading, you get the speed indication in Bytes or KBytes per
second. We are going to see exactly what a Bit, Byte and KByte is, so you understand the terms.
To put it as simply as possible, a Bit is the smallest unit/value of Binary notation. The same way
we say 1 cent is the smallest amount of money you can have, a Bit is the same thing, not in
cents or dollars, but in Binary.
A Bit can have only one value, either a one (1) or a zero (0). So if I gave you a value of zero: 0,
then you would say that is one Bit. If I gave you two of them: 00, you would say that's two Bits.
Now, if you had 8 zeros or ones together: 0110 1010 (I put a space in between to make it easier
for the eyes) you would say that's 8 Bits or, one Byte ! Yes that is correct, 8 Bits are equal to one
Byte.
The picture below gives you some examples:
It's like saying, if you have 100 cents, that is equal to one Dollar. In the same way, 8 Bits (doesn't
matter if they are all 1s or 0s or a mixture of the two) would equal one Byte.
And to sum this all up, 1024 Bytes equal 1 KByte (Kilobyte). Why 1024 and not 1000 ? Well, it's
because of the way Binary works: 1024 is 2 to the power of 10, so it falls on a neat binary
boundary, whereas 1000 does not. If you did the maths, you would find the above correct.
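If you want to check the maths yourself, Python's interactive prompt does it in a couple of lines:

print(2 ** 10)  # 1024, the number of Bytes in 1 KByte
print(2 ** 3)   # 8, the number of Bits in 1 Byte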
So what's Binary got to do with IP ?
Well, just as I explained in the introduction, computers display the zeros and ones in a way that
makes the information useful to us. The Internet Protocol works a bit like this as well, where 98%
of the time we see it in a decimal notation, but the computer understands it in binary. The picture
below gives you an example of how a computer understands an IP Address:
The above example shows an IP address in decimal notation, which we understand more easily.
This IP Address - 192.168.0.1 - is then converted to Binary, which is what the computer
understands, and you can see how big the number gets ! It's easier for us to remember 4 different
numbers than 32 zeros or ones !
Now, keeping in mind what we said earlier about Bits and Bytes, have you ever heard or read
people saying that an IP Address is a 32 Bit address ? It is, and you can now see why:
So to sum up all the above, we now know what Binary notation is, what a Bit, Byte and KByte are
and how Binary relates to an IP Address which is usually represented in its Decimal notation.
Understanding the conversion between Decimal and Binary
Now we're going to look at how the conversion works between Decimal and Binary. This is an
important step, because you'll probably find yourself in need of such a conversion when dealing
with complex subnets.
The conversion is not that hard once you grasp the concept. The picture below shows an IP
Address that we are going to convert to Binary. Keep in mind that the method I'm going to show
you is the same for all conversions.
We are now going to convert the first octet in the IP Address 192.168.0.1 (Decimal) to Binary. In
other words, we take the "192" and convert it to Binary, and we are not going to have to do any
difficult calculations, just simple additions:
If you have read and understood the first section of this page, you should know that we need 8
bits to create one octet or, if you like, the 192 number. Each bit takes a certain value which never
changes; this value is shown in purple, right above the bit. We then select the bits we need in such
a way that the sum of all selected bits gives us the decimal number we need.
If you wanted to explain the conversion in mathematical terms, you would say that each bit is a
power of 2 (2^). For example, bit 8 is actually '2^7' = 128 in decimal, bit 7 is '2^6' = 64 in decimal,
bit 6 is '2^5' = 32 in decimal, bit 5 is '2^4' = 16 in decimal, bit 4 is '2^3' = 8 in decimal, bit 3 is
'2^2' = 4 in decimal, bit 2 is '2^1' = 2 in decimal, bit 1 is '2^0' = 1 in decimal.
Note: When calculating the decimal value of an octet (192 in the example above), the Bit
numbers do NOT represent the power of two we must raise in order to get the decimal value.
This means that Bit 1 does NOT translate to '2^1' = 2 in decimal, but to '2^0' = 1.
In our example, we used the 192. As you saw, we needed bits 8 and 7 and this gave us the Binary
number of 11000000 which is 192 in Decimal. You must remember that the values of each bit
never change! For example, bit 8 always has a decimal value of 128, whereas bit 1 always takes
the value of 1. Using this method, you will find it easy to convert Decimal to Binary without the
need for complex mathematical calculations.
So let's have a look at the next octet, which is the decimal number 168:
Here again you can see that we needed to choose bits 8, 6 and 4 (in other words put a "1" in the
bit's position) in order to get a decimal value of 168. So the Binary value of 10101000 is equal to
the decimal value of 168.
Let's now look at all 4 octets of our IP Address, in Binary:
No matter which way you convert, from Decimal to Binary or Binary to Decimal, the same
method is used, so if you understood the above you should be able to convert any Binary or
Decimal number either way.
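Python can confirm the worked example above. A quick sketch of the conversion in both directions:

ip = '192.168.0.1'

# Decimal to Binary: format each octet as 8 bits
binary = '.'.join(f'{int(octet):08b}' for octet in ip.split('.'))
print(binary)  # 11000000.10101000.00000000.00000001

# Binary to Decimal: int() with base 2 reverses the process
print([int(bits, 2) for bits in binary.split('.')])  # [192, 168, 0, 1]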
That just about does it for this section, you're now ready for the next section !
The Internet Protocol (IP) Header
Introduction
Just like every other protocol, the Internet Protocol has a place in the OSI Model. Because it's
such an important protocol and other protocols depend upon it, it needs to be placed before them,
which is why you will find it in Layer 3 of the OSI model:
When a computer receives a packet from the network, the computer will firstly check the
destination MAC address of the packet at the Datalink layer (2) and, if it passes, the packet is then
passed on to the Network layer.
At the Network layer it will check the packet to see if the destination IP Address matches with the
computer's IP Address (if the packet is a broadcast, it will pass the network layer anyway).
From there, the packet is processed as required by the upper layers.
On the other hand, if the computer is generating a packet to send to the network then, as the
packet travels down the OSI model and reaches the Network layer, the destination and source IP
Address of this packet are added in the IP Header.
The IP Header
Now we are going to analyse the Internet Protocol header, so you can see the fields it has and
where they are placed. In here you will find the destination and source IP Address fields, which are
essential to every packet using the protocol.
It's worth noting that the 9th field, which is the "Protocol" field, contains some important
information that the computer uses to find out where it must pass the datagram once it strips off
the IP header.
If you remember, TCP and UDP exist on layer 4 of the OSI Model, which is the transport layer.
When data arrives at a computer and the packet is processed by each layer, the computer needs to
know which protocol above should receive the data. This Protocol field tells the computer to give
the remaining data to either the TCP or UDP protocol, whichever is directly above it.
Also, the Destination IP Address is another important field which contains the IP Address of the
destination machine.
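To see the Protocol and address fields in their actual byte positions, you can pack a bare-bones 20-byte IPv4 header with Python's struct module. This is a sketch only (the checksum is left at zero and no payload is attached):

import socket, struct

src = socket.inet_aton('192.168.0.100')
dst = socket.inet_aton('192.168.0.1')

header = struct.pack('!BBHHHBBH4s4s',
                     0x45,  # version 4, header length 5 x 4 = 20 bytes
                     0,     # type of service
                     20,    # total length (header only in this sketch)
                     1, 0,  # identification, flags/fragment offset
                     64,    # Time To Live
                     6,     # Protocol field: 6 = TCP, 17 = UDP, 1 = ICMP
                     0,     # checksum (left at zero here)
                     src, dst)

proto = header[9]  # the 10th byte of the header is the Protocol field
print({6: 'hand the data to TCP', 17: 'hand the data to UDP'}.get(proto))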
The next section talks about the 5 different classes of IP Address.
Internet Protocol Classes - Network & Host ID
Introduction
Every protocol suite defines some type of addressing that identifies computers and networks. IP
Addresses are no exception to this "rule". There are certain values that an IP Address can take and
these have been defined by the IETF (as most Internet standards are).
A simple IP Address is a lot more than just a number. It tells us the network that the workstation
is part of and the node ID. If you don't understand what I am talking about, don't let it worry you
too much because we are going to analyse everything here :)
IP Address Classes and Structure
When the IETF sat down to sort out the range of numbers that were going to be used
by all computers, they came out with 5 different ranges or, as we call them, "Classes" of IP
Addresses, and when someone applies for IP Addresses they are given a certain range within a
specific "Class" depending on the size of their network.
To keep things as simple as possible, let's first have a look at the 5 different Classes:
In the above table, you can see the 5 Classes. Our first Class is A and our last is E. The first 3
classes ( A, B and C) are used to identify workstations, routers, switches and other devices
whereas the last 2 Classes ( D and E) are reserved for special use.
As you would already know, an IP Address consists of 32 Bits, which means it's 4 bytes long. The
first octet (first 8 Bits or first byte) of an IP Address is enough for us to determine the Class to
which it belongs. And, depending on the Class to which the IP Address belongs, we can
determine which portion of the IP Address is the Network ID and which is the Node ID.
For example, if I told you that the first octec of an IP Address is "168" then, using the above
table, you would notice that it falls within the 128-191 range, which makes it a Class B IP
Address.
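That first-octet test is easy to put into code. A minimal sketch (the ranges follow the classful rules in the table above; 0 and 127 are special cases covered later on this page):

def ip_class(address: str) -> str:
    first = int(address.split('.')[0])
    if first == 0 or first == 127:
        return 'reserved (default route / loopback)'
    if first <= 126:
        return 'A'
    if first <= 191:
        return 'B'
    if first <= 223:
        return 'C'
    if first <= 239:
        return 'D (multicast)'
    return 'E (experimental)'

print(ip_class('168.10.5.1'))  # B, just like the example above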
Understanding the Classes
We are now going to have a closer look at the 5 Classes. If you remember earlier I mentioned that
companies are assigned different IP ranges within these classes, depending on the size of their
network. For instance, if a company required 1000 IP Addresses it would probably be assigned a
range that falls within a Class B network rather than a Class A or C.
The Class A IP Addresses were designed for large networks, Class B for medium size networks
and Class C for smaller networks.
Introducing Network ID and Node ID concepts
We need to understand the Network ID and Node ID concept because it will help us to fully
understand why Classes exist. Putting it as simply as possible, an IP Address gives us 2 pieces of
valuable information:
1) It tells us which network the device is part of (Network ID).
2) It identifies that unique device within the network (Node ID).
Think of the Network ID as the suburb you live in and the Node ID as your street in that suburb.
You can tell exactly where someone is if you have their suburb and street name. In the same way,
the Network ID tells us which network a particular computer belongs to and the Node ID
identifies that computer from all the rest that reside in the same network.
The picture below gives you a small example to help you understand the concept:
Explanation:
In the above picture, you can see a small network. We have assigned a Class C IP Range for this
network. Remember that Class C IP Addresses are for small networks. Looking now at Host A,
you will see that its IP Address is 192.168.0.2. The Network ID portion of this IP Address is in
blue, while the Host ID is in orange.
I suppose the next question someone would ask is: How do I figure out which portion of the IP
Address is the Network ID and which is the Host ID ?
That's what we are going to answer next.
The Network and Node ID of each Class
The network Class helps us determine how the 4 byte, or 32 Bit, IP Address is divided between
network and node portions.
The table below shows you (in binary) how the Network ID and Node ID changes depending on
the Class:
Explanation:
The table above might seem confusing at first but it's actually very simple. We will take Class A
as an example and analyse it so you can understand exactly what is happening here:
Any Class A network has a total of 7 bits for the Network ID (bit 8 is always set to 0) and 24 bits
for the Host ID. Now all we need to do is calculate how much 7 bits is:
2 to the power of 7 = 128 Networks, and for the hosts: 2 to the power of 24 = 16,777,216 hosts in
each Network, of which 2 cannot be used because one is the Network Address and the other is the
Network Broadcast address (see the table towards the end of this page). This is why when we
calculate the "valid" hosts in a network we always subtract 2. So if I asked you how many
"valid" hosts you can have on a Class A Network, you should answer 16,777,214 and NOT
16,777,216.
Below you can see all this in one picture:
The same story applies for the other 2 Classes we use, that's Class B and Class C, the only
difference is that the number of networks and hosts changes because the bits assigned to them are
different for each class.
Class B networks have 14 bits for the Network ID (the two leading bits are fixed to '10' and can't
be changed) and 16 bits for the Host ID. That means you can have up to '2 to the power of 14' =
16,384 Networks and '2 to the power of 16' = 65,536 Hosts in each Network, of which 2 cannot be
used because one is the Network Address and the other is the Network Broadcast address (see the
table towards the end of this page). So if I asked you how many "valid" hosts you can have on a
Class B Network, you should answer 65,534 and NOT 65,536.
Class C networks have 21 bits for the Network ID (the three leading bits are fixed to '110' and
can't be changed) and 8 bits for the Host ID. That means you can have up to '2 to the power of 21'
= 2,097,152 Networks and '2 to the power of 8' = 256 Hosts in each Network, of which 2 cannot
be used because one is the Network Address and the other is the Network Broadcast address (see
the table towards the end of this page). So if I asked you how many "valid" hosts you can have on
a Class C Network, you should answer 254 and NOT 256.
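The 'subtract 2' rule is easy to verify with Python's built-in ipaddress module. A quick sketch using a Class C sized network:

import ipaddress

net = ipaddress.ip_network('192.168.0.0/24')  # a Class C sized network

print(net.network_address)    # 192.168.0.0   - the Network Address, unusable
print(net.broadcast_address)  # 192.168.0.255 - the Broadcast Address, unusable
print(net.num_addresses - 2)  # 254 "valid" hosts, as calculated above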
Now, even though we have 3 Classes of IP Addresses that we can use, there are some IP
Addresses that have been reserved for special use. This doesn't mean you can't assign them to a
workstation but in the case that you did, it would create serious problems within your network.
For this reason it's best that you avoid using these IP Addresses.
The following table shows the IP Addresses that you should avoid using:
Network 0.0.0.0: Refers to the default route. This route is used to simplify routing tables used by IP.
Network 127.0.0.0: Reserved for loopback. The Address 127.0.0.1 is often used to refer to the
local host. Using this Address, applications can address a local host as if it were a remote host.
IP Address with all host bits set to "0" (Network Address), e.g. 192.168.0.0: Refers to the actual
network itself. For example, network 192.168.0.0 can be used to identify network 192.168. This
type of notation is often used within routing tables.
IP Address with all node bits set to "1" (Subnet / Network Broadcast), e.g. 192.168.255.255: IP
Addresses with all node bits set to "1" are local network broadcast addresses and must NOT be
used. Some examples: 125.255.255.255 (Class A), 190.30.255.255 (Class B), 203.31.218.255
(Class C). See "Multicasts" & "Broadcasts" for more info.
IP Address with all bits set to "1" (Network Broadcast), e.g. 255.255.255.255: The IP Address
with all bits set to "1" is a broadcast address and must NOT be used. These are destined for all
nodes on a network, no matter what IP Address they might have.
Now make sure you keep to the above guidelines because you're going to bump into a lot of
problems if you don't !
IMPORTANT NOTE: It is imperative that every network, regardless of Class and size, has a
Network Address (first IP Address e.g 192.168.0.0 for Class C network) and a Broadcast Address
(last IP Address e.g 192.168.0.255 for Class C network), as mentioned in the table and
explanation diagrams above, which cannot be used.
So when calculating available IP Addresses in a network, always remember to subtract 2 from the
number of IP Addresses within that network.
That all pretty much covers this section.
Next, is the Subnetting section, and before you proceed, make sure you're comfortable with the
new concepts and material we have covered, otherwise subnetting will be very hard to
understand.
Introduction To The Open Systems Interconnect Model (OSI)
Introduction
OSI is a standard description or "reference model" for how messages should be transmitted
between any two points in a telecommunication network. Its purpose is to guide product
implementors so that their products will consistently work with other products.
The Model
The OSI model was created by the ISO (International Organization for Standardization) so that
different vendors' products would work with each other. You see, the problem was that when HP
decided to create a network product, it would be incompatible with similar products of a different
vendor, e.g. IBM. So when you bought 40 network cards for your company, you would make sure
that the rest of the equipment was from the same vendor, to ensure compatibility. As you can
understand, things were quite messy until the OSI model came into the picture.
As most would know, the OSI model consists of 7 layers. Each layer has been designed to do a
specific task. Starting from the top layer (7), we will see how the data which you type gets
converted into segments, the segments into datagrams, the datagrams into packets, the packets
into frames, and then the frames are sent down the wire, usually twisted pair, to the receiving
computer.
Please select one of the 7 layers by clicking on it, or simply use the menu :)
The OSI flash animation below is provided to help you further understand the functionality of the
OSI model:
The picture below is another quick summary of the OSI model:
When you're finished reading through the OSI model, to understand how data travels through the
layers and clearly see the header which each layer adds/removes, visit the Data Encapsulation -
Decapsulation page.
OSI Layer 1 - Physical Layer
The first four layers define how data is transmitted end-to-end.
There are no protocols which work at the Physical layer. As mentioned, Ethernet, Token Ring and
other topologies are specified here.
Layer 1 - The Physical Layer
The Physical layer has two responsibilities: it sends bits and receives bits. Bits come only in
values of 1 or 0. The Physical layer communicates directly with the various types of actual
communication media. Different kinds of media represent these bit values in different ways.
Specific protocols are needed for each type of media to describe the proper bit patterns to be used,
how data is encoded into media signals and the various qualities of the physical media's
attachment interface.
The Physical layer specifications specify the electrical, mechanical and functional requirements
for activating, maintaining and deactivating a physical link between end systems. At the Physical
layer, the interface between the Data Terminal Equipment (DTE) and the Data Circuit-Terminating
Equipment (DCE) is identified. The Physical layer's connectors (RJ-45, BNC, etc.)
and different physical topologies (Bus, Star, Hybrid networks) are defined by the OSI as
standards, allowing different systems to communicate.
We talk more about the Physical topologies in the Topologies section. Please refer to it if you
want to read more. You can also find out more about Ethernet at the Ethernet section.
OSI Layer 2 - Datalink Layer
The first four layers define how data is transmitted end-to-end.
Some common protocols which work at the Datalink layer are: ARP, RARP, DCAP.
Layer 2 - The Datalink Layer
The Datalink layer ensures that messages are delivered to the proper device and translates
messages from the Network layer into bits for the Physical layer to transmit. It formats the message into
data frames (notice how we are not using the term segments) and adds a customized header
containing the hardware destination and source address.
This added information forms a sort of capsule that surrounds the original message (or data),
think of it like grabbing a letter which has information and putting it into an envelope. The
envelope is only used to get the letter to its destination, right? So when it arrives at the addressee,
the envelope is opened and discarded, but the letter isn't because it has the information the
addressee needs.
Data traveling through a network works in a similar manner. Once it gets to the destination, it
will be opened and read (processed). This is illustrated in the Data Encapsulation - Decapsulation
section.
The Datalink layer is subdivided into two other sublayers, the Media Access Control (MAC) and
the Logical Link Control (LLC). The figure below illustrates this:
Media Access Control (MAC) 802.3
This defines how packets are placed on the media (cable). Contention media (Ethernet) access is
first come first served access where everyone shares the same bandwidth. Physical addressing is
defined here. What's Physical addressing? It's simple.
You will come across 2 addressing terms: 1) Logical addressing and 2) Physical addressing.
Logical addressing is basically the address which is given by software, e.g. an IP address. When you
get an IP address, this is considered a "logical address" which is provided to you after your
TCP/IP stack is loaded.
Physical addressing is an address which is given not by the software, but by the hardware. Every
network card has a "MAC" address which is burnt into the card's EPROM (a special memory chip)
and this special address is used to uniquely identify your computer's network card from all the
others on the network.
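If you're curious about your own machine's MAC address, Python can usually dig it up via the standard uuid module (with one caveat: if the address can't be determined, uuid.getnode() may return a random number instead):

import uuid

mac = uuid.getnode()  # the hardware (MAC) address as a 48-bit integer
print(':'.join(f'{(mac >> shift) & 0xFF:02x}' for shift in range(40, -8, -8)))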
There is a whole page dedicated to MAC Addressing if you would like to read more about it.
Logical Link Control (LLC) 802.2
This sublayer is responsible for identifying Network layer protocols and then encapsulating them
when they are about to be transmitted onto the network, or decapsulating them when it receives a
packet from the network and passing it on to the layer above, which is the Network layer. An LLC
header tells the Datalink layer what to do with a packet once a frame is received. For example, a
host (computer) will receive a frame and then look in the LLC header to understand that the
packet is destined for the IP protocol at the Network layer. The LLC can also provide flow
control and sequencing of control bits.
If you are finding all this a bit too difficult to understand, I suggest that you read more on the OSI
model and check the Data Encapsulation - Decapsulation page which explains how the data
travels up and down the OSI model and shows how each layer adds or removes its header
information depending on the direction of the data.
OSI Layer 3 - Network Layer
The first four layers define how data is transmitted end-to-end.
Some common protocols which work at the Network layer are: IP, DHCP, ICMP, IGRP, EIGRP,
RIP, RIP2, MARS.
Layer 3 - The Network Layer
The Network layer is responsible for routing through an internetwork and for network
addressing. This means that the Network layer is responsible for transporting traffic between
devices that are not locally attached. Routers, or other layer-3 devices, are specified at the
Network layer and provide routing services in an internetwork.
In the Open Systems Interconnection (OSI) communications model, the Network layer knows the
address of the neighboring nodes in the network, packages output with the correct network
address information, selects routes and quality of service and recognizes and forwards to the
Transport layer incoming messages for local host domains. Among existing protocols that
generally map to the OSI network layer are the Internet Protocol (IP) part of TCP/IP and NetWare
IPX/SPX. Both IP Version 4 and IP Version 6 (IPv6) map to the OSI network layer.
As mentioned above, the Internet Protocol works on this layer. This means that when you see an
IP address, for example 192.168.0.1, this IP address maps to the Network layer in the OSI model,
in other words only the Network layer deals with or cares about IP addresses in the OSI model.
To keep things simple, IP is analysed under the "Protocols" section.
OSI Layer 4 - Transport Layer
The first four layers define how data is transmitted end-to-end.
Some common protocols which work at the Transport layer are: TCP, UDP.
Layer 4- The Transport Layer
The Transport layer is responsible for providing mechanisms for multiplexing upper-layer applications, establishing sessions, transferring data and tearing down virtual circuits. It also hides
details of any network-dependent information from the higher layers by providing transparent
data transfer.
Services located in the Transport layer both segment and reassemble data from upper-layer
applications and unite it onto the same data stream. Some of you might already be familiar with
TCP and UDP and know that TCP is a reliable service and UDP is not. Application developers
have their choice of the two protocols when working with TCP/IP protocols.
As mentioned above, the Transport layer provides different mechanisms for the transfer of data
from one computer to another. Below is a brief diagram which tells you a bit about the protocols.
These protocols are also analysed in the Protocols area.
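To make the developer's choice a bit more concrete, here is a minimal Python sketch of the two kinds of sockets an application can open. The host and port are made-up examples, and the send calls are commented out so the snippet runs without a server on the other end:

```python
import socket

HOST, PORT = "192.0.2.10", 9000      # hypothetical host and port

# TCP: reliable and connection-oriented; the stack sequences the
# segments and acknowledges receipt on our behalf.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect((HOST, PORT))          # three-way handshake first...
# tcp.sendall(b"reliable data")      # ...then sequenced, ACKed delivery

# UDP: connectionless; each datagram is sent once, with no sequencing
# and no acknowledgments - delivery is best-effort only.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# udp.sendto(b"best-effort data", (HOST, PORT))

print("TCP and UDP sockets created:", tcp.type, udp.type)
```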
OSI Layer 5 - Session Layer
The last 3 layers of the OSI model are referred to as the "Upper" layers. These layers are
responsible for applications communicating between hosts. None of the upper layers know
anything about networking or network addresses.
Some common protocols which work at the Session layer are: DNS, LDAP, NetBIOS.
Layer 5 - The Session Layer
The Session layer is responsible for setting up, managing and then tearing down sessions between
Presentation layer entities. The Session layer also provides dialog control between devices, or
nodes. It coordinates communication between systems and serves to organise their
communication by offering three different modes: simplex, half-duplex and full-duplex. The
Session layer basically keeps one application's data separate from other applications' data.
Some examples of Session-layer protocols are:
Network File System (NFS): Developed by Sun Microsystems and used with TCP/IP and Unix workstations to allow transparent access to remote resources.
Structured Query Language (SQL): Developed by IBM to provide users with a simpler way to define their information requirements on both local and remote systems.
Remote Procedure Call (RPC): A broad client/server redirection tool used for disparate service environments. Its procedures are created on clients and performed on servers.
X Window: Widely used by intelligent terminals for communicating with remote Unix computers, allowing them to operate as though they were locally attached monitors.
OSI Layer 6 - Presentation Layer
The last 3 layers of the OSI model are referred to as the "Upper" layers. These layers are
responsible for applications communicating between hosts. None of the upper layers know
anything about networking or network addresses.
There are no protocols which work specifically at the Presentation layer, but the protocols which work at the Application layer are said to work across all 3 upper layers.
Layer 6- The Presentation Layer
The Presentation Layer gets its name from its purpose: It presents data to the Application layer.
It's basically a translator and provides coding and conversion functions. A successful data transfer
technique is to adapt the data into a standard format before transmission. Computers are
configured to receive this generically formatted data and then convert the data back into its native
format for reading. By providing translation services, the Presentation layer ensures that data
transferred from the Application layer of one system can be read by the Application layer of
another host.
The OSI has protocol standards that define how standard data should be formatted. Tasks like
data compression, decompression, encryption and decryption are associated with this layer. Some
Presentation layer standards are involved in multimedia operations. The following serve to direct
graphic and visual image presentation :
JPEG: The Joint Photographic Experts Group brings these photo standards to us.
MIDI: The Musical Instrument Digital Interface is used for digitized music.
MPEG: The Moving Picture Experts Group's standard for the compression and coding of motion video for CDs is very popular.
QuickTime: For use with Macintosh or PowerPC programs; it manages audio and video applications.
OSI Layer 7 - Application Layer
The last 3 layers of the OSI model are referred to as the "Upper" layers. These layers are
responsible for applications communicating between hosts. None of the upper layers know
anything about networking or network addresses.
FTP, TFTP, Telnet, SMTP and other protocols work on the top three layers of the OSI model, which obviously includes the Application layer.
Layer 7- The Application Layer
The Application layer of the OSI model is where users communicate with the computer. The
Application layer is responsible for identifying and establishing the availability of the intended
communication partner and determining if sufficient resources for the intended communication
exist. The user interfaces with the computer at the application layer.
Although computer applications sometimes require only desktop resources, applications may unite communicating components from more than one network application; for example: file transfers, e-mail, remote access, network management activities and client/server processes.
There are various protocols which are used at this layer. The definition of a "protocol" is a set of rules by which two computers communicate. In plain English, you can say that a protocol is a language, for example, English. For me to speak to you and make sense, I need to structure my
sentence in a "standard" way which you will understand. Computer communication works pretty
much the same way. This is why we have so many different protocols, each one for a specific
task.
Data Encapsulation & Decapsulation in the OSI Model
Introduction
Here we are going to explain in detail how data travels through the OSI model. You must keep in
mind that the OSI model is a guideline. It tells the computer what it's supposed to do when data
needs to be sent or when data is received.
In order to make it easier for most, there is a movie file available which will show you exactly what we are about to analyse. Click here to obtain the encap-decap movie (1MB). You will need Windows Media Player to view it.
Our Study Case
We are going to analyse an example in order to try and understand how data encapsulation and decapsulation work. This should make it easier for most people.
Try to see it this way :
When a car is built in a factory, one person doesn't do all the jobs; rather, the car is put on a production line, and as it moves through, each person adds different parts, so by the time it reaches the end of the production line it's complete and ready to be sent out to the dealer.
The same story applies for any data which needs to be sent from one computer to another. The
OSI model, which was created by the International Organization for Standardization (ISO), exists to ensure that everyone follows these guidelines (just like the production line above), and therefore each computer is able to communicate with every other computer, regardless of whether one computer is a Macintosh and the other is a PC.
One important piece of information to keep in mind is that data flows 2 ways in the OSI model,
DOWN (data encapsulation) and UP (data decapsulation).
The picture below is an example of a simple data transfer between 2 computers and shows how
the data is encapsulated and decapsulated:
Explanation :
The computer in the above picture needs to send some data to another computer. The Application layer is where the user interface exists; here the user interacts with the application he or she is using. This data is then passed to the Presentation layer and on to the Session layer. These three layers add some extra information to the original data that came from the user and then pass it to the Transport layer. Here the data is broken into smaller pieces (transmitted one piece at a time) and a TCP header is added. At this point, the data at the Transport layer is called a segment.
Each segment is sequenced so the data stream can be put back together on the receiving side
exactly as transmitted. Each segment is then handed to the Network layer for network addressing (logical addressing) and routing through the internetwork. At the Network layer we call the data (which at this point includes the Transport header and the upper-layer information) a packet. The Network layer adds its IP header and then sends it off to the Datalink layer, where we call the data (which now also includes the Network layer header) a frame. The Datalink layer is responsible for taking packets from the Network layer and placing them on the network medium (cable). The Datalink layer encapsulates each packet in a frame which contains the hardware (MAC) addresses of the source and destination computers (hosts), and the LLC information which identifies the protocol at the layer above (the Network layer) to which the packet should be passed when it arrives at its destination. At the end of the frame you will also notice the FCS field, the Frame Check Sequence, which is used for error checking and is added by the Datalink layer.
If the destination computer is on a remote network, then the frame is sent to the router or gateway to be routed to the destination. To put this frame on the network, it must be turned into a digital signal. Since a frame is really a logical group of 1s and 0s, the Physical layer is responsible for encoding these digits into a digital signal which is read by devices on the same local network.
There are also a few 1s and 0s put at the beginning of the frame, only so the receiving end can synchronise with the digital signal it will be receiving.
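If it helps to see the above as code, below is a toy Python sketch of the trip down the stack. The header layouts are simplified stand-ins (real TCP, IP and Ethernet headers carry many more fields), but the nesting order and the FCS at the end mirror what we just described:

```python
import struct
import zlib

def encapsulate(data: bytes) -> bytes:
    """Toy OSI encapsulation - simplified headers, not real layouts."""
    # Transport layer: prepend a cut-down TCP-style header -> "segment"
    src_port, dst_port, seq = 49152, 80, 1
    segment = struct.pack("!HHI", src_port, dst_port, seq) + data
    # Network layer: prepend source/destination IPs -> "packet"
    packet = bytes([192, 168, 0, 2]) + bytes([200, 200, 200, 5]) + segment
    # Datalink layer: prepend MAC addresses, append the FCS -> "frame"
    dst_mac = bytes.fromhex("112233445566")
    src_mac = bytes.fromhex("aabbccddeeff")
    body = dst_mac + src_mac + packet
    fcs = struct.pack("!I", zlib.crc32(body))   # Frame Check Sequence
    return body + fcs

frame = encapsulate(b"hello")
print(len(frame), "bytes, ready for the Physical layer (plus its preamble)")
```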
Below is a picture of what happens when the data is received at the destination computer.
Explanation :
The receiving computer will first synchronise with the digital signal by reading the few extra 1s and 0s mentioned above. Once the synchronisation is complete and it has received the whole frame, it passes it to the layer above it, which is the Datalink layer.
The Datalink layer will do a Cyclic Redundancy Check (CRC) on the frame. This is a computation which the computer does, and if the result matches the value in the FCS field, it assumes that the frame has been received without any errors. Once that's out of the way, the Datalink layer will strip off any information or header which was put on by the remote system's Datalink layer and pass the rest (now we are moving from the Datalink layer to the Network layer, so we call the data a packet) to the layer above, which is the Network layer.
At the Network layer, the IP address is checked, and if it matches the machine's own IP address, the Network layer header (or IP header, if you like) is stripped off from the packet and the rest is passed to the layer above, which is the Transport layer. There, the rest of the data is now called a segment.
The segment is processed at the Transport layer, which rebuilds the data stream (on the sending computer it was split into pieces at this layer so it could be transferred) and acknowledges to the transmitting computer that it received each piece. Since we are sending an ACK back to the sender from this layer, it is obvious that we are using TCP and not UDP.
Please refer to the Protocols section for more clarification. After all that, it then happily hands the
data stream to the upper-layer application.
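And here is the trip back up the stack, continuing the toy sketch from the encapsulation explanation above (it reuses the frame and the simplified header sizes from that snippet): each layer checks what it cares about, strips its header and hands the rest up:

```python
def decapsulate(frame: bytes) -> bytes:
    """Toy OSI decapsulation - the reverse of encapsulate() above."""
    body, fcs = frame[:-4], frame[-4:]
    # Datalink layer: recompute the CRC and compare it to the FCS field
    if struct.pack("!I", zlib.crc32(body)) != fcs:
        raise ValueError("frame corrupted in transit - discard it")
    packet = body[12:]                 # strip dst MAC (6) + src MAC (6)
    # Network layer: check the destination IP, then strip the IP header
    if packet[4:8] != bytes([200, 200, 200, 5]):
        raise ValueError("packet not addressed to this host")
    segment = packet[8:]
    # Transport layer: strip the TCP-style header, rebuild the stream
    return segment[8:]

# The data stream is handed back to the upper-layer application intact
assert decapsulate(frame) == b"hello"
```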
You will find that when analysing the way data travels from one computer to another, most people never analyse in detail any layers above the Transport layer. This is because the whole process of getting data from one computer to another usually involves layers 1 to 4 (Physical to Transport) or layer 5 (Session) at the most, depending on the type of data.
Introduction To Routing
Introduction
Routing is one of the most important features in a network that needs to connect with other
networks. On this page we try to explain the difference between routed and routing protocols, and explain the different methods used to achieve the routing of protocols. The fact is that if routing of protocols was not possible, then we wouldn't be able to communicate using computers, because there would be no way of getting the data across to the other end !
Definition
Routing is used for taking a packet (data) from one device and sending it through the network to
another device on a different network. If your network has no routers then you are not routing.
Routers route traffic to all the networks in your internetwork. To be able to route packets, a router
must know the following :
• Destination address
• Neighbour routers from which it can learn about remote networks
• Possible routes to all remote networks
• The best route to each remote network
• How to maintain and verify routing information
Before we go on, I would like to define a few networking terms :
Convergence: The process required for all routers in an internetwork to update their routing tables
and create a consistent view of the network, using the best possible paths. No user data is passed
during convergence.
Default Route: A "standard" route entry in a routing table which is used as a last resort: any packet whose destination does not match a more specific entry in the routing table is sent via the default route (typically towards the gateway).
Static Route: A permanent route entered manually into a routing table. This route will remain in
the table, even if the link goes down. It can only be erased manually.
Dynamic Route: A route entry which is dynamically (automatically) updated as changes to the network occur. Dynamic routes are basically the opposite of static routes.
We start off with an explanation of the IP routing process and move on to routed protocols, then tackle the routing protocols and finally the routing tables. There is plenty to read about, so grab that tea or coffee and let's start !
Please click on one of the following sections:
The IP Routing Process
Routed Protocols
Routing Protocols
The IP Routing Process
Introduction
We are going to take a look at what happens when routing occurs on a network. When I was new to the networking area, I thought that all you needed was the IP address of the machine you wanted to contact, but little did I know. You actually need a bit more information than just the IP address !
The process we are going to explain is fairly simple and doesn't really change, no matter how big
your network is.
The Example:
In our example, we have 2 networks, Network A and Network B. Both networks are connected
via a router (Router A) which has 2 interfaces: E0 and E1. These interfaces are just like the
interface on your network card (RJ-45), but built into the router.
Now, we are going to describe step by step what happens when Host A (Network A) wants to
communicate with Host B (Network B) which is on a different network.
1) Host A opens a command prompt and enters >Ping 200.200.200.5.
2) IP works with the Address Resolution Protocol (ARP) to determine which network this packet is destined for, by looking at the IP address and the subnet mask of Host A (see the sketch after these steps). Since this is a request for a remote host, meaning it is not destined for a host on the local network, the packet must be sent to the router (the gateway for Network A) so that it can be routed to the correct remote network (which is Network B).
3) Now, for Host A to send the packet to the router, it needs to know the hardware address of the router's interface which is connected to its network (Network A); in case you didn't realise, we are talking about the MAC (Media Access Control) address of interface E0. To get the hardware address, Host A looks in its ARP cache - a memory location where these MAC addresses are stored for a few seconds.
4) If it doesn't find it there, this means that either a long time has passed since it last contacted the router or it simply hasn't yet resolved the IP address of the router (192.168.0.1) to a hardware (MAC) address. It then sends an ARP broadcast containing the following question: "What is the hardware (MAC) address for IP 192.168.0.1?". The router identifies that IP address as its own and must answer, so it sends Host A a reply, giving it the MAC address of its E0 interface. This is also one of the reasons why the first "ping" will sometimes time out: it takes some time for the ARP request to be sent and the queried machine to respond with its MAC address, and by the time all that happens, the first ping packet's time allowance (its TTL, Time To Live) may have expired, so it times out !
5) The router responds with the hardware address of its E0 interface, to which the 192.168.0.1 IP address is bound. Host A now has everything it needs in order to transmit a packet out on the local network to the router. The Network layer now hands down to the Datalink layer the packet it generated with the ping (ICMP echo request), along with the hardware address of the router. This packet includes the source and destination IP addresses as well as the ICMP echo request which was specified in the Network layer.
6) The Datalink Layer of Host A creates a frame, which encapsulates the packet with the
information needed to transmit on the local network. This includes the source and destination
hardware addresses (MAC) and the type field, which specifies the Network layer protocol, e.g. IPv4 (the IP version we use) or ARP. At the end of the frame, in the FCS portion, the Datalink layer adds a Cyclic Redundancy Check (CRC) so that the receiving machine (the router) can figure out whether the frame it received has been corrupted. To learn more about how the frame is created, visit the Data Encapsulation - Decapsulation page.
7) The Datalink Layer of Host A hands the frame to the Physical layer which encodes the 1s and
0s into a digital signal and transmits this out on the local physical network.
8) The signal is picked up by the router's E0 interface, which reads the frame. It will first do a CRC check and compare the result with the CRC value Host A added to this frame, to make sure the frame is not corrupt.
9) After that, the destination hardware address (MAC) of the received frame is checked. Since this
will be a match, the type field in the frame will be checked to see what the router should do with
the data packet. IP is in the type field, and the router hands the packet to the IP protocol running
on the router. The frame is stripped and the original packet that was generated by Host A is now
in the router's buffer.
10) IP looks at the packet's destination IP address to determine if the packet is for the router.
Since the destination IP address is 200.200.200.5, the router determines from the routing table
that 200.200.200.0 is a directly connected network on interface E1.
11) The router places the packet in the buffer of interface E1. The router needs to create a frame
to send the packet to the destination host. First, the router looks in the ARP cache to determine
whether the hardware address has already been resolved from a prior communication. If it is not
in the ARP cache, the router sends an ARP broadcast out of E1 to find the hardware address of 200.200.200.5.
12) Host B responds with the hardware address of its network interface card with an ARP reply.
The router's E1 interface now has everything it needs to send the packet to the final destination.
13) The frame generated from the router's E1 interface has the source hardware address of the E1 interface and the destination hardware address of Host B's network interface card. However, the most important thing here is that even though the frame's source and destination hardware addresses changed at every interface of the router it was sent to and from, the IP source and destination addresses never changed. The packet was never modified at all; only the frame changed.
14) Host B receives the frame and runs a CRC. If that checks out, it strips off the frame and hands the packet to IP. IP then checks the destination IP address. Since the IP destination address matches the IP configuration of Host B, it looks in the protocol field of the packet to determine the purpose of the packet.
15) Since the packet is an ICMP echo request, Host B generates a new ICMP echo-reply packet
with a source IP address of Host B and a destination IP address of Host A. The process starts all
over again, except that it goes in the opposite direction. However, the hardware address of each
device along the path is already known, so each device only needs to look in its ARP cache to
determine the hardware (MAC) address of each interface.
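As promised back in step 2, here is a small Python sketch of that "local or remote?" decision. The host addresses match the walkthrough, but the /24 subnet mask is an assumption, since the steps above don't state one:

```python
import ipaddress

# Host A's configuration (the /24 mask is assumed for illustration)
host_a = ipaddress.ip_interface("192.168.0.10/24")
gateway = ipaddress.ip_address("192.168.0.1")   # Router A's E0 interface

def next_hop(destination: str):
    """Step 2: compare network bits under the mask to pick local vs gateway."""
    dst = ipaddress.ip_address(destination)
    if dst in host_a.network:    # same network -> deliver directly
        return dst
    return gateway               # remote network -> hand it to the router

print(next_hop("192.168.0.77"))   # local host  -> 192.168.0.77
print(next_hop("200.200.200.5"))  # remote host -> 192.168.0.1 (the gateway)
```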
And that just about covers our routing analysis. If you found it confusing, take a break and come back later on and give it another shot. It's really simple once you grasp the concept of routing.
Routed Protocols
Introduction
We all understand that TCP/IP and IPX/SPX are protocols which are used in a Local Area Network (LAN) so computers can communicate with each other, and with other computers on the Internet.
Chances are that in your LAN you are most probably running TCP/IP. This protocol is what we
call a "routed" protocol. The term "routed" refers to something which can be passed on from one
place (network) to another. In the example of TCP/IP, this is when you construct a data packet
and send it across to another computer on the Internet
This ability to use TCP/IP to send data across networks and the Internet is the main reason it's so popular and dominant. If you're also thinking of NetBEUI and IPX/SPX, then note that NetBEUI is not a routed protocol, but IPX/SPX is! The reason for this lies in the information a packet holds when it uses one of these protocols.
Let me explain:
If you look at a TCP/IP or IPX/SPX packet, you will notice that they both contain a "network" layer. For TCP/IP this translates to the IP layer (Layer 3), whereas for IPX/SPX it's the IPX layer (Layer 3). To make it easy to understand, I will use TCP/IP as an example.
In the picture below, you can see a TCP/IP packet within an Ethernet II Frame (The frame is like
an "envelope" which encapsulates the TCP/IP packet):
Looking closely, you will notice that Layer 3 (the Network layer) contains the IP header. It is within this section that the computer puts the source and destination IP addresses. Thanks to the existence of this IP header, we are able to use a destination IP address which is not on our network; the computer will figure this out after completing a simple calculation and will know if it needs to send the data to the router for it to be forwarded to its destination. You can read more on Layer 3 by visiting the OSI page.
IPX/SPX contains a similar field which gives it the same ability, which is to send packets over to
different networks.
NetBEUI, on the other hand, has no such field! This means that NetBEUI holds no information about the destination network to which it needs to send the data, as it was developed for LAN use only; you could say that all hosts are considered to be on the same logical network and all resources are considered to be local. This classifies NetBEUI as a "non-routed" protocol.
Routing Protocols
Introduction
Routing protocols were created for routers. These protocols have been designed to allow the
exchange of routing tables, or known networks, between routers. There are a lot of different routing protocols, each one designed for specific network sizes, so I am not going to be able to mention and analyse them all; instead, I will focus on the most popular.
The two main types of routing: Static routing and Dynamic routing
The router learns about remote networks from neighbour routers or from an administrator. The router then builds a routing table (the creation of which I will explain in detail) that describes how to find the remote networks. If a network is directly connected, then the router already knows how to get to it. If a network is not attached, the router must learn how to get to the remote network with either static routing (the administrator manually enters the routes in the router's table) or dynamic routing (this happens automatically, using routing protocols).
The routers then update each other about all the networks they know. If a change occurs, e.g. a router goes down, the dynamic routing protocols automatically inform all routers about the change. If static routing is used, then the administrator has to enter all the changes into all the routers manually, and therefore no routing protocol is used.
Only Dynamic routing uses routing protocols, which enable routers to:
• Dynamically discover and maintain routes
• Calculate routes
• Distribute routing updates to other routers
• Reach agreement with other routers about the network topology
Statically programmed routers are unable to discover routes or send routing information to other routers. They send data over routes defined by the network administrator.
A stub network is so called because it is a dead end in the network: there is only one route in and one route out and, because of this, it can be reached using static routing, thus saving valuable bandwidth.
Dynamic Routing Protocols
There are 3 types of Dynamic routing protocols; these differ mainly in the way that they discover and make calculations about routes (click to select):
1) Distance Vector
2) Link State
3) Hybrid
• Distance Vector routers compute the best path from information passed to them by their neighbours
• Link State routers each have a copy of the entire network map
• Link State routers compute best routes from this local map
The Table below (clickable) shows the main characteristics of a few different types of dynamic
routing protocols:
You can also classify the routing protocols in terms of their location on a network. For example, routing protocols can exist in, or between, autonomous systems.
Exterior Gateway Protocols (EGPs) are found between autonomous systems, whereas Interior Gateway Protocols (IGPs) are found within autonomous systems. An example of an EGP is the Border Gateway Protocol (BGP), which is also used amongst the Internet's routers, whereas examples of IGPs are RIP, IGRP and EIGRP.
Distance Vector Routing Protocols
Introduction
Distance Vector routing protocols use frequent broadcasts (255.255.255.255 or FF:FF:FF:FF) of their entire routing table every 30 seconds on all their interfaces in order to communicate with their neighbours. The bigger the routing tables, the more broadcasts. This methodology significantly limits the size of network on which Distance Vector routing protocols can be used.
Routing Information Protocol (RIP) and Interior Gateway Routing Protocol (IGRP) are two very
popular Distance Vector routing protocols. You can find links to more information on these
protocols at the bottom of the page. (That's if you haven't had enough by the time you get there !)
Distance Vector protocols view networks in terms of adjacent routers and hop counts, the latter also happening to be the metric used. The "hop" count (a maximum of 15 for RIP, where 16 is deemed unreachable, and 255 for IGRP) increases by one every time the packet transits through a router.
So the router makes decisions about the way a packet will travel based on the number of hops it takes to reach the destination, and if it has 2 different ways to get there, it will simply send the packet via the shortest path, regardless of the connection speed. This is known as pinhole congestion.
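Here is a minimal Python sketch of that behaviour (the network numbers are made up): a router merges a neighbour's advertised table by adding one hop and keeping whichever route has the fewest hops, with no regard at all for link speed:

```python
INFINITY = 16   # for RIP, 16 hops is deemed unreachable

def receive_update(my_table, neighbour, advertised):
    """Merge a neighbour's advertised {network: hops} table, RIP-style."""
    for network, hops in advertised.items():
        candidate = min(hops + 1, INFINITY)         # one extra hop via them
        current = my_table.get(network, (INFINITY, None))[0]
        if candidate < current:                     # fewest hops wins,
            my_table[network] = (candidate, neighbour)  # whatever the speed!

# Router B starts off knowing only its directly connected networks (0 hops)
table_b = {"192.168.0.0": (0, "E0"), "192.168.10.0": (0, "S0")}
# ...then hears Router A broadcast its entire routing table
receive_update(table_b, "RouterA", {"172.16.0.0": 1, "192.168.10.0": 0})
print(table_b)   # 172.16.0.0 is learned at 2 hops via RouterA
```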
Below is a typical routing table of a router which uses Distance Vector routing protocols:
Let's explain what is happening here :
In the above picture, you see 4 routers, each connected to its neighbour via some type of WAN link, e.g. ISDN.
Now, when a router is powered on, it will immediately know about the networks to which each
interface is directly connected. In this case Router B knows that interface E0 is connected to the
192.168.0.0 network and the S0 interface is connected to the 192.168.10.0 network.
Looking again at the routing table for Router B, the numbers you see on the right-hand side of the interfaces are the "hop counts" which, as mentioned, are the metric that Distance Vector protocols use to keep track of how far away a particular network is. Since these 2 networks are connected directly to the router's interfaces, they will have a value of zero (0) in the router's table entry. The same rule applies for every router in our example.
Remember, we have "just turned the routers on", so the network is now converging, which means that no data is being passed. When I say "no data" I mean data from any computer or server that might be on any of the networks. During this "convergence" time, the only type of data being passed between the routers is that which allows them to populate their routing tables; after that's done, the routers will pass all other types of data between them. That's why a fast convergence time is a big advantage.
One of the problems with RIP is that it has a slow convergence time.
Let's explain what we see :
In the above picture, the network is said to have "converged"; in other words, all routers on the network have populated their routing tables and are completely aware of the networks they can contact. Since the network is now converged, computers on any of the above networks can contact each other.
Again, looking at one of the routing tables, you will notice the network address with the exit interface on the right, and next to that the hop count to that network. Remember that RIP will only count up to 15 hops, after which the packet is discarded (on hop 16).
Each router will broadcast its entire routing table every 30 seconds.
Routing based on Distance Vector can cause a lot of problems when links go up and down; this can result in infinite loops and can also de-synchronise the network.
Routing loops can occur when routers are not all updated at close to the same time.
Let's have a look at the problem before we look at the various solutions:
Let's explain :
In the above picture you can see 5 routers, of which routers A and B are connected to Router C, and they all end up connecting via routers D and E to Network 5.
Now, as the above picture shows, Network 5 fails.
All routers know about Network 5 from Router E. For example, Router A, in its tables, has a path to Network 5 through routers B, D and E.
When Network 5 fails, Router E knows about it since it's directly connected, and tells Router D about it in its next update (when it broadcasts its entire routing table). This results in Router D no longer routing data to Network 5 through Router E. But as you can see in the above picture, routers A, B and C don't know about Network 5's failure yet, so they keep sending out update information. Router D will eventually send out its update and cause Router B to stop routing to Network 5, but routers A and C are still not updated. To them, it appears that Network 5 is still available through Router B with a metric of 3 !
Now Router A sends out its regular broadcast of its entire routing table, which includes reachability for Network 5. Routers C and B receive the wonderful news that Network 5 can be reached from Router A, so they send out the information that Network 5 is now available !
From now on, any packet with a destination of Network 5 will go to Router A, then to Router B, and from there back to Router A (remember that Router B got the good news that Network 5 is available via Router A).
So this is where things get a bit messy and you end up with that wonderful loop, where data just gets passed around from one router to another. It seems like they are playing ping pong :)
To deal with these problems we use the following techniques:
Maximum Hop Count
The routing loop we just looked at is called "counting to infinity" and it is caused by gossip and wrong information being communicated between the routers. Without something to protect against this type of loop, the hop count will keep increasing each time the packet goes through a router ! One way of solving this problem is to define a maximum hop count. Distance Vector (RIP) permits a hop count of up to 15, so anything that needs 16 hops is unreachable. So if a loop occurred, it would go around the network until the packet reached a hop count of 15, and the next router would simply discard the packet.
Split Horizon
This works on the principle that it's never useful to send information about a route back in the direction from which the original update came. So if, for example, I told you a joke, it's pointless for you to tell me that joke again !
In our example, it would have prevented Router A from sending the updated information it received from Router B back to Router B.
Route Poisoning : An alternative to split horizon. When a router receives information about a route from a particular network, it advertises the route back to that network with a metric of 16, indicating that the destination is unreachable.
In our example, this means that when Network 5 goes down, Router E initiates route poisoning by entering a table entry for Network 5 with a metric of 16, which basically means it's unreachable. This way, Router D is not susceptible to any incorrect updates about the route to Network 5. When Router D receives a route poisoning from Router E, it sends an update, called a poison reverse, back to Router E. This makes sure all routers on the segment have received the poisoned route information.
Route poisoning, used with hold-downs (see the section below), will certainly speed up convergence time, because the neighbouring routers don't have to wait 30 seconds before advertising the poisoned route.
Hold-Down Timers
Routers keep an entry for the network-down state, allowing time for other routers to recompute for this topology change. This allows time for either the downed route to come back up or the network to stabilise somewhat before switching to the next best route.
When a router receives an update from a neighbor indicating that a previously accessible network is no longer working and is inaccessible, the hold-down timer will start. If a new update arrives from a neighbor with a better metric than the original network entry, the hold-down is removed and data is passed. However, if an update is received from a neighbor router before the hold-down timer expires and it has the same or a poorer metric than the previous route, the update is ignored and the hold-down timer keeps ticking. This allows more time for the network to converge.
Hold-down timers use triggered updates, which reset the hold-down timer, to alert the neighboring routers of a change in the network. Unlike regular update messages from neighbor routers, triggered updates create a new routing table that is sent immediately to neighbor routers because a change was detected in the network.
There are three instances when triggered updates will reset the hold-down timer:
1) The hold-down timer expires.
2) The router receives a processing task proportional to the number of links in the internetwork.
3) Another update is received indicating that the network status has changed.
In our example, any update received by Router B from Router A would not be accepted until the hold-down timer expires. This ensures that Router B will not receive a "false" update from any routers that are not aware that Network 5 is unreachable. Router B will then send out an update and correct the other routers' tables.
Please select the Distance Vector protocol you want to read about:
Routing Information Protocol - RIP
Interior Gateway Routing Protocol - IGRP
Routing Information Protocol - RIP
Introduction
Routing Information Protocol (RIP) is a true Distance-Vector routing protocol. It sends the
complete routing table out of all active interfaces every 30 seconds. RIP only uses hop count to determine the best way to a remote network, and it has a maximum allowable hop count of 15, meaning that 16 is deemed unreachable. RIP works well in small networks, but it is inefficient on large networks with slow WAN links or on networks with a large number of routers installed.
RIP comes in two different versions. RIP version 1 uses only classful routing, which means that
all devices in the network must use the same subnet mask. This is because RIP version 1 does not
include the subnet mask when it sends updates. RIP v1 uses broadcasts (255.255.255.255).
RIP version 2, however, does include the subnet mask, and this is what we call classless routing (check the Subnetting section for more details). RIP v2 uses multicasts (224.0.0.9) to update its routing tables.
Route Update Timer: Sets the interval, usually 30 seconds, between periodic routing updates, in
which the router sends a complete copy of its routing table out to all neighbor routers.
Route Invalid Timer: Determines the length of time that must elapse (usually 180 seconds) before
the router determines that a route is invalid. It will come to this conclusion if it doesn't hear any
updates about that route for that period. When the timer expires, the router will send out an
update to its neighbors letting them know that the route is invalid.
Route Flush Timer: Sets the time between a route becoming invalid and its removal from the routing table (240 seconds). Before it's removed, the router notifies its neighbors of that route's impending doom ! The value of the route invalid timer must be less than that of the route flush timer, so as to provide the router with enough time to tell its neighbors about the invalid route before the routing table is updated.
Interior Gateway Protocol - IGRP
Introduction
Interior Gateway Routing Protocol (IGRP) is a Cisco proprietary Distance-Vector routing protocol. This means that all your routers must be Cisco routers in order to use IGRP in your network. Keep in mind, though, that Windows 2000 now supports it as well, because they have bought a licence from Cisco to use the protocol !
Cisco created this routing protocol to overcome the problems associated with RIP.
IGRP has a maximum hop count of 255 with a default of 100. This is helpful in larger networks
and solves the problem of there being a maximum of only 15 hops possible in a RIP network. IGRP also uses a different metric from RIP: by default, IGRP uses the bandwidth and delay of the line as the metric for determining the best route to an internetwork. This is called a composite metric. Reliability, load and Maximum Transmission Unit (MTU) can also be used, although they are not used by default.
IGRP has a set of timers to enhance its performance and functionality:
Update Timers: These specify how frequently routing-update messages should be sent. The default
is 90 seconds.
Invalid Timers: These specify how long a router should wait before declaring a route invalid if it
doesn't receive a specific update about it. The default is three times the update period.
Hold-down Timers: These specify the hold-down period. The default is three times the update
timer period plus 10 seconds.
Route Flush Timers: These indicate how much time should pass before a route is flushed from the routing table. The default is seven times the routing update period.
Link State Routing Protocols
Introduction
Link State protocols, unlike Distance Vector protocols (which broadcast), use multicast.
Multicast is a "broadcast" to a group of hosts, in this case routers (please see the multicast page for more information). So if I had 10 routers, of which 4 were part of a "multicast group", then when I sent out a multicast packet to this group, only these 4 routers would receive the updates, while the rest would simply ignore the data. The multicast addresses usually used are 224.0.0.5 & 224.0.0.6; these are the addresses used by OSPF (Open Shortest Path First) routers.
Link State routing protocols do not view networks in terms of adjacent routers and hop counts; instead, they build a comprehensive view of the overall network which fully describes all possible routes along with their costs. Using the SPF (Shortest Path First) algorithm, the router creates a "topological database" which is a hierarchy reflecting the network routers it knows about. It then puts itself at the top of this hierarchy, so that it has a complete picture of the network from its own perspective.
When a router using a Link State protocol, such as OSPF (Open Shortest Path First), learns about a change on the network, it will multicast this change instantly, thereby flooding the network with this information. The information routers require to build their databases is provided in the form of Link State advertisement packets (LSAP). Routers do not advertise their entire routing tables; instead, each router advertises only its information regarding immediately adjacent routers.
Link State protocols, in comparison to Distance Vector protocols, have:
• Big memory requirements
• Shortest path computations which require many CPU cycles
• Little bandwidth usage if the network is stable, but a quick reaction to topology changes
• Announcements which cannot be "filtered" - all items in the database must be sent to neighbours
• The requirement that all neighbours must be trusted
• Authentication mechanisms which can be used to avoid undesired adjacencies
• No split horizon techniques
Even though Link State protocols work more efficiently, problems can arise. Usually problems occur because of changes in the network topology (links going up or down) when not all routers get updated immediately, as they might be on different line speeds; routers connected via a fast link will therefore receive these changes faster than the others on a slower link.
Different techniques have been developed to deal with these problems, and these are :
1) Dampen update frequency
2) Target link-state updates to multicast
3) Use link-state area hierarchy for topology
4) Exchange route summaries at area borders
5) Use time-stamps, update numbering & counters
6) Manage partitions using an area hierarchy
Please select one of the following Link State routing protocols:
Open Shortest Path First - OSPF
Open Shortest Path First (OSPF) Routing Protocol
Introduction
Open Shortest Path First (OSPF) is a routing protocol developed for Internet Protocol (IP)
networks by the interior gateway protocol (IGP) working group of the Internet Engineering Task
Force (IETF). The working group was formed in 1988 to design an IGP based on the shortest path
first (SPF) algorithm for use in the Internet. Similar to the Interior Gateway Routing Protocol
(IGRP), OSPF was created because in the mid-1980s, the Routing Information Protocol (RIP)
was increasingly unable to serve large, heterogeneous internetworks.
OSPF is a classless routing protocol, which means that in its updates it includes the subnet mask of each route it knows about, thus enabling variable-length subnet masks. With variable-length subnet masks, an IP network can be broken into many subnets of various sizes. This provides network administrators with extra network-configuration flexibility. These updates are multicast to specific addresses (224.0.0.5 and 224.0.0.6).
The cool 3D diagram below shows us the information that each field of an OSPF packet contains:
Analysis Of "Type" Field
All OSPF packets begin with a 24-byte header, which is shown right above. There is, however, one field I would like to give a bit more attention to, and this is the "Type" field, which is 1 byte long. As illustrated in the diagram, the "Type" field identifies the OSPF packet type as one of the following:
• Hello: Establishes and maintains neighbor relationships.
• Database Description: Describes the contents of the topological database. These messages are exchanged when an adjacency is initialized.
• Link-state Request: Requests pieces of the topological database from neighbor routers. These messages are exchanged after a router discovers (by examining database-description packets) that parts of its topological database are out of date.
• Link-state Update: Responds to a link-state request packet. These messages are also used for the regular dispersal of link-state advertisements (LSAs). Several LSAs can be included within a single link-state update packet.
• Link-state Acknowledgment: Acknowledges link-state update packets.
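For the curious, here is a Python sketch that packs and unpacks such a 24-byte header, using the field layout from the OSPF specification (version, type, packet length, router ID, area ID, checksum, AuType, authentication). The sample values are hypothetical:

```python
import struct

OSPF_TYPES = {1: "Hello", 2: "Database Description", 3: "Link-state Request",
              4: "Link-state Update", 5: "Link-state Acknowledgment"}

HEADER = "!BBH4s4sHH8s"   # 1+1+2+4+4+2+2+8 = 24 bytes, network byte order

def parse_ospf_header(raw: bytes):
    """Unpack the 24-byte OSPF header and name the packet type."""
    version, ptype, length, router_id, area_id, cksum, autype, auth = \
        struct.unpack(HEADER, raw[:24])
    return version, OSPF_TYPES.get(ptype, "unknown"), length

# A hand-made OSPFv2 Hello header, 44 bytes total (hypothetical values)
hdr = struct.pack(HEADER, 2, 1, 44,
                  bytes([10, 0, 0, 1]),   # router ID 10.0.0.1
                  bytes(4), 0, 0, bytes(8))
print(parse_ospf_header(hdr))             # (2, 'Hello', 44)
```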
OSPF has two primary characteristics:
1) The protocol is open (non-proprietary), which means that its specification is in the public domain. The OSPF specification is published as Request For Comments (RFC) 1247.
2) The second principal characteristic is that OSPF is based on the SPF algorithm, which
sometimes is referred to as the Dijkstra algorithm, named for the person credited with its creation.
OSPF is a Link State routing protocol that calls for the sending of link-state advertisements
(LSAs) to all other routers within the same hierarchical area. Information on attached interfaces,
metrics used, and other variables is included in OSPF LSAs. As OSPF routers accumulate link-state information, they use the SPF algorithm to calculate the shortest path to each node.
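If you would like to see the SPF (Dijkstra) idea in code, here is a short Python sketch run over a made-up topological database, where each entry maps a router to its neighbours and the cost of each link:

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's SPF: cheapest total cost from source to every router."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue                       # stale queue entry, skip it
        for neighbour, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return dist

# Hypothetical topological database: router -> {neighbour: link cost}
topology = {"A": {"B": 10, "C": 1},
            "B": {"A": 10, "C": 2},
            "C": {"A": 1, "B": 2}}
print(shortest_paths(topology, "A"))   # {'A': 0, 'B': 3, 'C': 1}
```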
As a Link State routing protocol, OSPF contrasts with RIP and IGRP, which are Distance Vector
routing protocols. Routers running the Distance Vector algorithm send all or a portion of their
routing tables in routing-update messages to their neighbors.
Additional OSPF features include equal-cost multipath routing, and routing based on upper-layer
type-of-service (TOS) requests. TOS-based routing supports those upper-layer protocols that can
specify particular types of service. An application, for example, might specify that certain data is
urgent. If OSPF has high-priority links at its disposal, these can be used to transport the urgent
datagram.
OSPF supports one or more metrics. If only one metric is used, it is considered to be arbitrary,
and TOS is not supported. If more than one metric is used, TOS is optionally supported through
the use of a separate metric (and, therefore, a separate routing table) for each of the eight
combinations created by the three IP TOS bits (the delay, throughput, and reliability bits). If, for
example, the IP TOS bits specify low delay, low throughput, and high reliability, OSPF calculates
routes to all destinations based on this TOS designation.
Hybrid Routing Protocols
Introduction
Hybrid routing protocols are something in between Distance Vector and Link State routing protocols.
Please select the Hybrid protocol you want to read about:
Enhanced Interior Gateway Routing Protocol - EIGRP
Enhanced Interior Gateway Routing Protocol - EIGRP
Introduction
Enhanced Interior Gateway Routing Protocol (EIGRP) is another Cisco proprietary interior gateway protocol (IGP); it is a hybrid (it has features of both Distance Vector and Link State protocols) used by routers to exchange routing information. EIGRP uses a composite metric composed of Bandwidth, Delay, Reliability and Loading to determine the best path between two locations. EIGRP can route IP, IPX and AppleTalk. Along with IS-IS, it is one of the few multi-protocol routing protocols.
The Diffusing Update Algorithm (DUAL) is the heart of EIGRP. In essence, DUAL always keeps
a backup route in mind, in case the primary route goes down. DUAL also limits how many
routers are affected when a change occurs to the network.
There is no maximum allowable number of hops. In an EIGRP network, each router multicasts "hello" packets to discover its adjacent neighbours. This adjacency database is shared with other routers to build a topology database. From the topology database, the best route (the Successor) and the second best route (the Feasible Successor) are found.
EIGRP is classless, meaning it does include the subnet mask in routing updates. However, by default 'auto-summary' is enabled; you must disable it if you want subnet information from other major networks.
The EIGRP metric can be a complex calculation, but by default it only uses bandwidth and delay to determine the best path.
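As a rough illustration, here is the commonly published default form of that calculation (K1 = K3 = 1, all other K values 0) in Python; the bandwidth and delay figures below are just example values:

```python
def eigrp_metric(min_bandwidth_kbps: int, total_delay_usec: int) -> int:
    """Default EIGRP composite metric: bandwidth and delay only.

    Bandwidth term: 10^7 divided by the slowest link on the path (kbit/s).
    Delay term:     cumulative delay along the path, in tens of usec.
    """
    bandwidth = 10**7 // min_bandwidth_kbps
    delay = total_delay_usec // 10
    return 256 * (bandwidth + delay)

# A path whose slowest link is a T1 (1544 kbit/s) with 40,000 usec delay
print(eigrp_metric(1544, 40_000))   # 2681856 - the lower, the better
```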
Hubs & Repeaters
Introduction
Here we will talk about hubs and explain how they work. In the next section we will move to
switches and how they differ from hubs, how they work and the types of switching methods that
are available; we will also compare them.
Before we start there are a few definitions which I need to speak about so you can understand the
terminology we will be using.
Domain: Defined as a geographical area or a logical area (in our imagination) where anything inside it becomes part of the domain. In computer land, this means that when something happens in this domain (area), every computer that's part of it will see or hear everything that happens within it.
Collision Domain: Putting it simply, whenever a collision between two computers occurs, every other computer within the domain will hear and know about the collision. These computers are said to be in the same collision domain. As you're going to see later on, when computers connect together using a hub, they become part of the same collision domain. This doesn't happen with switches.
Broadcast Domain: A domain where every broadcast (a broadcast is a frame or data which is sent to every computer) is seen by all computers within the domain. Hubs and switches do not break up broadcast domains; you need a router to achieve this.
There are different devices which can break up collision domains and broadcast domains and make the network a lot faster and more efficient. Switches create separate collision domains but not separate broadcast domains. Routers create separate broadcast and collision domains. Hubs are too simple to do either; they can't create separate collision or broadcast domains.
Hubs & Repeaters
Hubs and repeaters are basically the same, so we will be using the term "hub" to keep things simple. Hubs are common today in every network. They are the cheapest way to connect two or more computers together. Hubs are also known as repeaters and work on the first layer of the OSI model. They are said to work on the first layer because of the function they perform: they don't read the data frames at all (as switches and routers do), they only make sure the frame is repeated out of each port, and that's about it.
The Nodes that share an Ethernet or Fast Ethernet LAN using the CSMA/CD rules are said to be
in the same collision domain. In plain English, this means that all nodes connected to a hub are
part of the same collision domain. In a Collision domain, when a collision occurs everyone in that
domain/area will hear it and will be affected. The Ethernet section talks about CSMA/CD and
collision domains since they are part of the rules under which Ethernet functions.
The picture below shows a few hubs: an 8-port Netgear hub and a D-Link hub.
The computers (nodes) connect to the hub using Unshielded Twisted Pair cable (UTP). Only one
node can be connected to each port of the hub. The pictured hub has a total of 8 ports, which
means up to 8 computers can be networked.
When hubs were not that common and were also expensive, most offices and home networks used to install coax cable instead.
The way hubs work is quite simple and straightforward: when a computer on any one of the eight ports transmits data, it is replicated and sent out to the other seven ports. Check out the picture below, which shows this clearly.
EXPLANATION:
Node 1 is transmitting some data to Node 6, but all nodes are receiving the data as well. This data will be rejected by the rest of the nodes once they figure out it's not for them.
This is accomplished by the node's network card reading the destination MAC address of the frame (data) it receives; it examines it, sees that it doesn't match its own and therefore discards the frame. Please see the Datalink layer in the OSI section for more information on MAC addresses.
Most hubs these days also have a special port which can function as a normal port or as an "uplink" port. An uplink port allows you to connect another hub to the existing one, increasing the number of ports available to you. This is a cheap solution when you need to get a few more computers networked, and it works quite well up to a point.
This is how 2 eight port hubs would look when connected via the uplink port and how the data is
replicated to all 16 ports :
In the above picture you can see that Node 1 is again transmitting data to Node 6 and that every other node connected to the hubs is receiving the information. As we said, this is a pretty good and cheap solution, but as the network gets busier, you can clearly understand that there is going to be a lot of unnecessary data flowing all over the network. All nodes here are in the same broadcast and collision domain, since they will hear every broadcast and collision that occurs.
This is the same situation you get when you use coax cable, where every node or computer is
connected onto the same cable and the data that's put onto it travels along the cable and is
received by every computer.
You probably also noticed the two orange boxes labelled "50 Ohm". These are called terminating resistors and are used on both ends of the coax cable so that when the signal gets to them, it's absorbed and you don't get the signal reflecting back. Think of them as shock absorbers, with the data signal being the shock wave which gets absorbed when it reaches the terminating resistors. The coax cable can be up to 185 meters long and can contain no more than 30 nodes per segment. What you're looking at in the above picture is one segment, 25 meters long, with 4 nodes attached to it.
Now, coming back to the hubs: there are a few standard features most of them have. These include a link and activity LED for each port, a power LED and a collision LED. Some hubs have separate link lights and activity lights; others combine them into one, where the link light will flash when there is activity and otherwise remains constantly on. The Netgear hub displayed at the beginning of this page has two separate LEDs for activity and link, but the Compex hub below has only one.
This little hub also contains a special BNC connection so you can connect a coax cable to it. When you do connect one, the BNC light comes on. Notice the label at the top, where they have written "8 port Ethernet Repeater".
As we have already said, hubs are just simple repeaters.
The collision light on the hubs will only light up when a collision is detected. A collision is when 2 computers or nodes try to talk on the network at the same time. When this happens, their frames will collide and become corrupted. Hubs are smart enough to detect this and will light up the collision LED for a short time (1/10 of a second for each collision). If you find yourself wondering why they couldn't make things work so that more than two computers can talk on the network at the same time, then I would ask you to visit the Ethernet section where all this is explained in detail. Collisions, and the fact that only one computer can talk on the network at any given time, along with the cabling rules, are all part of the Ethernet rules. Remember that any node connected to a hub becomes part of the same collision domain.
Switches & Bridges
Introduction
By now you can see the limitations of a simple hub, and when you also read about Ethernet, you start to understand that there are even more limitations. The companies who manufacture hubs saw the big picture quickly and came out with something more efficient: bridges, and then switches came along! Bridges are analysed later on in this section.
Switching Technology
As we mentioned earlier, hubs work at the first layer of the OSI model and simply receive and
transmit information without examining any of it.
Switches (layer-2 switching) are a lot smarter than hubs and operate on the second layer of the OSI model. What this means is that a switch won't simply receive data and transmit it out of every port, but will read the data and find out the frame's destination by checking the MAC address. The destination MAC address is always located at the beginning of the frame, so once the switch reads it, the frame is forwarded to the appropriate port, and no other node or computer connected to the switch will see it.
Switches use Application Specific Integrated Circuits (ASICs) to build and maintain filter tables. Layer-2 switches are a lot faster than routers because they don't look at the Network layer (Layer-3) header or, if you like, information. Instead, all they look at is the frame's hardware address (MAC address) to determine whether the frame needs to be forwarded or dropped. If we had to point out a few features of switches, we would say:
• They provide hardware-based bridging (MAC addresses)
• They work at wire speed and therefore have low latency
• They come in 3 different types: Store & Forward, Cut-Through and Fragment Free (analysed later)
Below is a picture of two typical switches. Notice how they look similar to hubs, but they aren't. The difference is on the inside!
The Three Stages
All switches, regardless of brand and the various enhancements they carry, have something in common: the three stages (sometimes two stages) they go through when powered up and during operation. These are as follows:
- Address Learning
- Forward/Filter decisions
- Loop Avoidance (Optional)
Let's have a look at them to get a better understanding!
Address Learning
When a switch is powered on, the MAC filtering table is empty. When a device transmits and an interface receives a frame, the switch places the frame's source address in the MAC filtering table, remembering the interface on which the device is located. The switch has no choice but to flood the network with this frame because it has no idea where the destination device is located.
If a device answers and sends a frame back, then the switch will take the source address from that
frame and place the MAC address in the database, associating this address with the interface that
received the frame.
Since the switch now has two MAC addresses in the filtering table, the devices can make a point-to-point connection and the frames will only be forwarded between the two devices. This makes layer-2 switches better than hubs. As we explained earlier on this page, in a hub network all frames are forwarded out all ports every time. Most desktop switches these days can hold up to 8,000 MAC addresses in their table, and once the table is full, the switch starts overwriting entries beginning with the very first one. Even though that number of entries might sound big, the table can fill up within a minute or two, and if a workstation doesn't talk on the network for that amount of time, chances are its MAC address has been removed from the table and the switch will flood any frame destined for that workstation out all ports.
After the first frame has been successfully received by Node 2, Node 2 sends a reply to Node 1; check out what happens:
Notice how the frame is not transmitted to every node on the switch. The switch by now has already learned that Node 1 is on the first port, so it sends the frame straight there without delay. From now on, any communication between the two will be a point-to-point connection:
Forward/Filter Decision
When a frame arrives at the switch, the first step is to check the destination hardware address, which is compared against the forward/filter MAC database. If the destination hardware address is known, the frame is transmitted out the correct port; if it is not known, the switch broadcasts the frame out all ports except the one on which it was received. If a device (computer) answers the broadcast, the MAC address of that device is added to the switch's MAC database.
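If you're a programmer, the learning and forward/filter logic might be easier to digest as code. Below is a minimal Python sketch of a switch's MAC filtering table; the port numbers, the 8,000-entry limit and the frame fields are illustrative assumptions, not any vendor's actual implementation:

# Minimal sketch of layer-2 address learning and forward/filter decisions.
class Switch:
    def __init__(self, num_ports, table_size=8000):
        self.num_ports = num_ports
        self.table_size = table_size
        self.mac_table = {}                      # MAC address -> port

    def receive(self, src_mac, dst_mac, in_port):
        # Address learning: remember which port the source lives on.
        if src_mac not in self.mac_table and len(self.mac_table) >= self.table_size:
            self.mac_table.pop(next(iter(self.mac_table)))  # overwrite oldest entry
        self.mac_table[src_mac] = in_port
        # Forward/filter decision: a known destination goes out one port only,
        # an unknown destination is flooded out every port except the incoming one.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

switch = Switch(num_ports=8)
print(switch.receive("AA:AA", "BB:BB", in_port=0))  # [1..7] - flooded, BB:BB unknown
print(switch.receive("BB:BB", "AA:AA", in_port=3))  # [0] - point-to-point from now on

Notice how the first frame is flooded out every port except the incoming one, while the reply comes back as a point-to-point forward, exactly as described above.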
Loop Avoidance (Optional)
It's always a good idea to have a redundant link between your switches, in case one decides to go on holiday. However, when you set up redundant switches in your network to guard against failures, you can create problems. Have a look at the picture below and I'll explain:
The above picture shows an example of two switches which have been placed in the network to
provide redundancy in case one fails. Both switches have their first port connected to the upper
section of the network, while their port 2 is connected to the lower section of the same network.
This way, if Switch A fails, then Switch B takes over, or vice versa.
Things will work fine until a broadcast comes along and causes a lot of trouble. For the simplicity of this example, I am not going to show any workstations, only the server which is going to send a broadcast over the network. Keep in mind that this is what happens in real life if your switch does not support the Spanning-Tree Protocol (STP), which is why I stuck the "Optional" next to "Loop Avoidance" at the start of this section:
It might look a bit messy and crazy at first glance, but let me explain what is going on here. The Server, for one reason or another, decides to send a broadcast. This First Round (check arrow) broadcast is sent down the network cable and first reaches Port 1 on Switch A. As a result, since Switch A has Port 2 connected to the other side of the LAN, it sends the broadcast out to the lower section of the network; this then travels down the wire and reaches Port 2 on Switch B, which sends it out Port 1 and back onto the upper part of the network. At this point, as the arrows indicate (orange colour), the Second Round of this broadcast starts. So again, the broadcast reaches Port 1 of Switch A and goes out Port 2, back down to the lower section of the network and back up via Port 2 of Switch B. After it comes out of Port 1 of Switch B, we get the Third Round, then the Fourth Round, the Fifth Round, and it keeps on going without stopping!
This is what we call a Broadcast Storm.
A Broadcast Storm will repeat constantly, chewing up valuable bandwidth on the network. This is a major problem, so it had to be solved one way or another, and it was... with the Spanning-Tree Protocol, or STP for short. What STP does is find the redundant links, which in this case would be Port 2 of Switch B, and shut them down, thus eliminating the possibility of a loop occurring.
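To see why the loop never dies down on its own, here is a small hypothetical Python simulation of the two-switch topology above. Without a blocked port, the broadcast just keeps circulating (we give up counting after five rounds); with one redundant port in blocking state, the way STP would put it, the storm dies immediately:

# Hypothetical simulation of the two-switch loop shown above. Each switch floods
# a broadcast received on one port out of its other port, handing it straight
# back to the other switch, round after round.
def broadcast_storm(stp_blocking, max_rounds=5):
    rounds = 0
    carrying = "Switch A"                # which switch is about to flood the frame
    while rounds < max_rounds:
        rounds += 1
        if carrying == "Switch B" and stp_blocking:
            break                        # STP has blocked Switch B's redundant port
        carrying = "Switch B" if carrying == "Switch A" else "Switch A"
    return rounds

print(broadcast_storm(stp_blocking=False))  # 5 - still circulating when we give up
print(broadcast_storm(stp_blocking=True))   # 2 - the loop dies at the blocked port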
LAN Switch Types
At the beginning of this page we said that switches are fast and therefore have low latency. This latency does vary and depends on which switching mode the switch is operating in. You might recall seeing these three switching modes at the beginning: Store & Forward, Cut-Through and Fragment Free.
The picture below shows how far the different switching modes check the frame:
So what does this all mean? Switching modes? I don't understand!
Let's Explain!
The fact is that switches can operate in one of three modes. Some advanced switches will allow you to actually pick the mode you would like them to operate in, while others don't give you any choice. Let's have a quick look at each mode:
Store & Forward mode
This is one of the most popular switching methods. In this mode, when the switch receives a frame from one of its ports, it will store it in memory, check it for errors and corruption, and if it passes the test, it will forward the frame out the designated port; otherwise, if it discovers that the frame has errors or is corrupt, it will discard it. This method is the safest, but also has the highest latency.
Cut-Through (Real Time)
Cut-Through switching is the second most popular method. In this mode, the switch reads the frame only until it learns the destination MAC address of the frame it's receiving. Once it learns it, it will forward the frame straight out the designated port without delay. This is why we call it "Real Time": there is no delay and no error checking done to the frame.
Fragment Free
The Fragment Free switching method is mainly used to check for frames which have been subject to a collision. Only the frame's first 64 bytes are checked before the frame is forwarded out the designated port. The reason for this is that almost all collisions happen within the first 64 bytes of a frame, so if there is corruption in the first 64 bytes, it's most likely that the frame was the victim of a collision.
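To sum up the three modes, the difference really boils down to how much of the frame is examined before it is forwarded. Here is a rough Python sketch; the byte counts assume a standard Ethernet frame where the destination MAC occupies the first 6 bytes:

# How far into the frame each switching mode reads before forwarding.
def bytes_checked(mode, frame_len):
    if mode == "store_and_forward":
        return frame_len        # whole frame buffered and checked for errors
    if mode == "cut_through":
        return 6                # only the destination MAC address
    if mode == "fragment_free":
        return 64               # the collision window: first 64 bytes
    raise ValueError(mode)

for mode in ("cut_through", "fragment_free", "store_and_forward"):
    print(mode, bytes_checked(mode, frame_len=1518))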
Just keep one important detail in mind: when you go out to buy a switch, make sure you check the amount of memory it has. A lot of the cheap switches which support the Store & Forward mode have very small memory buffers (256KB-512KB) per port. The result is a major decrease in performance when you have more than two computers communicating via that switch, because there isn't enough memory to store all incoming packets (this also depends on the switching type your switch supports), and you eventually get packets being discarded.
The table below is a guide to the amounts of memory you should be looking at for switches of different configurations:
Bridges
Bridges are really just like switches, but there are a few differences, which we will mention but not expand upon. These are the following:
- Bridges are software based, while switches are hardware based, because switches use an ASIC chip to help them make filtering decisions.
- Bridges can only have one spanning-tree instance per bridge, while switches can have many.
- Bridges can only have up to 16 ports, while a switch can have hundreds!
That's as far as we will go with bridges, since they are pretty much old technology and you probably won't see many around.
Introduction To Routers
Introduction
Welcome to the Routers section. Here we will analyse routers in quite some depth: what they do and how they work. I should point out that you will need some knowledge of the OSI model and an understanding of how data is sent across the network medium. If you find the information a bit too confusing or don't quite understand it, I would suggest you go back to the networking section and do some reading on the OSI model and Protocols.
You will find information on Cisco routers at the end of this page.
What are they and what do they do ?
Routers are very common today in every network area; this is mainly because every network these days connects to some other network, whether it's the Internet or some other remote site. Routers get their name from what they do, which is route data from one network to another. For example, if you had a company with one office in Sydney and another in Melbourne, then to connect the two you would use a leased line with a router attached at each end. Any traffic which needs to travel from one site to the other will be routed via the routers, while all the other unnecessary traffic is filtered (blocked), thus saving you valuable bandwidth and money.
There are two types of routers: 1) Hardware routers 2) Software routers.
So what's the difference ?
When people talk about routers, they usually don't use the terms "hardware" or "software" router
but we are, for the purpose of distinguishing between the two.
Hardware routers are small boxes which run special software created by their vendors to give
them the routing capability and the only thing they do is simply route data from one network to
another. Most companies prefer hardware routers because they are faster and more reliable, even
though their cost is considerably more when compared with a software router.
So what does a hardware router look like? Check the picture below; it displays a Cisco 1600 and a 2500 series router along with a Netgear RT338 router. They look like small boxes and run special software, as we said.
CISCO 1600 Series Router
CISCO 2500 Series Router
NetGear RT338 Router
Software routers do the same job as the above hardware routers (route data), but they don't come in small flashy boxes. A software router could be an NT server, NetWare server or Linux server. All network servers have built-in routing capabilities.
Most people use them as Internet gateways and firewalls, but there is one big difference between hardware and software routers: you cannot (in most cases) simply replace a hardware router with a software router. Why? Simply because the hardware router has the necessary hardware built in to allow it to connect to the special WAN link (frame relay, ISDN, ATM etc.), whereas your software router (e.g. an NT server) would have a few network cards, one of which connects to the LAN while the other goes to the WAN via the hardware router.
I have seen a few cards in the market which allow you to connect an ISDN line directly into them.
With these special cards, which retail from $5000 to $15000 depending on their capacity, you
don't need the hardware router. But as you can understand, it's a much cheaper solution to buy a
hardware router. Plus, the hardware routers are far more advanced and faster than the software
routers since they don't have to worry about anything else but routing data, and the special
electronic components they have in them are developed with this in mind.
The Flash image below shows us what a router does when it receives packets from the LAN or the Internet. Depending on the source and destination, it will pass them to the other network or send them to the Internet. The router is splitting the network below into two. Each network has a hub to which all computers on that network connect. Furthermore, the router has one interface connected to each network and one connected to the Internet, which allows it to pass the packets to the right destination:
The picture below illustrates a router's place in the Local Area Network (LAN):
In the example shown, the workstations see the router as their "gateway". This means that any
machine on this LAN that wants to send a packet (data) to the Internet or anywhere outside its
Local Area Network (LAN) will send the packet via the gateway. The router (gateway) will know
where it needs to send it from there on so it can arrive at its destination.
This explains why you need to add an Internet Protocol (IP) number for a gateway in the TCP/IP network properties of your Windows workstation when you have a LAN at home or in the office.
The above figure shows only one example of how routers connect so the LAN gets Internet access. Let's have a look at how two offices would use routers to connect to each other.
The routers in the above picture connect using a particular WAN protocol, e.g. ISDN.
In reality, there would be a cable (provided by your service provider) which connects to the
"WAN" interface of the router and from there the signal goes straight to your service provider's
network and eventually ends up at the other router's WAN interface.
Depending on the type of router you get, it will support one or more of the most commonly used WAN protocols: ISDN, Frame Relay, ATM, HDLC and PPP. These protocols are discussed in the protocols section.
It's important to note down and remember a few of the main features of a router:
- Routers are Layer 3 devices
- Routers will not propagate broadcasts, unless they are programmed to
- Most serious routers have their own operating system
- Routers use special protocols between them to exchange information about each other (not data)
The above flash shows you how routers on the Internet work. In the example, your computer
which is located on the left is requesting data from a web server and the web server is responding
to your computer by sending it the requested data. The path which is taken for all transactions
will not remain the same, but will change, depending on the traffic and best routes available.
Now that you have a good idea of what a router looks like and what its purpose is, we are going to have a good look at one of the most popular router brands - Cisco.
Please choose one of the following sections:
Basics of Cisco routers - Learn the basics for the popular Cisco routers
The Modes in a Cisco router - Learn how to configure Cisco routers
Routing Protocols - Common protocols routers use to communicate and exchange information
Basics Of Cisco Routers
Introduction
Cisco is well known for its routers and switches. I must admit they are very good quality products
and once they are up and running, you can pretty much forget about them because they rarely fail.
We are going to focus on routers here since that's the reason you clicked on this page !
Cisco has a number of different routers, amongst them are the popular 1600 series, 2500 series
and 2600 series. The ranges start from the 600 series and go up to the 12000 series (now we are
talking about a lot of money).
Below are a few of the routers mentioned :
Cisco 7200 Series
Cisco 700 Series
Cisco 800 Series
Cisco 1600 Series
Cisco 2600 Series
All the above equipment runs special software called the Cisco Internetwork Operating System or
IOS. This is the kernel of Cisco routers and most switches. Cisco has created what they call Cisco
Fusion, which is supposed to make all Cisco devices run the same operating system.
We are going to begin with the basic components which make up a Cisco router (and switches)
and I will be explaining what they are used for, so grab that tea or coffee and let's get going !
The basic components of any Cisco router are :
1) Interfaces
2) The Processor (CPU)
3) Internetwork Operating System (IOS)
4) RXBoot Image
5) RAM
6) NVRAM
7) ROM
8) Flash memory
9) Configuration Register
Now I just hope you haven't looked at the list and thought "Stuff this, it looks hard and
complicated" because I assure you, it's less painful than you might think ! In fact, once you read it
a couple of times, you will find all of it easy to remember and understand.
Interfaces
These allow us to use the router ! The interfaces are the various serial ports or ethernet ports
which we use to connect the router to our LAN. There are a number of different interfaces but we
are going to hit the basic stuff only.
Here are some of the names Cisco has given some of the interfaces: E0 (first Ethernet interface), E1 (second Ethernet interface), S0 (first Serial interface), S1 (second Serial interface), BRI 0 (first Basic Rate ISDN interface) and BRI 1 (second Basic Rate ISDN interface).
In the picture below you can see the back view of a Cisco router, where you can clearly see the various interfaces it has (we are only looking at ISDN routers):
You can see that it even has phone sockets ! Yes, that's normal since you have to connect a digital
phone to an ISDN line and since this is an ISDN router, it has this option with the router. I
should, however, explain that you don't normally get routers with ISDN S/T and ISDN U
interfaces together. Any ISDN line requires a Network Terminator (NT) installed at the
customer's premises and you connect your equipment after this terminator. An ISDN S/T
interface doesn't have the NT device built in, so you need an NT device in order to use the router.
On the other hand, an ISDN U interface has the NT device built in to the router.
Check the picture below to see how to connect the router using the different ISDN interfaces:
Apart from the ISDN interfaces, we also have an Ethernet interface that connects to a device in your LAN, usually a hub or a computer. If connecting to a hub's uplink port, you set the small switch to "Hub", but if connecting to a PC, you need to set it to "Node". This switch simply converts the cable from a straight-through (hub) to an x-over (Node):
The Config or Console port is a female DB9 connector which you connect, using a special cable, to your computer's serial port, and it allows you to directly configure the router.
The Processor (CPU)
All Cisco routers have a main processor that takes care of the main functions of the router. The
CPU generates interrupts (IRQ) in order to communicate with the other electronic components in
the router. Cisco routers utilise Motorola RISC processors. Usually the CPU utilisation on a normal router wouldn't exceed 20%.
The IOS
The IOS is the main operating system on which the router runs. The IOS is loaded upon the
router's bootup. It usually is around 2 to 5MB in size, but can be a lot larger depending on the
router series. The IOS is currently on version 12, and Cisco periodically releases minor versions every couple of months, e.g. 12.1, 12.3 etc., to fix small bugs and also add extra functionality.
The IOS gives the router its various capabilities and can also be updated or downloaded from the
router for backup purposes. On the 1600 series and above, you get the IOS on a PCMCIA Flash
card. This Flash card then plugs into a slot located at the back of the router and the router loads
the IOS "image" (as they call it). Usually this image of the operating system is compressed so the
router must decompress the image in its memory in order to use it.
The IOS is one of the most critical parts of the router, without it the router is pretty much useless.
Just keep in mind that it is not necessary to have a flash card (as described above with the 1600
series router) in order to load the IOS. You can actually configure most Cisco routers to load the
image off a network tftp server or from another router which might hold multiple IOS images for
different routers, in which case it will have a large capacity Flash card to store these images.
The RXBoot Image
The RXBoot image (also known as Bootloader) is nothing more than a "cut-down" version of the
IOS located in the router's ROM (Read Only Memory). If you have no Flash card to load the IOS from, you can configure the router to load the RXBoot image instead, which gives you the ability to perform minor maintenance operations and bring various interfaces up or down.
The RAM
The RAM, or Random Access Memory, is where the router loads the IOS and the configuration
file. It works exactly the same way as your computer's memory, where the operating system loads
along with all the various programs. The amount of RAM your router needs is subject to the size
of the IOS image and configuration file you have. To give you an indication of the amounts of
RAM we are talking about, in most cases, smaller routers (up to the 1600 series) are happy with
12 to 16 MB while the bigger routers with larger IOS images would need around 32 to 64 MB of
memory. Routing tables are also stored in the system's RAM so if you have large and complex
routing tables, you will obviously need more RAM !
When I tried to upgrade the RAM on a Cisco 1600 router, I unscrewed the case and opened it and
was amazed to find a 72 pin SIMM slot where you needed to attach the extra RAM. For those
who don't know what a 72 pin SIMM is, it's basically the type of RAM the older Pentium socket
7 CPUs took, back in '95. This type of memory was replaced by today's standard 168 pin DIMMs
or SDRAM.
The NVRAM (Non-Volatile RAM)
The NVRAM is a special memory place where the router holds its configuration. When you
configure a router and then save the configuration, it is stored in the NVRAM. This memory is
not big at all when compared with the system's RAM. On a Cisco 1600 series, it is only 8 KB
while on bigger routers, like the 2600 series, it is 32 KB. Normally, when a router starts up, after
it loads the IOS image it will look into the NVRAM and load the configuration file in order to
configure the router. The NVRAM is not erased when the router is reloaded or even switched off.
ROM (Read Only Memory)
The ROM is used to start and maintain the router. It contains some code, like the Bootstrap and
POST, which helps the router do some basic tests and bootup when it's powered on or reloaded.
You cannot alter any of the code in this memory as it has been set from the factory and is Read
Only.
Flash Memory
The Flash memory is that card I spoke about in the IOS section. All it is, is an EEPROM (Electrically Erasable Programmable Read Only Memory) card. It fits into a special slot normally
located at the back of the router and contains nothing more than the IOS image(s). You can write
to it or delete its contents from the router's console. Usually it comes in sizes of 4MB for the
smaller routers (1600 series) and goes up from there depending on the router model.
Configuration Register
Keeping things simple, the Configuration Register determines whether the router is going to boot the IOS image from its Flash, from a tftp server, or just load the RXBoot image. This is a 16-bit register; in other words, it holds 16 zeros or ones. A sample value in hex would be 0x2102, which in binary is: 0010 0001 0000 0010.
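If you want to check the hex-to-binary conversion yourself, a couple of lines of Python will do it:

# 0x2102 expressed as a 16-bit binary string, grouped in nibbles.
value = 0x2102
bits = format(value, "016b")
print(" ".join(bits[i:i+4] for i in range(0, 16, 4)))   # 0010 0001 0000 0010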
Cisco Router Modes
Introduction
From my personal experience, I have noticed that the lower end routers (600-1400) use different
commands than the mid to upper range routers (1600 and above). The commands we are going to
talk about here cover most aspects of the 1600, 1700, 2500, 2600, 3600 series. Most are the same,
but there are always a few variations to these commands depending on the interfaces your router
has, IOS version, and the type of WAN protocols they support.
Because there is such a wide range of interfaces on a router, and also a lot of different versions of the Cisco IOS, I decided to stick to an example where our router is running IOS version 12 and has one ISDN S/T (without NT terminator) interface and one Ethernet interface. That's a total of two interfaces. I understand that this is quite a specific example, but it would take an enormous amount of time and effort to cover all cases.
Now, when you power up a Cisco router, it will first run a POST test to ensure all hardware is ok,
and then look into the Flash to load the IOS. Once the IOS is loaded, it will then check the
NVRAM for any configuration file. Since this is a new router, it won't find any, so the router will
go into "setup mode".
Setup Mode
The setup mode is a step-by-step process which helps you configure basic aspects of the router.
When using this setup mode, you actually have two options:
1) Basic Management Setup, which configures only enough connectivity for management of the system.
2) Extended Setup, which allows you to configure some global parameters and interfaces.
It should be noted that when you are prompted to enter a value at the console prompt, whatever is
between the square brackets [ ] is considered to be a default value. In other words, if you hit enter
without entering anything, the value in those brackets will be set for the specific question.
I'll try to keep this as simple and straightforward as possible.
Cisco routers have different configuration modes (depending on the router model), and by this I
mean there are different modes in which different aspects of the router can be configured.
These are:
1) User Exec Mode (>)
2) Privileged Mode (#), which has as a subset the Global Configuration mode
To be able to get into either User Exec or Privileged mode, you will most likely need a password. This password is set during the initial configuration of the router, or later on. Once in Privileged Mode, you can then enter Global Configuration Mode (no password is needed to enter this mode) to further configure interfaces, routing protocols, access lists and more.
The picture below shows you a quick view of the modes. Notice the red arrow: it's pointing towards the Global Configuration Mode and Privileged Mode, meaning that some of the specific configuration modes can be entered from Global Configuration Mode and others from Privileged Mode:
I have given each mode its own separate page to avoid squeezing all the information into one huge page. This makes it easier for you to read.
Cisco Basics - User Exec Mode
Introduction
Let's see what it looks like to be in each one of these modes. Here I have telneted into our lab
router and I am in User Exec Mode:
The easiest way to keep track of the mode you're in is by looking at the prompt. The ">" means
we are in User Exec Mode. From this mode, we are able to get information like the version of
IOS, contents of the Flash memory and a few others.
Now, let's check out the available commands in this mode. This is done by using the "?"
command and hitting enter:
Wow, see all those commands available ? And just to think that this is considered a small portion
of the total commands available when in Privileged Mode ! Keep in mind that when you're in the
console and configuring your router, you can use some short cuts to save you typing full
command lines. Some of these are :
Tab: By typing the first few letters of a command and then hitting the TAB key, the router will automatically complete the rest of the command. Where there is more than one command starting with the same characters, hitting TAB will display all those commands. In the picture above, if I were to type "lo" and hit TAB, I would get a listing of "lock, login and logout" because all three commands start with "lo".
?: The question mark symbol "?" forces the router to print a list of all available commands. A lot
of the commands have various parameters or interfaces which you can combine. In this case, by
typing the main command e.g "show" and then putting the "?" you will get a list of the
subcommands. This picture shows this clearly:
Other shortcut keys are :
CTRL-A: Positions the cursor at the beginning of the line.
CTRL-E: Positions the cursor at the end of the line.
CTRL-D: Deletes a character.
CTRL-W: Deletes a whole word.
CTRL-B: Moves cursor back by one step.
CTRL-F: Moves cursor forward by one step.
One of the most used commands in this mode is the "Show" command. This will allow you to
gather a lot of information about the router. Here I have executed the "Show version" command,
which displays various information about the router:
The "Show Interface <interface> " command shows us information on a particular interface. This
includes the IP address, encapsulation type, speed, status of the physical and logical aspect of the
interface and various statistics. When issuing the command, you need to replace the <interface>
with the actual interface you want to look at. For example, ethernet 0, which indicates the first
ethernet interface :
Some other generic commands you can use are "show running-config" and "show startup-config". These commands show you the configuration of your router.
The running-config refers to the running configuration, which is basically the configuration of the
router loaded into its memory at that time.
Startup-config refers to the configuration file stored in the NVRAM. This, upon bootup of the
router, gets loaded into the router's RAM and then becomes the running-config !
So you can see that User Exec Mode is used mostly to view information on the router, rather than
configuring anything. Just keep in mind that we are touching the surface here and not getting into
any details.
This completes the User Exec Mode section. If you like, you can go back and continue to the
Privileged Mode section.
Cisco Basics - Privileged Mode
Introduction
To get into Privileged Mode we enter the "Enable" command from User Exec Mode. If set, the
router will prompt you for a password. Once in Privileged Mode, you will notice the prompt
changes from ">" to a "#" to indicate that we are now in Privileged Mode.
The Privileged Mode (and Global Configuration Mode ) is used mainly to configure the router,
enable interfaces, setup security, define dialup interfaces etc.
I have put a screen shot of the router here to give you an idea of the commands available in Privileged Mode in comparison to User Exec Mode. Remember that these commands have subcommands and can get quite complicated:
As you can see, there is a wider choice of commands in Privileged Mode.
Now, when you want to configure certain services or parts of the router, you will need to enter Global Configuration Mode from within Privileged Mode. If you're confused by now with the different modes available, try to see it this way:
User Exec Mode (distinguished by the ">" prompt) is your first mode, which is used to get statistics from the router, see which IOS version you're running, check memory resources and a few more things.
Privileged Mode (distinguished by the "#" prompt) is the second mode. Here you can enable or disable interfaces on the router and get more detailed information; for example, view the running configuration of the router, copy the configuration, load a new configuration onto the router, backup or delete the configuration, backup or delete the IOS, and a lot more.
Global Configuration Mode (distinguished by the "(config)#" prompt) is accessible via Privileged Mode. In this mode you're able to configure each interface individually, set up banners and passwords, enable secrets (encrypted passwords), enable and configure routing protocols and a lot more. I dare say that 70% of the time you want to configure or change something on the router, you will need to be in this mode.
Getting into Global Configuration
The picture below shows you how to enter Global Configuration Mode:
As you can see, I have telneted into the router and it prompted me for a password. I entered the password (which is not shown); at this point I am in User Exec Mode. I then entered "enable" in order to get into Privileged Mode. From here, to get into Global Configuration Mode, you need to enter the "configure" command followed by your selection.
Now you must be wondering what the various parameters shown in the picture, under the "configure" command, are. These allow you to select how you will configure the router:
- Configure memory means you enter Global Configuration Mode and are configuring the router in its NVRAM. This command will force the router to load the startup-config file stored in the NVRAM, and then you can proceed with the configuration. When you're happy with the configuration, save it to NVRAM by entering "copy running-config startup-config".
- Configure network means you enter Global Configuration Mode and load a startup-config file from a remote router (using tftp) into your local router's memory and configure it. Once you're finished, you need to enter "copy running-config tftp", which will force the router to copy its memory configuration onto a tftp server. The router will prompt you for the IP address of the remote tftp server.
- Configure overwrite-network means that you overwrite the NVRAM's configuration with a configuration stored on a tftp server. Issuing this command will force the router to prompt for the IP address of the remote tftp server. Personally, I have never needed to use this command.
- Configure terminal means you enter Global Configuration Mode and work with the configuration which is already loaded into the router's memory (Cisco calls this the running-config). This is the most popular command, as in most cases you need to modify or re-configure the router on the spot and then save your changes.
You will need to save this configuration otherwise everything you configure will be lost upon
power failure or reboot of the router !
Below are the commands you need to enter to save the configuration, depending on your network setup:
- Copy running-config startup-config: Copies the configuration which is running in the router's RAM into the NVRAM and gives it a file name of startup-config (the default). If one already exists in the NVRAM, it will be overwritten by the new one.
- Copy running-config tftp: Copies the configuration which is running in the router's RAM onto a tftp server which might be running on your network. You will be asked for the IP address of the tftp server and given the choice to select a filename for the configuration. Some advanced routers can also act as tftp servers.
Generic Configuration
There are a few standard things you always need to configure on the router, for example, a hostname. This is also used as a login name when your router needs to authenticate to a remote router. Before we get stuck into the interface configuration, we are going to run through a few of these commands. The following examples assume no passwords have been set as yet and that the router has the default hostname of "router":
We connect to the router via the console port using the serial cable and type the following:
Router> enable (gets us into Privileged Mode)
Router# configure terminal (This command gets us into the appropriate Global Configuration
Mode as outlined above)
Router(config)# hostname swiftpond (This command sets the router's hostname to swiftpond.
From this moment onwards, swiftpond will appear before the ">" or "#" depending on which
mode we are in)
swiftpond(config)# username router2.isp password firewallcx (Here we are telling the router that
the remote router which we are connecting to, has a username of "router2.isp" and our password
to authenticate to router2.isp is "firewallcx")
This is a standard way of authentication with Cisco routers. Your router's hostname is your login
name and your password (in our case "firewallcx") is entered at the same time you define the
remote router's hostname.
Next we create a static route so the router will pass all packets originating from our network to the remote router. This is usually the case when you connect to your ISP.
swiftpond(config)# ip route 0.0.0.0 0.0.0.0 139.130.34.43 (Here we tell our router to create a default route where any packet - defined by the first 0.0.0.0 - no matter what subnet mask - defined by the second 0.0.0.0 - is to be sent to IP 139.130.34.43, which would be the router we are connecting to)
In the case where you were not configuring the router to connect to the Internet but to join a small
WAN which connects a few offices, then you probably want to use a routing protocol:
swiftpond(config)# router rip (Enables the RIP routing protocol. After this command you enter the routing protocol's configuration section - see below - where you can change timing parameters and other settings)
swiftpond(config-router)#
At this prompt you can fine tune RIP or just leave it to the default setting which will work fine.
The "exit" command takes you one step back:
swiftpond(config-router)# exit
swiftpond(config)#
Alternatively, you can use IGRP as a routing protocol, in which case you would have to enter the
following:
swiftpond(config)# router igrp 1 (The "1" defines the Autonomous system number)
swiftpond(config-router)#
Again, the "exit" command will take you back one step:
swiftpond(config-router)# exit
swiftpond(config)#
After that, we need to create a dialer list which our WAN interface BRI (ISDN) will use to make
a call to our ISP.
swiftpond(config)# dialer-list 1 protocol ip permit (Now we are telling the router to create a dialer
list and bind it to group 1. The "protocol ip permit" tells the router to initiate a call for an ip
packet)
I'll give you a quick example to make sure you understand the reason we use this command: if you launched your web browser, it would send an HTTP request to the server you have set as a homepage, e.g. www.firewall.cx. This request, which your computer is going to send, is encapsulated in an IP packet, and that packet will cause your router to initiate a connection, as it is now configured to do so.
The dialup interface for Cisco routers is broken into 2 parts: a Dialer-list and a Dialer-group.
The Dialer-list defines the rules for placing a call. Later on when you configure the WAN
interface, you bind that Dialer-list to the interface by using the Dialer-group command (shown
later on).
Configuring Interfaces
In our example we said we have a router with one Ethernet and one basic ISDN interface (max of
128Kbit). We are going to go through the process of configuring the interfaces. We will start with
the Ethernet Interface.
In order to configure the interface, we need to be in Global Configuration Mode, so we first type "enable" to get into Privileged Mode and then "configure terminal" to get into the appropriate Global Configuration Mode (as explained above). Now we need to select the interface we want to configure, in this case the first ethernet interface (E0), so we type "interface e0".
This picture shows clearly all the steps:
Any commands entered here will affect the first ethernet interface only. So we start with the IP address. It's important to understand that this IP address will be visible to both networks to which the router is connected. If we were connecting to the Internet, then everyone would be able to see this IP. Furthermore, the IP address would also be the default gateway for our firewall or any machine which physically connects directly to the router.
The following commands will configure the ethernet interface's IP address:
(config-if)# ip address 192.168.0.1 255.255.255.0
or
(config-if)# ip address 139.130.4.5 255.255.255.0 secondary
Now that we have given e0 its IP address, we need to give the ISDN interface its IP as well, so
we need to move to the correct interface by typing the following:
(config-if)# exit (this exits from the e0 interface configuration)
(config-if)# interface bri0 (this command enters the configuration for the first ISDN interface)
(config-if)# ip address 10.0.0.2 255.255.255.224 (this command sets the IP address for BRI 0
which is also known as the WAN IP address)
Now when it comes to configuring WAN interfaces, you need more than just an IP address (LAN
interfaces such as E0 are a lot easier to configure). You need to set the encapsulation type, the
authentication protocol the router will use to authenticate to the remote router, the phone number
it will need to dial and a few more:
(config-if)# encapsulation ppp (This command sets the packet's encapsulation to ppp which is
100% compatible with all routers no matter what brand)
(config-if)# dialer string 0294883452 (This command tells the router which phone number it needs to dial in order to establish a connection with our remote router, e.g. your ISP)
(config-if)# dialer-group 1 (This command tells the router to use dialer-list 1 (configured previously) to initiate a connection)
(config-if)# dialer idle-timeout 2000000 (This command is optional and allows us to set an idle timeout, so if the router is idle for that many seconds, it will disconnect. A value of 2 million seconds means the router will effectively never disconnect)
(config-if)# isdn switch-type basic-net3 (This command tells the router the type of ISDN
interface we are using. Each country has its own type, so you need to consult your Cisco manual
to figure out which type you need to put here)
(config-if)# dialer load-threshold 125 outbound (This command is optional and allows us to specify a traffic load threshold at which the router will place another call. The value it takes is from 1 to 255, expressed as a fraction of 255. A value of 125 means bring up the second B channel when the outbound traffic load reaches roughly 50% of the first channel's capacity)
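Since the load-threshold value is expressed as a fraction of 255, you can sanity-check the trigger point with a quick calculation:

# dialer load-threshold takes a value from 1 to 255; the trigger point is value/255.
threshold = 125
print(f"{threshold / 255:.0%}")   # 49% - roughly half the first B channel's capacity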
That pretty much does it for our ISDN (WAN) interface. All you need to do now is to SAVE the
configuration !
Well, I hope it wasn't too bad for you, since there is quite a bit of information on this page. I encourage you to read through it again until you understand what is going on; then you will find it a breeze to configure a Cisco router yourself!
The Ethernet Datalink
Introduction
"Ethernet" is the term that is casually applied to a number of very different data link
implementations. You will hear people refer to "Ethernet" and they might be referring to the
original DEC, Intel and Xerox implementation of Version 1 or Version 2 Ethernet. This, in a
sense, is the "true" definition of "Ethernet". When the IEEE built the 802.3 standards in 1984 the
term "Ethernet" was broadly applied to them as well. Today we talk about "Fast Ethernet" and,
although this technology bears many similarities to its predecessors, the engineering technology
has changed dramatically.
Whatever you call it, this is a Data Link technology - responsible for delivering a frame of bits
from one network interface to another - perhaps through a repeater, switch or bridge.
Please select one of the following links :
Frame Formats
The four ways that frames may be structured (contains 3D diagrams and analysis of frames).
Media Access
Taking turns accessing the cable using the rules of Carrier Sense Multiple Access with Collision
Detection (CSMA/CD)
Collisions
The results of simultaneous transmissions on the media: Fragments, Runts, CRC Errors
Propagation Delay
The relationship between maximum cable length and minimum frame size is based on the
propagation delay of the signal
Frame Corruption
Troubleshooting coaxial Ethernet networks by examining the types of corruption patterns that
result from specific events
Interframe Gap
The 9.6 microsecond interframe gap and an understanding of its purpose
Signal Encoding
Manchester Encoding for the electrical Ethernet signal
Ethernet Frame Formats
Introduction
An understanding of the basics of the Ethernet Frame Format is crucial to any discussion of
Ethernet technology.
In this section, we will discuss: the four different frame formats used in the Ethernet world; the purpose of each of the fields in an Ethernet frame; and the reasons that there are so many different versions of the Ethernet Frame Format.
Ethernet, Ethernet, Ethernet, or Ethernet?!
When somebody tells me that they are running Ethernet on their network, I inevitably have to ask: "Which Ethernet?". Currently, there are many versions of the Ethernet Frame Format in the commercial marketplace, all subtly different and not necessarily compatible with each other.
The explanation for the many types of Ethernet Frame Formats currently on the marketplace lies in Ethernet's history. In 1972, work on the original version of Ethernet, Ethernet Version 1, began at the Xerox Palo Alto Research Center. Version 1 Ethernet was released in 1980 by a consortium of companies comprising DEC, Intel and Xerox. In the same year, the IEEE meetings on Ethernet began. In 1982, the DIX (DEC/Intel/Xerox) consortium released Version II Ethernet, and since then it has almost completely replaced Version I in the marketplace. In 1983, Novell NetWare '86 was released with a proprietary frame format based on a preliminary release of the 802.3 spec. Two years later, when the final version of the 802.3 spec was released, it had been modified to include the 802.2 LLC Header, making NetWare's proprietary format incompatible. Finally, the 802.3 SNAP format was created to address backwards compatibility issues between Version 2 and 802.3 Ethernet.
As you can see, the large number of players in the Ethernet world has created a number of
different choices. The bottom line is this: either a particular driver supports a particular frame
format, or it doesn't. Typically, Novell stations can support any of the frame formats, while
TCP/IP stations will support only one although there are no hard and fast rules in Networking.
Ethernet Frame Formats
The following sections will outline the specific fields in the different types of Ethernet frames.
Throughout the section, we will refer to fields by referencing their "offset" or number of bytes
from the start of the frame beginning with zero. Therefore, when we say that the destination
address field is from offset zero through five we are referring to the first six bytes of the frame.
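As a quick illustration of how offsets work, here is how you could slice the first fields out of a raw Version II frame in Python. The sample bytes are made up for the example:

# Offsets into a raw Ethernet frame, counted from zero.
frame = bytes.fromhex(
    "ffffffffffff"    # offsets 0-5:  destination address (broadcast here)
    "00a024123456"    # offsets 6-11: source address (made-up example)
    "0800"            # offsets 12-13: type field, 0x0800 = IP
)
destination = frame[0:6]
source      = frame[6:12]
frame_type  = frame[12:14]
print(destination.hex(), source.hex(), frame_type.hex())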
The Preamble
Regardless of the frame type being used, the means of digital signal encoding on an Ethernet
network is the same. While a discussion of Manchester Encoding is beyond the scope of this
page, it is sufficient to say that on an idle Ethernet network, there is no signal. Because each
station has its own oscillating clock, the communicating stations have to have some way to
"synch up" their clocks and thereby agree on how long one bit time is. The preamble facilitates
this. The preamble consists of 8 bytes of alternating ones and zeros, ending in 11.
A station on an Ethernet network detects the change in voltage that occurs when another station
begins to transmit and uses the preamble to "lock on" to the sending station's clock signal.
Because it takes some time for a station to "lock on", it doesn't know how many bits of the
preamble have gone by. For this reason, we say that the preamble is "lost" in the "synching up"
process. No part of the preamble ever enters the adapter's memory buffer. Once locked on, the
receiving station waits for the 11 that signals that the Ethernet frame follows.
Most modern Ethernet adapters are guaranteed to achieve a signal lock within 14 bit-times.
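In terms of raw bits, the preamble described above can be sketched as seven bytes of 10101010 followed by one byte of 10101011, whose final two bits form the "11" the receiver waits for. A tiny Python snippet makes the pattern visible:

# The 8-byte preamble: alternating ones and zeros, ending in 11.
preamble = bytes([0xAA] * 7 + [0xAB])
for byte in preamble:
    print(format(byte, "08b"))
# The last byte, 10101011, ends in the "11" that marks the start of the frame.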
The Different "Flavors" of Ethernet
While the preamble is common to every type of Ethernet, what follows it is certainly not. The
major types of Ethernet Frame Format are:
FRAME TYPE                          Novell calls it...   Cisco calls it...
IEEE 802.3                          ETHERNET_802.2       LLC
Version II                          ETHERNET_II          ARPA
IEEE 802.3 SNAP                     ETHERNET_SNAP        SNAP
Novell Proprietary ("802.3 Raw")    ETHERNET_802.3       NOVELL
You can click on the Frame type to get more information about it.
As you examine the table above please note that an IEEE 802.3 frame is referred to as an 802.2
frame by Novell. The frame that Novell refers to as "802.3 Raw" or "Ethernet_802.3" is their own
proprietary frame format.
What is CSMA/CD ?
Introduction
CSMA/CD stands for Carrier Sense Multiple Access with Collision Detection. It refers to the
means of media access, or deciding "who gets to talk" in an Ethernet network.
A more elegant term for "who gets to talk" is to refer to the "media access method", which, in this
case, would be "CSMA/CD".
Carrier Sense means that before a station will "talk" onto an Ethernet wire, it will listen for the
"carrier" that is present when another station is talking. If another station is talking, this station
will wait until there is no carrier present.
Multiple Access refers to the fact that when a station is done transmitting it is allowed to
immediately make another access to the medium (a 'multiple' access). This is as opposed to a
Token-Ring network, where a station is required to perform other tasks in between accessing the medium (like releasing a token or sometimes releasing a management frame).
Collision Detection refers to the ability of an Ethernet adapter to detect the resulting "collision" of
electrical signals and react appropriately. In a normally operating Ethernet network, it will
sometimes occur that two stations simultaneously detect no carrier and begin to talk. In this case
the two electrical signals will interfere with each other and result in a collision; an event which is
detected by the Collision Detection circuitry in the transmitting network interface cards.
The process of CSMA/CD is implemented slightly differently in a twisted pair (as opposed to a
coaxial) Ethernet network.
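Put as code, the media access method looks roughly like the Python sketch below. The Medium class is a made-up stand-in for the shared wire (a real NIC senses carrier and collisions in hardware), and the real truncated binary exponential backoff has more rules than shown here:

import random, time

class Medium:
    """Stand-in for the shared wire; a real adapter senses these in hardware."""
    def carrier_present(self):   return False                 # pretend the wire is quiet
    def collision_detected(self): return random.random() < 0.1  # occasional collision
    def transmit(self, frame):   pass

def csma_cd_send(frame, medium):
    attempt = 0
    while True:
        while medium.carrier_present():          # Carrier Sense: wait for a quiet wire
            pass
        medium.transmit(frame)                   # Multiple Access: just start talking
        if not medium.collision_detected():      # Collision Detection
            return attempt                       # frame went out cleanly
        attempt += 1                             # collision: back off and retry
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        time.sleep(slots * 51.2e-6)              # one slot time is 51.2 us on 10Mbps

print(csma_cd_send(b"frame", Medium()))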
Ethernet Collisions
Introduction
The word "Collision" shouldn't be news to people who work with networks every day. If it is though, don't worry; that's why you are here.
A collision is an event that happens on an Ethernet network when two stations simultaneously
"talk" on the wire. Collisions are a normal part of life in an Ethernet network and under most
circumstances should not be considered a problem.
Even though a lot of people know that collisions do happen on a network, what they don't know is that there are two different types of collisions! Yes, that's right, two different types of collisions: one is the Early Collision and the other, the Late Collision.
We will have a look at two collision examples (one of each) in the next couple of pages, and these
examples have been carefully selected to help you understand the difference between the two
types of collisions. Also, we are going to have a look at the events leading up to and immediately
following a collision.
So, grab that favorite mug of yours, fill it up with something to drink and let's start learning about
collisions !
You can use the cool menu to get to the next pages or simply click on the links below :
- Early Ethernet Collisions
- Late Ethernet Collisions
Propagation delay & its relationship to max. cable length
Introduction
You may know that the minimum frame size in an Ethernet network is 64 bytes or 512 bits,
including the 32 bit CRC. You may also know that the maximum length of an Ethernet cable
segment is 500 meters for 10BASE5 thick cabling and 185 meters for 10BASE2 thin cabling. It
is, however, a much less well known fact that these two specifications are directly related. In this
essay, we will discuss the relationship between minimum frame size and maximum cable length.
Propagation Delay
Before we discuss frame size and cable length, an understanding of signal propagation in copper
media is necessary. Electrical signals in a copper wire travel at approximately 2/3 the speed of
light. This is referred to as the propagation speed of the signal. Since we know that Ethernet operates at 10Mbps or 10,000,000 bits per second, we can determine that the length of wire one bit occupies is approximately equal to 20 metres (about 65 feet) via the following maths:
- speed of light in a vacuum = 300,000,000 metres/second
- speed of electricity in a copper cable = 200,000,000 metres/second
- (200,000,000 m/s) / (10,000,000 bits/s) = 20 metres per bit
We can further determine that a minimum-size Ethernet frame, consisting of 64 bytes or 512 bits, will occupy 10,240 metres of cable.
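The arithmetic above is easy to reproduce; for instance, in a few lines of Python:

# Length of wire one bit occupies, and the span of a minimum-size frame.
propagation_speed = 200_000_000      # metres/second, ~2/3 the speed of light
bit_rate          = 10_000_000       # bits/second (10Mbps Ethernet)
metres_per_bit = propagation_speed / bit_rate
print(metres_per_bit)                # 20.0 metres per bit
print(512 * metres_per_bit)          # 10240.0 metres for a 64-byte (512-bit) frame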
The Relationship
The only time that an Ethernet controller can detect collisions on the wire is when it is in the
transmit mode. When an Ethernet NIC has finished transmitting and switches to receive mode,
the only thing it listens for is the 64-bit preamble that signals the start of a data frame. The minimum frame size in Ethernet is specified such that, based on the speed of propagation of electrical signals in copper media, an Ethernet card is guaranteed to remain in transmit mode, and therefore able to detect collisions, long enough for a collision to propagate back to it from the farthest point on the wire.
Take, for example, a length of 10BASE5 thick Ethernet cabling exactly 500 meters long (the
maximum that the spec allows) with two stations, Station A and Station B, attached to the farthest
ends of it.
If Station A begins to transmit, it will have transmitted 25 bits by the time the signal reaches
Station B, 500 meters away. If Station B begins to transmit at the last possible instant before
Station A's signal reaches it, the collision will reach Station A 25 bit-times later (the time it takes
for the signal on the wire to travel one bit-length -- 20 metres in copper cable). Station A will
have transmitted only 50 bits when the collision reaches it -- nowhere near the 512 bit boundary
for an early collision.
Upon closer examination, however, a peculiarity arises. Since a collision is only "late" if it happens after the 512-bit boundary, Station A would have to be over 5,000 metres away from Station B before a late collision could occur. Examine the maths for yourself: 512 bits times 20 metres/bit = 10,240 metres. That's 256 bits, or approximately 5,000 metres, for the signal to propagate from Station A to Station B, and another 5,000 metres for the collision event to propagate back to and be detected by Station A. It seems like a late collision could never occur with a maximum cable length of only 500 meters.
What is the reason for the overhead?
The reason for the overhead is twofold. First of all, while the maximum possible cable segment
length in Ethernet is 500 metres, it is possible to extend that length with up to 4 repeaters before
the IEEE 802.3 spec is violated. This means that the signal may have to travel through as much as
2500 metres of cable to reach Station B, or 5000 metres of cable round trip. The second and final
reason for the overhead lies solely in the carefulness of Ethernet's inventors. Generally the spec is
twice as strict as it needs to be, allowing ample room for errors.
Herein lies one of the greatest strengths and weaknesses of Ethernet. It is a strength in that, if you need to, you can probably get away with violating the specs -- an extra length of cable here, an extra repeater there -- and your network continues to run normally. It is a weakness in that, while you can get away with violating the specs, there is a very fine line between a network that violates the specs and runs, and a network that violates the specs and is crippled by late collisions, and you never know which extra bit of wire or extra repeater is going to cross that line.
Despite this dire warning, there are some general rules for violating specs:
- If your vendor tells you you can violate the spec and you're not mixing vendors, it's probably ok. If you mix vendors, obey the strictest vendor.
- If something is wrong with your network and you know that it violates the spec in places, those places should be the first ones you check. Try segmenting the network with a bridge and see which side of the bridge the problems are on.
Ethernet Troubleshooting - Physical Frame Corruption
Introduction
When troubleshooting your Ethernet network, the first thing to look for is physical frame
corruption. In this essay, we will discuss the different causes of physical frame corruption and the
characteristics of each one. It is important to remember that the frame corruption being discussed
is SPECIFIC TO COAXIAL ETHERNET. Twisted-pair Ethernet implementations will NOT manifest these types of corruption patterns!
Let's find the problem !
I am going to discuss troubleshooting with reference to the Network General Expert Sniffer Network Analyzer. While the tips here are universal, other analyzers' behavior might differ in such a way as to make these tips invalid or unusable.
There are four possible causes of physical frame corruption in an Ethernet Network, each one
different in the way it corrupts the frame and therefore recognizable.
The four causes are:
- Collisions. Caused by out-of-spec cabling or faulty hardware.
- Signal Reflections. Caused by unterminated cables, impedance mismatches, and exceeding the maximum allowable bend radius of the cable.
- Electrical Noise. Caused by nearby power grids, fluorescent lighting, X-ray machines, etc...
- Malfunctioning Hardware. Caused by gremlins, helpful users, natural disasters, etc...
At the end of the section there is a troubleshooting flowchart to help you identify the cause of
frame corruption. It is important to remember that these corruption patterns will only be evident
on a coaxial Ethernet (10BASE-2 Thin Ethernet, 10BASE-5 Thick Ethernet). Twisted-Pair
Ethernet networks, where each station is connected to a hub or switch, do not manifest these exact
corruption patterns.
Collisions
Collisions are the most easily recognizable of the four causes of physical frame corruption.
Generally, when a collision occurs, several bytes of the preamble of the colliding frame will be
read into your Sniffer's buffer before the signal is completely destroyed. You will see these bytes
in the hexadecimal decode of the packet as either several bytes of AAs or several bytes of 55s at
the very end of the frame (remember, AAh = 10101010b and 55h = 01010101b; depending on where the collision occurred, the preamble could be perceived as either of these).
Because the preamble (including the Start Frame Delimiter) is only 8 bytes long, ending in the bits 10101011, if you see more than 8 bytes of AA or 55, then the corruption was not caused by a collision and more investigation is necessary.
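If you wanted to automate this check, a hypothetical helper (not a feature of the Sniffer -- just a sketch of the heuristic described above) might look like this:

    def looks_like_collision(frame: bytes) -> bool:
        """A collision fragment typically ends in 1 to 8 bytes of 0xAA or 0x55."""
        tail = 0
        for byte in reversed(frame):
            if byte in (0xAA, 0x55):
                tail += 1
            else:
                break
        # More than 8 bytes of preamble pattern cannot be a real preamble.
        return 1 <= tail <= 8

    print(looks_like_collision(b"\x08\x00\x2b\x1c" + b"\xaa" * 4))   # True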
Signal Reflections
Signal reflections are caused by electrons literally "bouncing" back along the wire. One cause of
signal reflection is an un-terminated cable. Electrons travel down the wire until they reach the
cable's end where, with no resistor to absorb the voltage potential, they reflect back from the open
end of the cable.
Another cause of signal reflections is mixing cables with different impedances. Impedance can be
thought of as the "rate of flow" of the wire. When electrons from the higher impedance wire
attempt to travel through the lower impedance wire, some of them can't make it and are reflected
back, destroying the signal.
The final cause of signal reflections is when the maximum allowable bend radius of the cable is
exceeded - the copper media is deformed, causing reflections.
The characteristic of signal reflection is very short frames (typically less than 16-32 bytes), with
no preamble in the frame and with all frames cut short within one or two bytes of the same place
in the frame. Once again, this can be determined by viewing the frames in the Hexadecimal
Decode view of your analyzer. The Expert Sniffer will also probably detect a high number of
short or runt frames, as well as a high rate of physical frame corruption.
Electrical Noise
Physical frame corruption caused by electrical noise is similar in appearance to corruption caused
by reflections in that there is no preamble in the frame -- the frame just seems to stop short, but it
is different in that the frames are generally cut off at random lengths.
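The difference between the two patterns can be expressed as a simple rule of thumb. The sketch below is ours, with made-up thresholds, not analyzer logic:

    def classify_short_frames(lengths):
        """Guess reflection vs. electrical noise from corrupt-frame lengths."""
        if not lengths:
            return "no corrupt frames captured"
        spread = max(lengths) - min(lengths)
        # Reflections cut frames off within a byte or two of the same place;
        # electrical noise cuts them off at random lengths.
        if spread <= 2:
            return "suspect signal reflection"
        return "suspect electrical noise"

    print(classify_short_frames([22, 23, 22, 22]))   # suspect signal reflection
    print(classify_short_frames([9, 41, 17, 60]))    # suspect electrical noise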
Hardware Malfunctions
Frame corruption caused by hardware malfunctions is potentially the hardest to diagnose because
of the large number of ways that hardware can malfunction. Generally, hardware malfunctions
will occur either randomly or constantly, but not regularly. The type of frame corruption is
impossible to predict, generally manifesting as random "garbage" in the frame, but some common
signs are:

- A stream of ones or zeros. A transceiver has malfunctioned and is "jabbering" on the wire. Most transceivers have jabber detection circuitry that prevents the adapter from transmitting for longer than a certain preset time.

- Gigantic frames (greater than 1500 bytes). Same as above.
Troubleshooting Flowchart
REMEMBER: This applies to corruption patterns that would be visible when viewing frames on a
COAXIAL Ethernet.
1. Is a preamble (up to 8 bytes of AA or 55) visible at the very end of the frame?
If yes:
1. Make sure you haven't exceeded the specifications of your cable (maximum cable length,
maximum repeaters in between nodes, etc)
2. Use a "divide and conquer" method to isolate the troublemakers. Separate the network
into halves using a bridge and see which side of the bridge the problems occur on. Now
separate that half into halves, etc....
If no, go on.
2. Are the corrupt frames very short, and consistently the same length?
If yes:
1. Your problem is probably related to signal reflection. First check for un-terminated
cables. If the cable is terminated properly, your job becomes a lot harder. If new cable
has been installed recently, impedance mismatch is probably the problem. Avoid this
problem by buying all your cabling from the
same lot (if possible) and buying cabling all at once and putting extra in storage rather
than ordering as needed. Finally, check for cable deformation due to bending the cable or
placing heavy objects on the cable.
2. A Time Domain Reflectometer can really save you some work when diagnosing this type
of problem. This device can tell you, probably to the foot, how far down the wire the
signal reflection is occurring.
If no, go on.
3. Are the frames random in length, all cut off cleanly with no signs of bit streaming or other
hardware malfunction?
If yes:
1. Your problem is probably electrical noise. Use the "divide and conquer" method outlined
in bullet number 1 to determine where the noise is occurring and then use your intuition.
I've seen problems as bizarre as a dentist's X-ray machine being on the other side of the
wall to the wiring closet and every time the dentist took an X-ray the network would go
down!
If no, go on.
4. If you've arrived at this point, your problem is probably hardware related. Use the "divide and conquer" method outlined in bullet 1.
The Truth About Interframe Spacing
Introduction
The IEEE 802.3 specification states that before a station can attempt to transmit on the wire, it
must first wait until it has heard 9.6 microseconds of silence. Many popular myths have arisen
surrounding the reasons for the 9.6 microsecond interframe gap. The purpose of this section is to
clarify the true reason for the 9.6 microsecond interframe gap.
The Truth
The sole reason for the 9.6 microsecond interframe gap is to allow the station that last transmitted
to cycle its circuitry from transmit mode to receive mode. Without the interframe gap, it is
possible that a station would miss a frame that was destined for it because it had not yet cycled
back into receive mode.
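The 9.6 microsecond figure is simply 96 bit-times (12 byte-times) at 10 Mb/sec, as a quick calculation shows:

    BIT_RATE = 10_000_000     # 10 Mb/sec classic Ethernet
    GAP_SECONDS = 9.6e-6      # the mandated interframe gap

    gap_bits = GAP_SECONDS * BIT_RATE
    print(gap_bits, gap_bits / 8)   # 96.0 bit-times, 12.0 byte-times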
There is, however, an interesting sidebar to this discussion and that is that most Ethernet cards in
today's market are capable of switching from transmit to receive in much less time than 9.6
microseconds. This is an example of what can happen when 1970s specifications are applied to 1990s technology. In fact, some adapter manufacturers are designing their cards with a smaller
interframe spacing, thereby achieving higher data transfer rates than their competitors.
The problem arises when cards with a smaller interframe spacing are mixed on a network with
cards that meet the specifications. In this case, there is a potential for lost data.
The moral of the story is that a network administrator needs to know what is going on in his or
her network and be aware that not all vendors will stick to the specs. Contact your vendors and
find out what they're doing differently -- it'll pay off!
Manchester Signal Encoding
Introduction
The device driver software receives a frame of IP, IPX, NetBIOS, or other higher-layer protocol
data. From this data, the device driver constructs a frame, with appropriate Ethernet header
information and a frame check sequence at the end.
The circuitry on the adapter card then takes the frame and converts it into an electrical signal. The
voltage transitions in the transmitted bit stream are in accordance to the format called Manchester
Signal Encoding. Manchester encoding describes how a binary ONE and ZERO are to be
represented electrically. Manchester encoding is used in all 10 Megabit per second Ethernets; for
example, 10BASE2 Thin Ethernet, 10BASE5 Thick Ethernet and 10BASE-T Twisted-Pair
Ethernet.
Here we see an example of the signal transitions used to encode the hexadecimal value "0E",
which converts to "00001110" in binary:
Notice that there is a consistent transition in the middle of each bit-time. Sometimes this
transition is from low-to-high and sometimes it's from high-to-low. This is the clock transition.
The receiving adapter circuitry 'locks on' to this constant signal transition and, thereby, identifies
the timing to determine the beginning and end of each bit.
To represent a binary ONE, the first half of the bit-time is a low voltage; the second half of a bit
is always the opposite of the first half, that's how the clock transition is created. To represent a
binary ZERO, the first half of the bit-time is a high voltage. You see that sometimes there is an
additional transition at the beginning of a bit-time (not drawn in the diagram above) where the
signal is pulled either up or down in preparation for the next bit.
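To make the rule concrete, here is a minimal sketch of the encoding convention just described (ONE: low then high; ZERO: high then low), applied to the "0E" example:

    def manchester(bits):
        """Encode bits as (first-half, second-half) voltage levels per bit-time.

        Per the convention above: a ONE starts low and rises, a ZERO starts
        high and falls, so every bit-time has a mid-bit clock transition.
        """
        LOW, HIGH = 0, 1
        signal = []
        for b in bits:
            first = LOW if b == 1 else HIGH
            signal.append((first, 1 - first))   # second half is the opposite
        return signal

    # Hexadecimal 0E = binary 00001110
    print(manchester([0, 0, 0, 0, 1, 1, 1, 0]))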
Consider what happens if an external electromagnetic field interferes with the Manchester bit
encoding. This external field could be the result of an electric motor, radio transmission or other
source of interference. You should be able to see that if the Manchester signal is disrupted the bits
will be destroyed - because the clock signal will be disrupted.
It would not be reasonably possible for electrical interference to change a binary ONE into a
binary ZERO. Since each bit is symmetrical (second half is always opposite the first half) the
result of electrical noise would be the destruction of the bit, not a change in bit value.
Introduction Of Fast Ethernet
Introduction
Full motion video for video conferencing requires, typically, at least 25 Mb/sec. That means that
a legacy Ethernet, at 10 Mb/sec, can only deliver poor quality real-time video. With 100 Mb/sec,
however, you can be watching a broadcast presentation in one window while you're in conference
with three people in three other windows (for a total of 100 megabits of bandwidth).
Consider a file server that requires 6 Mb/sec (6 million bits per second; 60% utilization on a 10 Mb/sec Ethernet). With a 100 Mb/sec Ethernet this server can now utilize interface hardware that
can pump data down the pipe at a greatly increased rate.
It seems clear that the evolution of the industry is moving away from 10 Mb/sec Ethernet and
towards the 100 Mb/sec (or higher) rates of data transfer. This section of the compendium discusses 100 Mb/sec Ethernet technology.
Virtually everyone who uses Ethernet has wished from time to time that their network had a
higher bandwidth. When Ethernet was being designed in the late 1970s, 10Mbps seemed
immense. With today's bandwidth-intensive multimedia applications, or even with just the
departmental server, that number sometimes is barely adequate. Yes, faster network technologies
were available, but they were complicated and expensive. Then came Fast Ethernet.
Anyone who understands classic Ethernet already understands much about Fast Ethernet. Fast
Ethernet uses the same cabling and access method as 10Base-T. With certain exceptions, Fast
Ethernet is simply regular Ethernet - just ten times faster! Whenever possible, the same numbers
used in the design of 10Base-T were used in Fast Ethernet, just multiplied or divided by ten. Fast
Ethernet is defined for three different physical implementations.
The Implementations of Fast Ethernet:

- 100BASE-TX: Category 5

- 100BASE-FX: Multimode fibre

- 100BASE-T4: Category 3
Probably the most popular form of Fast Ethernet is 100BASE-TX. 100BASE-TX runs on
EIA/TIA 568 Category 5 unshielded twisted pair, sometimes called UTP-5. It uses the same pair
and pin configurations as 10Base-T, and is topologically similar in running from a number of
stations to a central hub.
As an upgrade to 10Mbps Ethernet over multimode fibre (10Base-F), 100BASE-FX is Fast
Ethernet over fibre. Single duplex runs are supported up to 400m and full duplex runs are
supported for up to 2km.
Fast Ethernet is possible on Category 3 UTP with 100BASE-T4. There is a popular
misconception that Fast Ethernet will only run on Category 5 cable. That is true only for
100BASE-TX. If you have Category 3 cable with all four pairs (8 wires) connected between
station and hub, you can still use it for Fast Ethernet by running 100BASE-T4. 100BASE-T4
sends 100Mbps over the relatively slow UTP-3 wire by fanning out the signal to three pairs of
wire.
This "demultiplexing" slows down each byte enough that the signal won't overrun the cable.
Category 3 cable has four pairs of wire, eight wires total, running from point to point. 10Base-T
only uses four wires, two pairs. Some cables only have these two pairs connected in the RJ-45
plug. If the category 3 cabling at your site has all four pairs between hub and workstation, you
can use Fast Ethernet by running 100BASE-T4.
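The fan-out idea can be illustrated with a toy sketch. Be warned that this is a gross simplification: real 100BASE-T4 uses 8B6T block coding and a pair-rotation scheme, not a plain round-robin, but the principle of splitting the load across three pairs is the same:

    def fan_out(data: bytes, pairs: int = 3):
        """Distribute a byte stream across several wire pairs, round-robin."""
        lanes = [[] for _ in range(pairs)]
        for i, byte in enumerate(data):
            lanes[i % pairs].append(byte)
        return lanes

    # Each pair ends up carrying one third of the signalling load.
    print(fan_out(b"ETHERNET"))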
Please select one of the following sections:

- Differences Between 100 Mb/sec and 10 Mb/sec Ethernet

- Integration Of 100 Mb/sec Ethernet Into Existing 10 Mb/sec networks

- Migration from 10 Mb/sec to 100 Mb/sec

- Fast Ethernet Model

- Troubleshooting
Differences Between Classic Ethernet And Fast Ethernet
Introduction
The two primary areas for concern when upgrading the network from 10Mbps to 100Mbps are
cabling and hubs. As discussed on the Fast Ethernet Introduction page, in Fast Ethernet twisted
pair cabling needs either to be category 5 or to be category 3 with proper twist on all four pairs.
The problem with hubs is the number of hubs allowed in a single collision domain. Classic
Ethernet allows hubs to be cascaded up to four deep between any two stations. In Fast Ethernet,
the number of hubs allowed in a collision domain is drastically reduced - to a single hub.
Sometimes it may be possible to have more than one hub in a collision domain, but it will
probably be easier in the long term to design a Fast Ethernet network assuming that only one hub
is allowed.
What the IEEE 802.3 spec does not explicitly state is that this limitation only applies to shared
100BASE-T, not to switched 100BASE-T. Since switches act like bridges in defining a separate
collision domain, installing Fast Ethernet switches will allow you to work around the single-hub
problem. Even if it is not necessary to deliver dedicated switched Fast Ethernet to each desktop,
Fast Ethernet hubs can be connected to switches. Connecting a number of repeaters to a switch
will provide shared Fast Ethernet and allow you to maintain the size of your network.
Integrating Fast Ethernet Into 10Mbps Ethernet Networks
Introduction
Now that Fast Ethernet is here, the question becomes, "How do I start using it ?" Integrating Fast
Ethernet into existing networks need not be done all at once.
Here are some aspects of 100Mbps implementation that should be considered:

- Implementing Switching

- Eliminating Bottlenecks

- Expand The Topology Outwards and Downwards
Implementing Switching
Implement switching in high-traffic areas to isolate the bottlenecks on the network. Since
Fast Ethernet provides higher throughput of bits, it makes sense to figure out which network
connections need the most relief. Which segments consistently attempt to pump the most bytes?
Which segments consistently demonstrate the highest average percent bandwidth usage according
to your protocol analyzer?
Installing switches will help you figure out which network segments are moving the most information, because of the effect switches have on traffic flow. Installing switches is like moving from traffic lights to limited-access highways: the idea works extremely well at isolating cross-town traffic, e.g. peer-to-peer networking, but doesn't necessarily help when all of the traffic slows down at particular locations, e.g. an enterprise-wide server or the network's Internet firewall.
Because there are other ways of isolating network bottlenecks, implementing switches is
primarily useful when installing 10/100 switches in preparation for 100Mbps Ethernet.
Installing switches also gives the added benefit of segmenting collision domains. In classic
Ethernet, there can be up to four hubs or repeaters between any two stations, but in Fast Ethernet
that number is only one or two. Installing switches in place of repeaters spares you having to
segment your network at a later point, allowing the cost of the transition to be spread over a
longer period of time.
Eliminating Bottlenecks
Once bottlenecks have been identified, upgrade those network connections to 100 Mbps. The
primary difficulty in this step is verifying that the existing cabling will be sufficient for Fast
Ethernet. On UTP, the cable either needs to meet Category 5 specifications or have four pairs
with proper twist maintained on Category 3. If you're planning on using 100BASE-TX, your
wiring closet will also need to be certified for a higher speed. There are many devices available
such as wire pair scanners, which will make this job much easier.
Installing the initial Fast Ethernet connections is much easier if the switches installed earlier are
10/100, capable of operating at either classic Ethernet speeds or Fast Ethernet speeds. If the
switches installed were only 10Mbps switches, they could be used as "hand-me-downs,"
replacing hubs in segments where users require more bandwidth.
Expand The Topology Outwards and Downwards
Gradually work the Fast Ethernet out into the rest of the network, as far out and down as desired.
Note that the price of 10/100 cards is not substantially higher than that of 10Mbps cards, so it
may be a wise idea to plan ahead by installing 10/100 cards when installing new machines.
If there comes a point in the future when 100Mbps Ethernet needs to be implemented on that
machine, all that will need to be changed is the connection on the other end. On the other hand,
upgrading a machine from a 10Mbps card to a 100Mbps card will require reconfiguring the user's
machine, installing a new driver, etc. A short-term expenditure can greatly offset the cost in man-hours and down-time later on.
Upgrading And Migrating From Ethernet To Fast Ethernet
Introduction
Here we are going to analyse the following aspects of upgrading/migrating from 10Mbit Ethernet
to 100Mbit Ethernet.

- Cabling

- Incompatible Implementations

- Repeaters In Fast Ethernet
  o Replacement Of Illegal Byte Codes
  o Data Translation
  o Error Handling And Partitioning
Cabling
There are two methods of running Fast Ethernet over UTP and one method of running it over
fibre.
IMPLEMENTATION    CABLE TYPE         NUMBER OF PAIRS
100BASE-TX        Category 5         2
100BASE-T4        Category 3 or 5    4
100BASE-FX        Fibre              (Not applicable)
Category 3 cabling is not rated to carry the fast signaling of 100BASE-TX, so 100BASE-T4 must
be used. 100BASE-T4 may also be used on Category 5 cabling, but 100BASE-TX is probably a
better choice.
Incompatible Implementations
Fast Ethernet brings a new urgency to an old problem. Many network technologies use RJ-45
connectors. In the past, it was usually not difficult to figure out whether a jack was Ethernet or
token ring: even at a site where both were in use they seldom were found in the same vicinity, so
the network administrator could make an "educated guess". Today, with Fast and classic Ethernet
interspersed and 10/100 cards common, some mechanism is needed to allow quick identification
of the signal that is running across the wire.
That mechanism is auto-negotiation. It works by having each end of the connection send a series of pulses down the
wire to the other end. These pulses are the same signals used in 10Base-T to test link integrity and
cause the link indicator light to turn on. If a station receives a single pulse, referred to as a
Normal Link Pulse (NLP), it recognizes that the other end is only capable of 10Base-T.
If autonegotiation is being used, a station will transmit a series of these pulses spaced closely
together, referred to as a Fast Link Pulse (FLP). An FLP consists of 17 "clocking" pulses
interspersed with up to 16 "signal" pulses to form a 16-bit code word. If a signal pulse occurs
between two clocking pulses, that bit is a one. Absence of a signal pulse is a zero.
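In code, turning the 16 pulse positions into a code word is a short loop. This sketch assumes, purely for illustration, that the first signal position is the most significant bit:

    def decode_flp(signal_pulses):
        """Build the 16-bit FLP code word: pulse present = 1, absent = 0."""
        word = 0
        for present in signal_pulses:
            word = (word << 1) | (1 if present else 0)
        return word

    # A hypothetical burst with pulses in the first two signal positions:
    print(format(decode_flp([True, True] + [False] * 14), "016b"))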
By comparing the 16-bit code words received in the FLP, a station and hub will agree on what
implementation of Ethernet to use. The 16-bit code word describes what implementations of
Ethernet are supported. Both station and hub will compare what they support to what the other end supports, then choose which implementation to use for that link according to the following priorities, defined by IEEE 802.3 clause 28B.3:

1. 100BASE-TX full duplex
2. 100BASE-T4
3. 100BASE-TX
4. 10BASE-T full duplex
5. 10BASE-T
If the station supports 100BASE-T4, 100BASE-TX, and 10BASE-T and the hub supports full
duplex 100BASE-TX, single-duplex 100BASE-TX, and 10BASE-T, they will each discover that
the Ethernet implementations they have in common are 100BASE-TX and 10BASE-T. Since
100BASE-TX is defined to have a higher priority than 10BASE-T, the station and hub will use
100BASE-TX. This decision takes place independently on each side of the link, but since each
side uses the same decision-making process and priorities, the same decision is reached on each
end. Because each end of the connection agrees on what implementation of Ethernet is being
used, the potential problem of incompatible signaling is averted.
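The decision process is easy to model. Here is a minimal sketch of the highest-common-denominator selection, using the priority list above (the variable names are ours):

    PRIORITY = [                       # highest priority first (clause 28B.3)
        "100BASE-TX full duplex",
        "100BASE-T4",
        "100BASE-TX",
        "10BASE-T full duplex",
        "10BASE-T",
    ]

    def negotiate(this_end, other_end):
        """Pick the highest-priority implementation both ends support."""
        common = set(this_end) & set(other_end)
        for mode in PRIORITY:
            if mode in common:
                return mode
        return None                    # no common implementation

    station = {"100BASE-T4", "100BASE-TX", "10BASE-T"}
    hub = {"100BASE-TX full duplex", "100BASE-TX", "10BASE-T"}
    print(negotiate(station, hub))     # -> 100BASE-TX, as in the example above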
Repeaters In Fast Ethernet
In Fast Ethernet the number of repeaters allowed per network segment is only 1 or 2. Whether
one or two repeaters may be used is determined by what class of repeater will be used on the
segment. Two classes of Fast Ethernet repeater are defined, Class I and Class II. Only one Class I
repeater can be used in a single collision domain. Two Class II repeaters are allowed in a single
collision domain, with up to a 5 metre inter-repeater link between them. The only technical
difference between Class I and Class II repeaters is that Class II repeaters are faster than Class I
repeaters. This allows Class I repeaters to provide other services besides simple repeating, such as
translating between 100BASE-TX and 100BASE-T4. Class II repeaters are primarily used to link
two hubs supporting only a single implementation of Fast Ethernet.
However, with the trade-off in fewer repeaters comes greater intelligence in each repeater. In
addition to implementing the functionality of 10Mbps repeaters, 100Mbps repeaters are
responsible for the following:
Replacement Of Illegal Byte Codes
Unlike classic Ethernet, Fast Ethernet does not send a straightforward representation of the actual
bits across the physical layer. A different representation of the information is sent instead. As a
result, there are possible patterns on the wire which are not defined for use in Fast Ethernet. If a
repeater detects an illegal pattern on the wire, it may replace that pattern (and every remaining
pattern in the frame) with a special symbol identifying that the frame is corrupt.
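In 100BASE-TX, for example, only 16 of the 32 possible five-bit 4B5B symbols carry data, so anything outside that set is illegal on the wire. Here is a sketch of the detect-and-replace step (the symbol set is the commonly published 4B5B data table; the "BAD" marker merely stands in for the special symbol the repeater actually sends):

    # The 16 five-bit 4B5B symbols that carry data in 100BASE-TX/FX.
    VALID_4B5B = {
        "11110", "01001", "10100", "10101", "01010", "01011", "01110",
        "01111", "10010", "10011", "10110", "10111", "11010", "11011",
        "11100", "11101",
    }

    def scrub(symbols):
        """Once an illegal symbol appears, mark it and all that follow."""
        out, corrupt = [], False
        for sym in symbols:
            corrupt = corrupt or sym not in VALID_4B5B
            out.append("BAD" if corrupt else sym)
        return out

    print(scrub(["11110", "00100", "10101"]))   # ['11110', 'BAD', 'BAD']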
Data Translation
For repeaters that implement more than one implementation of Ethernet, the repeater will change
the data encoding to be appropriate to the outgoing ports. 100BASE-T4 and 100BASE-TX use
very different representations when sending data across a network. A Class I repeater which
implements both 100BASE-TX and 100BASE-T4 needs to ensure that the signal going across the
wire is the appropriate representation for the Ethernet implementation.
Error Handling And Partitioning
A Fast Ethernet repeater will monitor the state of each port in order to protect the network from
any faults that might interrupt the flow of information.
If 60 consecutive collisions are detected from any particular port, the repeater will partition that
port: it will stop forwarding information from that port to the rest of the network, but will still
continue to repeat all frames from the network to the port. If the station on that port has broken so
that it no longer is obeying the rules of CSMA/CD, then it needs to be separated from the network
to allow traffic to flow.
However, it is possible that there could be 60 consecutive collisions on an extremely busy segment, so the repeater still forwards information to that port. If the repeater then detects between 450 and 560 bits of information from that port without a collision occurring, it will re-activate that port: a legal frame has been received from the partitioned port, so we know that the hardware is working.
If between 40000 and 75000 consecutive bits are received from a port, the device at the other end
of that cable is assumed to be "jabbering", sending an endless stream of bits, so the output from
the port is cut off from the rest of the network. Such a "jabbering" device could prevent any
traffic from flowing on a network, since there would never be a break for the other stations to
transmit. If the station stops "jabbering", then the repeater will once again activate the port.
In 100BASE-TX and 100BASE-FX, a repeater will further monitor traffic to ensure that only
frames with a valid preamble are passed. If two consecutive "false carrier events" occur, or a
"false carrier event" lasts for 450-500 bits, the repeater will declare that link to be "unstable" and
stop sending information to that port. As a result, faulty links are isolated from the rest of the
network, resulting in improved overall network reliability. The link will be reactivated if between
24814 and 37586 bit-times have passed without any information having been received, or if a
valid carrier is received after between 64 and 86 bit-times of idle have occurred.
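Put together, the partitioning rules amount to a little per-port state machine. Here is a minimal sketch using the thresholds quoted above (the class and method names are invented for illustration):

    class PortMonitor:
        """Per-port bookkeeping a Fast Ethernet repeater performs."""

        def __init__(self):
            self.collisions = 0          # consecutive collisions seen
            self.partitioned = False

        def on_collision(self):
            self.collisions += 1
            if self.collisions >= 60:
                self.partitioned = True  # stop repeating FROM this port

        def on_transmission(self, bits):
            if bits >= 40000:
                self.partitioned = True  # endless stream: assume jabber
            elif bits >= 450:
                # A legal frame with no collision: the hardware works.
                self.collisions = 0
                self.partitioned = False

    port = PortMonitor()
    for _ in range(60):
        port.on_collision()
    print(port.partitioned)              # True: the port is cut off
    port.on_transmission(512)
    print(port.partitioned)              # False: a legal frame re-activated it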
802.3 Fast Ethernet (100 Mb/sec) Model
Introduction
Here we see a logical drawing of the Fast Ethernet Data Link Layer sublayers. Data is passed
down from the upper layers (such as TCP/IP or Novell Netware) to the LLC sublayer. From there
it is passed to the MAC sublayer and then, depending on whether this is a 100BASE-T4 or
100BASE-TX environment, either down the right or left-hand path to the wire.
We will intentionally avoid a detailed discussion of exactly what goes on at each of these layers
here. Some of the layers' functions, such as 8B6T encoding, Fan-out and NRZI signaling are
labeled and will be discussed in this essay.
In 10Mbps Ethernet, the data is handed directly from the MAC layer to the PMA (Physical
Medium Attachment) sublayer and onto the wire. The Reconciliation, PCS and PMD sublayers
do not exist in 10Mbps Ethernet.
Troubleshooting techniques for Fast Ethernet
Introduction
This page will primarily discuss problems unique to Fast Ethernet:

- The Collision Domain

- Incompatible Ethernet Jabber

- Auto-negotiation Priorities And Alternatives

- Incompatible Cabling Specifications
The Collision Domain
The single biggest change in network design in Fast Ethernet is the smaller collision domain.
Technically, the size of a collision domain in all flavors of Ethernet is exactly the same - 256 bits.
On the wire, ten times as many 100Mbps bits can occupy the same space as an equal number of
10Mbps bits, so the collision domain in 100Mbps Ethernet can be physically only one tenth the
size of a 10Mbps collision domain.
Effectively this means that whereas up to four hubs can legally be cascaded in 10Base-T between
any two stations, only one (or two) hubs can be used in a single segment in 100BASE-T without
going through an interconnect device that provides link segmentation, such as a store-and-forward bridge, a switch, or a router. A separate section of the Compendium discusses
INTERCONNECT DEVICES in detail. If you see signs of corruption on your network that
correspond to propagation delay, check to make sure that you're not cascading too many hubs.
You can make some generalizations regarding the structure of corrupted data frames (as
discussed in the 10 Mbps Ethernet FRAME CORRUPTION section) but remember that these
corruption patterns may be quite misleading, since you have a hub or switch in the network.
Note that many hub vendors sell stackable hubs. Hubs in a single stack connected via a common
backplane are usually considered to be a single hub in terms of propagation delay, but multiple
stacks cascaded externally via 100BASE-TX, 100BASE-T4, or 100BASE-FX could definitely
cause problems. These 100BASE standards are discussed in the INTRODUCTION to this Fast
Ethernet section.
Incompatible Ethernet Jabber
Another potential problem in 100Mbps Ethernet is the use of RJ-45 jacks for more than one
flavor of Ethernet. Since 100BASE-TX and 100BASE-T4 both use RJ-45 jacks, as do 10Base-T
and many other network technologies, the IEEE 802.3 specified an auto-negotiation protocol to
allow stations to figure out the networking technology to use.
Unfortunately, they made its implementation optional. If you're using equipment that does not
implement IEEE-spec auto-negotiation, the incompatible Ethernet signals could prevent one of
your stations from connecting to your network, or even simulate "jabber" by constantly
transmitting a TX idle stream and bringing down the network.
The possibility for this jabber is uncertain, considering that the flavors of Ethernet use different
signal formats in transmission. Even if data is not exchanged, it is still possible that incompatible
Ethernet flavors could assume that they have a proper connection. Ethernets using RJ-45
connections to a hub use a Link Test Pulse to verify link integrity. This pulse is the same in all
flavors of Ethernet if auto-negotiation is not used. The auto-negotiation protocol itself uses a
modified form of these pulses to negotiate a common Ethernet implementation.
If Ethernet incompatibility jabber were to occur between 100BASE-TX and another flavor of
Ethernet, the results could be catastrophic, as 100BASE-TX transmits a continuous idle signal
between frames. Although transparent to 100BASE-TX, this idle signal would completely busy
out a 10Base-T or 100BASE-T4 segment. On the other hand, the 802.3 spec states that a Fast
Ethernet repeater should implement jabber control, automatically partitioning off any port that is
streaming information for more than 40000 to 75000 bits. If the repeater were to partition off the
"jabbering" port, the symptom would be reduced to inability to connect the 100BASE-TX station
to the network.
Auto-negotiation Priorities And Alternatives
If the station and repeater both support 100BASE-TX and 100BASE-T4 and 802.3 autonegotiation, the link will autonegotiate to 100BASE-T4 instead of 100BASE-TX. Since
100BASE-TX requires Category 5 cabling but 100BASE-T4 requires only Category 3,
100BASE-T4 is assumed to be a better default.
If the cabling is known to be UTP-5, then it is probably more efficient to turn off auto-negotiation
and use 100BASE-TX wherever possible. 100BASE-T4 requires more overhead than TX because
it multiplexes and demultiplexes the data stream over three wire pairs. There is also significantly
less overhead in translating between 100BASE-TX and 100BASE-FX than between either of them and 100BASE-T4, as TX and FX both use 4B5B encoding instead of T4's 8B6T. 100BASE-TX and 100BASE-FX also leave open the possibility of Full Duplex communication, although full duplex is not yet
part of the 802.3 spec.
On the other hand, 100BASE-TX sends an idle signal whenever it is not transmitting data. The
802.3 spec implies that it may very well be preferable to use 100BASE-T4 for battery-powered
operation, since the card would only be transmitting when there is actual information to be
moved.
Incompatible Cabling Specifications
One final problem with the advent of Fast Ethernet is the different cabling specifications. In
classic Ethernet it was difficult to mistake 10Base-2 for 10Base-5. With Fast Ethernet, special
care must be taken to verify that the entire connection between station and concentrator either
supports TX's 31.25MHz signal or maintains T4's four pairs with proper twist. There are a
number of good cable testers and pair scanners available to assist you in determining this for your
network.
Introduction To Firewalls
Introduction
A firewall is simply a system designed to prevent unauthorised access to or from a private
network. Firewalls can be implemented in both hardware and software, or a combination of both.
Firewalls are frequently used to prevent unauthorised Internet users from accessing private
networks connected to the Internet. All data entering or leaving the intranet passes through the
firewall, which examines each packet and blocks those that do not meet the specified security
criteria.
Generally, firewalls are configured to protect against unauthenticated interactive logins from the
outside world. This helps prevent "hackers" from logging into machines on your network. More
sophisticated firewalls block traffic from the outside to the inside, but permit users on the inside
to communicate a little more freely with the outside.
Firewalls are also essential since they can provide a single block point where security and audit
can be imposed. Firewalls provide an important logging and auditing function; often they provide
summaries to the admin about the type and volume of traffic that has been processed through them. This is an important point: providing this block point can serve the same purpose (on your network) as an armed guard can (for physical premises).
Theoretically, there are two types of firewalls:
1. Network layer
2. Application layer
They are not as different as you may think, as described below.
Which is which depends on what mechanisms the firewall uses to pass traffic from one security
zone to another. The International Standards Organization (ISO) Open Systems Interconnect
(OSI) model for networking defines seven layers, where each layer provides services that higher-level layers depend on. The important thing to recognize is that the lower-level the forwarding
mechanism, the less examination the firewall can perform.
Network layer firewalls
These generally make their decisions based on the source address, destination address and ports in individual IP packets. A simple router is the traditional network layer firewall, since it is not able to make particularly complicated decisions about what a packet is actually talking to or where it actually came from. Modern network layer firewalls have become increasingly
sophisticated, and now maintain internal information about the state of connections passing
through them at any time.
One important difference about many network layer firewalls is that they route traffic directly through them, so to use one you either need to have a validly assigned IP address block or to use a private internet address block. Network layer firewalls tend to be very fast and tend to be mostly transparent to their users.
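A toy rule table shows the flavour of what a network layer firewall does: match each packet's source, destination and port against an ordered list of rules, with the first match winning. All addresses and rules here are invented for illustration:

    from ipaddress import ip_address, ip_network

    RULES = [
        # (source,           destination,     port,   action)
        ("any",              "192.168.1.25",  25,     "allow"),  # mail host
        ("192.168.1.0/24",   "any",           "any",  "allow"),  # outbound
        ("any",              "any",           "any",  "deny"),   # default
    ]

    def matches(pattern, value):
        if pattern == "any":
            return True
        if "/" in str(pattern):                    # a network block
            return ip_address(value) in ip_network(pattern)
        return str(pattern) == str(value)

    def decide(src, dst, port):
        for r_src, r_dst, r_port, action in RULES:
            if (matches(r_src, src) and matches(r_dst, dst)
                    and (r_port == "any" or r_port == port)):
                return action

    print(decide("10.0.0.9", "192.168.1.25", 25))  # allow
    print(decide("10.0.0.9", "192.168.1.30", 80))  # deny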
Application layer firewalls
These generally are hosts running proxy servers, which permit no traffic directly between
networks, and which perform elaborate logging and examination of traffic passing through them.
Since proxy applications are simply software running on the firewall, it is a good place to do lots
of logging and access control. Application layer firewalls can be used as network address
translators, since traffic goes in one side and out the other, after having passed through an
application that effectively masks the origin of the initiating connection.
Having an application in the way in some cases may impact performance and may make the
firewall less transparent. Early application layer firewalls were not particularly transparent to end-users and could require some training. However, more modern application layer firewalls are often
totally transparent. Application layer firewalls tend to provide more detailed audit reports and
tend to enforce more conservative security models than network layer firewalls.
The future of firewalls lies somewhere between network layer firewalls and application
layer firewalls. It is likely that network layer firewalls will become increasingly aware of the
information going through them, and application layer firewalls will become more and more
transparent. The end result will be kind of a fast packet-screening system that logs and checks
data as it passes through.
Firewall Topologies
Introduction
In this section we are going to talk about the different ways a firewall can be set up. Depending
on your needs, you can have a very simple firewall setup which will provide enough protection
for your personal computer or small network, or you can choose a more complicated setup which
will provide more protection and security.
Let's have a look starting from the simple solutions, and then move on to the more complicated
ones. Just keep in mind that we are not talking about a firewall that is merely a piece of software running on the same computer you use to connect to the Internet and do your work; we are talking about a physical computer which acts as a dedicated firewall.
A Simple Dual-Homed Firewall
The dual-homed firewall is one of the simplest and possibly most common ways to use a firewall.
The Internet comes into the firewall directly via a dial-up modem (like me :) ) or through some
other type of connection like an ISDN line or cable modem. You can't have a DMZ (See the
DMZ page for more info) in this type of a configuration.
The firewall takes care of passing packets that pass its filtering rules between the internal network
and the Internet, and vice versa. It may use IP masquerading and that's all it does. This is known
as a dual-homed host. The two "homes" refer to the two networks that the firewall machine is part
of - one interface connected to the outside home, and the other connected to the inside home.
This particular setup has the advantage of simplicity and if your Internet connection is via a
modem and you have only one IP address, it's what you're probably going to have to live with
unless you create a more complex network like the one we are going to talk about.
A Two-Legged Network with a full exposed DMZ
In this more advanced configuration, shown in the picture below, the router that connects to the outside world is connected to a hub (or switch).
Machines that want direct access to the outside world, unfiltered by the firewall, connect to this
hub. One of the firewall's network adapters also connects to this hub. The other network adapter
connects to the internal hub. Machines that need to be protected by the firewall connect to
this hub. Any of these hubs could be replaced with switches for added security and speed, and it
would be more effective to use a switch for the internal hub.
There are good things about the exposed DMZ configuration. The firewall needs only two
network cards. This simplifies the configuration of the firewall. Additionally, if you control the
router you have access to a second set of packet-filtering capabilities. Using these, you can give
your DMZ some limited protection completely separate from your firewall.
On the other hand, if you don't control the router, your DMZ is totally exposed to the Internet.
Hardening a machine enough to live in the DMZ without getting regularly compromised can be
tricky.
The exposed DMZ configuration depends on two things: 1) an external router, and 2) multiple IP
addresses.
If you connect via PPP (modem dial-up), or you don't control your external router, or you want to
masquerade your DMZ, or you have only 1 IP address, you'll need to do something else. There are
two straightforward solutions to this, depending on your particular problem.
One solution is to build a second router/firewall. This is useful if you're connecting via PPP. One
machine is the exterior router/firewall (Firewall No.1). This machine is responsible for creating
the PPP connection and controls the access to our DMZ zone. The other firewall (Firewall No.2)
is a standard dual-homed host just like the one we spoke about at the beginning of the page, and
its job is to protect the internal network. This is identical to the situation of a dual homed firewall
where your PPP machine is the local exterior router.
The other solution is to create a three-legged firewall, which is what we are going to talk about
next.
The Three-legged firewall
This means you need an additional network adapter in your firewall box for your DMZ. The
firewall is then configured to route packets between the outside world and the DMZ differently
than between the outside world and the internal network. This is a useful configuration, and I
have seen many of our customers using it.
The three-legged setup can also give you the ability to have a DMZ if you're stuck with the
simple topology outlined first (dual homed firewall). Replace "router" with "modem," and you
can see how this is similar to the simple topology (dual homed firewall), but with a third leg stuck
on the side :)
If you're being forced or have chosen to IP masquerade, you can masquerade the machine or
machines in the DMZ too, while keeping them functionally separate from protected internal
machines. People who have cable modems or static PPP connections can use this system to run
various servers within a DMZ as well as an entire internal network off a single IP address. It's a
very economical solution for small businesses or home offices.
The primary disadvantage to the three-legged firewall is the additional complexity. Access to and
from the DMZ and to and from the internal network is controlled by one large set of rules. It's
pretty easy to get these rules wrong if you're not careful !
On the other hand, if you don't have any control over the Internet router, you can exert a lot more
control over traffic to and from the DMZ this way. It's good to prevent access into the DMZ if
you can.
And I think that just about completes our discussion of Firewall Topologies !
The DMZ Zone
Introduction
The De-Militarized Zone, or DMZ, is an expression that comes from the Korean War. There, it
meant a strip of land forcibly kept clear of enemy soldiers. The idea was to accomplish this
without risking your own soldiers' lives, thus mines were scattered throughout the DMZ like
grated Romano on a plate of fettucine :) The term has been assimilated into networking, without
the cheese :)
Network geeks use it to mean: "a portion of your network which, although under your control, is
outside your heaviest security." Compared to the rest of your network, machines you place in the
DMZ are less protected, or flat-out unprotected, from the Internet.
Once a machine has entered the DMZ, it should not be brought back inside the network again.
Assuming that it has been compromised in some way, bringing it back into the network is a big
security hazard.
Use of the DMZ
If you decide to build one, what do you do with it? Machines placed in the DMZ usually offer
services to the general public, like Web services, domain name services (DNS), mail relaying and
FTP services (all these buzzwords will be explained next). Proxy servers can also go in the DMZ.
If you decide to allow your users Web access only via a proxy server, you can put the proxy in the DMZ and set your firewall rules to permit outgoing access only to the proxy server.
As long as you've attended to the following points, your DMZ should be ok:
If you put a machine in the DMZ, it must be for a good reason. Sometimes, companies will set up
a few workstations with full Internet access within the DMZ. Employees can use these machines
for games and other insecure activities. This is a good reason if the internal machines have no
Internet access, or extremely limited access. If your policy is to let employees have moderate
access from their desktops, then creating workstations like this sends the wrong message. Think
about it: The only reason why they would use a DMZ machine is if they were doing something
inappropriate for the workplace !
It should be an isolated island, not a stepping stone. It must not be directly connected to the
internal network. Furthermore, it shouldn't contain information that could help hackers
compromise other parts of the network. This includes user names, passwords, network hardware
configuration information etc.
It must not contain anything you can't bear to lose. Any important files placed on the DMZ
should be read-only copies of originals located within the network. Files created in the DMZ
should not be able to migrate into the network unless an administrator has examined them. If
you're running a news server and would like to archive news, make sure the DMZ has its own
archival system.
What sort of things shouldn't you do? Example: If you're running an FTP server in the DMZ,
don't let users put confidential information on there so they can get it from home later.
It must be as secure a host as you can make it. Just because you're assuming it's secure doesn't
guarantee that it is. Don't make it any easier for a hacker than absolutely necessary. A hacker may
not be able to compromise your internal network from your DMZ, but they may decide to use it
to compromise somebody else's network. Give serious thought to not running Windows on your
DMZ machines; it's inherently insecure and many types of intrusions can't be detected on
Windows. Linux or OpenBSD can provide most, if not all, of the needed functionality along with a
more secure environment.
DoS & DDoS Attacks
Introduction
In this section we are going to have a quick look at DoS and DDoS attacks, how they are
performed and why they attract so much attention ! We won't be getting into much detail as we
are just trying to give everyone a better understanding of the problem.
Denial of Service attacks
Denial of Service (DoS) attacks can be a serious crime, with penalties that include years of imprisonment, and many countries have laws that attempt to protect against them. At the very least,
offenders routinely lose their Internet Service Provider (ISP) accounts, get suspended if school
resources are involved, etc.
There are two types of DoS attacks:
1) Operating System attacks: these target bugs in specific operating systems and can be fixed with patches.
2) Networking attacks: these exploit inherent limitations of networking and may require firewall protection.
Operating System Attacks
These attacks exploit bugs in a specific operating system (OS), which is the basic software that
your computer runs, such as Windows 98 or MacOS. In general, when these problems are
identified, the vendor, such as Microsoft, will release an update or bug fix for them.
So, as a first step, always make sure you have the very latest version of your operating system,
including all bug fixes. All Windows users should regularly visit Microsoft's Windows Update
Site (and I mean at least once a week!) which automatically checks to see if you need any
updates.
Networking Attacks
These attacks exploit inherent limitations of networking to disconnect you from your ISP, but
don't usually cause your computer to crash. Sometimes it doesn't even matter what kind of
operating system you use and you cannot patch or fix the problem directly. The attacks on Yahoo
and Amazon by "mafiaboy" were large scale networking attacks and demonstrated that nobody is
safe against a very determined attacker.
Network attacks include ICMP flood (ping flood) and smurf, which are outright floods of data meant to overwhelm the capacity of your connection; spoofed unreach/redirect, also known as "click", which tricks your computer into thinking there is a network failure so that it voluntarily breaks the connection (this is used to disconnect mIRC users); and a whole new generation of distributed
denial of service attacks (we speak about them later on).
Just because you were disconnected with some unusual error message doesn't mean you were
attacked. Almost all disconnects are due to natural network failures. On the other hand, you
should feel suspicious if you are frequently disconnected.
What can you do about networking attacks? If the attacker is flooding you, essentially you need
to have a better connection than he does. Otherwise your only recourse may be a firewall run by
your ISP.
Distributed Denial-of-Service
A distributed denial-of-service (DDoS) attack is similar to the DoS attack described above, but involves a multitude of compromised systems attacking a single target, thereby causing denial of service for users of the targeted system. The flood of incoming messages to the target system essentially forces it to shut down, thereby denying service to legitimate users.
A hacker (or, if you prefer, cracker) begins a DDoS attack by exploiting a vulnerability in one
computer system and making it the DDoS "master." It is from the master system that the intruder
identifies and communicates with other systems that can be compromised. The intruder loads
cracking tools available on the Internet on multiple -- sometimes thousands of -- compromised
systems. With a single command, the intruder instructs the controlled machines to launch one of
many flood attacks against a specified target. The resulting flood of packets sent to the target causes a denial of service.
While the press tends to focus on the target of DDoS attacks as the victim, in reality there are
many victims in a DDoS attack -- the final target as well as the compromised systems controlled by the intruder.
Locking Windows
Introduction
Static IPs are part of the persistent-connection problem, but Windows itself is also to blame.
(Consumer editions of Windows, anyway--NT and Windows 2000 are a different game entirely.)
Windows 95 and 98 are full of security gaps. Here are a few things you should do to close them
up.
What To Do
Turn off file sharing if you don't need it. If you're not sharing files with other computers--usually
you would do so over a home network--then disabling this feature closes up plenty of holes. To
ensure file sharing is off, right-click Network Neighborhood and pick Properties. Click the button
labeled "File and Print Sharing" and make certain that both boxes in the
resulting dialog box are unchecked.
Set up file sharing carefully if you need to use it. Right-click Network Neighborhood, choose
Properties, and click "File and Print Sharing." Check the box next to "I want to give others access
to my files." Next, pick or create a specific folder you'll let people access, such as c:\My
Documents\Photos. In Windows Explorer, right-click the folder and pick Sharing from the
context menu. In the dialog box that appears, click the radio button next to Shared As: and enter a
name for the folder in the field to the right. (The name you choose is the name that will appear to
those who browse the folder over the network or the Internet).
If you want people to be able to add, remove, or change documents in the folder, click the Full
radio button under Access Type.
If you want people to be able only to copy or look at the files in the folder, click the Read Only
radio button.
In either case, be sure to enter a password (no fewer than four and no more than eight characters)
in the field at the bottom of the dialog box. The dialog box will allow you to click OK without
your entering a password, but in that case, anybody who browses the folder will get access to the
files inside.
Monitor your shared folders using the Windows Net Watcher utility. The app displays all the
users currently connected to shared folders and lets you disconnect them if necessary. The utility
isn't part of Windows 95 or 98's default installation, but you can install it from your Windows
CD-ROM by following these steps:
1. Click Start, Settings, Control Panel and open Add/Remove Programs.
2. Click the Windows Setup tab. In Windows 98, scroll down the list of setup categories and double-click System Tools. In Windows 95, find and double-click Accessories.
3. Check the box next to Net Watcher, and click OK twice to exit the dialogs.
4. Windows will install Net Watcher. After your system's rebooted, choose Start, Programs, Accessories, System Tools, Net Watcher to launch the utility.
Download system patches. Windows 98 users can head to the Windows Update Web site to
automatically download security-related patches for their operating system. If you're still using
Windows 95, you'll have to download each Security Update patch manually at the Windows 95
Downloads page.
Check your shields. After you've taken the steps above, the Shields UP! Web site (run by Gibson
Research Corporation) can look at your connection to the rest of the world and let you know if
any holes remain. Drop by and see if you have any further vulnerabilities. Shields UP! also
contains some extremely in-depth advice regarding Windows networking settings.
Securing Your Home Network
Introduction
Most people who use computers these days have had to deal with a security issue of some kind -- whether they are aware of it or not. Almost everyone has been infected by one of the many worms or viruses floating around the Internet, or has had someone use their password. Most home computer users are victims of attacks that they have no idea about.
For example, certain programs called 'spyware' come packaged inside seemingly friendly programs you download. This spyware can do any one of a number of things, though most often it sends your personal information (such as name and email address) and information about what sites you visit to certain companies.
These in turn will sell your personal information to the spammers and email marketers who will
proceed to clog your inbox with junk that they think you might be interested in. To explain how
this works, you download a program – say a video player – from the Internet and install it. In the
background it installs some spyware. Now you start surfing to car sites, soon you can expect your
email inbox to be full of spam offering you great deals on used cars etc.
A lot of people work on the principle that their home computer contains nothing interesting enough for an attacker. What they don't realise is that while an attacker may not target your system specifically, it is very common for attackers to use programs that scan vast ranges of the Internet looking for vulnerable systems. If yours happens to be one, it will be automatically taken over and placed at the attacker's command. From here he can do a variety of things, like using your computer to attack other sites on the Internet or capturing all your passwords.
Worms and email viruses work the same way: they infect one machine, and then spread by trying to email themselves to everyone in your address book, or by turning your machine into a scanning
system to find other targets. They may even contain a malicious payload that can destroy your
files, or even worse – email your private documents to everyone you know (this was the case with
a worm a few years ago).
Given the things we use the computer for these days, such as online shopping for books or music, electronic banking etc., these threats have more serious implications than most people realise. You may not have anything worthwhile on your computer, but what if an attacker is able
to steal your credit card information when you are buying a book from Amazon.com, or steal the
password to your online banking account ?
Luckily the steps you have to take to secure your own PC are fairly simple and can be
accomplished by non-technical users given the right guidance. If you follow the guidelines we
have given here, you will be safe from most forms of Internet based threats. So here are a few
steps you can take.
Email Security
A lot of viruses these days, such as the recent MyDoom virus, spread by emailing themselves to people as an attachment. The email can appear to come from anywhere; most often it will appear to come from a friend, or an address like [email protected] if you use a Yahoo account. The
email will try and convince you to download and run the attachment which may appear to be a
harmless JPG image or SCR screensaver. In fact, the attachment is a malicious program (known
as malware), and once opened, can do any of the nasty things we've listed above. Here are the
rules you should follow when checking your email.
1. Has the email come from someone you know ? If so, were you expecting the email and
its attachment ? If not, try and confirm with the person over the phone or some other
medium.
2. Does the message make sense ? If you receive an email from your computer-illiterate parents saying 'download this new screensaver', you can be quite sure something is fishy.
3. Does the email appear to come from someone in authority? If the email comes from what
appears to be the administrator of your email service, you should double-check with
them. No email service will ever ask you to reveal your password, or threaten to
terminate your account unless you download the instructions in an attachment. If you are
unsure, always contact their tech-support personnel before opening any attachment.
If you've followed the above steps and you still think you need to download the attachment,
make sure you scan it before downloading. Most popular email services like Hotmail and Yahoo
offer the facility of scanning attachments – use this feature! Once you've downloaded it, it
never hurts to scan it with your own anti-virus software, which you should have installed (we will
talk about this in the next tip). Only after you are completely certain the attachment is safe
should you open it. If it is a program (ending in .exe, or something like .jpg.exe), you
should be extra careful. Remember that anti-virus scanners must be up to date to be able to catch
new viruses, and even then, you may encounter a virus before the anti-virus companies have been
able to analyse it.
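On that last point, here's a minimal Python sketch of the kind of check worth doing mentally on every attachment; the filenames and the list of risky extensions are just illustrative assumptions:

# Flag attachment filenames that warrant extra caution.
RISKY_EXTENSIONS = {".exe", ".scr", ".pif", ".com", ".bat", ".vbs"}

def is_suspicious(filename: str) -> bool:
    parts = filename.lower().split(".")
    # A risky final extension, e.g. "holiday.exe"
    if "." + parts[-1] in RISKY_EXTENSIONS:
        return True
    # More than one extension, e.g. "photo.jpg.scr" – this errs on the
    # side of caution and also flags harmless names like "notes.tar.gz".
    return len(parts) > 2

print(is_suspicious("photo.jpg"))      # False
print(is_suspicious("photo.jpg.exe"))  # True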
Install Anti-virus Software
90% of the threats you will face as a home user come not from hardcore cyber criminals, but
from automatically spreading viruses known as worms. The best way to guard against virus threats is
to install a good anti-virus scanner. Two of the best known are Norton AntiVirus and McAfee.
Remember that the anti-virus needs to have its scanning database (known as virus definitions)
updated regularly. You should try to update the definitions once a week. The longer you put it
off, the larger the new definitions package will be, and the more viruses your system will be
vulnerable to. All virus scanners offer some form of automatic update system so that you don't
have to remember to keep updating the definitions yourself. Use this feature.
Disable Windows File Sharing
Most people know that Windows allows you to share files with other people on your network.
This is called “Windows File Sharing”, and is what you make use of whenever you open network
neighborhood. What most people don't know is that even if you don't specifically choose folders
to share, Windows automatically shares your entire hard disk with anyone who knows your
system's Administrator account password. Not only will it share the hard disk, it will also allow that
person full read and write access to it. To disable file sharing in Windows XP, go through
the following steps:
1. Go to the Start menu and select the Control Panel.
2. In the Control Panel window, double-click on Network Connections.
3. Right-click on the icon for your network connection in the window that appears. You can
do this for all your network connections (e.g. VSNL, LAN etc)
4. From the menu which appears, choose Properties (use the left mouse button to make your
selection).
5. Under This connection uses the following items, highlight File and Printer Sharing for
Microsoft Networks.
6. Click Uninstall.
7. When you are asked if you are sure you want to uninstall File and Printer Sharing for
Microsoft Networks, click Yes.
8. Click OK or Close to close the Local Area Connection Properties window.
It is also important to understand that most people just press Enter when prompted to choose an
Administrator password during installation. This is a very bad idea, as it means that anyone can log
into your system as an administrator (with full access) without supplying a password. You should
therefore choose a strong password for the Administrator account, and for any other account you
create on the system if you share it with other people. Read the tip on choosing strong
passwords later on.
Update the Operating System
From time to time, people discover bugs or vulnerabilities in operating systems. These
vulnerabilities often allow an attacker to exploit something built into your operating system and
take it over. To give you a simple example, a vulnerability may be found in MSN Messenger and
an attacker can exploit it to gain control of your system. Whenever such a vulnerability is found,
the operating system vendors release what are known as ‘patches' which will fix the problem.
If you make sure your system is up to date with the latest patches, an attacker will not be able to
exploit one of these vulnerabilities. To update Windows, you have to run the Windows Update
service, either by clicking Start >> Programs >> Windows Update, or by going to
http://windowsupdate.microsoft.com/. From there you can scan your system for missing
patches and then download the ones you need. You should try to do this regularly so that the
backlog of patches you need to download does not grow very large. If you miss out on a lot of patches,
the download could be really huge. This is also the case when you reinstall the operating system.
Install A Personal Firewall
A personal firewall is a piece of software that runs on your machine and lets you decide exactly
what data is allowed to enter or leave your machine over the network. For example, if an attacker
is scanning your system for vulnerabilities, it will alert you. If an attacker is just looking through
ranges of the Internet for targets, your system will simply not respond to their probes.
In short, your system operates in a stealthy mode – invisible to an attacker. You also need to be
careful about what data leaves your system via the network: viruses and worms try to email
themselves to other people or use your machine to scan for more victims, spyware tries to send
data back to an advertiser, and trojan horse programs may try to connect to an attacker. The
personal firewall helps by alerting you every time a program tries to access the network
connection. This can be tricky for novice users, because even when legitimate programs such as
Internet Explorer try to access the Internet, the firewall will pop up a warning box.
However, if you are unsure if an alert is malicious or not, most firewalls have a ‘more info' button
on the alert which will take you to their website and tell you whether the program is a legitimate
one or a known offender. A personal firewall is no good if you just keep answering ‘yes' to every
program that wants to access your internet connection.
Take the trouble to understand which programs on your machine need legitimate access, and only
allow those. For example, if you just downloaded a new screensaver program and the firewall says
it wants to access the Internet, you can be pretty sure it is trying to send some data back
somewhere. It may be spyware or a trojan. Soon you will get used to weeding out the suspicious
programs. If you have a permanently-on connection like cable modem or DSL, you should most
definitely install a personal firewall. Some of the good ones you can get are:
ZoneAlarm – Very easy to install and use; there is a free version with slightly fewer features than the
professional version. It gives you very good information about the alerts it generates and is considered
the market leader.
BlackICE – Another very highly rated personal firewall. It is not as user friendly as ZoneAlarm,
but allows for some further configuration options.
Sygate Personal Firewall – Also less user friendly, but it allows you to make some very powerful
configuration changes and it contains a rudimentary intrusion detection system to alert you about
common attacks.
If you go to any search engine and search for ‘personal firewall' you will find a whole lot of other
options. If you use Windows XP, it is a good idea to turn on the built-in Internet Connection
Firewall by double-clicking your connection icon near the clock, then clicking Properties >>
Advanced >> Protect my computer and network. This built-in firewall is not meant to be a
replacement for a full solution like the ones above: it only filters incoming traffic and will not
alert you if a trojan or worm tries to use your machine for some malicious purpose.
Scan For Spyware
Throughout this article we have talked about spyware that lets companies customise their
advertising by watching what you do on the net. While spyware may not be destructive, it is one
of the biggest pests around and will result in a mailbox full of spam before you know it. Fortunately,
there are a number of tools that will scan for well-known spyware on your machine and allow
you to delete it safely.
Note that AntiVirus packages do not usually alert you when you install spyware because it is not
considered harmful to the computer itself. Two of the most popular programs for detecting and
removing spyware are Ad-aware and Spybot Search & Destroy.
Choose Strong Passwords
Most of the time an attacker need not resort to a technical hack to break into a system, because he
can simply guess poorly chosen passwords. Here are some general rules for selecting a
password:
1. Do not use a word that can be found in a dictionary, or a birthdate or name – these are
very easy to crack.
2. Adding numbers like 123 at the end does not make the password much more difficult to crack.
3. Choose a password at least 6 characters long.
4. Use mixed capitalisation for the letters, e.g. "suRViVor" (don't use this one – it's in a
dictionary, remember... it's just an example).
5. Add some random numbers at the end or in the middle.
6. If possible, use a few special characters like !(;,$#& etc.
7. When choosing a password hint question, choose one that only you will be able to
answer. "What is my birthdate?" is something anyone who knows you even remotely
will be able to guess.
A very useful method for choosing an easy-to-remember yet random-looking password is to take
a line of a song that you remember and use the first letter of each word in that line. You can then
randomise the capitalisation, add a couple of numbers and special characters, and end up with a
very strong password that is easy for you to remember but still difficult to crack.
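For the curious, here's a minimal Python sketch of that recipe; the song line and the exact substitutions are just examples, not a recommended scheme:

import random

def password_from_line(line: str) -> str:
    # Take the first letter of each word in the line...
    letters = [word[0] for word in line.split()]
    # ...randomise the capitalisation of each letter...
    letters = [c.upper() if random.random() < 0.5 else c.lower() for c in letters]
    # ...and sprinkle in a couple of digits and a special character.
    letters.append(str(random.randint(10, 99)))
    letters.append(random.choice("!(;,$#&"))
    return "".join(letters)

# Note: Python's random module is fine for illustration, but for real
# passwords the cryptographically stronger secrets module is preferable.
print(password_from_line("we all live in a yellow submarine"))  # e.g. wAlIAyS42!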
Remember as far as possible to use a different password for different accounts (e.g. one password
for your personal email, one for work email, one for internet banking). This may make things
more difficult to remember, but in the event that one password gets compromised, the attacker
will not have access to all the other accounts.
Introduction To Network Address Translation (NAT)
Introduction
Network Address Translation, defined by RFC 1631, is becoming very popular in today's
networks as it's supported by almost every operating system, firewall appliance and application.
NAT was born thanks to the fast depletion of public IP Addresses, in other words real IP
Addresses that can only exist on the Internet.
As IP Addresses are 32 bits long, in theory we could have up to 4,294,967,296 IP Addresses (that's
2^32)! In practice, though, the number is a lot smaller – somewhere around 3.2 billion – because of
the way IP Addresses are separated into classes (Class A, B, C etc.) and the need to set aside
special addresses for multicasting (also known as Class D), broadcasting and other functions.
You might have heard or read about IPv6. This new addressing scheme has been designed to
make sure we don't face the same problem as with IPv4, but its implementation requires the
modification of the entire Internet infrastructure, so don't expect to be dealing with IPv6 anytime soon.
Chances are it's going to co-exist with IPv4, since IPv6 isn't the best solution for small to medium
sized companies or small private networks.
This exciting section will show, and explain in detail, what NAT is, its different modes and how
they work. We will also see how NAT helps protect your network and minimise network security
threats.
Oh, and keep in mind...
You should also be up to scratch on IP Addressing and Subnetting. The following sections
introduce new concepts that require a basic understanding of both topics, so please check the
relevant sections if you think you need to brush up before proceeding!
There's more to NAT than meets the eye !
When NAT was released, it was created to provide solutions to almost every type of network
configuration. This is achieved by the various modes in which NAT can function. Depending on
your network structure, your available real (public) IP Addresses and the results you need, you
can enable NAT in 3 different modes! Now don't assume it's hard to understand this cool stuff,
cause I'm telling you it's definitely not! Once you get the hang of the NAT Concept, the rest is
easy to digest, even late at night :)
So What's Covered ?
As with most cool networking topics, it's impossible to cover NAT on one page and if you
happen to find another site that covers NAT in one page, I assure you you're missing out on a lot
of important information, so stick to Firewall.cx :)
With all this in mind, I've split NAT into 6 sections. Each section deals with a particular NAT
mode or NAT topic, giving you an in-depth look at how each NAT mode works using a few
examples, and at its advantages over the other available NAT modes. The information provided has
been carefully selected and written to make sure it covers all user levels, from intermediate
to advanced.
Section 1: NAT Concepts. A good introduction to NAT followed by its basic functions, how it
works and which devices in a network usually implement NAT. Simple, clear and colourful
diagrams will ensure you grasp this concept without any trouble.
Section 2: NAT Table. This section will introduce the NAT Table, which is the heart of NAT.
Here you will learn the purpose of the NAT table, where it's stored along with a lot of other
interesting information.
Section 3: Static NAT Mode. Learn what Static NAT is and how it functions. Two pages of
detailed diagrams, well thought out examples and their analysis, along with other rich information,
ensure you will learn everything there is to know about Static NAT.
Section 4: Dynamic NAT Mode. Learn what Dynamic NAT is and how it functions. Simple
diagrams are available to help you understand how Dynamic NAT works and what its advantages
are over Static NAT. Dynamic NAT is analysed over two pages using examples and step-by-step
analysis, ensuring we capture all the required information and answer every question you might
have.
Section 5: NAT Overload Mode. Also known as IP Masquerading (in the Linux world), Port
Address Translation (PAT) or Dynamic NAT with PAT. Discover the most common NAT mode
for small networks. This NAT mode is used by most Internet sharing software. This section will
help you understand how NAT Overload works and what its benefits are over the rest. Again,
simple diagrams have been designed to make sure you grasp all this cool stuff :)
Section 6: Advanced NAT (Coming Soon). This page deals with more advanced NAT concepts
and analysis. It contains more detailed and technical information about NAT and thus requires a
slightly more advanced level of networking and TCP/IP knowledge. It also outlines security
concerns and covers using NAT through VPNs and other complex network configurations.
The type of NAT mode you choose to use depends on your network resources, the capabilities of
your NAT-enabled device and, lastly, your needs. Together we will discover the power of NAT
and understand why it's become so popular.
NAT is truly a masterpiece and one of my favourites! I've been eager to develop this section to
show you how cool it is! So grab a cuppa and maybe something to munch on and get ready for an
awesome ride! There's nothing better than knowing how your firewall/router manipulates all
them cool packets so you can 'safely' access the Internet!
Network Address Translation (NAT) Concepts
Introduction
Before we dive into the deep waters of NAT, we need to make sure we understand exactly what
NAT does. So let me give you the background of NAT, why it's here today and how it works.
Even though there are different modes of NAT, they are all basically extensions to the original
concept.
NAT has become so popular that almost all small routers, firewall software and operating systems
support at least one NAT mode. This shows how important it is to understand NAT.
The NAT Concept
NOTE: NAT is not only used for networks that connect to the Internet. You can use NAT even
between private networks as we will see in the pages to follow, but because most networks use it
for their Internet connection, we are focusing on that.
The NAT concept is simple: it allows a single device to act as an Internet gateway for internal
LAN clients by translating the clients' internal network IP Addresses into the IP Address on the
NAT-enabled gateway device.
In other words, NAT runs on the device that's connected to the Internet and hides the rest of your
network from the public, thus making your whole network appear as one device (or computer, if
you like) to the rest of the world.
NAT is transparent to your network, meaning all internal network devices are not required to be
reconfigured in order to access the Internet. All that's required is to let your network devices
know that the NAT device is the default gateway to the Internet.
NAT is secure since it hides your network from the Internet. All communications from your
private network are handled by the NAT device, which will ensure all the appropriate translations
are performed and provide a flawless connection between your devices and the Internet.
The diagram below illustrates this:
As you can see, we have a simple network of 4 hosts (computers) and one router that connects
this network to the Internet. All hosts in our network have a private Class C IP Address, including
the router's private interface (192.168.0.1), while the public interface that's connected to the
Internet has a real IP Address (203.31.220.134).
If you're having trouble understanding, the following diagram shows how the Internet would see
the above setup:
As you can see, the idea behind NAT is really simple. Remember that we have mentioned there
are 3 different NAT modes to suit all types of network configurations. If required you can use
NAT to allow the Internet to see specific machines on your internal network !
Such configurations will allow the Internet to access an internal webserver or ftp server you
might have, without directly compromising your network security. Of course special actions need
to be taken to ensure that your visitors are restricted to the resources you want and that's where
the firewall comes into the picture. We'll discover how all this is possible in the next pages, so be
patient and keep reading !
How NAT works
There are 3 different ways in which NAT works. However, the principle is the same for all 3
modes. To help understand it we need a good, simple example and the first one at the beginning
of this page will do the job just fine.
The trick to understanding how NAT works is to realise that only the device (router, firewall or
PC) that connects directly to the Internet performs NAT. For our example this device happens to
be a router, but it could even be a simple PC; it makes no difference for us.
As you already know, all requests the workstations generate are sent to the Internet via the router.
The router will then perform NAT on these packets and send them to their destination. As each
packet arrives at the router's private interface, the router strips the source IP Address from
layer 3 (the network layer), e.g. 192.168.0.10, and puts its own public IP Address
(203.31.220.134) in its place before sending the packet to the Internet.
This is how the packet comes to appear as if it originated from the router itself. In some cases,
depending on the NAT mode, the source and destination port numbers (layer 4) will be changed
as well, but we examine that in the pages that follow. For now, we'll just look at the simple IP
translation within the router.
The illustration below shows how the router modifies the packets:
In this illustration, a workstation from our network has generated a packet with destination IP
Address 135.250.24.10. Logically, this packet is first sent to the gateway, which performs NAT
on it and then sends it to the Internet to finally make its way to the destination host.
Looking more closely at the gateway (router) during the initial NAT operation, the original
packet's Source IP is changed from 192.168.0.12 to that of the router's public interface, which is
203.31.220.134. The router then stores this information in a special area of its memory
(called the NAT Table – explained next), so that when the expected reply arrives it knows to
which workstation on its network the reply needs to be forwarded.
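If it helps to see the idea in code, here's a minimal Python sketch of that outbound step, using the example addresses above. It's an illustration of the concept only, not how any real router implements it; in particular, keying the table on the destination IP alone works here only because one internal host talks to each destination:

# The router's public IP and its (initially empty) NAT table.
PUBLIC_IP = "203.31.220.134"
nat_table = {}   # remote destination IP -> originating internal host

def translate_outgoing(src_ip: str, dst_ip: str) -> tuple:
    # Remember which internal host started this conversation...
    nat_table[dst_ip] = src_ip
    # ...and replace the private source IP with the router's public IP.
    return (PUBLIC_IP, dst_ip)

print(translate_outgoing("192.168.0.12", "135.250.24.10"))
# -> ('203.31.220.134', '135.250.24.10')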
The next page will show you the heart of NAT, the NAT Table, and briefly explain the function
of each NAT mode.
The Network Address Translation Table
Introduction
After that simple and informative introduction to the NAT concept, it's time to find out more
about how it works and this is where the NAT table comes in.
The NAT Table
The NAT table is the heart of the whole NAT operation, which takes place within the router (or
any NAT-enabled device) as packets arrive and leave its interfaces. Each connection from the
internal (private) network to the external (public-Internet) network, and vice versa, is tracked and
a special table is created to help the router determine what to do with all incoming packets on all
of its interfaces; in our example there are two. This table, known as the NAT table, is populated
gradually as connections are created across the router and once these connections are closed the
entries are deleted, making room for new entries.
The NAT table works differently depending on the NAT mode. This is explained in greater detail
on each NAT mode's page. For now, we just need to get a feel for this table to make each NAT
mode easier to understand.
The larger the NAT table (which means the more memory it occupies), the more bi-directional
connections it can track. This means that a NAT-enabled device with a big NAT table is able to
serve more clients on the internal network than other similar devices with smaller NAT tables.
The illustration below shows you a typical table of a NAT-enabled device while internal clients
are trying to access resources on the Internet:
Let's explain what's happening here: The above illustration shows two requests from the private
LAN, hosts 192.168.0.5 and 192.168.0.21, arriving at the NAT-enabled device's (router in this
example) private interface. These packets are temporarily stored in a special area in the router's
memory until small changes are made to them. In this example the router will take each packet's
Source IP (which is the PC the packets have come from) value and replace it with its own Public
IP (203.31.220.134).
The packets are then sent out through the Public interface to their destinations, in this case
120.0.0.2 and 124.0.0.1. In addition, before the packets leave the router, an entry is made for each
packet into the router's NAT table. These entries enable the router to behave appropriately when
the reply for each outgoing packet hits its Public interface.
The above example covers only one specific NAT scenario. Depending on your NAT mode, the
router would have dealt with the packets in a different way. This is analysed later in each NAT
mode's page but, for now, you simply need to understand what the NAT table is and the purpose
it serves.
So what happens when replies come back from the Internet?
Well, strictly speaking, exactly the opposite of what happens when packets are received from the internal
network and sent to the Internet:
When the reply comes back, the router consults the NAT table, locates the correct entries and
performs another change on the incoming (from the Internet) packets, replacing the "destination
IP" value of 203.31.220.134 with 192.168.0.5 for the first packet and 192.168.0.21 for the
second. The new packets are then sent on to their destinations – hosts 192.168.0.5 and
192.168.0.21 – after which the router can delete their NAT table entries.
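Continuing the little Python sketch from the NAT Concepts page, the reply path might look like this (still keyed on the remote IP purely for illustration):

def translate_incoming(src_ip: str, dst_ip: str) -> tuple:
    # dst_ip is always the router's public IP on a reply. Look up which
    # internal host was talking to this remote address; pop() also
    # deletes the entry, just as described above.
    internal_host = nat_table.pop(src_ip)
    return (src_ip, internal_host)

# A reply from 120.0.0.2 arrives, addressed to the router's public IP:
nat_table["120.0.0.2"] = "192.168.0.5"
print(translate_incoming("120.0.0.2", "203.31.220.134"))
# -> ('120.0.0.2', '192.168.0.5')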
With most NAT devices, the NAT session limit is bound by the available memory in the device.
Each NAT translation consumes about 160 bytes in the device's memory. As a result, 10,000
translations (a lot more than would normally be handled by a small router) will consume about
1.6 MB of memory. Therefore, a typical routing platform has more than enough memory to
support thousands of NAT translations but in practice the story (as always) is different.
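The arithmetic behind that figure is simple enough to check yourself:

# Rough memory maths from the paragraph above.
bytes_per_translation = 160
translations = 10_000
print(bytes_per_translation * translations / 1_000_000, "MB")   # -> 1.6 MB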
Typically, smaller Cisco routers, e.g. the 700, 800 and 1600 series, running an IOS with NAT
capabilities, can track around 2000 NAT sessions without much trouble, though this also depends
on the NAT mode being used. Pump that up to something like 3000 to 4000 sessions and you
start having major problems as the NAT table gets too big for the router's CPU to manage. As
you can see, it's not only a memory issue :) This is when you start to see big delays in ping
replies and eventually a sharp increase in packet loss.
I've actually seen Cisco routers have problems while handling NAT translations (NAT
Overload mode in particular). I also confirmed this with Mike Sweeney – a good friend of mine
and webmaster of www.packetattack.com – so keep in mind that the Cisco IOS sometimes seems
to behave a bit strangely with NAT. Personally I don't like performing NAT on routers that connect
directly to the Internet, but sometimes your options are limited.
To give you the right idea, having a huge NAT table on a small router is like having a Windows
machine and opening 20 CPU and memory intensive applications at once.... Your PC tries to
open all programs together but, because the CPU is processing so much information, they take
hours to finally start and even then the PC is so slow you can't do any work. I'm sure everyone
has experienced something similar !
The larger router models and dedicated gateway/firewall appliances are able to track a lot more
connections simultaneously (8000 to 25000), which makes them ideal for large corporations that
need such capacity.
Static Network Address Translation (Part 1)
Introduction
Static NAT (also called inbound mapping) is the first mode we're going to talk about, and it also
happens to be the least common on smaller networks.
Static NAT was mainly created to allow hosts on your private network to be directly accessible via
the Internet using real public IPs; we'll see in great detail how this works and how it's maintained. Static
NAT is also considered a bit dangerous, because a misconfiguration of your firewall or other
NAT-enabled device can result in the full exposure of the machine on your private network to
which the public IP Address maps. We'll look at the security risks later on this page.
What exactly does Static NAT do ?
As mentioned in the introduction, Static NAT allows the mapping of public IP Addresses to hosts
inside the internal network. In simple English, this means you can have a computer on your
private network that exists on the Internet with its own real IP.
The diagram below has been designed to help you understand exactly how Static NAT works:
In this diagram you can see that we have our private network connected to the Internet via our
router, which has been configured for Static NAT mode. In this mode each private host has a
single public IP Address mapped to it, e.g private host 192.168.0.1 has the public IP Address
203.31.218.208 mapped to it. Therefore any packets generated by 192.168.0.1 that need to be
routed to the Internet will have their source IP field replaced with IP Address 203.31.218.208.
All IP translations take place within the router's memory and the whole process is totally
transparent to both internal and external hosts. When hosts from the Internet try to contact the
internal hosts, their packets will either be dropped or forwarded to the internal hosts, depending on
the router's and firewall's configuration.
But where would Static NAT be used?
Everyone's needs are different and with this in mind Static NAT could be the solution for many
companies that require a host on their internal network to be visible and accessible from the
Internet.
Let's take a close look at a few examples of places where Static NAT could be used.
Implementation of Static NAT - Example 1
We have a development server (192.168.0.20) that needs to be secure, but also allow certain
customers to gain access to various services it offers for development purposes. At the same time,
we need to give the customers access to a special database located on our main file server
(192.168.0.10):
In this case, Static NAT, with a set of complex filters to make sure only authorised IP Addresses
get through, would do the job just fine.
Also, if you wanted a similar setup for the purpose of using only one service, e.g. http, then you're
better off using a different NAT mode, simply because it offers better security and is more
restrictive.
Let me remind you that Static NAT requires one public IP Address for each mapping to a private
IP Address. This means that you're not able to map a public IP Address to more than one private
IP Address.
Implementation of Static NAT - Example 2
Another good example of using Static NAT is in a DMZ zone. The principle of a DMZ zone is
that you place the machines that need to be directly accessible from the Internet, e.g. webservers
and email servers, in it, so that should these machines be compromised, all data can be
restored without much trouble and they won't expose the internal private network to the Internet.
The diagram above might seem very complex, but it's actually extremely simple, and breaking it
down will help you see how simple it is. If we focus on Firewall No.1 we see that it's connected
to 3 networks: the first is the Internet (203.31.218.X), the second the DMZ (192.168.100.X) and
the third the small private network between our two Firewalls (192.168.200.X).
Firewall No.1 is configured to use Static NAT for 3 different hosts - that's two from the DMZ
zone and one for Firewall No.2. Each interface of the Firewall must be part of a different network
in order to route traffic between them. This explains why we have so many different IP Addresses
in the diagram, resulting in the complex appearance.
With this setup in mind, the Static NAT table of Firewall No.1 would look like this:
Firewall No.1 Static NAT Table
External Public IP Address     Mapped to Internal Private IP Address
203.31.218.2                   Firewall No.1 Public Interface
203.31.218.3                   192.168.100.2 - Public WebServer in DMZ
203.31.218.4                   192.168.100.3 - Public MailServer in DMZ
203.31.218.5                   192.168.200.2 - Firewall No.2 of Private Net.
As you can see, this table is a good summary of what is happening in the diagram above. Each
external IP Address is mapped to an internal private IP Address and if we want to restrict access
to particular hosts then we can simply put an access policy (packet filters) on Firewall No.1. This
type of firewall setup is actually one of my favourites :)
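To make the mapping concrete, here's how Firewall No.1's static table could be modelled in a few lines of Python – a sketch of the table above, not a real firewall configuration:

# Firewall No.1's static one-to-one mappings, copied from the table above.
STATIC_MAP = {
    "203.31.218.3": "192.168.100.2",   # public web server in the DMZ
    "203.31.218.4": "192.168.100.3",   # public mail server in the DMZ
    "203.31.218.5": "192.168.200.2",   # Firewall No.2 of the private network
}
# The reverse map handles outgoing packets from those same hosts.
REVERSE_MAP = {private: public for public, private in STATIC_MAP.items()}

# An Internet host contacts 203.31.218.3; the firewall rewrites the
# destination to the DMZ web server:
print(STATIC_MAP["203.31.218.3"])    # -> 192.168.100.2

Because the mappings are static, the two dictionaries never change at runtime; contrast this with the NAT table we saw earlier, where entries are created and deleted per connection.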
Static Network Address Translation (Part 2)
Introduction
The previous page helped us understand what exactly happens with Static NAT and how it works,
and we saw a few examples of how to use it in various network configurations.
This page will deal with the transformations the packets undertake as they pass through the Static
NAT device, which is normally a router or firewall appliance.
So let's get started ! Now would be a good time to fill that cup of yours and reload yourself with
your special edible supplies :)
How NAT translations take place
So what exactly happens to a packet that enters or exits the Static NAT-enabled device? Well,
it's not that complicated once you get the hang of it. The concept is simple, and we're going to see
it and analyse it using an example, which is really the best possible approach.
The process of the Static NAT translation is the same for every device that supports it (assuming
the manufacturer has followed the RFCs). This means that whether we use a router or a firewall
appliance to perform Static NAT they'll both follow the same guidelines.
Consider our example network:
As the diagram describes we have Workstation No.1, which sends a request to the Internet. Its
gateway is the router that connects the LAN to the Internet and also performs Static NAT.
The diagram below shows us how the Workstation's packet is altered as it transits the router
before it's sent to the Internet (outgoing packet):
As you can see, the only thing that changes is the Source IP, which was 192.168.0.3 and is
given the value 203.31.220.135, a real IP Address on the Internet. The Destination IP
Address, Source Port and Destination Port are not modified.
Assuming the packet arrives at its destination, we would most likely expect to see a reply. It
would be logical to assume that the reply, or incoming packet, will require some sort of
modification in order to successfully arrive at the originating host located on our private network
(that's Workstation 1).
Here is how the incoming packet is altered as it transits the router:
The diagram above shows the part of the incoming packet that is altered by the router. Only the
destination IP Address is changed, from 203.31.220.135 to 192.168.0.3 so the packet can then be
routed to the internal workstation. Source IP Address, Source Port and Destination Port remain
the same.
And in case you're wondering why the ports have changed in comparison to the original outgoing
packet, this is not because of NAT but because of the way IP communications work, which happens
to be way out of the scope of this topic.
Now, because I understand that even a simple diagram can be very confusing, here's one more
that summarises all the above. The diagram below shows you what the outgoing and incoming
packets looked like before and after transiting the router:
So there you have it, Static NAT should now make sense to you :)
As you've seen, the concept is very simple and varies only slightly depending on the NAT mode
you're working with. So NAT is not that difficult to understand after all! If there are still a few
things that are unclear to you, please try reading the page again, and keep in mind the forum, to
which you can post your questions and doubts!
Next up is Dynamic NAT! So sit tight and let's rock and roll.... :)
Dynamic Network Address Translation (Part 1)
Introduction
Dynamic NAT is the second NAT mode we're going to talk about. Dynamic NAT, just like Static
NAT, is not that common in smaller networks but you'll find it used within larger corporations
with complex networks.
The way Dynamic NAT differs from Static NAT is that where Static NAT provides a one-to-one,
static mapping from an internal IP to a public IP, Dynamic NAT provides the same one-to-one
mapping without making it static, usually drawing on a group of available public IPs.
Confused ? Don't worry, I would be too :) Let's explain it better...
What exactly does Dynamic NAT do ?
While looking at Static NAT, we understood that for every private IP Address that needs access
to the Internet we would require one static public IP Address. This public IP Address is mapped
to our internal host's IP Address and it is then able to communicate with the rest of the world.
With Dynamic NAT, we also map our internal IP Addresses to real public IP Addresses, but the
mapping is not static: an internal host keeps the same public IP Address for the duration of a
session with the Internet, but that address is likely to change from session to session. These IPs are
taken from a pool of public IP Addresses that our ISP has reserved for our public network.
With Dynamic NAT, translations don't exist in the NAT table until the router receives traffic that
requires translation. Dynamic translations have a timeout period after which they are purged from
the translation table, thus making them available for other internal hosts.
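Here's a minimal Python sketch of that behaviour – a pool, on-demand mappings and an inactivity timeout – using the pool of addresses from the example below. The timeout value and the data structures are illustrative assumptions, not how any particular router stores its table:

import time

POOL = ["203.31.218.210", "203.31.218.211", "203.31.218.212", "203.31.218.213"]
NAT_TIMEOUT = 3600.0    # one hour of inactivity, purely for illustration
mappings = {}           # private IP -> (public IP, time of last use)

def translate(private_ip: str) -> str:
    now = time.time()
    # Purge mappings whose timeout has expired, returning IPs to the pool.
    for priv, (pub, last_used) in list(mappings.items()):
        if now - last_used > NAT_TIMEOUT:
            POOL.append(pub)
            del mappings[priv]
    # Reuse an existing mapping, or create one from the pool.
    if private_ip in mappings:
        pub = mappings[private_ip][0]
    else:
        pub = POOL.pop(0)    # raises IndexError if the pool is exhausted
    mappings[private_ip] = (pub, now)
    return pub

print(translate("192.168.0.1"))   # -> 203.31.218.210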
The diagram below illustrates the way Dynamic NAT works:
The diagram above is our example network and shows our router, which is configured to perform
Dynamic NAT for the network. We requested 4 public IPs from our ISP (203.31.218.210 to
203.31.218.213), which will be dynamically mapped by our router to our internal hosts. In this
particular session our workstation, with IP Address 192.168.0.1, sends a request to the Internet
and is assigned the public IP address 203.31.218.210. This mapping between the workstation's
private and public IP Address will remain until the session finishes.
The router is configured with a special NAT timeout and, after this timeout is reached (no traffic
sent/received during that time), the router will expire the particular mapping and reuse it for a
different internal host.
Let's say that around noon the users of the workstations with IP Addresses 192.168.0.1 and
192.168.0.3 go to lunch, so they log off and leave their PCs on (even if they switched them off, it
wouldn't make a difference unless they had some program running that was constantly generating
Internet traffic, in which case the NAT timeout would never be reached). While these users are out
for lunch, the user of the workstation with IP Address 192.168.0.2 decides to stay and do some
extra work on the Internet. After an hour, the users return and log back on, launch their web
browsers and start to search the net.
The router, as expected, deleted the old mappings once the NAT timeout had been reached for
each mapping and created new ones once the users launched their web browsers, because that
action generated traffic to the Internet and therefore had to transit the router.
Here's how the new mappings look:
By now, I would like to believe that you have understood what Dynamic NAT is all about and
roughly how it works.
But where would Dynamic NAT be used?
Again, everyone's network needs are different, though I must admit that finding a practical
implementation for Dynamic NAT is perhaps more difficult than for any other NAT mode :)
Come to think of it, I can't recall ever having been required to implement Dynamic NAT for a
customer or my own network, but that doesn't mean it's not used. There are some network setups
in which Dynamic NAT would work perfectly, and that's what I'm about to show you.
Implementation of Dynamic NAT
This example is about a company called 'Dynasoft'. Dynasoft deals with the development of high
quality software applications. As a large software firm, it has multiple contractors that help
complete special sections of the software it sells.
Because of the nature of this production model, Dynasoft requires its contractors to have a
permanent link into its private development network, so the source code of all ongoing projects is
available to all contractors:
Now because Dynasoft is concerned about its network security, it purchased a firewall that was
configured to regulate each contractor's access within the development network.
For the rest of this example, we will concentrate on Dynasoft's and Datapro's (green) networks:
Dynasoft has configured its firewall only to allow a particular part of Datapro's internal network
to access the servers and that is network 192.168.50.0/24, which is Datapro's main development
network.
This setup has been working fine, but Datapro is expanding its development network, so a second
separate network (192.168.100.0/24) was created that also needs access to Dynasoft's
development network. All hosts on this new network will be using the new DHCP server, which
means that they'll have a dynamic IP Address.
In order for the new network to access Dynasoft's network, we need to somehow trick Dynasoft's
Firewall into thinking that any workstation from the new network is actually part of the
192.168.50.0 network, that way it won't be denied access.
There was a suggestion to use Static NAT but there are a few problems:
a) All workstations are on DHCP, so Static NAT will not work properly since it requires the
internal hosts in the new network to have static IP Addresses.
b) Datapro's administrator wants maximum security for this new network therefore having
dynamic IPs makes it more difficult for someone to track a particular host from it by using its IP
Address.
So, after taking all this into consideration it was decided to implement Dynamic NAT and here's
what the solution looks like:
A Dynamic NAT router in this situation would do the job just fine. We would place the router
between the existing (192.168.50.0) and new (192.168.100.0) network. Because of the way
Dynamic NAT works, we would need to reserve a few IP Addresses from the 192.168.50.0
network in order to allow the Dynamic NAT router to use them for mapping hosts on the new
network - to the existing network. This way, no matter which IP Address any host in the new
network has, Dynasoft's Firewall device will think it's part of the 192.168.50.0 network !
I should also point out that the number of IP Addresses we'd need to reserve from network
192.168.50.0 would depend on how many simultaneous connections we want to allow from
network 192.168.100.0 to Dynasoft's development network.
For example, if we required 25 workstations from network 192.168.100.0 to have simultaneous
connection to Dynasoft we'd need to reserve at least 25 IP Addresses from the 192.168.50.0
network.
As previously explained, the reserved IP Addresses will be used to map hosts coming from the
192.168.100.0 network and must not be used by any host or workstation within the 192.168.50.0
network. If any were used in this way it would cause IP conflicts between the host in the
192.168.50.0 network and the Dynamic NAT router that's mapping that particular IP Address to
the host from the 192.168.100.0 network.
So a good practice would be to set aside the last 30 usable IP Addresses of the 192.168.50.0
network, which would be 192.168.50.225 to 192.168.50.254, and ensure no one is assigned an IP
Address within that range.
All this assumes networks 192.168.50.0 and 192.168.100.0 are using the default Class C
subnet mask (255.255.255.0).
From this page, you need to understand why we're introducing the Dynamic NAT router,
how it solves the problem (by mapping hosts on the new network to the existing one) and the
requirements for implementing this solution (reserving the required IP addresses from the
existing network).
The next page deals with the analysis of the packets that will traverse the Dynamic NAT router. It
will help you understand the changes in the packet and complete your understanding of Dynamic
NAT.
Dynamic Network Address Translation (Part 2)
Introduction
Now that you understand the basic idea of Dynamic Network Address Translation we're going to
take a closer look at the packets as they traverse the Dynamic NAT enabled device, which can be
a router, a firewall appliance or even a PC running special software !
Don't be too troubled about what's to follow, it's really simple and neat to know, so let's get right
into it !
How NAT translations take place
Most of the rules that apply for Static NAT (which we've already covered), also apply for
Dynamic NAT and there are very few changes between the two, making it very easy to
understand and digest :)
The actual process remains the same no matter which device we use, e.g Firewall appliance,
Linux gateway, router etc.
Because we don't want to get confused by using a different example, we'll stick with the previous
page's network between Dynasoft and its contractor, Datapro. We're now focusing on
Datapro's internal network to learn how the router between its two internal networks
(192.168.50.0 and 192.168.100.0) deals with the Dynamic NAT required for the new
network to gain access to Dynasoft's development network:
Even though the diagram explains everything, I'm just going to point out a few important things
about the Dynamic NAT router. It's very important that you understand that the IP Addresses in
the router's Pool are reserved addresses from the 192.168.50.0 network - this means that no
device or host on that network, apart from the router itself, is allowed to use them.
The dynamic mapping that is created will be in place only for that particular session, meaning
that once the workstation in the new network finishes its work on the Dynasoft network, or
doesn't send any packets across the Dynamic NAT router within a given time period, then the
router will clear the dynamic mapping and make the IP Address available to the next host or
workstation that needs it.
The timeout period is different for each transport protocol (TCP/UDP) and NAT device. The
ability to modify these timeouts depends entirely on the NAT device being used. As always, the
RFCs give some guidelines for these values but not all vendors follow them :) You will find more
interesting information about this subject in the NAT advanced section.
So, after getting all that out of the way, it's now time to have a closer look at the packets as they
traverse the router to either network:
Once it is determined that this packet must traverse the router, an IP Address is picked from the
available pool and used to map IP Address 192.168.100.5. This entry is then stored
within the router's RAM (the NAT Table). As you can see, the Source and Destination ports and the
Destination IP are never modified on outgoing packets.
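In terms of the pool sketch from the Dynamic NAT introduction, this step looks like the following (the second reserved address is made up for the example):

# Datapro's reserved addresses on the existing network form the pool.
POOL[:] = ["192.168.50.200", "192.168.50.201"]
mappings.clear()
print(translate("192.168.100.5"))   # -> 192.168.50.200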
The router will then send the packet on to the 192.168.50.0 network and after a few milliseconds
it receives the reply that our workstation on network 192.168.100.0 is waiting for:
The router finds an entry within its NAT mapping table (don't forget this table is stored in the
router's RAM) and replaces destination IP 192.168.50.200 with destination IP 192.168.100.5 and
then forwards the packet to the new network. The Source, Destination ports and Source IP are not
modified.
In case you're wondering why the ports have changed in comparison to the original outgoing
packet, this is not because of NAT but because of the way IP communications work, which happens
to be way out of the scope of this page.
One small but important detail I should bring to your attention is how the reply packet managed to
arrive at the router's interface on the existing network. You should know that, to the
existing Datapro network, the router looks like a host with multiple IP Addresses.
I explained how the router maps IP Addresses on the existing network to the new network; if
someone on the existing network sent an ARP request for 192.168.50.200, the router
would immediately answer with its own MAC address. This ensures that all traffic
intended for workstations on the new network finds its way there. The same principle applies
no matter which NAT mode is used.
To sum up all the above while trying to keep things simple – because sometimes, no matter how
much you analyse a diagram, it can still confuse you – the next diagram summarises how the
packets are modified as they traverse a Dynamic NAT device which, in our example, is a router:
It's very easy to see that the Source IP Address (192.168.100.5) is changed as the packet traverses
the Dynamic NAT router to arrive at Datapro's existing network and then move on to Dynasoft's
network, whereas the reply from Dynasoft's network will enter Datapro's existing network,
traverse the Dynamic NAT router and have its Destination IP Address modified to 192.168.100.5,
thus reaching the workstation it's intended for.
Believe it or not, we've come to the end of this page. The next page talks about NAT Overload,
which is also known as Network Address Port Translation, Port Address Translation or, in the
Linux/Unix world, IP Masquerading.
Network Address Translation Overload (Part 1)
Introduction
NAT Overload is the most common NAT method used throughout all networks that connect to
the Internet. This is because of the way it functions and the limitations it can overcome, and we'll
explore all of these in the next two pages.
Whether you use a router, firewall appliance, Microsoft's Internet sharing ability or any 3rd party
program that enables all your home computers to connect to the Internet via one connection,
you're using NAT Overload.
This NAT mode is also known by other names, like NAPT (Network Address Port Translation), IP
Masquerading and NAT with PAT (Port Address Translation). The different names logically
come from the way NAT Overload works, and you'll understand them by the time we're finished
with the topic.
NOTE: You should be familiar with TCP/IP & UDP communications, as well as how they use
various Ports in order to identify the resources/applications they are trying to use. It's very
important you understand them because NAT Overload is based on these Ports in order to
identify sessions between hosts.
The bad news is that this topic is not covered as yet on the site, but be sure I will be analysing it
soon. Until then, there are plenty of other resources on the Internet to learn about the basics.
What exactly does NAT Overload do ?
NAT Overload is a mix of Static and Dynamic NAT with a few enhancements thrown in (PAT –
Port Address Translation) to make it work the way we need. By now you understand how both
Static and Dynamic NAT work, so we won't get into the details again. NAT Overload takes a Static
or Dynamic IP Address that is bound to the public interface of the gateway (this could be a PC,
router or firewall appliance) and allows all PCs within the private network to access the Internet.
If you find yourself wondering how this is possible with only one IP Address, you will be happy
to find that the answer lies with PAT.
The diagram below shows you how a single session is handled by a NAT Overload enabled
device:
So we have a host on a private network; its IP Address is 192.168.0.1 and it's sending a packet to
the Internet, more specifically to IP Address 200.0.0.1, which we're assuming is a server. The
Destination Port, which is 23, tells us that it's trying to telnet to 200.0.0.1, since this is the default
port telnet uses.
As the original packet passes through the router, the Source IP Address field is changed by the
router from 192.168.0.1 to 203.31.218.100. However, notice that the ports are not changed.
The reason the Source IP Address is changed is obvious: The router's public IP Address must be
placed in the Source IP Address field of the packet so the server we're trying to telnet to knows
where the request is coming from so it can then send the reply.
That takes care of making sure the packet from the server we're telneting to finds its way back to
the router's public interface. From there, the router needs to know which host on the private
network it must send the reply to. For this, it uses the ports and we will be looking at that closer
very soon.
Some might think that this example is pretty much the way a Static NAT router would behave,
and if you're thinking just that you're totally right! In order to understand how a NAT Overload
enabled router is different from Static NAT, we must add at least one more host in the private
network, which we'll do right now.
With two or more hosts on the private network, in Static NAT mode we would require the
equivalent number of public IP Addresses, right ? One for each private host, because Static NAT
maps one public IP Address to each private host.
NAT Overload overcomes this limitation by using one public IP Address for all private hosts,
utilising the thousands of available ports to identify each private host's session.
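Here's a minimal Python sketch of that idea, tracking sessions by source port in the way described above. The starting value for remapped ports and the collision policy are illustrative assumptions, and a real device would also track the remote address and port for each entry:

import itertools

PUBLIC_IP = "203.31.218.100"
nat_table = {}                         # public source port -> (private IP, private port)
spare_ports = itertools.count(14500)   # where remapped ports begin, for this sketch

def translate_out(priv_ip: str, priv_port: int) -> tuple:
    # Keep the original source port if it's free, otherwise assign a new one.
    public_port = priv_port
    if public_port in nat_table:
        public_port = next(spare_ports)
    nat_table[public_port] = (priv_ip, priv_port)
    return (PUBLIC_IP, public_port)

def translate_in(dst_port: int) -> tuple:
    # A reply's Destination Port identifies the internal session.
    return nat_table[dst_port]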
Unleashing the true Power of NAT Overload
To help cover all possibilities and questions that might come up from these examples, we're going
to add another two private hosts in our internal network. We'll assume that:
1) The 2nd host in our private network is trying to telnet to the same server as the 1st host
2) The 3rd host in our private network is trying to telnet to a different server on the Internet
So let's see how our example network looks:
Hosts 1 and 2 are telneting to the same server (200.0.0.1); the only difference between the
two packets is their Source Port numbers, which the router uses to keep track of which packet
belongs to which host.
Let's examine what happens when Host 1's reply arrives:
A packet arrives at our router's public interface and is accepted. The packet's details are
examined and show that it came from IP Address 200.0.0.1, Port 23, with a destination of
203.31.218.100, Port 3000. The router remembers that Hosts 1 and 2 each just sent a packet to this IP
Address and now, in order to determine to which of them this response belongs, it carefully examines
the packet's Destination Port.
It focuses on the Destination Port because, in any reply, the Destination Port takes the value of the
initial packet's Source Port. This means that this packet is a reply to one sent previously to IP
Address 200.0.0.1 with Source Port 3000. The router refers to its NAT table, finds a matching
entry for that initial packet, recognises that the reply is intended for Host 1 and forwards it to
that host.
The server to which Host 1 and 2 of our example private network are telneting uses the same
logic to distinguish between the two separate sessions.
Because this can also be a bit difficult to imagine, I've included a diagram which shows the server
receiving Host 1's initial packet and then sending a reply:
The example on this page is intended to show you the idea behind NAT Overload and how it
works. We saw our little NAT Overload enabled router doing wonders with one single public IP
Address. If we wanted to use Static or Dynamic NAT in this same example, we would definitely
require 3 public IP Addresses for our 3 private hosts but thanks to NAT Overload, we only need
one IP Address.
The next page will deal with a more detailed analysis of the packets as they traverse the router
and take a look at a few more interesting parts of NAT Overload.
Network Address Translation Overload (Part 2)
Introduction
This page deals with the analysis of the packets that traverse a NAT Overload enabled device.
We'll examine which fields of the packets are modified and how the NAT device, a router in our
example, keeps track of them in its NAT Table.
In order to keep things simple, we're going to start with a few simple examples and then deal with a few
more complicated ones; this should help make the complex stuff much easier to understand and
digest.
Time to grab something to drink or munch on, and prepare to download this information into your
head!
- Oh, and don't worry, this information has been virus checked :)
How NAT translations take place
When covering Dynamic and Static NAT, we saw that it was either the Source or Destination IP
Address that had to be modified by the NAT device. No matter which mode was used, the Source
and Destination ports were never altered in any way.
NAT Overload on the other hand will use a single public IP Address for the routing process and
change, in most cases, the Source or Destination port depending on whether it's an incoming or
outgoing packet.
In the next diagram we have two computers that have each sent a packet out to the Internet and
are expecting a reply. We take a look at how the router deals with these packets individually and
where the information required to identify the expected replies is stored:
You've got to agree that that's a very simple setup. To make life easy, I haven't included any
additional information about the generated packets because we'll deal with them individually.
So it's time to take a look at how the router deals with this first packet which belongs to
Workstation 1:
The packet Workstation 1 generated arrives at the router's private interface which has IP Address
192.168.0.1. The router accepts the packet and processes it. Once inside the router, the packet's
Source IP Address, Destination IP Address, Source Port and Destination Port are examined and
the router decides that this is a valid packet so it should be forwarded to the Internet.
NAT is now about to take place (check NAT Table in the above diagram). The router will replace
the Source IP Address (192.168.0.5) with its Public IP Address (200.0.0.1) and keep the rest of
the information intact.
Note that in most cases the Source Port is not changed, unless it has already been used by a
previous packet from the private network; since this is the first outgoing packet, that cannot be
the case here.
Here's how the packet looked once it exited the router's public interface:
Time to check our second packet that will traverse the router, which is generated by Workstation
2 (the router has not yet received the reply to Workstation 1's packet).
We're going to assume that Workstation 2 uses the same Source Port (2400) as Workstation 1's
packet, so you can see how the router will react:
This packet is a very good example to show you how great NAT Overload is because the router
will need to 'do' something to make sure it's able to successfully receive the reply.
Let's look at what happens within the router as the packet arrives at its private interface:
As you can see, our second packet arrives at the router's private interface. It enters the router and
since the packet is valid (it's not corrupt and contains a valid Destination IP Address in its IP
Header) it's routed to the public interface and NAT is performed while a new entry is added into
the router's NAT Table.
Looking at the NAT entries, you will notice that both the first and second packets have the same Source Port, 2400. Since the router already used port 2400 as a Source Port for the first packet's NAT translation and hasn't yet received a reply, it cannot use it again for the second packet. If it did, when a reply to either packet came back, the router wouldn't be able to tell which host it was for.
For this reason, the router assigns a new Source Port for the second packet (14500), ensuring that
the reply will get to the internal host for which it is intended:
Usually most NAT Overload enabled devices will assign Source Ports in the range of 1025 to
65500.
Keep in mind that when one host sends a packet to another, the Source Port it used in the packet becomes the Destination Port of the reply packet.
For example, if Host A sends a packet to Host B with a Source Port of 200, then Host B's reply to Host A will have a Destination Port of 200. This way, Host A knows this is a reply to the initial packet it sent.
This is why it's important for the router to keep close track of Source Ports on outgoing packets
and Destination Ports in all incoming packets.
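To tie the above together, here's a minimal Python sketch of the bookkeeping just described. It is purely illustrative: the names, the data structure and the port-allocation strategy are our own, and a real router keeps this state internally (the port it picks, 14500 in our example, may be any free one).

```python
# Illustrative NAT Overload (PAT) bookkeeping - not actual router code.
PUBLIC_IP = "200.0.0.1"          # the router's single public IP Address
PORT_RANGE = range(1025, 65501)  # typical Source Port pool (1025-65500)

nat_table = {}  # translated Source Port -> (private IP, original Source Port)

def translate_outgoing(src_ip, src_port):
    """Translate an outgoing packet: swap in the public IP and, if the
    Source Port is already in use, assign a new one."""
    if src_port not in nat_table:
        new_port = src_port                    # first use: keep it intact
    else:
        new_port = next(p for p in PORT_RANGE if p not in nat_table)
    nat_table[new_port] = (src_ip, src_port)
    return PUBLIC_IP, new_port

def translate_reply(dst_port):
    """Match an incoming reply, via its Destination Port, to the internal
    host it is intended for."""
    return nat_table[dst_port]

# Workstations 1 and 2 both send a packet with Source Port 2400:
print(translate_outgoing("192.168.0.5", 2400))  # ('200.0.0.1', 2400)
print(translate_outgoing("192.168.0.6", 2400))  # ('200.0.0.1', <a free port>)
```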
Last Notes
NAT Overload will work perfectly no matter which type of IP Address assignment you have.
Whether it's Dynamic IP or Static, via dial up or a permanent connection, it makes no difference
to the NAT device. You can now see how this particular NAT mode has helped preserve real
(public) IP Addresses, because no matter how many internal hosts any private network has, it
only requires one real IP Address in order to allow all internal hosts to access the Internet.
Now you have a good idea why NAT is such a favourite subject of mine: simply because it has made such a big difference to every network that connects to the Internet.
The next page dives a bit deeper into the NAT function and discusses subjects such as its
performance depending on the transport protocol used, implemented timeouts for TCP and UDP
connections and the effect they have, exact NAT translation procedures depending on the packet's
origin and a lot more.
If you feel your head can take a bit more bashing, then give it a go, otherwise leave it for another
day, as it will still be here waiting for you next time :)
Virtual Local Area Networks (VLANs) - Introduction
Introduction
Virtual Local Area Networks or VLANs are one of the latest and coolest network technologies developed in the past few years, though they have only recently started to gain recognition. The non-stop growth of Local Area Networks (LANs) and the need to minimise the cost of the equipment involved, without sacrificing network performance and security, created the necessary soil for the VLAN seed to surface and grow into most modern networks.
The truth is that VLANs are not as simple as most people perceive them to be. Instead, they cover enough material to be a whole study in themselves, as they involve a mixture of protocols, rules and guidelines that a network administrator should be well aware of. Unfortunately, most documentation provided by vendors and other sites is inadequate or very shallow. It lightly touches upon the VLAN topic and fails to give the reader a good understanding of how VLANs really work and the wonderful things one can do when implementing them.
Like most topics covered on our site, VLANs have been broken down into a number of pages,
each one focusing on specific areas to help the reader build up their knowledge as preparation for
designing and building their own VLAN network.
Since VLANs are a topic that requires strong background knowledge of certain areas, as they involve a lot of information at the technical and protocol level, we believe the reader should be familiar and comfortable with the following concepts:
• Switches and hubs
• Broadcast and collision domains
• Internet Protocol (IP)
• IP routing
As we cover all the theory behind VLANs and how they are implemented within various network
topologies, we will finally demonstrate the configuration of a Cisco powered network utilising
VLANs!
Protocols such as Spanning Tree Protocol (STP) are essential when implementing VLANs within
a mid to large sized network, so we will briefly touch upon the topic, without thoroughly
analysing it in great detail because STP will be covered as a separate topic.
So What's Covered ?
Before we begin our journey into the VLAN world, let's take a look at what we will be covering:
Section 1: The VLAN Concept. This page explains what a VLAN is and how it differs from a normal switched environment. Be sure to find our well-known diagrams along with illustrations to help answer your questions. In short, it's a great introductory page for the topic.
Section 2: Designing VLANs.
Section 2.1: Designing VLANs - [Subsection 1] A Comparison With Old Networks. This subsection will give you an insight into the different VLAN implementations: Static and Dynamic VLANs. The subsection begins with an introduction page to help you 'see' the actual difference in network infrastructure between the old boring networks and VLAN powered networks. This way, you will be able to appreciate the technology much better!
Section 2.2: Designing VLANs - [Subsection 2]: Static VLANs. Definitely the most widespread VLAN implementation. The popular Static VLANs are analysed here. We won't be covering any configuration commands here as this page serves as an introduction to this VLAN implementation. As always, cool 3D diagrams and examples are included to help you understand and process the information.
Section 2.3: Designing VLANs - [Subsection 3]: Dynamic VLANs. Dynamic VLANs are less
common to most networks but offer substantial advantages over Static VLANs for certain
requirements. Again, this page serves as an introduction to the specific VLAN implementation.
Section 3: VLAN Links: Access Links & Trunk Links. Access links are used to connect hosts,
while Trunk links connect to the network backbone. Learn how Access & Trunk links operate,
the logic which dictates the type of link and interface used and much more.
Section 4: VLAN Tagging - ISL, 802.1q, LANE and IEEE 802.10. To tag or not to tag! Understand the VLAN tagging process and find out the different tagging methods available, which are the most popular and how they differ from each other. Neat diagrams and examples are included to ensure no questions are left unanswered!
Section 5: Analysing Popular Tagging Protocols.
Section 5.1: InterSwitch Link Analysis (ISL): Analysis of Cisco's proprietary ISL protocol. We take a look at how it is implemented and all the fields it contains.
Section 5.2: IEEE 802.1q Analysis: IEEE's 802.1q protocol is the most widely used trunking protocol. Again, we take a look at its implementation with an analysis of all its fields.
Section 6: InterVLAN Routing. A very popular topic, routing between VLANs is very important
as it allows VLANs to communicate. We'll examine all possible InterVLAN routing methods and
analyse each one's advantages and disadvantages. Needless to say, our cool diagrams also make
their appearance here!
Section 7: Virtual Trunk Protocol (VTP)
Section 7.1: Introduction To The VTP Protocol. The introductory page deals with understanding the VTP concept: why it's required and what its advantages are.
Section 7.2: In-Depth Analysis Of VTP. Diving deeper, this page analyses the VTP protocol structure. It includes 3D diagrams explaining the usage of each VTP message and much more.
Section 7.3: Virtual Trunk Protocol Pruning (VTP Pruning). VTP Pruning is an essential service in any large network to avoid broadcast flooding over trunk links. This page explains what VTP Pruning does and how it works through our excellent examples. The diagrams used here have been given extra special attention!
The following sections are yet to be completed as we are still working on them. There will be an announcement once they are ready for you to read!
Section 8: Special VLAN requirements - Voice over IP (VoIP), Quality Of Service (QOS) &
Spanning Tree Protocol (STP).
Section 9: Troubleshooting VLANs in a Cisco powered network.
Section 10: VLAN Redundancy using redundant links and STP.
Section 11: Building your own VLAN Network.
Section 11.1: Configuring VLANs and Trunks (ISL & 802.1q).
Section 11.2: Configuring InterVLAN Routing.
Section 11.3: Configuring VTP Server, Client, Transparent.
Section 11.4: Configuring VTP Pruning.
Section 11.5: Migrating to a Cisco VLAN oriented network.
Virtual Local Area Networks (VLANs) - The Concept
Introduction
We hear about them everywhere, vendors around the world are constantly trying to push them
into every type of network and as a result, the Local Area Network (LAN) we once knew starts to
take a different shape. And yet, for some of us, the concept of what VLANs are and how they
work might still be a bit blurry.
To help start clearing things up we will define the VLAN concept not only through words, but
through the use of our cool diagrams and at the same time, compare VLANs to our standard flat
switched network.
We will start by taking a quick look at a normal switched network, pointing out its main characteristics, and then move on to VLANs. So, without any delay, let's get right into this cool stuff!
The Traditional Switched Network
Almost every network today has a switch interconnecting all network nodes, providing a fast and
reliable way for the nodes to communicate. Switches today are what hubs were a while back - the
most common and necessary equipment in our network, and there is certainly no doubt about that.
While switches might be adequate for most types of networks, they prove inadequate for mid to large sized networks where things are not as simple as plugging a switch into the power outlet and hanging a few PCs off it!
For those of you who have already read our "switches and bridges" section, you will be well
aware that switches are layer 2 devices which create a flat network:
The above network diagram illustrates a switch with 3 workstations connected. These
workstations are able to communicate with each other and are part of the same broadcast domain,
meaning that if one workstation were to send a broadcast, the rest will receive it.
In a small network a few broadcasts might not be too much of a problem, but as the size of the network increases, so do the broadcasts, up to the point where they start to become a big problem, flooding the network with garbage (most of the time!) and consuming valuable bandwidth.
To visually understand the problem, but also the idea of a large flat network, observe the diagram
below:
The problem starts to become evident as we populate the network with more switches and workstations. Since most workstations tend to be loaded with the Windows operating system, this results in unavoidable broadcasts being sent onto the network wire every so often - something we certainly want to avoid.
Another major concern is security. In the above network, all users are able to see all devices. In a
much larger network containing critical file servers, databases and other confidential information,
this would mean that everyone would have network access to these servers and naturally, they
would be more susceptible to an attack.
To effectively protect such systems you would need to restrict access at the network level, either by segmenting the existing network or by placing a firewall in front of each critical system, but the cost and complexity would surely make most administrators think twice about it. Thankfully there is a solution..... simply keep reading.
Introducing VLANs
Welcome to the wonderful world of VLANs!
All the above problems, and a lot more, can be forgotten with the creation of VLANs...well, to
some extent at least.
As most of you are already aware, in order to create (and work with) VLANs, you need a layer 2 switch that supports them. A lot of people new to the networking field have the misconception that it's simply a matter of installing additional software on the clients or the switch in order to "enable" VLANs throughout the network - this is totally incorrect!
Because VLAN processing is handled by special hardware built into the switch, your switch must support VLANs at the time of purchase, otherwise you will not be able to create VLANs on it!
Each VLAN created on a switch is a separate network. This means that a separate broadcast domain is created for each VLAN that exists. Network broadcasts, by default, are filtered from all ports on a switch that are not members of the same VLAN, and this is why VLANs are very common in today's large networks, as they help isolate network segments from each other.
To help create the visual picture on how VLANs differentiate from switches, consider the
following diagram:
What we have here is a small network with 6 workstations attached to a VLAN capable switch. The switch has been programmed with 2 VLANs, VLAN1 and VLAN2 respectively, and 3 workstations have been assigned to each VLAN.
VLANs = Separate Broadcast Domains
With the creation of our VLANs, we have also created 2 broadcast domains. This means that if any workstation in either VLAN sends a broadcast, it will propagate only out of the ports which belong to the same VLAN as the workstation that generated the broadcast:
This is clearly illustrated in the diagram above, where Workstation 1, belonging to VLAN1, sends a network broadcast (FF:FF:FF:FF:FF:FF). The switch receives this broadcast and forwards it to Workstations 2 and 3, just as would happen if these three workstations were connected to a normal switch, while the workstations belonging to VLAN2 are totally unaware of the broadcast sent in VLAN1, as they do not receive any packets flowing in that network.
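If it helps to see the forwarding rule spelled out, here is a rough sketch of it in Python; the port numbers and VLAN assignments mirror our 6-workstation example, while everything else (names, structure) is hypothetical:

```python
# Hypothetical model of per-VLAN broadcast flooding on a 6-port switch.
port_vlan = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 2}  # port -> VLAN membership

def flood_broadcast(ingress_port):
    """Return the ports a broadcast is flooded out of: every port in the
    sender's VLAN except the port it arrived on."""
    vlan = port_vlan[ingress_port]
    return [p for p, v in port_vlan.items() if v == vlan and p != ingress_port]

# Workstation 1 (port 1, VLAN1) sends FF:FF:FF:FF:FF:FF:
print(flood_broadcast(1))  # [2, 3] - the VLAN2 ports never see it
```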
To help clear any questions or doubts on how the above setup works, the diagram below shows
the logical equivalent setup of our example network:
By this stage, you should begin to see the clear advantages offered by the use of VLANs within your network: security improves, while cost and unnecessary network traffic are kept down as more hosts are added to the network and the number of VLANs increases.
VLANs Help Reduce The Cost
To briefly touch upon the financial side of things, let's take an example to see exactly how we are
saving money by using VLANs.
Consider that you're the network administrator for a large company and you have been asked to split the existing network infrastructure into 12 separate networks, without allowing these new networks to communicate with each other. Since the cabling is already in place, we would simply need to group the ports of each new network onto one physical switch, so for the 12 networks, a total of 12 switches would be required.
By using VLANs, the above task would be possible with one or more VLAN capable switches that cover the number of hosts we need to connect, and the cost would surely be a lot less than that of 12 switches.
During the implementation of the above task, you would connect all workstations to the switch and then assign the appropriate workstations/nodes to their respective VLAN, creating a total of 12 VLANs. It is worth noting here that most entry-level VLAN switches, e.g. the Cisco 2900 series, are capable of handling up to 64 VLANs, so if we were to use these switches, we would still have plenty of room to create more:
Switch Model                                              Maximum VLANs Supported    VLAN Trunking Supported
Catalyst 2912 XL, Catalyst 2924 XL & Catalyst 2924C XL    64                         yes
Catalyst 2900 LRE XL                                      250                        yes
Catalyst 2912M and Catalyst 2924M modular                 250                        yes
Catalyst 3500 XL & 3550                                   250                        yes
There are a lot more examples one can use to show how these new generation switches are able to solve complex network designs and security issues while keeping the budget low. Lastly, the best example is one that solves your own requirements, so take a minute to think about it and you will surely agree.
Summary
This page introduced the concept of VLANs and outlined the differences between them and normal switched networks. We also briefly examined their efficiency in terms of cost, security and implementation.
The information here serves as an introduction to the VLAN technology and we will now start
diving deeper into the topic, analysing it in greater detail. Having said that, our next page deals
with the design of VLANs, showing different logical and physical configurations of VLANs
within networks. So, make yourself comfortable and let's continue, because there is still so much to cover!
Designing VLANs - A Comparison With Old Networks
Introduction
Designing and building a network is not a simple job. VLANs are no exception to this rule; in fact, they require an even more sophisticated approach because of the variety of protocols used to maintain and administer them.
Our aim here is not to tell you how to setup your VLANs and what you should or shouldn't do,
this will be covered later on. For now, we would like to show you different physical VLAN
layouts to help you recognise the benefits offered when introducing this technology into your
network, regardless of its size.
The technology is available and we simply need to figure out how to use it and implement it using
the best possible methods, in order to achieve outstanding performance and reliability.
We understand that every network is unique as far as its resources and requirements are
concerned, which is another reason why we will take a look at a few different VLAN
implementations. However, we will not mention the method used to set them up - this is up to you
to decide once you've read the following pages!
Designing your first VLAN
Most common VLAN setups involve grouping departments together regardless of their physical
placement through the network. This allows us to centralise the administration for these
departments, while also limiting unwanted incidents of unauthorised access to resources of high
importance.
As always, we will be using neat examples and diagrams to help you get a visual on what we are
talking about.
Let's consider the following company: Packet Industries
Packet Industries is a large scale company with over 40 workstations and 5 servers. The company
deals with packet analysis and data recovery and has labs to recover data from different media
that require special treatment due to their sensitivity. As with every other company, there are
quite a few different departments that deal with different aspects of the business and these are:
• Management/HR Department
• Accounting Department
• Data Recovery & IT Department
These three departments are spread across the 3 floors of the building in which the company is situated. Because the IT department takes the confidentiality of its own and its customers' data seriously, it has decided to redesign the network and also take a look at the VLAN solutions available, to see if they are worth the investment.
We are going to provide two different scenarios here: the first one will not include VLANs, while the second one will. Comparing the two solutions will help you see the clear advantages of VLANs and also provide an insight into how you can apply this wonderful technology to other similar networks you might be working with.
Solution 1 - Without VLANs!
The IT department decided that the best way to deal with the security issue would be to divide the existing network by partitioning it. Each department would reside in one broadcast domain, and access lists would be placed at each network's boundaries to ensure access to and from them is limited according to the access policies.
Since there are three departments, three new networks had to be created to accommodate the new design. The budget, as in most cases, had to be controlled so it didn't exceed the amount granted by the Accounting Department.
With all the above in mind, here's the proposal the IT department created:
As you can see, each department has been assigned a specific network, and each level has a dedicated switch for every network available. As a result, network security increases since we have separate physical networks, and this solution also seems to be the most logical one. These switches are then grouped together via the network backbone which, in turn, connects to the network's main router.
The router here undertakes the complex role of controlling access and routing between the
networks and servers with the use of access lists as they have been created by the IT Department.
If needed, the router can also be configured to allow certain IPs to be routed between the three networks, should there be such a requirement.
The above implementation is quite secure as there are physical and logical restrictions placed at every level. However, it is somewhat restrictive as far as expanding and administering the network goes, since there is no central point of control. Lastly, if you even consider adding full redundancy to the above, essentially doubling the amount of equipment required, the cost would clearly be unreasonable...
So let's now take a look at the second way we could implement the above, without blowing the
budget, without compromising our required security level and also at the same time create a
flexible and easily expandable network backbone.
Solution 2 - With VLANs!
The solution we are about to present here is surely the most preferred and economical. The reasons should be fairly straightforward: we get the same result as the previous solution at almost half the cost and, as a bonus, we get the flexibility and expandability we need for the future growth of our network, which was very limited in our previous example.
By putting the VLAN concept we covered on the previous page into action, you should be able to
visualise the new setup:
As you can see, the results in this example are a lot neater and the most apparent change would be
the presence of a single switch per level, connecting directly to the network backbone. These
switches of course are VLAN capable, and have been configured to support the three separate
logical and physical networks. The router from the previous solution has been replaced by what
we call a 'layer 3 switch'.
These switches are very intelligent and understand layer 3 (IP layer) traffic. With such a switch, you are able to apply access lists to restrict access between the networks, just as you normally would on a router, but more importantly, to route packets from one logical network to another! In simple terms, a layer 3 switch is a combination of a powerful switch and a built-in router :)
Summary
If the above example was interesting and provided an insight into the field of VLANs, we can assure you: you haven't seen anything yet. When you unleash the power of VLANs, there are amazing solutions for any problem or need your network presents.
It's now time to start looking at the VLAN technology in a bit more detail, that is, how it's configured, plus the positive and negative aspects of each type of VLAN configuration and much more. The next page analyses Static VLANs, which are perhaps the most popular implementation of VLANs around the world. Take a quick break for some fresh air if needed, otherwise gear up and let's move!
Designing VLANs - Static VLANs
Introduction
VLANs are usually created by the network administrator, assigning each port of every switch to a
VLAN. Depending on the network infrastructure and security policies, the assignment of VLANs
can be implemented using two different methods: Static or Dynamic memberships - these two
methods are also known as VLAN memberships.
Each of these methods has its advantages and disadvantages, and we will be analysing them in great depth to help you decide which would best suit your network.
Depending on the method used to assign VLAN membership, the switch may require further configuration, but in most cases it's a pretty straightforward process. This page deals with Static VLANs, while Dynamic VLANs are covered next.
Static VLANs
Static VLAN membership is perhaps the most widely used method because of the relatively small
administration overhead and security it provides. With Static VLANs, the administrator will
assign each port of the switch to one VLAN. Once this is complete, they can simply connect each
device or workstation to the appropriate port.
The picture below depicts an illustration of the above, where 4 ports have been configured for 4
different VLANs:
The picture shows a Cisco switch (well, half of it :>) where ports 1, 2, 7 and 10 have been
configured and assigned to VLANs 1, 5, 2 and 3 respectively.
At this point, we should remind you that these 4 VLANs are not able to communicate between
each other without the use of a router as they are treated as 4 separate physical networks,
regardless of the network addressing scheme used on each of them. However, we won't provide
further detail on VLAN routing since it's covered later on.
Static VLANs are certainly more secure than traditional switches, while also being considerably easy to configure and monitor. As one would expect, all nodes belonging to a VLAN must also be part of the same logical network in order to communicate with one another. For example, on our switch above, if we assigned network 192.168.1.0/24 to VLAN 1, then all nodes connecting to ports assigned to VLAN 1 must use the same network address for them to communicate with each other, just as if this were an ordinary switch.
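As a quick illustration of the membership table behind the switch above (ports 1, 2, 7 and 10 assigned to VLANs 1, 5, 2 and 3), the sketch below models the rule that access ports exchange frames directly only within the same VLAN; the code is ours, not anything running on the switch:

```python
# Static VLAN membership for the example switch: port -> VLAN.
static_vlans = {1: 1, 2: 5, 7: 2, 10: 3}

def same_vlan(port_a, port_b):
    """Access ports can exchange frames directly only if they belong
    to the same VLAN."""
    return static_vlans[port_a] == static_vlans[port_b]

print(same_vlan(1, 2))   # False - VLAN 1 vs VLAN 5, a router is required
```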
In addition, Static VLANs have another strong point: you are able to control where your users move within a large network. By assigning specific ports on your switches throughout your network, you are able to control access and limit the network resources your users are able to use.
A good example would be a large network with multiple departments where any network
administrator would want to control where the users can physically connect their workstation or
laptop and which servers they are able to access.
The following diagram shows a VLAN powered network where the switches have been
configured with Static VLAN support.
The network diagram might look slightly complicated at first, but if you pay close attention to each switch, you will notice that it's quite simple: six switches with 6 VLANs configured, one VLAN per department, as shown. While each VLAN has one logical network assigned to it, the IT department has, in addition, placed one workstation in each of the following departments for support purposes: Management, R&D and HR.
The network administrator has assigned Port 1 (P1) on each department switch to VLAN 5 for the
workstation belonging to the IT department, while the rest of the ports are assigned to the
appropriate VLAN as shown in the diagram.
This setup allows the administrator to place any employee in the IT department, anywhere on the
network, without worrying if the user will be able to connect and access the IT department's
resources.
In addition, consider a user in any of the above departments, e.g. the Management department, who decided to get smart by attempting to gain access to the IT department's network and resources by plugging his workstation into Port 1 of his department's switch. He surely wouldn't get far, because his workstation would be configured for the 192.168.1.0 network (VLAN 1), while Port 1 requires him to use a 192.168.5.0 network address (VLAN 5). Logically, he would have to change his IP address to match the network he is trying to gain access to, which in this case would be network 192.168.5.0.
Summary
To sum up, with Static VLANs, we assign each individual switch port to a VLAN. The network
addresses are totally up to us to decide. In our example, the switches do not care what network
address is used for each VLAN as they totally ignore this information unless routing is performed
(this is covered in the InterVLAN routing page). As far as the switches are concerned, if you have two ports assigned to the same VLAN, then these two ports are able to communicate with each other, just as they would on any normal layer 2 switch.
Designing VLANs - Dynamic VLANs
Introduction
Dynamic VLANs were introduced to grant the flexibility and complexity(!) that Static VLANs
did not provide. Dynamic VLANs are quite rare because of their requirements and initial
administrative overhead. As such, most administrators and network engineers tend to prefer Static
VLANs.
Dynamic VLANs
Dynamic VLANs, as opposed to Static VLANs, do not require the administrator to configure each port individually; instead, a central server called the VMPS (VLAN Membership Policy Server) is used to handle the on-the-spot port configuration of every switch participating in the VLAN network.
The VMPS server contains a database of all workstation MAC addresses, along with the
associated VLAN the MAC address belongs to. This way, we essentially have a VLAN-to-MAC
address mapping:
The above diagram aims to help us understand the mapping relationship that exists in the VMPS server. As shown, each MAC address, which translates to a host on the network, is mapped to a VLAN, allowing that host to move around the network, connecting to any switch that is part of the VMPS network while maintaining its VLAN configuration.
You can now start to imagine the initial workload involved when configuring a VMPS server for a network of over 300 workstations :)
As one would expect, the above model works very well, but it also requires the switches to be in constant contact with the VMPS server, requesting configuration information every time a host connects to a switch participating in the VLAN network. Of course, there is a lot more information we can use to configure the VMPS database, but we won't be covering that just yet.
Like all network services offered, Cisco has cleverly designed this model to be as flexible as our network might require. For example, you are able to connect more than one host to a single dynamically configured port, as long as all hosts are part of the same VLAN:
The diagram on the left shows a VLAN capable switch that has been configured to support Dynamic VLANs. On port No.5, we have connected a simple switch (not VLAN aware) to which another 4 workstations are connected.
As mentioned previously, this type of configuration is valid and therefore supported, but it also has its restrictions and limitations.
One of the restrictions, which can also be considered a semi-security feature, is that all workstations connected to the same port must be configured in the VMPS server as part of the same VLAN, otherwise the port is most likely to shut down as a security precaution.
As for the limitations of this configuration: if the switch detects more than 20 active hosts (20 MAC addresses) on the port, it will once again shut the port down, leaving the workstations without any network connection. When this happens, the port that shuts down returns to an isolated state, not belonging to any VLAN.
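A short sketch of the two checks just described, purely as an illustration (the behaviour follows the text; the function and names are made up):

```python
# Illustrative checks a dynamically configured port applies to its hosts.
MAX_ACTIVE_HOSTS = 20

def check_dynamic_port(host_vlans):
    """host_vlans: the VLAN (per the VMPS) of each MAC seen on one port."""
    if len(host_vlans) > MAX_ACTIVE_HOSTS:
        return "shutdown"        # more than 20 active MAC addresses
    if len(set(host_vlans)) > 1:
        return "shutdown"        # hosts from different VLANs on one port
    return "active"
```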
The fact is that Dynamic VLANs are really not suitable for every network, even though they allow a great deal of flexibility and security. Then again, if the advantage a single feature of Dynamic VLANs provides is one you need, it might be reason enough to implement them.
Because each host connected to the switch is checked against the VMPS database for its VLAN membership before the port is activated and assigned to a VLAN, the network administrator is able to ensure that no foreign host whose MAC address is not stored in the VMPS database can simply walk up to a wall socket, plug in a workstation and access the network. For a large scale network, this could be considered an ace card up your sleeve.
Choosing Correct Switches
One important factor we haven't yet mentioned is that you cannot run the VMPS server on a Cisco Catalyst 2900 or 3500 series switch. The Catalyst 4500 and upwards are able to act as a VMPS and, at the time of writing, this switch has reached its end of retail life. For those who have dealt with Cisco Catalyst switches in the past, you will know that a Catalyst 4500 is not the type of switch you would use in a 20 or 50 node network!
The Catalyst 4500 and 6500 series are switches designed for enterprise networks; as such, they are built to be modular, easily expandable depending on your needs and, lastly, fully redundant, because you can't have your core backbone switch failing when all other switches and network equipment are directly connected to it.
We've added a few pictures of the Catalyst 6500 series for you to admire :)
You can clearly see the slots available that allow the Catalyst switches to expand and grow with your network. In the likely event that you require more ports as your network expands, you simply buy a FastEthernet blade (some people call them 'slices') and insert it into an available slot!
Dynamic VLANs & FallBack VLANs
Another very interesting and smart feature Dynamic VLANs support is the fallback VLAN. This neat feature allows you to automatically assign a port to a VLAN specially created for workstations whose MAC address is not in the VMPS server. Consider company visitors or clients who require specific or restricted access to your network: they can freely connect to the network and have Internet access, along with limited rights on public directories.
In the event that the fallback VLAN has not been configured and the MAC address connected to the switch's port is unknown, the VMPS server will send an 'access-denied' response, blocking access to the network, but the port will remain active. If the VMPS server is running in 'secure-mode', it will instead shut down the port as an additional security measure.
The above diagram represents a portion of a large scale network using a Cisco 6500 Catalyst as the core switch. The switch has been configured to support Dynamic VLANs, so a VMPS server has been configured inside the switch, along with a DHCP server for each created VLAN. The administrator has already assigned the 3 workstations' MAC addresses to the VLANs shown and also created the fallback VLAN for any MAC address that does not exist in the database.
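Before walking through the scenario, here is a rough sketch of the decision the VMPS makes when a switch reports a new MAC address. The database entries, mode flags and return values are our own shorthand for the behaviour described above, not the actual protocol exchange between the switches:

```python
# Illustrative VMPS decision logic - not the real protocol exchange.
vmps_db = {
    "00:10:A4:01:02:03": 1,   # known workstation -> VLAN 1
    "00:10:A4:04:05:06": 2,   # known workstation -> VLAN 2
}
FALLBACK_VLAN = 99            # VLAN for unknown MACs (None if unconfigured)
SECURE_MODE = False           # in secure mode, unknown MACs shut the port

def vmps_lookup(mac):
    if mac in vmps_db:
        return ("assign", vmps_db[mac])
    if FALLBACK_VLAN is not None:
        return ("assign", FALLBACK_VLAN)   # visitor lands in the fallback VLAN
    if SECURE_MODE:
        return ("shutdown", None)          # port is shut down
    return ("access-denied", None)         # blocked, but the port stays active

print(vmps_lookup("4B:63:3F:A2:3E:F9"))    # unknown MAC -> fallback VLAN
```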
Now consider this interesting scenario: one morning a visitor arrives at the office and requires an Internet connection so he can demonstrate a new product to the management. As the administrator, you've already configured a fallback VLAN with a DHCP server activated for that VLAN, pushing the necessary settings to clients so they may obtain Internet access.
The visitor finds a free RJ-45 socket on the wall, which connects to a Catalyst 3550 switch
nearby, and plugs in his laptop. Before the user is allowed to access the network, the Cisco 3550
switch checks the laptop's MAC address and reads 4B:63:3F:A2:3E:F9. At this point, the port is
blocked, not allowing the laptop computer to send or receive data. The Cisco 3550 switch sends
the MAC address to the 6500 Catalyst switch which is acting as the VMPS server and it checks
for an entry that matches the specified MAC address but is unable to find one.
Naturally, it determines that this is a visitor, so it creates an entry for that MAC address in the fallback VLAN and sends the information back to the Cisco 3550 switch. The switch then enables access to the port our visitor is connected to by configuring the port for the fallback VLAN.
If the visitor's computer is configured to obtain an IP Address automatically, it will do so once the operating system has booted. When this happens, the visitor's DHCP request will arrive at the 6500 Catalyst switch and its DHCP server will send the requested information, enabling the client (our visitor) to configure itself with all the parameters required to access the VLAN. This also means our visitor is now able to access the Internet!
Finally, if the computer is not configured for DHCP, the client must be given the correct network settings or asked to enable automatic IP configuration in their network properties.
Summary
The past pages could be considered an 'eye-opener' for people who are new to the VLAN concept, and at the same time a 'quick overview' for those who are well aware of their existence! We hope all your questions up to this point have been answered; if not, they are most likely too advanced and will surely be answered in the pages that follow.
As we complete our Dynamic & Static VLAN overview we are ready to dive in deeper. The next
page begins by examining VLAN interfaces and their properties. There's no turning back now so
click on the lower right link to get started!
VLAN Links: Access & Trunk Links
Introduction
By now we should feel comfortable with terms such as 'VLAN', 'Static & Dynamic VLANs', but
this is just the beginning in this complex world. On this page, we will start to slowly expand on
these terms by introducing new ones!
To begin with, we will take a closer look at the port interfaces on these smart switches and then
start moving towards the interfaces connecting to the network backbone where things become
slightly more complicated, though do not be alarmed since our detailed and easy to read diagrams
are here to ensure the learning process is as enjoyable as possible.
VLAN Links - Interfaces
Inside the world of VLANs there are two types of interfaces or, if you like, links. These links allow us to connect multiple switches together, or simple network devices, e.g. a PC, that will access the VLAN network. Depending on their configuration, they are called Access Links or Trunk Links.
Access Links
Access Links are the most common type of links on any VLAN switch. All network hosts
connect to the switch's Access Links in order to gain access to the local network. These links are
your ordinary ports found on every switch, but configured in a special way, so you are able to
plug a computer into them and access your network.
Here's a picture of a Cisco Catalyst 3550 series switch, with its Access Links (ports) marked by the green circle:
We must note that the 'Access Link' term describes a configured port - this means that the ports above can also be configured as the second type of VLAN link, the Trunk Link. What we are showing here is what's usually configured as an Access Link port in 95% of all switches. Depending on your needs, you might need to configure the first port (top left corner) as a Trunk Link, in which case it is obviously no longer called an Access Link port, but a Trunk Link!
When configuring ports on a switch to act as Access Links, we usually configure only one VLAN
per port, that is, the VLAN our device will be allowed to access. If you recall the diagram below
which was also present during the introduction of the VLAN concept, you'll see that each PC is
assigned to a specific port:
In this case, each of the 6 ports used have been configured for a specific VLAN. Ports 1, 2 and 3
have been assigned to VLAN 1 while ports 4, 5 and 6 to VLAN 2.
In the above diagram, this translates to allowing only VLAN 1 traffic in and out of ports 1, 2 and
3, while ports 4, 5 and 6 will carry VLAN 2 traffic. As you would remember, these two VLANs
do not exchange any traffic between each other, unless we are using a layer 3 switch (or router)
and we have explicitly configured the switch to route traffic between the two VLANs.
It is equally important to note at this point that any device connected to an Access Link (port) is totally unaware of the VLAN assigned to the port. The device simply assumes it is part of a single broadcast domain, just as it would with any normal switch. During data transfers, any VLAN information or data from other VLANs is removed, so the recipient has no information about them.
The following diagram illustrates this to help you get the picture:
As shown, all packets arriving, entering or exiting the port are standard Ethernet II type packets
which are understood by the network device connected to the port. There is nothing special about
these packets, other than the fact that they belong only to the VLAN the port is configured for.
If, for example, we configured the port shown above for VLAN 1, then any packets
entering/exiting this port would be for that VLAN only. In addition, if we decided to use a logical
network such as 192.168.0.0 with a default subnet mask of 255.255.255.0 (/24), then all network
devices connecting to ports assigned to VLAN 1 must be configured with the appropriate network
address so they may communicate with all other hosts in the same VLAN.
Trunk Links
What we've seen so far is a switch port configured to carry only one VLAN, that is, an Access
Link port. There is, however, one more type of port configuration which we mentioned in the
introductory section on this page - the Trunk Link.
A Trunk Link, or 'Trunk', is a port configured to carry packets for any VLAN. These types of ports are usually found in connections between switches. Such links must be able to carry packets from all available VLANs because VLANs span multiple switches.
The diagram below shows multiple switches connected throughout a network and the Trunk
Links are marked in purple colour to help you identify them:
As you can see in our diagram, our switches connect to the network backbone via the Trunk
Links. This allows all VLANs created in our network to propagate throughout the whole network.
Now, in the unlikely event of a Trunk Link failure on one of our switches, the devices connected to that switch's ports would be isolated from the rest of the network, with only ports on that switch belonging to the same VLAN able to communicate with each other.
So now that we have an idea of what Trunk Links are and their purpose, let's take a look at an
actual switch to identify a possible Trunk Link:
As we noted with the explanation of Access Link ports, the term 'Trunk Link' describes a
configured port. In this case, the Gigabit ports are usually configured as Trunk Links, connecting
the switch to the network backbone at the speed of 1 Gigabit, while the Access Link ports connect
at 100Mbits.
In addition, we should note that for a port or link to operate as a Trunk Link, it is imperative that it runs at speeds of 100Mbit or greater. A port running at 10Mbit cannot operate as a Trunk Link, and this is logical because a Trunk Link is always used to connect to the network backbone, which must operate at speeds greater than most Access Links!
Summary
This page introduced the Access and Trunk links. We will be seeing a lot of both links from now
on, so it's best you get comfortable with them! Configuration of these links is covered later on,
because there is still quite a bit of theory to cover!
Next up is the VLAN Tagging topic where we will see what really runs through those Access and
Trunk links!
VLAN Tagging
Introduction
We mentioned that Trunk Links are designed to pass frames (packets) from all VLANs, allowing
us to connect multiple switches together and independently configure each port to a specific
VLAN. However, we haven't explained how these packets run through the Trunk Links and the network backbone, eventually finding their way to the destination port without getting mixed up or lost among the rest of the packets flowing through the Trunk Links.
This process belongs to the world of VLAN Tagging!
VLAN Tagging
VLAN Tagging, also known as Frame Tagging, is a method developed by Cisco to help identify
packets travelling through trunk links. When an Ethernet frame traverses a trunk link, a special
VLAN tag is added to the frame and sent across the trunk link.
As it arrives at the end of the trunk link the tag is removed and the frame is sent to the correct
access link port according to the switch's table, so that the receiving end is unaware of any VLAN
information.
The diagram below illustrates the process described above:
Here we see two 3500 series Catalyst switches and one Cisco 3745 router connected via the
Trunk Links. The Trunk Links allow frames from all VLANs to travel throughout the network
backbone and reach their destination regardless of the VLAN the frame belongs to. On the other
side, the workstations are connected directly to Access Links (ports configured for one VLAN membership only), gaining access to the resources required by the VLAN's members.
Again, when we call a port an 'Access Link' or a 'Trunk Link', we are describing it based on the way it has been configured. This is because a port can be configured as either an Access Link or a Trunk Link (in the case where it's 100Mbits or faster).
This is stressed because a lot of people assume it works the other way around, that is, that a switch's uplink is always a Trunk Link and any normal port where you would usually connect a workstation is always an Access Link port!
VLAN Tagging Protocol
We're now familiar with the term 'Trunk Link' and its purpose, that is, to allow frames from
multiple VLANs to run across the network backbone, finding their way to their destination. What
you might not have known though is that there is more than one method to 'tag' these frames as
they run through the Trunk Links or ... the VLAN Highway as we like to call it.
InterSwitch Link (ISL)
ISL is a Cisco proprietary protocol used for FastEthernet and Gigabit Ethernet links only. The protocol can be used on various equipment such as switch ports, router interfaces and server interface cards, for example to create a trunk to a server, and much more. You'll find more information on VLAN implementations on the last page of the VLAN topic.
Being a proprietary protocol, ISL is naturally available and supported on Cisco products only :) You may also be interested to know that ISL is what we call an 'external tagging process'. This means that the protocol does not alter the Ethernet frame as shown above in our previous diagram, by placing the VLAN tag inside the Ethernet frame, but instead encapsulates the Ethernet frame with a new 26-byte ISL header and adds an additional 4-byte frame check sequence (FCS) field at the end of the frame, as illustrated below:
Despite this extra overhead, ISL is capable of supporting up to 1000 VLANs and does not
introduce any delays in data transfers between Trunk Links.
In the above diagram we can see an ISL frame encapsulating an Ethernet II frame. This is the
actual frame that runs through a trunk link between two Cisco devices when configured to use
ISL as their trunk tagging protocol.
The encapsulation method mentioned above also happens to be the reason why only ISL-aware devices are able to read such frames and, because of the addition of an ISL header and FCS field, the frame can end up being 1548 bytes long! For those who can't remember, Ethernet's maximum frame size is 1518 bytes, making a 1548-byte ISL frame what we call a 'giant' or 'jumbo' frame!
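The arithmetic is easy to verify with the figures quoted above:

```python
# ISL frame-size arithmetic, using the figures quoted in the text.
ETHERNET_MAX = 1518   # bytes - standard Ethernet II maximum
ISL_HEADER   = 26     # bytes prepended by ISL
ISL_FCS      = 4      # bytes appended by ISL

print(ETHERNET_MAX + ISL_HEADER + ISL_FCS)   # 1548 - a 'giant' frame
```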
Lastly, ISL uses Per VLAN Spanning Tree (PVST) which runs one instance of the Spanning Tree
Protocol (STP) per VLAN. This method allows us to optimise the root switch placement for each
available VLAN while supporting neat features such as VLAN load balancing between multiple
trunks.
Since ISL's header fields are covered on a separate page, we won't provide further details here.
IEEE 802.1q
The 802.1q standard was created by the IEEE group to address the problem of breaking large networks into smaller, more manageable ones through the use of VLANs. The 802.1q standard is of course an alternative to Cisco's ISL, and one that all vendors implement on their network equipment to ensure compatibility and seamless integration with existing network infrastructure.
As with all 'open standards', the IEEE 802.1q tagging method is by far the most popular and commonly used, even in Cisco oriented network installations, mainly for compatibility with other equipment and future upgrades that might tend towards different vendors.
In addition to the compatibility issue, there are several more reasons why most engineers prefer this method of tagging. These include:
• Support for up to 4096 VLANs
• Insertion of a 4-byte VLAN tag with no encapsulation
• Smaller final frame sizes when compared with ISL
Amazingly enough, the 802.1q tagging method supports a whopping 4096 VLANs (as opposed to the 1000 VLANs ISL supports), a large number indeed and one that is nearly impossible to deplete in a local area network.
The 4-byte tag we mentioned is inserted within the existing Ethernet frame, right after the Source
MAC Address as illustrated in the diagram below:
Because of the extra 4-byte tag, the minimum Ethernet II frame size increases from 64 bytes to 68
bytes, while the maximum Ethernet II frame size now becomes 1522 bytes. If you require more
information on the tag's fields, visit our protocol page where further details are given.
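To make the insertion point concrete, here is a small Python sketch that splices a 4-byte 802.1q tag (the TPID value 0x8100 followed by a 16-bit TCI carrying the priority and VLAN ID) into a raw frame right after the two 6-byte MAC addresses. Real tagging is of course done in switch hardware; this is only a model:

```python
import struct

def add_dot1q_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert a 4-byte 802.1q tag after the 6-byte destination and
    6-byte source MAC addresses of a raw Ethernet frame."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)  # PCP(3) + CFI(1) + VID(12)
    tag = struct.pack("!HH", 0x8100, tci)        # TPID 0x8100, then the TCI
    return frame[:12] + tag + frame[12:]

# A minimum 64-byte frame grows to 68 bytes once tagged:
print(len(add_dot1q_tag(bytes(64), vlan_id=5)))  # 68
```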
As you may have already concluded yourself, the maximum Ethernet frame is considerably smaller (by 26 bytes) when using the IEEE 802.1q tagging method rather than ISL. This difference in size might lead many to interpret the IEEE 802.1q tagging method as much faster than ISL, but this is not true. In fact, Cisco recommends you use ISL tagging when in a native Cisco environment but, as outlined earlier, most network engineers and administrators believe the IEEE 802.1q approach is safer, ensuring maximum compatibility.
And because not everything in this world is perfect, no matter how good the 802.1q tagging
protocol might seem, it does come with its restrictions:
• In a Cisco powered network, the switch maintains one instance of the Spanning Tree Protocol (STP) per VLAN. This means that if you have 10 VLANs in your network, there will also be 10 instances of STP running amongst the switches. In the case of non-Cisco switches, only 1 instance of STP is maintained for all VLANs, which is certainly not something a network administrator would want.
• It is imperative that the VLAN for an IEEE 802.1q trunk is the same on both ends of the trunk link, otherwise network loops are likely to occur.
• Cisco always advises that disabling an STP instance on one 802.1q VLAN trunk without disabling it on the rest of the available VLANs is not a good idea, because network loops might be created. It's best to either disable or enable STP on all VLANs.
LAN Emulation (LANE)
LAN Emulation was introduced to address the need to create VLANs over WAN links, allowing network managers to define workgroups based on logical function rather than physical location. With this new technology (so to speak - it's actually been around since 1995!), we are now able to create VLANs between remote offices, regardless of their location and distance.
LANE is not very common and you will most probably never see it implemented in small to mid-sized networks; however, this is no reason to ignore it. Just keep in mind that we won't be looking at it in much depth, but briefly covering it so we can grasp the concept.
LANE has been supported by Cisco since 1995 and Cisco's IOS release 11.0. When implemented between two point-to-point links, the WAN network becomes totally transparent to the end users:
Every LAN or native ATM host, like the switch or router shown in the diagram, connects to the ATM network via a special software interface called the 'LAN Emulation Client'. The LANE Client works with the LAN Emulation Server (LES) to handle all messages and packets flowing through the network, ensuring that the end clients are not aware of the WAN network infrastructure, therefore making it transparent.
The LANE specification defines a LAN Emulation Configuration Server (LECS), a service
running inside an ATM switch or a physical server connected to the ATM switch, that resides
within the ATM network and allows network administrators to control which LANs are combined
to form VLANs.
The LAN Emulation Server, with the help of the LANE Client, maps MAC addresses to ATM addresses, emulating layer 2 protocols (the Data Link layer) and transporting higher layer protocols such as TCP/IP and IPX/SPX without modification.
802.10 (FDDI)
Tagging VLAN frames on Fiber Distributed Data Interface (FDDI) networks is quite common in
large scale networks. This implementation is usually found on Cisco's high-end switch models
such as the Catalyst 5000 series where special modules are installed inside the switches,
connecting them to an FDDI backbone. This backbone interconnects all major network switches,
providing a fully redundant network.
The various modules available for the Cisco Catalyst switches allow the integration of Ethernet into the FDDI network. By installing the appropriate switch modules and with the use of the 802.10 SAID field, a mapping between the Ethernet VLAN and the 802.10 network is created and, as such, all Ethernet VLANs are able to run over the FDDI network.
The diagram above shows two Catalyst switches connected to a FDDI backbone. The links
between the switches and the backbone can either be Access type links (meaning one VLAN
passes through them) or Trunk links (all VLANs are able to pass through them). At both ends, the
switches have an Ethernet port belonging to VLAN 6, and to 'connect' these ports we map each
switch's Ethernet module with its FDDI module.
Lastly, the special FDDI modules mentioned above support both single VLANs (non-trunk) and
multiple VLANs (trunk).
To provide further detail, the diagram below shows the IEEE 802.10 frame, along with the SAID
field in which the VLAN ID is inserted, allowing the frame to transit trunk links as described:
It's okay if you're impressed or confused by the structure of the above frame - that's normal :) You'll be surprised to find out that the Cisco switch in the previous diagram must process the Ethernet II frame and convert it before placing it on the IEEE 802.10 backbone or trunk.
During this stage, the original Ethernet II frame is converted to an Ethernet SNAP frame and then finally to an IEEE 802.10 frame. This conversion is required to maintain compatibility and reliability between the two different topologies. The most important bit to remember here is the SAID field and its purpose.
Summary
This page introduced four popular VLAN tagging methods, providing you with the frame
structure and general details of each tagging method. Out of all, the IEEE 802.1q and ISL tagging
methods are the most popular, so make sure you understand them quite well.
The next page provides further detail by analysing the two popular tagging methods mentioned above. While some readers might find the details unnecessary and time-consuming, we feel they are required if you want to build a rock solid network library in your head :)
Analysing The InterSwitch Link Protocol
Introduction
Deciding whether to use ISL or IEEE 802.1q to power your trunk links can be quite confusing if
you cannot identify the advantages and disadvantages of each protocol within your network.
This page will cover the ISL protocol in great detail, providing an insight into its secrets and capabilities which you were probably unaware of. In turn, this will also help you understand the existence of certain limitations the protocol has, but most importantly allow you to decide whether ISL is the tagging process you require within your network.
InterSwitch Link (ISL)
ISL is Cisco's proprietary tagging method, supported only on Cisco equipment, through Fast and Gigabit Ethernet links. The size of an ISL frame can be expected to start at 94 bytes and increase up to 1548 bytes due to the overhead (additional fields) the protocol places within the frame it is tagging.
These fields and their length are also shown on the diagram below:
We will be focusing on the two purple coloured 3D blocks, the ISL header and ISL Frame Check
Sequence (FCS) respectively. The rest of the Ethernet frame shown is a standard Ethernet II
frame as we know it. If you need more information, visit our Ethernet II page.
The ISL Header
The ISL header is a 26-byte field containing all the VLAN information required (as one would expect) to allow a frame to traverse a Trunk Link and find its way to its destination.
Here is a closer look at the header and all the fields it contains:
You can see that the ISL header is made up of quite a few fields, perhaps a lot more than you might have expected, but this shouldn't alarm you as only a handful of these fields are important. As usual, we will start from the left field and work our way to the far right side of the header. First up...... the DA field:
Destination Address (DA) Field
The 'DA' field is a 40 bit destination address field that contains a multicast address usually set to
"0x01-00-0C-00-00" or "0x03-00-0C-00-00". This address is used to signal to the receiver that
the packet is in ISL format.
Type Field
The 'Type' field is 4 bits long and helps identify the encapsulated original frame. Depending on
the frame type, the ISL 'Type' field can take 4 possible values as outlined in the table below:
Type Value     Encapsulated Frame
0000           Ethernet
0001           Token-Ring
0010           FDDI
0011           ATM
The 4 bits of space assigned to the 'Type' field allow a maximum of 2^4=16 different values. Since not all combinations are used, there is plenty of room for future encapsulations that
might be developed.
User Defined Field
The 'User' field, occupying 4 bits, serves as an extension to the previous 'Type' field and is mostly used when the original encapsulated frame is an Ethernet II frame. When this happens, the first two bits of the 'User' field act as a prioritisation mechanism, allowing frames to find their way to their destination much faster.
Currently, there are 4 different priorities available, as shown in the table below:
User Field Value     Frame Priority
XX00                 Normal Priority
XX01                 Priority 1
XX10                 Priority 2
XX11                 Highest Priority
We should also note that the use of priorities is optional and not required.
Source Address (SA) Field
The 'SA' field is the source MAC address of the switch port transmitting the frame. This field is, as expected, 48 bits long. The receiving device can choose to ignore this field. It is worth noting
that while the Destination Address field located at the beginning of the header contains a
multicast MAC Address, the Source MAC address field we are looking at here contains the MAC
address of the sending device - usually a switch.
Length Field
The 'Length' field is 16 bits long and contains the whole ISL frame's length minus the DA, Type,
User, SA, LEN and FCS fields. If you're good at mathematics, you can easily calculate the total
length of the excluded fields, which is 18 bytes. With this in mind, a quick way to find this field's
value is to take the total frame size and subtract 18 bytes :)
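To give a quick example: if an entire ISL frame is 1000 bytes long, its Length field would contain the value 1000 - 18 = 982.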
Length fields are used in frames to help the receiving end identify where specific portions of the
frame exist within the frame received.
AAAA03 (SNAP) Field
The SNAP field is a 24-bit field with the constant value "0xAAAA03".
High bits Source Address (HSA) Field
The 'HSA' field is a 24-bit value. This field represents the upper three bytes of the SA field (the manufacturer's ID portion) and must contain the value "0x00-00-0C". Since the SA field is 48 bits
long or 6 bytes, the upper 3 bytes of the SA field would translate to 24 bits, hence the length of
the HSA field.
VLAN - Destination Virtual LAN ID Field
The 'VLAN' field is the Virtual LAN ID of the frame. This is perhaps the most important field of
all as our frame moves between trunk links because it allows all trunk links to identify the VLAN
this frame belongs to. The VLAN ID field is 15 bits long and often referred to as the "color" of
the frame.
Without this field, there would be no way of identifying which VLAN a frame transiting a trunk link belongs to.
Bridge Protocol Data Unit (BPDU) & Cisco Discovery Protocol (CDP) Indicator
The 'BPDU' field is only 1 bit long but very important, as it is set for all BPDU packets encapsulated by the ISL frame. For those unaware, BPDUs are used by the Spanning Tree Protocol (STP) to shut down redundant links and avoid network loops. This field is also set for encapsulated CDP and Virtual Trunk Protocol (VTP) frames.
Index Field
The 'Index' field is a 16 bit value and indicates the port index of the source of the packet as it exits
the switch. It is used for diagnostic purposes only and may be set to any value by other devices.
RES Field - Reserved for Token Ring and Fiber Distributed Data Interface (FDDI)
The 'RES' field is a 16-bit value used when Token Ring or FDDI packets are encapsulated within an ISL frame. In the case of Token Ring frames, the Access Control (AC) and Frame Control (FC) fields are placed here, whereas in the case of FDDI, the FC field is placed in the Least Significant Byte (LSB) of this field (e.g. an FC of "0x12" would give a RES field of "0x0012"). For Ethernet packets, the RES field should be set to all zeros.
Frame Check Sequence (ISL FCS)
Coming to the end of the ISL protocol analysis, we meet the 'FCS' field, which consists of four bytes. The FCS contains a 32-bit CRC value, created by the sending MAC (switch) and recalculated by the receiving MAC (switch) to check for corrupt frames. In an Ethernet II frame, the FCS is generated over the Destination MAC, Source MAC, Ethertype and Data fields, while ISL's FCS is calculated based on the entire ISL frame and added to the end of it.
Summary
This page analysed all fields of the ISL header and FCS. The next page deals with the popular
IEEE 802.1q, an alternative to Cisco's ISL tagging protocol.
If you need to, take a quick break to freshen up, and when you return, click on the link below to be transported to the wonderful IEEE 802.1q world!
Analysing The IEEE 802.1q Link Protocol
Introduction
Our VLAN Tagging page briefly covered the IEEE 802.1q protocol and we are about to continue
its analysis here. As mentioned previously, the IEEE 802.1q tagging method is the most popular
as it allows the seamless integration of VLAN capable devices from all vendors who support the
protocol.
So, without any more delay, let's get right into the protocol.
IEEE 802.1q Analysis
The IEEE 802.1q tagging mechanism seems quite simple and efficient thanks to its 4-byte
overhead squeezed between the Source Address and Type/Length field of our Ethernet II frame:
The process of inserting the 802.1q tag into an Ethernet II frame causes the original Frame Check Sequence (FCS) field to become invalid, since we are altering the frame; hence it is essential that a new FCS is recalculated based on the new frame, which now contains the IEEE 802.1q field. This process is automatically performed by the switch, right before it sends the frame down
a trunk link. Our focus here will be the pink 3D block, labeled as the IEEE 802.1q header.
The IEEE 802.1q Header
As noted, the 802.1q header is only 4 bytes or 32 bits in length, yet within this space is all the information required to successfully identify the frame's VLAN and ensure it arrives at the correct destination. The diagram below analyses all fields contained in a 802.1q
header:
The structure is quite simple as there are only 4 fields, compared with the 11 that ISL has. We
will continue by analysing each of these fields in order to discover what the protocol is all about.
TPID - Tag Protocol IDentifier
The TPID field is 16 bits long with a value of 0x8100. It is used to identify the frame as an IEEE
802.1q tagged frame.
Note: The next three fields, Priority, CFI and VLAN ID are also known as the TCI (Tag Control
Information) field and are often represented as one single field (TCI Field).
Priority
The Priority field is only 3 bits long but is used for prioritisation of the data this frame is carrying. Data prioritisation is a whole study in itself and we won't be analysing it here since it's well beyond the scope of our topic. However, for those interested, data prioritisation allows us to give special priority to latency-sensitive services, such as Voice Over IP (VoIP), over normal data. This means bandwidth can be allocated to these critical services so they pass through the link without any delay.
The IEEE 802.1p priority protocol was developed to provide such services and is utilised by the
IEEE 802.1q tagging protocol.
Being 3 bits long, the Priority field allows a total of 2^3=8 different priorities for each frame, that is, level zero (0) to seven (7) inclusive.
CFI - Canonical Format Indicator
The CFI field is only 1 bit long. If set to '1', it means the MAC address is in non-canonical format, while '0' means it is in canonical format. For Ethernet switches, this field is always set to zero (0). The CFI field is mainly used for compatibility reasons between Ethernet and Token Ring
networks.
In the case where a frame arrives at an Ethernet port and the CFI flag is set to one (1), that frame should not be forwarded as received to any untagged port (Access Link port).
VLAN ID - Virtual Local Area Network Identifier
The VLAN ID field is perhaps the most important field of all because it identifies which VLAN the frame belongs to, allowing the receiving switch to decide which ports the frame is allowed to exit, depending on the switch configuration.
For those who recall our VLAN Tagging page, we mentioned that the IEEE 802.1q tagging
method supports up to 4096 different VLANs. This number derives from the 12 bit VLAN ID
field we are analysing right now and here are the calculations to prove this: 2^12=4096, which
translates from VLAN 0 to VLAN 4095 inclusive.
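To tie the three TCI fields together, here's a small worked example (the VLAN and priority values are picked purely for illustration): a frame assigned to VLAN 10, carrying priority 5 and a CFI of 0, would have a TCI value of 5 x 2^13 + 0 x 2^12 + 10 = 40970, or 0xA00A in hexadecimal.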
Summary
That completes our analysis of the IEEE 802.1q protocol. As a last note, you should remember that this protocol is the most widespread tagging method used around the world and supports up to 4096 VLANs!
Next up is the popular InterVLAN Routing topic, which is often a misunderstood and confusing
subject, but we have managed to make it simple and clear. It's now time for a break - go get some
fresh air and we'll see you back in a few moments for the rest of our cool VLAN topic!
InterVLAN Routing
Introduction
Surely most of you network gurus would agree without a doubt that the invention of VLANs is as good for networks as, if not better than, the invention of the mouse was for computers!
Being able to create new network segments using the existing backbone and without rewiring is, for most administrators, a dream come true! Add the ability to move users or departments between these networks with just a few keystrokes and you're in paradise.
VLANs have certainly become popular and are very welcome in every administrator's or engineer's network. However, they have also raised several issues which have troubled many of us. One major issue concerns routing between existing and newly created VLANs.
The Need For Routing
Each network has its own needs, though whether it's a large or small network, internal routing, in most cases, is essential - if not critical. The ability to segment your network by creating VLANs, thus reducing network broadcasts and increasing your security, is a tactic used by most engineers. Popular setups include a separate broadcast domain for critical services such as File Servers, Print Servers, Domain Controllers etc., serving your users non-stop.
The issue here is how can users from one VLAN (broadcast domain), use services offered by
another VLAN?
Thankfully there's an answer to every problem and in this case, it's VLAN routing:
The above diagram is a very simple but effective example to help you get the idea: two VLANs, with the two servers and one workstation placed in VLAN 1, while the second workstation is placed in VLAN 2.
In this scenario, both workstations require access to the File and Print servers, making it a very
simple task for the workstation residing in VLAN 1, but obviously not for our workstation in
VLAN 2.
As you might have already guessed, we need to somehow route packets between the two VLANs
and the good news is that there is more than one way to achieve this and that's what we'll be
covering on this page.
VLAN Routing Solutions
While the two 2924 Catalyst switches are connected via a trunk link, they are unable to route
packets from one VLAN to another. If we wanted the switch to support routing, we would require
it to be a layer 3 switch with routing capabilities, a service offered by the popular Catalyst 3550
series and above.
Since there are quite a few ways to enable communication between VLANs (InterVLAN Routing being the most popular), we will look at all possible solutions. This follows our standard method of presenting every available option, giving you an in-depth view of how VLAN routing can be set up, even if you do not have a layer 3 switch.
Note: The term 'InterVLAN Routing' refers to a specific routing method which we will cover as a
last scenario, however it is advised that you read through all given solutions to ensure you have a
solid understanding on the VLAN routing topic.
VLAN Routing Solution No.1: Using A Router With 2 Ethernet Interfaces
A few years ago, this was one of the preferred and fastest methods to route packets between
VLANs. The setup is quite simple and involves a Cisco router, e.g. a 2500 series, with two Ethernet
interfaces as shown in the diagram, connecting to both VLANs with an appropriate IP Address
assigned to each interface. IP Routing is of course enabled on the router and we also have the
option of applying access lists in the case where we need to restrict network access between our
VLANs.
In addition, each host (servers and workstations) must either use the router's interface connected
to their network as a 'default gateway' or a route entry must be created to ensure they use the
router as a gateway to the other VLAN/Network. This scenario is, however, expensive to implement because we require a dedicated router to route packets between our VLANs, and it is also limited from an expandability perspective.
In the case where there are more than two VLANs, additional Ethernet interfaces will be required; basically, you need one Ethernet interface on your router for each VLAN it will connect to.
To finish this scenario, as the network gets bigger and more VLANs are created, it will very
quickly get messy and expensive, so this solution will prove inadequate to cover our future
growth.
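To make the scenario more tangible, here is a minimal configuration sketch for such a router. The interface names and IP addresses (192.168.1.1 for VLAN 1 and 192.168.2.1 for VLAN 2) are purely illustrative:

interface Ethernet0
 description Link to VLAN 1
 ip address 192.168.1.1 255.255.255.0
!
interface Ethernet1
 description Link to VLAN 2
 ip address 192.168.2.1 255.255.255.0
!
ip routing

Hosts in VLAN 1 would then use 192.168.1.1 as their default gateway, while hosts in VLAN 2 would use 192.168.2.1.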
VLAN Routing Solution No.2: Using A Router With One Ethernet (Trunk) Interface
This solution is certainly fancier but requires, as you would have already guessed, a router that supports trunk links. With this kind of setup, the trunk link is created using the same type of encapsulation the switches use (ISL or 802.1q), and IP routing is enabled on the router side.
The downside here is that not many engineers will sacrifice a router just for routing between
VLANs when there are many cheaper alternatives, as you will soon find out. Nevertheless,
despite the high cost and dedicated hardware, it's still a valid and workable solution and
depending on your needs and available equipment, it might be just what you're looking for!
Closing this scenario, the router will need to be configured with two virtual interfaces, one for
each VLAN, with the appropriate IP Address assigned to each one so routing can be performed.
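Here's a hedged sketch of what such a 'router on a stick' configuration might look like, assuming IEEE 802.1q encapsulation (the interface numbers, VLAN IDs and IP addresses are again just examples):

interface FastEthernet0/0
 no ip address
!
interface FastEthernet0/0.1
 encapsulation dot1Q 1
 ip address 192.168.1.1 255.255.255.0
!
interface FastEthernet0/0.2
 encapsulation dot1Q 2
 ip address 192.168.2.1 255.255.255.0

The physical interface carries the trunk to the switch, while each subinterface acts as the default gateway for its VLAN.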
VLAN Routing Solution No.3: Using A Server With Two Network Cards
We would call this option a "Classic Solution". What we basically do is configure one of the servers to perform the routing between the two VLANs, reducing the overall cost as no dedicated equipment is required.
In order for the server to perform the routing, it requires two network cards - one for each VLAN - with the appropriate IP Addresses assigned; we have therefore configured one with the IP Address 192.168.1.1 and the other with 192.168.2.1. Once this phase is complete, all we need to do is enable IP routing on the server and we're done.
Lastly, each workstation must use the server as either a gateway, or a route entry should be
created so they know how to get to the other network. As you see, there's nothing special about
this configuration, it's simple, cheap and it gets the job done.
VLAN Routing Solution No.4: InterVLAN Routing
And at last.... InterVLAN routing! This is without a doubt the best VLAN routing solution of them all. InterVLAN routing makes use of the latest switching technology, ensuring a super-fast and reliable routing solution at an acceptable cost.
The Cisco Catalyst 3550 series switches used here are layer 3 switches with built-in routing
capabilities, making them the preferred choice at a reasonable cost. Of course, the proposed
solution shown here is only a small part of a large scale network where switches such as the
Catalyst 3550 are usually placed as core switches, connecting all branch switches together (2924's
in this case) via superfast fiber Gigabit or Fast Ethernet links, ensuring a fast and reliable network
backbone.
We should also note that InterVLAN routing on the Catalyst 3550 has certain software
requirements regarding the IOS image loaded on the switch as outlined on the table below:
Image Type & Version                                       InterVLAN Routing Capability
Enhanced Multilayer Image (EMI) - All Versions             YES
Standard Multilayer Image (SMI) - prior to 12.1(11)EA1     NO
Standard Multilayer Image (SMI) - 12.1(11)EA1 and later    YES
If you happen to have a 3550 Catalyst at hand, you can issue the 'show version' command to reveal your IOS version and find out if it supports IP routing.
Returning to our example, our 3550 Catalyst will be configured with two virtual interfaces, one for each VLAN, and of course the appropriate IP Address assigned to each, ensuring there is a logical interface connected to both networks. Lastly, as you might have guessed, we need to issue the 'ip routing' command to enable the InterVLAN Routing service!
The diagram above was designed to help you 'visualise' how switches and their interfaces are assigned to specific VLANs, making the InterVLAN routing service possible. The switch above has been configured with two VLANs, VLAN 1 and 2. The Ethernet interfaces are then assigned to each VLAN, allowing them to communicate directly with all other interfaces assigned to the same VLAN and, once the internal routing process is present and enabled, with the other VLAN as well.
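For those who like to see things in configuration form, here's a minimal sketch of the above on the 3550 (the VLAN numbers and IP addresses are illustrative only):

interface Vlan1
 ip address 192.168.1.1 255.255.255.0
!
interface Vlan2
 ip address 192.168.2.1 255.255.255.0
!
ip routing

Once 'ip routing' is issued, the switch routes packets between the two VLAN interfaces just as a dedicated router would.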
Access Lists & InterVLAN Routing
Another common addition to the InterVLAN routing service is the application of Access Lists (packet filtering) on the routing switch, to restrict access to services or hosts as required.
In modern implementations, central file servers and services are usually placed in their own
isolated VLAN, securing them from possible network attacks while controlling access to them.
When you take into consideration that most trojans and viruses perform an initial scan of the
network before attacking, an administrator can smartly disable ICMP echoes and other protocols
used to detect a live host, avoiding possible detection by an attacker host located on a different
VLAN.
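As a quick illustration of the concept (the subnets and access list number are made up for this example), an access list applied to the workstations' VLAN interface could block ICMP echoes heading towards the server VLAN while permitting everything else:

access-list 101 deny icmp 192.168.2.0 0.0.0.255 192.168.1.0 0.0.0.255 echo
access-list 101 permit ip any any
!
interface Vlan2
 ip access-group 101 in

Hosts in VLAN 2 can still reach the servers' services, but their ping sweeps towards VLAN 1 are silently dropped.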
Summary
InterVLAN routing is a terrific service and one that you simply can't live without in a large network. The
topic is a fairly easy one once you get the idea, and this is our aim here, to help you get that idea,
and extend it further by giving you other alternative methods.
The key element to the InterVLAN routing service is that you must have at least one VLAN
interface configured with an IP Address on the InterVLAN capable switch, which will also
dictate the IP network for that VLAN. All hosts participating in that VLAN must also use the
same IP addressing scheme to ensure communication between them. When the above
requirements are met, it's then as simple as enabling the IP Routing service on the switch and you
have the InterVLAN service activated.
Next in line is the Virtual Trunk Protocol (VTP), a protocol that ensures every administrator's and
engineer's life remains nice and easy .... how can this be possible?
Keep reading to find out :)
Introduction To The Virtual Trunk Protocol - VTP
Introduction
The invention of VLANs was very much welcomed by all engineers and administrators, allowing
them to extend, redesign and segment their existing network with minimal costs, while at the
same time making it more secure, faster and reliable!
If you're responsible for a network of up to 4-6 switches that includes a few VLANs, then you'll surely agree that it's usually a low overhead to administer them and periodically make changes - most engineers can live with that :)
Now ask an engineer who's in charge of a medium to large scale network and you will definitely not receive the same answer, simply because these small changes can quickly become a nightmare, and if you add the possibility of human error, then the result could be network outages and possibly downtime.
Welcome To Virtual Trunk Protocol (VTP)
VTP, a Cisco proprietary protocol, was designed by Cisco with the network engineer and administrator in mind, reducing the administration overhead and the possibility of error, as described above, in any switched network environment.
When a new VLAN is created and configured on a switch without the VTP protocol enabled, this
must be manually replicated to all switches on the network so they are all aware of the newly
created VLAN. This means that the administrator must configure each switch separately, a task
that requires a lot of time and adds a considerable amount of overhead depending on the size of
the network.
The configuration of a VLAN includes the VLAN number, name and a few more parameters
which will be analysed further on. This information is then stored on each switch's NVRAM and
any VLAN changes made to any switch must again be replicated manually on all switches.
If the idea of manually updating all switches within your network doesn't scare you because your
network is small, then imagine updating more than 15-20 switches a few times per week, so your
network can respond to your organisation's needs....have we got you thinking now? :)
With the VTP protocol configured and operating, you can forget about running around making
sure you have updated all switches as you only need to make the changes on the nominated VTP
server switch(es) on your network. This will also ensure these changes are magically propagated
to all other switches regardless of where they are.
Introducing The VTP Modes
The VTP protocol is a fairly complex protocol, but easy to understand and implement once you
get to know it. Currently, 3 different versions of the protocol exist, that is, version 1, 2 (adds
support for Token Ring networks) and 3, with the first version being used in most networks.
Despite the variety of versions, the protocol operates in 3 different modes: Server, Client and Transparent mode, giving us maximum flexibility in how changes in the network affect the rest of our switches. To help keep things simple and to avoid confusion, we will work with the first version of the VTP protocol - VTP v1, which covers more than 90% of networks.
Below you'll find the 3 modes in which the VTP protocol can operate on any switch throughout the network:

- VTP Server mode
- VTP Client mode
- VTP Transparent mode
Each mode has been designed to cover specific network setups and needs, as we are about to see,
but for now, we need to understand the purpose of each mode and the following network diagram
will help us do exactly that.
A typical setup involves at least one switch configured as a VTP Server, and multiple switches
configured as VTP Clients. The logic behind this setup is that all information regarding VLANs is
stored only on the VTP Server switch from which all clients are updated. Any change in the
VLAN database will trigger an update from the VTP Server towards all VTP clients so they can
update their database.
Lastly, be informed that these VTP updates will only traverse Trunk links. This means that you
must ensure that all switches connect to the network backbone via Trunk links, otherwise no VTP
updates will get to your switches.
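As a hedged example of what this means in practice, here's how a backbone port might be configured as a trunk on an IOS-based Catalyst (the interface and encapsulation are example values, and some models support only 802.1q and therefore skip the encapsulation command):

interface FastEthernet0/24
 switchport trunk encapsulation dot1q
 switchport mode trunk

Once the link is trunking, VTP advertisements are free to flow across it.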
Let's now take a closer look at what each VTP mode does and where it can be used.
VTP Server Mode
By default, all switches are configured as VTP Servers when first powered on. All VLAN information, such as the VLAN number and VLAN name, is stored locally in NVRAM, separate from where the 'startup-config' is stored. This happens only when the switch is in VTP Server mode.
For small networks with a limited number of switches and VLANs, storing all VLAN information
on every switch is usually not a problem, but as the network expands and VLANs increase in
number, it becomes a problem and a decision must be made to select a few powerful switches as
the VTP Servers while configuring all other switches to VTP Client mode.
The diagram above shows a Cisco Catalyst 3550 selected to take the role of the network's VTP
Server since it is the most powerful switch. All other Catalyst switches have been configured as
VTP Clients, obtaining all VLAN information and updates from the 3550 VTP Server.
The method and frequency by which these updates occur is covered in much detail on the pages
that follow, so we won't get into any more detail at this point. However, for those who noticed,
there is a new concept introduced in the above diagram that we haven't spoken about: The VTP
Domain.
The VTP Domain - VLAN Management Domain
The VTP Domain, also known as the VLAN Management Domain, is a VTP parameter
configured on every switch connected to the network and used to define the switches that will
participate in any changes or updates made in the specified VTP domain.
Naturally, the core switch (VTP Server) and all other switches participate in the same domain, e.g. 'firewall', so when the VTP Server advertises new VLAN information for the VTP 'firewall' domain, only clients (switches) configured with the same VTP Domain parameter will accept and process these changes; the rest will simply ignore them.
Lastly, some people tend to relate the VTP Domain with the Internet Domain name space,
however, this is completely incorrect. Even though the acronym 'DNS' contains the word
'Domain', it is not related in any way with the VTP Domain. Here (in VTP land), the word
'Domain' is simply used to describe a logical area in which certain hosts (switches) belong to or
participate in, and are affected by any changes made within it.
We should also note that all Cisco switches default to VTP Server mode but will not transmit any
VLAN information to the network until a VTP Domain is set on the switch.
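To give you a feel for how little configuration this involves, here's a hedged sketch for an IOS-based Catalyst, using our example domain name 'firewall'. On the nominated server switch:

vtp domain firewall
vtp mode server

And on every other switch:

vtp domain firewall
vtp mode client

Older CatOS-based switches use the 'set vtp domain firewall' style of commands instead.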
At this point we are only referencing the VTP Domain concept as this is also analysed in greater
depth further on, so let's continue with the VTP modes!
VTP Client Mode
In Client mode, a switch will accept and store in its RAM all VLAN information received from the VTP Server; this information is also saved in NVRAM, so if the switch is powered off, it won't lose its VLAN information.
The VTP Client behaves like a VTP Server, but you are unable to create, modify or delete VLANs on it.
In most networks, the clients connect directly to the VTP Server as shown in our previous
diagram. If, for any reason, two clients are cascaded together, then the information will propagate
downwards via the available Trunk links, ensuring it reaches all switches:
The diagram shows a 3550 Catalyst switch configured as a VTP Server and 4 Catalyst 2950
switches configured as VTP Clients and cascaded below our 3550. When the VTP Server sends a
VTP update, this will travel through all trunk links (ISL, 802.1q, 802.10 and ATM LANE), as
shown in the diagram.
The advertised information will firstly reach the two Catalyst 2950 switches directly connected to
the 3550 and will then travel through the trunk links to the cascaded switches below. If the link between the cascaded 2950s was not a trunk link but an access link, then the 2nd set of switches would not receive any VTP updates:
As you can see, the VTP updates will happily arrive at the first Catalyst switches but stop there, as there are no trunk links between them and the 2950s below them. It is very important you keep this in mind when designing a network or making changes to an existing one.
VTP Transparent Mode
The VTP Transparent mode is something between a VTP Server and a VTP Client but does not
participate in the VTP Domain.
In Transparent mode, you are able to create, modify and delete VLANs on the local switch,
without affecting any other switches regardless of the mode they might be in. Most importantly, if
the transparently configured switch receives an advertisement containing VLAN information, it
will ignore it but at the same time forward it out its trunk ports to any other switches it might be
connected to.
NOTE: A Transparent VTP switch will act as a VTP relay (forward all VTP information it
receives, out its trunk ports) only when VTP version 2 is used in the network. With VTP version
1, the transparent switch will simply ignore and discard any VTP messages received from the rest
of the network.
Lastly, all switches configured to operate in Transparent mode save their configuration in NVRAM (just like the previous two modes) but do not advertise any VLAN information of their own, even though they will happily forward any VTP information received from the rest of the network.
This important functionality allows transparently configured switches to be placed anywhere
within the network, without any implications to the rest of the network because as mentioned,
they act as a repeater for any VLAN information received:
Our 3550 Catalyst here is configured as a VTP Server for the domain called "Firewall". In
addition, we have two switches configured in VTP Client mode, obtaining their VLAN
information from the 3550 VTP Server, but between these two VTP Clients, we have placed
another switch configured to run in VTP Transparent mode.
Our Transparent switch has been configured with the domain called "Lab", and as such, the
switch will forward all incoming VTP updates belonging to the "Firewall" domain out its other
trunk link, without processing the information. At the same time, it won't advertise its own
VLAN information to its neighbouring switches.
In closing, the VTP Transparent mode is not often used in live networks, but it is well worth mentioning and learning about.
Summary
This page introduced a few new and very important concepts. The VTP protocol is considered to be the heart of VLANs in large scale networks as it makes administration easy and transparent for every switch on your network.
We briefly spoke about the three different modes offered by the VTP protocol: Server, Client and
Transparent mode. To assist in providing a quick summary, the table below shows the main
characteristics for each mode:
VTP Server
The default mode for all switches supporting VTP. You can create, modify and delete VLANs and specify other configuration parameters (such as the VTP version) for the entire VTP domain. VTP Servers advertise their VLAN configurations to other switches in the same VTP domain and synchronize their VLAN configurations with other switches based on advertisements received over trunk links. VLAN configurations are saved in NVRAM.

VTP Client
Behaves like a VTP Server, but you cannot create, change or delete VLANs on a VTP Client. VLAN configurations are saved in NVRAM.

VTP Transparent
Does not advertise its VLAN configuration and does not synchronize its VLAN configuration based on received advertisements; however, a Transparent switch will forward VTP advertisements as they are received from other switches. You can create, modify and delete VLANs on a switch in VTP Transparent mode. VLAN configurations are saved in NVRAM, but they are not advertised to other switches.
All switches by default are configured as VTP Servers but without a domain. At this point we
need to select the 'Core' switch (usually the most powerful) and configure it as a VTP Server,
while reconfiguring all the rest to Client mode. Also, VTP Updates sent by the Server will only
propagate through trunk links configured for ISL, IEEE 802.1q, 802.10 or LANE encapsulation.
NOTE: You should be aware that all VTP messages are sent over what we call the "Management VLAN". This specially created VLAN is usually the first one in the network - VLAN 1 - and by rule is never used by anyone other than the switches themselves.
The creation of a Management VLAN ensures all switches have their own network to
communicate between each other without any disruptions.
The next page will analyse the VTP protocol structure, messages and updates. This will provide a deep understanding of how VTP works and what information its messages contain. For those out there keen on configuring a switch for VTP, it's covered towards the end of the VLAN topic as
shown on the VLAN Introduction page.
In-Depth Analysis Of VTP
Introduction
The previous page introduced the VTP protocol and we saw how it can be used within a network to help manage your VLANs and ease the administrative overhead, providing a stress-free VLAN environment by automatically updating all the network switches with the latest VLAN information.
This page extends on the above by delving into the VTP protocol itself and analysing its structure and format, in order to gain a better understanding and enhance those troubleshooting skills.
The VTP Protocol Structure
We've mentioned that the VTP protocol runs only over trunk links interconnecting switches in the
network. Whether you're using ISL or IEEE 802.1q as your encapsulation protocol, it really
doesn't matter as the VTP structure in both cases remains the same.
Following are the fields that make up the VTP protocol:

- VTP Protocol Version (1 or 2)
- VTP Message Type (see below)
- Management Domain Length
- Management Domain Name
What we need to note here is that because there are a variety of "VTP Message Types", the VTP
Header changes depending on these messages, but the fields we just mentioned above are always
included.
To be more specific, here are the different messages currently supported by the VTP protocol:

- Summary Advertisements
- Subset Advertisement
- Advertisement Requests
- VTP Join Messages
It is obvious that all switches use these different messages to request information or advertise the
VLANs they are aware of. These messages are extremely important to understand as they are the
foundations of the VTP protocol.
We'll take each message and analyse them individually, explaining their purpose and usage, but
before we proceed, let's take a quick visual look at the messages and their types to help make all
the above clearer:
First up is the 'Summary Advertisements'.
VTP Protocol - Summary Advertisement Message
The 'Summary Advertisement' message is issued by all VTP Domain Servers at 5-minute intervals, that is, every 300 seconds. These advertisements inform neighbouring Catalyst switches of a variety of information, including the VTP Domain name, configuration revision number, timestamp, MD5 hash code, and the number of subset advertisements to follow.
The configuration revision number is a value each switch stores to help it identify new changes made in the VTP domain. For those experienced with DNS, it's pretty much the same idea as the DNS serial number. Each time a VTP Server's configuration is changed, the configuration revision number automatically increments by one.
When a switch receives a summary advertisement message, it will first compare the VTP domain
name (Mgmt Domain Name field) with its own.
If the Domain Name is found to be different, it will discard the message and forward it out its trunk links. However, in the likely case that the domain name is found to be the same, it will then check the configuration revision number (Config Revision No.); if this is found to be the same as or lower than its own, it will ignore the advertisement. If, on the other hand, it is found to be greater, an advertisement request is sent out.
The Updater Identity field contains the IP Address of the switch that last incremented the
Configuration Revision Number, while the Update Timestamp field gives the time the last update
took place.
The MD5 (Message Digest 5) field contains a hash that incorporates the VTP password, in the case where one is configured, and is used to validate the VTP update.
Lastly, summary advertisements are usually followed by Subset Advertisements; this is indicated by the Followers field, and it is the next message we'll be closely examining.
VTP Protocol - Subset Advertisement
As mentioned in the previous message, when VLAN changes are made on the Catalyst VTP
Server, it will then issue a Summary Advertisement, followed by a Subset Advertisement.
Depending on how many VLANs are configured in the domain, there might be more than one
Subset Advertisement sent to ensure all VLAN information is updated on the VTP Clients.
Comparing the fields of this message with the previous one, you'll notice most of them are
identical, except for the Sequence No. and VLAN Info. Field.
The Code field for a Subset Advertisement of this type is set to 0x02 while the Sequence No.
field contains the sequence of the packet in the stream of packets following a summary
advertisement. The sequence starts with 1 and increments based on the number of packets in the
stream.
Apart from these fields, we also have the VLAN Info Field, which happens to be the most
important as it contains all the VLAN information the switches are waiting for.
The VLAN Info Field will be presented in segments. Its complexity and importance require us to break it up further and analyse the subfields it contains:
Each VLAN Info Field contains all the information required for one VLAN. This means that if
our network is powered with 10 VLANs and a Subset Advertisement is triggered, the VTP Server
will send a total of 10 Subset Advertisements since each VLAN Info Field contains data for one
VLAN.
The most important subfields in the VLAN Info Field are the VLAN Name Length, ISL VLAN
ID, MTU Size and VLAN Name. These subfields contain critical information about the VLAN
advertised in the particular Subset Advertisement frame. Some might be surprised to see settings such as the MTU being configurable per VLAN, and this confirms that each VLAN is treated as a separate network, where even different MTU sizes are possible amongst your network's VLANs.
Advertisement Requests
Turning a switch off results in the loss of all the VTP information stored in its memory (RAM). When the switch is next turned on, its database information is reset and therefore needs to be updated with the latest version available from the VTP Server(s).
A switch will also send an Advertisement Request when it hears a VTP summary advertisement with a higher revision number than its current one. Another scenario where a request would be issued is when the VTP domain membership has changed, even though this is quite uncommon since the VTP domain name is rarely, if ever, changed after its initial configuration.
So what happens when an Advertisement Request hits the streets of your network?
As you would already be aware from the message types we have just covered, the VTP Server will respond with a Summary Advertisement, followed by as many Subset Advertisements as required to inform the VTP Clients about the currently configured VLANs.
The diagram below shows the structure of an Advertisement Request sent by a VTP Client
switch:
Most fields, as you can see, are similar to those of the previous messages, except for two: the Reserved field and the Starting Advertisement To Request field. The Reserved field is exactly what it implies - reserved and not used in Advertisement Request messages - while the Starting Advertisement To Request is the actual request sent by the VTP Client.
VTP Join Messages
VTP Join Messages are similar to the Advertisement Request messages but with a different
Message Type field value and a few more parameters. As indicated by the message name, a VTP
Join Message is sent when the VTP Client first joins a VTP domain, informing the VTP Server(s)
about the new guy in 'town':)
Other VTP Options - VTP Password
The VTP Password is a feature that all security-conscious Administrators/Engineers will welcome. With the password feature, you are able to secure your VTP Domain, since only switches configured with the correct password are able to validate and accept the VTP messages advertised in the management VLAN.
By default the VTP Password option is not turned on, and therefore most management VLANs are set to use unauthenticated advertisements. Once enabled on the VTP Domain Server(s), all switches participating in the domain must be manually configured with the same password, otherwise they will fail to validate incoming VTP messages.
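Enabling the feature is a one-liner on each IOS-based switch; the password 'MySecret' below is of course just an example:

vtp password MySecret

Remember that the exact same password must be set on every switch participating in the domain.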
Summary
This page analysed the structure of each message the VTP protocol currently supports to maintain the network's switches in synchronisation with the VTP domain server(s):

- Summary Advertisements
- Subset Advertisement
- Advertisement Requests
- VTP Join Messages
We're sure you would agree that VLANs are in fact a whole study in themselves, but at the same time it's quite exciting as new concepts and methods of ensuring stability, speed and reliability are revealed.
This completes our in-depth discussion of the VTP protocol messages. Next up is VTP Pruning, a nice service that ensures our network backbone is not constantly flooded with unnecessary traffic. We are sure you'll enjoy the page, along with the awesome diagrams we have prepared.
VTP Pruning
Introduction
As you would be aware, a switched network creates one broadcast domain; similarly, in a VLAN-powered network, all nodes belonging to the same VLAN are part of the same broadcast domain, receiving all broadcasts sent on their network.
The Broadcast And Unicast Problem In VLAN Networks
What we are about to see is how these broadcasts can actually create problems by flooding the VLAN network with unnecessary traffic, which, depending on your network setup, can prove to be a huge problem. The reason for this is that the trunk links interconnecting your network switches will carry these broadcasts to every switch in the network, regardless of which VLAN the broadcast is intended for.
As shown and described, a host connected to a port configured for VLAN 2 on Switch 1 (the first switch on the left) generates a network broadcast. Naturally, the switch will forward the broadcast out all ports assigned to the same VLAN it was received from, that is, VLAN 2.
In addition, the Catalyst switch will forward the broadcast out its trunk link, so it may reach all
ports in the network assigned to VLAN 2. The Root switch receives the broadcast through one of its trunks and immediately forwards it out the other two - towards Switches 2 & 3.
Switch 2 is delighted to receive the broadcast as it does in fact have one port assigned to VLAN
2. Switch 3 however, is a different case - it has no ports assigned to VLAN 2 and therefore will
drop the broadcast packet it receives.
In this example, the bandwidth usage was inefficient because one broadcast packet was sent over all possible trunk links, only to be dropped by Switch 3.
You might ask yourself 'So what's the big deal?'.
The problem here is small and can easily be ignored... but consider a network of fifteen or more 12-port switches (this translates to at least 210 nodes) and you can start to appreciate how serious the problem can get. To make things worse (and more realistic), consider you're using 24-port switches; then you're all of a sudden talking about more than 300 nodes!
To further help you understand how serious the problem gets, take a look at our example network
below:
Here you see a medium-sized network powered by Cisco Catalyst switches. The two main switches up the top are the VTP Servers, which also perform layer 3 switching by routing packets between the VLANs we've created.
Right below them you'll find our 2950 Catalyst switches, which are connected to the core switches via redundant fiber trunk links. Directly below our 2950s are the 2948 Catalyst switches that connect all workstations to the network.
A workstation connected to a port assigned to VLAN 2 decides to send a network broadcast looking for a specific network resource. While the workstation is totally unaware of our network design and complexity, its broadcast is the reason all our trunks will flood with unwanted traffic, consuming valuable bandwidth!
Take a look at what happens:
We don't think describing the above is actually required, as the diagram shows all the information we need, and we're confident you will agree that we are dealing with a big problem :)
So how do we fix this mess ?
Keep reading on as you're about to learn........
The Solution: Enabling VTP Pruning
VTP Pruning, as you might have already guessed, solves the above problem by reducing the unnecessary flooded traffic described previously. This is done by forwarding broadcasts and unknown unicast frames on a VLAN over a trunk link only if the receiving end of the trunk has ports in that VLAN.
Looking at the above diagram, you will notice that the Root Catalyst 3550 switch receives a broadcast from Switch 1 but only forwards it out one of its trunks. The Root switch knows that the broadcast belongs to VLAN 2, and furthermore it's aware that no port is assigned to VLAN 2 on Switch 3, therefore it won't forward it out the trunk that leads to that switch.
Support For VTP Pruning
The VTP Pruning service is supported by both VTP 1 and VTP 2 versions of the VTP protocol.
With VTP 1, VTP pruning is possible with the use of additional VTP message types.
When a Cisco Catalyst switch has ports associated with a VLAN, it will send an advertisement to
its neighboring switches informing them about the ports it has active on that VLAN. This
information is then stored by the neighbors and used to decide if flooded traffic from a VLAN
should be forwarded to the switch via the trunk port or not.
Note: VTP Pruning is disabled by default on all Cisco Catalyst switches and can be enabled by
issuing the "set vtp pruning enable" command.
If this command is issued on the VTP Server(s) of your network, then pruning is enabled for the
entire management domain.
VTP Pruning configuration and commands are covered in section 11.4 as outlined in the VLAN
Introduction page, however, we should inform you that you can actually enable pruning for
specific VLANs in your network.
When you enable VTP Pruning on your network, all VLANs become eligible for pruning on all trunk links. This default list of pruning eligibility can thankfully be modified to suit your needs, but you must first clear all VLANs from the list using the "clear vtp prune-eligible vlan-range" command and then set the VLAN range you wish to add to the prune-eligible list by issuing the following command: "set vtp prune-eligible vlan-range", where the 'vlan-range' is the actual inclusive range of VLANs, e.g. '2-20'.
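On IOS-based Catalyst switches the equivalent is just as simple - a hedged sketch, with the interface and the VLAN range '2-20' again used only as examples:

vtp pruning
!
interface FastEthernet0/24
 switchport trunk pruning vlan 2-20

The first command enables pruning for the management domain, while the interface command adjusts the prune-eligible list for that particular trunk.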
By default, VLANs 2–1000 are eligible for pruning. VLAN 1 has a special meaning because it is
normally used as a management VLAN and is never eligible for pruning, while VLANs 1001–
1005 are also never eligible for pruning. If the VLANs are configured as pruning-ineligible, the
flooding continues as illustrated in our examples.
Summary
VTP Pruning can in fact be an administrator's best friend in any Cisco powered network,
increasing available bandwidth by restricting flooded traffic to those trunk links that the traffic
must use to reach the destination devices.
At this point, we have also come to the end of the first part of our VLAN presentation. As we are
still working on the second and final part of the VLAN topic, we hope these pages will keep you
going until it is complete.