Dealing with multiple clients
CSMA/CD shared medium Ethernet
Ethernet originally used a shared coaxial cable (the shared medium) winding around a
building or campus to every attached machine. A scheme known as carrier sense multiple
access with collision detection (CSMA/CD) governed the way the computers shared the
channel. This scheme was simpler than the competing token ring or token bus
technologies. When a computer wanted to send some information, it used the following
algorithm:
Main procedure
1. Frame ready for transmission.
2. Is medium idle? If not, wait until it becomes ready and wait the interframe gap
period (9.6 µs in 10 Mbit/s Ethernet).
3. Start transmitting.
4. Did a collision occur? If so, go to collision detected procedure.
5. Reset retransmission counters and end frame transmission.
Collision detected procedure
1. Continue transmission until minimum packet time is reached (jam signal) to
ensure that all receivers detect the collision.
2. Increment retransmission counter.
3. Was the maximum number of transmission attempts reached? If so, abort
transmission.
4. Calculate and wait random backoff period based on number of collisions.
5. Re-enter main procedure at stage 1.
This can be likened to what happens at a dinner party, where all the guests talk to each
other through a common medium (the air). Before speaking, each guest politely waits for
the current speaker to finish. If two guests start speaking at the same time, both stop and
wait for short, random periods of time (in Ethernet, this time is generally measured in
microseconds). The hope is that by each choosing a random period of time, both guests
will not choose the same time to try to speak again, thus avoiding another collision.
Exponentially increasing back-off times (determined using the truncated binary
exponential backoff algorithm) are used when there is more than one failed attempt to
transmit.
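The two procedures above, including truncated binary exponential backoff, can be sketched in Python. This is a toy simulation, not a real MAC layer: the Medium class is an illustrative stand-in for the shared cable (always idle, colliding a fixed number of times), and the timing constants are taken from the 10 Mbit/s figures in the text.

```python
import random

SLOT_TIME_US = 51.2        # slot time for 10 Mbit/s Ethernet
INTERFRAME_GAP_US = 9.6    # interframe gap for 10 Mbit/s Ethernet
MAX_ATTEMPTS = 16          # attempts before the frame is aborted
BACKOFF_CAP = 10           # truncation point of the exponential backoff

class Medium:
    """Toy shared medium: always idle, collides on the first `collisions` sends."""
    def __init__(self, collisions: int):
        self.remaining = collisions
    def is_idle(self) -> bool:
        return True
    def wait_us(self, us: float) -> None:
        pass                               # a real station would actually wait here
    def send(self, frame) -> bool:
        if self.remaining:
            self.remaining -= 1
            return False                   # collision detected
        return True

def backoff_slots(collisions: int) -> int:
    """Pick a random backoff in [0, 2^k - 1] slots, with k capped at 10."""
    k = min(collisions, BACKOFF_CAP)
    return random.randint(0, 2 ** k - 1)

def transmit(frame, medium: Medium) -> bool:
    """CSMA/CD main procedure: True on success, False if the frame is aborted."""
    collisions = 0
    while True:
        while not medium.is_idle():        # 2. wait for the medium to become ready
            pass
        medium.wait_us(INTERFRAME_GAP_US)  # 2. wait the interframe gap
        if medium.send(frame):             # 3-5. transmit; True means no collision
            return True
        collisions += 1                    # collision detected procedure, step 2
        if collisions >= MAX_ATTEMPTS:
            return False                   # step 3: abort transmission
        medium.wait_us(backoff_slots(collisions) * SLOT_TIME_US)  # step 4
```

A frame survives a few collisions but is abandoned after 16 failed attempts, mirroring steps 3 and 4 of the collision detected procedure.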
Computers were connected to an Attachment Unit Interface (AUI) transceiver, which was
in turn connected to the cable (later with thin Ethernet the transceiver was integrated into
the network adapter). While a simple passive wire was highly reliable for small
Ethernets, it was not reliable for large extended networks, where damage to the wire in a
single place, or a single bad connector, could make the whole Ethernet segment unusable.
Multipoint systems are also prone to very strange failure modes when an electrical
discontinuity reflects the signal in such a manner that some nodes would work properly
while others work slowly because of excessive retries or not at all (see standing wave for
an explanation of why); these could be much more painful to diagnose than a complete
failure of the segment. Debugging such failures often involved several people crawling
around wiggling connectors while others watched the displays of computers running a
ping command and shouted out reports as performance changed.
Since all communications happen on the same wire, any information sent by one
computer is received by all, even if that information is intended for just one destination.
The network interface card interrupts the CPU only when applicable packets are received:
the card ignores information not addressed to it unless it is put into "promiscuous mode".
This "one speaks, all listen" property is a security weakness of shared-medium Ethernet,
since a node on an Ethernet network can eavesdrop on all traffic on the wire if it so
chooses. Use of a single cable also means that the bandwidth is shared, so that network
traffic can slow to a crawl when, for example, the network and nodes restart after a power
failure.
Physical layer
The first Ethernet networks, 10BASE5, used thick yellow cable with vampire taps as a
shared medium (using CSMA/CD). Later, 10BASE2 Ethernet used thinner coaxial cable
(with BNC connectors) as the shared CSMA/CD medium. The later StarLAN 1BASE5
and 10BASE-T used twisted pair connected to Ethernet hubs with 8P8C modular
connectors (not to be confused with FCC's RJ45).
Currently Ethernet has many varieties that vary both in speed and physical medium used.
Perhaps the most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T.
All three utilize twisted pair cables and 8P8C modular connectors (often called RJ45).
They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. However, each version has become steadily more selective about the cable it runs on, and some installers have avoided 1000BASE-T for everything except short connections to servers.
Fiber optic variants of Ethernet are commonly used in structured cabling applications.
These variants have also seen substantial penetration in enterprise datacenter
applications, but are rarely seen connected to end user systems for cost/convenience
reasons. Their advantages lie in performance, electrical isolation and distance, up to tens
of kilometers with some versions. Fiber versions of a new higher speed almost invariably
come out before copper. 10 gigabit Ethernet is becoming more popular in both enterprise
and carrier networks, with development starting on 40 Gbit/s [6][7] and 100 Gbit/s
Ethernet. Metcalfe now believes commercial applications using terabit Ethernet may
occur by 2015 though he says existing Ethernet standards may have to be overthrown to
reach terabit Ethernet. [8]
A data packet on the wire is called a frame. A frame viewed on the actual physical wire
would show Preamble and Start Frame Delimiter, in addition to the other data. These are
required by all physical hardware. They are not displayed by packet sniffing software
because these bits are removed by the Ethernet adapter before being passed on to the host
(in contrast, it is often the device driver which removes the CRC32 (FCS) from the
packets seen by the user).
The table below shows the complete Ethernet frame, as transmitted. Note that the bit
patterns in the preamble and start of frame delimiter are written as bit strings, with the
first bit transmitted on the left (not as byte values, which in Ethernet are transmitted least
significant bit first). This notation matches the one used in the IEEE 802.3 standard.
Preamble                  7 octets of 10101010
Start-of-Frame Delimiter  1 octet of 10101011
MAC destination           6 octets
MAC source                6 octets
Ethertype/Length          2 octets
Payload                   46-1500 octets
CRC32                     4 octets
Interframe gap            96 bit times (960 ns at 100M)
The fields from MAC destination through CRC32 together span 64-1518 octets; counting the preamble and Start-of-Frame Delimiter as well, 72-1526 octets.
After a frame has been sent, transmitters are required to pause for a specified time before transmitting the next frame: 9600 ns at 10 Mbit/s, 960 ns at 100 Mbit/s, and 96 ns at 1000 Mbit/s.
10/100M transceiver chips (MII PHY) work with 4-bits (nibble) at a time. Therefore the
preamble will be 7 instances of 0101 + 0101, and the Start Frame Delimiter will be 0101
+ 1101. 8-bit values are sent low 4-bit and then high 4-bit. 1000M transceiver chips
(GMII) work with 8 bits at a time, and 10 Gbit/s (XGMII) PHY works with 32 bits at a
time.
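Assuming Ethernet II framing, the on-the-wire layout in the table above can be sketched in Python. The build_frame helper is illustrative, not a real driver: zlib's CRC-32 uses the same polynomial as the Ethernet FCS, though the exact bit ordering applied by real hardware is glossed over here.

```python
import struct
import zlib

PREAMBLE = bytes([0b10101010]) * 7   # 7 octets of 10101010
SFD = bytes([0b10101011])            # Start-of-Frame Delimiter, 10101011

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble an untagged Ethernet II frame as it would appear on the wire."""
    assert len(dst) == 6 and len(src) == 6
    payload = payload.ljust(46, b"\x00")               # pad to the 46-octet minimum
    header = dst + src + struct.pack("!H", ethertype)  # Ethertype is big-endian
    fcs = struct.pack("<I", zlib.crc32(header + payload))
    return PREAMBLE + SFD + header + payload + fcs

# Broadcast destination, 0x0800 = IPv4 payload:
frame = build_frame(b"\xff" * 6, b"\x00\x13\x20\xd8\x2a\xae", 0x0800, b"hello")
# 8 (preamble + SFD) + 14 (header) + 46 (padded payload) + 4 (CRC) = 72 octets,
# the minimum on-the-wire size quoted in the table above.
```

Note that a sniffer would show only the middle 64 octets: the preamble, SFD, and (usually) the FCS are stripped before the frame reaches user software, as described above.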
Some implementations use larger jumbo frames.
Ethernet frame types and the EtherType field
There are several types of Ethernet frames:

- The Ethernet Version 2 or Ethernet II frame, the so-called DIX frame (named after DEC, Intel, and Xerox); this is the most common today, as it is often used directly by the Internet Protocol.
- Novell's non-standard variation of IEEE 802.3 ("raw 802.3 frame") without an IEEE 802.2 LLC header.
- IEEE 802.2 LLC frame
- IEEE 802.2 LLC/SNAP frame
In addition, Ethernet frames may optionally contain an IEEE 802.1Q tag identifying the VLAN the frame belongs to and its IEEE 802.1p priority (quality of service). This encapsulation is defined in the IEEE 802.3ac specification and increases the maximum frame size by 4 bytes, to 1522 bytes.
MAC address
In computer networking, a Media Access Control address (MAC address), also called an Ethernet Hardware Address (EHA), hardware address, or adapter address, is a quasi-unique identifier attached to most network adapters (NICs, Network Interface Cards). It is a number that acts like a name for a particular network adapter: the network cards (or built-in network adapters) in two different computers will have different MAC addresses, as will an Ethernet adapter and a wireless adapter in the same computer, and as will multiple network cards in a router. However, it is possible to change the MAC address on most of today's hardware, a practice often referred to as MAC spoofing.
Most layer 2 network protocols use one of three numbering spaces managed by the IEEE:
MAC-48, EUI-48, and EUI-64, which are designed to be globally unique. Not all
communications protocols use MAC addresses, and not all protocols require globally
unique identifiers. The IEEE claims trademarks on the names "EUI-48" and "EUI-64"
("EUI" stands for Extended Unique Identifier).
MAC addresses, unlike IP addresses and IPX addresses, are not divided into "host" and
"network" portions. Therefore, a host cannot determine from the MAC address of another
host whether that host is on the same layer 2 network segment as the sending host or a
network segment bridged to that network segment.
ARP is commonly used to convert from addresses in a layer 3 protocol such as Internet
Protocol (IP) to the layer 2 MAC address. On broadcast networks, such as Ethernet, the
MAC address allows each host to be uniquely identified and allows frames to be marked
for specific hosts. It thus forms the basis of most of the layer 2 networking upon which
higher OSI Layer protocols are built to produce complex, functioning networks.
Notational conventions
The standard (IEEE 802) format for printing MAC-48 addresses in human-readable
media is six groups of two hexadecimal digits, separated by hyphens (-) in transmission
order, e.g. 01-23-45-67-89-ab. This form is also commonly used for EUI-64. Other
conventions include six groups of two separated by colons (:), e.g. 01:23:45:67:89:ab;
or three groups of four hexadecimal digits separated by dots (.), e.g. 0123.4567.89ab;
again in transmission order.
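A small Python helper (illustrative, not part of any standard library) can render the same 48-bit address in each of the conventions above:

```python
def format_mac(addr: bytes, style: str = "hyphen") -> str:
    """Format a 6-octet MAC address in one of the common notations."""
    hexed = addr.hex()      # e.g. "0123456789ab", in transmission order
    if style == "hyphen":   # IEEE 802 style: 01-23-45-67-89-ab
        return "-".join(hexed[i:i + 2] for i in range(0, 12, 2))
    if style == "colon":    # 01:23:45:67:89:ab
        return ":".join(hexed[i:i + 2] for i in range(0, 12, 2))
    if style == "dot":      # three groups of four digits: 0123.4567.89ab
        return ".".join(hexed[i:i + 4] for i in range(0, 12, 4))
    raise ValueError(f"unknown style: {style}")

format_mac(bytes.fromhex("0123456789ab"))           # → "01-23-45-67-89-ab"
format_mac(bytes.fromhex("0123456789ab"), "colon")  # → "01:23:45:67:89:ab"
format_mac(bytes.fromhex("0123456789ab"), "dot")    # → "0123.4567.89ab"
```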
Address details
The original IEEE 802 MAC address comes from the original Xerox Ethernet addressing scheme.[1] This 48-bit address space contains 2^48, or 281,474,976,710,656, possible MAC addresses.
All three numbering systems use the same format and differ only in the length of the
identifier. Addresses can either be "universally administered addresses" or "locally
administered addresses."
A universally administered address is uniquely assigned to a device by its
manufacturer; these are sometimes called "burned-in addresses" (BIA). The first three
octets (in transmission order) identify the organization that issued the identifier and are
known as the Organizationally Unique Identifier (OUI). The following three (MAC-48
and EUI-48) or five (EUI-64) octets are assigned by that organization in nearly any
manner they please, subject to the constraint of uniqueness. The IEEE expects the MAC-48 space to be exhausted no sooner than the year 2100; EUI-64s are not expected to run out in the foreseeable future.
A locally administered address is assigned to a device by a network administrator,
overriding the burned-in address. Locally administered addresses do not contain OUIs.
Universally administered and locally administered addresses are distinguished by the second least significant bit of the most significant byte of the address. If the bit is 0, the address is universally administered; if it is 1, the address is locally administered. The bit is 0 in all OUIs. For example, consider 02-00-00-00-00-01: the most significant byte is 02h, binary 00000010, whose second least significant bit is 1, so this is a locally administered address.[2]
If the least significant bit of the most significant byte is 0, the frame is meant to reach only one receiving NIC; this is called unicast. If that bit is 1, the frame is sent only once but is meant to be received by several NICs; this is called multicast.
MAC-48 and EUI-48 addresses are usually shown in hexadecimal format, with each octet separated by a dash or colon. An example of a MAC-48 address would be "00-08-74-4C-7F-1D". Cross-referencing the first three octets with the IEEE's OUI assignments[3] shows that this MAC address came from Dell Computer Corp. The last three octets represent the serial number assigned to the adapter by the manufacturer.
The following technologies use the MAC-48 identifier format:

- Ethernet
- 802.11 wireless networks
- Bluetooth
- IEEE 802.5 token ring
- most other IEEE 802 networks
- FDDI
- ATM (switched virtual connections only, as part of an NSAP address)
- Fibre Channel and Serial Attached SCSI (as part of a World Wide Name)
The distinction between EUI-48 and MAC-48 identifiers is purely semantic: MAC-48 is
used for network hardware; EUI-48 is used to identify other devices and software. (Thus,
by definition, an EUI-48 is not in fact a "MAC address", although it is syntactically
indistinguishable from one and assigned from the same numbering space.)
Note: The IEEE now considers the label MAC-48 to be an obsolete term which was
previously used to refer to a specific type of EUI-48 identifier used to address hardware
interfaces within existing 802-based networking applications and should not be used in
the future. Instead, the term EUI-48 should be used for this purpose.
Ethernet repeaters and hubs
For signal degradation and timing reasons, coaxial Ethernet segments had a restricted size
which depended on the medium used. For example, 10BASE5 coax cables had a
maximum length of 500 meters (1,640 ft). Also, as was the case with most other high-speed buses, Ethernet segments had to be terminated with a resistor at each end. For
coaxial-cable-based Ethernet, each end of the cable had a 50-ohm resistor attached.
Typically this resistor was built into a male BNC or N connector and attached to the last
device on the bus, or, if vampire taps were in use, to the end of the cable just past the last
device. If termination was not done, or if there was a break in the cable, the AC signal on
the bus was reflected, rather than dissipated, when it reached the end. This reflected
signal was indistinguishable from a collision, and so no communication would be able to
take place.
A greater length could be obtained by an Ethernet repeater, which took the signal from
one Ethernet cable and repeated it onto another cable. If a collision was detected, the
repeater transmitted a jam signal onto all ports to ensure collision detection. Repeaters
could be used to connect segments such that there were up to five Ethernet segments
between any two hosts, three of which could have attached devices. Repeaters could
detect an improperly terminated link from the continuous collisions and stop forwarding
data from it. Hence they alleviated the problem of cable breakages: when an Ethernet
coax segment broke, while all devices on that segment were unable to communicate,
repeaters allowed the other segments to continue working - although depending on which
segment was broken and the layout of the network the partitioning that resulted may have
made other segments unable to reach important servers and thus effectively useless.
People recognized the advantages of cabling in a star topology, primarily that only faults
at the star point will result in a badly partitioned network, and network vendors started
creating repeaters having multiple ports, thus reducing the number of repeaters required
at the star point. Multiport Ethernet repeaters became known as "Ethernet hubs". Network
vendors such as DEC and SynOptics sold hubs that connected many 10BASE2 thin
coaxial segments. There were also "multi-port transceivers" or "fan-outs". These could be
connected to each other and/or a coax backbone. The best-known early example was
DEC's DELNI. These devices allowed multiple hosts with AUI connections to share a
single transceiver. They also allowed creation of a small standalone Ethernet segment
without using a coaxial cable.
(Figure: a twisted pair CAT-3 or CAT-5 cable is used to connect 10BASE-T Ethernet.)
Ethernet on unshielded twisted-pair cables (UTP), beginning with StarLAN and
continuing with 10BASE-T, was designed for point-to-point links only and all
termination was built into the device. This changed hubs from a specialist device used at
the center of large networks to a device that every twisted pair-based network with more
than two machines had to use. The tree structure that resulted from this made Ethernet
networks more reliable by preventing faults with (but not deliberate misbehavior of) one
peer or its associated cable from affecting other devices on the network, although a
failure of a hub or an inter-hub link could still affect lots of users. Also, since twisted pair
Ethernet is point-to-point and terminated inside the hardware, the total empty panel space
required around a port is much reduced, making it easier to design hubs with lots of ports
and to integrate Ethernet onto computer motherboards.
Despite the physical star topology, hubbed Ethernet networks still use half-duplex and
CSMA/CD, with only minimal activity by the hub, primarily the Collision Enforcement
signal, in dealing with packet collisions. Every packet is sent to every port on the hub, so
bandwidth and security problems aren't addressed. The total throughput of the hub is
limited to that of a single link and all links must operate at the same speed.
Collisions reduce throughput by their very nature. In the worst case, when there are lots
of hosts with long cables that attempt to transmit many short frames, excessive collisions
can reduce throughput dramatically. However, a Xerox report in 1980 summarized the
results of having 20 fast nodes attempting to transmit packets of various sizes as quickly
as possible on the same Ethernet segment.[4] The results showed that, even for the
smallest Ethernet frames (64B), 90% throughput on the LAN was the norm. This is in
comparison with token passing LANs (token ring, token bus), all of which suffer
throughput degradation as each new node comes into the LAN, due to token waits.
This report was wildly controversial, as modeling showed that collision-based networks
became unstable under loads as low as 40% of nominal capacity. Many early researchers
failed to understand the subtleties of the CSMA/CD protocol and how important it was to
get the details right, and were really modeling somewhat different networks (usually not
as good as real Ethernet).[5]
Bridging and switching
While repeaters could isolate some aspects of Ethernet segments, such as cable
breakages, they still forwarded all traffic to all Ethernet devices. This created practical
limits on how many machines could communicate on an Ethernet network. Also, because the entire network was one collision domain and all hosts had to be able to detect collisions anywhere on the network, the number of repeaters between the farthest nodes was limited. Finally, segments joined by repeaters all had to operate at the same speed, making phased-in upgrades impossible.
To alleviate these problems, bridging was created to communicate at the data link layer
while isolating the physical layer. With bridging, only well-formed packets are forwarded
from one Ethernet segment to another; collisions and packet errors are isolated. Bridges
learn where devices are, by watching MAC addresses, and do not forward packets across
segments when they know the destination address is not located in that direction.
Prior to discovery of network devices on the different segments, Ethernet bridges and
switches work somewhat like Ethernet hubs, passing all traffic between segments.
However, as the switch discovers the addresses associated with each port, it forwards network traffic only to the necessary segments, improving overall performance.
Broadcast traffic is still forwarded to all network segments. Bridges also overcame the
limits on total segments between two hosts and allowed the mixing of speeds, both of
which became very important with the introduction of Fast Ethernet.
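The learning behaviour described above can be sketched as a simple forwarding table. This is a toy model: ports and MAC addresses are plain values, and entry aging is omitted.

```python
class LearningBridge:
    """Toy learning bridge: learn source addresses, flood unknown destinations."""
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.table = {}  # MAC address -> port it was last seen on

    def handle(self, in_port: int, src: str, dst: str) -> list[int]:
        """Return the list of ports a frame is forwarded out of."""
        self.table[src] = in_port                # learn where src lives
        if dst in self.table:
            out = self.table[dst]
            # Known destination: forward to its port, unless it is on the
            # segment the frame arrived from (then nothing needs forwarding).
            return [out] if out != in_port else []
        # Unknown destination (or broadcast): flood to every other port.
        return [p for p in range(self.num_ports) if p != in_port]

bridge = LearningBridge(4)
bridge.handle(0, "aa", "bb")   # "bb" unknown: flood to ports 1, 2, 3
bridge.handle(1, "bb", "aa")   # "aa" was learned on port 0: forward to [0]
```

Before any addresses are learned the bridge floods everything, hub-like, exactly as the paragraph above describes; traffic narrows to single ports as the table fills.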
Early bridges examined each packet one by one using software on a CPU, and some of
them were significantly slower than hubs (multi-port repeaters) at forwarding traffic,
especially when handling many ports at the same time. In 1989 the networking company
Kalpana introduced their EtherSwitch, the first Ethernet switch. An Ethernet switch does
bridging in hardware, allowing it to forward packets at full wire speed. It is important to
remember that the term switch was invented by device manufacturers and does not appear
in the 802.3 standard. Functionally, the two terms are interchangeable.
Since packets are typically only delivered to the port they are intended for, traffic on a
switched Ethernet is slightly less public than on shared-medium Ethernet. Despite this,
switched Ethernet should still be regarded as an insecure network technology, because it
is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC
flooding. The bandwidth advantages, the slightly better isolation of devices from each
other, the ability to easily mix different speeds of devices and the elimination of the
chaining limits inherent in non-switched Ethernet have made switched Ethernet the
dominant network technology.
When a twisted pair or fiber link segment is used and neither end is connected to a hub,
full-duplex Ethernet becomes possible over that segment. In full duplex mode both
devices can transmit and receive to/from each other at the same time, and there is no
collision domain. This doubles the aggregate bandwidth of the link and is sometimes
advertised as double the link speed (e.g. 200 Mbit/s) to account for this. However, this is
misleading as performance will only double if traffic patterns are symmetrical (which in
reality they rarely are). The elimination of the collision domain also means that all the
link's bandwidth can be used and that segment length is not limited by the need for
correct collision detection (this is most significant with some of the fiber variants of
Ethernet).
IP address
The five-layer TCP/IP model
5. Application layer
DHCP · DNS · FTP · Gopher · HTTP ·
IMAP4 · IRC · NNTP · XMPP · POP3
· RTP · SIP · SMTP · SNMP · SSH ·
TELNET · RPC · RTCP · RTSP · TLS
(and SSL) · SDP · SOAP · GTP ·
STUN · NTP · (more)
4. Transport layer
TCP · UDP · DCCP · SCTP · RSVP ·
ECN · (more)
3. Network/internet layer
IP (IPv4 · IPv6) · OSPF · IS-IS · BGP ·
IPsec · ARP · RARP · RIP · ICMP ·
ICMPv6 · IGMP · (more)
2. Data link layer
802.11 (WLAN) · 802.16 · Wi-Fi ·
WiMAX · ATM · DTM · Token ring ·
Ethernet · FDDI · Frame Relay · GPRS
· EVDO · HSPA · HDLC · PPP · PPTP
· L2TP · ISDN · ARCnet · LLTD ·
(more)
1. Physical layer
Ethernet physical layer · PLC ·
SONET/SDH · G.709 · Optical fiber ·
Coaxial cable · Twisted pair · (more)
An IP address (or Internet Protocol address) is a unique address that certain electronic
devices use in order to identify and communicate with each other on a computer network
utilizing the Internet Protocol standard (IP)—in simpler terms, a computer address. Any
participating network device—including routers, switches, computers, infrastructure
servers (e.g., NTP, DNS, DHCP, SNMP, etc.), printers, Internet fax machines, and some
telephones—can have its own address that is unique within the scope of the specific
network. Some IP addresses are intended to be unique within the scope of the global
Internet, while others need to be unique only within the scope of an enterprise.
The IP address acts as a locator for one IP device to find another and interact with it. It is
not intended, however, to act as an identifier that always uniquely identifies a particular
device. In current practice, an IP address is not always a unique identifier, due to
technologies such as dynamic assignment and network address translation.
IP addresses are managed and created by the Internet Assigned Numbers Authority
(IANA). The IANA generally allocates super-blocks to Regional Internet Registries, who
in turn allocate smaller blocks to Internet service providers and enterprises.
IP versions
The Internet Protocol (IP) has two versions currently in use (see IP version history for
details). Each version has its own definition of an IP address. Because of its prevalence,
"IP address" typically refers to those defined by IPv4.
(Figure: an illustration of an IP address (version 4), in both dot-decimal notation and binary.)
IP version 4 addresses
IPv4 uses 32-bit (4-byte) addresses, which limits the address space to 4,294,967,296 (2^32) possible unique addresses. However, many are reserved for special purposes, such as private networks (~18 million addresses) or multicast addresses (~270 million addresses). This reduces the number of addresses that can be allocated as public Internet addresses, and as the available addresses are consumed, an IPv4 address shortage appears inevitable in the long run. This limitation has helped stimulate the push towards IPv6, which is currently in the early stages of deployment and is the only contender to replace IPv4.
IPv4 addresses are usually represented in dotted-decimal notation (four numbers, each
ranging from 0 to 255, separated by dots, e.g. 147.132.42.18). Each range from 0 to 255
can be represented by 8 bits, and is therefore called an octet. It is possible, although less
common, to write IPv4 addresses in binary or hexadecimal. When converting, each octet
is treated as a separate number. (So 255.255.0.0 in dot-decimal would be FF.FF.00.00 in
hexadecimal.)
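The conversion treats each octet independently, which can be sketched in a couple of lines (helper names are illustrative):

```python
def dotted_to_hex(addr: str) -> str:
    """Convert a dotted-decimal IPv4 address to dotted-hexadecimal, octet by octet."""
    return ".".join(f"{int(octet):02X}" for octet in addr.split("."))

def dotted_to_binary(addr: str) -> str:
    """Render each octet as its 8-bit binary string."""
    return ".".join(f"{int(octet):08b}" for octet in addr.split("."))

dotted_to_hex("255.255.0.0")       # → "FF.FF.00.00"
dotted_to_binary("147.132.42.18")  # → "10010011.10000100.00101010.00010010"
```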
IPv4 address networks
Currently, three classes of networks are commonly used. These classes may be
segregated by the number of octets used to identify a single network, and also by the
range of numbers used by the first octet.
- Class A networks (the largest) are identified by the first octet, which ranges from 1 to 127.
- Class B networks are identified by the first two octets, the first of which ranges from 128 to 191.
- Class C networks (the smallest) are identified by the first three octets, the first of which ranges from 192 to 223.

Class  Range of first octet  Network ID  Host ID  Possible number of networks  Possible number of hosts
A      1 - 127               a           b.c.d    126 = (2^7 - 2)              16,777,214 = (2^24 - 2)
B      128 - 191             a.b         c.d      16,384 = (2^14)              65,534 = (2^16 - 2)
C      192 - 223             a.b.c       d        2,097,151 = (2^21 - 1)       254 = (2^8 - 2)
Some first-octet values have special meanings:

- First octet 127 represents the local computer, regardless of what network it is really in. This is useful when testing internal operations.
- First octets 224 and above are reserved for special purposes such as multicasting.

Octets 0 and 255 are not acceptable values in some situations, but 0 can be used as the second and/or third octet (e.g. 10.2.0.100).
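Classifying an address by its first octet, per the rules above, can be sketched as follows (a classful-only sketch; CIDR, discussed below, ignores these boundaries):

```python
def address_class(addr: str) -> str:
    """Classify an IPv4 address by the historical class rules on its first octet."""
    first = int(addr.split(".")[0])
    if first == 127:
        return "loopback"       # represents the local computer
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    return "reserved"           # 224 and above: multicast and other special uses

address_class("10.2.0.100")     # → "A"
address_class("147.132.42.18")  # → "B"
address_class("192.168.2.3")    # → "C"
```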
A class A network does not necessarily consist of 16 million machines on a single
network, which would excessively burden most network technologies and their
administrators. Instead, a large company is assigned a class A network, and segregates it
further into smaller sub-nets using Classless Inter-Domain Routing. However, the class
labels are still commonly used as broad descriptors.
IPv4 private addresses
Machines not connected to the outside world (e.g. factory machines that communicate
with each other via TCP/IP) need not have globally-unique IP addresses. Three ranges of
IPv4 addresses for private networks, one per class, were standardized by RFC 1918; these
addresses will not be routed, and thus need not be coordinated with any IP address
registrars.
IANA Reserved Private Network Ranges  Start of range  End of range
The 24-bit Block                      10.0.0.0        10.255.255.255
The 20-bit Block                      172.16.0.0      172.31.255.255
The 16-bit Block                      192.168.0.0     192.168.255.255
Each block is not necessarily one single network, although it is possible. Typically the
network administrator will divide a block into subnets; for example, many home routers
automatically use a default address range of 192.168.0.0 - 192.168.0.255
(192.168.0.0/24).
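Python's standard ipaddress module can test membership in these blocks directly, which makes the three RFC 1918 ranges easy to check:

```python
import ipaddress

RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),      # the 24-bit block
    ipaddress.ip_network("172.16.0.0/12"),   # the 20-bit block
    ipaddress.ip_network("192.168.0.0/16"),  # the 16-bit block
]

def is_rfc1918(addr: str) -> bool:
    """True if the address falls inside one of the private-use ranges above."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in RFC1918_BLOCKS)

is_rfc1918("192.168.2.3")      # → True  (a typical home-router LAN address)
is_rfc1918("167.206.251.130")  # → False (a publicly routable address)
```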
A sample ipconfig /all report from a Windows machine on such a home network illustrates these settings:

Windows IP Configuration
Host Name . . . . . . . . . . . . : DellDesktop
Primary Dns Suffix . . . . . . . :
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : HomeNet
Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . : HomeNet
Description . . . . . . . . . . . : Intel(R) PRO/100 VE Network Connection
Physical Address. . . . . . . . . : 00-13-20-D8-2A-AE
Dhcp Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
IP Address. . . . . . . . . . . . : 192.168.2.3
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 192.168.2.1
DHCP Server . . . . . . . . . . . : 192.168.2.1
DNS Servers . . . . . . . . . . . : 167.206.251.130
Lease Obtained. . . . . . . . . . : Saturday, May 03, 2008 10:33:49 AM
Lease Expires . . . . . . . . . . : Saturday, May 03, 2008 11:03:49 AM
TCP/IP model
The five-layer TCP/IP model
5. Application layer
DHCP · DNS · FTP · Gopher
· HTTP · IMAP4 · IRC · NNTP ·
XMPP · POP3 · RTP · SIP · SMTP ·
SNMP · SSH · TELNET · RPC · RTCP
· RTSP · TLS (and SSL) · SDP · SOAP
· GTP · STUN · NTP · (more)
4. Transport layer
TCP · UDP · DCCP · SCTP · RSVP ·
ECN · (more)
3. Network/internet layer
IPv6 · OSPF · IS-IS · BGP · IPsec ·
ARP · RARP · RIP · ICMP · ICMPv6 ·
IGMP · (more)
2. Data link layer
802.11 (WLAN) · 802.16 · Wi-Fi ·
WiMAX · ATM · DTM · Token ring ·
Ethernet · FDDI · Frame Relay · GPRS
· EVDO · HSPA · HDLC · PPP · PPTP
· L2TP · ISDN · ARCnet · LLTD ·
(more)
1. Physical layer
Ethernet physical layer · PLC ·
SONET/SDH · G.709 · Optical fiber ·
Coaxial cable · Twisted pair · (more)
The TCP/IP Model is a specification for computer network protocols created in the
1970s by DARPA, an agency of the United States Department of Defense. It laid the
foundations for ARPANET, which was the world's first wide area network and a
predecessor of the Internet. The TCP/IP Model is sometimes called the Internet
Reference Model, the DoD Model or the ARPANET Reference Model.
TCP/IP defines a set of rules to enable computers to communicate over a network.
TCP/IP provides end to end connectivity specifying how data should be formatted,
addressed, shipped, routed and delivered to the right destination. The specification
defines protocols for different types of communication between computers and provides a
framework for more detailed standards.
TCP/IP is generally described as having four 'layers', or five if the bottom physical
layer is counted separately. The layered view of TCP/IP is commonly set against the
seven-layer OSI Reference Model, which was written long after the original TCP/IP
specifications and is not an official part of them. Regardless, the analogy is a good one
for how TCP/IP works, and comparison of the two models is common.
The TCP/IP Model and related protocols are currently maintained by the Internet
Engineering Task Force (IETF).
Contents
1 Key Architectural Principles
o 1.1 Layers in the TCP/IP model
o 1.2 OSI and TCP/IP Layering Differences
2 The layers
o 2.1 Application layer
o 2.2 Transport layer
o 2.3 Network layer
o 2.4 Data link layer
o 2.5 Physical layer
3 Hardware and software implementation
4 See also
5 References
6 External links
[edit] Key Architectural Principles
An early architectural document, RFC 1122, emphasizes architectural principles over
layering[1].
1. End-to-End Principle: This principle has evolved over time. Its original
expression put the maintenance of state and overall intelligence at the edges, and
assumed the Internet that connected the edges retained no state and concentrated
on speed and simplicity. Real-world needs for firewalls, network address
translators, web content caches and the like have forced changes in this Principle.
[2]
2. Robustness Principle: "Be liberal in what you accept, and conservative in what
you send. Software on other hosts may contain deficiencies that make it unwise to
exploit legal but obscure protocol features".
Even when layering is examined, the assorted architectural documents (there is no single
architectural model such as ISO 7498, the OSI Reference Model) have fewer and less
rigidly defined layers than the commonly referenced OSI model, and thus provide an
easier fit for real-world protocols. This lack of emphasis on layering is a strong
difference between the IETF and OSI approaches. In fact, one frequently referenced
document does not contain a stack of layers at all; it refers only to the existence of the
"internetworking layer" and, generally, to "upper layers". That document was intended as
a 1996 "snapshot" of the architecture: "The Internet and its architecture have grown in
evolutionary fashion from modest beginnings, rather than from a Grand Plan. While this
process of evolution is one of the main reasons for the technology's success, it
nevertheless seems useful to record a snapshot of the current principles of the Internet
architecture."
No document officially specifies the model, which is another reason the emphasis on
layering is weak: different documents give the layers different names, and even show
different numbers of layers.
There are versions of this model with four layers and versions with five. RFC 1122 on
Host Requirements makes general reference to layering, but appeals to many other
architectural principles and does not emphasize it. It loosely defines a four-layer version,
with the layers having names, not numbers:

- Process Layer or Application Layer: this is where the "higher level" protocols
  such as SMTP, FTP, SSH, HTTP, etc. operate.
- Host-To-Host (Transport) Layer: this is where flow-control and connection
  protocols exist, such as TCP. This layer deals with opening and maintaining
  connections and ensuring that packets are in fact received.
- Internet or Internetworking Layer: this layer defines IP addresses, with many
  routing schemes for navigating packets from one IP address to another.
- Network Access Layer: this layer describes both the protocols used to mediate
  access to shared media (i.e., the OSI Data Link Layer) and the physical protocols
  and technologies necessary for communication from individual hosts to a
  medium.
The Internet protocol suite (and corresponding protocol stack), and its layering model,
were in use before the OSI model was established. Since then, the TCP/IP model has
been compared with the OSI model numerous times in books and classrooms, which
often results in confusion because the two models use different assumptions, including
about the relative importance of strict layering.
[edit] Layers in the TCP/IP model
[edit] Transport layer
The transport layer's responsibilities include end-to-end message transfer capabilities
independent of the underlying network, along with error control, fragmentation and flow
control. End to end message transmission or connecting applications at the transport layer
can be categorized as either:
1. connection-oriented, e.g. TCP
2. connectionless, e.g. UDP
The transport layer can be thought of literally as a transport mechanism, e.g. a vehicle
whose responsibility is to make sure that its contents (passengers/goods) reach their
destination safely and soundly, unless a higher or lower layer is responsible for safe
delivery.
The transport layer provides this service of connecting applications together through the
use of ports. Since IP provides only a best effort delivery, the transport layer is the first
layer of the TCP/IP stack to offer reliability. Note that IP can run over a reliable data link
protocol such as the High-Level Data Link Control (HDLC). Protocols above transport,
such as RPC, also can provide reliability.
For example, TCP is a connection-oriented protocol that addresses numerous reliability
issues to provide a reliable byte stream:

- data arrives in-order
- data has minimal error (i.e., correctness)
- duplicate data is discarded
- lost/discarded packets are resent
- traffic congestion control is included
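The in-order byte-stream guarantee can be observed even on a loopback connection: data written in several separate sends arrives at the receiver intact and in order. This sketch uses Python's standard socket and threading modules; the helper name and payload are illustrative.

```python
# Send a payload in several chunks over a local TCP connection and
# return exactly what the receiving side saw.
import socket
import threading

def tcp_stream_demo(payload_chunks):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    received = bytearray()

    def server():
        conn, _ = srv.accept()
        while chunk := conn.recv(4096):  # b"" signals end of stream
            received.extend(chunk)
        conn.close()

    t = threading.Thread(target=server)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    for piece in payload_chunks:         # three separate writes below
        cli.sendall(piece)
    cli.close()
    t.join()
    srv.close()
    return bytes(received)

print(tcp_stream_demo([b"hello ", b"wor", b"ld"]))  # b'hello world'
```

Note that TCP preserves the byte order but not the write boundaries: the receiver may see the three sends coalesced into one chunk.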
The newer SCTP is also a "reliable", connection-oriented, transport mechanism. It is
Message-stream-oriented — not byte-stream-oriented like TCP — and provides multiple
streams multiplexed over a single connection. It also provides multi-homing support, in
which a connection end can be represented by multiple IP addresses (representing
multiple physical interfaces), such that if one fails, the connection is not interrupted. It
was developed initially for telephony applications (to transport SS7 over IP), but can also
be used for other applications.
UDP is a connectionless datagram protocol. Like IP, it is a best effort or "unreliable"
protocol. Reliability is addressed through error detection using a weak checksum
algorithm. UDP is typically used for applications such as streaming media (audio, video,
Voice over IP, etc.) where timely arrival is more important than reliability, or for simple
query/response applications like DNS lookups, where the overhead of setting up a
reliable connection is disproportionately large. RTP is a datagram protocol that is
designed for real-time data such as streaming audio and video.
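The "weak checksum" UDP relies on is the 16-bit ones'-complement Internet checksum of RFC 1071, shared with the IPv4 header. A minimal sketch of the algorithm:

```python
def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit words (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # next big-endian word
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

# The worked example from RFC 1071:
print(hex(internet_checksum(b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7")))  # 0x220d
```

A receiver verifies by summing the data together with the transmitted checksum; a correct packet yields zero. The algorithm is "weak" in that it cannot detect reordered 16-bit words or certain compensating bit errors.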
TCP and UDP are used to carry an assortment of higher-level applications. The
appropriate transport protocol is chosen based on the higher-layer protocol application.
For example, the File Transfer Protocol expects a reliable connection, but the Network
File System assumes that the subordinate Remote Procedure Call protocol, not transport,
will guarantee reliable transfer. Other applications, such as VoIP, can tolerate some loss
of packets, but not the reordering or delay that could be caused by retransmission.
The applications at any given network address are distinguished by their TCP or UDP
port. By convention certain well known ports are associated with specific applications.
(See List of TCP and UDP port numbers.)
[edit] Network layer
As originally defined, the Network layer solves the problem of getting packets across a
single network. Examples of such protocols are X.25, and the ARPANET's Host/IMP
Protocol.
With the advent of the concept of internetworking, additional functionality was added to
this layer, namely getting data from the source network to the destination network. This
generally involves routing the packet across a network of networks, known as an
internetwork or (lower-case) internet.[8]
In the Internet protocol suite, IP performs the basic task of getting packets of data from
source to destination. IP can carry data for a number of different upper layer protocols;
these protocols are each identified by a unique protocol number: ICMP and IGMP are
protocols 1 and 2, respectively.
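The protocol number lives in a fixed position of the IPv4 header (byte 9), so demultiplexing the upper-layer protocol is a one-byte lookup. A sketch, with a small illustrative name table:

```python
# Map of a few well-known IP protocol numbers (ICMP=1, IGMP=2, TCP=6, UDP=17).
PROTOCOL_NAMES = {1: "ICMP", 2: "IGMP", 6: "TCP", 17: "UDP"}

def ipv4_protocol(packet: bytes) -> str:
    """Name the upper-layer protocol carried by a raw IPv4 packet."""
    if len(packet) < 20:                 # minimum IPv4 header length
        raise ValueError("truncated IPv4 header")
    proto = packet[9]                    # protocol field is byte 9
    return PROTOCOL_NAMES.get(proto, str(proto))

header = bytearray(20)
header[0] = 0x45                         # version 4, IHL 5 (20 bytes)
header[9] = 1                            # protocol = ICMP
print(ipv4_protocol(bytes(header)))      # prints ICMP
```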
Some of the protocols carried by IP, such as ICMP (used to transmit diagnostic
information about IP transmission) and IGMP (used to manage IP Multicast data) are
layered on top of IP but perform internetwork layer functions, illustrating an
incompatibility between the Internet protocol stack and the OSI model. All routing
protocols, such as OSPF and RIP, are also part of the network layer. What makes them
part of the network layer is that their payload is totally concerned with management of
the network layer. The particular encapsulation of that payload is irrelevant for layering
purposes.
[edit] Data link layer
The link layer is the method used to move packets from the network layer of one host to
that of another. It is not really part of the Internet protocol suite, because IP can run over
a variety of different link layers. The processes of transmitting packets on a given link
layer and receiving packets from it can be controlled in the software device driver for the
network card, as well as in firmware or by specialized chipsets. These perform data link
functions such as adding a packet header to prepare a packet for transmission, then
actually transmit the frame over a physical medium.
For Internet access over a dial-up modem, IP packets are usually transmitted using PPP.
For broadband Internet access such as ADSL or cable modems, PPPoE is often used. On
a local wired network, Ethernet is usually used, and on local wireless networks, IEEE
802.11 is usually used. For wide-area networks, either PPP over T-carrier or E-carrier
lines, Frame relay, ATM, or packet over SONET/SDH (POS) are often used.
The link layer can also be the layer where packets are intercepted to be sent over a virtual
private network. When this is done, the link layer data is considered the application data
and proceeds back down the IP stack for actual transmission. On the receiving end, the
data goes up the IP stack twice (once for routing and the second time for the VPN).
The link layer can also be considered to include the physical layer, which is made up of
the actual physical network components (hubs, repeaters, fiber optic cable, coaxial cable,
network cards, Host Bus Adapter cards and the associated network connectors: RJ-45,
BNC, etc), and the low level specifications for the signals (voltage levels, frequencies,
etc).
[Figure: IP suite stack showing the physical network connection of two hosts via two
routers and the corresponding layers used at each hop. The dotted line represents a
virtual connection.]
[Figure: Sample encapsulation of data within a UDP datagram within an IP packet.]
The layers near the top are logically closer to the user application (as opposed to the
human user), while those near the bottom are logically closer to the physical transmission
of the data. Viewing layers as providing or consuming a service is a method of
abstraction to isolate upper layer protocols from the nitty gritty detail of transmitting bits
over, say, Ethernet and collision detection, while the lower layers avoid having to know
the details of each and every application and its protocol.
This abstraction also allows upper layers to provide services that the lower layers cannot,
or choose not to, provide. Again, the original OSI Reference Model was extended to
include connectionless services (OSIRM CL).[3] For example, IP is not designed to be
reliable and is a best effort delivery protocol. This means that all transport layers must
choose whether or not to provide reliability and to what degree. UDP provides data
integrity (via a checksum) but does not guarantee delivery; TCP provides both data
integrity and delivery guarantee (by retransmitting until the receiver receives the packet).
This model lacks the formalism of the OSI Reference Model and associated documents,
but the IETF does not use a formal model and does not consider this a limitation, as in the
comment by David D. Clark, "We don't believe in kings, presidents, or voting. We
believe in rough consensus and running code." Criticisms of this model, which have been
made with respect to the OSI Reference Model, often do not consider ISO's later
extensions to that model.
1. For multiaccess links with their own addressing systems (e.g. Ethernet) an address
mapping protocol is needed. Such protocols can be considered to be below IP but
above the existing link system. While the IETF does not use the terminology, this
is a subnetwork dependent convergence facility according to an extension to the
OSI model, the Internal Organization of the Network Layer (IONL) [4].
2. ICMP & IGMP operate on top of IP but do not transport data like UDP or TCP.
Again, this functionality exists as layer management extensions to the OSI model,
in its Management Framework (OSIRM MF) [5]
3. The SSL/TLS library operates above the transport layer (utilizes TCP) but below
application protocols. Again, there was no intention, on the part of the designers
of these protocols, to comply with OSI architecture.
4. The link is treated like a black box here. This is fine for discussing IP (since the
whole point of IP is it will run over virtually anything). The IETF explicitly does
not intend to discuss transmission systems, which is a less academic but practical
alternative to the OSI Reference Model.
[edit] OSI and TCP/IP Layering Differences
The three top layers in the OSI model - the application layer, the presentation layer and
the session layer - usually are lumped into one layer in the TCP/IP model. While some
pure OSI protocol applications, such as X.400, also lumped them together, there is no
requirement that a TCP/IP protocol stack needs to be monolithic above the transport
layer. For example, the Network File System (NFS) application protocol runs over the
eXternal Data Representation (XDR) presentation protocol, which, in turn, runs over a
protocol with session layer functionality, Remote Procedure Call (RPC). RPC provides
reliable record transmission, so it can run safely over the best-effort User Datagram
Protocol (UDP) transport.
The session layer roughly corresponds to the Telnet virtual terminal functionality, which
is part of text-based TCP/IP application layer protocols such as HTTP and SMTP. It also
corresponds to TCP and UDP port numbering, which is considered part of the transport
layer in the TCP/IP model. The presentation layer has similarities to the MIME standard,
which is likewise used in HTTP and SMTP.
Since the IETF protocol development effort is not concerned with strict layering, some of
its protocols may not appear to fit cleanly into the OSI model. These conflicts, however,
are more frequent when one only looks at the original OSI model, ISO 7498, without
looking at the annexes to this model (e.g., ISO 7498/4 Management Framework), or the
ISO 8648 Internal Organization of the Network Layer (IONL). When the IONL and
Management Framework documents are considered, the ICMP and IGMP are neatly
defined as layer management protocols for the network layer. In like manner, the IONL
provides a structure for "subnetwork dependent convergence facilities" such as ARP and
RARP.
IETF protocols can be applied recursively, as demonstrated by tunneling protocols such
as Generic Routing Encapsulation (GRE). While basic OSI documents do not consider
tunneling, there is some concept of tunneling in yet another extension to the OSI
architecture, specifically the transport layer gateways within the International
Standardized Profile framework [6]. The associated OSI development effort, however, has
been abandoned given the real-world adoption of TCP/IP protocols.
7 Application   ECHO, ENRP, FTP, Gopher, HTTP, NFS, RTSP, SIP, SMTP, SNMP,
                SSH, Telnet, Whois, XMPP
6 Presentation  XDR, ASN.1, SMB, AFP, NCP
5 Session       ASAP, TLS, SSL, ISO 8327 / CCITT X.225, RPC, NetBIOS, ASP
4 Transport     TCP, UDP, RTP, SCTP, SPX, ATP, IL
3 Network       IP, ICMP, IGMP, IPX, OSPF, RIP, IGRP, EIGRP, ARP, RARP, X.25
2 Data Link     Ethernet, Token Ring, HDLC, Frame Relay, ISDN, ATM, 802.11
                WiFi, FDDI, PPP
1 Physical      10BASE-T, 100BASE-T, 1000BASE-T, SONET/SDH, G.709,
                T-carrier/E-carrier, various 802.11 physical layers
Dynamic Host Configuration Protocol
Dynamic Host Configuration Protocol (DHCP) is a protocol used by networked
devices (clients) to obtain various parameters necessary for the clients to operate in an
Internet Protocol (IP) network. By using this protocol, system administration workload
greatly decreases, and devices can be added to the network with minimal or no manual
configurations.
Contents
1 Applicability
2 History
3 Basic Protocol Operation
4 Security
5 IP address allocation
6 DHCP and firewalls
o 6.1 Example in ipfw firewall
o 6.2 Example in Cisco IOS Extended ACL
7 Technical details
o 7.1 DHCP discovery
o 7.2 DHCP offers
o 7.3 DHCP requests
o 7.4 DHCP acknowledgement
o 7.5 DHCP information
o 7.6 DHCP releasing
o 7.7 Client configuration parameters
o 7.8 Options
8 See also
9 References
10 External links
[edit] Applicability
Dynamic Host Configuration Protocol is a way to administer network parameter
assignment from a single DHCP server, or a group of such servers arranged in a
fault-tolerant manner. Even in a network which has only a few machines, Dynamic Host
Configuration Protocol is useful because a machine can be added to the local network
with little effort.
Even for servers whose addresses rarely change, DHCP is recommended for setting their
addresses, so that if the servers need to be readdressed (RFC 2071), the changes need to be
made in as few places as possible. For devices, such as routers and firewalls, that should
not use DHCP, it can be useful to put Trivial File Transfer Protocol (TFTP) or SSH
servers on the same machine that runs DHCP, again to centralize administration.
DHCP is also useful for directly assigning addresses to servers and desktop machines,
and, through a Point-to-Point Protocol (PPP) proxy, for dialup and broadband on-demand
hosts, as well as for residential Network address translation (NAT) gateways and routers.
DHCP is usually not appropriate for infrastructure such as non-edge routers and DNS
servers.
[edit] History
DHCP emerged as a standard protocol in October 1993 as defined in RFC 1531,
succeeding the older BOOTP protocol. The current DHCP definition can be found in
RFC 2131, released in 1997, while a proposed standard for DHCP over IPv6 (DHCPv6)
can be found in RFC 3315.
[edit] Basic Protocol Operation
The Dynamic Host Configuration Protocol (DHCP) automates the assignment of IP
addresses, subnet masks, default gateway, and other IP parameters. [1]
When a DHCP-configured client (be it a computer or any other network aware device)
connects to a network, the DHCP client sends a broadcast query requesting necessary
information from a DHCP server. The DHCP server manages a pool of IP addresses and
information about client configuration parameters such as the default gateway, the
domain name, the DNS servers, other servers such as time servers, and so forth. Upon
receipt of a valid request the server will assign the computer an IP address, a lease (the
length of time for which the allocation is valid), and other IP configuration parameters,
such as the subnet mask and the default gateway. The query is typically initiated
immediately after booting and must be completed before the client can initiate IP-based
communication with other hosts.
DHCP provides three modes for allocating IP addresses. The best-known mode is
dynamic, in which the client is provided a "lease" on an IP address for a period of time.
Depending on the stability of the network, this could range from hours (a wireless
network at an airport) to months (for desktops in a wired lab). At any time before the
lease expires, the DHCP client can request renewal of the lease on the current IP address.
A properly-functioning client will use the renewal mechanism to maintain the same IP
address throughout its connection to a single network, otherwise it may risk losing its
lease while still connected, thus disrupting network connectivity while it renegotiates
with the server for its original or a new IP address.
The two other modes for allocation of IP addresses are automatic (also known as DHCP
Reservation), in which the address is permanently assigned to a client, and manual, in
which the address is selected by the client (manually by the user or any other means) and
the DHCP protocol messages are used to inform the server that the address has been
allocated.
The automatic and manual methods are generally used when finer-grained control over IP
address is required (typical of tight firewall setups), although typically a firewall will
allow access to the range of IP addresses that can be dynamically allocated by the DHCP
server.
The four-message exchange used in dynamic allocation is commonly known by the
mnemonic DORA: the client broadcasts a Discover message, a server answers with an
Offer, the client replies with a Request for the offered address, and the server completes
the exchange with an Acknowledgement.
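The basic protocol operation can be sketched as a toy client/server dialogue. This models only the message sequence and lease bookkeeping, not the real BOOTP-format UDP packets; the class name, addresses, and simplified lease logic are illustrative assumptions.

```python
# Toy model of the DHCP handshake: DHCPDISCOVER, DHCPOFFER,
# DHCPREQUEST, DHCPACK (message names from RFC 2131).
class ToyDhcpServer:
    def __init__(self, pool):
        self.free = list(pool)       # addresses available for lease
        self.leases = {}             # MAC address -> leased IP

    def handle(self, msg, mac, requested=None):
        if msg == "DHCPDISCOVER":
            return ("DHCPOFFER", self.free[0])     # offer a free address
        if msg == "DHCPREQUEST" and requested in self.free:
            self.free.remove(requested)            # commit the lease
            self.leases[mac] = requested
            return ("DHCPACK", requested)
        return ("DHCPNAK", None)                   # refuse anything else

server = ToyDhcpServer(["192.168.2.3", "192.168.2.4"])
_, offered = server.handle("DHCPDISCOVER", "00-13-20-D8-2A-AE")
reply, addr = server.handle("DHCPREQUEST", "00-13-20-D8-2A-AE", offered)
print(reply, addr)   # DHCPACK 192.168.2.3
```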
[edit] Security
Due to its standardization before Internet security became an issue, the basic DHCP
protocol does not include any security provisions, potentially exposing it to two types of
attacks:[2]

- Unauthorized DHCP servers: because a client cannot specify or verify the server
  it wants, an unauthorized server can respond to client requests, sending the client
  network configuration values that benefit a hijacker. As an example, an attacker
  can configure a rogue DHCP server to point clients at a DNS server which has
  been poisoned.
- Unauthorized DHCP clients: by masquerading as a legitimate client, an
  unauthorized client can gain access to network configuration and an IP address
  on a network it should otherwise not be allowed to use. Also, by flooding the
  DHCP server with requests for IP addresses, an attacker can exhaust the pool of
  available IP addresses, disrupting normal network activity (a denial of service
  attack).
To combat these threats RFC 3118 ("Authentication for DHCP Messages") introduced
authentication information into DHCP messages allowing clients and servers to reject
information from invalid sources. Although support for this protocol is widespread, a
large number of clients and servers still do not fully support authentication, thus forcing
servers to support clients that do not support this feature. As a result, other security
measures are usually implemented around the DHCP server (such as IPsec) to ensure that
only authenticated clients and servers are granted access to the network.
Wherever possible, DHCP-assigned addresses should be dynamically linked to a secure
DNS server, to allow troubleshooting by name rather than by a potentially unknown
address. Effective DHCP-DNS linkage requires a file of either MAC addresses or local
names, sent to DNS, that uniquely identifies physical hosts. The DHCP server ensures
that all IP addresses are unique, i.e., no IP address is assigned to a second client while
the first client's assignment is valid (its lease has not expired). Thus IP address pool
management is done by the server and not by a network administrator.
[edit] IP address allocation
Depending on implementation, the DHCP server has three methods of allocating IP
addresses (note that the terminology below differs from that used in the Basic Protocol
Operation section above):

- dynamic allocation: a network administrator assigns a range of IP addresses to
  DHCP, and each client computer on the LAN has its IP software configured to
  request an IP address from the DHCP server during network initialization. The
  request-and-grant process uses a lease concept with a controllable time period,
  allowing the DHCP server to reclaim (and then reallocate) IP addresses that are
  not renewed (dynamic re-use of IP addresses).
- automatic allocation: the DHCP server permanently assigns a free IP address to
  a requesting client from the range defined by the administrator.
- manual allocation: the DHCP server allocates an IP address based on a table of
  MAC address/IP address pairs filled in manually by the server administrator.
  Only requesting clients with a MAC address listed in this table will be allocated
  an IP address.
Some DHCP server software can manage hosts by more than one of the above methods.
For example, the known hosts on the network can be assigned an IP address based on
their MAC address (manual allocation) whereas "guest" computers (such as laptops via
WiFi) are allocated a temporary address out of a pool compatible with the network to
which they're attached (dynamic allocation).
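The mixed manual/dynamic policy just described amounts to a simple lookup order: consult the reservation table first, then fall back to the pool. A sketch, where the reservation table and pool addresses are invented for illustration:

```python
# Known MACs get their reserved address (manual allocation);
# anyone else draws from the dynamic pool.
RESERVED = {"00-13-20-D8-2A-AE": "192.168.2.3"}    # manual allocation table
POOL = ["192.168.2.100", "192.168.2.101"]           # dynamic range

def allocate(mac, pool=POOL):
    if mac in RESERVED:
        return RESERVED[mac]       # manual: fixed address for this MAC
    if pool:
        return pool.pop(0)         # dynamic: lease the next free address
    raise RuntimeError("address pool exhausted")

print(allocate("00-13-20-D8-2A-AE"))   # 192.168.2.3 (reserved)
print(allocate("00-AA-BB-CC-DD-EE"))   # 192.168.2.100 (from the pool)
```

A real server would also track lease expiry so that unrenewed pool addresses return to the free list.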
[edit] DHCP and firewalls
Firewalls usually have to permit DHCP traffic explicitly. Specification of the DHCP
client-server protocol describes several cases when packets must have the source address
of 0x00000000 or the destination address of 0xffffffff. Anti-spoofing policy rules and
tight inclusive firewalls often stop such packets. Multi-homed DHCP servers require
special consideration and further complicate configuration.
To allow DHCP, network administrators need to allow several types of packets through
the server-side firewall. All DHCP packets travel as UDP datagrams; all client-sent
packets have source port 68 and destination port 67; all server-sent packets have source
port 67 and destination port 68. For example, a server-side firewall should allow the
following types of packets:

- incoming packets from 0.0.0.0 or dhcp-pool to dhcp-ip
- incoming packets from any address to 255.255.255.255
- outgoing packets from dhcp-ip to dhcp-pool or 255.255.255.255
where dhcp-ip represents any address configured on a DHCP server host and dhcp-pool
stands for the pool from which a DHCP server assigns addresses to clients.
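The three rule patterns above can be captured in a single predicate. This is a hedged sketch, not real firewall syntax: the server address and pool are assumed placeholders standing in for dhcp-ip and dhcp-pool.

```python
# Decide whether a UDP packet matches the server-side DHCP firewall rules.
import ipaddress

DHCP_IP = {"192.168.2.1"}                        # dhcp-ip (assumed)
DHCP_POOL = ipaddress.ip_network("192.168.2.0/24")  # dhcp-pool (assumed)

def allowed(direction, src, dst, sport, dport):
    if (sport, dport) not in {(68, 67), (67, 68)}:
        return False                   # all DHCP traffic uses UDP ports 67/68
    if direction == "in":
        if dst == "255.255.255.255":
            return True                # broadcasts from any client
        src_ok = src == "0.0.0.0" or ipaddress.ip_address(src) in DHCP_POOL
        return src_ok and dst in DHCP_IP
    if direction == "out":
        dst_ok = (dst == "255.255.255.255"
                  or ipaddress.ip_address(dst) in DHCP_POOL)
        return src in DHCP_IP and dst_ok
    return False

print(allowed("in", "0.0.0.0", "192.168.2.1", 68, 67))  # True
```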
How DNS works in theory
[Figure: Domain names, arranged in a tree, cut into zones, each served by a nameserver.]
[edit] The domain name space
The domain name space consists of a tree of domain names. Each node or leaf in the tree
has one or more resource records, which hold information associated with the domain
name. The tree sub-divides into zones. A zone consists of a collection of connected nodes
authoritatively served by an authoritative DNS nameserver. (Note that a single
nameserver can host several zones.)
When a system administrator wants to let another administrator control a part of the
domain name space within his zone of authority, he can delegate control to the other
administrator. This splits a part of the old zone off into a new zone, which comes under
the authority of the second administrator's nameservers. The old zone ceases to be
authoritative for what goes under the authority of the new zone.
[edit] Parts of a domain name
A domain name usually consists of two or more parts (technically labels), separated by
dots: for example, example.com.

- The rightmost label conveys the top-level domain (for example, the address
  www.example.com has the top-level domain com).
- Each label to the left specifies a subdivision, or subdomain, of the domain above
  it. Note that "subdomain" expresses relative dependence, not absolute
  dependence. For example, example.com comprises a subdomain of the com
  domain, and www.example.com comprises a subdomain of the domain
  example.com. In theory, this subdivision can go down to 127 levels deep. Each
  label can contain up to 63 characters, and the whole domain name may not
  exceed a total length of 255 characters. In practice, some domain registries
  impose shorter limits.
- A hostname refers to a domain name that has one or more associated IP
  addresses; e.g., the 'www.example.com' and 'example.com' domains are both
  hostnames, but the 'com' domain is not.
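The length limits just quoted (labels of 1-63 characters, at most 127 levels, 255 characters total) are easy to check mechanically. A minimal sketch that validates only those limits, not the permitted character set:

```python
# Check a domain name against the DNS length limits quoted above.
def valid_domain_length(name: str) -> bool:
    labels = name.rstrip(".").split(".")   # tolerate a trailing root dot
    return (
        len(name) <= 255                   # whole-name limit
        and len(labels) <= 127             # depth limit
        and all(1 <= len(label) <= 63 for label in labels)
    )

print(valid_domain_length("www.example.com"))   # True
print(valid_domain_length("a" * 64 + ".com"))   # False (label too long)
```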
[edit] DNS servers
The Domain Name System consists of a hierarchical set of DNS servers. Each domain or
subdomain has one or more authoritative DNS servers that publish information about that
domain and the name servers of any domains "beneath" it. The hierarchy of authoritative
DNS servers matches the hierarchy of domains. At the top of the hierarchy stand the root
nameservers: the servers to query when looking up (resolving) a top-level domain name
(TLD).
[edit] DNS resolvers
A resolver looks up the resource record information associated with nodes. A resolver
knows how to communicate with name servers by sending DNS queries and heeding
DNS responses.
A DNS query may be either a recursive query or a non-recursive query:

- A non-recursive query is one where the DNS server may provide a partial answer
  to the query (or give an error). DNS servers must support non-recursive queries.
- A recursive query is one where the DNS server will fully answer the query (or
  give an error). DNS servers are not required to support recursive queries.
The resolver (or another DNS server acting recursively on behalf of the resolver)
negotiates use of recursive service using bits in the query headers.
Resolving usually entails iterating through several name servers to find the needed
information. However, some resolvers function simplistically and can only communicate
with a single name server. These simple resolvers rely on a recursive query to a recursive
name server to perform the work of finding information for them.
[edit] Address resolution mechanism
(This description deliberately uses the fictional .example TLD in accordance with
the DNS guidelines themselves.)
In theory, a full host name may have several name segments (e.g.
ahost.ofasubnet.ofabiggernet.inadomain.example). In practice, in the experience of the
majority of public users of Internet services, full host names frequently consist of
just three segments (ahost.inadomain.example, and most often
www.inadomain.example).
For querying purposes, software interprets the name segment by segment, from right to
left, using an iterative search procedure. At each step along the way, the program queries
a corresponding DNS server to provide a pointer to the next server which it should
consult.
A DNS recursor consults three nameservers to resolve the address www.wikipedia.org.
As originally envisaged, the process was as simple as:
1. the local system is pre-configured with the known addresses of the root servers in
a file of root hints, which the local administrator must update periodically from a
reliable source to keep up with the changes that occur over time.
2. query one of the root servers to find the server authoritative for the next level
down (in the case of our simple hostname, ask a root server for the address of a
server with detailed knowledge of the example top-level domain).
3. query this second server for the address of a DNS server with detailed
knowledge of the second-level domain (inadomain.example in our example).
4. repeat the previous step to progress down the name, until the final step, which
returns the final address sought rather than the address of the next DNS server.
The diagram illustrates this process for the real host www.wikipedia.org.
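The numbered steps above can be sketched with toy delegation data. All of the zone contents and the final address here are invented for illustration; a real resolver would send network queries instead of reading a dictionary.

```python
# Toy delegation data: each server maps a name suffix either to the
# next server to consult (a referral) or to the final address.
ZONES = {
    "root":                 {"example": "ns.example"},
    "ns.example":           {"inadomain.example": "ns.inadomain.example"},
    "ns.inadomain.example": {"www.inadomain.example": "192.0.2.10"},
}

def resolve(hostname: str) -> str:
    """Walk the delegation chain from the root, as in steps 1-4 above."""
    server = "root"                       # step 1: start from a root hint
    labels = hostname.split(".")
    # Query progressively longer suffixes, right to left (steps 2-4).
    for i in range(len(labels) - 1, -1, -1):
        suffix = ".".join(labels[i:])
        answer = ZONES[server].get(suffix)
        if answer is None:
            continue
        if i == 0:                        # final step: an address, not a referral
            return answer
        server = answer                   # referral to the next server down
    raise LookupError(hostname)

print(resolve("www.inadomain.example"))  # 192.0.2.10
```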
The mechanism in this simple form has a difficulty: it places a huge operating burden on
the root servers, since each and every search for an address starts by querying one of
them. Because the root servers are so critical to the overall function of the system, such
heavy use would create an insurmountable bottleneck for the trillions of queries placed
every day. The section DNS in practice describes how this is addressed.
[edit] Circular dependencies and glue records
Name servers in delegations appear listed by name, rather than by IP address. This means
that a resolving name server must issue another DNS request to find out the IP address of
the server to which it has been referred. Since this can introduce a circular dependency if
the nameserver referred to is under the domain for which it is authoritative, it is occasionally
necessary for the nameserver providing the delegation to also provide the IP address of
the next nameserver. This record is called a glue record.
For example, assume that the sub-domain en.wikipedia.org contains further sub-domains
(such as something.en.wikipedia.org) and that the authoritative nameserver for these
lives at ns1.something.en.wikipedia.org. A computer trying to resolve
something.en.wikipedia.org will thus first have to resolve
ns1.something.en.wikipedia.org. Since ns1 is also under the
something.en.wikipedia.org subdomain, resolving
ns1.something.en.wikipedia.org requires resolving something.en.wikipedia.org
which is exactly the circular dependency mentioned above. The dependency is broken by
the glue record in the nameserver of en.wikipedia.org that provides the IP address of
ns1.something.en.wikipedia.org directly to the requestor, enabling it to bootstrap the
process by figuring out where ns1.something.en.wikipedia.org is located.
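The role of the glue record can be sketched as follows. The delegation and glue data below mirror the example above, with an invented address; without the glue entry, the lookup would chase its own tail.

```python
# Toy zone data for en.wikipedia.org: the delegation names the
# nameserver for something.en.wikipedia.org by name only, and the
# glue record supplies that nameserver's (invented) address directly.
DELEGATION = {"something.en.wikipedia.org": "ns1.something.en.wikipedia.org"}
GLUE = {"ns1.something.en.wikipedia.org": "203.0.113.5"}

def find_nameserver_address(domain: str) -> str:
    """Return the address of the delegated nameserver for `domain`."""
    ns_name = DELEGATION[domain]
    if ns_name in GLUE:
        return GLUE[ns_name]             # the glue record breaks the cycle
    if ns_name.endswith("." + domain):
        # The nameserver's own name lies inside the zone being
        # delegated: the circular dependency described above.
        raise RuntimeError("circular dependency: no glue record")
    return find_nameserver_address(ns_name)

print(find_nameserver_address("something.en.wikipedia.org"))  # 203.0.113.5
```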
[edit] In practice
When an application (such as a web browser) tries to find the IP address of a domain
name, it doesn't necessarily follow all of the steps outlined in the Theory section above.
We will first look at the concept of caching, and then outline the operation of DNS in
"the real world."
[edit] Caching and time to live
Because of the huge volume of requests generated by a system like DNS, the designers
wished to provide a mechanism to reduce the load on individual DNS servers. To this
end, the DNS resolution process allows for caching (i.e. the local recording and
subsequent consultation of the results of a DNS query) for a given period of time after a
successful answer. How long a resolver caches a DNS response (i.e. how long a DNS
response remains valid) is determined by a value called the time to live (TTL). The TTL
is set by the administrator of the DNS server handing out the response. The period of
validity may vary from just seconds to days or even weeks.
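A minimal sketch of such a TTL-honouring cache, under the simplifying assumption that an answer is a single address, might look like this:

```python
import time

class DnsCache:
    """Cache DNS answers for the TTL supplied with each response."""
    def __init__(self):
        self._entries = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl_seconds):
        self._entries[name] = (address, time.time() + ttl_seconds)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None
        address, expires = entry
        if time.time() >= expires:   # TTL elapsed: answer no longer valid
            del self._entries[name]
            return None
        return address

cache = DnsCache()
cache.put("www.example.com", "192.0.2.1", ttl_seconds=3600)
print(cache.get("www.example.com"))  # 192.0.2.1
```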
[edit] Caching time
As a noteworthy consequence of this distributed and caching architecture, changes to
DNS do not always take effect immediately and globally. This is best explained with an
example: If an administrator has set a TTL of 6 hours for the host www.wikipedia.org,
and then changes the IP address to which www.wikipedia.org resolves at 12:01pm, the
administrator must consider that a person who cached a response with the old IP address
at 12:00pm will not consult the DNS server again until 6:00pm. The period between
12:01pm and 6:00pm in this example is called caching time, which is best defined as a
period of time that begins when you make a change to a DNS record and ends after the
maximum amount of time specified by the TTL expires. This essentially leads to an
important logistical consideration when making changes to DNS: not everyone is
necessarily seeing the same thing you're seeing. RFC 1537 helps to convey basic rules
for how to set the TTL.
Note that the term "propagation", although very widely used in this context, does not
describe the effects of caching well. Specifically, it implies that [1] when you make a
DNS change, it somehow spreads to all other DNS servers (instead, other DNS servers
check in with yours as needed), and [2] that you do not have control over the amount of
time the record is cached (you control the TTL values for all DNS records in your
domain, except your NS records and any authoritative DNS servers that use your domain
name).
Some resolvers may override TTL values, as the protocol supports caching for up to 68
years or no caching at all. Negative caching (caching the non-existence of records) is
determined by the name servers authoritative for a zone, which MUST include the Start
of Authority (SOA) record when reporting that no data of the requested type exists. The
lesser of the SOA record's MINIMUM field and the TTL of the SOA itself is used to
establish the TTL for the negative answer (RFC 2308).
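That rule from RFC 2308 amounts to a one-line computation; the field values below are arbitrary examples:

```python
def negative_cache_ttl(soa_minimum: int, soa_ttl: int) -> int:
    """TTL for a negative (no-such-data) answer per RFC 2308:
    the lesser of the SOA record's MINIMUM field and the SOA's own TTL."""
    return min(soa_minimum, soa_ttl)

print(negative_cache_ttl(soa_minimum=86400, soa_ttl=3600))  # 3600
```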
Many people incorrectly refer to a mysterious 48-hour or 72-hour propagation time when
you make a DNS change. When one changes the NS records for one's domain or the IP
addresses for hostnames of authoritative DNS servers using one's domain (if any), there
can be a lengthy period of time before all DNS servers use the new information. This is
because those records are handled by the zone parent DNS servers (for example, the .com
DNS servers if your domain is example.com), which typically cache those records for 48
hours. However, those DNS changes will be immediately available for any DNS servers
that do not have them cached. And any DNS changes on your domain other than the NS
records and authoritative DNS server names can be nearly instantaneous, if you choose
for them to be (by lowering the TTL once or twice ahead of time, and waiting until the
old TTL expires before making the change).
[edit] In the real world
DNS resolving from program to OS-resolver to ISP-resolver to greater system.
Users generally do not communicate directly with a DNS resolver. Instead, DNS
resolution takes place transparently in client applications such as web browsers, mail
clients, and other Internet applications. When an application makes a request that
requires a DNS lookup, such programs send a resolution request to the DNS resolver
in the local operating system, which in turn handles the communications required.
The DNS resolver will almost invariably have a cache (see above) containing recent
lookups. If the cache can provide the answer to the request, the resolver will return the
value in the cache to the program that made the request. If the cache does not contain the
answer, the resolver will send the request to one or more designated DNS servers. In the
case of most home users, the Internet service provider to which the machine connects will
usually supply this DNS server: such a user will either have configured that server's
address manually or allowed DHCP to set it; however, where systems administrators
have configured systems to use their own DNS servers, their DNS resolvers point to
separately maintained nameservers of the organization. In any event, the name server thus
queried will follow the process outlined above, until it either successfully finds a result or
does not. It then returns its results to the DNS resolver; assuming it has found a result, the
resolver duly caches that result for future use, and hands the result back to the software
which initiated the request.
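The cache-first behaviour described in this section can be sketched as below. The upstream query is injected as a callable standing in for the ISP's or organization's recursive server, and for simplicity the sketch ignores TTLs (see the caching section above):

```python
def lookup(name, cache, query_upstream):
    """Cache-first lookup: return a cached answer when available,
    otherwise ask the designated recursive server and cache the result.
    `query_upstream` stands in for the ISP or organizational nameserver."""
    if name in cache:
        return cache[name]
    cache[name] = query_upstream(name)
    return cache[name]

calls = []
def fake_upstream(name):
    calls.append(name)            # count queries actually sent upstream
    return "192.0.2.7"            # invented address for illustration

cache = {}
print(lookup("www.example.com", cache, fake_upstream))  # 192.0.2.7
print(lookup("www.example.com", cache, fake_upstream))  # 192.0.2.7 (cached)
print(len(calls))  # 1 -- the second lookup never left the machine
```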