Unit 5: The Network Layer

Topics
- Network layer design issues
- Routing algorithms
- Congestion control algorithms
- Quality of service
- X.25 architecture
- Frame Relay architecture

Network Layer Design Issues
- Store-and-forward packet switching
- Services provided to the transport layer
- Implementation of connectionless service
- Implementation of connection-oriented service
- Comparison of virtual-circuit and datagram subnets

Store-and-Forward Packet Switching
Figure: The environment of the network layer protocols.

Services Provided to the Transport Layer
The network layer services have been designed with the following goals in mind:
1. The services should be independent of the subnet technology.
2. The transport layer should be shielded from the number, type, and topology of the subnets present.
3. The network addresses made available to the transport layer should use a uniform numbering plan, even across LANs and WANs.
The network layer can be connectionless or connection-oriented. The Internet has a connectionless network layer, whereas ATM networks have a connection-oriented network layer. In either case, the network layer must provide a means to send packets from a node A to a node B.

Internal Organization of the Network Layer
- Virtual circuits, in analogy with the physical circuits set up by the telephone system
- Datagrams, in analogy with telegrams

Implementation of Connectionless Service
Figure: Routing within a datagram subnet.

Implementation of Connection-Oriented Service
Figure: Routing within a virtual-circuit subnet (each packet carries a connection identifier).

Comparison of Virtual-Circuit and Datagram Subnets
Figure: Comparison of virtual-circuit and datagram subnets.

Routing Algorithms
- The optimality principle
- Shortest path routing
- Flooding
- Distance vector routing
- Link state routing
- Hierarchical routing
- Broadcast routing
- Multicast routing
- Routing for mobile hosts
- Routing in ad hoc networks

A routing algorithm determines the routes and maintains the routing tables. Desired properties of a routing algorithm:
1. correctness
2. simplicity
3. robustness with respect to failures and changing conditions
4. stability of the routing decisions
5. fairness of the resource allocation
6. optimality of the packet travel times
Fairness and optimality are often contradictory goals.

What is it that we seek to optimize? Minimizing mean packet delay is an obvious candidate, but so is maximizing total network throughput. These two goals are in conflict, since operating any queuing system near capacity implies long queuing delays. As a compromise, many networks attempt to minimize the number of hops a packet must make, because reducing the number of hops tends to improve the delay and also reduces the amount of bandwidth consumed, which tends to improve the throughput as well.

Non-adaptive (static) algorithms: the routing decision is not based on estimates of the current traffic and topology; the routes are computed in advance, offline, and downloaded to the routers.
Adaptive (dynamic) algorithms: routing decisions change in response to changes in topology or traffic.

The Optimality Principle
The optimality principle states that if router J is on the optimal path from router I to router K, then the routes from I to J and from J to K are also optimal.
As a direct consequence of the optimality principle, we can see that the set of optimal routes from all sources to a given destination forms a tree rooted at the destination. Such a tree is called a sink tree.
Figure: (a) A subnet. (b) A sink tree for router B.
A sink tree does not contain any loops, so each packet will be delivered within a finite and bounded number of hops. Links and routers can go down and come back up during operation, so different routers may have different ideas about the current topology. One issue is whether each router has to individually acquire the information on which to base its sink tree computation, or whether this information is collected by some other means.

Shortest Path Routing (Dijkstra's algorithm)
Let the node we are starting from be called the initial node, and let the distance of a node Y be the distance from the initial node to it. Dijkstra's algorithm assigns some initial distance values and improves them step by step:
1. Assign to every node a distance value: zero for the initial node and infinity for all other nodes.
2. Mark all nodes as unvisited. Set the initial node as current.
3. For the current node, consider all its unvisited neighbours and calculate their distance from the initial node. For example, if the current node A has a distance of 6, and the edge connecting it to another node B has length 2, the distance to B through A is 6 + 2 = 8. If this distance is less than the previously recorded distance (infinity in the beginning, zero for the initial node), overwrite the distance.
4. When we are done considering all neighbours of the current node, mark it as visited. A visited node will never be checked again; its recorded distance is final and minimal.
5. Set the unvisited node with the smallest distance (from the initial node) as the next current node and continue from step 3.
Figure: The first 5 steps used in computing the shortest path from A to D. The arrows indicate the working node.

Flooding
Each router transmits a copy of every packet it receives on every one of its transmission links.
Advantages: robust, simple, supports broadcasting and discovery.
Disadvantage: uses too many resources.
How to limit the flooding: (1) a hop count, (2) a time stamp.
A variation of flooding that is slightly more practical is selective flooding: routers do not send every incoming packet out on every line, only on those lines that are going approximately in the right direction.
Applications of flooding: military applications, updating many databases simultaneously, wireless networks.

Distance Vector Routing
Distance vector routing algorithms operate by having each router maintain a table (i.e., a vector) giving the best known distance to each destination and which line to use to get there. These tables are updated by exchanging information with the neighbors.
Example routing table for router A:
  Destination | Cost (delay, distance, ...) | Via
  B           | 10                          | B
  C           | 12                          | B
  D           | ...                         | ...
It was the original ARPANET routing algorithm and was also used in the Internet under the name RIP (Routing Information Protocol); AppleTalk and Cisco routers use improved distance vector protocols. It is also known as the Bellman-Ford algorithm and the Ford-Fulkerson algorithm. Once every T msec each router sends each neighbor a list of its estimated delays to each destination, and receives a similar list from each neighbor.
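The exchange just described can be captured in a few lines. Below is a minimal, illustrative sketch (not taken from the slides) of how a router might merge the delay vector received from one neighbor into its own table; the table layout (destination -> (cost, via)) and the function name are assumptions made for illustration.

```python
# Illustrative distance vector update (Bellman-Ford style), not from the slides.
# A routing table maps destination -> (estimated cost, outgoing line "via").

def merge_neighbor_vector(table, neighbor, cost_to_neighbor, neighbor_vector):
    """Merge the delay vector advertised by one neighbor into our table."""
    for dest, advertised in neighbor_vector.items():
        candidate = cost_to_neighbor + advertised           # cost of going via this neighbor
        current = table.get(dest, (float("inf"), None))[0]  # best known cost so far
        if candidate < current:
            table[dest] = (candidate, neighbor)             # prefer the cheaper route
    return table

# Example: router A hears from neighbor B, which is 8 msec away.
table_A = {"B": (8, "B")}
merge_neighbor_vector(table_A, "B", 8, {"C": 4, "D": 12})
print(table_A)   # {'B': (8, 'B'), 'C': (12, 'B'), 'D': (20, 'B')}
```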
Distance Vector Routing
Each router maintains a routing table with one entry for each router in the subnet. Each entry has two parts: the first part gives the preferred outgoing line to use to reach the destination, and the second part gives an estimate of the time or distance to that destination. The metric used can be the number of hops, the time delay, the number of packets queued, etc.
Figure: (a) A subnet. (b) Input from A, I, H, K, and the new routing table for J.

The count-to-infinity problem
Figure: A is initially down; then A comes up. The good news spreads quickly.
Figure: A is initially up; then A goes down. The bad news travels slowly.
Bad news travels slowly: no router ever has a value more than one higher than the minimum of all its neighbors. Gradually, all the routers work their way up to infinity, but the number of exchanges required depends on the numerical value used for infinity. For this reason, it is wise to set infinity to the longest path plus 1 (if hop count is the metric). If the metric is time delay, there is no well-defined upper bound, so a high value is needed to prevent a path with a long delay from being treated as down.

Link State Routing
Each router must do the following:
1. Discover its neighbors and learn their network addresses.
2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router.
Link state routing differs significantly from distance vector routing. With link state algorithms, routers share only the identity of their neighbors, but they flood this information through the entire network. Distance vector algorithms adopt the opposite approach: routers periodically share their knowledge of the entire network, but only with their neighbors.

Learning about neighbors
When a router is booted, its first task is to learn who its neighbors are. It accomplishes this goal by sending a special HELLO packet on each point-to-point line; the router on the other end is expected to send back a reply telling who it is. When two or more routers are connected by a LAN, the situation is slightly more complicated. One way to model the LAN is to consider it as a node itself.
Figure: Learning about neighbors (a LAN modeled as a node).

Measuring line cost
Cost is measured by sending a special ECHO packet that the other side is required to send back immediately. Measuring the round-trip time and dividing it by two gives an estimate of the delay. An interesting issue is whether or not to take the load into account when measuring the delay. To factor the load in, the round-trip timer must be started when the ECHO packet is queued; to ignore the load, the timer should be started when the ECHO packet reaches the front of the queue. Including the queuing cost lets traffic use the best line, but it may lead to oscillating routing tables (e.g., between two links of the same bandwidth).

Building link state packets
Building the link state packets is easy (a small sketch of such a packet follows the list below). The hard part is determining when to build them:
1. periodically, or
2. when some significant event occurs, such as a line or neighbor going down or coming back up.
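As a rough illustration of step 3 above, a link state packet mainly carries the sender's identity, a sequence number, an age, and the list of (neighbor, cost) pairs just measured. The structure below is a simplified sketch; the field names and the dataclass layout are assumptions, not a wire format.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class LinkStatePacket:
    # Simplified sketch of a link state packet; field names are illustrative only.
    source: str                   # identity of the router that built the packet
    seq: int                      # sequence number, incremented for each new packet
    age: int                      # decremented once per second; 0 means "discard"
    neighbors: Dict[str, int] = field(default_factory=dict)  # neighbor -> measured cost

def build_lsp(router_id: str, seq: int, measured_costs: Dict[str, int],
              max_age: int = 60) -> LinkStatePacket:
    """Package everything this router has just learned about its neighbors."""
    return LinkStatePacket(source=router_id, seq=seq, age=max_age,
                           neighbors=dict(measured_costs))

# Example: router B has measured its lines to A, C and F.
lsp = build_lsp("B", seq=21, measured_costs={"A": 4, "C": 2, "F": 6})
print(lsp)
```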
Distributing the link state packets
The trickiest part of the algorithm is distributing the link state packets reliably. As the packets are distributed and installed, the routers getting the first ones will change their routes, so different routers may be using different versions of the topology, which can lead to inconsistencies, loops, unreachable machines, and other problems.
The fundamental idea is to use flooding to distribute the link state packets. To keep the flood in check, each packet contains a sequence number that is incremented for each new packet sent. Routers keep track of all the (source router, sequence) pairs they see. When a new link state packet comes in, it is checked against the list of packets already seen:
1. if it is new, forward it on all lines except the one it arrived on;
2. if it is a duplicate or an old packet, discard it.
Problems: if the sequence numbers wrap around, confusion will reign. The solution is to use a 32-bit sequence number; with one link state packet per second, it would take 137 years to wrap around. Also, if a router crashes, it loses track of its sequence number.
The solution to these problems is to include the age of each packet after the sequence number and decrement it once per second. When the age hits zero, the information from that router is discarded. The age field is also decremented by each router during the initial flooding process, to make sure no packet can get lost and live for an indefinite period of time. To guard against errors on the router-router lines, all link state packets are acknowledged.
Figure: The packet buffer for router B.

Computing the new routes
Once a router has accumulated a full set of link state packets, it can construct the entire subnet graph because every link is represented. Dijkstra's algorithm can then be run locally to construct the shortest path to all possible destinations. The OSPF (Open Shortest Path First) protocol uses the link state routing algorithm.

Hierarchical Routing
Example: sending a packet from the U.S. to csie.ndhu.edu.tw:
1. first send it to the domain tw (Taiwan),
2. then to the subdomain MOE (edu),
3. then to the subsubdomain ndhu,
4. then to the host csie.
Advantage: simple, efficient, and saves routing table space. In a network of 1,000 routers, each routing table needs 999 entries; if the network is divided into 10 domains, each table needs only 99 + 9 = 108 entries.
Figure: Hierarchical routing.
Unfortunately, the gains in routing table space are not free. There is a penalty to be paid, in the form of increased path length. For example, the best route from 1A to 5C is via region 2, but with hierarchical routing all traffic to region 5 goes via region 3, because that is better for most destinations in region 5. When a single network becomes very large, an interesting question is: how many levels should the hierarchy have? Answer: the optimal number of levels for an N-router subnet is ln N.

Broadcast Routing
One broadcasting method that requires no special features from the subnet is for the source simply to send a distinct packet to each destination. This wastes bandwidth and requires the source to have a complete list of all destinations. Flooding is another obvious candidate, but it generates too many packets and consumes too much bandwidth.

Multidestination routing
Each packet contains either a list of destinations or a bit map indicating the desired destinations. When a packet arrives at a router, the router checks all the destinations to determine the set of output lines that will be needed. It generates a new copy of the packet for each output line to be used and includes in each packet only those destinations that are to use that line. In effect, the destination set is partitioned among the output lines.
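The partitioning step just described is easy to picture in code. The sketch below is illustrative only: it assumes a routing table mapping each destination to an output line and simply groups the packet's destination list per line.

```python
from collections import defaultdict

def split_multidestination(destinations, routing_table):
    """Partition a packet's destination list among the output lines.

    destinations:  list of destination routers named in the packet
    routing_table: destination -> output line (as chosen by unicast routing)
    Returns one destination sublist per output line; a copy of the packet
    would be sent on each line, carrying only that sublist.
    """
    per_line = defaultdict(list)
    for dest in destinations:
        per_line[routing_table[dest]].append(dest)
    return dict(per_line)

# Example: a packet for D, E, F arrives; D and E are reached via line 1, F via line 2.
print(split_multidestination(["D", "E", "F"], {"D": 1, "E": 1, "F": 2}))
# {1: ['D', 'E'], 2: ['F']}
```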
A fourth broadcast algorithm makes explicit use of the sink tree for the router initiating the broadcast, or of any other convenient spanning tree. This method makes excellent use of bandwidth, generating the absolute minimum number of packets necessary to do the job. The only problem is that each router must have knowledge of some spanning tree for it to be applicable.

Reverse path forwarding
When a broadcast packet arrives at a router, the router checks to see whether the packet arrived on the line that is normally used for sending packets to the source of the broadcast. If so, it forwards the packet; otherwise the packet is discarded as a likely duplicate.
Figure: Reverse path forwarding. (a) A subnet. (b) A sink tree. (c) The tree built by reverse path forwarding.

Multicast Routing
To do multicasting, group management is required: some way is needed to create and destroy groups, and for processes to join and leave groups. It is important that routers know which of their hosts belong to which groups. Either hosts must inform their routers about changes in group membership, or routers must query their hosts periodically. Either way, routers learn which of their hosts are in which groups. Routers tell their neighbors, so the information propagates through the subnet.
To do multicast routing, each router computes a spanning tree covering all other routers in the subnet.
Figure: (a) A network. (b) A spanning tree for the leftmost router. (c) A multicast tree for group 1. (d) A multicast tree for group 2.
Various ways of pruning the spanning tree are possible. The simplest one can be used if link state routing is used and each router is aware of the complete subnet topology, including which hosts belong to which groups. Then the spanning tree can be pruned by starting at the end of each path and working toward the root, removing all routers that do not belong to the group in question (a small sketch of this pruning step follows below).
With distance vector routing, whenever a router with no hosts interested in a particular group and no connections to other interested routers receives a multicast message for that group, it responds with a PRUNE message, telling the sender not to send it any more multicasts for that group.
- Source-specific multicast trees scale poorly to large networks: with n groups of m members each, a total of nm trees must be stored.
- The core-based tree approach uses only one multicast tree per group: n groups need only n trees.
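The link-state pruning rule above can be sketched as repeatedly trimming leaf routers that neither have group members nor lead to any. This is an illustrative sketch only; the tree representation (a parent-to-children dictionary) is an assumption.

```python
def prune_for_group(children, members):
    """Prune a spanning tree for one multicast group.

    children: router -> list of child routers in the spanning tree
    members:  set of routers that have group members attached
    Leaves with no group members are removed repeatedly, working from the
    ends of the paths toward the root, and the pruned tree is returned.
    """
    tree = {r: list(c) for r, c in children.items()}
    changed = True
    while changed:
        changed = False
        for router in list(tree):
            if not tree[router] and router not in members:
                # A leaf with no members: remove it and detach it from its parent.
                del tree[router]
                for c in tree.values():
                    if router in c:
                        c.remove(router)
                changed = True
    return tree

# Example: root A -> B, C; B -> D; only C has group members, so D and then B are pruned.
print(prune_for_group({"A": ["B", "C"], "B": ["D"], "C": [], "D": []}, {"C"}))
# {'A': ['C'], 'C': []}
```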
Routing for Mobile Hosts
Figure: A WAN to which LANs, MANs, and wireless cells are attached.
When a new user enters an area, either by connecting to it or just by wandering into the cell, his computer must register itself with the foreign agent there. The registration procedure typically works like this:
1. Periodically, each foreign agent broadcasts a packet announcing its existence and address. A newly arrived mobile host may wait for one of these messages, but if none arrives quickly enough, the mobile host can broadcast a packet asking: "Are there any foreign agents around?"
2. The mobile host registers with the foreign agent, giving its home address, current data link layer address, and some security information.
3. The foreign agent contacts the mobile host's home agent and says: "One of your hosts is over here." The message from the foreign agent to the home agent contains the foreign agent's network address. It also includes the security information, to convince the home agent that the mobile host is really there.
4. The home agent examines the security information, which contains a time stamp, to prove that it was generated within the past few seconds. If it is happy, it tells the foreign agent to proceed.
5. When the foreign agent gets the acknowledgement from the home agent, it makes an entry in its tables and informs the mobile host that it is now registered.
Ideally, when a user leaves an area, that, too, should be announced to allow deregistration, but many users abruptly turn off their computers when done.
Figure: Packet routing for mobile hosts.

Routing in Ad Hoc Networks
Possibilities when the routers themselves are mobile:
- military vehicles on a battlefield, with no infrastructure;
- a fleet of ships at sea, all moving all the time;
- emergency workers at an earthquake site where the infrastructure has been destroyed;
- a gathering of people with notebook computers in an area lacking 802.11.
Route discovery
Figure: (a) Range of A's broadcast. (b) After B and D have received A's broadcast. (c) After C, F, and G have received A's broadcast. (d) After E, H, and I have received A's broadcast. Shaded nodes are new recipients; arrows show possible reverse routes.
Figure: Format of a ROUTE REQUEST packet and of a ROUTE REPLY packet.
Route maintenance
Figure: (a) D's routing table before G goes down. (b) The graph after G has gone down.

Congestion Control Algorithms
- General principles of congestion control
- Congestion prevention policies
- Congestion control in virtual-circuit subnets
- Congestion control in datagram subnets
- Load shedding
- Jitter control

Congestion
When too much traffic is offered, congestion sets in and performance degrades sharply.
Common causes of congestion: as packets arrive at a node, they are stored in the input buffer; if packets arrive too fast, an incoming packet may find that there is no buffer space available. Even very large buffers cannot prevent congestion, because of delays, timeouts, and retransmissions. Slow processors and low-bandwidth lines can also lead to congestion.
If the packet arrival rate exceeds the packet transmission rate, the queue size grows without bound, and delays in delivering packets lead to retransmissions. When the line for which packets are queuing becomes more than about 80% utilized, the queue length grows alarmingly. When too many packets arrive at a part of a packet-switched network, performance degrades; this is known as congestion.
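The "80% utilized" remark can be made concrete with a standard queueing approximation (an assumption on my part, not something derived in the slides): for an M/M/1 queue the mean number of packets in the system is rho / (1 - rho), where rho is the line utilization, so the queue explodes as utilization approaches 1.

```python
# Mean number of packets in an M/M/1 queue as a function of line utilization rho.
# Illustrative only: real traffic is rarely exactly M/M/1, but the shape is the point.
def mean_packets_in_system(rho: float) -> float:
    return rho / (1.0 - rho)

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {rho:4.0%} -> about {mean_packets_in_system(rho):6.1f} packets in the system")
# 50% -> 1.0, 80% -> 4.0, 90% -> 9.0, 95% -> 19.0, 99% -> 99.0
```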
General Principles of Congestion Control
Congestion control refers to techniques that either prevent congestion from happening or remove it after it has taken place. There are two categories:
- Open loop: based on preventing congestion.
- Closed loop: used for removing congestion once it occurs.
Open loop control tries to solve the problem by good design, so that congestion never happens in the first place. It is exercised with tools such as deciding when to accept new packets, when to discard packets, which packets to discard, and making scheduling decisions at various points. It includes the retransmission policy, window policy, acknowledgement policy, discarding policy, and admission policy.
Closed loop congestion control uses a feedback mechanism and consists of the following steps:
1. Monitor the system to detect when and where congestion occurs.
2. Pass the information to places where action can be taken.
3. Adjust system operation to correct the problem.

Congestion Prevention Policies

Congestion Control in Virtual-Circuit Subnets
One congestion control technique is admission control, used to keep congestion that has already begun at a manageable level. Its principle: once congestion has been detected, do not set up any more virtual circuits until the congestion has cleared. The advantage is that it is simple and easy to carry out.
An alternative approach is to allow new virtual circuits to be set up even when congestion has occurred, but to carefully route all new virtual circuits around the problem area.
Figure: (a) A congested subnet. (b) A redrawn subnet that eliminates the congestion, and a virtual circuit from A to B.
Another strategy is to negotiate an agreement between the host and the subnet when a virtual circuit is set up. The agreement specifies the volume and shape of the traffic, the QoS required, and so on. To keep its part of the agreement, the subnet will typically reserve resources along the path when the circuit is set up; these resources can include table and buffer space in the routers and bandwidth on the lines. Congestion then becomes unlikely, but this may lead to underutilization of resources.

Congestion Control in Datagram Subnets
(These techniques can be used in virtual-circuit subnets as well.) Each router associates a real variable with each of its output lines. This variable, say u, has a value between 0 and 1 and indicates the recent utilization of that line. If u goes above a threshold, that output line enters a "warning" state. The router checks each arriving packet to see whether its output line is in the warning state; if it is, one of the following actions is taken: using a warning bit, or using choke packets.
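A sketch of the utilization bookkeeping described above. The exponentially weighted average used to smooth u is my assumption (any recent-utilization estimate would do), as are the names and the 0.8 threshold.

```python
# Illustrative per-line congestion monitoring for a datagram subnet.
# u is smoothed with an exponentially weighted moving average (an assumption,
# not something specified in the slides); f is the instantaneous utilization
# of the line, sampled periodically.

WARNING_THRESHOLD = 0.8   # assumed threshold

class OutputLine:
    def __init__(self, smoothing: float = 0.9):
        self.u = 0.0              # estimated recent utilization, between 0 and 1
        self.smoothing = smoothing

    def sample(self, f: float) -> None:
        """Fold one utilization sample f into the running estimate u."""
        self.u = self.smoothing * self.u + (1 - self.smoothing) * f

    @property
    def warning(self) -> bool:
        return self.u > WARNING_THRESHOLD

line = OutputLine()
for f in [1.0] * 30:          # the line is busy for 30 consecutive samples
    line.sample(f)
print(round(line.u, 2), line.warning)   # utilization approaches 1.0 -> warning state
```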
The warning bit
The warning state is signalled by setting a special bit in the packet's header. When the packet arrives at its destination, the transport entity copies the bit into the next acknowledgement sent back to the source, and the source then cuts back on its traffic. The source monitors the fraction of acknowledgements with the bit set and adjusts its transmission rate accordingly: as long as warning bits continue to flow in, the source keeps decreasing its transmission rate; when they slow down, it increases its transmission rate.

Choke packets
Each node monitors the utilization of its output lines. If the utilization goes beyond some threshold, the output line enters a "warning state". If the output line for a newly arriving packet is in the warning state, a control packet called a choke packet is sent from the congested node to the source station. The original packet is tagged so that it does not generate any more choke packets along the way. After receiving a choke packet the source station reduces its traffic by a certain percentage: initially by 50%, then by 25%, and so on.
Figure: Hop-by-hop choke packets. (a) A choke packet that affects only the source. (b) A choke packet that affects each hop it passes through.

Load Shedding
When the other methods fail, the nodes can bring out the heavy artillery: load shedding. When a node cannot handle the packets it receives, it simply discards them. For a node drowning in packets, one approach is to discard packets at random. In many situations, however, some packets are more important than others; to implement an intelligent discard policy, applications must mark their packets with priority classes. The policy for file transfer is called wine (old is better than new) and the policy for multimedia is called milk (new is better than old).

Jitter
Jitter is defined as the variation in delay for packets belonging to the same flow. Real-time audio and video cannot tolerate jitter; on the other hand, jitter does not matter if the packets are carrying information contained in a file. For audio and video it does not matter whether packets take 20 msec or 30 msec to reach the destination, provided the delay remains constant; quality suffers when different packets experience different delays. An agreement that 99% of the packets will be delivered with a delay ranging from 24.5 msec to 25.5 msec is acceptable.
Figure: (a) High jitter. (b) Low jitter.

Jitter control
When a packet arrives at a router, the router checks to see whether the packet is ahead of or behind its schedule, and by how much. This information is stored in the packet and is updated at every hop. If the packet is ahead of schedule (early), the router holds it for a slightly longer time; if it is behind schedule (late), the router tries to send it out as quickly as possible. This helps keep the delay per packet constant and thus reduces jitter.
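A toy sketch of the hold-or-expedite rule above. The scheduling bookkeeping (an expected arrival time carried in the packet) is an assumption made for illustration.

```python
# Illustrative jitter-control decision at one router (not a real scheduler).
# Each packet is assumed to carry the time (in msec) at which it was *expected* here.

def forwarding_delay(expected_time: float, actual_time: float) -> float:
    """Return how long the router should hold the packet before forwarding.

    Early packets (actual < expected) are held until their scheduled time;
    late packets are forwarded immediately (delay 0) so they can catch up.
    """
    return max(0.0, expected_time - actual_time)

print(forwarding_delay(expected_time=25.0, actual_time=23.5))  # early: hold 1.5 msec
print(forwarding_delay(expected_time=25.0, actual_time=26.0))  # late: forward at once (0.0)
```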
Quality of Service
- Requirements
- Techniques for achieving good quality of service
- Integrated services
- Differentiated services
- Label switching and MPLS

Requirements
A stream of packets from a source to a destination is called a flow; in a connectionless network the packets of a flow may follow different routes. The needs of a flow can be characterised by four primary parameters: reliability, delay, jitter, and bandwidth. Together these determine the QoS the flow requires.

Techniques for Achieving Good Quality of Service
- Overprovisioning
- Buffering
- Traffic shaping (the leaky bucket algorithm, the token bucket algorithm)
- Resource reservation
- Admission control
- Proportional routing
- Packet scheduling

Overprovisioning
Provide so much router capacity, buffer space, and bandwidth that the packets just fly through the network easily. It is expensive.

Buffering
Smooth the output stream by buffering packets. Flows can be buffered on the receiving side before being delivered. Buffering does not affect reliability or bandwidth; it increases delay, but it smoothes out jitter.

Traffic Shaping
One of the reasons behind congestion is the bursty nature of traffic; if traffic had a uniform data rate, congestion would be less common. Traffic shaping is a form of open loop control: it manages congestion by forcing the packet transmission rate to be more predictable, regulating the average rate and burstiness of data transmission. The network checks whether a packet stream obeys its descriptor and applies a penalty if it violates it; to do this, the network may monitor the traffic flow for the duration of the connection. The process of monitoring and enforcing the traffic flow is called traffic policing. The penalty can be to drop the packets that violate the descriptor, or to give them low priority. The two traffic shaping techniques are the leaky bucket and the token bucket.

The Leaky Bucket Algorithm
Figure: (a) A leaky bucket with water. (b) A leaky bucket with packets.
The leaky bucket is used to control congestion in network traffic. A leaky bucket is a bucket with a hole at the bottom: water flows out of the bucket at a constant rate that is independent of the rate at which water enters, and if the bucket is full, any additional water is thrown away. The same idea is applied to congestion control: every host in the network has a buffer with a finite queue length, and packets that arrive when the buffer is full are thrown away. The buffer may drain onto the subnet either at some number of packets per unit time or at some total number of bytes per unit time. A FIFO queue holds the packets. If the arriving packets are of fixed size, the front of the queue releases a fixed number of packets at each tick of the clock; if the packets are of variable size, the fixed output rate is based on the number of departing bytes rather than the number of departing packets.

The Token Bucket Algorithm
Figure: (a) Before. (b) After.
The token bucket is similar to the leaky bucket but allows some variation in the traffic. A token generator produces tokens at a constant rate of one token every ΔT sec. Every time a packet is transmitted a token is consumed, and when no tokens are left no packets can be transmitted. The token bucket can easily be implemented with a counter: the counter is initialized to zero, incremented by 1 each time a token is added, and decremented by 1 each time a unit of data is dispatched. When the counter is zero, no data can be transmitted.
Figure: (a) Input to a leaky bucket. (b) Output from a leaky bucket. Output from a token bucket with capacities of (c) 250 KB, (d) 500 KB, (e) 750 KB. (f) Output from a 500-KB token bucket feeding a 10-MB/sec leaky bucket.
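The counter-based token bucket just described might look like the following sketch. The capacity limit and units are assumptions made for illustration.

```python
# Illustrative counter-based token bucket (capacity and units are assumed).

class TokenBucket:
    def __init__(self, capacity: int):
        self.tokens = 0            # the counter: starts at zero
        self.capacity = capacity   # maximum number of tokens the bucket can hold

    def add_token(self) -> None:
        """Called once every delta-T seconds by the token generator."""
        self.tokens = min(self.capacity, self.tokens + 1)

    def try_send(self) -> bool:
        """Send one unit of data if a token is available; otherwise refuse."""
        if self.tokens == 0:
            return False           # no tokens left: the packet must wait
        self.tokens -= 1
        return True

bucket = TokenBucket(capacity=3)
for _ in range(5):
    bucket.add_token()             # 5 ticks pass, but only 3 tokens are kept
print([bucket.try_send() for _ in range(5)])   # [True, True, True, False, False]
```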
Resource Reservation
A data flow depends on the following resources: (i) buffer space, (ii) bandwidth, and (iii) CPU time. QoS can be improved by reserving these resources. The QoS model called integrated services depends heavily on the principle of resource reservation for improving QoS.

Admission Control
Admission control is used by routers and switches to accept or reject a flow based on predefined parameters called flow specifications. Before accepting a flow for processing, a router checks the flow specification to judge whether it can handle the new data flow; it does this by checking its capacity in terms of bandwidth, buffer size, CPU speed, etc., and its commitments to other flows.
Figure: An example of a flow specification.

Proportional Routing
Most routers send all traffic for a destination over the best path. A different approach is to split the traffic for each destination over multiple paths, using locally available information and dividing the traffic equally or in proportion to the capacities of the outgoing links.

Packet Scheduling
Figure: (a) A router with five packets queued for line O. (b) Finishing times for the five packets.
Each router must implement some queuing discipline that governs how packets are buffered while they are waiting to be transmitted. If a router is handling multiple flows, there is a danger that one flow will take too much of its capacity and starve all the other flows. Two algorithms are used for this (a small sketch follows the RSVP figures below):
- Fair queuing: per-flow queues are scanned in a round-robin manner; packets are sent in the order of their computed finishing times.
- Weighted fair queuing: instead of giving the same priority to all hosts, higher priority can be given to some hosts (such as servers), leading to a modified algorithm called weighted fair queuing.

Integrated Services
Used for multimedia streams; also known as flow-based algorithms; used for multicast applications.

RSVP - The Resource reSerVation Protocol
RSVP handles multicast flows from multiple sources to multiple destinations and uses multicast routing with spanning trees. Each group is assigned a group address, which is used to send packets to that group; the multicast routing algorithm builds a spanning tree covering all group members. To avoid congestion, any of the receivers can send a reservation message up the tree to the sender.
Figure: (a) A network. (b) The multicast spanning tree for host 1. (c) The multicast spanning tree for host 2.
Figure: (a) Host 3 requests a channel to host 1. (b) Host 3 then requests a second channel, to host 2. (c) Host 5 requests a channel to host 1.
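Returning to packet scheduling: the finish-time idea can be sketched as below. This is a deliberately simplified model, assuming all packets are already queued (as in the figure with five packets for line O) and ignoring arrival times; the flow names, sizes, and weights are invented.

```python
# Illustrative (weighted) fair queuing: compute virtual finishing times for
# packets already in the queue and transmit in that order.

def finishing_order(packets, weights=None):
    """packets: list of (flow, size_in_bytes) in queue order.
    Returns the packets sorted by their virtual finishing time."""
    weights = weights or {}
    finish = {}                                 # last finishing time per flow
    schedule = []
    for flow, size in packets:
        w = weights.get(flow, 1)                # WFQ: a heavier flow "finishes" sooner
        f = finish.get(flow, 0) + size / w      # later packets of a flow finish later
        finish[flow] = f
        schedule.append((f, flow, size))
    return [(flow, size) for f, flow, size in sorted(schedule)]

queued = [("A", 8), ("B", 6), ("A", 6), ("C", 10), ("B", 9)]
print(finishing_order(queued))                     # plain fair queuing
print(finishing_order(queued, weights={"C": 2}))   # WFQ: flow C gets double weight
```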
Differentiated Services (DS)
DS can be offered by a set of routers forming an administrative domain. The administration defines a set of service classes with corresponding forwarding rules. If a customer signs up for DS, the customer's packets entering the domain may carry a Type of Service field, with better service provided to some classes than to others. Traffic within a class may be required to conform to some specific shape, such as a leaky bucket with a specified drain rate. This is also known as class-based service, and it can be implemented using expedited forwarding or assured forwarding.
Example, Internet telephony: with a flow-based scheme, each telephone call gets its own resources and guarantees; with a class-based scheme, all telephone calls together get the resources reserved for the class "telephony". These resources cannot be taken away by packets from the file transfer class or other classes, but no telephone call gets any private resources reserved for it alone.

Expedited Forwarding
Expedited packets experience a traffic-free network.

Assured Forwarding
Assured forwarding manages service classes. It specifies that there shall be four priority classes, each class having its own resources, and it defines three discard probabilities for packets that are undergoing congestion: low, medium, and high. Packets are processed under assured forwarding as follows:
Step 1: classify the packets into the four priority classes.
Step 2: mark the packets according to their classes.
Step 3: pass the packets through a shaper/dropper that may delay or drop some of them to shape the streams into acceptable forms.
Figure: A possible implementation of the data flow for assured forwarding.

Label Switching and MPLS
Figure: Transmitting a TCP segment using IP, MPLS, and PPP.
Adding a label in front of each packet and routing on the basis of that label, rather than on the destination address, is known as label switching. With this technique routing can be done quickly and any necessary resources can be reserved along the path. The technique has been standardized as MPLS (MultiProtocol Label Switching).
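Label-based forwarding can be pictured as a simple table lookup, which is what makes it fast. The sketch below is illustrative only; the table layout (incoming label -> outgoing line and new label) and all the values are assumptions.

```python
# Illustrative label-switching lookup at one router. Real MPLS labels are
# 20-bit values carried in a shim header; the numbers here are made up.

# incoming label -> (outgoing line, outgoing label)
label_table = {
    17: ("line-2", 42),
    18: ("line-3", 7),
}

def forward(label: int):
    """Look up the incoming label, swap it, and pick the outgoing line."""
    out_line, out_label = label_table[label]
    return out_line, out_label

print(forward(17))   # ('line-2', 42): forwarded on line 2, relabeled as 42
```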
The X.25 Network
To use X.25, a computer first established a connection to the remote computer, that is, placed a telephone call. This connection was given a connection number to be used in data transfer packets. Data packets were very simple, consisting of a 3-byte header and up to 128 bytes of data. The header consisted of a 12-bit connection number, a packet sequence number, an acknowledgement number, and a few miscellaneous bits.
X.25 was designed to provide a low-cost alternative for data communication over public networks: the subscriber pays only for the bandwidth actually used, which makes it ideal for "bursty" communication over low-quality circuits. The standard provides error detection and correction for reliable data transfer. The X.25 standard was approved in 1976 by the CCITT (now known as the ITU). It can support speeds of 9.6 kbps to 2 Mbps and can multiplex up to 4,095 virtual circuits over one DTE-DCE link.

X.25 Devices
- Data Terminal Equipment (DTE): terminals, personal computers, and network hosts, located on the premises of the subscriber.
- Data Circuit-terminating Equipment (DCE): modems and packet switches, usually located at the carrier's facility.
- Packet Switching Exchange (PSE): the switches that make up the carrier network.
Figure: A sample X.25 network. DTEs (terminals, personal computers, a server) connect through modems (DCEs) to PSEs inside the X.25 WAN.

Frame Relay
Frame Relay is a virtual-circuit WAN. Prior to Frame Relay, X.25 was used, and X.25 had the following problems: a low data rate (64 kbps); intensive flow control and error control at the data link and network layers, which slows the network down and adds overhead; and it was originally used for private networks, not for the Internet, so it has its own network layer.

Frame Relay features
- Operates at higher speeds (1.544 Mbps and 44.376 Mbps)
- Operates in the physical and data link layers only
- Can act as a backbone network
- Allows bursty data
- Allows a frame size of 9,000 bytes, which can accommodate all local area network frame sizes
- Cheap compared to other traditional WANs
- Error detection at the data link layer only; no flow control or error control at higher layers
Figure: Frame Relay network architecture. DTEs (personal computers, terminals, network hosts) attach to packet switches (DCEs) inside the Frame Relay WAN.

Frame Relay Devices
Devices attached to a Frame Relay WAN fall into two general categories:
- Data terminal equipment (DTE): DTEs generally are considered to be terminating equipment for a specific network and typically are located on the premises of a customer. Examples of DTE devices are terminals, personal computers, routers, and bridges.
- Data circuit-terminating equipment (DCE): DCEs are carrier-owned internetworking devices. The purpose of DCE equipment is to provide clocking and switching services in a network; these are the devices that actually transmit data through the WAN.

Frame Relay Virtual Circuits
Frame Relay provides connection-oriented data link layer communication: a defined communication path exists, and each connection is associated with a connection identifier. This is implemented using a Frame Relay virtual circuit, which is a logical connection created between two DTE devices across a Frame Relay packet-switched network (PSN). Virtual circuits provide a bidirectional communication path from one DTE device to another and are uniquely identified by a data-link connection identifier (DLCI).
A number of virtual circuits can be multiplexed into a single physical circuit for transmission across the network. This capability often can reduce the equipment and network complexity required to connect multiple DTE devices. A virtual circuit can pass through any number of intermediate DCE devices (switches) located within the Frame Relay PSN. Frame Relay virtual circuits fall into two categories: switched virtual circuits (SVCs) and permanent virtual circuits (PVCs).
Switched Virtual Circuits (SVCs)
SVCs are temporary connections used in situations requiring only sporadic data transfer between DTE devices. A communication session across an SVC consists of the following four operational states:
- Call setup: the virtual circuit between two Frame Relay DTE devices is established.
- Data transfer: data is transmitted between the DTE devices over the virtual circuit.
- Idle: the connection between the DTE devices is still active, but no data is transferred. If an SVC remains in an idle state for a defined period of time, the call can be terminated.
- Call termination: the virtual circuit between the DTE devices is terminated.

Permanent Virtual Circuits (PVCs)
PVCs are permanently established connections used for frequent and consistent data transfers between DTE devices. They do not require the call setup and termination states, and always operate in one of the following two states:
- Data transfer: data is transmitted between the DTE devices over the virtual circuit.
- Idle: the connection is active, but no data is transferred. Unlike SVCs, PVCs are not terminated under any circumstances when in the idle state.
DTE devices can begin transferring data whenever they are ready, because the circuit is permanently established.

Data-Link Connection Identifier (DLCI)
Frame Relay virtual circuits are identified by data-link connection identifiers (DLCIs). DLCI values typically are assigned by the Frame Relay service provider (for example, the telephone company). Frame Relay DLCIs have local significance, which means that their values are unique only on the local link, not necessarily across the Frame Relay WAN.

Frame Relay Switches
Each switch in a Frame Relay network has a table it uses to route frames. The table matches an incoming port-DLCI combination with an outgoing port-DLCI combination.
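That switching table can be pictured as a simple mapping, as in the sketch below; the port numbers and DLCI values are invented for illustration.

```python
# Illustrative Frame Relay switching table: (incoming port, incoming DLCI)
# is mapped to (outgoing port, outgoing DLCI). All values are made up.

switch_table = {
    (1, 122): (2, 508),   # frames arriving on port 1 with DLCI 122 leave port 2 as DLCI 508
    (1, 167): (3, 431),
    (2, 508): (1, 122),   # reverse direction of the first virtual circuit
}

def switch_frame(in_port: int, in_dlci: int):
    """Look up the outgoing port and rewrite the DLCI for one frame."""
    out_port, out_dlci = switch_table[(in_port, in_dlci)]
    return out_port, out_dlci

print(switch_frame(1, 122))   # (2, 508)
```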