Router Architecture
EECS 569
Sanjeev Kayath

Classification of Routers
- Access network router: needs to support heterogeneous high-speed ports and a variety of protocols.
- Enterprise network router: needs to support a large number of ports at a lower cost.
- Backbone network router: needs to support few, but high-speed, links.

Design Issues: Enterprise Routers
- Primary goal is to provide connectivity to a large number of endpoints as cheaply as possible.
- Support for QoS classes.
- Support for multicast and broadcast traffic.
- Support for multiple network protocols.
- Support for features such as firewalls and administrative and security policies.

Design Issues: Backbone Routers
- Cost is a secondary issue; reliability and speed are the primary issues.
- Reliability: hot spares, duplicate datapaths.
- Speed: the forwarding decision requires a table lookup and, among the matching entries, finding the longest match.
- Stability and reliability of the routing protocol implementation.

Components of a Router
- Input port: entry point for incoming packets.
- Output port: exit point for outgoing packets.
- Switching fabric: switches a packet from its input port to its output port.
- Routing processor: participates in the routing protocol to build the forwarding table.

Evolution of Router Architecture
- The earliest routers were based on a general-purpose computer: a shared central bus, a central CPU, memory, and line cards.
- Each incoming packet was sent to the CPU, the forwarding decision was made in the CPU, and the packet was then forwarded to the output port.
- Every packet traversed the bus twice, and all decisions were made by a single CPU.
- Later, multiple CPUs were introduced to handle portions of the incoming traffic.
- Processing power was moved into the line cards, so a packet needed to traverse the bus only once; ASICs were used in the line cards.
- The shared bus was replaced by a crossbar switch.

Input Port
- Line card: supports 4-16 ports.
- Reads the packet header and performs the route lookup.
- Classifies the packet into QoS traffic classes.
- Performs data link layer functionality.
- Arbitrates access to the switching fabric.
- Custom hardware or a processor is used to handle these functions.

Switching Fabric
- Bus: limited by arbitration overhead and capacitance.
- Crossbar: a scheduler turns the crosspoints on and off.
- Shared memory: only pointers to packets are switched; limited by the memory access time.

Output Port
- Packets heading for the same output link need to be stored in a buffer to avoid packet loss.
- Supports sophisticated scheduling algorithms to provide priorities and guarantees.
- Supports data link layer functionality.

Routing Processor
- Computes the forwarding table based on the updates received from other routers according to the routing protocol.
- Runs the software that configures and manages the router.

Datapath / Control Functions
Router functions can also be divided into:
- Datapath functions: applied to every packet, e.g. header lookup, forwarding, scheduling. Handled by the input ports, output ports, and switching fabric.
- Control functions: not applied to every packet, e.g. system configuration, management, table updates. Handled by the routing processor.
The goal of higher speed requires increasing the rate at which the datapath functions are performed.

Trends: Route Lookup
- The speed of the lookup algorithm is determined by the number of memory accesses needed to match one address and by the speed of the memory.
- Rule of thumb: 1,000 pps per 1 Mbps (average packet size: 125 bytes). E.g. OC-192 at 10 Gbps implies 10 million pps.
- The traditional algorithm stores routes in a tree; every path in the tree from the root to a leaf corresponds to an entry in the forwarding table. A sketch of this structure follows below.
- The worst-case time for a longest prefix match is proportional to the length of the destination address.
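The tree-based lookup described above can be sketched in a few lines. The following Python sketch is illustrative only (it is not from the slides, and the class and helper names are assumptions): routes are stored in a binary trie keyed by address bits, and a lookup walks one bit per step while remembering the deepest matching next hop, which is why the worst case is proportional to the length of the destination address.

```python
# Illustrative sketch of trie-based longest-prefix-match route lookup.
# Class and function names are assumptions for this example only.

class TrieNode:
    def __init__(self):
        self.children = {}      # bit ('0' or '1') -> TrieNode
        self.next_hop = None    # set if a stored prefix ends at this node

class ForwardingTable:
    def __init__(self):
        self.root = TrieNode()

    def add_route(self, prefix_bits, next_hop):
        """prefix_bits is a bit string, e.g. '1100000010101000' for 192.168.0.0/16."""
        node = self.root
        for bit in prefix_bits:
            node = node.children.setdefault(bit, TrieNode())
        node.next_hop = next_hop

    def lookup(self, address_bits):
        """Longest-prefix match: walk one bit per step, keep the deepest next hop seen."""
        node, best = self.root, None
        for bit in address_bits:
            if node.next_hop is not None:
                best = node.next_hop
            node = node.children.get(bit)
            if node is None:
                break
        else:
            if node.next_hop is not None:
                best = node.next_hop
        return best

def ip_to_bits(dotted, length=32):
    """Helper (illustrative only): convert a dotted-quad IPv4 address to a bit string."""
    value = 0
    for octet in dotted.split('.'):
        value = (value << 8) | int(octet)
    return format(value, '032b')[:length]

# Example: two overlapping prefixes; the longer match wins.
table = ForwardingTable()
table.add_route(ip_to_bits('10.0.0.0', 8), 'port 1')
table.add_route(ip_to_bits('10.1.0.0', 16), 'port 2')
print(table.lookup(ip_to_bits('10.1.2.3')))   # -> 'port 2' (longest match)
print(table.lookup(ip_to_bits('10.9.9.9')))   # -> 'port 1'
```

A real router would implement this in hardware or with compressed multi-bit tries, but the per-bit walk is the structure the worst-case bound above refers to.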
Trends: Route Lookup (continued)
Techniques to improve lookup speed:
- Hardware-oriented techniques: CAM (content-addressable memory), increased memory to store more entries (Stanford), intelligent memory (Harvard).
- Table compaction techniques.
- Better data structures for the forwarding table (Sweden).
- Hashing, and binary search on a hash table (WUSTL).

Trends: Switching Fabric
- Blocking vs. non-blocking: if packets can contend for the same internal link, the fabric is blocking.
- Banyan switches are blocking; Batcher-Banyan switches are non-blocking and are used in ATM switches.
- Multi-stage interconnection networks.
- Use of ATM switch cores in IP routers.

Trends: Output Port
- Speed up the output queue, i.e. increase the speed at which the queue can be accessed: use very wide memory, or integrate the port controller with the queue on a single chip.
- Queuing: FCFS cannot offer differentiated QoS; fair queuing offers "weighted" service.

Trends: Cost of a Port
The cost of a port depends on:
- The amount and kind of memory: backbone routers use SRAMs, enterprise routers use DRAMs.
- Processing power: backbone routers use processors (general-purpose or network processors) for extensible functionality; enterprise routers use ASICs, which lack flexibility.
- Communication between the routing processor and the port.

Trends: Avoid Route Lookup
- The edge router translates the destination address into a tag/label/VCI.
- Core routers then do not need to do longest prefix matching; forwarding is done in one memory access (see the label-swapping sketch at the end of this section).
- The label/tag/VCI-to-address mappings need to be distributed.
- MPLS is a very popular protocol of this kind.

Trends: Router OS
- Value-added services: security, accounting, caching, and resource management.
- Research: Purdue, Princeton, etc.
- Router API standardization to open up the router architecture.

Products
- Single-box architecture: high-capacity switching fabrics; blocking LAN interconnect to link multiple boxes and increase overall capacity. Maximum line capacity: 25 Gbps to 160 Gbps.
- Multi-chassis integrated architecture: an expandable switching fabric provides a non-blocking interconnection between multiple expansion chassis. Maximum line capacity: 160 Gbps to 19.2 Tbps.
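To make the "avoid route lookup" idea concrete, here is a small illustrative Python sketch (the table contents and function names are assumptions, not from the slides): the edge router attaches a label after its usual longest-prefix match, and each core router forwards with a single exact-match table access, swapping the label on the way, as MPLS-style label switching does.

```python
# Illustrative sketch of label-based forwarding: one exact-match lookup per
# core hop instead of a longest-prefix match.  All values are assumptions.

# Edge router: maps the prefix chosen by its route lookup to an outgoing label.
EDGE_LABEL_BINDINGS = {
    '10.1.0.0/16': 101,
    '10.0.0.0/8':  100,
}

# Core router: incoming label -> (outgoing label, output port).  Exact match only.
CORE_LABEL_TABLE = {
    100: (200, 'port 3'),
    101: (201, 'port 7'),
}

def edge_classify(matched_prefix):
    """Edge router: after the longest-prefix match picked matched_prefix, attach a label."""
    return EDGE_LABEL_BINDINGS[matched_prefix]

def core_forward(label):
    """Core router: one table access -- swap the label and pick the output port."""
    out_label, out_port = CORE_LABEL_TABLE[label]
    return out_label, out_port

# Example: the edge router matched 10.1.0.0/16 for a packet destined to 10.1.2.3.
label = edge_classify('10.1.0.0/16')   # -> 101
print(core_forward(label))             # -> (201, 'port 7')
```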