NetLord: A Scalable Multi-tenant Network Architecture for Virtualized Cloud Datacenters
Jayaram Mudigonda, Praveen Yalagandula, Jeff Mogul, Bryan Stiekes, Yanick Pouffary
Hewlett-Packard Company
© Copyright 2010 Hewlett-Packard Development Company, L.P.
25 May 2017

The Goal
Build the right network for a cloud datacenter.

Cloud Datacenter
• Provides Infrastructure as a Service
− Shared across multiple tenants
• Virtualized
− Tenants run Virtual Machines (VMs)

The Right Network
• Virtualization + multi-tenancy
− A virtual network for each tenant
− Flexible abstraction: no restrictions on addressing or protocols
• Scale
− Tenants, servers: tens of thousands; VMs: hundreds of thousands
− Adequate bandwidth
• Inexpensive
− Capital: cheap COTS components
− Operational: ease of management, high bisection BW

The Challenge
• Basic COTS switching gear
− Limited functionality
− Limited resources: not enough Forwarding Information Base (FIB) space
• COTS switches + bisection BW → multipathing
• Multipathing and/or scale → more switch resources

The Challenge
Assume no switch support, and conserve switch resources (FIB), to simultaneously achieve: scale, multi-tenancy, bisection BW, and ease of config.

State of the Art – Scale
• Most prior work is limited by one or more of:
− New protocols
− Modified control and/or data planes
− Preferred topologies
− Resources (such as table space) on switches

State of the Art – Multi-tenancy
• Traditional VLANs
− A single tenant per VLAN
− No L3 abstraction
− Cannot scale beyond 4K VLANs
• Commercial: Amazon EC2 and Microsoft Azure
• Most recent work addresses segregation, not virtualization
• Diverter [WREN 2010]
− Carves a SINGLE IP address space among tenants
− Restricts VM placement and IP address assignment
− Can be limited by the FIB space in switches

NetLord
• An encapsulation scheme and a complementary switch configuration
• Scalable multi-tenancy
• Ease of configuration
• Order-of-magnitude reduction in FIB requirements
• High bisection BW

Why Encapsulate?
• Unmodified VM packets on the network → excessive FIB pressure and no L3 abstraction
• Alternative: rewrite headers
− Cannot assume an L3 header → rewrite only the MAC header
− Rewriting with server MACs → somewhat reduced FIB, but cannot identify the right VM on the destination server

NetLord Encapsulation
• Two headers: a MAC header and an IP header
• Reduced FIB pressure
− Outer src MAC = MAC of the source edge switch
− Outer dst MAC = MAC of the destination edge switch
• Correct delivery
− Right edge switch: the outer MAC header
− Right server: the right port number in the outer destination IP address
− Right VM: the tenant ID from the outer destination IP + the inner destination MAC
• Clean abstraction
− No assumptions about VM protocols and/or addressing

[Figure: two servers, each running a NetLord agent in its hypervisor, attached to edge-switch ports P1 and P2 (switch MACs ES-MAC1, ES-MAC2) across a COTS Ethernet. Outer destination IP format: a 0 bit, a 7-bit port number, and a 24-bit tenant ID — e.g., P1.0.0.15 and P2.0.0.15 for two VMs of tenant 15.]

Switch Configuration
• The outer MAC header takes the packet to the egress edge switch
• A switch receiving a packet addressed to its own MAC:
− Strips the MAC header and forwards based on the IP header inside
− Standard behavior
• Correct forwarding
− Configure the L3 forwarding tables correctly
− Make sure they match the server configs

[Figure: an edge switch with 48 server downlinks; its L3 table maps prefix 1.*.*.* → gateway 1.0.0.1 on port 1, 2.*.*.* → 2.0.0.1 on port 2, …, 48.*.*.* → 48.0.0.1 on port 48.]

NL-ARP
• Builds and maintains the mapping <VM-MAC, Subnet, Tenant-ID> → <SWITCH-MAC, PORT>
• Query–response, similar to regular ARP
• The NL-Agent broadcasts on VM boot-up and migration

Goals Achieved: Abstraction
• No assumptions about VM-to-VM packet contents
• No restrictions on addressing schemes
• No restrictions on L3 protocols

Goals Achieved: Scale
• 24-bit tenant ID → millions of tenants
• Order-of-magnitude reduction in FIB requirements
− Non-edge switches see only edge-switch MACs
− Edge switches additionally see their local server MACs
• Small number of L3 table entries in edge switches
− One per downlink

Goals Achieved: Ease of Config
• Automatic
• Local
• Same across all switches → easy to debug
• Does not change → easy to debug
• Easy per-tenant traffic engineering in the network
− Tenant ID readily available in a standard IP header

Bisection Bandwidth
• Reduced FIB pressure → utilize SPAIN effectively
J. Mudigonda, P. Yalagandula, M. Al-Fares, and J. C. Mogul. SPAIN: COTS Data-Center Ethernet for Multipathing over Arbitrary Topologies. In Proc. USENIX NSDI, 2010.
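The outer-address layout from the encapsulation slide (a leading 0 bit, a 7-bit edge-switch port number, and a 24-bit tenant ID packed into one IPv4 address) can be sketched as follows. This is a minimal illustration of the bit layout; the function names are illustrative, not from the paper.

```python
# Sketch of NetLord's outer destination-IP encoding, assuming the layout
# shown on the slide: 0 bit | 7-bit port number | 24-bit tenant ID.

def encode_outer_ip(port, tenant_id):
    """Pack a 7-bit egress port and a 24-bit tenant ID into dotted-quad form."""
    assert 0 <= port < 2**7 and 0 <= tenant_id < 2**24
    word = (port << 24) | tenant_id          # top bit of the address stays 0
    return ".".join(str((word >> s) & 0xFF) for s in (24, 16, 8, 0))

def decode_outer_ip(addr):
    """Recover (port, tenant_id) from an outer destination IP."""
    a, b, c, d = (int(o) for o in addr.split("."))
    word = (a << 24) | (b << 16) | (c << 8) | d
    return (word >> 24) & 0x7F, word & 0xFFFFFF

# Tenant 15 behind edge-switch port 1, as in the slide's "P1.0.0.15" example:
print(encode_outer_ip(1, 15))       # → 1.0.0.15
print(decode_outer_ip("2.0.0.15"))  # → (2, 15)
```

Because the port and tenant ID live in an ordinary IPv4 header, the egress edge switch can deliver to the right server with a standard longest-prefix-match rule, which is what keeps its L3 table at one entry per downlink.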
SPAIN in One Slide
• Multipath forwarding in container/pod datacenters
− A few thousand servers (or unique MAC addresses)
− COTS Ethernet (with VLAN support)
− Arbitrary topologies
• A central, off-line manager:
− Discovers the topology
− Computes paths; exploits redundancy in the topology
− Merges paths into a minimal set of trees
− Installs trees as VLANs into switches
• End host (NIC driver):
− Downloads VLAN maps from the manager
− Spreads flows over VLANs (i.e., paths)

SPAIN & Extreme Scalability
• Critical resource: forwarding table entries (FIB)
− Each MAC address appears on multiple VLANs
− Worst-case FIB = #VLANs × #MACs
− Overflowing FIB → floods and poor goodput
• 64K entries → only a few thousand unique MACs
• NetLord's ability to hide MACs helps greatly

Putting It All Together
[Figure: VM1 on Server 1 sends to VM2 on Server 2. The NetLord agent encapsulates the VM packet (inner MACs macA → macB, inner IPs IP1 → IP2) with an outer header addressed to the destination edge switch; the outer destination IP P2.*.*.* carries the egress port and tenant ID across the COTS Ethernet.]

Evaluation
• Analytical arguments:
− Max VMs: ~650K using 48-port switches with 64K-entry FIBs
− NL-ARP overheads: < 1% of link BW, ~0.5% of server memory
• Prototype + VM emulation:
− NLA and encapsulation overheads are negligible
− Substantial goodput improvements

Experimental Setup: Topology
• 74 servers in a 2-level fat-tree topology

Prototype, VM Emulation
• NLA: a kernel module
• VM emulator
− A thin module above the NLA
− Exports a virtual device
− Rewrites MAC addresses: a unique pair for each TCP connection
• Network: up to 3K VMs per server — 200K VMs in all
[Figure: host software stack — user-level process, protocol stack, VM emulator, NetLord agent, NIC driver, physical NIC.]

Experimental Setup: Workload
• Parallel shuffles
− Emulating the shuffle phase of Map-Reduce jobs
− Each shuffle: 74 mappers and 74 reducers, one per server
− Each mapper transfers 10 MB of data to all reducers

Goodput
[Figure: goodput in Gbps (0–25) for Traditional, SPAIN, and NetLord as the number of VMs grows from 74 to 222K.]

Floods
[Figure: flooded packets in millions (0–30) for Traditional, SPAIN, and NetLord over the same range of VM counts.]

Summary
NetLord combines simple existing primitives in a novel fashion to achieve several outsized benefits of practical importance:
• Scale
• Multi-tenancy
• Ease of use
• Bisection BW

BACKUP

Putting It All Together
[Backup slide: repeat of the Putting It All Together figure.]
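As a companion to the NL-ARP slide, the mapping an NL-Agent maintains — <VM-MAC, Subnet, Tenant-ID> → <SWITCH-MAC, PORT> — can be sketched as a simple table. This is a minimal sketch under the slide's description; the class and method names are illustrative, not the paper's actual data structures.

```python
# Sketch of the NL-ARP table kept by each NetLord agent. A miss would
# trigger a query-response exchange, analogous to regular ARP.

class NLArpTable:
    def __init__(self):
        # (vm_mac, subnet, tenant_id) -> (edge_switch_mac, port)
        self._table = {}

    def announce(self, vm_mac, subnet, tenant_id, switch_mac, port):
        """Record a mapping broadcast on VM boot-up or migration."""
        self._table[(vm_mac, subnet, tenant_id)] = (switch_mac, port)

    def lookup(self, vm_mac, subnet, tenant_id):
        """Resolve the egress edge switch and port for a destination VM;
        None means the agent must issue an NL-ARP query."""
        return self._table.get((vm_mac, subnet, tenant_id))

table = NLArpTable()
table.announce("macB", "10.0.0.0/24", 15, "ES-MAC2", 2)
print(table.lookup("macB", "10.0.0.0/24", 15))  # → ('ES-MAC2', 2)
```

Keying on the tenant ID as well as the VM MAC is what lets different tenants reuse the same MAC and IP addresses without ambiguity, which is the abstraction goal the deck emphasizes.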