An Experimenter’s Guide to OpenFlow
GENI Engineering Workshop, June 2010
Rob Sherwood (with help from many others)

Talk Overview
• What is OpenFlow
• How OpenFlow Works
• OpenFlow for GENI Experimenters
• Deployments
Next Session: OpenFlow “Office Hours”
• Overview of available software, hardware
• Getting started with NOX

What is OpenFlow?
Short story: OpenFlow is an API
• Control how packets are forwarded
• Implementable on COTS hardware
• Make deployed networks programmable – not just configurable
• Makes innovation easier
• Goal (experimenter’s perspective):
– No more special-purpose test-beds
– Validate your experiments on deployed hardware with real traffic at full line speed

How Does OpenFlow Work?
(Figure: a classical Ethernet switch pairs a control path in software with a data path in hardware; with OpenFlow, an external OpenFlow controller speaks the OpenFlow protocol over SSL/TCP to the switch, which programs the hardware data path.)

OpenFlow Flow Table Abstraction
(Figure: software layer – controller PC and OpenFlow firmware; hardware layer – the flow table, with hosts attached on ports 1–4.)
Example flow entry:
MAC src=*, MAC dst=*, IP Src=*, IP Dst=5.6.7.8, TCP sport=*, TCP dport=*  →  Action: port 1

OpenFlow Basics: Flow Table Entries
Each entry has a Rule, an Action, and Stats (packet + byte counters).
Actions:
1. Forward packet to port(s)
2. Encapsulate and forward to controller
3. Drop packet
4. Send to normal processing pipeline
5. Modify fields
The rule matches on Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP Src, IP Dst, IP Prot, TCP sport, TCP dport – plus a mask saying which fields to match.

Examples
Switching:
Switch Port=*, MAC src=*, MAC dst=00:1f:.., Eth type=*, VLAN ID=*, IP Src=*, IP Dst=*, IP Prot=*, TCP sport=*, TCP dport=*  →  Action: port6
Flow switching:
Switch Port=port3, MAC src=00:20.., MAC dst=00:1f.., Eth type=0800, VLAN ID=vlan1, IP Src=1.2.3.4, IP Dst=5.6.7.8, IP Prot=4, TCP sport=17264, TCP dport=80  →  Action: port6
Firewall:
Switch Port=*, MAC src=*, MAC dst=*, Eth type=*, VLAN ID=*, IP Src=*, IP Dst=*, IP Prot=*, TCP sport=*, TCP dport=22  →  Action: drop
Routing:
Switch Port=*, MAC src=*, MAC dst=*, Eth type=*, VLAN ID=*, IP Src=*, IP Dst=5.6.7.8, IP Prot=*, TCP sport=*, TCP dport=*  →  Action: port6
VLAN switching:
Switch Port=*, MAC src=*, MAC dst=00:1f.., Eth type=*, VLAN ID=vlan1, IP Src=*, IP Dst=*, IP Prot=*, TCP sport=*, TCP dport=*  →  Action: port6, port7, port9
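To make the flow-table abstraction and the examples above concrete, here is a minimal, self-contained Python sketch. It is purely illustrative (not an OpenFlow API); the `FlowEntry` class, field names, and action strings are invented for this example, but it shows how wildcard rules, actions, and per-entry counters fit together.

```python
# Illustrative sketch of the OpenFlow match/action abstraction.
# Not a real OpenFlow implementation; names are for illustration only.

WILDCARD = "*"

# The ten header fields an OpenFlow 1.0 rule can match on.
FIELDS = ("in_port", "mac_src", "mac_dst", "eth_type", "vlan_id",
          "ip_src", "ip_dst", "ip_proto", "tcp_sport", "tcp_dport")

class FlowEntry:
    def __init__(self, match, action):
        # Any field omitted from `match` is treated as a wildcard.
        self.match = {f: match.get(f, WILDCARD) for f in FIELDS}
        self.action = action          # e.g. "output:6", "drop", "controller"
        self.packets = 0              # per-entry counter (the "Stats" column)

    def matches(self, pkt):
        return all(v == WILDCARD or pkt.get(f) == v
                   for f, v in self.match.items())

def lookup(flow_table, pkt):
    """Return the action of the first matching entry (list order stands in
    for rule priority); on a table miss, send the packet to the controller."""
    for entry in flow_table:
        if entry.matches(pkt):
            entry.packets += 1
            return entry.action
    return "controller"

# The firewall and switching examples from the slides:
table = [
    FlowEntry({"tcp_dport": 22}, "drop"),             # firewall: block ssh
    FlowEntry({"mac_dst": "00:1f:.."}, "output:6"),   # switch on MAC dst
]
print(lookup(table, {"mac_dst": "00:1f:..", "tcp_dport": 80}))  # -> output:6
```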
OpenFlow Usage
(Figure: a dedicated OpenFlow network – a controller PC running the experimenter’s code (“Aaron’s code”) speaks the OpenFlow protocol to several OpenFlow switches, each holding Rule/Action/Statistics flow entries. See OpenFlowSwitch.org.)

OpenFlow Road Map
• OF v1.0 (current)
– bandwidth slicing
– match on VLAN PCP, IP ToS
• OF v1.1: extensions for WAN, late 2010
– multiple tables: leverage additional tables
– tags, tunnels, interface bonding
• OF v2+: 2011?
– generalized matching and actions: an “instruction set” for networking

What OpenFlow Can’t Do (1)
• Non-flow-based (per-packet) networking
– e.g., sample 1% of packets
– yes, this is a fundamental limitation
– BUT OpenFlow can provide the plumbing to connect these systems
• Use all tables on switch chips
– yes, a major limitation (cross-product issue)
– BUT an upcoming OF version will expose these

What OpenFlow Can’t Do (2)
• New forwarding primitives
– BUT provides a nice way to integrate them
• New packet formats/field definitions
– BUT plans to generalize in OpenFlow 2.0
• Set up new flows quickly
– ~10 ms delay in our deployment
– BUT can push down flows proactively to avoid delays
– Only a fundamental issue when delays are large or the new-flow rate is high

OpenFlow for Experimenters
• Experiment setup
• Design considerations
• OpenFlow GENI architecture
• Limitations

Why Use OpenFlow in GENI?
• Fine-grained flow-level forwarding control
– e.g., between PL and ProtoGENI nodes
– Not restricted to IP routes or spanning tree
• Control real user traffic with opt-in
– Deploy network services to actual people
• Realistic validations
– By definition: runs on a real production network
– Performance, fan-out, topologies

Experiment Setup Overview
Step 1: Write/configure/deploy an OpenFlow controller
• Each controller implements per-experiment custom forwarding logic
• Write your own or download a pre-existing one
• Configure per-experiment topology and queuing
Step 2: Create a slice and register the experiment
Step 3: Control the traffic of users that opt in to your experiment
• Restricted to a subset of the real topology
• Specify the desired user traffic, e.g., tcp.port=80
• Users opt in via the Opt-In Manager website
• Reserving a compute node makes the experimenter a user on the network

Experiment Design Decisions
• Forwarding logic (of course)
• Centralized vs. distributed control
• Fine- vs. coarse-grained rules
• Reactive vs. proactive rule creation
• Likely more: open research area

Centralized vs. Distributed Control
(Figure: centralized control – one controller manages all OpenFlow switches; distributed control – several controllers each manage a subset of the switches.)

Flow Routing vs. Aggregation
Both models are possible with OpenFlow.
Flow-based:
• Every flow is individually set up by the controller
• Exact-match flow entries
• Flow table contains one entry per flow
• Good for fine-grained control, e.g., campus networks
Aggregated:
• One flow entry covers large groups of flows
• Wildcard flow entries
• Flow table contains one entry per category of flows
• Good for large numbers of flows, e.g., backbone

Reactive vs. Proactive
Both models are possible with OpenFlow.
Reactive:
• First packet of a flow triggers the controller to insert flow entries
• Efficient use of the flow table
• Every flow incurs a small additional flow-setup time
• If the control connection is lost, the switch has limited utility
Proactive:
• Controller pre-populates the flow table in the switch
• Zero additional flow-setup time
• Loss of the control connection does not disrupt traffic
• Essentially requires aggregated (wildcard) rules
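The reactive and proactive models above can be sketched as two controller strategies. The following Python is a hedged illustration, not NOX or any real controller API: the `Switch.install_flow` helper and the packet-in handler are hypothetical stand-ins for the corresponding flow_mod and packet_in machinery.

```python
# Hypothetical sketch contrasting reactive and proactive rule installation.
# `Switch.install_flow` and `on_packet_in` are illustrative, not a real API.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []

    def install_flow(self, match, action, idle_timeout=None):
        # Stand-in for sending a flow_mod down to the switch.
        self.flow_table.append((match, action, idle_timeout))
        print(f"{self.name}: flow_mod {match} -> {action} "
              f"(idle_timeout={idle_timeout})")

def proactive_setup(switch):
    """Pre-populate wildcard rules: zero per-flow setup delay, and traffic
    keeps flowing even if the control connection is lost."""
    switch.install_flow({"ip_dst": "10.0.0.0/8"}, "output:1")
    switch.install_flow({"ip_dst": "*"}, "output:2")        # default route

def on_packet_in(switch, pkt):
    """Reactive: the first packet of each flow reaches the controller, which
    installs an exact-match entry (fine-grained control, but it adds setup
    latency per new flow and depends on the control connection)."""
    exact_match = dict(pkt)                 # match every header field
    action = "output:1" if pkt["ip_dst"].startswith("10.") else "output:2"
    switch.install_flow(exact_match, action, idle_timeout=60)

sw = Switch("s1")
proactive_setup(sw)                                          # proactive
on_packet_in(sw, {"ip_src": "10.0.0.5", "ip_dst": "10.0.0.9",
                  "tcp_dport": 80})                          # reactive
```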
Examples of OpenFlow in Action
• VM migration across subnets
• Energy-efficient data center network
• WAN aggregation
• Network slicing
• Default-off network
• Scalable Ethernet
• Scalable data center network
• Load balancing
• Formal model solver verification
• Distributing FPGA processing
Summary of demos in the next session.

Opt-In Manager
• User-facing website + list of experiments
• Users log in and opt in to experiments
– Uses local existing auth, e.g., LDAP
– Can opt in to multiple experiments
• Subsets of traffic: Rob & port 80 == Rob’s port 80
– Use priorities to manage conflicts
• Only after opt-in does the experimenter control any traffic

Deployments
OpenFlow Deployment at Stanford
• Switches (23), APs (50), WiMax (1)

Live Stanford Deployment Statistics
http://yuba.stanford.edu/ofhallway/wide-right.html
http://yuba.stanford.edu/ofhallway/wide-left.html

GENI OpenFlow Deployment (2010)
• 8 universities and 2 national research backbones

Three EU projects similar to GENI: OFELIA, SPARC, CHANGE
• Pan-European experimental facility
(Figure: island sites covering L2 packet, L3, optics, routing, wireless, emulation, content delivery, and shadow networks.)

Other OpenFlow Deployments
• Japan – 3-4 universities interconnected by JGN2plus
• Interest in Korea, China, Canada, …

An Experiment on an OpenFlow-enabled Network (Feb. 2009 – Sapporo Snow Festival video transmission)
A video clip of the Sapporo snow festival is transmitted to TJB Broadcasting Company (Daejeon, KOREA) via an ABC server at Asahi Broadcasting Corporation (Osaka, JAPAN).
(Figure: the KOREA OpenFlow Network – OpenFlow switches (Linux PCs) and a NOX OpenFlow controller spanning Seoul, Suwon, Daejeon, Daegu, Gwangju, and Busan over a VLAN on KOREN – carries the data transmission from the Sapporo studio on the Japan OpenFlow Network to TJB in Daejeon.)

Highlights of Deployments
• Stanford deployment
– McKeown group for a year: production and experiments
– To scale later this year to an entire building (~500 users)
• Nation-wide trials and deployments
– 7 other universities and BBN deploying now
– GEC9 in Nov. 2010 will showcase nation-wide OF
– Internet2 and NLR to deploy before GEC9
• Global trials
– Over 60 organizations experimenting
2010 is likely to be a big year for OpenFlow.

Slide Credits
• Guido Appenzeller
• Nick McKeown
• Guru Parulkar
• Brandon Heller
• Lots of others (this slide was also stolen)

Conclusion
• OpenFlow is an API for controlling packet forwarding
• OpenFlow + GENI allows more realistic evaluation of network experiments
• Glossed over many technical details
– What does the API look like?
• Stay for the next session

An Experimenter’s Guide to OpenFlow: Office Hours
GENI Engineering Workshop, June 2010
Rob Sherwood (with help from many others)

Office Hours Overview
• Controllers
• Tools
• Slicing OpenFlow
• OpenFlow switches
• Demo survey
• Ask questions!

Controllers
Controller is King
• Principal job of the experimenter: customize a controller for your OpenFlow experiment
• Many ways to do this:
– Download and configure an existing controller
• e.g., if you just need shortest path
– Read the raw OpenFlow spec and write your own
• handle ~20 OpenFlow messages
– Recommended: extend an existing controller
• Write a module for NOX – www.noxrepo.org

Starting with NOX
• Grab and build
– `git clone git://noxrepo.org/nox`
– `git checkout -b openflow-1.0 origin/openflow-1.0`
– `sh boot.sh; ./configure; make`
• Build NOX first: non-trivial dependencies
• API is documented inline
– `cd doc/doxygen; make html`
– Still very UTSL

Writing a NOX Module
• Modules live in ./src/nox/{core,net,web}apps/*
• Modules are event based
– Register listeners using APIs
– C++ and Python bindings
– Dynamic dependencies
• e.g., many modules (transitively) use discovery.py
• Currently have to update the build manually
– Automated with ./src/scripts/nox-new-c-app.py
• Most up-to-date docs are at noxrepo.org

Useful NOX Events
• Datapath_{join,leave}
– New switch, and switch leaving
• Packet_in / Flow_in
– New datagram or stream, respectively
– Cue to insert a new rule/flow_mod
• Flow_removed
– Expired rule (includes stats)
• Shutdown
– Tear down module; clean up state

Tools
• OpenFlow Wireshark plugin
• MiniNet
• oftrace
• many more…

OpenFlow Wireshark Plugin
• Ships with the OpenFlow reference controller

MiniNet
• Machine-local virtual network – great dev/testing tool
• Uses Linux virtual network features
– Cheaper than VMs
• Arbitrary topologies, nodes
• Scriptable
– Plans to move FlowVisor testing to MiniNet
• http://www.openflow.org/foswiki/bin/view/OpenFlow/Mininet
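For getting started with MiniNet, a small script along the following lines builds a two-switch topology and points it at an external controller. This assumes a recent Mininet release; the high-level API shown here (Topo.build, addHost/addSwitch/addLink, RemoteController) postdates the 2010 version referenced above, so treat it as a sketch and check the current Mininet documentation.

```python
#!/usr/bin/env python
"""Minimal Mininet sketch: two switches, two hosts, and an external
OpenFlow controller on localhost:6633. Assumes a recent Mininet install;
the API may differ from the 2010 version mentioned in the slides."""

from mininet.net import Mininet
from mininet.topo import Topo
from mininet.node import RemoteController
from mininet.log import setLogLevel

class TwoSwitchTopo(Topo):
    def build(self):
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        s1 = self.addSwitch('s1')
        s2 = self.addSwitch('s2')
        self.addLink(h1, s1)
        self.addLink(s1, s2)
        self.addLink(s2, h2)

if __name__ == '__main__':
    setLogLevel('info')
    # Point the virtual switches at your own controller (e.g. NOX) so you
    # can test experiment logic before touching real hardware.
    net = Mininet(topo=TwoSwitchTopo(),
                  controller=lambda name: RemoteController(
                      name, ip='127.0.0.1', port=6633))
    net.start()
    net.pingAll()      # quick connectivity check through the controller
    net.stop()
```

Run it with root privileges (Mininet creates virtual interfaces and network namespaces) while your controller is listening on port 6633.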
OFtrace
• API for analyzing OF control traffic
• Calculate:
– OF message distribution
– Flow setup time
– % of dropped LLDP messages
– … extensible
• http://www.openflow.org/wk/index.php/Liboftrace

Slicing OpenFlow
• VLAN-based vs. FlowVisor slicing
• Use cases

Switch-Based Virtualization
Exists for NEC and HP switches, but is not flexible enough for GENI.
(Figure: each research VLAN gets its own flow table and controller; production VLANs get normal L2/L3 processing.)

FlowVisor-Based Virtualization
(Figure: Heidi’s, Aaron’s, and Craig’s controllers each speak the OpenFlow protocol to the FlowVisor and policy control, which in turn speaks OpenFlow to the underlying OpenFlow switches.)

Stanford Infrastructure Uses Both
(Figure: WiMax, WiFi APs, OpenFlow switches, flows, packet processors.)
– The individual controllers and the FlowVisor are applications on commodity PCs (not shown).

Use Case: VLAN-Based Partitioning
• Basic idea: partition flows based on ports and VLAN tags
– Traffic entering the system (e.g., from end hosts) is tagged
– VLAN tags are consistent throughout the substrate
Example flowspace (all other fields wildcarded): VLAN ID = 1,2,3; VLAN ID = 4,5,6; VLAN ID = 7,8,9 – one slice per VLAN group.

FlowVisor-Based Virtualization (continued)
Separation not only by VLANs, but by any L1–L4 pattern.
(Figure: broadcast, multicast, http, and load-balancer slices sit on top of the FlowVisor and policy control, which speaks the OpenFlow protocol to the OpenFlow switches.)

Use Case: New CDN – Turbo Coral++
• Basic idea: build a CDN where you control the entire network
– All traffic to or from the Coral IP space is controlled by the experimenter
– All other traffic is controlled by default routing
– Topology is the entire network
– End hosts are automatically added (no opt-in)
Example flowspace (all other fields wildcarded): IP Src = 84.65.*; IP Dst = 84.65.*; everything else falls through to default routing.

Use Case: Aaron’s IP
• A new layer 3 protocol
• Replaces IP
• Defined by a new Ether type
Example flowspace (all other fields wildcarded): Eth type = AaIP goes to the experimenter; Eth type = !AaIP gets default processing.
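The use cases above all reduce to carving the flowspace into slices. Here is a hedged sketch of that idea in Python. It is not FlowVisor’s actual code or API; the `Slice` class, slice names, and priority scheme are illustrative. It shows how each flow can be handed to the highest-priority slice whose L1–L4 pattern covers it, with unmatched traffic falling through to production.

```python
# Illustrative flowspace slicing in the spirit of FlowVisor.
# Not FlowVisor's implementation; names and fields are examples only.

WILDCARD = "*"

class Slice:
    def __init__(self, name, pattern, priority=0):
        self.name = name          # which experimenter/controller owns it
        self.pattern = pattern    # L1-L4 header pattern, wildcards allowed
        self.priority = priority  # resolves overlapping flowspace

    def covers(self, flow):
        def field_ok(value, actual):
            if value == WILDCARD:
                return True
            if isinstance(value, str) and value.endswith("*"):
                return str(actual).startswith(value[:-1])   # e.g. "84.65.*"
            return actual == value
        return all(field_ok(v, flow.get(f)) for f, v in self.pattern.items())

def slice_for(slices, flow):
    """Pick the highest-priority slice whose flowspace contains this flow."""
    owners = [s for s in slices if s.covers(flow)]
    return max(owners, key=lambda s: s.priority).name if owners else "production"

slices = [
    Slice("vlan-experiment", {"vlan_id": 2}),                # VLAN partitioning
    Slice("turbo-coral",     {"ip_dst": "84.65.*"}),         # CDN use case
    Slice("aarons-ip",       {"eth_type": "AaIP"}),          # new EtherType
    Slice("http-slice",      {"tcp_dport": 80}, priority=10),
]

print(slice_for(slices, {"vlan_id": 2, "tcp_dport": 80}))    # -> http-slice
print(slice_for(slices, {"ip_dst": "84.65.3.7"}))            # -> turbo-coral
print(slice_for(slices, {"tcp_dport": 22}))                  # -> production
```

The priority field mirrors how overlapping opt-ins and slices are resolved in the text above (e.g., Rob & port 80 == Rob’s port 80).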
Switches

Stanford Reference Implementation
• Linux-based software switch
• Released concurrently with the specification
• Kernel- and user-space implementations
– Note: no v1.0 kernel-space implementation
• Limited by the host PC, typically 4x 1 Gb/s
• Not targeted for real-world deployments
• Useful for development and testing
• Starting point for other implementations
• Available under the OpenFlow License (BSD-style) at http://www.openflowswitch.org

Wireless Access Points
• Two flavors:
– OpenWRT-based (Busybox Linux)
• v0.8.9 only
– Vanilla software (full Linux)
• Only runs on PC Engines hardware
• Debian disk image
• Available from Stanford
• Both implementations are software only

NetFPGA
• NetFPGA-based implementation
– Requires a PC and a NetFPGA card
– Hardware accelerated
– 4 x 1 Gb/s throughput
• Maintained by Stanford University
• $500 for academics, $1000 for industry
• Available at http://www.netfpga.org

Open vSwitch
• Linux-based software switch
• Released after the specification (v1.0 support 1 week old!)
• Not just an OpenFlow switch; also supports VLAN trunks, GRE tunnels, etc.
• Kernel- and user-space implementations
• Limited by the host PC, typically 4x 1 Gb/s
• Available under the Apache License at http://www.openvswitch.org

OpenFlow Vendor Hardware
Products and prototypes across core router, enterprise campus, data center, circuit switch, and wireless categories:
• Juniper MX-series (prototype)
• Cisco Catalyst 6k (prototype)
• Cisco Catalyst 3750 (prototype)
• Arista 7100 series (Q4 2010)
• HP ProCurve 5400 and others
• Pronto
• NEC IP8800
• Ciena CoreDirector (circuit switch)
• WiMAX (NEC) (wireless)
• more to follow...

HP ProCurve 5400 Series (+ others)
• Chassis switch with up to 288 ports of 1G or 48x 10G (other interfaces available)
• Line-rate support for OpenFlow
• Deployed in 23 wiring closets at Stanford
• Limited availability for campus trials
• Contact HP for support details
(Praveen Yalagandula, Jean Tourrilhes, Sujata Banerjee, Rick McGeer, Charles Clark)

NEC IP8800
• 24x/48x 1GE + 2x 10GE
• Line-rate support for OpenFlow
• Deployed at Stanford
• Available for campus trials
• Supported as a product
• Contact NEC for details:
– Don Clark ([email protected])
– Atsushi Iwata ([email protected])
(Atsushi Iwata, Hideyuki Shimonishi, Jun Suzuki, Masanori Takashima, Nobuyuki Enomoto, Philavong Minaxay, Shuichi Saito (NEC/NICT), Tatsuya Yabe, Yoshihiko Kanaumi (NEC/NICT))

Pronto Switch
• Broadcom-based 48x 1Gb/s + 4x 10Gb/s
• Bare switch – you add the software
• Supports the Stanford Indigo and Toroki releases
• See the openflowswitch.org blog post for more details

Stanford Indigo Firmware for Pronto
• Source available under the OpenFlow License to parties that have an NDA with BRCM in place
• Targeted for research use and as a baseline for vendor implementations (but not direct deployment)
• No standard Ethernet switching – OpenFlow only!
• Hardware accelerated
• Supports v1.0
• Contact Dan Talayco ([email protected])

Toroki Firmware for Pronto
• Fastpath-based OpenFlow implementation
• Full L2/L3 management capabilities on the switch
• Hardware accelerated
• Availability TBD

Ciena CoreDirector
• Circuit switch with experimental OpenFlow support
• Prototype only
• Demonstrated at Super Computing 2009

Juniper MX Series
• Up to 24 ports of 10GE or 240 ports of 1GE
• OpenFlow added via the Junos SDK
• Hardware forwarding
• Deployed in Internet2 in NY and at Stanford
• Prototype; availability TBD
(Umesh Krishnaswamy, Michaela Mezo, Parag Bajaria, James Kelly, Bobby Vandalore)

Cisco 6500 Series
• Various configurations available
• Software forwarding only
• Limited deployment as part of demos
• Availability TBD
• Work on other Cisco models in progress
(Pere Monclus, Sailesh Kumar, Flavio Bonomi)

Stanford Reference Controller
• Comes with the reference distribution
• Monolithic C code – not designed for extensibility
• Ethernet flow switch or hub

NOX Controller
• Available at http://NOXrepo.org
• Open source (GPL)
• Modular design, programmable in C++ or Python
• High performance (usually the switches are the limit)
• Deployed as the main controller at Stanford
(Martin Casado, Scott Shenker, Teemu Koponen, Natasha Gude, Justin Pettit)

Simple Network Access Control (SNAC)
• Available at http://NOXrepo.org
• Policy + nice GUI
• Branched from NOX long ago
• Available as a binary
• Part of the Stanford deployment

Demo Previews
• FlowVisor
• Plug-n-Serve
• Aggregation
• OpenPipes
• OpenFlow Wireless
• MobileVMs
• ElasticTree

Demo Infrastructure with Slicing
(Figure: WiMax, WiFi APs, OpenFlow switches, flows, packet processors.)
– The individual controllers and the FlowVisor are applications on commodity PCs (not shown).
Be sure to check out the demos during the break!!

OpenFlow Demonstration Overview
Topic                  | Demo
Network virtualization | FlowVisor
Hardware prototyping   | OpenPipes
Load balancing         | PlugNServe
Energy savings         | ElasticTree
Mobility               | MobileVMs
Traffic engineering    | Aggregation
Wireless video         | OpenRoads

FlowVisor Creates Virtual Networks
FlowVisor slices OpenFlow networks, creating multiple isolated and programmable logical networks on the same physical topology.
(Figure: the OpenPipes demo, PlugNServe load-balancer, and OpenRoads demo controllers each speak the OpenFlow protocol to the FlowVisor and OpenPipes policy, which controls the shared OpenFlow switches.)
Each demo presented here runs in an isolated slice of Stanford’s production network.

OpenPipes
• Plumbing with OpenFlow to build hardware systems
– Partition hardware designs
– Mix resources
– Test

Plug-n-Serve: Load-Balancing Web Traffic Using OpenFlow
Goal: load-balancing requests in unstructured networks.
What we are showing:
• OpenFlow-based distributed load-balancer
• Smart load-balancing based on network and server load
• Allows incremental deployment of additional resources
OpenFlow means:
• Complete control over traffic within the network
• Visibility into network conditions
• Ability to use existing commodity hardware
This demo runs on top of the FlowVisor, sharing the same physical network with other experiments and production traffic.
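To give a flavor of how an OpenFlow load balancer of this kind can work (a generic sketch inspired by the description above, not the Plug-n-Serve implementation; `install_flow`, the service address, and the load numbers are made up), the controller sees the first packet of each new client flow to the service IP, picks a lightly loaded server, and installs an exact-match rule that rewrites the destination and forwards the flow there.

```python
# Generic sketch of OpenFlow-style load balancing (inspired by, but not,
# Plug-n-Serve). `install_flow` and the load numbers are hypothetical.

SERVICE_IP = "10.0.0.100"          # virtual IP clients connect to

servers = {                        # backend -> current load (made-up metric)
    "10.0.1.1": 0.20,
    "10.0.1.2": 0.75,
    "10.0.1.3": 0.40,
}

def install_flow(match, actions):
    # Stand-in for sending a flow_mod to the switch via the controller.
    print("flow_mod:", match, "->", actions)

def handle_new_flow(pkt):
    """Called on the first packet of a client flow to the service IP."""
    if pkt["ip_dst"] != SERVICE_IP:
        return                                   # not ours; default routing
    backend = min(servers, key=servers.get)      # least-loaded server
    servers[backend] += 0.05                     # rough load accounting
    # Exact-match rule: rewrite the destination (OpenFlow "modify fields"
    # action) and output toward the chosen server, so later packets of this
    # flow never return to the controller.
    match = {"ip_src": pkt["ip_src"], "tcp_sport": pkt["tcp_sport"],
             "ip_dst": SERVICE_IP, "tcp_dport": 80}
    install_flow(match, [("set_ip_dst", backend),
                         ("output", "port_to_" + backend)])

handle_new_flow({"ip_src": "171.64.0.9", "tcp_sport": 53211,
                 "ip_dst": SERVICE_IP, "tcp_dport": 80})
```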
Dynamic Flow Aggregation on an OpenFlow Network
Scope:
• Different networks want different flow granularity (ISP, backbone, …)
• Switch resources are limited (flow entries, memory)
• Network management is hard
• Current solutions: MPLS, IP aggregation
How OpenFlow helps:
• Dynamically define flow granularity by wildcarding arbitrary header fields
• Granularity is on the switch flow entries – no packet rewrite or encapsulation
• Create meaningful bundles and manage them using your own software (reroute, monitor)
Higher flexibility, better control, easier management, experimentation.

Intercontinental VM Migration
• Moved a VM from Stanford to Japan without changing its IP
• The VM hosted a video game server with active network connections

ElasticTree: Reducing Energy in Data Center Networks
• Shuts off links and switches to reduce data center power
• Choice of optimizers to balance power, fault tolerance, and BW
• OpenFlow provides network routes and port statistics
The demo:
• Hardware-based 16-node Fat Tree
• Your choice of traffic pattern, bandwidth, optimization strategy
• Graph shows live power and latency variation
Demo credits: Brandon Heller, Srini Seetharaman, Yiannis Yiakoumis, David Underhill