ORAN: OpenFlow Routers for Academic Networks
A. Rostami∗, T. Jungel‡, A. Koepsel‡, H. Woesner‡, A. Wolisz∗
∗ Telecommunication Networks Group (TKN), Technical University of Berlin, Germany
{rostami, wolisz}@tkn.tu-berlin.de
‡ European Center for Information and Communication Technologies (EICT GmbH), Berlin, Germany
{tobias.jungel, andreas.koepsel, hagen.woesner}@eict.de
Abstract— We present the design and prototyping of an OpenFlow-enabled Gigabit Ethernet switch based on an ATCA industrial standard platform. Our work consists of an architectural design for introducing OpenFlow switch functionality to the ATCA switching platform, the design and development of an enhanced library for the OpenFlow protocol stack, and the porting of the data-path functionality and protocol end-point to the selected platform. We also carry out several performance evaluation tests on the developed switching platform and present selected results. The results demonstrate the successful introduction of the OpenFlow protocol to the ATCA platform and contribute, among other things, to the identification of performance bottlenecks and crucial design issues in OpenFlow switches.
I. INTRODUCTION
OpenFlow [1] proposes decoupling, at least partially, the intelligence from the data-path of a switch/router and delegating it to an external controller that can easily be programmed by network operators. In this approach, different control protocols (e.g., for signaling and routing) can be implemented or modified in the external controller without modifying the switches' data-path forwarding algorithms. To facilitate this process, OpenFlow defines an open standard interface for programming the switches' data-path, which in turn enables network operators and researchers to design their own control logic and incorporate it into the network. The introduction of such a standard interface allows control protocols and switching hardware to evolve independently from each other, which can lead to a more competitive market. Another consequence of a standard open interface for controlling the switches' data-path is simpler management of a network composed of switches from different vendors. In addition, this approach facilitates the creation of new services, since the operators of an OpenFlow-enabled network are not bound to the limitations imposed by individual switch vendors.
While the idea of OpenFlow has attracted a lot of attention in the networking community, the number of switches that follow high-performance industry standards on the one hand, and are open for flexible configuration and support OpenFlow on the other hand, is rather limited. Although several networking equipment vendors have already introduced OpenFlow-enabled Ethernet switches, the available OpenFlow switches have a closed architecture, which limits the integration of new components and functionalities, if it is possible at all. In fact, OpenFlow provides only the first step towards opening up switches and routers for developing and experimenting with new control mechanisms and integrating them into the network. To achieve a fully programmable network that supports quick integration of new services, e.g., in wireless or optical domains for mobility-aware or service-aware transport networks, we need OpenFlow switches and routers that are flexible and extensible for the integration of new functionalities. These features are unfortunately missing in most of the currently available OpenFlow switches.

This work was supported in part by the Deutsche Telekom Innovation Laboratories.
In order to address this problem, we design and prototype an OpenFlow-enabled Gigabit Ethernet switch based on the Advanced Telecommunications Computing Architecture (ATCA) [2] industrial standard. The standard defines highly modular and vendor-independent components and platforms that are mostly used by telecommunications operators to create networks consisting of processing elements (computers) and data transport modules (e.g., Ethernet switches, mobile base stations and optical transmission cards). The modularity and the fact that there is a large market for ATCA components lead to low prices for the individual components. The advantages of using ATCA for an experimental network are manifold: the flexibility and composability of components, which support quick system development and extension for special purposes (e.g., wireless or optical networking); a well-established and widely-deployed industrial standard, which guarantees the availability of components over years; and finally the low cost, which is a crucial factor in developing experimental set-ups.
The design and implementation of an OpenFlow switch based on an ATCA platform introduces several challenges, as to our knowledge it is the first of its kind. The challenges are mainly related to the architectural design of the switch, i.e., mapping the functional elements of an OpenFlow switch to the ATCA platform, as well as to the development of an OpenFlow protocol library and porting it to the platform. In our work, we address these challenges and develop a prototypical composition of ATCA components that supports OpenFlow. This establishes an example for other universities and research institutions to follow the same strategy and build their own platforms, which can jointly create a larger experimental facility; hence we name our switch ORAN (OpenFlow Routers for Academic Networks). There are many examples of successful implementations of this strategy, such as XORP [3] or NetFPGA [4], where the use of standard hardware components and a publicly available software extension has opened up a whole branch of research.

Fig. 1. Functional elements of an OpenFlow switch and a controller
The rest of this paper is organized as follows. In Section II the concept of OpenFlow is reviewed and its building blocks are discussed. In Section III we introduce our selected ATCA platform and detail its capabilities and functional elements. In Section IV we present the architectural design of the ORAN switch and the mapping of the OpenFlow data-path functionalities to the selected hardware platform. In Section V we discuss the design and implementation of an OpenFlow firmware for our switch. Section VI describes selected results from evaluation tests of the ORAN switch. Finally, we conclude in Section VII by summarizing the achieved results and discussing future work.
II. OPENFLOW CONCEPT
The concept of OpenFlow is based on separating the forwarding logic and control functions of an (Ethernet) switch and implementing the latter in a remote control unit. To realize this, the proposed architecture for OpenFlow is composed of three main building blocks, as depicted in Fig. 1: the OpenFlow switch, the controller, and the OpenFlow protocol, which connects the first two blocks over a secure channel [1]. In the following, we explain each of the three blocks in more detail.
An OpenFlow switch includes a data-path element that realizes the packet forwarding engine. At the heart of the data-path there is a table that is used to decide how to forward incoming packets.¹ The table has an entry per identified flow of packets, and hence it is called the flow table.² Each flow entry in the flow table includes rules for identifying the associated frames, a set of actions to be executed on those frames, and counters for collecting statistics about the frames that have matched that entry. The data-path element is exposed to an external control unit by means of a standard application programming interface (API) (the OpenFlow protocol end-point in Fig. 1).

¹ Throughout this paper we use packet and frame interchangeably.
² Throughout this work we assume OpenFlow switch specification version 1.0.0 [5]. More recent versions of the OpenFlow switch specification allow more than one flow table per switch [6].

The rules for matching incoming frames with flow entries are based on the values of the frames' L1-L4 header fields. Fig. 2
depicts the header fields that are used for the matching process in OpenFlow version 1.0. The value of a field in a flow entry can also be set to a wildcard, which means the content of that field in an incoming frame does not influence the result of the matching process. This facilitates a flexible definition of flows, ranging from micro-flows (e.g., an individual TCP/UDP flow) to highly aggregated flows (e.g., all packets originating from a given autonomous system (AS) domain). The flow table is usually implemented in hardware using ternary content-addressable memory (TCAM) [7], which enables efficient matching of frames against flow entries at line rate. TCAMs are quite expensive compared to conventional random access memories (RAMs), which can put a limit on the size of the flow table in an OpenFlow switch. Accordingly, it is of great importance to utilize the space of a flow table in an efficient manner.
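To make the wildcard semantics concrete, the following is a minimal sketch of how a flow entry with per-field wildcard bits can be matched in software. It is our own illustration with invented names, not code from any particular OpenFlow implementation; a TCAM performs the equivalent comparison for all entries in parallel, whereas the sketch scans the table linearly.

    #include <cstdint>
    #include <vector>

    // Subset of the OpenFlow 1.0 12-tuple (cf. Fig. 2); remaining fields omitted.
    struct Match {
        uint16_t ingress_port;
        uint32_t ip_src, ip_dst;
        uint16_t tp_src, tp_dst;  // TCP/UDP source and destination ports
    };

    // Wildcard flags: a set bit means "ignore this field during matching".
    enum Wildcards : uint32_t {
        W_IN_PORT = 1u << 0, W_IP_SRC = 1u << 1, W_IP_DST = 1u << 2,
        W_TP_SRC  = 1u << 3, W_TP_DST = 1u << 4,
    };

    struct FlowEntry {
        Match    match;         // rule for identifying associated frames
        uint32_t wildcards;     // combination of Wildcards bits
        uint64_t packet_count;  // per-entry statistics counter
        int      out_port;      // simplified action: forward to this port
    };

    // True if the frame matches the entry, skipping wildcarded fields.
    static bool matches(const FlowEntry& e, const Match& f) {
        if (!(e.wildcards & W_IN_PORT) && e.match.ingress_port != f.ingress_port) return false;
        if (!(e.wildcards & W_IP_SRC)  && e.match.ip_src != f.ip_src) return false;
        if (!(e.wildcards & W_IP_DST)  && e.match.ip_dst != f.ip_dst) return false;
        if (!(e.wildcards & W_TP_SRC)  && e.match.tp_src != f.tp_src) return false;
        if (!(e.wildcards & W_TP_DST)  && e.match.tp_dst != f.tp_dst) return false;
        return true;
    }

    // Linear scan for illustration only; returns nullptr on a table miss,
    // in which case the frame is sent to the controller (see below).
    FlowEntry* lookup(std::vector<FlowEntry>& table, const Match& frame) {
        for (auto& e : table)
            if (matches(e, frame)) { ++e.packet_count; return &e; }
        return nullptr;
    }

A flow entry with all wildcard bits set except W_IP_DST would, for instance, aggregate all traffic towards one destination into a single flow, illustrating the range from micro-flows to highly aggregated flows mentioned above.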
Once a frame enters the switch and hits a flow entry in the flow table, the set of actions associated with that entry is executed on the frame. The actions could be, for example, forwarding the frame to one or all switch ports. If no match can be found for an incoming Ethernet frame in the flow table, the frame is encapsulated in an OpenFlow message and forwarded to the remote controller to which the switch is connected. The controller then decides what to do with the frame and, if necessary, can cache its decision in the flow table in the form of a new entry. In that case, the switch can directly handle subsequent frames of the same flow without consulting the controller. To realize this, an OpenFlow controller utilizes the API provided by the OpenFlow protocol to implement a control layer for a network of OpenFlow switches. A single controller can manage several OpenFlow switches if it has sufficient resources (e.g., processing power). This is an important feature, which facilitates the realization of a centralized control plane for a network of OpenFlow switches.
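The resulting interplay between the fast path and the controller can be sketched as follows. This is a schematic illustration with simplified, invented types (a flow reduced to a single key, a trivial placeholder policy), not the actual OpenFlow message formats or any concrete controller:

    #include <cstdint>
    #include <map>

    // Schematic stand-ins for OpenFlow Packet-In and Flow-Mod messages.
    struct PacketIn { uint64_t flow_key; };          // summarizes the 12-tuple
    struct FlowMod  { uint64_t flow_key; int out_port; };

    class Controller {
    public:
        // Invoked on a table miss; returns the entry the switch should cache.
        FlowMod on_packet_in(const PacketIn& pi) {
            int port = decide(pi.flow_key);          // the control logic lives here
            return FlowMod{pi.flow_key, port};
        }
    private:
        int decide(uint64_t key) { return static_cast<int>(key % 24) + 1; }  // placeholder policy
    };

    class DataPath {
    public:
        explicit DataPath(Controller& c) : ctrl_(c) {}
        int forward(uint64_t flow_key) {
            auto it = table_.find(flow_key);
            if (it != table_.end()) return it->second;    // fast path: no controller involved
            FlowMod fm = ctrl_.on_packet_in({flow_key});  // slow path: Packet-In
            table_[fm.flow_key] = fm.out_port;            // cache the decision as a new entry
            return fm.out_port;                           // subsequent frames hit the fast path
        }
    private:
        Controller&             ctrl_;
        std::map<uint64_t, int> table_;  // abstraction of the flow table
    };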
The standard open API provided by OpenFlow further offers the flexibility that any network provider can develop its own controller, which can also be used to control OpenFlow switches developed by different vendors. Accordingly, there is no standard specification for developing an OpenFlow controller, which has led to the development of several controllers with different features. Examples of OpenFlow controllers are NOX [8] and SNAC [9], where the former is available as an open-source implementation. In addition, by developing appropriate OpenFlow controllers one can realize network virtualization, which can improve network utilization and offers the possibility of running experimental activities in production networks. An example of an OpenFlow controller that enables virtualization of Ethernet switches is FlowVisor [10]. FlowVisor acts as a proxy between a network of physical OpenFlow switches and several OpenFlow controllers, where each controller controls a slice of the network. The responsibility of FlowVisor is to harmonize the usage of resources at the physical switches by the controllers in a way that no conflict occurs. In this approach, the physical switches do not need to be aware of the virtualization.
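As a rough illustration of the slicing idea, and our own simplification rather than FlowVisor's actual design, one can picture the proxy as admitting a controller's flow modification only if it falls within that controller's assigned part of the name-space; here a flowspace is reduced to a VLAN-id range purely for brevity:

    #include <cstdint>
    #include <map>

    // A slice owns a disjoint region of the flow name-space; reduced here
    // to a VLAN-id range for illustration.
    struct Slice { uint16_t vlan_lo, vlan_hi; };

    class SlicingProxy {
    public:
        void add_slice(int controller_id, Slice s) { slices_[controller_id] = s; }

        // Pass a flow modification on to the switch only if the requested
        // VLAN id lies inside the requesting controller's slice.
        bool admit_flow_mod(int controller_id, uint16_t vlan_id) const {
            auto it = slices_.find(controller_id);
            return it != slices_.end()
                && vlan_id >= it->second.vlan_lo
                && vlan_id <= it->second.vlan_hi;
        }
    private:
        std::map<int, Slice> slices_;  // one disjoint slice per controller
    };

Because every controller request passes through such a check, no two slices can install conflicting entries, and the physical switches remain unaware of the slicing.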
Fig. 2. Header fields used for matching frames to flow entries: ingress port, Ethernet source, Ethernet destination, Ethernet type, VLAN id, VLAN priority, IP source, IP destination, IP protocol, IP ToS, TCP/UDP source port, TCP/UDP destination port
III. ATCA PLATFORM OF THE ORAN SWITCH
ATCA is a set of standard specifications for developing next-generation high-performance, carrier-grade communication equipment. The set of specifications has been developed in cooperation among a large number of telecommunication equipment vendors and providers under the coordination of the PCI Industrial Computers Manufacturers Group (PICMG) [11]. The specifications, denoted as PICMG 3.X, are aimed at addressing the requirement of telecommunication equipment providers for a standardized and carrier-grade common platform. This can help network equipment providers to offer solutions within a short time by integrating components from different vendors. That is, ATCA contributes to improved simplicity, interoperability and time to market. These features have contributed to the popularity of ATCA platforms among equipment providers over the last several years. It has been estimated that the ATCA market would grow to 6.7 billion USD by 2012 [12].
The hardware equipment used to develop the ORAN switch is an ATCA integrated platform with 10 GbE fabric and uplink modules from Kontron (Kontron OM5080 [13]). The platform, as shown in Fig. 3, is a 2U 19" platform with slots for two carrier boards. Each carrier board (Kontron AT8404) includes a Gigabit Ethernet switch based on a Broadcom chipset (BCM 56502) [14] with two 10GbE and twenty-four GbE ports, out of which three ports (two GbE and one 10GbE) appear on the front side of the platform as uplinks. A carrier board can host up to four mid-size advanced mezzanine cards (AMCs). The AMC slots can be used to extend the carrier boards, for example with I/O functionalities (e.g., GbE ports) or processing modules (e.g., network processors). Each AMC slot is connected to the Gigabit Ethernet switch on the carrier board through five GbE ports. Additionally, the Gigabit Ethernet switches of the two carrier boards are interconnected through the backplane with a 10GbE link. The switching chipset includes internal packet memories and a TCAM-based flow table that can accommodate up to 2048 flow entries.
Fig. 4 depicts a high-level architecture of the AT8404 carrier board. The Ethernet switch on each board is managed by an embedded PC (PowerPC 405GPr) running at 400 MHz with 128 MB flash memory, 256 MB SDRAM and 16 kB D-cache. The PowerPC manages the Ethernet switch over a 32-bit/33-MHz PCI bus. The switch controller (PowerPC processor) is accessible via an RS232 port on the front side of the platform. The carrier boards further support the intelligent platform management interface (IPMI) and the simple network management protocol (SNMP) for system management [13].
The dual interconnected GbE switching systems and the high total capacity are important features of the selected ATCA platform, which facilitate the development of large testbeds for OpenFlow. In addition, the AMC slots allow for an extension of the processing capacity for different use-cases (e.g., using a network processor for further protocol de/encapsulation).
IV. ARCHITECTURAL DESIGN OF THE ORAN SWITCH
We have designed and prototyped an architecture for supporting the OpenFlow protocol stack on the AT8404 carrier board. The work is composed of two main parts, namely designing an architecture for introducing OpenFlow switch functionality into the ATCA carrier board, as well as the design and implementation of a new library for the OpenFlow protocol and porting the developed protocol stack to the carrier board. In this section we focus on the architectural design of our switch.
The first step in developing an OpenFlow hardware switch is to map the OpenFlow functional architecture to the selected hardware architecture. This is an important step, since a proper architectural design has a direct impact on the performance of the switch. A switch is composed of a data-path (also called the fast path) for forwarding data packets and a control path (also called the slow path) for exchanging routing, signaling and management information. As discussed in Section II, the idea of OpenFlow is to push the intelligence out of the switch as much as possible, to keep only the data-path and a minimal control/management interface inside the switch, and to expose this interface to a remote controller. Accordingly, in this work we focus on the design and implementation of the data-path and the OpenFlow protocol end-point inside the switch. A natural choice for the protocol end-point is to implement it in the embedded system of the platform. The mapping of the other functional elements of the data-path to the platform is a crucial step in our switch design process.
According to the OpenFlow specification [5], once an Ethernet frame arrives at an input switch port and successfully passes the lower-layer processing, it is delivered to the OpenFlow forwarding engine. The forwarding engine consists of a header parser, a flow matching processor (including the flow table), an action execution unit (e.g., forwarding, dropping) as well as an OpenFlow protocol encapsulation unit for delivering unknown frames to a remote OpenFlow controller. In general, there are two possibilities for implementing a functional element of the data-path on the platform: each functional element can be realized either in hardware (the switch chipset) or in software running on the embedded system. Obviously, the hardware realization of a functional element leads to a higher performance in terms of packet forwarding rate. Nonetheless, this decision depends to a large extent on the capabilities and features of the selected platform. The features to be considered include the processing power of the embedded system, the available bandwidth between the embedded system and the switch chipset (the PCI interface) and, above all, the switch chipset itself and the functions that it supports.
Fig. 3. Kontron OM5080 ATCA integrated platform

If any functional element of the data-path that deals with every incoming frame cannot be directly supported in the chipset, the forwarding rate of the switch is limited by the capabilities of the embedded system. The limiting factors here include the communication path between the switch chipset and the processor (i.e., the PCI bus) as well as the processing and buffering resources available at the embedded system. On the other hand, today's Ethernet switching platforms are designed in a way that almost all packet processing functions are supported in the switch chipset using an ASIC (application-specific integrated circuit), and the embedded system is typically responsible only for table updates and other simple configuration functionalities. That is, the resources available in the embedded system of a switch are usually limited. For example, as detailed in the last section, the embedded system in our ATCA platform has limited processing power and buffering capacity. It also has a connection with limited bandwidth to the switching chipset. This leads us to the conclusion that the data-path elements should be pushed into the hardware whenever the required functionalities can be supported by the switch chipset. We should also note that the performance of the switch in terms of the flow-entry handling rate is in any case dependent on the embedded processor, since the protocol end-point is implemented there. Therefore, the embedded processor can become the bottleneck of an OpenFlow switch if it is not powerful enough, regardless of how capable the switching chipset is.
The chipset available on the selected ATCA platform directly supports header parsing, packet classification as well as frame forwarding based on a combination of header field values. As a result, we map all these functionalities to the Broadcom Ethernet chipset of the carrier board in order to maximize the forwarding capacity of the switch. Nevertheless, the encapsulation of frames for which no match is found in the flow table cannot be directly supported in the hardware, and therefore it is delegated to the embedded processor. In Section VI we will discuss the performance implications of pushing the OpenFlow encapsulation into the embedded system. Fig. 5 shows the logical architecture and functional elements of the data-path in an OpenFlow switch and their mapping to our hardware platform.
V. IMPLEMENTATION OF OPENFLOW FIRMWARE FOR ATCA CARRIER BOARD
Fig. 4. Schematic of the internal architecture of the Kontron OM5080 ATCA platform [13]

Various implementations of OpenFlow have recently been developed by academic and commercial institutions. Porting one of the available libraries seemed to be a natural starting point for introducing OpenFlow to the ORAN switch. However, a closer look at the current OpenFlow specification³ made it clear that OpenFlow is still maturing and that a number of desirable features, although already proposed, are still missing from the specification. Below, we review some of these desirable features in detail.

³ Although we assume OpenFlow 1.0 in this project, things have not changed considerably with OpenFlow 1.1.
First, let us consider the FlowVisor approach of defining a mechanism for virtualizing and slicing the overall name-space under the control of a data-path element [10]. While OpenFlow restricts itself to a single controller being responsible for a data-path element, FlowVisor allows different controllers to control disjoint parts of the overall flow name-space in parallel. However, configuring name-spaces and name-space/controller mappings is done in FlowVisor via an external management interface, rather than by allowing the controller and the data-path to negotiate these name-space parameters during the establishment of the OpenFlow control connection.

Proposition 1: An OpenFlow implementation should integrate OpenFlow and FlowVisor functionalities, thereby extending the basic OpenFlow API with means to enable the controller and the data-path to negotiate the parts of a name-space under control of a given control connection.
The OpenFlow API mutually binds the user and control planes, and OpenFlow's messages define the set of information exposed by the user plane towards the controlling entity. FlowVisor immediately reveals another interesting property of OpenFlow: its self-similarity. Acting as a proxy entity, FlowVisor manipulates OpenFlow messages that travel upstream from the user to the control plane and vice versa. This is a quite useful feature, allowing network operators to reveal different levels of detail of their network towards a customer, and to change these exposed network properties dynamically over time if needed.

Proposition 2: An OpenFlow implementation should enable native stacking of OpenFlow-compliant layers on each other, not only on a single device, but also distributed among different controlling entities.
Fig. 5. Mapping OpenFlow functional elements to the ATCA carrier board (OpenFlow data-path functional elements on the embedded system; hardware data path on the BCM 56502 ASIC)

The OpenFlow framework makes specific assumptions about the information exposed by the user plane and its functional elements: consider OpenFlow's interface definition (i.e., an Ethernet-based port with static behavior in terms of bandwidth). In fixed network deployments these assumptions will most likely prove valid. However, other link types exist,
e.g., dedicated transport channels with associated physical resources as in GMPLS-controlled optical paths, or links in the wireless domain, where the capacity of the channel underlying the logical link may vary considerably over time. Such properties make interfaces much more dynamic, and exposing these dynamic characteristics is crucial for advanced control entities.

Proposition 3: An OpenFlow implementation should expose more details about the functional elements in the user plane, e.g., interface details, where applicable. The level of detail sent to the control plane should be negotiable, to relieve implementors from dealing with unnecessary details.
Processing in OpenFlow is typically limited to a restricted number of protocols, usually bounded by the capabilities of the underlying switching chipset. The ATCA platform facilitates the integration of additional processing entities, e.g., network processor units or FPGA-based designs. Such entities differ from fixed switching chipsets in that they support dynamic deployment of processing logic. OpenFlow's current specification assumes a rather static set of available processing capabilities (i.e., actions) and lacks means to dynamically manipulate the processing logic on a data-path element. Various options for integrating such functionality compete with each other, e.g., virtual ports hosting processing entities, or generic actions that refer to independent processing entities. The impact of advanced processing capabilities on other OpenFlow functional elements (such as the packet classifier) must be considered carefully.

Proposition 4: An OpenFlow implementation should allow the integration of more advanced processing capabilities based on network processors or FPGAs.
Starting from these observations and desired features, we decided to create a new OpenFlow implementation from scratch. Based on C++ and the standard template library (STL), the implementation comprises a library based on the OpenFlow 1.0 specification and provides a hardware abstraction layer in order to facilitate porting the code to various hardware designs. Based on this core library, a data-path element was developed for a Linux PC environment and for the AT8404 carrier board, which serves as the testing environment.

As already outlined, the AT8404 carrier board hosts a PowerPC-based embedded system for controlling the Broadcom BCM 56502 switching chipset. The data-path implementation has been ported to the PowerPC environment, and the hardware abstraction layer has been extended with the functionality for controlling the Broadcom chipset API.
Some of the key features of the developed library are as follows.
• A base class provides full OpenFlow 1.0 compatibility and supports an arbitrary number (limited only by memory and processing constraints) of lower-layer data-path elements and higher-layer controlling entities.
• An RPC stub class transforms OpenFlow messages transparently between TCP and C++ objects, with full stacking support for an arbitrary number of instances.
• The hardware abstraction layer is integrated with the Broadcom chipset API.
• Multiple controlling entities per data-path element are supported (FlowVisor-like) using vendor extensions.
We are planning to make our OpenFlow library publicly available in the near future.
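To give a flavor of how a hardware abstraction layer of this kind decouples the core library from the chipset, consider the following sketch. The class and method names are invented for this illustration and do not reflect the library's actual interfaces; calls into the proprietary Broadcom SDK are only indicated by comments:

    #include <cstdint>
    #include <vector>

    // Placeholder flow-entry type; the real library models OpenFlow 1.0 messages.
    struct FlowEntry { uint64_t match_key; uint32_t actions; };

    // Everything the core OpenFlow library needs from a data-path backend,
    // with nothing chipset-specific in the interface.
    class HardwareAbstraction {
    public:
        virtual ~HardwareAbstraction() = default;
        virtual bool insert_flow(const FlowEntry& e) = 0;  // program the flow table
        virtual bool remove_flow(uint64_t match_key) = 0;
        virtual uint64_t stats(uint64_t match_key) = 0;    // read per-entry counters
    };

    // Backend for the BCM 56502: would translate abstract calls into vendor
    // SDK calls, e.g., writing TCAM rules (not shown; the SDK is proprietary).
    class BroadcomBackend : public HardwareAbstraction {
    public:
        bool insert_flow(const FlowEntry& e) override {
            shadow_.push_back(e);           // keep a software shadow of the table
            return shadow_.size() <= 2048;  // TCAM capacity of this platform
        }
        bool remove_flow(uint64_t) override { return true; }  // SDK call omitted
        uint64_t stats(uint64_t) override { return 0; }       // SDK call omitted
    private:
        std::vector<FlowEntry> shadow_;
    };

A software-only backend for the Linux PC data-path would implement the same interface, which is what makes the core library portable across targets.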
VI. PERFORMANCE EVALUATION OF THE ORAN SWITCH
In this section we present selected results from the performance evaluation of the ORAN switch. One of the most important metrics in evaluating any switch is the maximum achievable throughput, i.e., the rate at which the switch can forward arriving frames without loss. In general, the throughput is evaluated for different packet sizes since, for example, small packets might saturate the maximum access rate of the buffers in the switch. In addition, the throughput of an OpenFlow switch depends on the flow-table matching results, because if an incoming frame does not match any existing flow entry, it must wait for a remote controller decision and cannot be forwarded at the line rate. To account for both factors, i.e., frame sizes and matching results, we have designed and carried out the following sets of experiments.
In the first set of experiments we measure the throughput of the switch with a known input flow, i.e., we populate the flow table with flow entries and inject frames that match an installed flow entry. The action part of the flow entry is set to "forward associated frames to a given port of the switch". The experiments are run for an exhaustive combination of four different frame sizes (64, 500, 1000 and 1518 bytes) and four different numbers of flow entries pre-installed in the flow table (1, 100, 1000, 2048). In the case of more than one flow entry, only one entry is relevant for the injected traffic and the rest are randomly generated just to fill the table to the desired level. Our measurements demonstrate that the ORAN switch can successfully forward incoming frames at the line rate. We have measured successful forwarding of frames at the line rate for up to 25 Gb/s of input data rate. We should note that this is the maximum data rate at which we can inject frames into the switch, due to limitations of our equipment. To further account for the impact of packet modifications, as introduced by the OpenFlow specification, on the throughput, we have repeated the experiments after updating the action part of the known flow to "add a VLAN tag and forward associated frames to a given port of the switch". Also in this case, we have observed line-rate forwarding and modification of frames for up to 25 Gb/s of input data rate. This comes as no surprise, since in our switch the forwarding of frames belonging to configured flows and per-packet modifications such as VLAN tagging are carried out completely in the hardware.
In the second set of experiments our aim is to evaluate the switching performance of the ORAN with input traffic that does not hit existing entries in the flow table. For this kind of traffic, input frames are first copied over the PCI bus to the memory space of the embedded system; then the embedded processor encapsulates (part of) the frame into an OpenFlow message and sends it to the controller. Upon receiving a positive answer from the controller, the packet has to be copied again over the PCI bus to the Ethernet switch so that it can be forwarded to the right output port. That is, as already discussed in Section IV, the PCI bus and the embedded system are the major limiting factors of the switch throughput in this case. To quantify the impact of these limitations on the switch performance, we measure the throughput of the PCI bus as well as the packet throughput of the switch for input traffic that is unknown to the flow table.
For the PCI bus test we generate and inject into the switch frames, again with the different sizes mentioned above, for which there is no matching flow entry. Then, on the embedded PC, we measure the rate of successfully delivered frames. That is, no extra processing is performed on the frames and they are simply discarded by the embedded PC. Fig. 6 depicts the results of our experiments on the PCI throughput. We observe that the maximum throughput is around 1500 kpps for the 64-byte frames, and that it decreases with increasing frame size down to around 20 kpps for the largest frames (1518 bytes). Note that these numbers should be regarded as the maximum forwarding rates of the switch for unknown frames (with no matching flow entries).
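As a rough plausibility check, and a back-of-the-envelope calculation rather than part of the measurements, these numbers can be related to the capacities involved:

    33.3 MHz × 4 B per cycle  ≈ 133 MB/s   (theoretical peak of a 32-bit/33-MHz PCI bus)
    1.5 × 10^6 pkt/s × 64 B   ≈  96 MB/s   (measured rate for 64-byte frames)

The measured small-frame rate is thus already in the vicinity of the raw bus capacity, while the much lower large-frame rate suggests that factors other than raw bus bandwidth, such as per-packet handling costs on the embedded PC, dominate in that regime.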
Next we measure the switch throughput for the same type of input traffic, i.e., traffic with no matching flow entries. For this purpose, we use a combination of OpenFlow Packet-In and Packet-Out messages [5] as follows. Once a frame arrives at the switch, it is encapsulated by the embedded PC and forwarded to the controller (Packet-In), which is attached to the embedded PC via a Fast Ethernet link. The controller is configured to send a Packet-Out message instructing the switch to send the incoming packet out on a given output port. Fig. 7 presents the results of the test for different values of the input packet size. For the sake of comparison, we also conduct the same measurements on two commercial OpenFlow-capable Gigabit Ethernet switches. The first commercial switch, denoted by CS1 in Fig. 7, has a PowerPC for switch management running at 533 MHz with 512 MB RAM. The second switch, denoted by CS2, has an embedded system with a PowerPC running at 666 MHz and 256 MB RAM. From the figure we observe that our implementation outperforms both commercial switches in the Packet-In/Packet-Out throughput test, although both commercial switches have relatively more powerful embedded systems. Nevertheless, we see that in all considered cases the throughput of the switch for unknown flows is limited by the embedded switch management system to a few hundred packets per second. Also, note that for the ORAN and CS1 the throughput decreases with the packet size. In our case this is due to the requirement of copying every packet from the kernel space to the user space and back.
The main point to take away from the above experiments is that although the switch can successfully forward at the line rate when incoming frames find matching entries in the flow table, for input frames with no pre-configured flow entry the embedded system of the switch becomes a major bottleneck.
Fig. 6. Throughput of the PCI bus connecting the embedded system to the switch chipset

Fig. 7. Throughput of the ORAN switch for unknown flows, in comparison with two commercial OpenFlow switches

VII. CONCLUSION

We presented the design and implementation of the OpenFlow switching protocol on an ATCA industrial platform. The programmability of the OpenFlow protocol, combined with the modularity and composability of the ATCA as present in the ORAN, makes it an ideal platform for developing and testing various networking ideas. As an example, different OpenFlow protocol extensions can easily be tested on the ORAN.

We also presented performance results for the ORAN switch. The results demonstrate that the switch can successfully forward packets at the line rate on the condition that the incoming traffic hits entries already installed in the flow table. Nevertheless, if the input traffic is unknown to the flow table and the switch needs to consult the controller for forwarding packets, the switch forwarding rate appears to be inadequate for many applications. This is due to the resource limitations of the embedded system and its PCI connection to the switch chipset, which are unfortunately quite typical in today's switching platforms. This, together with the fact that consulting an external controller for packet forwarding is one of the most important add-ons of the OpenFlow concept, leads us to the following conclusion.

In an OpenFlow network under a realistic mix of traffic flows, the required packet processing capacity seems to be beyond what can be offered by the embedded system of the switch. This is particularly an issue if we take into account that some control protocols require even more packets to be sent to the controller. There are several approaches that can be used to address this issue. One possibility is to minimize the need for communication between the switch and the controller, for example through proactive flow installation. This can be further complemented by developing a new generation of switching platforms with more powerful embedded systems and more bandwidth between the embedded system and the switch chipset. Nevertheless, the embedded system in a typical switching platform is not envisaged to be used as a packet processing module. Therefore, we believe that the focus should be on offloading the required packet processing from the embedded system onto another processing element, if it cannot be directly realized in the switch chipset. This is the topic of our future work, where we are going to make use of the flexibility of the ATCA platform and push the OpenFlow encapsulation process into one of the available processing modules (AMC cards).
REFERENCES
[1] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson,
J. Rexford, S. Shenker, and J. Turner, “Openflow: enabling innovation in
campus networks,” SIGCOMM Comput. Commun. Rev., vol. 38, pp. 69–
74, March 2008.
[2] PICMG 3.0 Revision 3.0, AdvancedTCA Base Specification, 2008.
[3] M. Handley, O. Hodson, and E. Kohler, “Xorp: an open platform for
network research,” ACM SIGCOMM Computer Communication Review,
vol. 33, no. 1, pp. 53–57, 2003.
[4] J. Lockwood, N. McKeown, G. Watson, G. Gibb, P. Hartke, J. Naous,
R. Raghuraman, and J. Luo, “Netfpga–an open platform for gigabit-rate
network switching and routing,” in Proc. IEEE International Conference
on Microelectronic Systems Education, pp. 160–161, IEEE, 2007.
[5] OpenFlow Switch Consortium, "Openflow switch specification, version 1.0.0," [Online]. Available: http://www.openflowswitch.org/documents/openflow-spec-v1.0.0.pdf.
[6] OpenFlow Switch Consortium, "Openflow switch specification, version 1.1.0," [Online]. Available: http://www.openflow.org/documents/openflow-spec-v1.1.0.pdf.
[7] K. Pagiamtzis and A. Sheikholeslami, “Content-addressable memory
(cam) circuits and architectures: A tutorial and survey,” IEEE Journal
of Solid-State Circuits, vol. 41, no. 3, pp. 712–727, 2006.
[8] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and
S. Shenker, “Nox: towards an operating system for networks,” ACM
SIGCOMM Computer Communication Review, vol. 38, no. 3, pp. 105–
110, 2008.
[9] “Simple network access control (snac),” [Online]. Available:
http://www.openflow.org/wp/snac/.
[10] R. Sherwood, G. Gibb, K. Yap, G. Appenzeller, M. Casado, N. McKeown, and G. Parulkar, “Flowvisor: A network virtualization layer,” tech.
rep., Openflow-tr-2009-1, Stanford University, 2009.
[11] PCI Industrial Computers Manufacturers Group (PICMG), [Online]. Available: http://www.picmg.org.
[12] S. Stanley, “Atca, amc and microtca market: update and forecast,” Heavy
Reading, vol. 7, no. 9, 2009.
[13] Kontron, “Om5080: Technical reference manual,” [Online]. Available:
http://emea.kontron.com/.
[14] Broadcom, “Bcm 56502: product brief,” [Online]. Available:
http://www.broadcom.com/collateral/pb/56502-PB03-R.pdf.