An HP Company
Introduction to ContexNet: Subscriber-Aware SDN Fabric Enabling NFV
Contents

1 Introduction
   Background
2 Key Technologies in ContexNet
   Network Overlay
   Scalable Mapping Service
   Subscriber Aware Service Function Chaining
3 ContexNet Architecture
   Components in ContexNet
   SDN Controller
   Mapping Service
   Switch Fabric
   Brokers
4 Conclusion

www.ConteXtream.com Page 2 ContexNet: Subscriber Aware SDN Fabric enabling NFV
1 Introduction

Background

In September 2012, a group of twelve leading service providers publicly embraced the concept of network infrastructure virtualization. The industry moved rapidly toward the Network Function Virtualization (NFV) concept: by January 2013, more than 150 companies and 28 service providers had aligned behind and embraced it. Key benefits of NFV identified by the group of twelve operators included:
• Reduction in capital and operational expenditures through hardware consolidation and the resulting economies of scale
• Evolution to programmable, smarter networks, reducing the human effort involved in customer management (on-boarding), service integration, and maintenance
• Increased innovation through reduced time to market: eliminating the need for custom hardware shortens development, and a greater focus on software increases feature velocity
• Elasticity and greater utilization of resources
In the NFV ecosystem, ConteXtream delivers the networking infrastructure required to make Network Function Virtualization adoption and integration successful. ContexNet, ConteXtream's carrier SDN solution, leverages the principles of software defined networking to deliver virtual connectivity, dynamically mapping flows to functions without requiring pre-provisioning. ContexNet responds in real time to carrier conditions such as user demand, compute or network failures, and congestion.
ContexNet dynamically links network functions together in a subscriber aware manner. Subscriber awareness is functionality built into the ContexNet solution that allows it to associate traffic with a customer while making the appropriate traffic related networking decisions. Subscriber awareness in the network layer interconnecting virtual network elements benefits operators by:
• Providing unprecedented visibility into the network across elements interconnected using the SDN fabric
• Enabling delivery of personalized services by leveraging functions in the network
• Increasing security by identifying and isolating flows as needed
A major shortcoming of the IP networks available to operators today is that they are relatively static and typically optimized for traffic forwarding. These networks have no capability to forward traffic between elements (often referred to as chaining/forwarding) on a customer-by-customer (i.e., on-demand) basis. ContexNet associates traffic with a customer while making chaining/steering decisions. This advanced subscriber steering capability enables ContexNet to link, chain, and load balance virtual network functions at a granular level, maximizing concurrency and leveraging elastic allocation of virtual network function instances while providing per-customer visibility into traffic flows as they cross functions. With per-subscriber service chains, traffic can be dynamically steered to a function only when a customer needs that function's capabilities.
2 Key Technologies in ContexNet

This section discusses three key technologies implemented in ContexNet. Together they make it a unique solution for operators seeking to realize the full potential of NFV. The benefits promised by NFV, such as elasticity, higher resource utilization, and programmable networks, can be delivered in a carrier network environment only by a solution with capabilities like ContexNet's.
Network Overlay

Network overlay capability is crucial for carrier grade SDN solutions and helps overcome the scalability shortcomings of the current generation of data-center-focused SDN solutions. Those solutions were focused solely on the separation of the control and data planes and on the standardization of the interface between the two (Figure 1).
OpenFlow helps in separating the control and data forwarding planes, but can be limited in terms of scalability due to its hop-by-hop nature. IETF's NVO3 has identified mechanisms to make the solution scalable; ContexNet leverages these to provide carrier grade scalability.

Figure 1 - OpenFlow and SDN
SDN controllers in a flat, pure OpenFlow environment are (logically) centralized and can become a bottleneck, especially when operating at carrier scale, where there are large numbers of users, functions are widely distributed, and reliability requirements are extremely stringent. Scaling a centralized SDN solution is particularly difficult when a lookup is needed to program every element at every hop (when a flow is first detected) in a flat OpenFlow network. Supporting activities like VM relocation between subnets can also be very difficult to implement.
ContexNet implements Network Overlay functionality (as being defined in IETF NVO3) to overcome these challenges. The Network Virtualization Overlay (NVO3) WG was formed in the IETF to make data centers more scalable and to provide better support for multi-tenancy. By incorporating NVO3 functions, ContexNet enables:
(1) Traffic isolation
(2) Address independence across tenants
(3) Support for placement and migration of VMs anywhere (i.e., any datacenter)
Virtual networks become more scalable with overlays. The concept behind an overlay is simple: packets enter and exit the NVO3 domain through a gateway. When a packet enters the domain, the NVO3 gateway (the first hop in the domain) maps it to a tenant, encapsulates it, and sends it to an endpoint (edge) NVO3-capable entity, which decapsulates the packet and presents it to the tenant VM. The underlay elements are transparent to the endpoint, so the controller does not have to manage them. However, mapping flows to functions in a dynamic environment where VMs can move is a non-trivial exercise. To support moving VMs, the NVO3 group has defined the concepts of the Network Virtualization Edge (NVE) and the Network Virtualization Authority (NVA). The NVA is the entity that assists in the mapping process (and needs to interface with orchestration systems) to keep track of VM/tenant locations. The NVE can also be attached to external networks via an NVO3 gateway (NVE-GW), as shown in Figure 2.
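The gateway behavior described above can be sketched in a few lines. This is an illustrative sketch, not ConteXtream code: an NVO3-style gateway classifies an incoming packet to a tenant, wraps it with a virtual network identifier (VNI), and forwards it toward the NVE that serves the destination VM, which decapsulates it. The names `TENANT_BY_SUBNET` and `NVE_BY_VM` are hypothetical stand-ins for the NVA's mapping state.

```python
from dataclasses import dataclass

@dataclass
class OverlayPacket:
    vni: int          # tenant/virtual-network identifier carried in the encap header
    nve_addr: str     # underlay address of the egress NVE (the "outer" destination)
    inner: bytes      # original tenant packet, untouched

TENANT_BY_SUBNET = {"10.1.": 100, "10.2.": 200}   # tenant classification at the gateway
NVE_BY_VM = {"10.1.0.5": "192.0.2.10",            # NVA-style mapping: tenant VM -> NVE location
             "10.2.0.7": "192.0.2.11"}

def gw_encap(dst_vm_ip: str, payload: bytes) -> OverlayPacket:
    """First hop into the NVO3 domain: classify to a tenant, then encapsulate."""
    vni = next(v for prefix, v in TENANT_BY_SUBNET.items() if dst_vm_ip.startswith(prefix))
    return OverlayPacket(vni=vni, nve_addr=NVE_BY_VM[dst_vm_ip], inner=payload)

def nve_decap(pkt: OverlayPacket):
    """Egress NVE: the VNI in the header tells it which tenant gets the inner packet."""
    return pkt.vni, pkt.inner

pkt = gw_encap("10.1.0.5", b"tenant-data")
assert pkt.nve_addr == "192.0.2.10"
assert nve_decap(pkt) == (100, b"tenant-data")
```

Note that the underlay only ever sees `nve_addr`; if the VM moves, only the `NVE_BY_VM` mapping changes, which is exactly why the mapping service (NVA) becomes the critical component.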
Two mechanisms are possible for the interface between NVE and NVA: BGP and LISP. ConteXtream believes a pull/push mechanism is more scalable, and so it uses LISP (described later) to provide the overlay network capability. Some SDN implementations with a network overlay rely on BGP. However, as tenant networks are typically flat, the aggregation benefits of BGP are limited, and BGP ends up being used in a /32 environment. This makes it an unnecessarily complicated protocol for the NVE function. ContexNet uses a map-and-encap scheme based on a mapping service from which information can be pushed or pulled. Controllers responsible for programming elements in the network can thus scale and converge rapidly with network changes. It should also be pointed out that the underlying tunnel mechanism needs to present sufficient information to the edge node so that, when decapsulating a packet, it can be presented to the correct tenant. Several overlay tunneling protocols can be supported by ContexNet, including VxLAN, NVGRE, and LISP.
ContexNet uses the NVO3 "map & encap" scheme for overlay networks. This is implemented via a highly scalable distributed mapping service. The interface between the mapping service and the NVE (the SDN controller controlling forwarding) is based on LISP. The mapping service is agnostic to the type of tunneling protocol used.

Figure 2 - NVO3 Architecture
The IETF has identified several options for how the NVE is instantiated and connected to the tenant VMs; it can be within the compute node, on an appliance, or connected via VLANs. By placing ContexNet nodes functioning as NVEs at the appropriate points, it is also possible to make physical entities part of an NVO3 virtual network.
Scalable Mapping Service

The mapping service is a key technology within ContexNet. It is based on the Locator/ID Separation Protocol (LISP) being developed by the IETF. ContexNet's LISP-SDN and LISP-NFV based mapping service provides scale, is extensible, and makes the solution open to third-party developers. With the distributed mapping service, all controller instances get a global view (across data centers) of available resources. VNFs (VMs) can be added to or removed from one data center with ease, and any connected controller gains knowledge of this through the mapping service. The mapping service also implements mechanisms to monitor availability, reachability, and similar properties. These mechanisms are incorporated when making forwarding decisions on a per-subscriber basis, using information that is also maintained by the mapping service. A later section describes the usage of per-subscriber mapping information. The following section provides an overview of LISP and discusses its usage in ContexNet.
LISP (Locator/ID Separation Protocol)

The LISP protocol was originally defined within the IETF to separate routing and identity, two distinct purposes for which an IP address is often used today. LISP creates a level of indirection with two distinct namespaces, called in LISP parlance the RLOC (Routing Locator) and the EID (Endpoint Identity). Within ContexNet, the RLOC is the location at which a function attaches to the virtual network. The identity is used as the virtual IP, serving as the handle for invoking the function.
The LISP architecture includes:
(1) A mapping service (analogous to DNS) that provides an EID-to-RLOC mapping (e.g., on packet ingress)
(2) LISP ingress/egress tunnel routers (called XTRs within the LISP framework)
The protocol further defines an interface between the mapping service and the LISP ingress/egress routers. It also defines a tunneling protocol between the ingress and egress XTRs, as shown in Figure 3.
DNS translates between URLs and IP addresses: a host asks "Who is www.example.com?" and the DNS server answers with an IP address {a.b.c.d}. LISP mapping resolves between an identity (EID) and a routing locator (RLOC): an XTR asks "Where is a.b.c.d?" and the mapping service answers with a location {p.q.r.s}.

Figure 3 - XTR and Mapping Service
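The DNS analogy in Figure 3 can be made concrete with a minimal sketch of the pull model: an ingress XTR asks the mapping authority "where is this EID?" and caches the returned RLOC locally, much as a host caches a DNS answer. `MappingService` here is a plain dict standing in for the distributed service, and all names are illustrative; a real LISP deployment also has cache-invalidation machinery (e.g. map versioning) that is omitted.

```python
class IngressXTR:
    def __init__(self, mapping_service: dict):
        self.mapping_service = mapping_service  # EID -> RLOC authority (the NVA role)
        self.map_cache = {}                     # local map-cache, filled on demand ("pull")

    def resolve(self, eid: str) -> str:
        if eid not in self.map_cache:           # cache miss: issue a Map-Request
            self.map_cache[eid] = self.mapping_service[eid]
        return self.map_cache[eid]              # RLOC becomes the outer tunnel address

svc = {"a.b.c.d": "p.q.r.s"}                    # identity -> locator, as in Figure 3
xtr = IngressXTR(svc)
assert xtr.resolve("a.b.c.d") == "p.q.r.s"      # first lookup pulls from the authority
svc["a.b.c.d"] = "x.y.z.w"                      # VM moves: the authority is updated...
assert xtr.resolve("a.b.c.d") == "p.q.r.s"      # ...but the cache answers until invalidated
```

The last two lines show why the pull model scales: only XTRs that actually carry traffic for an EID hold its mapping, at the cost of needing an explicit invalidation path when endpoints move.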
As can be seen above, it is possible to combine the NVE and XTR functionality at the packet forwarding layer with the NVA and LISP mapping services; ContexNet brings the best of both LISP and NVO3 together. Orchestration creates VM instances that obtain IP addresses, which are essentially RLOCs. These instances can be mapped to persistent identities that may be associated with the VM by external entities. This leaves the VM free to move anywhere in or across data centers. By leveraging existing overlay mechanisms like NVGRE and VxLAN, it is now possible to create a complete L2 overlay network on an L3 (IP routing) fabric.
ContexNet's implementation of the LISP framework decouples the network control plane from the forwarding plane by providing:
(1) a data plane (NVE) that specifies how virtualized network addresses are encapsulated in addresses from the underlying physical network
(2) a control plane that stores the mapping of the virtual-to-physical address spaces and the associated forwarding policies, serving this information to the data plane on demand (via LISP)
Network programmability is achieved by programming forwarding policies such as transparent mobility, service chaining, and traffic engineering into the mapping system; the data plane elements retrieve these policies on demand as new flows arrive.
Subscriber Aware Service Function Chaining

In today's largely physical networks, it is very common for operators to deploy middle boxes for advanced services such as intrusion detection and prevention systems (IDS/IPS), firewalls, content filters and optimization mechanisms, deep packet inspection (DPI), and caching. These functions are usually deployed as appliances on proprietary hardware, installed in the data path at fixed locations in or at the edge of the carrier core network. As a major example of service chaining in operator networks, consider the Gi/SGi interface, the "reference point" defined by 3GPP between the mobile packet core and packet data networks (PDN). The functions typically deployed at this point are middle boxes and do not use the traditional client-server, destination-based forwarding paradigm of IP and Ethernet. Rather, traffic flows through them in sequence. They are often implemented as logical or physical "rails" with all bearer traffic going through all of them, as illustrated in Figure 4 below.
Figure 4 - Gi/SGi-LAN interface and middle boxes
Mobile operators are currently experiencing large growth in traffic on the SGi/Gi-LAN; increased adoption of smartphones and faster access networks are factors that have contributed to this increase. Today operators typically deploy functions like deep packet inspection, caches, video optimization, TCP optimization, NAT, and firewalls on the SGi/Gi-LAN for subscribers accessing Internet-based content and services. Currently these functions are deployed on dedicated hardware components. Both ETSI and the IETF have identified the problems with networking middle boxes: within ETSI this has been identified as the VNF Forwarding Graph (VNF-FG) use case for NFV, and the IETF has a Service Function Chaining (SFC) Working Group working on this area.
To understand the implementation architecture of subscriber aware service chaining in ContexNet, it is useful to take a step back and look at the entire middle box networking problem. As pointed out by the SFC WG Problem Statement (https://datatracker.ietf.org/doc/draft-ietf-sfc-problem-statement/), today these middle box functions are deployed using network topologies that serve only to "insert" the service function (i.e., a link exists only to ensure that traffic traverses a service function). These "extra" links are not required from a native packet delivery perspective.
As more service functions are required (often with strict ordering), topology changes are needed before and after each service function, resulting in complex network changes and device configuration. In such topologies all traffic, whether a service function needs to be applied to it or not, often passes through in the same strict order. This topological coupling limits the placement and selection of service functions: because service functions are "fixed" in place by topology, placement and selection that take network topology information into account are not viable. Moreover, altering the services traversed, or their order, based on flow direction is not possible.
Further, many middle box functions, such as firewall, NAT, and TCP optimization, are flow-state dependent. The flow state, which is derived in the initial traffic direction (such as from a TCP SYN), must also be used to apply treatment in the opposite direction. Therefore, the same network element must process the bidirectional traffic for all flows that it is servicing. Maintaining this bidirectional requirement is critical to the functionality of these elements. In other words, several middle box functions have endpoint affinity.
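One common way to satisfy the endpoint-affinity requirement just described is a direction-independent (symmetric) flow hash: hashing the sorted endpoint pair guarantees that a TCP SYN and the returning SYN-ACK select the same middle box instance. This is an illustrative sketch under that assumption, not ConteXtream's actual steering logic; the instance names are invented.

```python
import zlib

INSTANCES = ["fw-1", "fw-2", "fw-3"]   # e.g. stateful firewall instances

def symmetric_pick(src: str, dst: str) -> str:
    """Return the same instance regardless of which direction the flow is seen in."""
    key = "|".join(sorted((src, dst)))          # order-independent flow key
    return INSTANCES[zlib.crc32(key.encode()) % len(INSTANCES)]

fwd = symmetric_pick("10.0.0.1:4321", "198.51.100.9:443")   # client -> server
rev = symmetric_pick("198.51.100.9:443", "10.0.0.1:4321")   # server -> client
assert fwd == rev                               # both directions land on the same state
```

A plain 5-tuple hash would not have this property, since the source and destination fields swap between directions; sorting the pair before hashing is what makes the selection symmetric.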
In summary, when designing (virtual or physical) networks for interconnecting a chain of middle boxes, two main factors to keep in mind are:
• Order of function traversal
• Bidirectionality of flows, so that all functions can provide service
These deployment considerations can make middle box networking very complex, especially in service provider networks, given both the very large number of endpoints (subscribers) and the exploding traffic volumes. In this environment, designing, operating, and maintaining a network of middle boxes with high availability can prove particularly challenging.
Gi-LAN Service Function Chaining

As already indicated, one use case where the middle box networking challenges described above exist is the Gi/SGi-LAN interface in mobile networks. In currently deployed Gi-LAN implementations, the design strategy is very often to:
• Segment the traffic, typically on a static basis, onto service "rails". The segmentation is usually based on source IP pools, which are associated with specific endpoints and subscribers. Each rail can be designed with the set of middle boxes appropriate for the subscribers who will be steered onto it. Alternately, the composition of the functions in sequence can be generic, with rails equivalent to each other in terms of the functions they carry.
• Install load balancers to accommodate scale. Traffic on the Gi-LAN is growing, so functions must be front-ended by external (or sometimes internal) load balancers. In today's pre-SDN environment, where networking is static, it is quite common for networks to be designed so that all traffic passes through all elements.
These considerations lead to a network design like that depicted in Figure 5, where subscriber endpoints are first routed/switched onto manageable "rails" (something the elements and load balancers can accommodate).
Figure 5 - Typical setup of rails of middle boxes on the Gi-LAN
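The static rail segmentation described above can be sketched as a lookup from a subscriber's source address pool to a pinned sequence of middle boxes. The pool ranges, rail names, and function names below are invented for illustration; they are not taken from any real deployment.

```python
import ipaddress

RAILS = {
    "rail-1": ["F-1", "F-2", "F-N"],           # one generic rail of middle boxes
    "rail-2": ["F-A", "F-B", "F-N"],           # a rail with a different function mix
}
POOL_TO_RAIL = {                               # static segmentation by source IP pool
    ipaddress.ip_network("100.64.0.0/17"): "rail-1",
    ipaddress.ip_network("100.64.128.0/17"): "rail-2",
}

def rail_for(src_ip: str):
    """Map a subscriber's source address to the fixed chain on its rail."""
    addr = ipaddress.ip_address(src_ip)
    for pool, rail in POOL_TO_RAIL.items():
        if addr in pool:
            return RAILS[rail]
    raise LookupError("no rail configured for " + src_ip)

assert rail_for("100.64.1.20") == ["F-1", "F-2", "F-N"]
assert rail_for("100.64.200.9") == ["F-A", "F-B", "F-N"]
```

The sketch also makes the limitation visible: the chain is a property of the address pool, not of the subscriber, so every subscriber in a pool traverses every function on its rail whether it needs them or not. That is precisely the rigidity that the per-subscriber steering in the next section removes.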
In the architecture depicted in Figure 5, networking information (state) is distributed across several points. This becomes an important consideration when we discuss the transition to NFV. Table 1 provides an example of how state information (from a middle box networking perspective) could be distributed for the network shown in Figure 5.
Table 1 - Example distribution of endpoint related state information across elements on the Gi-LAN

| Subscriber | State in the Switch Network                  | State in Function Chain elements                                              |
|------------|----------------------------------------------|-------------------------------------------------------------------------------|
| Sub 1      | Maps to Rail 1; needs functions 1, 2, ..., N | LB#1 holds affinity of endpoint to F-1-1; LB#2 to F-2-2; ...; F-N internal LB |
| Sub 2      | Maps to Rail 1; needs functions 1, 2, ..., N | LB#1 holds affinity of endpoint to F-1-2; LB#2 to F-2-1; ...; F-N internal LB |
| Sub 3      | Maps to Rail 2; needs functions A, B, ..., N | LB#X holds affinity of endpoint to F-A-1; LB#Y to F-B-1; ...; F-N internal LB |
| Sub 4      | Maps to Rail 2; needs functions A, B, ..., N | LB#X holds affinity of endpoint to F-A-2; LB#Y to F-B-2; ...; F-N internal LB |
It should also be pointed out that the example in Figure 5 considers traffic in one direction only; the networking, however, needs to be set up bidirectionally.
ContexNet implements subscriber aware service chaining by leveraging the mapping service (an extension of the NVA and LISP mapping), in which subscriber traffic flows are identified individually and steering decisions are made about which function instances (of a middle box) will process a particular user's traffic flow. The mapping layer maintains state information like that shown in Table 1 in the SFC/SDN controller.
ContexNet implements a scalable subscriber service chaining solution that leverages the mapping service and overlay controller described in the previous sections. This helps operators implement policy across the entire service chain, with customization leading to better utilization of resources. ContexNet subscriber aware service chaining also makes it much easier to introduce new functions.
As can be seen from Figure 6, ContexNet leverages the mapping service for service chaining: at every function hop, the mapping service is used to steer traffic on a per-subscriber basis to the next hop.
Per-subscriber service chain paths are mapped across function instances, connected via an overlay network.

Figure 6 - Subscriber Aware Service Chaining
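The per-hop steering in Figure 6 can be sketched as follows: the mapping service stores each subscriber's ordered chain, and at every function hop the forwarder asks it which instance comes next. The chain contents and subscriber identifiers below are hypothetical, and the real mapping service is distributed rather than a local dict.

```python
CHAINS = {
    "sub-1": ["F-1-1", "F-2-1", "F-N-1"],      # subscriber needing the full chain
    "sub-2": ["F-1-2", "F-N-i"],               # skips functions this subscriber doesn't need
}

def next_hop(subscriber: str, current):
    """Return the next function instance in the subscriber's chain, or None at the end."""
    chain = CHAINS[subscriber]
    if current is None:                         # first hop after the P-GW
        return chain[0]
    idx = chain.index(current)
    return chain[idx + 1] if idx + 1 < len(chain) else None   # None -> exit toward the PDN

hops, cur = [], None
while (cur := next_hop("sub-2", cur)) is not None:
    hops.append(cur)
assert hops == ["F-1-2", "F-N-i"]              # only the functions this subscriber needs
```

Because the next hop is resolved per subscriber at each step, two subscribers entering through the same gateway can traverse entirely different function sequences, which is the contrast with the static rails of Figure 5.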
3 ContexNet Architecture

Components in ContexNet

A ContexNet node comprises the following four main entities, as shown in Figure 7:
1) SDN Controller
2) Mapping Service (performing the NVA and LISP Mapping Service roles)
3) OpenFlow switch fabric (hardware and/or software)
4) Brokers for interfacing with ETSI MANO entities (Orchestration and VNF Manager); the Forwarding Graph Policy Broker enables the service chaining path
Figure 7 - ContexNet Architecture
SDN Controller

The ContexNet controller is based on OpenDaylight and can be distributed across the data center. It interfaces with the mapping service using LISP and with the switching fabric using OpenFlow.
Mapping Service

The Mapping Service nodes are part of the distributed LISP Mapping Service, which serves as a repository of data on endpoints (flows detected from the NVE gateway), virtual network functions, policy, and management. The Mapping Service nodes are interconnected via the overlay network and serve to federate the controllers into a single logical entity. LISP is the mechanism used by the controllers to obtain information from the mapping service.
The mapping service is a distributed global database based on Cassandra, with a Distributed Hash Table (DHT) lookup mechanism. Information that the Mapping Service keeps for a VM/VNF includes:
• Physical server identification
• VM id
• VNF type
• VNF properties
• Virtual network interfaces (ports, IP and/or MAC addresses, VLANs, type)
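The DHT lookup the text mentions can be illustrated with a minimal consistent-hashing sketch: each record's key is hashed onto a ring of mapping-service nodes, so any controller can compute, without coordination, which node owns a record. The node names and the record layout are assumptions for illustration; the actual Cassandra-based partitioning in ContexNet may differ.

```python
import bisect
import hashlib

NODES = ["map-node-a", "map-node-b", "map-node-c"]

def _h(s: str) -> int:
    """Stable hash onto the ring (SHA-1 of the string, as an integer)."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

RING = sorted((_h(n), n) for n in NODES)        # node positions on the hash ring

def node_for(key: str) -> str:
    """The first ring position clockwise of the key's hash owns the record."""
    pos = bisect.bisect(RING, (_h(key), ""))
    return RING[pos % len(RING)][1]             # wrap around past the last node

# A VM/VNF record with the fields listed above (values are made up).
record = {"server": "host-7", "vm_id": "vm-42", "vnf_type": "firewall",
          "ports": [{"ip": "10.0.0.9", "mac": "de:ad:be:ef:00:01"}]}
owner = node_for(record["vm_id"])
assert owner in NODES                            # every key maps to exactly one node
assert node_for("vm-42") == owner                # lookup is deterministic from any controller
```

Determinism is the point: because every federated controller computes the same owner for a key, the record can be found in one hop with no central directory, which is what lets the mapping service scale horizontally.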
The mapping service includes a service that monitors the health of every VNF. It also keeps track of flows mapped through a VNF during service chaining, enabling automatic load balancing.
Switch Fabric

The switch fabric is composed of software and hardware switches, which can be placed externally or within the hypervisor. ContexNet can be deployed in multiple ways, using hardware switches, software switches, or a VNF combination.
Brokers

ContexNet interfaces with external entities via broker interfaces. For example, the orchestration layer broker interfaces with solutions like OpenStack and then makes the information available globally via the mapping layer. RADIUS/DIAMETER interfaces to external policy functions enable per-subscriber policy.
4 Conclusion

ConteXtream's award-winning Carrier-SDN solution, ContexNet, is a distributed, subscriber aware software defined networking fabric enabling virtual network functions. ContexNet leverages proven technologies and runs on standard, off-the-shelf computing platforms. The unique subscriber/endpoint awareness offered by ContexNet enables network operators to:
• Quickly innovate new applications and network services
• Deliver customized services
• Improve network and resource utilization
• Increase network visibility
ContexNet leverages ETSI, IETF and SDN mechanisms such as OpenFlow, LISP and Open Source
components like OpenStack to fully separate control from forwarding, location from identity and
orchestration from networking.
Key capabilities of the ContexNet solution are:
• Application Aware: The ContexNet solution identifies traffic flows based on L1-L7 fields and steers traffic to the specific subset of network functions required to support each subscriber flow.
• Distributed Mapping: The ContexNet Mapping Service provides scalable location/identity separation for endpoints, including subscribers, virtual network functions, and service elements. The mapping service links identity to location, policy, and other endpoint characteristics required for optimal operation.
• Federated Control: Subscriber flow steering policies are maintained in a real-time distributed database, ensuring that subscriber flows remain associated with their network functions, regardless of network entry point or failure conditions.
• Overlay Network: The abstraction layer formed by the nodes allows the mobility of users, VMs, content, or any other endpoint while maintaining consistent policies across the network. It has global knowledge of policies and the network environment. The overlay is shielded from the states of the underlying network switches and routers and does not have to sync with them. Through simple APIs, it enables fine-grained traffic engineering while mitigating the increasing burden of packet tagging and various other virtualization workarounds.
• Standards-based: The ConteXtream solution provides standards-based extensibility using LISP-NFV, OpenFlow, OpenDaylight, NVO3, OpenStack, and more.
• Standard Hardware Deployment: ContexNet is a pure software solution. ContexNet software components operate on standard, off-the-shelf computing platforms and are hypervisor-agnostic.
• SDN-based Load Balancing: ContexNet provides a load balancer that is inherently built into the SDN-controlled virtual network layer.
Initial use cases for ContexNet include:
• Gi-LAN Traffic Steering
• Virtualized EPC
• Virtualized IMS
• Virtual CPE
• Virtualized Session Border Control
In these use cases, ContexNet enables virtualization of TCP Optimization, Video Optimization, Content
Filtering, Session Border Controller and IMS functions. As described, ContexNet SDN software provides a
foundation for NFV, enabling carriers to identify and steer traffic to the specific network functions needed for
each subscriber flow. ContexNet is in production with multiple Tier One Service Providers around the globe
providing virtual network connectivity for NFV.