WHITEPAPER
Next-generation Data Center Switching: Drivers and Benefits
Table of Contents
Overview
Factors that Drive Next-Generation Data Centers
Data Center Infrastructure Becomes Increasingly Software-Defined
Why Legacy Networks Are Not Agile
Making Physical Networks Agile: Inspired by Hyperscale Network Architectures
Enterprise Requirements for Next-Generation DC Networks
How a Next-Generation DC Network Differs
Ways to Introduce Next-Generation DC Networks in Your Environment
Early Economic Validation of Next-Generation DC Networks
Conclusion
Overview
Data centers form the IT backbone of every enterprise, hosting and delivering applications to end users via core infrastructure elements such as servers, networks, and storage. Enterprise IT has a dual mandate: drive innovation and keep the lights on. Data center infrastructure initiatives and architectures within organizations emerge in response to both supply and demand forces. Supply forces manifest in the form of software and hardware technology, while demand manifests via changes in applications, mergers and acquisitions, or strategic decisions such as outsourcing. This document covers the changes taking place that precipitate the need for next-generation data center network infrastructure. Specifically, it covers how core networking technology is poised to transform legacy networking approaches, which IT operators deem an inhibiting force in realizing the operational velocity demanded by modern applications and software-defined data centers. The advent of public clouds is compelling IT to offer a comparable level of service within their private clouds to internal customers and partners. Service providers, on the other hand, want to offer high value-added services to their customers for new revenue streams while protecting their margins against the onslaught of public cloud providers that stand to commoditize mature service offerings.
Factors that Drive Next-Generation Data Centers
Three key factors are driving the need for next-generation data center infrastructure.
1. Distributed Cloud-Native Applications: Data center applications are transitioning from a monolithic architecture (typically found as a 3-tier application) to a more distributed architecture powered by lightweight microservices implemented via containers. This trend toward cloud-native applications is driven by the need to deliver agility to the business by innovating faster and by leveraging analytical insights that rely on various data sources.
2. Business Velocity (Speed of Infrastructure Deployment): Applications alone cannot deliver business agility if the provisioning, configuration, scaling, monitoring, and troubleshooting steps required to deploy them in a production environment take weeks or months. While the ability to spin up virtual machines has reduced server provisioning time, legacy box-by-box operational practices for the network and storage layers still require upfront design, planning, and provisioning, which makes deployment very slow.
3. Flat IT Budget: CIOs continue to prioritize technology investments that provide more operational leverage with constant headcount. A flat budget forces a relentless drive to eke out higher operational efficiency and reduce CAPEX in order to deliver more with the same. Open, industry-standard infrastructure provides vendor choice and helps drive down CAPEX.
The net impact of these three factors has been the emergence of several software-defined data center initiatives across enterprise and service provider data centers.
Data Center Infrastructure Becomes Increasingly Software-Defined
Data centers are increasingly becoming software-defined to deliver agility, drive down costs, and provide flexibility. Aided by open hardware designs, automated workflows via open APIs, and scale-out software architectures, IT teams are building next-generation data centers to enable their organizations to compete in the cloud era.
Figure 1: Legacy data center (top) versus next-generation data center (bottom).
Software innovation in the compute layer
The advent of virtualization technology (essentially a software layer) drove huge efficiencies in the x86 server world by consolidating multiple industry-standard physical servers into one. Enterprises also gained operational efficiencies across various workflows by leveraging automation via virtualization's management and orchestration layers. Server infrastructure across enterprises is now roughly 75-80% virtualized and deemed very efficient. Enterprise vendors such as VMware and Microsoft offer their own server virtualization and private cloud solutions, leveraging their hypervisor footprints. Alternatively, OpenStack, an open-source cloud platform backed by Red Hat, Mirantis, and Canonical, leverages the Linux KVM hypervisor.
As applications become more distributed using microservice-based architectures, containers are emerging as an option that co-exists with VMs while retaining open x86-based hardware cost advantages. Container infrastructure platforms such as Docker and CoreOS leverage open platforms such as Kubernetes and Apache Mesos for container cluster management.
Software innovation in the storage layer
Once server management became agile with the advent of VMs, the agility bottleneck moved to the storage and network layers. Converged infrastructure approaches such as Dell EMC's vBlock and NetApp's FlexPod were first-generation responses, offering validated solutions that simplified design time across server, storage, and networking gear. The expansion of VMs in the data center created a need for increased automation, which led storage vendors to extend the software-driven benefits realized in server infrastructure to storage environments. Software-enabled storage is making inroads to reduce both capital and operational expenses. It is available in several flavors: VM-aware storage, hyper-converged storage, container-aware storage, and, lastly, a pure software-defined storage layer (e.g., Ceph-based) that runs on commoditized open hardware designs.
Software innovation in the network layer?
With both server and storage layers fully adopting software design principles, the networking layer has become the bottleneck for data center agility and modernization. Legacy network infrastructure (switching and routing) is mostly stuck in a box-by-box operational paradigm with proprietary hardware. This is not too dissimilar in concept from what physical servers offered pre-virtualization over a decade ago.
While virtual networks consisting of vSwitches have emerged to provide a logical view of the network, the agility of the Layer 2 and Layer 3 networking layer is squarely dependent on the agility of the physical routing and switching network. Unfortunately, legacy network approaches are neither agile nor simple, as they are stuck in the pre-virtualization era. Next-generation data center networks, powered by software, offer much more promise.
Why Legacy Networks Are Not Agile
Before discussing next-generation data center networks, it is important to understand the fundamental reasons behind the lack of agility in legacy physical networks.
1. Box-by-box operational paradigm: The co-location of the control and data planes within the same network box requires that each switch and router be configured box by box. This slows provisioning and deployment immensely. A box-by-box approach also makes misconfigurations and outages very difficult to troubleshoot and resolve: the network operations team has no choice but to hop and hunt across boxes to zero in on the issue.
2. 3-tier network architecture: Traditional physical networking architectures have core, aggregation, and access layers. This approach worked well for legacy workloads with predominantly north-south traffic patterns. VM and cloud-native application traffic, however, is mostly east-west, which has two impacts: increased bandwidth requirements for access switches, and a greater need for a scale-out architecture to ensure linear network capacity additions as VM scale and density in server racks increase. Legacy physical network switches have scale-up architectures, so the customer either future-proofs the network with the largest models, paying more today, or ends up buying more boxes later at the cost of downtime and operational expense.
3. Proprietary hardware: Legacy data center switches and routers are closed systems developed on proprietary hardware, so vendor lock-in ensues. More importantly, proprietary hardware leads to an innovation deficit compared to open, industry-standard hardware. Decoupling software from hardware allows vendors to innovate independently, and hence faster, as witnessed in the compute layer with rapid software-driven innovation on x86 servers versus mainframes and minicomputers. Open hardware also provides vendor choice and, consequently, cost advantages during procurement.
Thus the lack of agility of the legacy physical network is the real bottleneck to realizing the vision of a true software-defined data center.
Table 1: How legacy physical networks inhibit data center transformation

Goal: Support distributed applications and an increased pace of innovation
Inhibitors: (1) 3-tier network architecture; (2) scale-up system architecture; (3) not natively multi-tenant; (4) hardware-based innovations in legacy networks take longer to absorb.
Reasons: (1) With increasing distribution of compute nodes, east-west traffic goes up, yet access/aggregation layers are designed for north-south traffic, not east-west. (2) Scale-up architectures do not lend themselves to linear network capacity expansion; only scale-out architectures can deliver that. (3) Legacy networks lack tenant-native constructs, making it very difficult to isolate and secure the various compute instances over the network. (4) Proprietary hardware elongates the refresh cycle to 5+ years, versus the 3+ years demanded by software-defined data centers.

Goal: Speed of infrastructure deployment (i.e., business velocity)
Inhibitor: Box-by-box operational paradigm.
Reason: The control and data planes are joined at the hip in every legacy network box, which makes the network difficult to simplify and automate: slow to deploy, slow to carry out change management on, slow to deploy applications on, and slow to troubleshoot.

Goal: Optimize a flat IT budget
Inhibitor: Proprietary hardware.
Reasons: (1) Proprietary hardware leads to vendor lock-in and higher prices. (2) Proprietary hardware cannot avail itself of the software innovations needed by modern data centers.
Making Physical Networks Agile: Inspired by Hyperscale Network Architectures
Interestingly, much networking innovation has been led by hyperscale organizations (e.g., Google, Facebook, Amazon, Microsoft). They were forced to innovate by their operational scale, but the core ideas are valid for nearly all enterprises: (a) software developed on open hardware designs to keep costs down, (b) separation of the control plane from the data plane for greater automation, and (c) data center designs of any size built from an atomic unit of deployment, the 'pod', small enough to be procured in practice yet large enough to be automated for operational efficiency.
Open networking (bare metal) hardware and SDN software have been the critical pieces that made this design feasible. The legacy core-aggregation-access network architecture has been upended by a core-and-pod architecture that allows agility at scale: because pods are 'atomic' in concept, they can be added as needed to grow the network without incurring a proportional operational cost, thanks to automation.
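To make the pod concept concrete, here is a rough sizing sketch for a two-tier leaf-spine pod. The port counts and oversubscription ratio are illustrative assumptions, not figures from this paper; a minimal sketch in Python:

```python
# Illustrative sizing of a leaf-spine pod (assumed port counts, not
# figures from this paper). Each leaf splits its ports between
# server-facing downlinks and spine-facing uplinks.

def pod_capacity(leaf_ports, spine_ports, oversub):
    # With N:1 oversubscription, each leaf dedicates 1 uplink
    # for every N server-facing downlinks.
    uplinks = round(leaf_ports / (oversub + 1))
    downlinks = leaf_ports - uplinks
    spines = uplinks              # one uplink to each spine switch
    leaves = spine_ports          # each spine port connects one leaf
    return leaves, spines, downlinks * leaves

# Example: 48-port leaves, 32-port spines, 3:1 oversubscription.
leaves, spines, servers = pod_capacity(48, 32, 3)
print(f"{leaves} leaves x {spines} spines -> {servers} server ports per pod")
```

Growing beyond one pod means stamping out another pod and reusing the same design and automation, which is why operational cost does not rise in proportion to capacity.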
The result is a design philosophy in which individual element failures are expected, but the network as a system provides extremely high uptime. By centralizing control, the basic operations of the network as a system become dramatically simpler, and automating data center-specific workflows on top becomes vastly easier.
Figure 2: Legacy network (left) has a box-by-box operational paradigm. Next-generation DC network (right) has a centralized controller and is architected for automation.
Enterprise Requirements for Next-Generation DC Networks
Enterprises want the agility, simplicity, and cost optimization provided by the hyperscale approach to networking, but they also have additional requirements:
• One-phone-call ("single throat to choke") support model
• Familiar configuration model to minimize staff retraining
• Fully tested and validated designs
• Support for heterogeneous workloads
• Interoperability with existing operational/security practices and vendors
• No requirement for software programming
Another enterprise challenge is that organizations rarely prefer a rip-and-replace strategy for their infrastructure, so interoperability with existing infrastructure is a mandatory requirement. There may be additional requirements around upgrades, planning, troubleshooting, and so on, keeping in mind IT teams' current skills and knowledge.
The diagram below illustrates the CAPEX and OPEX benefits of deploying a vendor-supplied next-generation data center network that takes these enterprise requirements into account, compared to legacy networks.
Figure 3: Illustrative example of CAPEX and OPEX savings with next-generation data center networks over legacy networks. Over a 3-year capital expense horizon, the legacy stack (hardware price, software price, support price, cables & optics) is compared against the next-generation stack (open hardware price, SDN software & support price, cables & optics). Operational gains shown: 16x faster configuration and upgrades, 4x faster application deployment, 12x faster troubleshooting and issue resolution, and 40% overall TCO savings.
How a Next-Generation DC Network Differs
Next-generation DC networks do not need to be revolutionary. They just need to meet the emerging requirements driven by applications, deployment, and agility, without requiring forklift upgrades. Given that data center networking is a mature field and a foundational element of the data center, a next-generation networking approach needs to support both legacy applications and newer, emerging applications. It needs to support a CLI, which many network administrators are comfortable with; a GUI for management and reporting tasks; and RESTful APIs for automation workflows that can be built within specific vendor ecosystems. It needs to co-exist with the legacy network. It needs to support both manual steps (current processes) and automation. It needs to deliver network visibility at the box level and also fabric-wide. These are table stakes for any next-generation networking approach to be accepted by enterprise customers.
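As an illustration of the RESTful automation mentioned above, the sketch below provisions a network segment through a generic SDN controller API. The controller address, endpoint path, payload fields, and token are all hypothetical; real controllers each define their own schema, so this only shows the shape of the workflow:

```python
# Hypothetical example: create a network segment through an SDN
# controller's REST API instead of configuring VLANs switch by switch.
# The URL, paths, and payload below are illustrative, not a real API.
import requests

CONTROLLER = "https://sdn-controller.example.com:8443"  # assumed address
TOKEN = "REPLACE_WITH_API_TOKEN"

def create_segment(tenant, name, vlan_id):
    resp = requests.post(
        f"{CONTROLLER}/api/v1/tenants/{tenant}/segments",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "vlan": vlan_id},
        timeout=10,
    )
    resp.raise_for_status()
    # One call; the controller pushes the config to every fabric switch.
    print(f"Segment {name} (VLAN {vlan_id}) created for tenant {tenant}")

if __name__ == "__main__":
    create_segment("finance", "payroll-web", 142)
```

The point of the single call is that the controller, not the operator, fans the change out to every switch in the fabric.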
In addition, next-generation networks are poised to bring a set of unique capabilities that cannot be delivered by legacy networks, due to architectural differences.
Given their SDN fabric-based and scale-out characteristics, next-generation networks are uniquely positioned to deliver fabric-wide troubleshooting and visibility, driving better correlation and hence faster resolution. By the same token, fabric-wide provisioning and upgrades in minutes become possible as well. This does not mean that every vendor can or will deliver these capabilities; rather, they point to the potential of next-generation network architectures, which legacy physical networks operating on a box-by-box paradigm simply cannot deliver today. For example, overlay virtual networks consisting of host-based vSwitches might provide a logical network view and apply VM-specific access-control policies, but they are insufficient to apply the relevant switch configurations on the physical network when a new network service needs to be deployed.
One effective way to compare legacy and next-generation network capabilities is to visualize the differences in typical network provisioning, change management, automation, and upgrade workflows. These are listed in the tables below.
Network Provisioning Workflows: Legacy versus Next-Gen Networks

Per-switch provisioning workflows, legacy network (box-by-box manual approach) versus next-gen network (automatic via SDN controller):
• Download image & install. Legacy: manual selection based on model/version. Next-gen: automatic download of the right image and configuration from the controller.
• Common configuration (RADIUS, NTP, Syslog). Legacy: manual. Next-gen: automatic after initial set-up in the controller.
• Link (LAG/MLAG) configuration. Legacy: manual verification. Next-gen: automatic verification.
• L2 protocol configuration. Legacy: manual verification of STP convergence. Next-gen: no STP; L2 expansion is natively supported with no convergence needs.
• Add/remove tenant. Legacy: manual VLAN availability hunt and removal steps across each switch. Next-gen: one-time tenant configuration on the controller, with automatic download to and configuration of every switch.
• L3 protocol (OSPF and/or BGP) configuration. Legacy: manual, for topology discovery and egress paths. Next-gen: not needed; the latest topology is maintained within the central controller (HA).

Table 2: Network provisioning workflows, a comparison between legacy and next-generation networks
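To contrast the add/remove-tenant row above in code: the legacy column means logging in to each switch and configuring the VLAN box by box, while the next-gen column is a single controller call. The switch list, credentials, and controller endpoint below are assumed for illustration; the per-switch side uses the netmiko SSH library:

```python
# Legacy: push the same VLAN config to every switch, one box at a time.
# Switch addresses and credentials are placeholders.
from netmiko import ConnectHandler
import requests

SWITCHES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # one entry per box

def add_tenant_legacy(vlan_id, name):
    for host in SWITCHES:
        conn = ConnectHandler(device_type="cisco_ios", host=host,
                              username="admin", password="secret")
        conn.send_config_set([f"vlan {vlan_id}", f"name {name}"])
        conn.disconnect()  # repeat for every switch, trunk port, and SVI

# Next-gen: one-time tenant definition on the controller (hypothetical
# endpoint); the controller then configures every switch in the fabric.
def add_tenant_nextgen(name):
    requests.post("https://controller.example.com/api/v1/tenants",
                  json={"name": name}, timeout=10).raise_for_status()
```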
Change Management Workflows: Legacy versus Next-Gen Networks

Per-switch workflow for replacing a failed switch, legacy network (box-by-box manual approach):
• Back up the configuration (manual, per switch)
• Replace the switch and re-cable (manual)
• Install the right image (manual selection based on model/version)
• Download the backed-up configuration (manual)
• Check L2 and L3 protocol convergence (manual verification of STP convergence)

Next-gen network (automatic via SDN controller): The centralized SDN controller architecture eliminates most of these steps. Some vendors offer zero-touch SDN fabric replacement, where all you need to do is replace the switch and re-cable; just update the MAC address on the SDN controller and you are good to go.

Table 3: Change management workflows, a comparison between legacy and next-generation networks
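The zero-touch replacement described above reduces, on the operator side, to one attribute change: telling the controller the replacement switch's MAC address so it can push the stored image and configuration. A minimal sketch, with a hypothetical controller endpoint:

```python
# Hypothetical RMA workflow: after physically swapping and re-cabling
# the switch, update its MAC address on the SDN controller. The
# controller then reinstalls the image and configuration automatically.
import requests

def replace_switch(switch_name, new_mac):
    resp = requests.patch(
        f"https://controller.example.com/api/v1/switches/{switch_name}",
        json={"mac_address": new_mac},
        timeout=10,
    )
    resp.raise_for_status()
    # No manual image install, config restore, or STP checks needed.

replace_switch("leaf-rack3-a", "00:1b:21:3c:4d:5e")
```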
Network Automation Workflows: Legacy versus Next-Gen Networks

• Add ESX host. Legacy: manual configuration of the edge port with LAG and port-specific configuration, on every switch. Next-gen: automatic LAG/MLAG configuration with vCenter integration.
• Add new application or port group. Legacy: (1) wait for a maintenance window to enable the corresponding VLAN on network switches; (2) manually enable the VLAN on the ESX edge port; (3) manage STP for the VLAN. Next-gen: automatic configuration of the new network segment and membership, with the right vCenter integration hooks.

Table 4: Network automation workflows when an ESX host or a new application is deployed, a comparison between legacy and next-generation networks. With the right vCenter integration, next-gen networks can make your organization agile in a way legacy networks simply cannot.
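A minimal sketch of the integration pattern Table 4 describes: watch vCenter for new port groups and create matching fabric segments, so no maintenance window or manual VLAN work is needed. Both helper functions below are hypothetical stand-ins for vendor integration hooks:

```python
# Sketch of vCenter-driven network automation. Both functions are
# hypothetical placeholders; the point is the reconciliation loop:
# vCenter inventory in, fabric configuration out.
import time

def fetch_vcenter_port_groups():
    """Placeholder: would query vCenter's API for port group names."""
    return {"pg-web", "pg-db", "pg-payroll"}

def create_fabric_segment(port_group):
    """Placeholder: would call the SDN controller to add a segment."""
    print(f"fabric segment created for {port_group}")

known = set()
while True:
    for pg in fetch_vcenter_port_groups() - known:
        create_fabric_segment(pg)   # no maintenance window, no manual VLANs
        known.add(pg)
    time.sleep(30)                  # polling; real integrations use events
```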
Network Upgrade Workflows: Legacy versus Next-Gen Networks

• Backup configuration. Legacy: manual, per switch. Next-gen: manual step.
• Copy new image & upgrade. Legacy: manual, per switch. Next-gen: one-time copy to the SDN controller, then launch the upgrade.
• Check L2 and L3 protocol convergence. Legacy: manual verification, per switch. Next-gen: none; the fabric takes care of it.
• Plan downtime for the upgrade. Legacy: application impact with every switch's software upgrade; upgrading the entire network takes hours. Next-gen: a zero-touch fabric upgrade, if offered by the vendor, can take as little as 15 minutes for the entire network.

Table 5: Network upgrade workflows, a comparison between legacy and next-generation networks. Entire SDN fabric upgrades within minutes are now possible with next-generation networks.
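The next-gen upgrade column condenses to roughly the steps below: copy the image to the controller once, launch the upgrade, and poll for completion. The endpoints and response fields are hypothetical; the single-upload, fabric-wide workflow is the point:

```python
# Hypothetical fabric-wide upgrade: copy the image to the controller
# once, launch, and poll. Endpoints and fields are illustrative only.
import time
import requests

BASE = "https://controller.example.com/api/v1"

def upgrade_fabric(image_path):
    with open(image_path, "rb") as f:
        requests.post(f"{BASE}/images", files={"image": f},
                      timeout=300).raise_for_status()
    requests.post(f"{BASE}/upgrade", timeout=10).raise_for_status()
    while True:
        status = requests.get(f"{BASE}/upgrade/status", timeout=10).json()
        if status.get("state") == "complete":
            break  # fabric handles per-switch ordering and convergence
        time.sleep(30)

upgrade_fabric("fabric-os-2.0.img")
```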
Figure 4: A next-generation DC network can deliver logical "virtual pod" (vPod) deployments: service racks are partitioned into logical vPods, all managed from a single controller (CLI or GUI). Leverage the SDN controller to work with multiple orchestrators in different environments and teams to accelerate innovation cycles.
Ways to Introduce Next-Generation DC Networks in Your Environment
When a new high-impact technology approach emerges, organizations ponder how best to introduce it into their data centers. They want to minimize the risk that any new technology approach brings, while delivering value to their stakeholders.
A few specific use cases allow enterprises and service providers to evaluate, introduce, test, and deliver next-generation DC networks in a methodical manner. Once organizations gain experience with next-generation data center networking technologies, they tend to cut them over into more mission-critical production workloads, often to gain speed and OPEX advantages ahead of the hardware refresh cycle.
For Enterprise Organizations:
1. Secure Private Cloud Workloads: Multi-tenancy is a big reason why private clouds have emerged in enterprise organizations. Clouds are made up of inherently shared resources, but ones that require adequate isolation to respect an enterprise's security needs. Private cloud deployments typically use 'tenants' as a conceptual unit of deployment to ensure isolation, security, agility, and flexibility. For example, a finance department developing its internal employee application would not want to be impeded by access policies that need to change within the marketing department's development and test teams. Box-by-box networks used in VMware-based or OpenStack-based private cloud deployments are often unable to satisfy at least one of the organization's security, agility, and flexibility needs, for the reasons already discussed in the sections above. A credible next-generation networking solution offers the ability to carve out and support distinct tenants. Tenant-native constructs in next-generation DC networks enable automated workflows for tenant addition, removal, and change management. These workflows get translated into the relevant configuration of the underlying elements, providing the building blocks for true network automation.
2. Big Data Workloads: Another island of workloads to target is Big Data, such as Hadoop-based or BI applications focused on deriving analytics and insights. Typically part of a high-performance 10G/40G production network, the Big Data application workflow usually involves copying data from the production application into a separate database before analytical processes run on it, which allows relative isolation from the production applications responsible for delivering the end-user experience. Big Data workloads span multiple databases, possibly across different networks, which makes them an ideal use case to benefit from the fabric-wide visibility and troubleshooting capabilities that next-generation data center networks offer. Interruptions to Big Data processes may delay the generation of data and business insights, so the reduced mean time to resolve (MTTR) network-related problems offered by next-generation networks is worth evaluating from a business-impact point of view. Lastly, exploding data growth in enterprises causes Big Data clusters to expand over time, so 'pay as you grow' approaches, driven by scale-out architectures, become very beneficial for keeping costs in check. A next-generation network, with its scale-out architecture and open network hardware, can be a natural fit.
3. Virtual Desktop Infrastructure (VDI): Another option for introducing a next-generation networking solution into your environment is VDI deployments. They are typically isolated from IT production server-based applications and share the east-west traffic characteristics of server virtualization infrastructure. Because VDI is an end-user-facing application, network latency and east-west bandwidth become critical. Monitoring VDI performance and availability makes network/fabric visibility a must-have that fits well with next-generation networks' capabilities.
4. Application Development and Test Environments: Selecting naturally isolated teams is another approach. Application development and software QA (a.k.a. dev and test) environments provide a rich set of demands against which to test a next-generation network. Application development and test engineers want automation, simplicity, and agility, with an increasing focus within their own teams, and they constantly work on improving their continuous deployment approaches. A next-generation network capable of delivering an agile "virtual pod" (vPod) deployment per team or per application instance could be a relevant use case in some organizations. Leveraging the SDN controller to work with multiple orchestrators across different environments and teams (a new next-gen capability compared to legacy networks) can accelerate innovation cycles. For example, one team works on a VMware vSphere vPod, another on a VMware NSX vPod, a third on Red Hat OpenStack, and a fourth on a Docker container vPod. vPod isolation allows change management for a given vPod without any impact on other vPods.
5. Engineering Labs: Lab environments are inherently less risky than production networks. They are typically isolated and can be an excellent first choice for testing a separate pod of next-generation network. Lab staff headcount rarely grows in organizations despite a growing footprint of products, so simplicity and agility become important attributes during evaluation. Given that labs are cost centers, TCO reduction is paramount. Next-generation networks, with their software-defined architecture running on open network hardware, seem to meet these criteria.
For Service Providers & Cloud Providers:
1. Managed Private Cloud: Service providers constantly look for ways to offer value-added services for top-line growth and to reduce service delivery costs for enhanced margins. A next-generation networking approach offers both. OpenStack-based private cloud deployments are increasingly attractive to service providers due to the cost advantages of automation via standardized interfaces, and they also provide catalog-based services via orchestration. With a centralized SDN controller and a tenant-native approach, next-generation networking solutions that support OpenStack deployments become a necessary item to truly realize the benefits of OpenStack; legacy networks cannot provide a similar capability due to their box-by-box operational paradigm. An example would be providing desktop-as-a-service to small and medium enterprises using tenant-centric network services delivered by a virtual pod architecture, as illustrated in Figure 4 above.
2. Telco Network Function Virtualization (NFV) Cloud: With exploding data usage and bandwidth prices under pressure, carriers must offer innovative services to retain their gross margins. Removing the cost inefficiencies in their networks caused by the legacy physical network footprint is top of mind for most carriers. Next-generation networks, with their decoupled control plane and separation of software and hardware layers, offer both agility and cost effectiveness. Open network hardware switches reduce cost, while next-generation network software allows carriers to evaluate their innovative services in a pod-based deployment on a per-tenant basis. As an example, next-generation networks may be leveraged to build an NFV cloud capable of delivering tenant-centric virtual network function (VNF) services that need elasticity, dynamic service chaining, and IP portability.
Early Economic Validation of Next-Generation DC Networks
Next-generation networking represents a technological shift with strong business drivers. Few technologies offer a genuine shift that allows IT organizations to meet the twin goals of doing more with less and delivering innovation. Third-party validation of such claims is an essential tool for organizations trying to build consensus within their teams to evaluate these technologies. Conceptually, the validation shows nearly 50% TCO savings, across various CAPEX and OPEX categories, over a 3-year period when compared with legacy physical networks. The key capital expense benefits are obtained by leveraging open network switch hardware and reduced cables/optics infrastructure. The key operational expense benefits are obtained via software-based automation, fabric-wide visibility, and the simplification of various network operation workflows. Further details around economic considerations may be found in this report, where a specific next-generation network vendor (Big Switch Networks) was evaluated.
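As a back-of-the-envelope illustration of how such a 3-year comparison is assembled, the sketch below totals the cost categories named above. Every dollar figure is invented for illustration; only the roughly 50% outcome mirrors the validation cited:

```python
# Illustrative 3-year TCO comparison. Every dollar figure is invented
# for illustration; only the category structure follows the paper.
legacy = {"hardware": 1000, "software": 250, "support": 300,
          "cables_optics": 200, "operations": 900}
nextgen = {"open_hardware": 450, "sdn_software_support": 350,
           "cables_optics": 120, "operations": 400}

legacy_tco, nextgen_tco = sum(legacy.values()), sum(nextgen.values())
savings = 1 - nextgen_tco / legacy_tco
print(f"legacy: {legacy_tco}  next-gen: {nextgen_tco}  savings: {savings:.0%}")
# With these assumed inputs the savings land near 50%, in line with
# the early third-party validation referenced above.
```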
Conclusion
IT organizations are increasing their pace toward building software-defined data centers to stay relevant in the era of public clouds. This move is driven by business goals around agility, cost, and flexibility to meet IT's dual mandate of doing more with less and driving innovation. Support for distributed applications and rapid infrastructure deployment with enhanced cost efficiency are the key tenets of this move to the next generation.
Mainstream virtualization in data centers has led VMs to find overlay networks sufficient for managing their logical connectivity. However, the actual transport of packets is handled by the physical network, and that is where the key bottleneck to agility exists. Software-defined data centers and private clouds can only be as agile as the underlying physical network layer, which lags behind the server and storage layers. Legacy networking is stuck in a pre-virtualization era with its box-by-box operational paradigm; it simply cannot deliver the agility, simplicity, and flexibility that next-generation data centers need.
Inspired by networking designs built by hyperscale vendors, next-generation data center networking leverages multiple forces to provide the last fitting piece of the puzzle, helping software-defined data centers of any size truly become private clouds capable of running cloud-native applications and driving business velocity within flat IT budgets. The technology forces of (a) open hardware designs and (b) decoupling of the control plane from the data plane allow the emergence of new network design patterns such as the leaf-spine fabric, which enables a pod-based architecture with tenant-native constructs to deliver enhanced simplicity, security, automation, visibility, and economics. Workflows around network provisioning, change management, automation, troubleshooting, and upgrades benefit significantly from an investment in next-generation networks.
Enterprises, service providers, and cloud providers who seek to evaluate and introduce these intelligent, agile, and flexible network technologies while building out next-generation data centers and private clouds may select one or more of the use cases mentioned above. Early third-party validation of economic benefits has indicated nearly 50% TCO savings, driven largely by open hardware costs on the CAPEX side, and by operations simplified through automation, visibility, and SDN fabric-based next-generation network architecture on the OPEX side.
Headquarters
3965 Freedom Circle, Suite
300, Santa Clara, CA 95054
+1.650.322.6510 TEL
+1.800.653.0565 TOLL FREE
www.bigswitch.com
[email protected]
Copyright 2016 Big Switch Networks, Inc. All rights reserved. Big Switch Networks, Big Cloud Fabric, Big Monitoring Fabric,
Switch Light OS, and Switch Light VX are trademarks or registered trademarks of Big Switch Networks, Inc. All other
trademarks, service marks, registered marks or registered service marks are the property of their respective owners.
Big Switch Networks assumes no responsibility for any inaccuracies in this document. Big Switch Networks reserves the
right to change, modify, transfer or otherwise revise this publication without notice.
BCF PoV Paper V1 (December 2016)