Networking innovations for
HPE ProLiant Gen9 servers
Contents
Abstract
Introduction
Performance and convergence
Network and virtualization technologies
HPE FlexibleLOM technology
HPE ProLiant c-Class BladeSystem networking
Virtual Connect with Flex-10 and Flex-20
FlexibleLOM for server blades
Network adapters for ProLiant Gen9 server blades
HPE ProLiant rack and tower networking
NIC Partitioning
FlexibleLOM for rack mount servers
NIC adapters for ProLiant Gen9 rack and tower servers
Comparing virtualization technologies
Comparison of Virtual Connect to NIC Partitioning
Comparison of Flex-10/20 to SR-IOV
Resources
Abstract
ProLiant Gen9 servers address the increasing need for your IT infrastructure to be more cost-efficient through performance and manageability. Your network infrastructure plays a significant role in meeting those requirements. This paper describes the latest technology innovations and solutions that enable you to maximize the efficiency of, and simplify, your network infrastructure for HPE ProLiant Gen9 BladeSystem, rack, and tower servers. This paper assumes basic knowledge of common networking strategies and protocols.
Introduction
The server-to-network edge refers to the interface between a server and the first layer (or tier) of local area network (LAN) and storage area
network (SAN) switches. Since LANs and SANs each have their own requirements and topologies, this interface is often the most physically
complex part of a data center's network infrastructure.
The latest HPE networking solutions introduced for the ProLiant Gen9 servers have the power to liberate you from traditional infrastructure
complexity and simplify management of servers and networks. These solutions offer increased performance and convergence along with
enhanced functionality.
Performance and convergence
In the past, the obstacle to using Ethernet for converged networks was limited speed and bandwidth. The latest HPE network adapters for
ProLiant Gen9 servers offer multiport 10G and 20G operations. This level of performance facilitates convergence, allowing a single port to
efficiently handle different protocols and carry LAN, SAN, and management data (Figure 1).
Figure 1. High-speed port with converging data types.
HPE FlexFabric adapters and modules are converged network solutions that allow administrators to simplify and flatten the typical Ethernet
topology. FlexFabric reduces the number of physical components, simplifies management, and improves quality of service (QoS).
Network and virtualization technologies
Network adapters for ProLiant Gen9 servers offer increased functionality that improves overall performance and networking efficiency. These
adapters use the following technology innovations to enhance network connectivity:
Flex-10 and Flex-20—Provide efficient utilization of the network connection. Flex-10, Flex-20, and FlexFabric adapters allow up to eight
configurable virtual network adapters, also known as FlexNICs. These FlexNICs can be configured for specific traffic types, such as storage,
management, VM migration, and VM traffic, among others. Bandwidth is assigned to each FlexNIC to fine-tune performance and eliminate
extra hardware.
FlexFabric—Combines Flex-10 and Flex-20 with Fibre Channel over Ethernet (FCoE) and accelerated iSCSI. HPE FlexFabric adapters provide a lossless
network environment for storage. Additionally, the FlexFabric adapters offload the storage protocols, improving CPU efficiency and storage
performance.
RDMA over Converged Ethernet (RoCE)—Dramatically improves data transfer efficiency with very low latency for applications such as
Microsoft® Hyper-V Live Migration, Microsoft SQL Server, and Microsoft Storage Spaces with SMB Direct (SMB 3.0). RoCE
reduces CPU utilization and helps maximize host VM density and server efficiency. Using SMB with RoCE, Hyper-V Live Migration is seven times
faster than over TCP/IP.
Tunnel offload—Minimizes the impact of overlay networking on host performance for Virtual Extensible LAN (VXLAN) and Network
Virtualization using Generic Routing Encapsulation (NVGRE). By offloading packet processing to adapters, customers can use overlay networking
to increase VM migration flexibility and network scale with minimal impact to performance. HPE tunnel offloading increases I/O throughput by up
to 122 percent, reduces CPU utilization, and lowers power consumption.¹
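To see why overlay networking burdens the host without hardware assistance, consider the headers the sender must build around every outbound frame. The following Python sketch is illustrative only (the VNI, source port, and frame size are arbitrary values, and a real host also adds outer IP and Ethernet headers); it shows the kind of per-packet encapsulation work that tunnel offload moves from the CPU to the adapter.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Wrap an inner Ethernet frame in a VXLAN header plus a minimal UDP header.

    A real host also builds outer IP and Ethernet headers; this shows only part
    of the per-packet work that tunnel offload shifts from the CPU to the adapter.
    """
    # VXLAN header: flags byte (0x08 = VNI present), 3 reserved bytes,
    # 24-bit VNI, 1 reserved byte
    vxlan_header = struct.pack("!B3xI", 0x08, vni << 8)
    payload = vxlan_header + inner_frame
    # Minimal UDP header: source port (normally derived from the inner flow),
    # destination port 4789, length, checksum left at zero
    udp_header = struct.pack("!HHHH", 49152, VXLAN_UDP_PORT, 8 + len(payload), 0)
    return udp_header + payload

inner = b"\x00" * 64                     # stand-in for a 64-byte inner frame
packet = vxlan_encapsulate(inner, vni=5001)
print(len(packet) - len(inner), "extra bytes built per packet before outer IP/Ethernet")
```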
TCP/IP Offload Engine (TOE)—The increased bandwidth of Gigabit Ethernet networks increases demand for CPU cycles to manage the
network protocol stack. This means that performance of even a fast CPU will degrade while simultaneously processing application instructions
and transferring data to or from the network. Computers most susceptible to this problem are application servers, web servers, and file servers
that handle many concurrent connections.
The ProLiant TCP/IP Offload Engine for Windows® speeds up network-intensive applications by offloading TCP/IP-related tasks from the
processors onto the network adapter. TOE network adapters have onboard logic to process common and repetitive tasks of TCP/IP network
traffic. This effectively eliminates the need for the CPU to segment and reassemble network data packets. Eliminating this work significantly
increases the application performance of servers attached to Gigabit Ethernet networks.
TOE is included on integrated Multifunction Gigabit Ethernet adapters and optional multifunction mezzanine cards. TCP Chimney Offload
is included as part of Windows Server® 2008 and later operating systems.
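As a rough illustration of the repetitive work a TOE adapter takes over, the hedged Python sketch below segments a 64 KB application write into MSS-sized pieces and computes the RFC 1071 ones'-complement checksum for each segment in software. The 1460-byte MSS is simply the common value for standard Ethernet frames, and a real protocol stack does considerably more (header construction, acknowledgments, retransmission state).

```python
def internet_checksum(data: bytes) -> int:
    """Ones'-complement checksum used by TCP/IP (RFC 1071), computed in software."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

def segment(buffer: bytes, mss: int = 1460) -> list:
    """Split an application send buffer into MSS-sized TCP segment payloads."""
    return [buffer[i:i + mss] for i in range(0, len(buffer), mss)]

app_buffer = b"x" * 65536                       # one 64 KB application write
segments = segment(app_buffer)
checksums = [internet_checksum(s) for s in segments]
print(f"{len(segments)} segments and checksums handled in software for one write")
```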
Receive Side Scaling (RSS)—RSS balances incoming short-lived traffic across multiple processors while preserving ordered packet delivery.
Additionally, RSS dynamically adjusts incoming traffic as the system load varies. As a result, any application with heavy network traffic running on
a multiprocessor server will benefit. RSS is independent of the number of connections, so it scales well. This makes RSS particularly valuable to
web servers and file servers handling heavy loads of short-lived traffic. Windows Server 2008 supports RSS as part of the operating system.
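Conceptually, RSS hashes each packet's flow identifiers and uses the result to select a receive queue, and therefore a CPU, so every packet of a given connection stays in order on one processor while different connections spread across processors. The Python sketch below models that idea with a generic hash; a real adapter uses a Toeplitz hash and an indirection table, and the addresses and port numbers shown are made up.

```python
import hashlib
from collections import Counter

NUM_CPUS = 8  # number of receive queues / processors

def rss_queue(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """Map a connection's 4-tuple to a receive queue (simplified stand-in for RSS).

    Any deterministic hash shows the key property: every packet of a given flow
    lands on the same CPU, so ordered delivery is preserved per connection.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return hashlib.sha256(key).digest()[0] % NUM_CPUS

# 1000 short-lived client connections to one web server spread across the CPUs,
# while each individual connection always maps to a single queue.
flows = [(f"10.0.0.{i % 50}", 40000 + i, "10.0.0.1", 80) for i in range(1000)]
load = Counter(rss_queue(*f) for f in flows)
print("connections per CPU:", dict(sorted(load.items())))
```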
iSCSI Acceleration—Accelerated iSCSI offloads the iSCSI function to the NIC rather than taxing the server CPU. Accelerated iSCSI is enabled by
the HPE ProLiant Essentials Accelerated iSCSI Pack that is used with certain embedded Multifunction NICs in Windows and Linux®
environments.
iSCSI boot for Linux—iSCSI boot for Linux is available on HPE FlexFabric 10Gb 536FLB and 534M adapters. iSCSI boot allows the host server
to boot from a remote OS image located on a SAN within a Red Hat® or SUSE Linux environment. The host server uses an iSCSI firmware image
(iSCSI boot option ROM), making the remote disk drive appear to be a local, bootable “C” drive. Administrators can configure the server to
connect to and boot from the iSCSI target disk on the network. The server then downloads the OS image from the iSCSI target disk. The HPE iSCSI boot
solution also includes scripts to significantly simplify the installation process. Adding an iSCSI HBA card is not required.
Single Root-I/O Virtualization (SR-IOV)—Allows multiple virtual machines (VMs) running Windows Server 2012 to share a single
SR-IOV-capable PCIe NIC while retaining the performance benefit of having one PCIe device to one VM association. By assigning a
Virtual Function (VF) to each VM, multiple VMs can share a single SR-IOV capable PCIe NIC that may have just one physical network port.
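The scenario above is Hyper-V on Windows Server 2012, where Virtual Functions are created and assigned through the hypervisor's management tools. For a feel of what VF creation looks like at the host level, the sketch below uses the standard Linux sysfs attributes (sriov_totalvfs and sriov_numvfs) instead; the interface name enp1s0f0 is a placeholder, and the script needs root privileges and an SR-IOV-capable adapter to actually run.

```python
from pathlib import Path

# Placeholder interface name for an SR-IOV-capable port; substitute your own.
DEVICE = Path("/sys/class/net/enp1s0f0/device")

def create_vfs(requested: int) -> int:
    """Ask the PF driver to create Virtual Functions via the Linux sysfs interface."""
    total = int((DEVICE / "sriov_totalvfs").read_text())
    count = min(requested, total)
    # Each VF created here appears as its own PCIe function that a hypervisor
    # can pass through to a VM, bypassing the software virtual switch.
    (DEVICE / "sriov_numvfs").write_text(str(count))
    return count

if __name__ == "__main__":
    print("VFs created:", create_vfs(4))
```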
¹ Testing was conducted by turning the tunnel offload function on or off for measurement of VXLAN bidirectional throughput, VXLAN transmit and receive physical CPU effectiveness, and host server power efficiency.
Network virtualization allows a single adapter port to operate as four separate adapters (Figure 2) for the server’s operating system.
HPE FlexFabric components fully support network virtualization. Using open standards programmability, HPE FlexFabric components allow you
to software-define your data center’s network infrastructure to the functionality you need.
Figure 2. A single network port handling four virtualized NICs.
We pioneered network virtualization in BladeSystem with HPE Virtual Connect for ProLiant Gen9 servers, and now offer similar capability
through NIC Partitioning in select network adapters for ProLiant rack and tower servers. Both Virtual Connect and NIC Partitioning are discussed
later in this document.
HPE FlexibleLOM technology
LAN-on-motherboard (LOM) technology provides essential network connectivity without requiring an optional network card to be installed in an
expansion slot. While the LOM design leaves standard expansion slots available for other functions, it also limits your connectivity options.
We developed our innovative FlexibleLOM technology, which uses a FlexibleLOM module that attaches to a dedicated connector on the server
blade or system board. FlexibleLOM technology maintains the close-coupled interface of a LOM while allowing you to select the connectivity you
need now—and adapt to network changes in the future without using a standard PCIe slot. FlexibleLOM technology is available on
ProLiant Gen9 server blades and select ProLiant Gen9 rack mounted servers.
HPE ProLiant c-Class BladeSystem networking
The HPE ProLiant c-Class BladeSystem uses a modular architecture of server blades, midplanes, and interconnect modules. This modularity
offers high server density and operational flexibility—an environment well suited for a converged infrastructure.
Virtual Connect with Flex-10 and Flex-20
A data center with high server density and varied network topologies can result in an overly complex network infrastructure.
HPE ProLiant Gen9 servers for BladeSystem offer HPE Virtual Connect, which employs network convergence to substantially reduce and
simplify server edge connectivity as well as increase network efficiency. Virtual Connect (VC) offers wire-once simplicity, allowing you to connect
blade servers and virtual machines to data and storage networks using Ethernet, Fibre Channel, and iSCSI protocols over a single port.
When you use HPE VC Flex-10 or Flex-20 adapters and interconnect modules, VC converges the LAN traffic into 10/20 Gb Ethernet streams on
each of the adapter’s ports. VC dynamically partitions the network traffic between the adapters and a FlexFabric module. Dynamic partitioning
allows you to adjust the partition bandwidth for each FlexNIC (up to four per port). Utilizing HPE OneView 3.0, Virtual Connect Firmware v4.50
and the HPE Virtual Connect FlexFabric-20/40 F8 Module, dynamic partitioning can adjust bandwidth for up to eight FlexNICs.
The HPE Virtual Connect FlexFabric family supports both single- and dual-hop FCoE. You can use one FlexFabric module for all your data and
storage connection needs. Single-hop FCoE enables a converged fabric at the server end without impacting traditional LAN and SAN. Dual-hop
FCoE bridges the gap between a converged fabric at the server end and the aggregation layer.
The HPE Virtual Connect FlexFabric-20/40 F8 Module (Figure 3) works with HPE FlexFabric Flex-20 adapters to eliminate up to 95 percent² of
network sprawl at the server edge. This device converges traffic inside enclosures and directly connects to external LANs and SANs. Flex-20
technology with the industry’s first high-speed 20 Gb connections to servers can achieve a total bandwidth of up to 240 Gbps to a LAN and
SAN—a 3X improvement over legacy 10G VC modules.
Figure 3. HPE Virtual Connect FlexFabric-20/40 F8 Module.
A redundant pair of Virtual Connect FlexFabric-20/40 F8 Modules provides eight adjustable downlink connections (six Ethernet and two
Fibre Channel, or six Ethernet and two iSCSI, or eight Ethernet) to dual-port 20 Gb FlexFabric adapters on servers. Up to eight uplinks are
available for connection to upstream Ethernet (up to 40GbE) and Fibre Channel switches.
FlexibleLOM for server blades
The FlexibleLOM Blade adapter (designated “FLB” in the name) installs as a daughter card on the server blade board (Figure 4).
Figure 4. FLB adapter installation for a ProLiant server blade with the side cover removed.
The c-Class enclosure System Insight Display provides the FLB adapter’s network status (Figure 5).
Figure 5. FlexibleLOM information from the enclosure’s System Insight Display for a selected blade.
² HPE internal calculations comparing the number of hardware components of traditional infrastructure vs. HPE BladeSystem with two Virtual Connect FlexFabric modules.
Network adapters for ProLiant Gen9 server blades
Table 1 lists the latest network adapters available for ProLiant Gen9 server blades.
Table 1. Optional network adapters introduced with HPE ProLiant Gen9 c-Class server blades.

Adapter name | Fabric connectivity | Features
HPE FlexFabric 10Gb 2-Port 536FLB Adapter | 10 Gb Ethernet, Virtual Connect Flex-10 | TCP Offload Engine, MSI/MSI-X, virtualization features
HPE Ethernet 10Gb 2-Port 560FLB Adapter | 10 Gb Ethernet, iSCSI | HPE teaming, iLO, LSO and checksum offload, time stamping (IEEE 1588), virtualization features, SR-IOV compliant
HPE Ethernet 10Gb 2-Port 570FLB Adapter | 10 Gb Ethernet, iSCSI | HPE teaming, iLO, LSO and checksum offload, time stamping (IEEE 1588), virtualization features, SR-IOV compliant
HPE FlexFabric 20Gb 2-port 630FLB Adapter | 20 Gb Ethernet, Virtual Connect Flex-20 (NIC, FCoE, iSCSI per port) | HPE teaming, iLO, LSO and checksum offload, time stamping (IEEE 1588), virtualization features, SR-IOV compliant, NVGRE
HPE FlexFabric 20Gb 2-port 630M Adapter | 20 Gb Ethernet, Virtual Connect Flex-20 (NIC, FCoE, iSCSI per port) | HPE teaming, iLO, LSO and checksum offload, time stamping (IEEE 1588), virtualization features, SR-IOV compliant, NVGRE
HPE FlexFabric 20Gb 2-port 650FLB Adapter | 20 Gb Ethernet, Virtual Connect Flex-20 (NIC, FCoE, iSCSI per port) | HPE teaming, iLO, LSO and checksum offload, time stamping (IEEE 1588), virtualization features, SR-IOV compliant, VXLAN, NVGRE, RoCE*
HPE FlexFabric 20Gb 2-port 650M Adapter | 20 Gb Ethernet, Virtual Connect Flex-20 (NIC, FCoE, iSCSI per port) | HPE teaming, iLO, LSO and checksum offload, time stamping (IEEE 1588), virtualization features, SR-IOV compliant, VXLAN, NVGRE, RoCE*

* Upcoming feature (for testing only)
HPE ProLiant rack and tower networking
Network convergence and virtualization are no less beneficial to rack and tower IT infrastructures than they are to BladeSystem. Like
ProLiant Gen9 servers for BladeSystem, ProLiant Gen9 rack and tower servers also implement FlexibleLOM technology.
NIC Partitioning
NIC Partitioning brings the adapter virtualization capabilities of HPE BladeSystem Virtual Connect to network adapters for ProLiant Gen9 rack
and tower servers. Like Virtual Connect, NIC Partitioning allows a single 10G port to be presented to the operating system as four separate
NIC Partitions (NPARs). Each NPAR is an actual PCIe Function (PF) that appears to the system ROM, OS, or virtualization OS as a discrete
physical port with its own software driver, behaving as an independent NIC, iSCSI, or FCoE adapter. This allows an NPAR-capable 10G dual-port adapter to provide up to eight network functions (Figure 6). NIC Partitioning allows you to set the bandwidth appropriate for each NPAR’s
task. Furthermore, you can configure the weighting of each NPAR to provide increased bandwidth presence when an application requires it, and
can set QoS for each NPAR.
Figure 6. NIC Partitioning of an NPAR-capable 10G dual-port adapter, with provisioned bandwidth.
Each virtual machine (VM) assigned to a specific NPAR can acquire as much free bandwidth as is available, while incrementally yielding
bandwidth back as demand increases on the other partitions.
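A minimal model of the bandwidth behavior just described: each partition is provisioned a share of the port in proportion to its configured weight, idle partitions yield what they are not using, and busy partitions absorb the spare capacity up to their demand. The weights and traffic figures below are illustrative, and real NPAR provisioning is performed through the adapter's configuration utility rather than code like this.

```python
def allocate(port_gbps: float, weights: dict, demand: dict) -> dict:
    """Share one port's bandwidth among NPARs by weight, reclaiming idle capacity."""
    total_weight = sum(weights.values())
    # Each partition's provisioned share, proportional to its configured weight
    share = {p: port_gbps * w / total_weight for p, w in weights.items()}
    # Idle partitions only take what they need and yield the rest...
    alloc = {p: min(share[p], demand[p]) for p in share}
    spare = port_gbps - sum(alloc.values())
    # ...and busy partitions absorb the spare capacity, again by weight,
    # capped at their actual demand.
    busy = {p: w for p, w in weights.items() if demand[p] > alloc[p]}
    for p, w in busy.items():
        alloc[p] = min(demand[p], alloc[p] + spare * w / sum(busy.values()))
    return alloc

# Illustrative weights and traffic demand (in Gbps) for one 10 Gb port
weights = {"management": 1, "vm_traffic": 4, "migration": 3, "storage": 2}
demand  = {"management": 0.1, "vm_traffic": 9.0, "migration": 0.0, "storage": 3.0}
print(allocate(10.0, weights, demand))
```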
NIC Partitioning offers the following benefits over standard network connectivity:
• Reduce network sprawl—connecting servers to LANs and SANs can be consolidated at the server-to-network edge.
• Maximize scalability—converged network functionality makes adding servers or network devices easier.
• Optimize resource allocation—a server with a dual-port adapter supporting eight functions can efficiently handle its bandwidth requirements.
• Improve CPU utilization—shifting the network virtualization task from the OS to the adapter frees up the CPU for other work.
• Simplify administration—adding or replacing network cards or moving workloads from one partition to another is accomplished within minutes.
NIC Partitioning is switch-agnostic and works with a variety of standard Ethernet switches or pass-through devices. NIC Partitioning provides a
significant benefit for consolidation of server-to-network connectivity and bandwidth utilization within a given server.
FlexibleLOM for rack mount servers
The FlexibleLOM Rack (FLR) adapter for rack mount servers connects to the system board and secures to the chassis floor with a captive
thumbscrew (Figure 7). As with traditional LOM implementations, the FLR adapter’s network connectors are accessible through the rear panel of
the server chassis.
Figure 7. FLR adapter installation for a ProLiant rack mount server.
The FLR adapter maintains the full functionality of typical LOM architecture including Wake-on-LAN and thermal management. You can view the
status of an FLR adapter on the System Insight Display (Figure 8).
Figure 8. The FLR adapter’s NIC indicators on the server’s front panel.
NIC adapters for ProLiant Gen9 rack and tower servers
The embedded adapter on ProLiant Gen9 rack and tower servers can be upgraded to a variety of 1 Gb and 10 Gb network adapters to meet your
needs.
Table 2. Optional network adapters available for HPE ProLiant Gen9 rack mount servers.

Adapter name | Fabric connectivity | Features
HPE Ethernet 1Gb 4-port 366FLR Adapter | 1 Gb Ethernet | Above features plus TCP checksum and segmentation, MSI-X, Wake-on-LAN, Jumbo frames, IEEE 1588
HPE Ethernet 10Gb 2-Port 530FLR-SFP+ Adapter | 10 Gb Ethernet | VLAN tagging, NIC teaming, PXE boot, SR-IOV compliant, NPAR
HPE FlexFabric 10Gb 2-port 534FLR-SFP+ Adapter | 10 Gb Ethernet | VLAN tagging, NIC teaming, PXE boot, SR-IOV compliant, NPAR
HPE Ethernet 10Gb 4-Port 536FLR-T Adapter | 10 Gb Ethernet | VLAN tagging, NIC teaming, PXE boot, SR-IOV compliant, NPAR
HPE InfiniBand QDR/Ethernet 10Gb 2-Port 544FLR-QSFP Adapter | InfiniBand (IB) or Ethernet | Dual functionality: IB bandwidth up to 56 Gbps, Ethernet bandwidth up to 10 Gbps
HPE FlexFabric 10Gb 2-Port 556FLR-SFP+ Adapter | 10 Gb Ethernet, iSCSI, FCoE, FlexFabric | Converged enhanced Ethernet standards, tunnel offload support for VXLAN and NVGRE, RoCE
HPE Ethernet 10Gb 2-port 560FLR-SFP+ Adapter | 10 Gb Ethernet | Jumbo frames, PXE boot, checksum segmentation offload, time stamping (IEEE 1588), virtualization support, SR-IOV compliant
HPE Ethernet 10Gb 2-port 530FLR-SFP+ Adapter | 10 Gb Ethernet | Checksum segmentation offload, time stamping (IEEE 1588), virtualization support, SR-IOV compliant
Comparing virtualization technologies
We have discussed network virtualization technologies that are available to ProLiant Gen9 servers. These technologies achieve similar
functionality and results, but employ different methods and dissimilar means of management. The following sections compare the technologies
and offer guidance for which may be best suited for your environment. More information on Virtual Connect and converged networks in general
is available through links provided in the Resources section of this paper.
Comparison of Virtual Connect to NIC Partitioning
Virtual Connect and NIC Partitioning both allow server blades and rack mounted servers to achieve similar convergence goals. Their differences
are primarily in how they are managed. VC manages QoS across multiple servers, and orchestrates network management across multiple
adapters, switches, and servers. NPAR management is limited to managing a single adapter within a single server at a time, and does not have
the scalability of VC.
Table 3. Comparison of Virtual Connect to NIC Partitioning.

Feature | Virtual Connect | NIC Partitioning
Server platform | ProLiant c-Class and Integrity BladeSystem | ProLiant rack and tower servers
Requirements | Flex-10/20 adapter, Virtual Connect FlexFabric module | HPE 10G 530FLR and 534FLR adapters
Number of functions | Up to 8* per 2-port device (4 FlexNICs per port) | Up to 8 per 2-port device (4 NPARs per port)
Management | Consolidated across multiple adapters, switches, and servers | Utility controls one adapter per server
Benefits | VM migration; single-point management of multiple adapters, switches, and servers; scalable deployments | Reduces hardware costs; improves CPU utilization; switch agnostic
Disadvantages | Uses CPU bandwidth; requires local management; lacks scalability; adding partitions requires reboot

* Up to 16 (or eight FlexNICs per port) with HPE OneView 3.0, HPE Virtual Connect v4.50, and the HPE Virtual Connect FlexFabric-20/40 F8 Module
Comparison of Flex-10/20 to SR-IOV
HPE Flex-10/20 Technology and SR-IOV both provide improved I/O efficiency without increasing the overhead burden on CPUs and network
hardware. Both Flex-10/20 and SR-IOV technologies are hardware-based solutions, but each uses a different approach.
Table 4. Comparison of Flex-10/20 to SR-IOV.

Feature | Flex-10/Flex-20 technology | SR-IOV
Server platform | ProLiant c-Class and Integrity BladeSystem | ProLiant rack and tower servers
Requirements | Flex-10/20 adapter, Virtual Connect FlexFabric-20/40 F8 Module | Windows Server 2012, SR-IOV compliant NIC adapter
Virtual Ethernet Bridge (VEB) deployment | Deployed in hypervisor software | Deployed in NIC hardware
Number of functions possible | Up to 8 per 2-port device (4 FlexNICs per port) | Up to 256 PCIe Functions
Benefits | Good performance between VMs; partitioned data flow is more cost-effective; flexible management and compliance with network standards | Reduces CPU utilization; increased network performance due to direct data transfer between NIC and guest OS memory; scalability
Disadvantages | Uses CPU bandwidth; requires local management; lacks scalability; adding partitions requires reboot
Resources
HPE Virtual Connect general information:
hpe.com/info/virtualconnect
“Converged Networks and Fibre Channel over Ethernet” white paper:
h20566.www2.hpe.com/hpsc/doc/public/display?sp4ts.oid=5039733&docLocale=en_US&docId=emr_na-c01681871
“Overview of HPE Virtual Connect technologies” technical white paper:
h20195.www2.hpe.com/V2/GetDocument.aspx?docname=4AA4-8174ENW&cc=us&lc=en
Implementing Windows Server 2012 SR-IOV on HPE ProLiant servers:
h20566.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c03514877&lang=en-uk&cc=uk
Learn more at
hpe.com/us/en/servers/networking.html
© Copyright 2014, 2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without
notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard
Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States
and/or other countries. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. Linux is the registered
trademark of Linus Torvalds in the U.S. and other countries. All other third-party trademark(s) is/are property of their respective owner(s).
4AA5-4076ENW, November 2016, Rev. 2