Ethernet Fabrics
Extreme Networks ExtremeFabric
ExtremeFabric is a fully meshed, auto-configuring routed Fabric that uses host routes to allow seamless IP mobility for any host on the network. It supports logical L2 domains with VXLAN and logical L3 domains with VRFs.
Mikael Holmberg
Sr. Global Consulting Engineer
Disclaimer
This product roadmap represents the current product direction of Extreme Networks®.
All product releases will be on a when-and-if-available basis.
Actual feature development and timing of releases will be at the sole discretion of Extreme
Networks.
Not all features are supported on all platforms.
Presentation of the product roadmap does not create a commitment by Extreme Networks to
deliver a specific feature.
Contents of this roadmap are subject to change without notice.
Agenda
 Why are Ethernet Fabrics needed?
 ExtremeFabric – what is it?
 Spine/Leaf Architecture
 Summit X870/X690
Why Fabric?
Traditional 3-Tier Campus/Data Center Architecture
 Simple to build and good for north-south traffic
 But…
– Not great for east-west traffic
– Limited scalability: upstream links must be large enough to handle aggregated downstream links
– Uplink to the rest of the network typically goes through the core
– Inefficient multi-pathing (due to STP) can cause unpredictable latency
[Diagram: 3-tier or “fat-tree” network]
A new design approach is needed!
Why Fabric?
Fixed-Form Factor displacing Chassis
 Performance:
– Newer silicon has put fixed-form switches on par with chassis
 Availability:
– Redundant connectivity allows 100% uptime
 Flexibility:
– Customers avoid chassis vendor lock-in
– Less rack space, power, cooling, etc.
 Scalability:
– Chassis runs out of slots
 Cost Savings:
– Price per port is lower with fixed form factor
– Better utilization of power, rack space, cooling, cabling
[Chart: X870 3.2 Tb vs. BD8K 3.8 Tb (total) vs. BDX8 2.56 Tb (per slot)]
Modular Chassis Market in Decline
CAGR 2015-2020:
  Software*  14.7%
  Modular    -0.8%
  Fixed       1.4%
  WLAN       11.2%
  TAM         3.1%
*Software = switch OS and SDN controllers (i.e., not NMS, NAC, etc.)
“The emergence of high-density fixed form factor switches can reduce or eliminate the need for costlier, oversized chassis-based switches in the data center. The move toward FFF switches will help network managers deliver higher-performance networks and reduce footprint, power, cooling and TCO.”
– Mark Fabbi, Gartner Data Center Analyst
Why Fabric?
Leaf-Spine dominating Data Center & even entering Campus designs
 Scales beyond LAG and STP – uses all the links
 Simplifies: flattens the network – eliminates unnecessary layers
 Improves latency – fewer layers to traverse
 Reduces CapEx – the cost of high-performance fixed switches keeps dropping
 Reduces downtime – redundant spines enable ISSU
[Diagram: Spine and Leaf tiers]
But an efficient, resilient multi-path protocol between switches is needed!
How does the industry define an Ethernet Fabric*?
 Interconnects switches over a single logical L2 and/or L3 construct
 Supports multiple active paths and fast failover
– ECMP-based forwarding & flow-based load balancing
– Eliminates need for STP
 Uses zero/low-touch provisioning
– Especially when adding and removing fabric devices
 Operates over a variety of topologies
– Leaf-Spine, mesh, etc.
 Is highly scalable, and more…
– Centralized management; not limited by the 4096-VLAN limit
* “Innovation Insight for Ethernet Switching Fabric”, April 29, 2016
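To make the ECMP bullet concrete, here is a toy Python sketch of flow-based load balancing (an illustration only, not how switch silicon implements it): hashing a flow’s 5-tuple always selects the same equal-cost next hop, so packets of one flow stay in order while different flows spread across all links.

import hashlib

def ecmp_next_hop(flow, next_hops):
    # Hash the 5-tuple; the same flow always maps to the same path.
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return next_hops[int.from_bytes(digest[:4], "big") % len(next_hops)]

spines = ["spine-1", "spine-2", "spine-3", "spine-4"]
flow = ("192.0.2.10", "198.51.100.7", 6, 49152, 443)  # src, dst, proto, sport, dport
print(ecmp_next_hop(flow, spines))  # deterministic per flow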
ExtremeFabric
ExtremeFabric Introduction
ExtremeFabric is a network of cooperating interconnected devices that create a Fabric of any scale for any topology,
providing fully redundant, multipath routing. The Fabric grows dynamically and freely, not bound to any well-known
topology such as Clos or Leaf/Spine.
ExtremeFabric nodes build a secure Fabric by running the very scalable BGP protocol to exchange topology information
about the location of IP Hosts. It uses IPv6 as the network layer to transport IPv4 and IPv6 traffic. Host addresses can be
IPv4/32 or IPv6/128 addresses.
Summary
• Zero touch, zero local configuration
• RESTful API to manage the Fabric
• Secure with TLS and certificates and/or 802.1X
• If present, policy rules, configuration, and metadata are identical for all Fabric nodes
• Equal Cost Multi-Path (ECMP) through the Fabric
• Automatic DHCP Relay Services
• LAG-attached Bridges and Servers with EasyLAG to multiple ExtremeFabric nodes
• IPv4/IPv6 attached hosts – 32/128-bit host routes
• Default Gateway / Static routes via REST configuration
ExtremeFabric – What is it?
Combines the fabric elements into a single domain:
• The Fabric is a collection of individual L3 nodes
• Fabric transparent to end devices
• Policy and overlays applied at the Fabric edge
• No subnets, no VLANs, no VRFs required within the Fabric
[Diagram: directly attached hosts, a gateway router, a bridged network, a server, and a VXLAN overlay network (via LAG/MLAG) all attach to the Fabric; EMC with ZTP+ supplies configuration, policy, and state.]
ExtremeFabric Element ID and Discovery
Network Element ID and Discovery
• 802.1X for node authentication
• DHCP for IP address discovery
• LLDP to identify other ExtremeFabric nodes
[Diagram: each inter-node link runs 802.1X, DHCP, and LLDP as nodes join the Fabric.]

ExtremeFabric Network Fabric
• LLDP with a BGP flag is used to find other ExtremeFabric nodes
[Diagram: every Fabric link carries LLDP advertisements with the BGP flag set.]

ExtremeFabric IP Core
• IPv6 link-local address discovery is used to form the IPv6 core
ExtremeFabric BGP Peering
An automated topology process uses LLDP to initiate the BGP-based ExtremeFabric connection:
• Zero-touch, no BGP configuration needed
• BGP route forwarding based on dynamically assigned ASNs
[Diagram: nodes exchange LLDP with EasyBGP on every link, forming a BGP-based Fabric.]
LLDP Extensions
LLDP custom vendor extensions
ExtremeFabric nodes extend LLDP to advertise Extreme Easy Networking functions.

TLV Type 127 – Vendor Specific; Octets 1-6 – Common preamble:
  Octet  Function         Value/meaning
  1      Type (7 bits)    127 – Vendor Specific
  1-2    Length (9 bits)  Variable by Subtype
  3-5    OUI              D8-84-66 – Extreme OUI
  6      OUI Subtype      1 – Extreme Capabilities; 2 – ExtremeFabric

Extreme Capabilities (Subtype 1):
  Octet #  Function
  1-6      Common Preamble
  7-8      Capabilities

Capabilities is a 16-bit mask indicating ExtremeFabric capabilities on the advertising link. ExtremeFabric supports two capabilities: EasyLAG and EasyBGP. If the EasyBGP bit is set, the neighbor can participate in a BGP session. Either side of the IP link may initiate an external BGP session as long as EasyBGP parameters are also present in the LLDP advertisement.

Capability mask bits (B, L, then 14 reserved R bits):
  Bit  Function                 Value/meaning
  B    easyBGP                  0 = not capable of EasyBGP; 1 = capable of EasyBGP
  L    easyLAG                  0 = not capable of EasyLAG; 1 = capable of EasyLAG
  R    Reserved for future use  14 bits – must be 0
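As a minimal Python sketch of this encoding: the OUI (D8-84-66), subtype 1, the 7-bit type / 9-bit length header, and the 16-bit mask width come from the tables above, while the exact bit positions of B and L inside the mask are assumptions made for illustration.

import struct

EXTREME_OUI = bytes.fromhex("d88466")   # D8-84-66, the Extreme OUI from the table
TLV_TYPE_VENDOR = 127                   # vendor-specific TLV type

def capabilities_tlv(easy_bgp: bool, easy_lag: bool) -> bytes:
    # Assumed layout: bit 15 = B (easyBGP), bit 14 = L (easyLAG),
    # remaining 14 reserved bits must be zero.
    mask = (easy_bgp << 15) | (easy_lag << 14)
    value = EXTREME_OUI + bytes([1]) + struct.pack("!H", mask)        # subtype 1 + mask
    header = struct.pack("!H", (TLV_TYPE_VENDOR << 9) | len(value))   # 7-bit type, 9-bit length
    return header + value

print(capabilities_tlv(True, True).hex())   # -> fe06d8846601c000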
LLDP Extensions
EasyBGP (Subtype 2)
  Octet #      BGP Parameter
  1-6          Common Preamble
  7-10         AS-Number
  11-14        Router-ID
  15-20 or 32  IP Link Address

The EasyBGP session will establish with the presence of the subtype within the LLDP advertisement. The session will drop and re-connect if any of these parameters change.

IP Link Address – describes the IP link address to use when establishing a BGP session:
  Octet        Function        Value/meaning
  15           Address Length  4 octets – IPv4 address length; 16 octets – IPv6 address length
  16           Address Family  1 – IPv4; 2 – IPv6 (default)
  17-20 or 32  Address         IP address (default IPv6 link-local)

AS-Number – allocated from the 4-octet private number space and derived from the Chassis IPv4 address: 0xFE followed by the last 3 bytes of the Chassis IP address. Because IP addresses are allocated from an authoritative source such as DHCP, duplicate addresses (and hence duplicate AS-Numbers) are statistically improbable. All other AS-Numbers are valid as long as they are in accordance with RFC 6793.

Router Identifier – the BGP Router-ID is a unique value throughout the ExtremeFabric domain identifying the BGP node.
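A minimal Python sketch of the ASN derivation described above (0xFE prepended to the last 3 bytes of the chassis IPv4 address); the example address is illustrative:

import ipaddress

def fabric_asn(chassis_ip: str) -> int:
    # 0xFE followed by the last 3 bytes of the chassis IPv4 address
    octets = ipaddress.IPv4Address(chassis_ip).packed
    return int.from_bytes(b"\xfe" + octets[1:], "big")

asn = fabric_asn("10.20.30.40")          # illustrative address
print(asn, hex(asn))                     # 4262731304 0xfe141e28

The result always lands in the 4-octet range, consistent with the RFC 6793 requirement noted above.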
Advertising Host Routes using BGP
Redistribution of host routes and forwarding

Using IPv6 neighbor discovery and IPv4 ARP discovery, ExtremeFabric nodes propagate all host routes learned locally via redistribution into BGP and install them into the appropriate IPv4 or IPv6 FIBs. This allows forwarding to IPv4 and IPv6 hosts to take place throughout the ExtremeFabric, thereby achieving end-to-end connectivity.

BGP Myth vs. BGP Reality with ExtremeFabric:

Myth: It’s difficult to configure.
Reality: No configuration of BGP is required with ExtremeFabric. Instead, BGP is automatically started with pre-set parameters whenever a new switch enters the fabric.

Myth: It’s complex.
Reality: BGP routing is actually less complex than link-state routing protocols, like OSPF. Besides, ExtremeFabric BGP operation is transparent to the network administrator.

Myth: It converges slowly.
Reality: ExtremeFabric substitutes BFD (RFC 5880) for BGP’s standard timers to deliver sub-second convergence.

Myth: It doesn’t support neighbor discovery.
Reality: ExtremeFabric uses IPv6 neighbor discovery to build its BGP route forwarding paths.

Myth: It’s hard to troubleshoot.
Reality: Since ExtremeFabric is IP-based, standard IP tools (ping, traceroute, etc.) can be used for troubleshooting.
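As a conceptual model of this behavior (not EXOS code), the sketch below treats each learned host as a /32 or /128 entry in a per-family FIB, installed locally when discovered via ARP/ND and remotely when received from a BGP peer:

import ipaddress

fib = {4: {}, 6: {}}                     # per-address-family forwarding tables

def learn_host(addr: str, local_port: str) -> str:
    # Host learned via ARP (IPv4) or neighbor discovery (IPv6):
    # install a /32 or /128 host route, then redistribute into BGP.
    ip = ipaddress.ip_address(addr)
    prefix = f"{ip}/{32 if ip.version == 4 else 128}"
    fib[ip.version][prefix] = local_port
    return prefix

def bgp_update(prefix: str, peer: str) -> None:
    # Host route received from another ExtremeFabric node.
    version = 6 if ":" in prefix else 4
    fib[version].setdefault(prefix, f"BGP peer {peer}")

learn_host("192.0.2.10", "port 1")       # locally attached IPv4 host
bgp_update("2001:db8::5/128", "EF2")     # remote host advertised over BGP
print(fib)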
ExtremeFabric Attachment
The ExtremeFabric network supports the following attachment types:

Hosts – Hosts may attach over a single link or, in the case of LACP, over multiple links. ExtremeFabric supports EasyLAG, which gives it an inherent MLAG function without running the MLAG protocol. Other network attachments to the fabric require cloud-managed configurations and policies. These policies may enable additional services on an access port, such as standard routing and VPNs, as well as exception rules for handling hosts connected behind bridged networks.

Gateway Routers (multiple) – All ExtremeFabric nodes keep a list of Default Gateways that are provided by the configuration server defined by the RESTful schema. Packets should be forwarded toward the Default Gateways using an ECMP mechanism if the ExtremeFabric node determines that the host route is outside of its network.

Bridge devices – Bridges can attach via LAG, as depicted in the diagram. Servers may also connect via LAG, as may directly attached IP hosts. Note that all hosts, even those behind bridges, are discovered and identified as IPv4- and/or IPv6-addressed hosts. MAC addresses are not learned by ExtremeFabric nodes; only the ARP or neighbor table is consulted to resolve an IP address to a MAC, which is needed for Layer-2 framing.

[Diagram: a bridged network and a server attach to the Fabric via EasyLAG; users attach directly to ExtremeFabric nodes.]
EasyLAG (aka Fabric LAG)
A Fabric-wide LAG is dynamically formed to each host that attaches to the ExtremeFabric network with LACP enabled. All ExtremeFabric nodes use the same chassis MAC, so hosts running LACP will form a single LAG with the entire Fabric, as each host views the network of ExtremeFabric nodes as a single entity. The domain-wide chassis MAC is chosen by the Fabric but can be configured via the shared OneConfig.

The ExtremeFabric network will always provide full connectivity. Non-preferred paths may be utilized while ExtremeFabric nodes discover end users using the host mobility feature. Eventually, the Fabric will converge on the best path. In this example, hosts A and B are behind a bridge LAG to the ExtremeFabric, attached through EF1 and EF5:

1. Using ARP, A and B are discovered by their respective EF nodes and advertised by BGP throughout the Fabric.
2. The Fabric can uniquely identify each LAG and assign a value to a BGP attribute along with the host address.
3. All ExtremeFabric nodes connected to the same LAG can do host-mobility checks to completely discover all users.
4. An optimization would be to send a BGP attribute, using either an Address Family or an extended community name, to identify a common LAG, as EF1 and EF5 use the same EasyLAG.
5. Having this information allows ExtremeFabric nodes to take a direct path to each host instead of a non-preferred path through the Fabric.

Resulting FIBs from the example (A/32 learned at EF1, B/32 learned at EF5):
  EF1 FIB: Dest A/32, Nhop LAG port; Dest B/32, Nhop BGP port EF2
  EF5 FIB: Dest A/32, Nhop BGP port EF4; Dest B/32, Nhop LAG port
The BGP-learned entries represent the non-preferred A-B forwarding paths through the Fabric; the preferred paths use the shared LAG directly.
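A simplified Python model of the optimization in steps 4-5, under the assumption that a node prefers its direct LAG attachment over a BGP-learned path once the shared EasyLAG has been identified:

# FIB candidates per node; the first entry is the pre-optimization next hop
# from the example FIBs, the second is the alternative learned in step 4.
routes = {
    "EF1": {"A/32": ["LAG port", "BGP port EF2"],
            "B/32": ["BGP port EF2", "LAG port"]},
    "EF5": {"A/32": ["BGP port EF4", "LAG port"],
            "B/32": ["LAG port", "BGP port EF4"]},
}

def best(next_hops):
    # Prefer the direct LAG attachment once the shared EasyLAG is known.
    return next((nh for nh in next_hops if nh == "LAG port"), next_hops[0])

for node, table in routes.items():
    for dest, nhops in table.items():
        print(f"{node} FIB: Dest {dest}, Nhop {best(nhops)}")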
ExtremeFabric VXLAN Overlay
• VXLAN is terminated at the ExtremeFabric (Fabric VTEP) and treated like any L2
• VXLAN MAC routes are distributed by BGP E-VPN: local MAC routes are learned at the edge, and host routing runs across the Fabric
• MBGP is used to carry VXLAN VNI and LTEP information, using a proprietary SAFI in the capability field of the BGP Open Message to indicate support of the VXLAN address family
[Diagram: MAC routes for hosts A, W on one edge and B, X, Y, Z on the other propagate between Fabric VTEPs.]
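For illustration, the sketch below packs a standard RFC 4760 Multiprotocol capability (capability code 1: AFI, a reserved octet, SAFI) as it would appear in a BGP Open message; the actual proprietary SAFI value used by ExtremeFabric is not given in the slides, so the value here (200) is a made-up placeholder.

import struct

def mp_capability(afi: int, safi: int) -> bytes:
    # RFC 4760 Multiprotocol capability: code 1, then AFI (2 octets),
    # a reserved octet, and the SAFI (1 octet).
    value = struct.pack("!HBB", afi, 0, safi)
    return struct.pack("!BB", 1, len(value)) + value

cap = mp_capability(afi=25, safi=200)    # AFI 25 = L2VPN; SAFI 200 is a placeholder
print(cap.hex())                         # -> 0104001900c8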
ExtremeFabric ISSU
Orchestrated Fabric upgrade via ExtremeManagement or other tools:
• Upgrade half of the fabric
• Then the other half
• Servers remain reachable during the upgrade
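A minimal sketch of this half-at-a-time upgrade logic; upgrade_node and wait_until_rejoined stand in for hypothetical orchestration hooks (ExtremeManagement’s actual API is not shown in the slides):

def rolling_fabric_upgrade(nodes, upgrade_node, wait_until_rejoined):
    # Upgrade one half of the fabric, confirm it is forwarding again,
    # then repeat for the other half, so servers stay reachable throughout.
    half = len(nodes) // 2
    for batch in (nodes[:half], nodes[half:]):
        for node in batch:
            upgrade_node(node)
        for node in batch:
            wait_until_rejoined(node)

rolling_fabric_upgrade(
    ["EF1", "EF2", "EF3", "EF4"],
    upgrade_node=lambda n: print(f"upgrading {n}"),
    wait_until_rejoined=lambda n: print(f"{n} rejoined the fabric"),
)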
ExtremeFabric Policy
 Role-based policy supported across ExtremeFabric
• Proven: part of Extreme’s solution for years
• Enables user/traffic segmentation based on policy
• Class of Service can also be applied
• Centrally orchestrated via EMC
 Dynamic policy orchestration with ExtremeConnect
• Orchestration between the edge of the fabric (between the Hypervisor and the ToR/EoR) with ExtremeControl and Connect
• For VMware ESX and OpenStack (ML2): OpenStack “hierarchical port binding”
• For VMware NSX with Microsegmentation: automated virtual network creation, assignment, distribution
[Diagram: the SysAdmin creates virtual networks and VMs on the compute host; XNV (Extreme Network Virtualization) identifies and tracks VMs and reports VM location, switch port status, and connectivity profile; ezVXLAN.py is the VXLAN Python app running on EXOS.]
ExtremeFabric REST API
EXOS auto-generates the ExtremeFabric REST API Swagger documentation from the Python scripts. Each EXOS device has a local copy of this auto-generated Swagger document. Although direct RESTful access is available, the following illustrates how the Cloud Connector and EMC interact with the REST infrastructure.
Order of Operations
1. The Cloud Connector (CC) communicates with EMC via its REST API and retrieves the JSON block that contains the ExtremeFabric configuration, among various other JSON configuration objects.
2. The Cloud Connector uses the EXOS REST API to configure ExtremeFabric. The CC uses the /rest/extr/fabric/onecfg/vrfs API to create the Fabric. Then the CC uses /rest/extr/fabric/onecfg/hosts and /rest/extr/fabric/onecfg/vlanMappings to create the fabric host entries and fabric VLAN mapping entries, respectively.
3. The EXOS Fabric REST API interprets the JSON into EXOS CLI commands to configure the Fabric.
[Diagram: EMC exchanges JSON with the Cloud Connector over the REST API; on EXOS, an HTTP server (CherryPy) and API dispatcher serve /extr, /rest, /opncfg, and other APIs.]
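A hedged Python sketch of step 2 using the requests library: the endpoint paths come from the slide, while the device address, credentials, and JSON payload fields are illustrative assumptions rather than the documented schema.

import requests

EXOS = "https://exos-switch.example.com"     # hypothetical device address
AUTH = ("admin", "password")                 # placeholder credentials

def onecfg(resource: str, payload: dict) -> dict:
    # POST a JSON object to one of the onecfg endpoints named in step 2.
    url = f"{EXOS}/rest/extr/fabric/onecfg/{resource}"
    r = requests.post(url, json=payload, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

onecfg("vrfs", {"name": "fabric-vrf"})                # create the Fabric
onecfg("hosts", {"ip": "192.0.2.10"})                 # fabric host entry
onecfg("vlanMappings", {"vlan": 100, "vni": 10100})   # fabric vlan mapping entry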
Key Elements for a complete network fabric solution
• ExtremeManagement
  – ZTP+ to bootstrap the underlay Fabric
  – Fabric management, troubleshooting, updates (ISSU), etc.
• ExtremeControl
  – Define, automate policy & segmentation for attached endpoints
  – Authenticate fabric elements before they join
• ExtremeConnect
  – Orchestrate the overlay through other IT systems
  – Works with all leading DC virtualization solutions!
• ExtremeAnalytics
  – Deep insights into application flows and performance in the fabric
• ExtremeFabric
  – Any-to-any active meshed L3 “underlay” fabric
  – Primarily ZTP+, JSONRPC and REST (OpenConfig) interfaces
• Open interfaces (NBI, SBI) at every layer
  – Allows building an open network fabric solution where parts can be moved from on-prem to the cloud, or run hybrid
  – Components can be replaced with other solution components (BYO solution)
• Along good/better/best: best with all Extreme solution components!
[Diagram: Cloud Services and IT Systems sit above the Management Center (Management, Control, Connect, Analytics) via NBI; the Management Center reaches the Network Fabric via SBI – RADIUS, Policy, REST, ZTP+, SNMP, sFlow+.]
ExtremeFabric
Spine/Leaf Architecture

100Gb-Enabled Spine-Leaf Fabric Architecture
• Spine switches: X870-32c (100GE-enabled fabric)
• Leaf switches: X670, X770, X690, X870-96x-8c
• Uplinks: 40Gb/100Gb (100Gb or 40Gb, depending on leaf platform)
[Diagram: leaf switches uplinked to X870-32c spines; front-panel artwork omitted.]
Leaf aggregation roles: 10GBase-T / SFP+ 10Gb aggregation; 10Gb/40Gb aggregation; high-density 10Gb aggregation; high-density 25Gb/50Gb aggregation.
• 100Gb enables scaling of fabric capacity at low cost with high availability
• X870 serves multiple roles as spine and leaf in the new architecture
ExtremeSwitching X870-32c Spine/Leaf Switch
32 x 10/25/40/50/100GbE QSFP28 Ports
• 100Gb spine switch for high-speed Data Centers
• High-density leaf applications for 25Gb/40Gb/50Gb
• 32 QSFP28 ports supporting flexible interface rates:
  – 32 x 100Gb Ethernet
  – 64 x 50Gb Ethernet
  – 128 x 25Gb Ethernet
  – 128 x 10Gb Ethernet
  – 32 x 40Gb Ethernet
X870-96x-8c High Density 10Gb Leaf Switch
96 x 10GbE ports (via 24-port breakout) plus 8 x 10/25/40/50/100GbE ports
• Optimized X870 platform to address high-density 10Gb aggregation applications
  – 10Gb remains the majority of server connections
  – Allows seamless migration to 25Gb/50Gb/100Gb
• 24 x QSFP28 ports operate only as partitioned 4 x 10Gb Ethernet ports
  – Uses 40Gb QSFP+ cables and transceivers for breakout
• 8 x QSFP28 ports are unrestricted for use as 10/25/40/50/100GbE
• Port Speed Licenses are available to upgrade speed-restricted ports to unrestricted 100Gb use, in groups of 6 ports per license
  – Enables upgrading the system to the full capacity of the platform as needed
X690-48x/t-4q-2c Leaf Switch*
• X690-48x-4q-2c: 48 x 10Gb SFP+, 6 x QSFP (4x100Gb / 6x40Gb)
• X690-48t-4q-2c: 48 x 10Gb 10GBASE-T, 6 x QSFP (4x100Gb / 6x40Gb)
• New 10Gb leaf aggregation switches for fiber and 10GBASE-T applications
• Enabled with 100Gb QSFP28 high-speed uplinks
• Shares power supply and fan modules with X870
• EXOS / ExtremeFabric
*FUTURE AVAILABILITY – PRODUCTS AND FEATURES SUBJECT TO CHANGE
Thank You!