Cisco Load Balancing
Solutions
© 1999, Cisco Systems, Inc.
Agenda
• Problems We Are Solving
• DistributedDirector
• LocalDirector
• MultiNode Load Balancing
Problems We Are Solving
• Efficient, high-performance client
access to large server complexes
• Continuous availability of
server applications
• Scalable, intelligent load distribution
across servers in the complex
• Load distribution based on server work
capacity and application availability
DistributedDirector
What Is DistributedDirector?
• Two pieces:
Standalone software/hardware bundle
Special Cisco IOS®-based software on
Cisco 2501, 2502, and Cisco 4700M hardware
platforms—11.1IA release train
Cisco IOS software release 11.3(2)T and
later on DRP-associated routers in the field
• DistributedDirector is NOT a router
• Dedicated box for DistributedDirector
processing
What Does
DistributedDirector Do?
• Resolves domain or host names to a
specific server (IP address)
• Provides transparent access to topologically
closest Internet/intranet server relative to client
• Maps a single DNS host name to the “closest”
server to client
• Dynamically binds one of several IP addresses
to a single host name
• Eliminates need for end-users to choose from
a list of URL/host names to find “best” server
• The only solution that uses intelligence in the
network infrastructure to direct clients to the best server
DNS-Based Distribution
[Diagram: client, DistributedDirector (DD), and servers APPL1 (1.1.1.1), APPL2 (2.2.2.1), APPL3 (3.3.3.1) across an IP network]
• Client connects to appl.com
• appl.com request routed to DistributedDirector
• DistributedDirector uses multiple decision metrics to select the appropriate server destination
• DistributedDirector sends the destination address to the client
• Client connects to the appropriate server
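A minimal sketch of the resolution step above, assuming a toy candidate table and a placeholder chooser (the actual selection logic, covered on the next slides, is not shown):

```python
# Toy sketch of DNS-based distribution: one host name, several candidate
# addresses, and a single answer returned per query. Names and addresses
# come from the diagram; choose_best() is a placeholder, not DistributedDirector code.

CANDIDATES = {"appl.com": ["1.1.1.1", "2.2.2.1", "3.3.3.1"]}  # APPL1-APPL3

def choose_best(addresses):
    # Stand-in for the DRP/RTT/portion-based decision described later.
    return addresses[0]

def answer_query(name):
    """Return the single destination address sent back to the resolving client."""
    return choose_best(CANDIDATES[name])

print(answer_query("appl.com"))  # the client then connects to this address
```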
How Are DistributedDirector
Choices Made?
• Director Response Protocol (DRP)
Interoperates with remote routers (DRP agents)
to determine network topology
Determines network distance between clients and server
• Client-to-server link latency (RTT)
• Server availability
• Administrative “cost”
Take a server out of service for maintenance
• Proportional distribution
For heterogeneous distributed server environments
• Random distribution
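As an illustration only, here is one way these inputs could be combined into a single choice; the metric names, weights, and values below are assumptions for the sketch, not DistributedDirector's actual configuration or formula:

```python
# Hypothetical per-server metrics: availability filters candidates first,
# then a weighted score (lower is better) picks the server.
servers = {
    "APPL1": {"drp_hops": 2, "rtt_ms": 80, "admin_cost": 0,   "up": True},
    "APPL2": {"drp_hops": 1, "rtt_ms": 45, "admin_cost": 0,   "up": True},
    "APPL3": {"drp_hops": 1, "rtt_ms": 30, "admin_cost": 100, "up": True},  # being drained
}
WEIGHTS = {"drp_hops": 10.0, "rtt_ms": 1.0, "admin_cost": 1.0}

def best_server():
    candidates = {name: m for name, m in servers.items() if m["up"]}
    score = lambda name: sum(WEIGHTS[k] * candidates[name][k] for k in WEIGHTS)
    return min(candidates, key=score)

print(best_server())  # "APPL2": close and fast, and not costed out for maintenance
```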
Director Response Protocol (DRP)
• Operates with routers (DRP agents) in the field to determine:
Client-to-server network proximity
Client-to-server link latency
[Diagram: client on the Internet; DistributedDirector and DRP agents in front of the Web servers]
DRP “External” Metric
• Measures distance from DRP agents to the client in BGP AS hop counts
[Diagram: servers with DRP agents in several autonomous systems; the client's AS is one hop from one agent and two hops from another]
DRP “Round-Trip Time” Metric
• Measures client-to-server round-trip times via the DRP agents
• Compares link latencies
• Server with the lowest round-trip time is considered “best”
• Maximizes end-to-end server access performance
[Diagram: DRP agents in several autonomous systems measure RTT back to the client]
“Portion” Metric

Server      Platform             “Portion” Metric Value   Portion of Connections
Server 1    SPARCstation         7                        7/24 = 29.2%
Server 2    SPARCstation         8                        8/24 = 33.3%
Server 3    Pentium 60 MHz       2                        2/24 = 8.3%
Server 4    Pentium 60 MHz       2                        2/24 = 8.3%
Server 5    Pentium 166 MHz      5                        5/24 = 20.8%
Total                            24                       24/24 = 100%

• Proportional load distribution across heterogeneous servers
• Can also be used to enable traditional round-robin DNS
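A short sketch of the portion metric in action, using the weights from the table; this is illustrative Python, not product code:

```python
import random

# Portion values from the table above; probability of selection is weight/24.
PORTIONS = {"Server 1": 7, "Server 2": 8, "Server 3": 2, "Server 4": 2, "Server 5": 5}

def pick_server():
    servers = list(PORTIONS)
    return random.choices(servers, weights=[PORTIONS[s] for s in servers], k=1)[0]

# Over many picks, Server 2 receives roughly 8/24 = 33.3% of the connections;
# setting every portion to the same value degenerates to plain round-robin behavior.
```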
Server Availability Parameter
• DistributedDirector establishes a TCP connection
to the service port on each remote server, thus
verifying that the service is available
• Verification is made at regular intervals
• Port number and connection interval are
configurable
• Minimum configurable interval is ten seconds
• Maximizes service availability as seen by clients
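A sketch of such a probe, assuming example addresses, port 80, and the ten-second minimum interval; the real check is configured on the DistributedDirector, not scripted like this:

```python
import socket
import time

SERVERS = ["1.1.1.1", "2.2.2.1", "3.3.3.1"]   # example real-server addresses
PORT = 80                                      # configurable service port
INTERVAL = 10                                  # seconds; the minimum configurable interval

def is_available(host, port, timeout=3.0):
    """The service is 'up' if a TCP connection to the service port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    status = {host: is_available(host, PORT) for host in SERVERS}
    print(status)          # servers failing the check would be excluded from answers
    time.sleep(INTERVAL)
```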
DistributedDirector—
How Does It Work?
• Two configuration modes:
DNS caching name server authoritative
for www.foo.com subdomain
HTTP redirector for http://www.foo.com
• Modes configurable on
per-domain basis
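In the HTTP-redirector mode, the box answers the client's request for http://www.foo.com with a redirect to the server chosen for that client. A minimal sketch of that behavior, with a placeholder chooser, target address, and port (not the product's implementation):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def choose_best_server(client_ip):
    return "2.2.2.1"        # stand-in for the metric-based, per-client decision

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = choose_best_server(self.client_address[0])
        self.send_response(302)                                   # temporary redirect
        self.send_header("Location", "http://%s%s" % (target, self.path))
        self.end_headers()

HTTPServer(("", 8080), Redirector).serve_forever()   # 8080 to avoid needing root
```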
DistributedDirector—Redundancy
• DNS mode
Use multiple DistributedDirectors as several name servers authoritative for a given hostname to provide redundancy
All DistributedDirectors are considered to be primary DNS servers
• HTTP mode
Use multiple DistributedDirectors and Cisco’s Hot Standby Router Protocol (HSRP) to provide redundancy
LocalDirector
LocalDirector
[Diagram: user reaches the data center server farm through the Internet or intranet via LocalDirector]
• LocalDirector appliance front-ends server farm
Load balances connections to “best server”
Failures, changes transparent to end users
Improves response time
Simplifies operations and maintenance
• Simultaneously supports different server
platforms, operating systems
• Any TCP service (not just Web)
LocalDirector—
Server Management
• Represents multiple servers with a single
virtual address
• Easily place servers in and out of service
• Identifies failed servers: takes offline
• Identifies working servers: places in service
• IP address management
• Application-specific servers
• Maximum connections
• Hot-standby server
LocalDirector—Specifications
• 80-Mbps throughput—model 416
• 300-Mbps throughput—model 430
Fast Ethernet channel
• Supports up to 64,000 virtual and
real IP addresses
• Up to 16 10/100 Ethernet, 4 FDDI ports
• One million simultaneous
TCP connections
• TCP, UDP applications supported
Network Address Translation
• Client traffic destined for the virtual address is distributed across multiple real addresses in the server cluster
• Transparent to client and server
• Network Address Translation (NAT) requires all traffic to pass through LocalDirector
• Virtuals and reals are IP address/port combinations
[Diagram: clients reach virtual address 1.1.1.1 on LocalDirector; real addresses 2.2.2.1, 2.2.2.2, and 3.3.3.1 on Servers 1-3 in the server cluster]
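A simplified sketch of the translation step, using the addresses from the diagram; flow selection is reduced to round-robin and flow state to a dictionary, so this is a model of the idea, not how LocalDirector is implemented:

```python
import itertools

VIRTUAL = ("1.1.1.1", 80)
REALS = [("2.2.2.1", 80), ("2.2.2.2", 80), ("3.3.3.1", 80)]
_next_real = itertools.cycle(REALS)
flows = {}                         # (client_ip, client_port) -> chosen real address

def translate_inbound(client, dst):
    """Rewrite the virtual destination of a client packet to a real server."""
    if dst != VIRTUAL:
        return dst                 # not for the virtual address: leave untouched
    return flows.setdefault(client, next(_next_real))

def translate_outbound(client, src):
    """Rewrite the server's reply so the client only ever sees the virtual address."""
    return VIRTUAL if flows.get(client) == src else src
```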
Session Distribution Algorithm
• Passive approach
Least connections
Weighted
Fastest
Linear
Source IP
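For example, the least-connections choice can be sketched in a few lines; the connection counts and addresses below are made up:

```python
active = {"2.2.2.1": 37, "2.2.2.2": 12, "3.3.3.1": 25}   # current connection counts

def least_connections():
    return min(active, key=active.get)

server = least_connections()   # "2.2.2.2"
active[server] += 1            # account for the newly assigned connection
```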
Ideal for Mission-Critical
Applications
[Diagram: LocalDirector high-availability solution front-ending TAP servers: mail, Web, FTP, and so on]
LocalDirector Strengths
• Network Address Translation (NAT)
allows arbitrary IP topology between
LocalDirector and servers
• Proven market leader with extensive
field experience
• Rich set of features to map between
virtual and real addresses
• Bridge-like operation allows transparent
deployment and gradual migration to NAT
LocalDirector Weaknesses
• NAT requires all traffic to be routed
through a single box
• NAT requires that data be scanned
and manipulated beyond the
TCP/UDP header
• Two interface types supported:
FE and FDDI
MultiNode
Load Balancing
MultiNode Load Balancing
(MNLB)
• Next-generation server load balancing
• Unprecedented high availability
Eliminate single points of failure
• Unprecedented scalability
Allow immediate incremental or large-scale
expansion of application servers
• New dynamic server feedback
Balance load according to actual application
availability and server workload
MNLB—What Is It?
• Hardware and software
solution that distributes IP
traffic across server farms
• Cisco IOS router and
switch based
• Implementation of
Cisco’s ContentFlow
architecture
• Utilizes dynamic
feedback protocol for
balancing decisions
MNLB Features
• Defines single-system image
or “virtual address” for IP
applications on multiple servers
• Load balances across
multiple servers
• Uses server feedback or
statistical algorithms for
load-balancing decisions
• Server feedback contains
application availability
and/or server work capacity
• Algorithms include round robin,
least connections, and best
performance
MNLB Features
• Session packet forwarding
distributed across multiple
routers or switches
• Supports any IP application:
TCP, UDP, FTP, HTTP, Telnet, and
so on
• For IBM OS/390 Parallel
Sysplex environments:
Delivers generic resource capability
Makes load-balancing decisions
based on OS/390 Workload Manager
data
MNLB Components
• Services Manager
Software runs on LocalDirector
ContentFlow Flow Management Agent
Makes load-balancing decisions
Uses MNLB to instruct Forwarding Agents of the correct server destination
Uses the server feedback protocol to maintain server capacity and application availability info
• Backup Services Manager
Enables 100% availability for the Services Manager
No sessions lost due to primary Services Manager failure
[Diagram: Services Manager with Backup Services Manager]
MNLB Components
• Forwarding Agent
Cisco IOS router and switch software
ContentFlow Flow Delivery Agent
Uses MNLB to communicate with
Services Manager
Sends connection requests to
Services Manager
Receives server destination from
Services Manager
Forwards data to chosen server
• Workload Agents
Runs on either server platforms or
management consoles
Maintains information on server work
capacity and application availability
Communicates with Services Manager
using server feedback protocol
For IBM OS/390 systems delivers
OS/390 Workload Manager data
How Does MNLB Work?
• Initialization:
Services Manager locates Forwarding Agents
Instructs each Forwarding Agent to send session requests for defined virtuals to the Services Manager
Locates Workload Agents and receives server operating and application information
[Diagram: client, Forwarding Agents, Services Manager, and Workload Agents]
How Does MNLB Work?
• Session packet flow
1. Client transmits connection
request to virtual address
2. Forwarding Agent transmits
packet to Services Manager
Services Manager selects
appropriate destination
and tells Forwarding Agent
3. Forwarding Agent forwards
packet to destination
4. Session data flows through
any Forwarding Agent router
and switch
The Services Manager is also
notified on session termination
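A rough sketch of the Forwarding Agent's side of this flow, assuming a placeholder manager query in place of the real MNLB/ContentFlow messages:

```python
flow_table = {}   # (src_ip, src_port, dst_ip, dst_port) -> real server address

def ask_services_manager(flow):
    return "2.2.2.1"              # stand-in for the Services Manager's decision

def send_to(server, packet):
    pass                          # placeholder for the actual forwarding path

def forward(packet):
    flow = (packet["src"], packet["sport"], packet["dst"], packet["dport"])
    if flow not in flow_table:                       # new session: consult the manager
        flow_table[flow] = ask_services_manager(flow)
    send_to(flow_table[flow], packet)                # later packets bypass the manager

def on_session_end(flow):
    flow_table.pop(flow, None)    # the Services Manager is also told the session ended
```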
Dispatch Mode of
Session Distribution
• Virtual IP address (VIPA) on hosts (alias, loopback)
• Load balancer presents the virtual IP address to the network
• Load balancer forwards packets based on the Layer 2 address (obtained via ARP); the IP header still contains the virtual IP address
• Requires subnet adjacency since it relies on Layer 2 addressing
[Diagram: clients reach virtual address 1.1.1.1 on LocalDirector; Servers 1-3 (2.2.2.1, 2.2.2.2, 3.3.3.1) in the server cluster carry the VIPA]
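A toy sketch of the dispatch-mode rewrite, with made-up MAC addresses: only the Layer 2 destination changes and the IP header keeps the virtual address, which is why the servers must share the load balancer's subnet:

```python
SERVER_MACS = ["00:00:0c:00:00:01", "00:00:0c:00:00:02", "00:00:0c:00:00:03"]
_next = 0

def dispatch(frame):
    """Rewrite only the Layer 2 destination; the IP header is left untouched."""
    global _next
    frame["dst_mac"] = SERVER_MACS[_next % len(SERVER_MACS)]
    _next += 1
    return frame                      # still addressed to 1.1.1.1 at Layer 3

frame = {"dst_mac": "balancer-mac", "dst_ip": "1.1.1.1", "payload": b"GET /"}
print(dispatch(frame))
```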
Dispatch Mode
• Benefits
No need to scan past the TCP/UDP header, so it may achieve higher performance
Outbound packets may travel any path
• Issues
Inbound packets must pass through the load balancer
Ignoring outbound packets does limit the effectiveness of the balancing decisions
Subnet adjacency can be a real network design problem
MNLB
• Uses either NAT or modified dispatch mode
• NAT
MNLB architecture creates high availability: no single point of failure
No throughput bottleneck
• Modified dispatch mode
Uses a Cisco Tag Switching network to address across multiple subnets
Inbound and outbound traffic can travel through any path
Services Manager notified on session termination
Benefits
MNLB: The Next Generation
• Unprecedented high availability
Eliminate single points of failure
• Unprecedented scalability
Allow immediate incremental or large-scale expansion of application servers
• New dynamic server feedback
Balance load according to actual application availability and server workload
Single System Image
• One IP address for the
server cluster
• Easy to grow and maintain
server cluster without
disrupting availability or
performing administrative
tasks on clients
• Easy to administer clients:
only one IP address
• Enhances availability
Server Independence
• MNLB operates independently
of the server platform
• Server agents operate in
IBM MVS, IBM OS/390, IBM
TPF, NT, and UNIX sites
• Application-aware load
distribution available in
all server sites
• Enables IP load distribution
for large IBM Parallel Sysplex
complexes
Application-Aware Load Balancing
• Client traffic is distributed
across server cluster to the
best server for the request
• Transparent to client
• Allow agent(s) in servers to
provide intelligent feedback
to network as basis for
balancing decision
• Uses IBM’s OS/390 Workload
Manager in OS/390 Parallel
Sysplex environments
• Application-aware load balancing
ensures session completion
Total Redundancy—
Ultimate Availability
• No single point of failure
for applications, servers,
or MNLB
• Multiple forwarding
agents ensure access
to server complex
• Multiple Services Managers
ensure load balancing is
maintained through failure
• Single cluster address for
multiple servers maintains
access to applications in
case of server failure or
server maintenance
Unbounded Scalability
• Scalability and performance limited
only by the number and throughput
of Forwarding Agents
• Forwarding Agents can be
added at any time with no
loss of service
• Servers can be added with
no network design changes
• NO throughput bottlenecks
• Scales to the largest of Web sites
Implementation
and Road Map
Phase One Implementation
• MNLB components
Cisco IOS-based Forwarding Agents in Cisco 7500, 7200, 4000, 3600, and Catalyst® 5000 platforms
Services Manager runs on the LocalDirector chassis
LocalDirector hot standby for the phase one backup manager
Workload Agents for IBM OS/390, IBM TPF, NT, and UNIX
Thank You
Q&A