50-11-31
DATA COMMUNICATIONS MANAGEMENT
IMPLEMENTATION OF STORAGE AREA NETWORKS
Craig Myers and Nathan Muller
INSIDE
SAN Advantages; SAN Evolution; SAN Components; Role of Hubs; Remote Data Replication; Security;
Zoning; The Need for Speed; SAN Management; Interoperability; Emerging Role of IP
Essentially, a storage area network (SAN) is a specialized network that enables fast, reliable access among servers and external or independent storage resources, regardless of physical location. Fibre Channel or Gigabit Ethernet links can provide high-speed transfers of data between systems distributed within a building, campus, or metropolitan area. For longer distances, Asynchronous Transfer Mode (ATM) and Internet Protocol (IP) technologies can be used to transport data over the wide area network (Exhibit 1). The access links at each location can be multiples of T1 at 1.544 Mbps (NxT1), a T3 at 45 Mbps, or an optical carrier link at 155 Mbps (OC-3). Carrier-provided Ethernet services between 10 Mbps and 1 Gbps can also be used.
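To put these access rates in perspective, the short Python sketch below estimates how long a backup of a given size would take to cross each class of access link. The 70 percent effective-throughput factor and the 100-GB example volume are illustrative assumptions, not figures from the article.

```python
# Rough transfer-time estimates for the WAN access links mentioned above.
# The 70% effective-throughput factor is an illustrative assumption.

LINKS_MBPS = {
    "T1 (1.544 Mbps)": 1.544,
    "4xT1 (NxT1)": 4 * 1.544,
    "T3 (45 Mbps)": 45.0,
    "OC-3 (155 Mbps)": 155.0,
    "Gigabit Ethernet": 1000.0,
}

def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Hours to move data_gb gigabytes over a link of link_mbps megabits/s."""
    megabits = data_gb * 8 * 1000  # gigabytes -> megabits (decimal units)
    return megabits / (link_mbps * efficiency) / 3600

for name, mbps in LINKS_MBPS.items():
    print(f"{name:>18}: {transfer_hours(100, mbps):8.1f} h for a 100-GB backup")
```

As the numbers make plain, anything beyond a trickle of replication traffic quickly justifies the higher-capacity links.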
In a SAN, a storage device is not the exclusive property of any one server. Rather, storage devices are shared among all networked servers as peer resources. Just as a local area network (LAN) can be used to connect clients to servers, a SAN can be used to connect servers to storage, servers to each other, and storage to storage for load balancing and protection.
PAYOFF IDEA

With the rapidly increasing volume of mission-critical information being accumulated in today’s business environment, companies are demanding better performance, availability, manageability, and security of their data storage assets. To meet these needs, companies are implementing specialized networks that are capable of helping them realize operational efficiencies and maximize revenue-generating opportunities. These objectives can be achieved with a combination of transport technologies and distributed storage resources, which together comprise the storage area network (SAN).

EXHIBIT 1 — A Simple Storage Area Network Spanning the WAN under the Supervision of a Centralized Management System

SAN ADVANTAGES

Redundancy is an inherent part of the SAN architecture, making for high availability. The pluggable nature of SAN resources — storage, nodes, and clients — enables much easier scalability, while preserving ubiquitous data access. And under centralized management, there is more efficiency in carrying out tasks such as optimization, reconfiguration, and backup/restore.
SANs are particularly useful for backups. Previously, there were only
two choices: either a tape drive had to be installed on every server and
someone went around changing the tapes, or a backup server was created and the data moved across the network, which consumed bandwidth.
Performing backup over the LAN can be excruciatingly disruptive and
slow. A daily backup can suddenly introduce gigabytes of data into the
normal LAN traffic. With SANs, organizations can have the best of both
worlds: high-speed backups managed from a central location.
Instead of dedicating a specific kind of storage to one or more servers,
a SAN allows different kinds of storage — mainframe disk, tape, and
RAID — to be shared by different kinds of servers, such as Windows NT,
UNIX, and OS/390. With this shared capacity, organizations can acquire,
deploy, and use storage devices more efficiently and cost-effectively.
ATM would be adept at connecting heterogeneous storage resources
over the wide area network because it slices and dices different protocol
traffic into standardized packets called cells for high-speed, jitter-free
transmission between distributed storage nodes.
SANs also let users with heterogeneous storage platforms utilize all of
the available storage resources. This means that within a SAN users can
back up or archive data from different servers to the same storage system.
They can also allow stored information to be accessed by all servers, create and store a mirror image of data as it is created, and share data between different environments. With ATM as the wide area transport, data
integrity is maintained over long distances through synchronization enforced by Stratum-1 clocks embedded in the circuit-switched fabric.
By decoupling storage from computers, workstations, and servers, and
taking storage traffic off the operations network, organizations gain a
high-performance storage network and improve the performance of the
LAN. These features reduce network downtime and productivity losses
while extending current storage resources. In effect, the SAN does in a
network environment what traditionally has been done in a backend I/O
environment between a server and its own storage subsystem. The result
is high speed, high availability, and high reliability.
With a SAN, there is no need for a physically separate network to handle storage and archival traffic. This is because the SAN can function as
a virtual subnet that operates on a shared network infrastructure. For this
to work, however, different priorities or classes of service must be established. Fortunately, both Fibre Channel and ATM provide the means to
set different classes of service, and this capability can be added to IP.
SANs also promise easier and less-expensive network administration.
Today, administrative functions are labor-intensive and time-consuming,
and IT organizations typically have to replicate management tools across
multiple server environments. With a SAN, only one set of tools is needed, which eliminates the need for their replication and associated costs.
All of this makes SANs highly suited for data-intensive environments
like those used for video editing, pre-press, online transaction processing
(OLTP), data warehousing, storage management, and server clustering
applications.
SAN EVOLUTION
SANs have existed for years in the mainframe environment in the form of
Enterprise Systems Connection (ESCON). In mid-range environments,
the high-speed data connection was primarily SCSI (Small Computer System Interface) — a point-to-point connection, which is severely limited
in terms of the number of connected devices it can support as well as the
distance between devices.
An alternative to network attached storage (NAS) was developed in
1997 by Michael Peterson, president of Strategic Research (Santa Barbara,
California). He believed NAS was too limiting because it relied on network protocols that did not guarantee delivery. Peterson proposed that SANs could be interconnected using network protocols such as Ethernet, while the storage devices themselves could be linked via non-network protocols.
According to Peterson, SANs have three major components: the interfaces, including Small Computer Systems Interface (SCSI), IBM Serial
Storage Architecture (SSA) or Fibre Channel; interconnects such as extenders, multiplexers, hubs, switches, and routers; and the switching fabric. In a traditional storage environment, a server controls the storage
devices and administers requests and backup. With a SAN, instead of being involved in the storage process, the server simply monitors it. By optimizing the box at the head of the SAN to do only file transfers, users are
able to get much higher transfer rates. Here is where Fibre Channel
comes in.
Using Fibre Channel as the connection between storage devices increases the distance options. While traditional SCSI allows only a 25-meter (about 82 feet) distance between machines and Ultra2 SCSI allows only a 12-meter distance (about 40 feet), Fibre Channel supports spans of 10 kilometers (about 6.2 miles), making it suited for building campus-wide storage networks. SCSI can only connect up to 16 devices, whereas Fibre Channel can link as many as 126. By combining LAN networking models with the core building blocks of server performance and mass storage capacity, the SAN eliminates the bandwidth bottlenecks and scalability limitations imposed by previous SCSI bus-based architectures.
More recently, vendors have pushed the speed of Fibre Channel from
1 Gbps to 2 Gbps and increased the distance beyond the original 6.2
miles to about 75 miles. As the SAN concept has evolved, it has moved
beyond association with any single technology. In fact, just as LANs and
WANs use a diverse mix of technologies, so can SANs. This mix can include Fiber Distributed Data Interface (FDDI), ATM, and IBM’s SSA, as
well as Fibre Channel. More recently, SONET (Synchronous Optical Network) and Dense Wave Division Multiplexing (DWDM) have been added
to the mix to extend the operating range of storage networks. Even the TCP/IP suite of Internet protocols is being used for more economical implementations of storage networks.
Although early implementations of SANs were local or campus-based, there is no technological reason why they cannot be extended much farther with proven technologies such as SONET and ATM. With its 50-millisecond recovery time, SONET offers the benefit of extremely high resiliency, which has yet to be matched by any other transport technology, including Fibre Channel. Under SONET, data travels to its destination in opposite directions over a dual ring architecture (Exhibit 2). For metro area SANs, this makes SONET the most resilient transport available: if a fiber is cut or a node on the ring fails, protection mechanisms kick in to ensure that data still gets to its destination with little or no loss. ATM’s quality-of-service (QoS) capabilities and priority queuing techniques allow the SAN to be extended over a much wider area — perhaps globally — with little or no performance degradation.
EXHIBIT 2 — Metro Area SAN under SONET
SAN COMPONENTS
There are several components required to implement a SAN. A Fibre Channel adapter is installed in each server. These are connected via the server’s Peripheral Component Interconnect (PCI) bus to the server’s operating system and applications. Because Fibre Channel’s transport-level protocol wraps easily around SCSI frames, the adapter appears to be a SCSI device.
The adapters are connected to a single Fibre Channel hub, running
over fiber-optic cable or copper coaxial cable. Category 5, the unshielded twisted-pair wiring rated for 100-Mbps Fast Ethernet and 155-Mbps
ATM, can also be used.
A LAN-free backup architecture can include some type of automated
tape library that attaches to the hub via Fibre Channel. This machine typically includes a mechanism capable of feeding data to multiple tape
drives and can be bundled with a front-end Fibre Channel controller. Existing SCSI-based tape drives can be used through the addition of a Fibre
Channel-to-SCSI bridge.
Storage management software running in the servers performs contention management by communicating with other servers via a control protocol to synchronize access to the tape library. The control protocol
maintains a master index and uses data maps and timestamps to establish
the server-to-hub connections.
Currently, control protocols are specific to the software vendors.
Eventually, the storage industry will likely standardize on one of the several protocols now in proposal status before the Storage Network Industry Association (SNIA).
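Because the actual control protocols are vendor-proprietary, the Python sketch below is only a hypothetical illustration of the general idea the article describes: a master index, keyed by tape drive and stamped with lease times, arbitrates which server may use which drive. The class, method names, and lease scheme are all invented for illustration.

```python
# Hypothetical contention manager for a shared tape library. Servers ask
# the arbiter for a drive; the master index records who holds what and
# when, and stale leases are reclaimed by timestamp.

import time
from dataclasses import dataclass, field

@dataclass
class TapeLibraryArbiter:
    lease_seconds: float = 300.0
    # master index: drive name -> (owning server, lease timestamp)
    master_index: dict = field(default_factory=dict)

    def request_drive(self, drive: str, server: str) -> bool:
        """Grant the drive if it is free or its previous lease has expired."""
        now = time.time()
        holder = self.master_index.get(drive)
        if holder is None or now - holder[1] > self.lease_seconds:
            self.master_index[drive] = (server, now)
            return True
        return holder[0] == server  # re-grant to the current holder

    def release_drive(self, drive: str, server: str) -> None:
        if self.master_index.get(drive, (None,))[0] == server:
            del self.master_index[drive]

arbiter = TapeLibraryArbiter()
assert arbiter.request_drive("drive0", "backup-srv-1")       # granted
assert not arbiter.request_drive("drive0", "backup-srv-2")   # busy
```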
From the hub, a standard Fibre Channel protocol, Fibre Channel-Arbitrated Loop (FC-AL), functions similarly to Token Ring to ensure collision-free data transfers to the storage devices. The hub also contains an
embedded SNMP agent for reporting to network management software.
ROLE OF HUBS
Much like Ethernet hubs in LAN environments, Fibre Channel hubs provide fault tolerance in SAN environments. On a Fibre Channel-Arbitrated Loop, each node acts as a repeater for all other nodes on the loop, so if one node goes down, the entire loop goes down. For this reason, hubs are an essential source of fault isolation in Fibre Channel SANs. The hub’s port bypass functionality automatically routes around a problem port, avoiding most faults, and stations can be powered off or added to the loop without serious loop effects. Storage management software is used to mediate contention and synchronize data, activities necessary for moving backup data from multiple servers to multiple storage devices. Hubs also support the popular physical star cabling topology for more convenient wiring and cable management.
To achieve full redundancy in a Fibre Channel SAN, two fully independent, redundant loops must be cabled. This scheme provides two independent paths for data with fully redundant hardware. Most disk drives and disk arrays targeted for high-availability environments have dual ports specifically for this purpose. Wiring each loop through a hub provides higher-availability port bypass functionality to each of the loops.
Some organizations will have the need for multiple levels of hubs.
Hubs can be cascaded up to the Fibre Channel-Arbitrated Loop limit of
126 nodes (127 nodes with an FL or switch port). Normally, the distance
limitation between Fibre Channel hubs is three kilometers. Several vendors, however, have found ways to extend the distance between hubs to
ten kilometers, allowing organizations to link servers situated on either
side of a campus, or even spanning a metropolitan area.
REMOTE DATA REPLICATION
According to some industry estimates, as much as 90 percent of businesses that have installed SANs need remote data replication capabilities that
extend well beyond campus environments and the limited distance ranges of SCSI. The latest link extenders can connect storage facilities as far as
120 km (75 miles) apart at a transfer rate of 1 Gbps. For greater distances
offered by WANs, there are two technology approaches — synchronous
and asynchronous — that can be employed for remote data replication.
Synchronous data replication ensures data integrity by allowing source and copied storage volumes to remain in sync with one another. This is accomplished by a pair of link extenders that convert the short-haul copper or multi-mode optical signal to the long-haul, single-mode signal, and vice versa. An internal digital signal conditioner re-times the signal over the long-haul connection to eliminate jitter.
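The sketch below illustrates the synchronous/asynchronous distinction in miniature: a synchronous write returns only after both the source and the remote copy are updated, while an asynchronous replicator lets the remote copy lag behind a queue. The Volume class and queue are hypothetical stand-ins for real storage controllers and link extenders.

```python
# Toy model of synchronous vs. asynchronous remote data replication.

from collections import deque

class Volume:
    def __init__(self) -> None:
        self.blocks: dict = {}

    def write(self, lba: int, data: bytes) -> None:
        self.blocks[lba] = data

def synchronous_write(local: Volume, remote: Volume, lba: int, data: bytes) -> None:
    """Return only after BOTH copies are updated, so volumes stay in sync."""
    local.write(lba, data)
    remote.write(lba, data)  # in practice, sent over the link extenders

class AsyncReplicator:
    """Asynchronous: the local write completes at once; the remote copy
    lags until the pending queue drains."""

    def __init__(self, local: Volume, remote: Volume) -> None:
        self.local, self.remote = local, remote
        self.pending: deque = deque()

    def write(self, lba: int, data: bytes) -> None:
        self.local.write(lba, data)
        self.pending.append((lba, data))  # shipped to the remote site later

    def drain(self) -> None:
        while self.pending:
            self.remote.write(*self.pending.popleft())
```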
With their link extenders, vendors include comprehensive system status monitoring and link diagnostics via an integral bit error rate tester
(BERT). The link monitoring and diagnostics provide fault detection and
isolation. Integration with network management is provided via an RS-232
serial port attached to a local terminal or modem to allow remote access.
Another way to implement remote data replication over long distances
is through the use of ATM on the WAN. Despite the asynchronous nature
of ATM, precise timing can be applied in the service provider’s network
via Stratum-1 clocks. Because these clocks support circuit emulation for
point-to-point PBX trunks for the delivery of voice, which has one of the
most stringent performance requirements, they can also support real-time data to ensure the end-to-end integrity of source and copied volumes during transmission.
SECURITY
In general, a SAN has the same security needs as a file server network.
Fortunately, the application server shields the SAN from end users, so
there is no direct desktop access to the contents of the SAN. The application server(s) must provide the security mechanisms to prevent unauthorized access or denial-of-service attacks. In a heterogeneous server environment, this is a particularly difficult challenge because of the inherent differences in security levels between platforms and in administrators’ experience with server security.
If an attacker does manage to compromise a server, the SAN is vulnerable. The server can be used as the springboard to all of the logical storage units on the network. There are a few techniques to counter such
attacks. The easiest is to utilize switch zoning (discussed below) in an intelligent way to prevent the whole SAN from being accessible to all servers. Breaking the SAN into subnets with traffic strictly partitioned
between servers and storage prevents global access to the SAN. Logical
Unit Number (LUN) masking is another technique that can thwart an intruder. This technique is implemented on the storage servers by setting
permissions for visibility and access to specific clients.
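A minimal sketch of the LUN-masking idea, assuming hosts are identified by Fibre Channel worldwide names (WWNs); the mask table and WWN values below are invented for illustration.

```python
# Hypothetical LUN mask: a logical unit is visible only to hosts listed
# in its entry, so a compromised server cannot even discover other LUNs.

LUN_MASKS = {
    0: {"wwn:10:00:00:00:c9:2a:11:01"},             # database server only
    1: {"wwn:10:00:00:00:c9:2a:11:01",
        "wwn:10:00:00:00:c9:2a:11:02"},             # shared backup volume
}

def visible_luns(host_wwn: str) -> list:
    """Return the LUNs this host is permitted to discover and access."""
    return [lun for lun, hosts in LUN_MASKS.items() if host_wwn in hosts]

print(visible_luns("wwn:10:00:00:00:c9:2a:11:02"))  # -> [1]
```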
Outside the SAN itself, the network administrator must take reasonable precautions to shield the LAN from eavesdropping or “man-in-the-middle” hacks that replay or alter traffic as it traverses the LAN. This is primarily important when the data contains sensitive financial, medical, or proprietary information. The servers running the SAN management agents would also be of particular interest to a network interloper.
Physical access restrictions can play a key role in minimizing intrusions
that originate from within the organization.
ZONING
A key feature of SANs is zoning, a term used by some switch companies
to denote the division of a SAN into subnets that provide different levels
of connectivity between specific hosts and devices on the network. In effect, routing tables are used to control access of hosts to devices. This
gives IT managers the flexibility to support the needs of different groups
and technologies without compromising data security. Zoning can be
performed by cooperative consent of the hosts or can be enforced at the
switch level. In the former case, hosts are responsible for communicating
with the switch to determine if they have the right to access a device.
There are several ways to enforce zoning. With hard zoning, which
delivers the highest level of security, IT managers program zone assignments into the flash memory of the hub. This ensures that there can be
absolutely no data traffic between zones. Virtual zoning provides additional flexibility because it is set at the individual port level. Individual
ports can be members of more than one virtual zone, so groups can have
access to more than one set of data on the SAN.
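Conceptually, switch-enforced zoning amounts to a membership check like the hypothetical sketch below; because a host or port may belong to several zones, the same check also models the overlapping access that virtual zoning permits. Zone and device names are invented.

```python
# Toy zone table: traffic is allowed only between endpoints that share a zone.

ZONES = {
    "zone_nt":   {"host_nt1", "host_nt2", "raid_array_a"},
    "zone_unix": {"host_ux1", "raid_array_a", "tape_library"},
}

def can_connect(src: str, dst: str) -> bool:
    """Permit a connection only if some zone contains both endpoints."""
    return any(src in members and dst in members for members in ZONES.values())

assert can_connect("host_nt1", "raid_array_a")      # same zone
assert not can_connect("host_nt1", "tape_library")  # no shared zone
```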
Zones can be extended across the wide area network (WAN) through
transparent bridging. This is a Layer 2 service that can be provisioned over
an ATM network, enabling SAN traffic to be moved between far-flung locations at near wire speed. The end users do not even know the cloud is
there because only MAC (media access control) addresses are used between the SAN segments on each side. This is a very efficient way of handling SAN traffic and is simple to implement because it does not require
that the storage servers at each end be equipped with ATM interfaces.
THE NEED FOR SPEED
The demands on networks and systems for moving and managing data
are increasing exponentially, and improvements in performance across
the infrastructure are required to enable users to move and manage their
data efficiently and reliably. One of these performance improvements
has been realized with the introduction of 2-Gbps switch speeds for the
SAN. Companies such as Brocade, Gadzoox, Qlogic, and Vixel now offer
2-Gbps Fibre Channel products built around new technology advances.
The 2-Gbps technology is based on the FC-SW-2 Open Fabric standard, which establishes the foundation for building interoperable, multivendor switch fabrics. Users can connect existing 1-Gbps products with
the newer 2-Gbps technology and, through standards-based auto-negotiation, extend their current SAN installations instead of having to replace
them. Not only does this technology provide a clear path for the industry,
but it also holds great promise for new and revolutionary products that
greatly extend the capabilities of SANs.
The Fibre Channel Industry Association has introduced a proposal for
10-Gbps Fibre Channel that supports LAN and WAN devices over distances ranging from 15 meters (about 50 feet) to 10 kilometers (about 6
miles). The standard also supports bridging SANs over metropolitan-area
networks through dense wave division multiplexing and SONET. The 10-Gbps draft specification requires backward compatibility with 1-Gbps
and 2-Gbps devices. The 10-Gbps devices will also be able to use the
same cable, connectors, and transceivers used in Ethernet and Infiniband.
At the systems level, SAN architectures also make use of a number of
underlying bus technologies, including all variants of SCSI and PCI. The
latest innovation in bus technology is Infiniband, a channel-based,
switched fabric architecture that provides scalable performance from 500
MB/s to 6 GB/s, meeting a range of needs from entry level to high-end enterprise systems. Backed by the computing industry’s leading companies, this new high-speed bus technology is anticipated to soon
replace the current PCI bus standard because it overcomes I/O bottlenecks
for substantial improvements in link speeds between servers and storage,
as well as overall data throughput in the server systems themselves.
SAN MANAGEMENT
The tools needed to manage a Fibre Channel fabric are available through
the familiar SNMP (Simple Network Management Protocol) interface. The
FC-AL MIB (Management Information Base) approved by the Internet
Engineering Task Force (IETF) extends SNMP management capabilities
to the multi-vendor SAN environment. New vendor-specific MIBs will
emerge as products are developed with new management features.
Switches, hubs, and other central networking hardware provide a natural
point for network management. Of course, GUI-based management systems will play a key role in managing storage networks.
One of these GUI-based management solutions comes from Tivoli Systems. The company’s Tivoli Storage Network Manager reduces the complexity of managing information across the multiple platforms and operating environments typical in a SAN. Policy-based automation and expansion capabilities help administrators ensure the availability of mission-critical applications, thereby providing higher storage resource utilization.
Once the SAN elements have been discovered, storage resources assigned, and policies established for monitoring file systems, administrators can then do the following:
• Continually monitor all the components within the discovered SAN topology.
• Capture data to be used for reporting on performance, capacity, and service-level planning.
• Automatically extend supported file systems that are becoming full.
• Receive automatic notification when file systems exceed a predetermined threshold.
Tivoli Storage Network Manager also generates SNMP traps and Tivoli
Enterprise Console (TEC) events to report on all activities that it monitors.
These events can then be sent to the designated management console or
the designated administrator. Tivoli Storage Network Manager can also be
configured to send SNMP traps or TEC events to one or more locations (Exhibit 3). This is done by entering information in the following fields, modeled in the sketch after this list:
• IP Address: the IP address of a host or device that can receive SNMP traps
• Port: the port number on which the host or device will listen for SNMP traps; the default is 162
• Community: the name of the community to which the SNMP host or device is assigned; the default is Public
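The Python sketch below models such a trap-destination record, with the documented defaults of port 162 and community Public; the class name and validation logic are illustrative, not Tivoli code.

```python
# Hypothetical model of an SNMP trap/TEC event destination entry.

from dataclasses import dataclass
import ipaddress

@dataclass
class TrapDestination:
    ip_address: str
    port: int = 162          # default port for SNMP traps
    community: str = "public"

    def __post_init__(self) -> None:
        ipaddress.ip_address(self.ip_address)  # raises ValueError if malformed
        if not 0 < self.port < 65536:
            raise ValueError(f"invalid port: {self.port}")

destinations = [
    TrapDestination("192.0.2.10"),              # management console, defaults
    TrapDestination("192.0.2.20", port=1162),   # second, non-default listener
]
```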
Events and data from the SAN are continuously captured and processed, providing information, alerts, and notification to the administrator for problem resolution. The administrator can then launch specific
SAN management software from within Tivoli Storage Network Manager
to assist in problem closure.
EXHIBIT 3 — Tivoli Storage Network Manager’s Event Destinations Screen

Among the other companies currently offering SAN management solutions is Prisa Networks, which offers the VisualSAN software suite. One
component of the suite is VisualSAN Network Manager, which automatically discovers, manages, and monitors multi-vendor SAN devices, generating a topology map that depicts the SAN network elements, servers, storage systems, and their interconnects (Exhibit 4).
By drilling down through the map, the administrator can view which
devices and interconnects are active and which need attention. An event
correlator collects and consolidates faults, events, and alerts and presents
this information in real-time. All events are logged and user-defined
alerts are generated. The system can be configured to notify the system
administrator of fault conditions via e-mail, page, or SNMP trap. This real-time alerting allows the administrator to quickly and effectively manage
the entire SAN.
The other component of the VisualSAN software suite is Performance
Manager, a module that monitors real-time performance of SANs and renders this data in an intuitive visual format with alert generation. The collected data is used for historical and trending analyses.
Another vendor offering SAN management is Hewlett-Packard. The
company’s OpenView Storage Node Manager offers the following SAN
management capabilities:
• Automatic discovery of devices connected to a SAN
• Multidimensional graphical representations of the network
• Problem tracking using the topology map and other technologies
• Management of Fibre Channel arbitrated loop configurations as well as fabrics
Users are also able to set basic levels of service in the SAN and use
tools such as SNMP so that if a storage device or disk is in danger of failing, an alarm will be triggered on the OpenView screen. Also, users are
able to set thresholds for storage disks and receive a warning when a disk is approaching its data saturation point.
Management can be extended to any location on the WAN. This is another area in which ATM excels. A low-speed virtual connection, dedicated to conveying management traffic, can be set up through the network. The priority handling of this type of traffic makes centralized management of the SAN possible, regardless of the distances involved.
EXHIBIT 4 — VisualSAN Network Management Suite

INTEROPERABILITY

While standards have been developed for Fibre Channel technology, different interpretations of the standards by vendors have resulted in products that may not work together on the same network. As a result,
interoperability problems still plague the SAN industry. Many vendors are
now working on standards through organizations such as the Storage Network Industry Association (SNIA), the Fibre Channel Industry Association
(FCIA), the Fibre Alliance, and the Open System Fabric Initiative (OSFI).
Others are investing heavily in interoperability labs where vendors
can see how well their equipment works with components from other
vendors. To a large degree, these efforts have paid off, but not to the extent that products are plug-and-play. EMC, for example, has devoted
150,000 square feet, more than $2 billion in equipment, and 1.5
petabytes of storage to its interoperability lab in Hopkinton, Massachusetts. New products are run through a set of testing procedures. If a problem is detected, the product is returned to the vendor so the firmware or
code can be upgraded. If no problem is found with the product during
the retest, the lab certifies it as having passed a set of tests that have been
commonly accepted among SAN vendors. This gives customers assurance that the products they buy from one vendor will work with those of
another vendor that have passed the same tests.
IBM has also invested heavily in interoperability. It has dedicated
80,000 square feet of space and $500 million worth of equipment to its
interoperability lab in Gaithersburg, Maryland. The IBM Global Services
Lab goes beyond simple interoperability testing. It also looks at the application level to help companies improve the management of their business through better availability of information.
By investing heavily in interoperability labs, joining together to develop SAN standards, and going only with turnkey solutions that they know will work, SAN vendors have survived the interoperability morass that at one
point threatened to engulf them. In terms of the SAN life cycle, vendors
have achieved success in the first phase; that is, they have made it possible for users to purchase pretested SAN solutions without fear of running into major problems. But plug-and-play compatibility among
equipment still has not been realized to the point that users can mix and
match switches, hubs, and other components from various vendors.
To further promote interoperability, the SNIA has set up its Technology Center in Colorado Springs, Colorado, to showcase the development
and testing of advanced network storage technologies that require interoperability of multi-vendor storage products. The 14,000-square-foot
center is the world’s largest independent storage networking lab, providing its 150 members with the opportunity to profile enterprise storage
systems, computing platforms — including SAN and NAS infrastructures
— and products.
EMERGING ROLE OF IP
One of the hottest new trends in building storage networks is the use of
the ubiquitous Internet Protocol (IP). Nishan Systems, for example, has
launched a development effort that it claims is the definitive convergence
of storage and networking. The company’s Storage over Internet Protocol (SoIP) connects storage devices and servers more economically than
Fibre Channel and ATM. Along with its strategic partners, Nishan Systems
intends to offer fully interoperable products that allow organizations to
access and share stored data in a high-performance, manageable, and
scalable SAN. And because the products are based on the worldwide
standard Internet Protocol (IP) and Gigabit Ethernet, they will be fully
compatible with the vast installed base of routers and switches with
which IT professionals are already familiar.
Another protocol, IP SAN, also known as iSCSI (Internet Small Computer Systems Interface), uses the IP networking infrastructure
to transport large amounts of block storage data over existing LANs and
WANs. Among the companies with IP SAN solutions is IBM. The company’s IP Storage 200i provides storage that is directly attachable to an
Ethernet LAN. This solution supports heterogeneous Windows NT, Windows 2000, and Linux clients, enabling users to take advantage of many
SAN-like capabilities without the infrastructure and support cost of Fibre
Channel SAN environments.
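To illustrate the core idea behind such products, namely carrying block (not file) storage requests over ordinary TCP/IP, here is a toy Python client that fetches a single block by logical block address. The 16-byte wire format is invented for this sketch; real iSCSI defines its own PDU layout and session semantics.

```python
# Toy block-over-IP client: request one 512-byte block from a block server.

import socket
import struct

BLOCK_SIZE = 512

def read_block(host: str, port: int, lba: int) -> bytes:
    """Fetch the block at logical block address lba from host:port."""
    with socket.create_connection((host, port)) as sock:
        # Invented wire format: 8-byte LBA + 8-byte block count, big-endian.
        sock.sendall(struct.pack(">QQ", lba, 1))
        buf = b""
        while len(buf) < BLOCK_SIZE:
            chunk = sock.recv(BLOCK_SIZE - len(buf))
            if not chunk:
                raise ConnectionError("server closed mid-block")
            buf += chunk
        return buf
```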
With the potential to support all major networking protocols, IP SAN
can unify network architecture across an entire enterprise, reducing the
overall network cost and complexity, while ensuring widespread availability. To facilitate administration, IP SAN can use known network management tools and utilities that have been developed for IP networks.
IBM’s IP Storage 200i, for example, comes with a browser-based interface
that allows system administrators to easily configure the system, set permissions, and implement changes from anywhere on the network.
To meet an organization’s diverse connectivity requirements, there are
single-product solutions that address the challenges of connecting multiple SAN islands across a variety of network topologies. Entrada Networks, for example, offers a SAN over IP switch called Silverline. The
switch features an assortment of network connection options that includes T3 for today’s ATM-based WANs, OC-3 and higher feeds for WAN/MAN networks, and Gigabit Ethernet for implementing SANs over existing high-speed IP networks. In supporting connectivity for the most commonly used WAN services, this solution meets current and emerging storage needs, while allowing for future technology migration, bandwidth scalability, and convergence.
COURSE OF ACTION
The move to SANs provides organizations with a new level of scalability, and new tools have become available for centralized administration, allowing a much greater degree of flexibility than the traditional network-attached storage paradigm. Because SANs entail an entirely different way of implementing and managing storage, however, it is a good idea to begin with a simple installation, where the need is immediate, and expand its use throughout the rest of the organization as needs justify.
SANs need not be local or regional. IP can be used in situations in
which cost containment is a vital concern. For maximum scalability,
however, fiber-based ATM networks can be used to extend the range of
SANs to any corporate location across the WAN without jeopardizing the
integrity of remote data replication or database image transfer operations.
Once a prohibitively expensive service, ATM is also going down in
cost due to the availability of feature-rich integrated access devices
(IADs) that provide NxT1 access rather than forcing companies to lease
a full DS3 access line for 45-Mbps access. These devices come with serial,
Ethernet, and T1 interfaces for direct connection of virtually any storage
device and server. Today’s IADs make ATM attractive for midsize companies that were previously locked out of this highly reliable and scalable
service due to the exorbitant cost of access.
If an organization runs multiple types of traffic over the WAN, then
ATM is the clear choice. For example, if the organization has separate data
networks for IP, SNA, and Frame Relay — and then wants to add a SAN
— all of this traffic can be combined over ATM. Add to this scenario packetized voice as low as 8 Kbps for intra-company calls or circuit emulation
for PBX trunking between major sites, and the cost savings resulting from
network consolidation can be quite dramatic. With ATM’s quality-of-service and priority queuing features, there would be no service degradation
— and no adverse impact on an organization’s competitiveness.
This level of convergence would require the selection of an integrated
communications provider (ICP) that is capable of supporting all of these
data transport technologies through the same multi-protocol platform under centralized 24×7 surveillance and proactive management from a network operations center. This type of carrier gives businesses the
opportunity to take advantage of the most appropriate technologies, or easily migrate between them as their needs change, without having to
deal with multiple service providers and equipment vendors. And now
that the latest generation of IAD supports all of these traffic types, there
is not even the need for customers to add or change the access equipment to take advantage of any or all of these technologies. A simple
hardware or software upgrade may be all that is necessary.
CONCLUSION
Companies faced with a continuous bombardment of information are
turning to SANs to house, manage, and protect this vital asset. While NAS
is intended for data access at the file level, SANs are optimized for highvolume, block-oriented data transfers. Although both solutions address
the need to remove direct storage-to-server connections to facilitate more
flexible storage access, SANs provide a higher-performance and more
scalable storage environment. They achieve this by enabling many direct
connections between servers and storage devices — such as disk storage
systems and tape libraries — over a variety of transports. The choice among Fibre Channel, Gigabit Ethernet, IP, and ATM will hinge on such factors as the distance between storage locations, the presence of other
types of traffic, and the organization’s budget constraints.
Craig Myers is a Technical Consultant at e.spire Communications, an integrated communications provider in Herndon, Virginia, which operates its own nationwide ATM backbone network accessible from 46 domestic locations.
He can be reached at [email protected].
Nathan Muller is a 30-year veteran of the telecommunications industry and the author of 19 books and over 2000
articles in 60 publications worldwide. He is Senior Technical Consultant at e.spire Communications and can be
reached at [email protected].
Mention of specific vendors and products in this article is for illustration purposes only and does not constitute an
endorsement of any kind by either the authors or e.spire Communications.
10/01
Auerbach Publications
© 2001 CRC Press LLC