Oil and Gas
Computational Demands for Oil and Gas
• Seismic Processing
• Reservoir Simulation
• Geophysical Visualization
As computational complexity increases, so does the need for greater computational speed.
Raising the Computational Bar
The search for greater speed has resulted in greater parallelism:
Uniprocessor → SMP → ccNUMA → Clustered SuperComputer
Scaling in Clustered SuperComputers
• Partition data
• Distribute
• Divide & Conquer (see the sketch below)
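To make the pattern concrete, here is a minimal sketch of partition/distribute/divide-and-conquer using Python's standard multiprocessing module. It is not from the deck: the fake trace samples and the rms() metric are hypothetical stand-ins for a real seismic workload.

```python
# Minimal divide-and-conquer sketch: partition the data, distribute the
# pieces across worker processes, then combine the partial results. The
# fake trace samples and the rms() metric are hypothetical stand-ins
# for a real seismic-processing workload.
from multiprocessing import Pool
import math

def rms(chunk):
    # Root-mean-square of one partition of samples.
    return math.sqrt(sum(x * x for x in chunk) / len(chunk))

def partition(data, n):
    # Split the data into n roughly equal pieces.
    size = math.ceil(len(data) / n)
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    traces = [math.sin(i / 10.0) for i in range(100_000)]  # fake samples
    with Pool(4) as pool:                # distribute across 4 workers
        partials = pool.map(rms, partition(traces, 4))
    print("per-partition RMS:", partials)
```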
But how do you manage it all?
Administrative Challenges of Clusters
Hardware Failures
Remote Monitoring and Access
Software Upgrades and Cloning
Linux NetworX Evolocity™
• First vertical-node design
• AMD Athlon™ or Intel Pentium® processors
• Single or dual processors
• Fast Ethernet included; high-speed options available (Dolphin, Myrinet, and Quadrics)
• Pre-configured to use ICE™, because real cluster management takes both hardware and software
No one has built Linux clusters longer than Linux NetworX.
Evolocity™ Benefits
• Vertical rack-mount design increases system component reliability
• Horizontal design increases heat; vertical design allows for better air flow
• Evolocity decreases heat by 12 °C compared to the average 2U solution
Integrated Cluster Environment (ICE)™
• ClusterWorX® – cluster management software
• ICE™ Box – management hardware appliance
“As Lawrence Livermore National Laboratory continues to add to parallel capacity resources, cluster management becomes a more critical issue, which is why we required the ICE management tool from Linux NetworX.”
– Dr. Mark Seager, Asst. Dept. Head for TeraScale Systems, Lawrence Livermore National Laboratory
ClusterWorX® and ICE Box
• Integrated power and temperature monitoring
• Serial console access to each node – the only serial connection that does not require any rack space
• View and assign node properties and network configuration
ClusterWorX® Events
• Events perform automatic system administration tasks
• Example: if (Temperature >= 60 °C), shut down the node (a sketch follows below)
• Send email when an event has been triggered
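As a sketch only: the deck gives the event rule as pseudocode, so the sensor path, mail addresses, and overall shape below are invented for illustration, not the actual ClusterWorX event API.

```python
# Illustrative event rule only; the deck shows the ClusterWorX event as
# pseudocode, and this sketch invents the sensor path and mail setup.
import smtplib
import subprocess
from email.message import EmailMessage

THRESHOLD_C = 60  # threshold from the slide's example

def read_temperature():
    # Hypothetical sensor source; a real monitor would query lm_sensors,
    # IPMI, or the ICE Box's thermal channel.
    with open("/proc/cluster/node_temp") as f:  # hypothetical path
        return float(f.read())

def send_alert(temp):
    # "Send email when an event has been triggered."
    msg = EmailMessage()
    msg["Subject"] = f"Node over-temperature: {temp:.1f} C"
    msg["From"] = "clusterworx@example.com"   # placeholder addresses
    msg["To"] = "admin@example.com"
    msg.set_content("Shutting the node down to protect the hardware.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    temp = read_temperature()
    if temp >= THRESHOLD_C:          # if (Temperature >= 60 °C)
        send_alert(temp)
        subprocess.run(["shutdown", "-h", "now"])  # shut down the node
```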
ClusterWorX® Plug-in
• Perl and native Linux plug-in support allows you to add custom monitors and actions
• Over 40 built-in near real-time monitors
• Three types of plug-in (a Monitor-type sketch follows below):
  • Monitor – constantly monitors the node
  • Startup – read once when the node boots
  • Execute – act upon a system trigger
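The deck names Perl as the plug-in language; purely as a sketch, here is the shape a "Monitor"-type plug-in could take, written in Python to match the other examples here. Reading /proc/loadavg is real Linux, but the report() hook is a hypothetical stand-in for the real collection channel.

```python
# Sketch of a "Monitor"-type plug-in: runs continuously on the node and
# reports a metric at a fixed interval. The report() hook is a
# hypothetical stand-in for however ClusterWorX collects plug-in output.
import time

INTERVAL_S = 5  # sampling period

def sample_loadavg():
    # /proc/loadavg is standard on Linux: "0.42 0.35 0.30 1/123 4567"
    with open("/proc/loadavg") as f:
        return float(f.read().split()[0])

def report(name, value):
    # Hypothetical reporting hook; print stands in for the real channel.
    print(f"{name}={value}")

if __name__ == "__main__":
    while True:  # "constantly monitors the node"
        report("loadavg_1min", sample_loadavg())
        time.sleep(INTERVAL_S)
```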
Administrative Challenges
Hardware Failures
Remote Monitoring and Access
Software Upgrades and Cloning
ClusterWorX® Monitoring
• Remote access through web browsers
• Examine load averages, CPU usage, memory usage, and configuration
• Manage the cluster using subgroups
ICE™ Box Remote Access and Control
• Adjust identification, power, temperature, and reset with the LED control panel
• The ICE Box folds open for easy access, then folds up neatly
ClusterWorX Architecture
[Diagram: the ClusterWorX server (jserv) and the CMM communicate with a cwxd daemon on every compute node over thermal, serial, and Ethernet connections.]
Cluster Management is the combination and integration of both Hardware and Software.
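The deck does not detail the jserv-to-cwxd protocol, so purely as an illustration of the split between a central server and per-node daemons, here is a toy status daemon; the port number and the line-based "status" request are invented for this sketch.

```python
# Toy per-node daemon, illustrating the server/daemon split only. The
# real cwxd protocol is not described in the deck; the port number and
# the line-based "status" request here are invented for this sketch.
import socketserver

class StatusHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Answer a one-line "status" request with the node's load average.
        if self.rfile.readline().strip() == b"status":
            with open("/proc/loadavg") as f:
                self.wfile.write(f.read().encode())

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 9000), StatusHandler) as srv:
        srv.serve_forever()
```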
Administrative Challenges
Hardware Failures
Remote Monitoring and Access
Software Upgrades and Cloning
ClusterWorX® Cloning
• Cloning nodes is as easy as point and click
• Use clone images to conveniently provide software upgrades
Network Boot (compute node ↔ host node)
1. The compute node broadcasts its MAC address; the host node replies with an IP address
2. The compute node configures TCP/IP
3. The compute node requests an image; the host node sends the network loader
4. The compute node executes the network loader
5. The loader requests an image; the host node sends a net-boot kernel (cloneboot.ebi, hdboot.ebi, and nfsboot.ebi)
6. The kernel is loaded into RAM and control is transferred to it
7. The kernel executes /etc/init
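As a rough illustration of the sequence above (and only that), here is a toy simulation of the compute-node/host-node dialogue. A real implementation would speak DHCP/TFTP-style protocols; the MAC address, IP address, and image mapping below are made up.

```python
# Toy simulation of the net-boot handshake above. Both "nodes" are plain
# functions; the MAC address, IP address, and image names are invented.

IMAGES = {"loader": "network loader", "kernel": "hdboot.ebi"}  # host side

def host_assign_ip(mac):
    # Step 1: the host node maps the node's MAC address to an IP address.
    return {"00:30:48:aa:bb:cc": "10.0.0.42"}[mac]

def host_send_image(name):
    # Steps 3 and 5: the host node serves the requested boot image.
    return IMAGES[name]

def compute_node_boot(mac):
    ip = host_assign_ip(mac)            # steps 1-2: MAC -> IP, TCP/IP up
    print(f"{mac} configured as {ip}")
    print(f"executing {host_send_image('loader')}")   # steps 3-4
    kernel = host_send_image("kernel")  # step 5: fetch a net-boot kernel
    print(f"loading {kernel} into RAM, transferring control")  # step 6
    print("executing /etc/init")        # step 7

if __name__ == "__main__":
    compute_node_boot("00:30:48:aa:bb:cc")
```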
Clone Boot (compute node ↔ host node)
1. Continuing from the network boot, init executes linuxrc
2. The compute node requests the partition layout; the host node sends the disk partitions
3. The compute node formats the disks and mounts them
4. The compute node requests the file image; the host node sends it
5. The compute node untars the file image
6. The compute node rebuilds lilo
7. The compute node reboots from the network (hdboot)
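A dry-run sketch of the node-side clone steps, with standard Linux tools standing in for whatever linuxrc actually runs; the deck does not name the commands, and the partition, mount point, and image path below are hypothetical.

```python
# Dry-run sketch of the clone-boot steps, with standard Linux tools
# standing in for whatever linuxrc actually runs; the partition, mount
# point, and image path are hypothetical. Commands are printed, not run,
# unless APPLY is set, since they would wipe the target disk.
import subprocess

APPLY = False
DISK = "/dev/hda1"                    # hypothetical target partition
IMAGE = "/mnt/host/image.tar.gz"      # hypothetical file image from host

STEPS = [
    ["mkfs.ext2", DISK],                         # format the disk
    ["mount", DISK, "/mnt/target"],              # mount it
    ["tar", "xzf", IMAGE, "-C", "/mnt/target"],  # untar the file image
    ["lilo", "-r", "/mnt/target"],               # rebuild lilo in new root
    ["reboot"],                                  # network reboot (hdboot)
]

for cmd in STEPS:
    print(" ".join(cmd))
    if APPLY:
        subprocess.run(cmd, check=True)
```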
Performance in Bioinformatics
Tularik, Incorporated – San Francisco, California
Background
• Slow computation was costing time and money: the legacy system would have taken 38 years to complete!
Solution
• Linux NetworX delivered a 74-node system with 150 GigaFlops of performance, running with ICE™ Cluster Management
Results
• ICE™ reduced administrative overhead by $10,000 per year
• The computation completed on the Linux NetworX cluster in 34 days, an increase of over 400 times the speed
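The speedup claim is internally consistent: 38 years is roughly 13,880 days, and 13,880 / 34 ≈ 408, i.e. just over 400×. A one-line check:

```python
# Sanity-check the slide's speedup claim: 38 years vs. 34 days.
print(38 * 365.25 / 34)  # ≈ 408, i.e. "over 400 times the speed"
```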
Linux NetworX Roadmap
Clustrix 1.0 (2002)
• LNXI distribution for clustering
• Integrated MPICH, Myrinet, PBS, and Maui
ICE Box 3.0 (2002)
• IPMI-based System Monitoring
• SNMP MIB Agents
LinuxBIOS 1.0 (2002) – www.linuxbios.org
• BIOS reboot in under 3 seconds
• Remotely manage BIOS settings
Oracle 9i RAC (2002)
• Cluster-transparent database
Conclusion
Clustered SuperComputers for complex computations.
The best cluster management is the combination and integration of hardware and software, so you can focus on your core business.
Powerful Cluster Technology