Issues Facing Managers in Managing
Enterprise-Wide Systems
or
"Change is the constant state in the IT world"
Based on a presentation by Claes Stahl, Laird Stahl Ltd., at SEUGI '95 in Stockholm
The World of IT
This is a presentation which will report on how things are shaping up in the world of IT in general and hopefully provide some comfort for the busy IT executive who feels that it's getting more and more difficult to extract useful information from the flood of trends, statements and announcements. Here is the news:
• The bad news is that the IT world is constantly changing
• The good news is that the IT world is constantly changing
The Fashion Industry of IT
An aspect of human behaviour is the urge to compete with
your fellow man. Success in this competition can be achieved
in two ways, by being better than the other guy and by making
it more difficult for the other guy to improve. You win both
ways, don't you?
In the IT industry, both these approaches are used, by
humans as well as whole organizations. The obvious one is to
win by being objectively better than the competition. The other
technique is much more negative, since you try to raise your
own stature by showing off and not helping the other guy to
understand the things you have managed to grasp. You feel you have worked so hard to learn it, so you say, "let them struggle, so they see how hard it was for me...". A variation of this
theme is to make your new knowledge and ideas look even
more advanced and mysterious than they actually need to be.
You create a fashion image of the issues. This is what the IT
industry is excelling in.
Most people working in the IT industry are able to see
through this posturing, if they have enough time to investigate
the truth behind the fashion. Unfortunately, this takes some
time (and you are certainly not going to be helped by the
proponents), so if you are a busy IT executive, you are
constantly struggling with the flood of information and how to
interpret it. Usually you say "I'll check that later" or "that
sounds interesting, but I'll look at it when I need it" and your
pile of information to read is constantly growing...
In this session, we will try to address some of these "fashion items" in the IT industry:
• Is the Mainframe Dead?
• Why Client/Server Computing?
• Open Systems, Portability & Networking
• Distributed Systems
• Object Orientation
• The Internet and the World Wide Web
• Managing Systems in large organizations
IT in the 1970s and 80s
Centralized computing was "the only kid in town" in the
1970s. Commercial data processing expanded, led by IBM's enormous success with their mainframe computers. In the next decade, things started to change a little. This was primarily due to one factor, the arrival of the desktop personal computer.
The Graphical User Interface (GUI)
The fact that the PC could provide some new features was initially exploited by individual software packages. The best early example is perhaps the spreadsheet system Lotus 1-2-3. The PC also often had far better "response times" than the terminal access via networks to the mainframe. Together with a point-and-click device (such as a mouse), the high resolution images with icons and symbols constitute what later has been named a Graphical User Interface (GUI).
The main benefit of a GUI is that you can learn how to use a system much faster than with a command-based system. As users started to get used to high resolution GUIs, they became more and more critical of the comparatively crude images of the mainframe systems.
The Application Development backlog
Another problem with the mainframe environment has been
the time it takes to build and change applications when organizational requirements change. In large organizations, all applications were developed and maintained by separate
programming departments, which had a growing backlog of
requests from users for changes and new system developments.
The initial attempts to resolve this were to try to make the programmer more productive through 2nd, 3rd and 4th generation languages. Later, computer aided software engineering (CASE) tools were also used. The latest tools involve prototyping techniques.
While waiting for all this, the users became more and more
impatient and discovered that PCs could give them tools on the
desktop directly. They started their own data processing departments on the desktops.
Running a "data center" - not a holiday
As a result of this, large organizations are gradually discovering some new facts about their data processing. Their corporate data is kept in many locations (PCs on desktops, distributed file servers, diskettes in briefcases, laptop computers on the move, etc.). This has had the following effects on DP in large
organizations:
• Difficult for management to ensure that corporate data is
secure and properly maintained
• Changes to applications must be made at the workstation
• Performance on individual workstations and on file servers
must be managed
If the environment is totally distributed, i.e. no central
control, all these issues have to be addressed by the users
themselves. Because of this, many organizations have had to
allocate substantial amounts of non-DP staff time to look after
their own computer environments.
These developments call for better management, but not necessarily a return to central computing. You could summarize the main strategic points:
1. The GUIs are best kept on workstations (too much network
load if high resolution images are to be sent around the
network)
2. Data should be kept on the platform which is best suited to
serve the number of simultaneous users that will access it
3. It must be possible to move applications between platforms
if and when the number of users changes (up or down)
The first objective is already satisfied to a large degree through the use of Windows, OS/2 and other future workstation-based system software. Which one you use is mainly important only to the vendor, since a standard in Windows-type APIs is emerging (e.g. WIN-OS/2, WABI, etc.).
The second point can be met by locating the data base server
functions on a platform which is powerful enough to sustain
all concurrent requests with acceptable response times.
The third objective is addressed by the issue of open systems
and it cannot be said to be a reality yet (and it is a matter for
discussion if it will ever be).
Open systems
There are some words and expressions in the DP industry which seem to possess some mystical powers, popping up and adapting themselves to any situation. Examples of this are words such as "object" and "enterprise". Another expression which any street-wise used computer salesman must be capable of saying again and again is "open systems".
A large section of the DP community puts an equal sign between "open system" and the operating system Unix. This is not entirely correct, since the fact of the matter is that Unix is one of the operating systems that adhere to some "open systems" definitions.
If you try to find one single meaning of the expression which is accepted by everyone, you will fail. Here are three organizations' definitions of open systems. First the U.S. based Institute of Electrical and Electronics Engineers (IEEE):
"A comprehensive and consistent set of international information technology standards and functional standards profiles that specify interfaces, services and supporting formats to accomplish interoperability and portability of applications, data and people."
Note the words "interoperability" and "portability".
Another organization involved in open systems is the Open Software Foundation (OSF). Their position as half academic and half commercial is evident, since their definition sounds much more like wishful thinking:
"A truly open computing environment would employ a standard set of interfaces for programming, communications, networking, system management, and user 'look and feel', so software applications would become uncoupled from the platforms on which they run."
The final quote is from the more commercially oriented X/Open Co. Ltd. Here you can definitely see the "let's-get-on-with-it!" attitude:
"A vendor-independent computing environment consisting of commonly available products that have been designed and implemented in accordance with accepted standards"
Portability and Interoperability
But let us concentrate on the main points in the definitions of "open":
• Portability - By setting standards for how operating system platforms interact with applications, it should be possible to move applications between different platforms that conform to the standards.
• Interoperability - Different system platforms should be able to communicate with each other as long as they comply with an open communications (networking) standard.
Since exchanging information over networks has been something concrete for many years, the focus has mostly been on interoperability, but that is changing.
Standards Organizations
The setting of computer system standards has gradually moved out of the hands of the individual manufacturer. Today, there are many organizations that are not affiliated to a single manufacturer (if any) which set standards. Here are some major ones:
- IEEE - Institute of Electrical and Electronics Engineers, a U.S. organization with international influence
- OMG - Object Management Group, promotes object technology
- UniForum - organization for Unix users
- ITU-T - International Telecommunications Union-Telecommunication (used to be called CCITT until March 1993)
- ISO (International Standards Organization) and IEC (International Electrotechnical Commission), which have together set up JTC1 (Joint Technical Committee 1) to coordinate IT standards
- X/Open Co. Ltd., which was set up to "package" many organizations' standards in the Common Application Environment (CAE); also "custodian" of the Unix brand name
- OSF (Open Software Foundation), which develops products for members to use
There are many national groups that develop, co-ordinate or approve of standards, often adopting international standards directly. It is actually the national organizations that make up the members of ISO, so there is a kind of "two-way" link there.
Portable Operating System Interface - Posix
In 1984, the IEEE started to look at the problem of moving applications between the then different versions of Unix that were dominant at the time. The project was named "1003" and it has a number of subgroups formed to develop standards for operating systems.
Only a few of the standards developed by the groups are complete and ratified by IEEE (the founding element, 1003.1, was first ratified in 1988), some are provisionally ratified and most are in the process of being developed.
At first, most system suppliers paid little attention to the efforts to create Posix. However, in 1988, NIST (the U.S. government's standardization body) released FIPS (Federal Information Processing Standards) 151-1, which stated that all operating systems purchased from 1 Oct. 1990 by federal organizations must comply with Posix 1003.1. This let the cat out of the bag and Unix vendors quickly claimed that their products complied. From 1992, several suppliers have also been able to provide Posix compliant proprietary systems. It now seems that Posix compliance has become a minimum requirement, and the industry has moved on with additional standards.
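To make the portability point concrete, here is a minimal sketch (an illustration added here, not part of the original presentation) of a small file-copying program that restricts itself to the Posix 1003.1 system interface. Because it uses only standardized calls such as open(), read() and write(), it should build and run unchanged on any system that claims 1003.1 compliance; the file names are whatever the user supplies.

    // copy.cpp - illustrative only: copy one file to another using nothing but
    // Posix 1003.1 calls, so the source should build unchanged on any system
    // that claims 1003.1 compliance.
    #include <fcntl.h>      // open(), O_* flags (Posix)
    #include <unistd.h>     // read(), write(), close() (Posix)
    #include <cstdio>       // std::fprintf(), std::perror()

    int main(int argc, char* argv[])
    {
        if (argc != 3) {
            std::fprintf(stderr, "usage: copy <from> <to>\n");
            return 1;
        }
        int in  = open(argv[1], O_RDONLY);
        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) {
            std::perror("open");
            return 1;
        }
        char buf[4096];
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)   // read a block...
            write(out, buf, n);                       // ...and write it out
        close(in);
        close(out);
        return 0;
    }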
X/Open Co. Ltd.
X/Open was originally set up by some European computer companies to balance the dominance of U.S. based companies in the IT industry. Today, the organization is wholly international.
X/Open specializes in producing de facto standards "where none exist" and have produced CAE (Common Applications Environment), which goes further than the Posix interface. X/Open have also put together the X/Open Portability Guide (XPG), compiling different standards into one single "branding procedure". The first versions of XPG were never really adopted by the industry. However, XPG4 is starting to become generally accepted and many vendors have or plan to have products that comply with it.
The Open Software Foundation (OSF)
OSF is an organization set up in May of 1988 by vendor companies such as IBM, HP and DEC to balance the grip that AT&T had on commercial Unix (Unix System V). OSF produced a Unix of its own, OSF/1 (which could not be called Unix, since AT&T owned the trademark). OSF has today become a more widely accepted forum and has grown into a supplier of additional products.
Here are OSF's main products:
- OSF/1 - a Unix system based on the BSD 4.3 and 4.4 versions of Unix
- OSF/Motif - a GUI for Unix environments, based on MIT's X Window System
- OSF/DCE (Distributed Computing Environment) - a set of building blocks for creating distributed client/server applications
There is also the Distributed Management Environment
(DME) which was supposed to "provide a framework for the
efficient, cost-effective management of open systems", but
most observers seem to agree that there is not much life in that
project.
DCE Services
DCE is probably OSF's most important product. The package consists of C coded functions that you can install in a Unix-type system and which are meant to help you build distributed applications. The following services are available:
• Remote Procedure Call (RPC) - A program-to-program communication support facility based on TCP/IP, which can be used as a basis for creating client/server applications
• Directory Service - to help in locating objects (such as programs, users, servers, etc.) in a network
• Security Service - uses techniques equivalent to Kerberos (an authentication technique invented at MIT) to conduct the security administration separately from the client and server ends of an application
• Distributed File Service (DFS) - a file system with hierarchical directories
• DCE Threads - a multitasking facility (something smaller Unix systems don't normally have as a standard feature); a small sketch of the threads idea follows after this list
• Distributed Time Service, which provides for a clock synchronization scheme between multiple hosts in a network
• Diskless support service - for systems with distributed platforms without their own permanent storage (e.g. disks)
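The threads idea is easiest to see in a small example. The sketch below is an illustration only and is not DCE-specific: it uses the standard Posix threads (pthreads) interface, which is closely related to DCE Threads, and the worker() function is invented for the example.

    // threads.cpp - illustrative only: two tasks running concurrently inside
    // one process, using the Posix threads (pthreads) interface.
    // Typical build: c++ threads.cpp -lpthread
    #include <pthread.h>
    #include <cstdio>

    // worker() is an invented example task; it just prints its id a few times.
    static void* worker(void* arg)
    {
        const char* id = static_cast<const char*>(arg);
        for (int i = 0; i < 3; ++i)
            std::printf("task %s, step %d\n", id, i);
        return nullptr;
    }

    int main()
    {
        pthread_t t1, t2;
        pthread_create(&t1, nullptr, worker, (void*)"A");  // start first task
        pthread_create(&t2, nullptr, worker, (void*)"B");  // start second task
        pthread_join(t1, nullptr);                         // wait for both to finish
        pthread_join(t2, nullptr);
        return 0;
    }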
COSE, Spec1170, Unix 95 and???
In April 1993, IBM, H-P, Sun, USL, Novell and Santa Cruz Operation (SCO) set up a project called the Common Open Systems Environment (COSE); the group was later also joined by DEC. The COSE initiative aims at narrowing down the large number of Unix interfaces in the industry to a manageable and usable set of definitions. The project has been slightly overshadowed by Spec1170, however (see below).
Also in April 1993, Novell Inc. acquired Unix System Laboratories (USL) from AT&T. USL was the owner of the Unix name and of Unix System V. The take-over was opposed strongly by many vendors and Novell responded by giving the Unix brand to X/Open, along with the task of becoming "custodian" and owner of the System V Interface Definition (SVID).
In September 1993, years of arguing and infighting between
Unix vendors came to an end, when over 70 suppliers set up a
plan to create one single set of Unix interfaces in a project
called Spec1170. The project is managed by X/Open and has
produced a list with 1,168 interfaces. Tests for Spec1170
compliance have been available since December 1994.
In December 1994, X/Open also announced that systems that
comply with XPG3 or XPG4 can be branded "Unix 93", those
that comply with Spec1170 can be called "Unix 95". The
Spec1170 standard is also referred to as XPG4.2 and "Single
Unix Standard" (SUS).
Networking Standards
So far, we have discussed system portability. Open systems also incorporate the world of networks, which is full of expressions that can be somewhat confusing. You can hear expressions like "... how does ISDN relate to SDLC?", "... what is ATM going to do for me?", "... why is X.25 not so good any more, I thought ITU were going to improve it to reach Frame Relay standards...?", "... now, how can you be so silly even considering installing expensive Token Ring, when there is Ethernet?", "... my TCP/IP network is much better than your SNA and don't even mention OSI...", and so on.
It is often difficult to know how the different concepts relate to each other. Even people who specialize in networks seem to find it difficult sometimes to explain these things.
One aspect of network standardization is that because of the very nature of networking, you must be able to connect equipment with different physical characteristics, manufactured by many different suppliers. This has forced the industry to standardize. This, coupled with a need to standardize due to increased international data communication traffic, has resulted in the networking environment being much more standardized than other parts of the IT industry.
Unfortunately, it has not resulted in one single standard for everything, however, which is why we need to look closer at the picture...
The OSI seven layer model
The International Standards Organization (ISO) once did a
good job of trying to sort out the entities and their relationships
by proposing the Open Systems Interconnect (OSI) seven layer
model, with level 1 representing the physical cabling through
to the applications at level 7.
Not all networks can be mapped onto this model. SNA,
IBM's most wide-spread standard for large networks, maps
into a slightly different model. Unix environments use TCP/IP,
yet another network technique, which also maps differently.
Network standards
Before we can look at the different protocols, we must also
point out that there are three main types of networks:
• Local Area Networks (LANs) - This is a network which is local to an office or a building. Its existence is very much due to the proliferation of PCs and workstations in the office environment. LANs are usually set up and managed by the organizations that use them.
• Wide Area Networks (WANs) - have been in place ever since data processing over telecommunication links was introduced, although they were not called WANs until the concept of the LAN was invented. WANs can span short or long distances, although the main idea is to provide long distance communications. The physical infrastructure is usually provided by a public telecommunications company.
• Metropolitan Area Networks (MANs) - This is a much more recently invented concept, describing "oversized LANs" or "dwarfed WANs", depending on which way you look at it.
LAN Data Link Technologies
Here are some major LAN Data Link technologies:
• Token-Ring - IBM's LAN where the PC connections are set up in a ring and a "token" with the data is passed around until it reaches the destined PC/workstation. It can transfer at speeds of up to 16 Mb/s and provides generally high performance, but is regarded as comparatively expensive. Token-Ring has also been approved as IEEE 802.5.
• Ethernet - Invented by the Xerox Corporation, it uses low cost technology and transfers at 10 Mb/s, so it may not always have the same capacity as Token Ring. A recent modification to Ethernet has resulted in a 100 Mb/s Fast Ethernet, which is also starting to appear on the market.
• Fiber Distributed Data Interface (FDDI) - This standard has been created by ANSI to support fiber optic LANs. FDDI operates at data rates of 100 Mb/s, so it has a much higher capacity than any of the others, but the technology is also more expensive.
• Asynchronous Transfer Mode (ATM) - This is the latest addition to the family of transport technologies. It is based on a switching technique where all data is divided into 53 byte cells which are transported across the network in separate transmissions. They are then assembled again at the destination. ATM has scope for very high speed transmission. The standardization organization, the ATM Forum, has recently decided on a 25 Mb/s LAN standard, but the technology allows for much higher speeds (at a cost).
LAN Protocols - Software Support
Layers 3 and 4 in the OSI model are supported in the LANs by many different products from several suppliers. This is where we can start talking about "protocols", which are implemented in software. Here are some major ones:
• Internetwork Packet Exchange (IPX) from Novell Inc. A network driver inside their NetWare, which operates with DOS systems. It is widely used by many manufacturers.
• TCP/IP is a standard feature of Unix and it can also be purchased for DOS and OS/2 platforms.
• NetBIOS began its life as IBM's Token Ring supporting
software, although it now also operates over other architectures as well. It works with many different operating systems.
• DECnet is Digital's LAN supporting software, developed
around 1975 for general network connection, i.e. not just
for LANs (which weren't even invented at the time).
• Xerox Network Systems (XNS) is sometimes referred to as
the "grandfather" of LAN protocols, because there have
been many variations introduced by different vendors.
• AppleTalk from Apple Corp.
LAN topologies can support multiple protocols. For example, it is not uncommon to see applications using NetBIOS,
IPX and TCP/IP on the same token ring.
WAN Technologies
Usually, the backbone cabling infrastructure of WANs is provided by public telecommunications organizations. Basically, you use a public network for two things, to speak over (voice) and to send computer data over (data). There are basically two choices available today and another one "tomorrow":
• Standard public telephone lines - these are primarily meant to be used for normal telephone conversations, which are transmitted in analogue form. Data can also be transmitted in converted analogue form, using modems to perform the conversions at each end. Usually, you do not run data over these lines at more than 28.8 Kb/s (although some people try higher and succeed). If you lease a telephone line, you may be able to get a "clean" link that can support up to 2 Mb/s.
• Digital links (Integrated Digital Network lines) - these are specially checked links that the telecommunication companies have set aside for digital data traffic. The arrangement is sometimes referred to as an Integrated Data Network. Because the lines have been "tuned" to data traffic, they can reliably provide higher data transfer rates than the standard telephone lines; 2 Mb/s is standard and it can be higher. They are configured either as leased or circuit switched networks (X.21) or as packet switched networks (X.25 or Frame Relay), the latter making them more economical, since you can then get leased line performance at dial-up prices.
• Asynchronous Transfer Mode lines - these are not available yet from most telecommunication companies, although work is in progress. ATM services will provide links which can run at higher speeds than any previous technology. The WAN version of ATM will initially run at 155 Mb/s, and eventually the speed could be up to 2 Gb/s. There is no ready interface standard for ATM yet, but it will probably be called B-ISDN, where B stands for Broadband, once it is introduced.
• Integrated Services Digital Network (ISDN) - This is an interface which gives access (usually through some IDN links) to a digital network that handles both voice (digitized) and data traffic. The ISDN we have today which operates over standard links (i.e. not ATM) is also referred to as narrow band, N-ISDN.
• SMDS - Switched Multi-Megabit Data Service is not strictly WAN, since most telephone companies offer it only in some specific geographical locations to be used to set up Metropolitan Area Networks (MANs). It uses cell switching, and can perhaps be looked upon as a precursor to ATM.
Public Network Protocols
Let us take a quick look at some of the network protocols you can use in a WAN. There are several available, although there may be differences in different countries. It is also up to the telecommunication companies to decide exactly which physical transport mechanism they use for each protocol; the user is only promised a specific performance and reliability level (and price). Here are the major ones:
• X.21 - This is the traditional protocol for digital networks where you use dedicated lines.
• X.25 - This is packet switching. The data is sent as digital "packets" with addresses, so you can share the physical links with other users. This can make it more economical.
• Frame Relay - This is also packet switching but with less error recovery than X.25. This enables it to run faster than X.25 and it is seen as a possible successor to X.25.
• ATM - which we listed above as a WAN technology, can also be regarded as a network protocol, with its cell switching technique. Because of its huge capacity for fast switching in intermediate network nodes, ATM lends itself to very high transfer speeds over switched lines. Again, ATM is not yet generally available over public networks.
WAN Protocols
We are now moving upwards in the picture and getting closer to the computer installation's domain. The WAN protocol that you choose for a WAN is usually dependent on which architecture you have picked (SNA, TCP/IP, OSI, and others). Here are some major protocols:
• Point-to-Point Protocol (PPP) - is created to allow two points to connect logically over packet switched networks to look like a point-to-point connection. This makes it possible to run TCP/IP over X.25 WANs. (It also works on LANs, but that is not relevant here.)
• X.25 Link Access Protocol-Balanced (LAP-B) - a protocol used to access X.25 networks when they operate over conventional packet switched networks (i.e. non-ISDN).
• X.25 Link Access Protocol-D (LAP-D) - a protocol to access X.25 networks when they operate over ISDN.
• Synchronous Data Link Control (SDLC) - is widely used as the main protocol in IBM's SNA networks.
• Binary Synchronous Control (BSC) - This is part of IBM's non-SNA support and is being gradually phased out.
WAN Architectures
Let us now take a look at the industry's three main WAN architectures:
• Systems Network Architecture (SNA) - a WAN standard (attempts to implement it on LANs in the '80s failed because it required too many resources of the PCs at the time). It is IBM proprietary and used by IBM-type mainframe installations, where it has become a de facto standard. IBM has expanded SNA from being solely hierarchical to also support peer-to-peer networking through APPN.
• Transmission Control Protocol/Internet Protocol (TCP/IP) - was created in the mid-1970s for the U.S. Department of Defense for internal messaging. In the early 1980s, its use was extended to additional U.S. Government installations. Later, universities and other academic institutions also adopted TCP/IP for their WANs and it became a true multi-vendor network protocol.
TCP/IP is inherently peer-to-peer, since it was designed for communications between independent computer systems. Around 1983, TCP/IP was integrated into the Unix operating system of the University of California at Berkeley and was made freely available to the public. A set of service programs as well as a "socket function" was shipped with Unix from then on ("socket functions" simplify the programming when you want to access communication protocols). This has meant that TCP/IP is widely used and has become a de facto standard for both LANs and WANs.
• Open Systems Interconnect (OSI) - Due to the lack of general open networking standards in the late 1970s, ISO developed OSI, which is a comprehensive network model divided into seven definition layers. The most important aspect of OSI is to provide a de jure standard, which can be used to explain where a product fits in. Many commercially successful products (such as TCP/IP) do not comply 100% with OSI. Some of its "thunder" has been stolen by the fact that TCP/IP is gradually becoming a de facto standard.
TCP/IP Overview
A network standard that has evolved through the years is TCP/IP. As we discussed earlier, it has become a de facto standard due to two facts, its widespread use within academic and government installations in the U.S. and its integration into Unix systems.
The main objective of TCP/IP was to make it possible to connect many separate networks so that they become one single network resource (internetworking or "internet-ing"). TCP/IP is actually several protocols:
• Internet Protocol (IP) - This is the protocol that handles the delivery of messages from an originating node to a destination node, possibly via several other nodes. The 32-bit IP address shows which network and, within that network, which host a message is destined for. IP is connectionless, which means there is no sense of "handshaking" with the other side.
• Transmission Control Protocol (TCP) - adds more sophistication to IP, since it is connection-oriented. This means that it can resend a message if it has not received a positive acknowledgement from the destination within a specific period of time. This is the main protocol for applications.
• User Datagram Protocol (UDP) - is also a front-end to IP, but more primitive and unreliable than TCP. It is the original email interface for the mail systems for which IP was created (datagrams was the name given to the mail items).
• Sockets - is a concept aimed at simplifying program communication over the network. A socket is a concatenation of a host (IP) address and an application id, referred to as a port. If you know the socket, you can read and write to the program that it represents, regardless of where on the network it is located. Both TCP and UDP support sockets. There are several standard ports for common applications; for instance FTP (see below) uses port number 21. Numbers up to 1023 are referred to as well-known and are pre-assigned. Higher numbers can be used by applications at their own discretion. A small sketch of the socket idea follows below.
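As an illustration of the socket concept (added here, not part of the original text), the sketch below builds a socket from an IP address and the well-known FTP port 21, connects, and prints whatever greeting the server sends back. The address 192.0.2.10 is a placeholder reserved for documentation; a real host address would be substituted.

    // greet.cpp - illustrative only: open a TCP socket to the well-known FTP
    // port (21) on a host and print whatever greeting the server sends back.
    #include <sys/socket.h>   // socket(), connect()
    #include <netinet/in.h>   // sockaddr_in
    #include <arpa/inet.h>    // inet_pton(), htons()
    #include <unistd.h>       // read(), close()
    #include <cstdio>

    int main()
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);      // a TCP socket

        sockaddr_in server = {};
        server.sin_family = AF_INET;
        server.sin_port   = htons(21);                // well-known FTP port
        // 192.0.2.10 is a placeholder address reserved for documentation.
        inet_pton(AF_INET, "192.0.2.10", &server.sin_addr);

        if (connect(s, (sockaddr*)&server, sizeof server) < 0) {
            std::perror("connect");
            return 1;
        }
        char buf[256];
        ssize_t n = read(s, buf, sizeof buf - 1);     // read the greeting line
        if (n > 0) {
            buf[n] = '\0';
            std::printf("%s", buf);
        }
        close(s);
        return 0;
    }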
Common TCP/IP Applications
A number of applications (a mainframe person would probably call them "utilities") have been pre-defined for TCP/IP. Most of them are supported via TCP, some via UDP. This is another reason for the wide acceptance of TCP/IP. Here is a list of some of these applications:
• TELNET - A function that allows terminal device emulation on the other side
• FTP - File Transfer Protocol. A means for transferring files from one host to another
• SMTP - Simple Mail Transfer Protocol. A mailbox service
• NFS - Network File System. Allows hosts to share files across a network so they appear as local files
• RPC - Remote Procedure Call. An Application Programming Interface (API) which makes it easier to code programs that call programs on another system.
• X-Windows - created originally by the Massachusetts Institute of Technology (MIT) to provide Unix systems with a GUI platform to use in client/server applications. Often simply referred to as "X". One of the more popular GUIs that use X is OSF's Motif.
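The well-known port numbers behind these applications are available to programs through a standard lookup call, as the following small sketch shows (an illustration only; the service names are the conventional ones held in the system's services database).

    // ports.cpp - illustrative only: look up the well-known TCP ports of a few
    // standard applications by service name, via the system's services database.
    #include <netdb.h>        // getservbyname(), servent
    #include <arpa/inet.h>    // ntohs()
    #include <cstdio>

    int main()
    {
        const char* services[] = { "ftp", "telnet", "smtp" };
        for (const char* name : services) {
            servent* s = getservbyname(name, "tcp");
            if (s != nullptr)                          // e.g. "ftp uses port 21"
                std::printf("%-8s uses port %d\n", name, ntohs(s->s_port));
        }
        return 0;
    }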
IBM's Networking Blueprint
All these networking standards and protocols - some of them more "open" than others - have not made life easier for the people who plan for the future. The word "standards" implies one way of doing things, but this seems not to be the case when it comes to IT in general and networking in particular.
IBM have tried to tackle the issue by announcing their Networking Blueprint, which initially was an idea more than anything else. Since then, products have started to trickle on to the market that support the different layers in the Networking Blueprint. You can summarize it in the one statement "you name it, we'll support it...".
An illustration of this point is how IBM originally announced a component as "Multi Protocol Transport Feature" or MPTF, and then later changed it to the more snappy "AnyNet".
Common Transport Semantics (CTS) and AnyNet
CTS is a layer in IBM's Networking Blueprint between the transport network protocols (TCP/IP, SNA, OSI) and the API support code (APPC, RPC, etc.). It can act as a "switchboard", making it possible to create TCP/IP applications which communicate over SNA networks and vice versa. The initial descriptions of the Networking Blueprint did not describe any details on CTS. Later, IBM introduced the Multi Protocol Transport Networking (MPTN) architecture, implemented through the VTAM option Multi Protocol Transport Feature (MPTF), later renamed AnyNet/MVS, which allows applications to communicate across protocols in two ways:
• SNA applications over TCP/IP networks. You must have AnyNet support at both ends of the link.
• TCP/IP Sockets applications over SNA networks
The following additional components have been announced in support of the MPTN architecture and AnyNet: AnyNet/2, AnyNet/6000, AnyNet/400 and AnyNet for Windows.
Distributed Applications (Client/Server) - One concept, many names
In the last few years, the expression "client/server" has perhaps been even more talked about, and given even more magical powers, than "open systems". The principle of running an application over many computer platforms is not new. Client/server is just one name for it; there are many other names, such as cooperative computing, distributed data processing, offload processing, etc.
There may be differences between these concepts, but in
essence the rationale behind them is the capability to distribute
components of an application system over many different
computer platforms in order to let each task be handled by the
most suitable platform (data access on one, screen handling on
another, printing on a third, etc.).
There are many ways to describe how a distributed application can be built. The industry talks about different Application
Distribution Models (ADMs). The most famous one is perhaps
the Gartner Group's ADM.
In any case, combining workstation computing with central
computing is what today's and tomorrow's applications will be
all about. Totally distributed processing becomes the norm
rather than the exception.
Design choices
When you plan for a client/server solution, the main problem
is when wishful thinking clashes with realities. It is comparatively easy to speculate on exciting solutions if you just consider
hypothetical platforms, looking at all the interesting technology
appearing on the market. It is a much different thing when you
try to adapt ideas to the hard, cold, real world of legacy systems and systems management issues. This is where many client/server projects have run into difficulties. A recently completed project commissioned by IBM (the "Chantilly" project), which made extensive year-long on-site studies at several Fortune 500 companies (half of them in the top 5% of the list), concluded that client/server applications cost more because they require even stronger management disciplines than central solutions. An overall majority of the applications (88%) needed redesigns or re-architecting before they could be scaled up to the intended operational level. Most sites reported measurable positive effects in other ways, such as higher end-user satisfaction, better productivity, improved employee morale, etc., so there is no question of whether client/server is here to stay. But it also means that there are great rewards for those who can get it right first time...
Client/Server Modeling Techniques
One way to achieve this is by using modeling techniques,
where you take the installation's main requirements and map
them onto two or three models to establish what is possible.
Each model represents a different way of looking at the
proposed application and its computer processing environment.
Each model also represents an "entry point" into the decision
making process and the one to choose depends on which aspect
is most important to the installation:
- The application design (the "wishful thinking")
- The physical infrastructure (hardware environment)
- The functional operational structure (the management
environment)
Usually you cannot decide based on just one single aspect; things are very rarely just black or white. By looking at the problem through the eyes of all three models, you would still get a good idea of which solutions would be the most unrealistic and then you would work down from there to a decision.
The final design decisions should be based on each platform's
technology, functionality, relationships with existing systems,
cost as well as management structures (and you can never
exclude politics, of course).
The three models that relate to each aspect are:
• The Application Design Model (ADM)
• The Physical Infrastructure Model (PIM)
• The Functional Operation Model (FOM)
The application design and the ADM
The application design is very often the starting point in the
decision process. This is because it can be used as a model to
establish what is theoretically possible to achieve in a new
application system, it can work as a "wishing list", if you like.
The model can, however, raise some tough questions, particularly if the final solution is to incorporate legacy system
components or if it requires major hardware platform restructuring in order to work.
The Gartner Group's model for client/server computing has
been widely recognized by the industry and it can well serve
as the Application Design Model in this case. This model
locates the following functions in either the client or the server or both:
• Data access
• Function (business logic)
• Presentation
Requirements on a new application can be things such as "it
must use a Graphical User Interface (GUI)", "it must be
possible for individuals to change the data, even when a central
repository is not operational" and so on. Other issues can relate to whether there is ready-made software available "off the shelf" which may already be designed along specific lines.
These requirements can be mapped onto an Application Design
Model such as the Gartner Group's.
The model shows five different ways in which applications
can be split between the server and the client:
- Distributed Presentation
- Remote Presentation
- Distributed Function
- Remote Data Management
- Distributed Data Management
The physical infrastructure and the PIM
The existing hardware set-up can be mapped in a Physical
Infrastructure Model (PIM) which helps to categorize the
existing installation. For example, is the installation primarily
based on using "dumb" terminals attached to a host, or is there
a WAN with programmable workstations (PWS) that can
perform some of their own processing? Or perhaps there is a
range of servers operating at different levels.
Operational structure and the FOM
The management aspects of an application are sometimes
forgotten in the initial stages of designing a new application.
In the past, this may not have caused any major problems, since
the management structure was centralized and usually very
similar from application to application. You could basically
launch the application into production and it would fit in nicely
with all the others, thank you. With the introduction of distributed applications, there are potentially many more parties to
involve. Should the client end of the application be managed by the users themselves? If not, is there an infrastructure in
place to help them? You can look at your organization and
usually map the systems management into one of several
Functional Operational Models (FOMs) such as:
Stand-alone - this can be called an "enterprise-oriented" model, where all control and management is centralized.
Cooperative - this is also an "enterprise-oriented" model, but with management functions in all places, each with independent responsibilities, even though the main function is central. Control and management is still regarded as centralized.
Client/server - this is a workgroup-oriented model with much more independent management, loose relationships and very flexible roles. There can be multiple clients and multiple servers, each managing and controlling their own environment; it's a truly decentralized model.
Interconnect processing - this is a model which can be seen as either workgroup- or enterprise-oriented, although it is probably mostly implemented in enterprise situations (i.e. centralized). It is a model where all partners can operate independently in some cases, while being dependent in others. In such a system, the overall management is centralized to one controlling point, while delegating responsibilities.
Resource sharing - this is a little bit of an "upside-down" version of the previous case. It's a workgroup-oriented model where independently managed and controlled clients can use and share resources. Each such relationship appears to each client as if it were local, so management structures must allow for individual requirements.
Cross-mapping the models
Once the different models are established for the particular
case, they can be mapped against each other in order to
establish feasible solutions (or rather sort out the unrealistic
ones).
How programs can communicate
Let us now turn our attention to the inter-program communication needed for distributed applications.
In an environment where you are trying to build applications with components spread over many computer platforms, it is not enough to just have network protocols for data interchange between them. In a client/server or program-to-program communication relationship, "the other side" is not a passive "dumb terminal", but an intelligent computer program. This makes it possible to select the level of sophistication in a multi-platform application as if you operated with one single system of interacting programs. When you design an application with many interacting program components, it is useful to have some "game rules" for how two programs exchange information and trigger each other. Should one always drive the other? Or should they perhaps be allowed to start conversations independently?
Inter-process Communication Models
In the DP industry, there are three established models for how programs can communicate with each other. Each can be used depending on an application's requirements. In the past, they have mostly been applied to applications running on single platforms only (since that is the way computer systems have been built up to now), but they can be just as easily used as models for applications with program components running on different platforms.
These inter-process communication models are:
• The conversational model
• The Remote Procedure Call (RPC) model
• The message queuing model
IBM have included all three models in their Networking Blueprint.
• The Conversational model - This is perhaps the most sophisticated of the three models mentioned here. It is based on two equally "important" programs communicating (peer-to-peer). One sends the other a message which is acted upon, followed by a return message to confirm. This can be followed by further exchanges of messages. Since "conversation" usually means some form of questions and answers, the model is synchronous; a "client" waits for a "server" to respond. The programs in the conversational model are equal partners (peers). Notice that a side can sometimes be a "sender" and sometimes a "receiver". This model is supported in IBM's SNA, through the Advanced Program-to-Program Communication (APPC) interface.
• The Remote Procedure Call model - Procedure calling is a well established technique in modular programming. A "main" program calls a subroutine, possibly passing some parameters to it. It usually waits while the subroutine performs its task; the relationship is synchronous. It is also common that the "main" program is responsible for knowing where the subroutine is located. In this model you can say that the program that takes the initiative is the "source" program and the subroutine is a "target" program. In distributed systems, the called procedure can be located anywhere in the network (remote), so there must be an interface that translates the call into network communication. In Unix, this is done via TCP/IP, which has RPC support built-in.
• The Message Queuing model - In contrast to the two other models, this one is asynchronous. It is based on programs passing requests ("messages") to a queue somewhere. "The other side" is a program that asynchronously retrieves these messages and acts upon them. There is no direct contact between the requester and the program processing the request. Usually there is some kind of queue manager to which all requesting programs pass their messages. This central function is then responsible for invoking the program or programs which will process the requests by taking them from the message queue. The requesters have no information on where the message processing programs are located and there is no automatic signal sent back to the requester to indicate when a message has been processed. This model has great scope for high volume client/server applications, since it means you can create systems that delegate multiple processes to run in parallel in many separate processors or systems, if needed. A small sketch of the queuing pattern follows below.
The models can be used for different requirements. The first two, the synchronous ones, are not that different in concept. The choice between them is more dependent on whether you use SNA or TCP/IP. The main choice is between synchronous and asynchronous techniques. It would make more sense, for instance, to use APPC or RPC if you have a file server request that must complete before you can carry on the main work. On the other hand, if there are things "to do" while a long data base update is running, you may use the message queuing model.
There is a growing collection of tools ("middleware") on the market that support these models. There is IBM's APPC and the RPC implementations in the TCP/IP-Unix world for synchronous client/server. There are fewer on message queuing; IBM's MQSeries products for several platforms are fairly new.
The Internet
One of the most talked about IT related issues today is the
Internet, the "Information Superhighway" and the World Wide
Web. Is this something that IT management has to decide on
today or is it yet another one of those "fashion items"? Well,
let us here take a brief look at the facts and then you can decide
for yourself.
The Internet is a network that grew out of the US government funded Advanced Research Projects Agency Network (ARPANET), which was an early switched network with leased lines that linked up computers used by researchers in the U.S.A. The network has since then grown organically, as more and more computers have been added to it, located all over the world, and today it is not related to the US government in the same way (even though US funding has still been provided).
The technology base for the Internet is TCP/IP with its Internet
Protocol (IP) addressing scheme, which allows computers to
connect to each other without the need for all-to-all connections.
Initially, it was research establishments and academic institutions that connected their computers into the Internet. Today,
other public institutions as well as commercial organizations
are also doing it. The Internet has been hailed by some people
as the "information superhighway", although that is a huge
exaggeration. The term was coined publicly for the first time
by US Vice President Al Gore in a speech some years ago,
where he said that networks created in the future which were
being planned for use by the general public, using new broad
band technology, would be looked upon as an information
superhighway for the common good of society in general.
Unfortunately, this has given rise to the belief that the current
Internet actually is the information superhighway. For those
that have tried it seriously, it seems to be more of a rambler's
track.
The World Wide Web
Another increasingly fashionable term is the World Wide Web or WWW. This is the result of a concept on the Internet referred to as Hypertext Links. These make it possible to set up logical links in one computer file to files on any other computer, anywhere on the Internet. When you look at ("browse") a file and see a Hypertext Link, it's highlighted and you can simply "click" on it (or press enter while the cursor has highlighted it) and the underlying program will connect you to that computer and show the other file instead.
If your Web Browser (as the programs are called that allow you to do this) is cleverly designed, it can maintain a Web map in memory, so you can browse up and down on the tree of connections that you build for yourself in this way. You can say that you build your application dynamically. When you do this, you are "surfing the Net"...
An added feature of the WWW and its browsers is the
capability to imbed high resolution graphics images, sound and
even moving images in a document. This obviously requires
that your computer supports graphics and multimedia type
documents, otherwise you can only see the raw text in them.
Once you have managed to set up a WWW Browser, it is
relatively easy to "surf on the Net", especially if you use a
graphical Web Browser. You will see high resolution graphics
and pointers (hypertext links) to other documents all over the
network and by "clicking" on these links you can wander
around everywhere without effort. Obviously, you must know where to start; you have to tell the program the address of a page (i.e. the location of a Web site and a Web page in that computer's file system) to begin with. You usually give your
Web browser program something that looks like this:
http://abcloc.web.com/web/pages/hello.html
Web pages are created by coding files in something called
Hyper Text Mark-up Language (HTML), which looks remarkably similar to the old Script language, but with room for
imbedding graphics and multimedia file specifications, which
are files that must also be present, of course. There are several
formats for these multimedia files, and a browser must obviously support the format to be able to display and play a file, both in terms of software and hardware.
A starting page at a Web site is often referred to as its home page. Some Web browsers have a home page set up automatically when you install it (usually pointing to the company that
you bought the program from, although that can be changed).
Then, you can always hope that there are useful references to
other Web sites in the home page. If you are looking for
information on particular subjects, there are several services
available which work as phone directories, with lists of thousands of web sites. One such list is even said to contain 2 million
references! These list services have names such as the Web
Crawler, the WWW Worm and YAHOO (which means Yet Another Hierarchical O... Oracle, where the first O can be interpreted as any of four different words, one of which is the word "Organized").
Using the Internet - Electronic mail
The most powerful effect of the Internet in the short to
medium term is the fact that more and more companies,
organizations and individuals subscribe to it. In that sense you
can say it is becoming a public highway (even if it cannot be
called "super"). Large organizations and corporations have
had electronic mail ("email") for 10-15 years, based on central
computer systems. Smaller organizations have also managed
to set up email systems with the growing trend of LANs and
small computers. All these systems are internal to the owning
organization. The Internet has made it possible to connect the
private networks so that outsiders can communicate with users
inside an internal email system. This is usually done by creating a gateway in the private email system which has one single Internet address and then you prefix the private user's
ID to that IP address so that the internal email system knows
exactly who it is for. This is why there usually is a "@" in
email addresses of Internet format. The sign splits the address
between the private ID to the left and the Internet address to
the right of the "@".
With the introduction of commercial services for the general
public in this way (e.g. CompuServe, America Online, Prodigy, Delphi, CIX, etc.), everyone can have access to email.
This is the most powerful effect of the Internet so far.
World Wide Web and WWW Browsing
This is the concept most computer journalists like to talk
about. In principle, the WWW is a very good idea, especially
if you use graphic web pages. It provides an easily understood
intuitive way of using a computer system. All you need is to
look at things and "click" on objects on the screen if you want
to know more about it. The WWW is object-oriented right from
the start.
The problem starts when you want to look at information
with lots of graphics/sound/video content. With today's network bandwidths, it is very easy to get to a point where you sit and wait forever for a page to download. The amount of
bytes to transfer if a WWW page has high resolution graphics
and/or sound and video contents can be 100-200 megabytes.
And that is just for one page when accessed by one user. By
just using static graphics (and not too much of it) you can cut
page sizes down to perhaps 200 KB-1 MB, but that is starting
to defeat the object of the exercise. It was supposed to show
pictures, not just text.
Obviously, if you have access to a network connection that runs at ISDN speeds of 64 Kbit per second or more, it starts to become feasible (even then, a 1 MB page needs roughly two minutes to download at 64 Kbit/s), but even such speeds are soon too slow,
when web pages become real multimedia pages. You will really
need megabits/second speeds for each individual user.
This is why you cannot say we have true WWW browsing
on the agenda until general network services around the world
have been improved to provide the speeds necessary at affordable cost.
Another major reason why WWW Browsing is of limited
value is the actual contents on the net. Most sites that provide
information do it because they are academically or privately
interested in doing it. There is a growing number of commercial companies (particularly computer companies) that also
provide Web servers. These are mainly used to provide promotional information on their own products. This makes it
possible to download and print what is essentially product and
sales brochures (but then you might as well write to the
company and ask for brochures to be posted to you, and they
will probably be high quality colour printed).
The WWW is currently a bit like a new telephone system
that lots of people have access to. People are scanning the phone
books to ring up locations, but there is not much to talk about once you get there. Nevertheless, it's exciting to feel that you can do it...
Web Browsing - a general application platform?
An interesting aspect of Web Browsing is the simplicity with
which you can actually create your own Web pages with
HTML. There are now word processing systems which are
starting to provide translation mechanisms into HTML, so you
can create a document and then have it described in HTML
terms. The document can already link into established graphics
standards (e.g. GIF and JPEG) so if you create pictures, they
can be connected straight away.
This opens up the possibility to create distributed in-house
applications, based on Web Browsing techniques. The mechanism allows for graphical design and since more and more Web
Browsers are being brought to market, you can buy a ready-made package for the users. All you need to do is build HTML
files and store them along with their graphics/sound/video
objects on a system.
There is a flip-side to this coin, however, and that is network bandwidth. Web browsing flies almost straight in the face of one of the main arguments for client/server computing, i.e. don't transport unnecessary things over the network so that you can achieve acceptable performance. With Web browsing, it's almost as if you set up an exploded Windows system with all the components that you can click on located at other computers all over the place. What you download and when is totally in the hands of the user. That opens up some interesting issues with regard to network capacity planning. The only solution seems to be compression techniques to maximize the networks' efficiency, in combination with "unlimited" network speeds (i.e. gigabits per second).
Object Technology
Another of the "fashionable" concepts that is being discussed
at length in the industry is object technology.
What is an object?
Before we answer that question, consider a classic computer
environment, which consists of a program as one entity and
then data as another entity. You then run the program's code
in a computer to access the data. The code must know exactly
what the data looks like physically to be able to interpret the
stored information. You may use several different programs to
access the same data for different purposes. This is the classic procedural approach.
Object Orientation makes a permanent connection between the data and the logic to access it, i.e. the program logic and the data are "cemented" together into one entity, an object, as seen from the outside. In object technology, code is referred to as methods, and data descriptions are referred to as variables. When an object has real data loaded into it, its variables have values. A particular object which has a set of values set up in its variables is referred to as an instance of that object.
A specific combination of code (methods) and variables (data
definitions) is referred to as
an object class. An object class is
designed to perform a specific task. For instance, you can have
a customer object class, which looks after a data base that
describes customer records in different ways. The methods
around the variables can add, remove and modify the variables.
A bank can have an account object class, where the object's
methods perform tasks such as withdrawals and deposits.
Object classes with something in common, such as belonging to a stock control application or a banking system, are kept in class libraries. In the IT world, there is a fast-growing business in manufacturing and selling class libraries which have objects for standard functions (book keeping, stock control, staff management) as well as objects specially designed for a particular industry (banking, manufacturing, etc.).
The concept of forcing all access to the data through the
methods is a form of encapsulation and protects the data from
being accessed by other programs. The main advantage with
this technique is that if you want to change the format of the
data (sorry, variables), you only need to inform the object's
official methods, since all access to the data must be through
those. Object orientation enforces the concept of seeing the data
and its access logic as one entity.
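To make the terminology concrete, here is a minimal sketch in Python (the class, its variables and its methods are invented for illustration) of an object class whose data can only be reached through its official methods:

```python
# A sketch of an object class: the variables are encapsulated and all
# access goes through the class's methods. Names are illustrative only.

class Account:
    def __init__(self, owner, balance=0):
        self._owner = owner        # variables (data definitions)...
        self._balance = balance    # ...which hold values in an instance

    def deposit(self, amount):     # methods: the official way in
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def balance(self):
        return self._balance

# An instance of the Account class: an object with values in its variables.
acct = Account("Smith", 100)
acct.deposit(50)
acct.withdraw(30)
print(acct.balance())   # 120
```

If the stored format of the balance ever changes, only Account's methods need to know; everything that uses the object is unaffected.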
So what else is new? Couldn't you achieve the same thing with a bit of discipline in the programming department? Well, perhaps you could, but object orientation works on the principle of positive help for the system designer. The fact that an object is a complete entity, where both data (variables with values) and computer logic (methods) are inextricably linked to each other, means that you can "click" on the object without having to know exactly how it "does its thing". This means you can build larger systems, constructed from pre-fabricated objects, and use them as building blocks. Perhaps a fairly logical conclusion of that is that objects can be used in many situations, not just to solve one single application problem. Object orientation provides great scope for the reuse of program code (methods).
The Object Class Hierarchy & Inheritance
An object class hierarchy usually consists of at least a Base Class Object, a kind of parent object which has a set of methods and variables. You can then create additional subclasses or child classes. These can inherit all or selected parts of the parent class. The methods you don't want in the child class can be changed or overridden. This ability to slightly change the class when creating a subclass is referred to as polymorphism.
The technique allows the programmer to re-use code and data descriptions that have been established before, allowing him/her to think much more in terms of problem solving rather than spending a lot of effort on trivial things such as data types and definitions. You can say it's a form of code sharing.
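Continuing the hypothetical Account sketch above, a child class can inherit the parent's methods and variables and override only what it needs to change:

```python
# A child class inherits Account and overrides one method to allow an
# agreed overdraft. Purely illustrative; builds on the sketch above.

class OverdraftAccount(Account):
    def __init__(self, owner, balance=0, overdraft_limit=500):
        super().__init__(owner, balance)
        self._overdraft_limit = overdraft_limit

    def withdraw(self, amount):    # overridden method
        if amount > self.balance() + self._overdraft_limit:
            raise ValueError("overdraft limit exceeded")
        self._balance -= amount

# The same message gets a class-specific response (polymorphism).
for acct in (Account("Smith", 100), OverdraftAccount("Jones", 100)):
    try:
        acct.withdraw(300)
        print(type(acct).__name__, "allowed the withdrawal")
    except ValueError as reason:
        print(type(acct).__name__, "refused:", reason)
```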
Object interaction
When you build a larger system out of objects, there must
obviously be some kind of interaction between them. One
object must be able to "click on the other" (figuratively
speaking, of course). This is done through the use of messages.
A message from an object asks the receiving object to use one
of its methods. The message is also sometimes referred to as
a call and can for instance tell the target object to change its
data or to provide data in return.
You can refer to the object that issues a message as a client object and to the called object as a server object. Obviously, there must be some kind of agreement between them on what format the message shall have and how information is to be exchanged between them.
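In most object-oriented languages a message is, in practice, a method call on another object. A small hypothetical sketch of a client object and a server object:

```python
# A "message" is a call from one object (the client) to a method of
# another object (the server). All names are invented for illustration.

class StockServer:
    def __init__(self):
        self._levels = {"widget": 42}

    def quantity_on_hand(self, item):   # the method the message invokes
        return self._levels.get(item, 0)

class ReorderClient:
    def __init__(self, server):
        self._server = server

    def needs_reorder(self, item, minimum):
        # Send a message: ask the server object to run one of its
        # methods and hand data back to us.
        return self._server.quantity_on_hand(item) < minimum

client = ReorderClient(StockServer())
print(client.needs_reorder("widget", 50))   # True: only 42 on hand
```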
Objects for everything
Most of what we have said so far has been describing object
orientation in very general terms. We have made some references to programs and data as if we were talking about a
traditional data base (e.g. a stock control system or a bank
account data base) as the main components of an object and
object classes.
Object technology is not limited to traditional data base
concepts, however. You can apply the concept to just about
anything. For instance, a graphic image can be described as an
object, where the data is the raw components and the methods inside are the program code that puts the image into its final form.
An object can be a spreadsheet, where the data is traditional
computer data and the method is the spreadsheet program that
sets up the entire spreadsheet in its operational form.
A classic example is the word processing package which
allows you to insert a spreadsheet calculation inside a document. The document is an object consisting of text and the word
processing program's way of presenting it; the spreadsheet is another object with data and methods to show the data.
Standards for 00 - CORBA and OLE
This is where we start getting into politics, when it comes to
object technology. For one object to be able to interact with
another, there must be an agreement on the format of the
messages, in both directions. Most object oriented solutions so
far have been proprietary, i.e. suppliers of application enabling
software have created their own internal protocols which do
not necessarily work with object technology from other suppliers. The IT industry has been struggling for some years now
to agree on standards for object interaction. The main forum
for this has been the Object Management Group (OMG) which
was created in 1989 and which today has over 400 members,
including companies such as Apple, AT&T, Borland, Digital, HP, IBM, Microsoft and others. The main idea has been to create standards so that products from many different vendors can interact, whether they are executing on a single platform or are part of distributed environments such as client/server applications. The effort has resulted in the Common Object Request Broker Architecture (CORBA), which mainly consists of:
• The Object Request Broker core function, which is the communications link between objects. This is complemented with an Object Adapter, which is the interface to the ORB services.
• The Interface Definition Language (IDL), which is used to specify the information needed to use an object. This would typically be coded by the creator of an object so that other objects can use it. The IDL statements specify, for an object class, what methods and variables the object has, the types of parameters you pass to it in a call, as well as the format of the information you get back from the object as a result of a call (a conceptual sketch follows this list). These definitions are external to the program code using them and are kept in an Interface Repository. IDL statements look a bit like C++ code. Part of the CORBA package is a compiler that compiles the IDL code and binds it to the program, regardless of which programming language is used (as long as it's an OO-compliant language that CORBA supports, of course).
• A Dynamic Invocation Interface, which allows for dynamic
calls of server objects by client objects. This means that you
can change the server objects dynamically, without having
to recreate the bindings to other objects.
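The sketch below is not CORBA and not IDL syntax; it only illustrates, in Python, the underlying idea the IDL bullet describes: a published interface definition, kept apart from any implementation, that client code is written against.

```python
# Conceptual sketch only -- NOT real CORBA or IDL. It shows the idea of
# an interface description that client objects program against, while
# the implementation can live (and change) elsewhere.

from abc import ABC, abstractmethod

class AccountInterface(ABC):
    """Plays the role of an IDL definition: which methods exist, what
    parameters they take and what they return -- but no code."""

    @abstractmethod
    def deposit(self, amount: int) -> None: ...

    @abstractmethod
    def balance(self) -> int: ...

class LocalAccount(AccountInterface):
    """One possible implementation; clients never need to see it."""
    def __init__(self):
        self._balance = 0
    def deposit(self, amount: int) -> None:
        self._balance += amount
    def balance(self) -> int:
        return self._balance

def client_code(account: AccountInterface) -> int:
    # Written purely against the interface, not the implementation.
    account.deposit(100)
    return account.balance()

print(client_code(LocalAccount()))   # 100
```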
Separated from OMG's efforts, Microsoft have been busy creating their own principles, referred to as the Common Object Model (COM), which has resulted in their technology Object Linking and Embedding (OLE). This is built into Microsoft's Windows platform. For some time it looked as if Microsoft were going their own way totally with OLE while the rest of the industry put their faith in OMG and CORBA. However, since Windows NT has not yet been widely accepted in the large corporate market, Microsoft seem to have changed their position slightly and are now more interested in cooperating with OMG. In March 1995, Microsoft submitted their OLE specifications to OMG, so it may soon be possible for OMG to create object broker links between CORBA and OLE.
Systems Management Trends
As we have seen, the trend for the 90s is towards distributed heterogeneous systems. One of the early experiences of such systems is that the cost of managing them can rise tremendously compared to older centralized solutions. In order to ensure that this cost does not grow uncontrollably, it is important to review the mechanisms for managing these new systems.
Systems Management Approach
A structured approach to Systems Management can be adopted by basing it on disciplines or tasks rather than platforms. This has been tried before with IBM's ISMA and SystemView and with OSI's SMFA. By becoming more task oriented, there is more of an emphasis on task-oriented skills. Some of the main disciplines to be considered are:
• Business Management
• Change Management
• Configuration Management
• Operations Management
• Performance Management
• Problem Management
System Management Models
There are several system management models available:
• Standalone - all control is assigned to the local systems. This implies that the skills and tools exist there.
• Distributed Function with distributed control - control is local, although some degree of monitoring may take place elsewhere; ultimate control remains local.
• Distributed Function with centralized control - puts the function where it is best suited; ultimate control, however, is retained centrally.
Combinations of these can also be employed. For example, there could be central control during the day shift and local control during the night shift.
System Management Interface Standards
One of the biggest problems when establishing a systems management infrastructure is the lack of standards and agreements between suppliers of tools. This relates both to the low-level interfaces between products (e.g. how networks report problems and how you can signal different actions) and to the scope and reach of the different products.
When it comes to interface standards in distributed environments, there are two main "schools". Simple Network Management Protocol (SNMP) comes originally from the TCP/IP network world and has become quite widespread because of the popularity of TCP/IP itself. The alternative is Common Management Information Protocol (CMIP), which is part of OSI's networking standards. Since the OSI definitions are more de jure than de facto (i.e. there are not so many real products out there), SNMP seems to have the upper hand so far, when it comes to how widely they are used.
Simple Network Management Protocol (SNMP)
SNMP is one of the industry-wide protocols which tie systems and applications together. SNMP operates with the concept of an SNMP Manager, which takes the decisions, and SNMP Agents, which are located close to the managed resource. The Agent uses a Management Information Base (MIB) in which to keep information about the resource.
The SNMP Manager makes decisions and monitors the status of the resource through the Agent. Communication between them is via TCP/IP.
The Manager uses a "get" command to retrieve data from
the MIB and a "set" command to place data into the MIB. The
"trap" concept is used to make the Agent notify the Manager
to look at the MIB when some unusual event has occurred.
Typically the Manager polls the Agent for status.
The SNMP Agent gathers information about the resource and
places it in the MIB. It also reacts to commands from the
Manager.
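To tie the roles together, here is a toy sketch in plain Python (it is not a real SNMP implementation, and the MIB variable names and values are used only for illustration) of the division of labour just described: the Agent keeps the MIB, the Manager polls it with get and configures it with set, and a trap tells the Manager to take a closer look:

```python
# Toy model of the SNMP roles described above -- not a real SNMP stack.
# MIB variable names and values are used for illustration only.

class Agent:
    def __init__(self):
        self.mib = {"ifOperStatus": "up", "ifInErrors": 0}

    def get(self, name):               # Manager retrieves data ("get")
        return self.mib[name]

    def set(self, name, value):        # Manager places data ("set")
        self.mib[name] = value

    def record_error(self, manager):
        # The Agent gathers information about the resource, puts it in
        # the MIB and, on an unusual event, sends a trap to the Manager.
        self.mib["ifInErrors"] += 1
        manager.trap(self, "ifInErrors")

class Manager:
    def poll(self, agent):
        # The usual SNMP pattern: the Manager polls the Agent for status.
        return {name: agent.get(name) for name in agent.mib}

    def trap(self, agent, name):
        # A trap only says "look at the MIB"; the Manager then gets it.
        print("trap received:", name, "=", agent.get(name))

manager, agent = Manager(), Agent()
print(manager.poll(agent))                 # routine polling cycle
agent.set("ifOperStatus", "testing")       # Manager-style "set"
agent.record_error(manager)                # unusual event -> trap
```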
The MIB has a standard format, called MIB-II, with which vendors must comply. However, it is necessary to incorporate an extension for vendor-dependent information. This can be achieved by "compiling" the MIBs being managed.
This also means that there are many MIB extensions in existence. In order to solve this problem, a generalized manager is required that can work with all agents. One such manager is IBM's NetView/6000.
Common Management Information Protocol (CMIP)
Another industry standard protocol is CMIP. This is part of
the OSI network standards. The key to the OSI concept is that
it uses object technology so that the exterior always appears
the same for all objects.
Similar to SNMP, CMIP operates with a Manager and Agents, which are close to the managed resources.
CMIP Managers provide general management functions for the agents they control. The CMIP Manager issues common service requests to the agents. These services are referred to as Common Management Information Services (CMIS).
The CMIP Manager's applications fall into five categories:
• Fault
• Configuration
• Accounting
• Performance
• Security
CMIP Agents execute the CMIS services requested by the
Manager. When an error is detected, the Manager is informed.
The Agent can represent one or more objects to the Manager.
The CMIS services offer a more powerful command set than
SNMP.
CMIP was originally developed for OSI only, but it has subsequently taken on a wider role. Other network standards can also be used, for example CMIP over TCP/IP (CMOT) and CMIP over LAN (CMOL).
SNMP versus CMIP
SNMP
• Polls for information
  - Network traffic
  - Processing overhead
• Less powerful
• Uses fewer storage resources
• Requires a MIB definition
• TCP/IP only
CMIP
• Event driven
• Higher complexity (e.g. security)
• An application can control multiple objects
• Services built into objects
• Supports TCP/IP, OSI, LAN
The major difference between the protocols is the dependence of SNMP on polling. Managed devices then return the
requested information synchronously.
This produces a great deal of network traffic and uses a lot
of processing power. It is believed that SNMP managed
networks start to become impractical when there are more than
750 agents.
X/Open Management Protocol API (XMP API)
One way to avoid having to make a choice is to provide
support for both SNMP and CMIP. X/Open have described all
the interfaces needed to achieve just that in their XMP API,
which is adopted by IBM's SystemView as well as OSF's
DME. This technique allows applications to control multiple resource types regardless of protocol. It gives developers the functionality to implement all types of agents and managers, and it provides generic commands for making requests to managed objects or other managers, as well as for responding to notifications and events.
System Management Frameworks
Frameworks, or management models, are concepts which include both a design and interfaces, so that it is possible to build systems management solutions. There are many examples, focused on different parts of the large systems environment:
• OSF's Distributed Management Environment (DME) tries to offer a unified solution to managing distributed systems, networks and applications. Some systems management applications are provided.
• The OSI Management Model (X.700) is aimed at connecting disparate systems. This will apply to multi-vendor platforms.
• IBM's SystemView is based on the OSI Model. It allows common management applications, shared data and common GUIs across the heterogeneous environment.
• The Network Management Forum's OMNIPoint concentrates on the inter-operability of networks.
• The Desktop Management Task Force (DMTF) is a group originally set up by eight companies, concentrating on developing standards to access the software and hardware in PCs connected in a network. The objective is to make it possible at some stage to manage them centrally via the Desktop Management Interface.
System Management Tools & Frameworks
There are many products, or collections of products, on the market which provide tools to manage systems in different ways. Most of them have come from the need to manage networks, and are gradually being expanded to manage more resources, such as software in distributed environments. Here is a list of some, in alphabetical order of supplier:
• Amdahl - EDM
• BMC - Patrol
• Boole & Babbage - Sysnet
• CA - Unicenter
• Candle - Availability Command Center
• CompuWare - Ecotools
• HP - OpenView
• IBM - NetView, SystemView and "Karat"
• Landmark - Probe and Monitor
• Legent - XPE
• Microsoft - SMS
• Novell - NMS
• Sterling Software - NetMaster
• Sun - SunNet and "Solstice"
• Tivoli
Choosing a Platform
When selecting a platform to use for the management of a large system, there are many aspects to consider. Perhaps the most difficult one is the legacy aspect, i.e. what are we doing today that affects the decision?
The job of managing large heterogeneous systems is a fairly new science, so there is not a long tradition to base it on, although you can of course use existing experience of managing large central systems as a starting point. There are many solutions to choose from, and new ones seem to be announced every day. Here are some of the main points to consider:
• Inter-operability - Protocols and Agents supported?
• Investment Protection - Frameworks supported?
• Functionality
  - Applications available? Network faults, performance, topology recognition
  - Advanced applications? Software management, automated software installation, administration, security, etc.
  - Portable application support?
  - Other vendor support?
• Management Protocols - SNMP? CMIP? SNA?
• Ease of Use
• Common Data - MIB and object types supported?
• Flexibility - Function placement, central/distributed?
• Capacity - Number of resources supported on the hardware/software base?
IT Management Issues - Summary
Here again is the selection of issues that we have discussed in this paper and that the IT manager is struggling with today:
• Is the Mainframe Dead?
• Why Client/Server Computing?
• Open Systems, Portability & Networking
• Distributed Systems
• Object Orientation
• The Internet and the World Wide Web
• Managing Systems in large organizations
This was just one selection. The list is constantly changing, as is the IT industry in its normal state of affairs. One topic we have not addressed, for instance, is workflow, which looks to become another of those "fashionable hot topics" for those interested in the world of IT, but that is another story .....
These are trademarks of the IBM Corporation:
CICS™, CICS/ESA™, DFSMS™, DB2™, ESA/370™, ESA/390™, ES/9000™, IBM™, IMS/ESA™, MQSeries™, MVS/DFP™, MVS/ESA™, MVS/SP™, MVS/XA™, NetBIOS™, NetView™, OpenEdition MVS™, Operating System/2™, OS/2™, OS/400™, Presentation Manager™, PS/2™, SAA™, SNA™
These are trademarks as noted:
1-2-3 - Lotus Corp.
Amdahl - Amdahl Corporation
ANSI - American National Standards Institute
Apple - Apple Computer, Inc.
Apollo - Hewlett-Packard Company
AT&T - AT&T Inc.
DEC - Digital Equipment Corporation
DCE - Open Software Foundation, Inc.
Ethernet - Xerox Corporation
ezBRIDGE - System Strategies, Inc.
HP - Hewlett-Packard Company
IEEE - Institute of Electrical and Electronics Engineers
Intel - Intel Corp.
InterOpen - Mortice Kern Systems, Inc.
ISO - International Organization for Standardization
Lotus - Lotus Corp.
Macintosh - Apple Computer, Inc.
Motif - Open Software Foundation, Inc.
Network Computing System - Hewlett-Packard Company
NCS - Hewlett-Packard Company
NetWare - Novell, Inc.
NFS - Sun Microsystems, Inc.
OSF - Open Software Foundation, Inc.
OSF/1 - Open Software Foundation, Inc.
OSF/DCE - Open Software Foundation, Inc.
OSF/Motif - Open Software Foundation, Inc.
Pentium - Intel Corp.
POSIX - Institute of Electrical and Electronics Engineers
Sun - Sun Microsystems, Inc.
SunOS - Sun Microsystems, Inc.
Transact - System Strategies, Inc.
UniForum - /usr/group
Unisys - Unisys Corporation
UNIX - X/Open Company Limited
UnixWare - Novell, Inc.
UTS - Amdahl Corporation
Windows - Microsoft Corporation
Windows NT - Microsoft Corporation
X/Open - X/Open Company Limited
XPG3 - X/Open Company Limited
XPG4 - X/Open Company Limited
X-Window - Massachusetts Institute of Technology
X-Windows - Massachusetts Institute of Technology
Claes Stahl, Director and Consultant
Laird Stahl Limited,
30 Southwood Gardens,
Hinchley Wood,
Surrey,
England
Tel. +44 (0)181 398 1613
Fax. +44 (0)181 398 1651
email: [email protected]. uk