High Volume Transaction Processing
(HVTP) environment and its challenges
The high-end server, such as the zSeries (S/390) server, plays a critical role in high-volume
transaction applications. These servers (referred to as mainframes below) are the backbone of
line-of-business applications, providing data stores and critical high-volume transaction
processing. Web sites and distributed applications are most often linked to these high-performing
'back end' systems.
Challenges of HVTP
The High Volume Transaction Processing environment contains unique characteristics and
requirements which present significant challenges to implementing a successful solution. The
following extract from a management brief titled "Strategies for e-Volume, Competitive Impact
of Transaction Processing in e-Business", prepared by International Technology Group, provides
insight into some of the HVTP characteristics and challenges:
Functionality does not equate to scalability.
The functional characteristics of any solution do not, in themselves, say anything about the
ability to handle volume. A system designed for small workloads does not automatically scale.
Effective volume testing should always precede selection of any application, database, or server.
The difficulties experienced by many companies that have implemented enterprise resource
planning (ERP) systems are due in no small part to these effects. Inability to handle volume
growth has also been the predominant cause of major Web outages, and has contributed to the
less visible but often more damaging slowdowns in performance that have characterized many
Web commerce systems at peak times.
All system components must be optimized for volume.
The ability to handle volume is determined not only by application design, but also by
underlying databases, transaction monitors, systems software, middleware and hardware
platforms. A bottleneck in any one of these will impact the performance of the entire system.
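To make the bottleneck effect concrete, here is a minimal sketch in C. The stage names and rates are hypothetical figures chosen for illustration; the point is that end-to-end capacity is set by the slowest component, not by the sum of the parts:

```c
/* End-to-end capacity of a transaction pipeline is capped by its
 * slowest stage. Stage names and rates below are hypothetical. */
#include <stdio.h>

struct stage { const char *name; double tps; };  /* transactions/second */

int main(void)
{
    struct stage pipeline[] = {
        { "application",         12000.0 },
        { "transaction monitor",  9000.0 },
        { "database",             4500.0 },  /* the bottleneck */
        { "middleware",           8000.0 },
        { "hardware/IO",         15000.0 },
    };
    double limit = pipeline[0].tps;
    const char *bottleneck = pipeline[0].name;

    for (size_t i = 1; i < sizeof pipeline / sizeof pipeline[0]; i++) {
        if (pipeline[i].tps < limit) {
            limit = pipeline[i].tps;
            bottleneck = pipeline[i].name;
        }
    }
    printf("system limit: %.0f tps, set by the %s stage\n", limit, bottleneck);
    return 0;
}
```

However fast the other four stages are made, the system above never exceeds 4,500 transactions per second until the database stage itself is improved.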
It may be possible to scale a poorly optimized system beyond its architectural limits. But there
will be penalties in performance, difficulty of implementation and support, and costs. Over time,
companies can easily become trapped in a cycle of increasing expenditures for diminishing
returns, as escalating investments become necessary to handle even incremental workload
growth.
Service quality is directly related to volume.
It is comparatively easy to deliver high levels of availability, response time, transaction integrity
and security in a low-volume environment. But difficulties increase as workloads expand.
Maintaining, say, 99.999 percent availability with mainframe-class transaction volumes is a great
deal more difficult than at the department level or in a smaller organization.
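As a rough illustration of how small the margin becomes, the following C sketch converts availability targets into a yearly downtime budget (the target values are the commonly quoted "nines", used here as assumed inputs):

```c
/* Downtime budget implied by an availability target over one year.
 * 99.999% ("five nines") allows only about five minutes per year,
 * which is far harder to meet at mainframe-class transaction volumes. */
#include <stdio.h>

int main(void)
{
    const double year_min = 365.0 * 24.0 * 60.0;   /* minutes per year */
    const double targets[] = { 99.9, 99.99, 99.999 };

    for (int i = 0; i < 3; i++) {
        double downtime = year_min * (100.0 - targets[i]) / 100.0;
        printf("%.3f%% availability -> %7.2f minutes of downtime/year\n",
               targets[i], downtime);
    }
    return 0;
}
```

The output makes the difficulty visible: 99.9 percent permits over eight hours of downtime a year, while 99.999 percent permits barely five minutes.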
Minimization of planned outages also becomes more challenging. Capabilities such as clustering,
concurrent workload execution and data movement must all be more effective in a high-volume
environment. Failure to allow for this effect is responsible for the inability of even the more
sophisticated ERP and Web commerce operators to achieve 7x24 operations.
Volume affects the entire IT infrastructure.
As a general principle, each business transaction at the customer interface generates at least 20 to
50 other business transactions as its effects ripple through systems for order fulfillment, product
and service delivery, logistics, inventory, purchasing, accounting and other applications.
High-volume transaction processing thus requires that all systems be capable of handling large
workloads in an efficient manner. A bottleneck at any point will, again, result in slowdowns and
disruptions that will affect processes throughout the company and extend to the supply chain as a
whole.
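A quick back-of-the-envelope sketch in C shows how that ripple factor turns modest customer-facing volumes into large back-end workloads. The front-end rate below is a hypothetical figure; the 20x and 50x multipliers come from the principle stated above:

```c
/* Back-end load implied by the 20x-50x ripple factor: each
 * customer-facing transaction fans out into downstream work for
 * fulfillment, logistics, inventory, accounting, and so on.
 * The front-end rate is a hypothetical figure for illustration. */
#include <stdio.h>

int main(void)
{
    double front_end_tps = 1000.0;      /* customer transactions/sec */
    double low  = front_end_tps * 20.0; /* lower bound of the ripple */
    double high = front_end_tps * 50.0; /* upper bound of the ripple */

    printf("%.0f front-end tps implies %.0f to %.0f back-end tps\n",
           front_end_tps, low, high);
    return 0;
}
```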
The continued predominance of mainframe systems in high-volume transaction processing has
not only been due to the costs and difficulties of replacing legacy systems. It has also occurred
because mainframe architecture and, in some cases, mainframe databases and applications are
better optimized for such workloads.
TPF meeting the challenges
TPF has been meeting the challenges of the high-volume transaction processing environment
since its inception. TPF has been architected specifically to meet the challenges of these
high-end transaction environments: extremely reliable (24x7), highly available (99.99%), high
volume (over ten thousand messages per second), and scalable (TPF continues to scale to meet the
most demanding requirements). Because TPF systems are so efficient, many customers run their TPF
CPU at 90%, and peak loads are not an issue. Also, there are production TPF systems that have
been up and running for years without any planned or unplanned outages. Enhancements that
have been made, and are being made to TPF, are always designed to maintain these inherent
strengths of the TPF system.
Evolution of TPF
The fundamental TPF architecture was defined as early as the late fifties, with the early
iterations appearing in the mid-sixties. The basic concept was to develop a transaction-oriented
operating system in which expensive "job" preparation and execution overhead was minimized.
Pre-built structures at the operating system level stood ready to dynamically accept inbound
traffic, process it, respond, and return to a static state with little or no ongoing consumption
of resources.
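The sketch below illustrates that idea in C: a pool of work areas is pre-built at startup, an arriving message activates one, and the area returns to the idle pool once the response is sent. It is purely conceptual; the names and structures are invented here for illustration and are not TPF's actual entry-control mechanism or API.

```c
/* Conceptual sketch of a transaction-oriented dispatch loop in the
 * spirit described above: work areas are pre-allocated at startup,
 * a message activates one, and it returns to the idle pool when the
 * response is sent. Illustrative only; not TPF's actual interfaces. */
#include <stdio.h>
#include <string.h>

#define POOL_SIZE 64
#define MSG_MAX   256

struct work_area {               /* pre-built per-transaction state */
    int  in_use;
    char msg[MSG_MAX];
};

static struct work_area pool[POOL_SIZE];   /* allocated once, up front */

static struct work_area *acquire(void)
{
    for (int i = 0; i < POOL_SIZE; i++)
        if (!pool[i].in_use) { pool[i].in_use = 1; return &pool[i]; }
    return NULL;                 /* all areas busy: caller queues or sheds */
}

static void release(struct work_area *w)
{
    memset(w, 0, sizeof *w);     /* back to a static, zero-cost idle state */
}

static void process(struct work_area *w)
{
    /* parse the message, do the record I/O, build the reply ... */
    printf("processed: %s\n", w->msg);
}

int main(void)
{
    const char *inbound[] = { "PNR lookup", "seat assign", "fare quote" };

    for (int i = 0; i < 3; i++) {          /* stands in for arriving traffic */
        struct work_area *w = acquire();
        if (!w) continue;                  /* no area free: drop, for brevity */
        snprintf(w->msg, MSG_MAX, "%s", inbound[i]);
        process(w);
        release(w);
    }
    return 0;
}
```

The essential property is that nothing is created or torn down per transaction: between messages the system sits in a quiescent state, consuming essentially no resources.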
As memory was precious in those days, there was a heavy dependency on efficient I/O
processing to DASD. This proved to be a premier feature of TPF: as the number of users
increased and more volumes were added for data and I/O capacity, there was no net increase in
overhead. In other words, there were no complicated structures that reached a point of
diminishing returns. The application programs themselves were also written very close to the
low level TPF architecture. Application programmers were in tight control of database design.
Many lower level system services common in other operating systems were implemented at the
applications layer. The result of this was an overall solution for handling reservation traffic
that was repeatedly proven to be as much as ten times (or more) as efficient as anything remotely
comparable, even running on the exact same hardware: a system that today already handles the
"billion transactions per day" that many claim to aspire to.
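For readers unfamiliar with the find/file style of access mentioned above, the following C sketch illustrates the general idea of directly addressed, fixed-size record I/O: a record is located by ordinal rather than by scanning, read into a core block, updated, and written straight back. The record size, file name, and function names here are hypothetical; the real FIND/FILE services are TPF assembler macros, not this code.

```c
/* Generic illustration of find/file-style record access: fixed-size
 * records addressed directly by ordinal number. Record size, file
 * layout, and function names are hypothetical stand-ins. */
#include <stdio.h>
#include <stdlib.h>

#define REC_SIZE 1055           /* assumed fixed record size for this sketch */

static int find_record(FILE *f, long ordinal, unsigned char *block)
{
    if (fseek(f, ordinal * REC_SIZE, SEEK_SET) != 0) return -1;
    return fread(block, REC_SIZE, 1, f) == 1 ? 0 : -1;
}

static int file_record(FILE *f, long ordinal, const unsigned char *block)
{
    if (fseek(f, ordinal * REC_SIZE, SEEK_SET) != 0) return -1;
    return fwrite(block, REC_SIZE, 1, f) == 1 ? 0 : -1;
}

int main(void)
{
    unsigned char block[REC_SIZE];
    FILE *f = fopen("records.dat", "r+b");   /* hypothetical record file */
    if (!f) return EXIT_FAILURE;

    if (find_record(f, 42, block) == 0) {    /* direct access: no scan,   */
        block[0] ^= 1;                       /* update the core block,    */
        file_record(f, 42, block);           /* write it straight back    */
    }
    fclose(f);
    return EXIT_SUCCESS;
}
```

Because the cost of each access is independent of how many records or volumes exist, adding users and data adds no per-transaction overhead, which is the scaling property described above.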
TPF directions
TPF continues to serve the market it was designed for: the high end. UNIX systems, in order to
be flexible enough to run a large variety of programs and faithfully implement the standards,
carry a lot of overhead. At the high end there are workable UNIX solutions, but the number of
boxes, the complexity, and the costs can get excessive. What is still needed is an operating
system that targets serving tens to hundreds of thousands of users efficiently. However, that
system cannot completely ignore the tidal wave of software available in the mainstream.
Business today can ill afford to create every solution from scratch. It is necessary to be able to
reuse pieces of code that provide the functionality required in a high end solution. Ideally, a high
end operating system would support many of the base concepts required by software (such as a
process model and process control) and base services as well (such as naming and security).
This is where we have been taking TPF. TPF continues to support assembler find/file programs:
not only will some existing code be with us ten years from now, but in many cases where path
length is absolutely critical, there is nothing more efficient. TPF also supports C and C++, and
will support Java, with a high-end model of their associated environments. Will a TPF Java program be
as efficient as an assembler program or even C? Of course not, but consider that in many
installations today, less than 20% of the software accounts for 80% or more of the execution path
length. That leaves a lot of software that can be brought to market quickly without exposing the
system to performance problems.
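Working that 80/20 arithmetic through makes the claim concrete. In the C sketch below, the 3x slowdown is an assumed figure, not a measurement; the point is that if the hot 20 percent of the code stays in assembler or C, even a substantial slowdown in everything else inflates total path length only modestly:

```c
/* The 80/20 argument, worked through: if the hot 20% of the code
 * accounts for 80% of the execution path length and stays in
 * assembler or C, moving the rest to a slower language costs far
 * less than the raw slowdown factor suggests. The 3x slowdown
 * below is a hypothetical figure. */
#include <stdio.h>

int main(void)
{
    double hot  = 0.80;          /* share of path length in the hot 20% */
    double cold = 1.0 - hot;     /* share in everything else            */
    double slowdown = 3.0;       /* assumed cost of the slower language */

    double new_total = hot + cold * slowdown;
    printf("total path length grows by %.0f%%\n", (new_total - 1.0) * 100.0);
    return 0;
}
```

Under these assumptions, tripling the cost of the cold 80 percent of the software grows total path length by only 40 percent, which is why the bulk of an application can be written for speed of delivery rather than speed of execution.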
As the TPF language environment continues to see improvements, we are working with business
partners to bring new applications and functions to TPF; one current effort is the porting of a
financial switching application. To bring object models to TPF we have added POSIX database
structures. To speed application development and improve programmer productivity we have
VisualAge for TPF. Key applications available for TPF include an Apache Web server and a
mail server. Improvements in connectivity options such as TCP/IP and IIOP have been added,
as well as messaging via MQSeries for TPF, the de facto standard for messaging middleware. As
you can see, we continue to enhance TPF with an Open Systems approach. In addition, because
TPF is shipped as Source Code, customers have the utmost flexibility to make the system meet
their unique needs.