Front cover
Implementing a Tivoli Solution for Central Management of Large Distributed Environments
Remotely manage thousands of distributed outlets
Architect and design for large scale enterprises
Learn step-by-step Linux installation
Morten Moeller
Robert Haynes
Paul Jacobs
Fabrizio Salustri
ibm.com/redbooks
International Technical Support Organization
Implementing a Tivoli Solution for Central Management of Large Distributed Environments
June 2005
SG24-6468-00
Note: Before using this information and the product it supports, read the information in
“Notices” on page xvii.
First Edition (June 2005)
This edition applies to the Tivoli Management Environment software products, Versions 3.x, 4.x, and 5.x.
© Copyright International Business Machines Corporation 2005. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP
Schedule Contract with IBM Corp.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Part 1. The outlet environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 1. The challenges of managing an outlet environment . . . . . . . . . 3
1.1 Complex application infrastructures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Managing outlet application systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.1 Service Delivery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.2 Service Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3 The outlet application infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3.1 Basic products used to facilitate outlet applications . . . . . . . . . . . . . 15
1.3.2 Managing outlet applications using Tivoli . . . . . . . . . . . . . . . . . . . . . 18
1.4 Tivoli product structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.5 Managing outlet applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.6 Meeting future challenges today . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Chapter 2. The Outlet Solution overview . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.1 Introducing Outlet Inc. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.1.1 The Outlet Inc. environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.1.2 Outlet Solution features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.1.3 Current IT infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.1.4 Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2 The Outlet Systems Management Solution. . . . . . . . . . . . . . . . . . . . . . . . 34
2.2.1 Outlet Systems Management Solution requirements . . . . . . . . . . . . 34
2.2.2 Outlet Systems Management Solution capabilities . . . . . . . . . . . . . . 36
Chapter 3. The Outlet Systems Management Solution Architecture . . . . 37
3.1 Outlet Inc. requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.2 Physical architecture for Outlet Inc.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.2.1 Tivoli Command Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.2.2 TMR connections within Outlet Inc.. . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.2.3 Tivoli gateway architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.3 Logical Tivoli architecture for Outlet Inc. . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.3.1 Administering the Tivoli Management Environment . . . . . . . . . . . . . 53
3.3.2 Tivoli naming conventions for Outlet Inc. . . . . . . . . . . . . . . . . . . . . . 60
3.4 Setup and configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.4.1 Suggested Tivoli hardware requirements . . . . . . . . . . . . . . . . . . . . . 62
3.4.2 Tivoli repeater architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.4.3 Optimizing slow links connections. . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.4.4 Managing endpoint behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.4.5 Managing the TME Infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.4.6 Tivoli and RDBMS Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.5 Network communications and considerations . . . . . . . . . . . . . . . . . . . . . . 83
3.5.1 TCP/IP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.6 Configuration management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.6.1 Tivoli Configuration Manager: Inventory . . . . . . . . . . . . . . . . . . . . . . 87
3.7 Release management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.7.1 Software Distribution in Outlet Inc. . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.7.2 Integrating Tivoli Software Distribution with the TEC . . . . . . . . . . . 102
3.7.3 Integrating Tivoli Software Distribution with Inventory . . . . . . . . . . 102
3.8 Availability and Capacity Management . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.8.1 IBM Tivoli Monitoring architecture . . . . . . . . . . . . . . . . . . . . . . . . . . 104
3.8.2 ITM for Databases: DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.8.3 ITM for Web Infrastructure: WebSphere Application Server . . . . . . 115
3.8.4 IBM Tivoli Monitoring for Transaction Performance . . . . . . . . . . . . 117
3.9 Event management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Part 2. Management solution implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Chapter 4. Installing the Tivoli Infrastructure . . . . . . . . . . . . . . . . . . . . . . 129
4.1 The Outlet Systems Management Solution. . . . . . . . . . . . . . . . . . . . . . . 130
4.1.1 Management environments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.1.2 Functional component locations . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4.2 Installation planning and preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
4.2.1 Create naming standards for all Tivoli related objects . . . . . . . . . . 141
4.2.2 Operating platform preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.2.3 Enabling SMB server on the srchost server . . . . . . . . . . . . . . . . . . 144
4.2.4 Establishing a code library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.3 Installation and configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
4.3.1 Establishing an installation roadmap. . . . . . . . . . . . . . . . . . . . . . . . 147
4.3.2 Installing a working database environment . . . . . . . . . . . . . . . . . . . 149
4.3.3 Establishing the TME infrastructure . . . . . . . . . . . . . . . . . . . . . . . . 155
4.3.4 Tivoli Enterprise Console v3.9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
4.3.5 Installing Tivoli Configuration Manager . . . . . . . . . . . . . . . . . . . . . . 187
4.3.6 IBM Tivoli Monitoring installation. . . . . . . . . . . . . . . . . . . . . . . . . . . 197
4.3.7 IBM Tivoli Monitoring Web Health Console . . . . . . . . . . . . . . . . . . 201
4.3.8 IBM Tivoli Monitoring for Web Infrastructure . . . . . . . . . . . . . . . . . . 209
4.3.9 IBM Tivoli Monitoring for databases . . . . . . . . . . . . . . . . . . . . . . . . 213
4.3.10 TMTP Management Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
4.4 Postinstallation configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
4.4.1 Framework customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
4.4.2 Enabling MDIST2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
4.4.3 Enabling Tivoli End-User Web Interfaces . . . . . . . . . . . . . . . . . . . . 232
4.4.4 Configuring the Tivoli Enterprise Console . . . . . . . . . . . . . . . . . . . . 234
4.4.5 Customizing the Inventory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
4.4.6 Configuring Software Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . 245
4.4.7 Enabling the Activity Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
4.4.8 Enabling Change Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
4.4.9 IBM Tivoli Monitoring configuration . . . . . . . . . . . . . . . . . . . . . . . . . 255
4.4.10 IBM Tivoli Monitoring for Web Infrastructure . . . . . . . . . . . . . . . . . 259
4.4.11 IBM Tivoli Monitoring for Databases . . . . . . . . . . . . . . . . . . . . . . . 262
4.4.12 IBM Tivoli Monitoring for Transaction Performance . . . . . . . . . . . 266
Chapter 5. Creating profiles, packages, and tasks . . . . . . . . . . . . . . . . . 271
5.1 Defining the logical structure of the environment . . . . . . . . . . . . . . . . . . 272
5.2 Deploying management endpoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
5.3 Creating monitoring profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
5.3.1 Inventory scanning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
5.3.2 Hardware scanning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
5.3.3 System software scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
5.3.4 Custom scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
5.3.5 Defining OS and HW Monitoring Profiles . . . . . . . . . . . . . . . . . . . . 279
5.3.6 Defining TEC profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
5.3.7 Building the software package for deploying DB2 Server . . . . . . . . 291
5.3.8 Creating a DB2 instance for the Outlet Solution . . . . . . . . . . . . . . . 294
5.3.9 Defining DB2 monitoring objects . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
5.3.10 Creating the db2ecc user ID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
5.3.11 Creating profiles for Tivoli Monitoring for Databases . . . . . . . . . . 298
5.3.12 Deploying WebSphere Application Server . . . . . . . . . . . . . . . . . . 303
5.3.13 Enabling WebSphere Application Server monitoring . . . . . . . . . . 311
5.3.14 Creating a software package for TimeCard Store Server . . . . . . . 317
5.3.15 Installing the TMTP Management Agent . . . . . . . . . . . . . . . . . . . . 318
5.3.16 Configuring TMTP to monitor WebSphere . . . . . . . . . . . . . . . . . . 320
Part 3. Putting the solution to use. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
Chapter 6. Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
6.1 Automating deployment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
6.2 Automating tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
6.3 Creating the deployment activity plan . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6.3.1 Building the reference model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6.3.2 Completing the activity plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
6.4 Defining the logical structure of the Outlet Inc. environment . . . . . . . . 340
6.5 Creating endpoint policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
6.5.1 Allow_login policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
6.5.2 Select_gateway policy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
6.5.3 Login policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
6.5.4 After_login policy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
6.6 Installing endpoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
Part 4. Appendixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Appendix A. Configuration files and scripts . . . . . . . . . . . . . . . . . . . . . . 349
A.1 Additional material contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
A.2 TMR setup and maintenance-related files . . . . . . . . . . . . . . . . . . . . . . . 351
A.2.1 create_logical_structure.sh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
A.2.2 Store lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
A.2.3 load_tasks.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
A.2.4 update_resources.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
A.3 Policies and related tasks and scripts. . . . . . . . . . . . . . . . . . . . . . . . . . . 357
A.3.1 put.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
A.3.2 get.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
A.3.3 allow_policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
A.3.4 select_policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
A.3.5 gateway.lst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
A.3.6 login_policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
A.3.7 after_policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
A.3.8 sub_ep.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
A.3.9 ep_login_notif.sh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
A.3.10 run_ep_customization_task.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
A.4 DB2 monitoring deployment-related scripts . . . . . . . . . . . . . . . . . . . . . . 371
A.4.1 create_db2_instance_objects.sh . . . . . . . . . . . . . . . . . . . . . . . . . . 371
A.4.2 create_db2_database_objects.sh. . . . . . . . . . . . . . . . . . . . . . . . . . 372
A.4.3 itm_db2_instance_rm_distrib.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
A.4.4 itm_db2_database_rm_distrib.sh . . . . . . . . . . . . . . . . . . . . . . . . . . 374
A.5 WebSphere Application Server monitoring deployment files . . . . . . . . 376
A.5.1 create_app_server.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
A.5.2 itm_was_rm_distrib.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
A.5.3 was_configure_tec_adapter.sh. . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
A.5.4 was_start_tec_adapter.sh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
A.5.5 was_stop_tec_adapter.sh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
A.6 TMTP monitoring deployment scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
A.6.1 addtoagentgroup.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
A.6.2 deployj2ee.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
A.7 Software Packages and related files. . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
A.7.1 Common scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
A.7.2 IBM HTTP Server v2.0.47. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
A.7.3 WebSphere Message Queuing Server v5.3 . . . . . . . . . . . . . . . . . . 395
A.7.4 WebSphere Message Queuing Server v5.3 Fixpack 8. . . . . . . . . . 408
A.7.5 WebSphere Application Server v5.1 . . . . . . . . . . . . . . . . . . . . . . . . 417
A.7.6 WebSphere Application Server v5.1 Fixpack 1 . . . . . . . . . . . . . . . 464
A.7.7 DB2 Server v8.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
A.7.8 TMTP Agent v5.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
A.7.9 WebSphere Caching Proxy v5.1. . . . . . . . . . . . . . . . . . . . . . . . . . . 503
A.7.10 WebSphere Caching Proxy v5.1 Fixpack 1 . . . . . . . . . . . . . . . . . 592
A.7.11 TimeCard v5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
A.8 APM related scripts and files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
A.8.1 ep_customization.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
A.8.2 Production_Outlet_Plan_v1.0.xml . . . . . . . . . . . . . . . . . . . . . . . . . 629
Appendix B. Obtaining the installation images . . . . . . . . . . . . . . . . . . . . 645
IBM Business Partners. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Appendix C. Additional material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
Locating the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
Using the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
System requirements for downloading the Web material . . . . . . . . . . . . . 656
How to use the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
Product Manuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
IBM Tivoli Management Framework v4.1.1. . . . . . . . . . . . . . . . . . . . . . . . 670
IBM Tivoli Enterprise Console v3.9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
IBM Tivoli Configuration Manager v4.2.1 . . . . . . . . . . . . . . . . . . . . . . . . . 670
IBM Tivoli Monitoring v5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
IBM Tivoli Monitoring for Databases v5.1.2 . . . . . . . . . . . . . . . . . . . . . . . 671
IBM Tivoli Monitoring for Web Infrastructure v5.1.2 . . . . . . . . . . . . . . . . . 671
IBM Tivoli Monitoring for Transaction Performance v5.3 . . . . . . . . . . . . . 672
Online Information Centers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
Online manuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
Figures
1-1 Growing infrastructure complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1-2 Layers of service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1-3 The ITIL Service Management disciplines . . . . . . . . . . . . . . . . . . . . . . . 9
1-4 Key relationships between Service Management disciplines . . . . . . . . 12
1-5 A typical outlet application infrastructure . . . . . . . . . . . . . . . . . . . . . . . 13
1-6 Outlet Solution specific service layers . . . . . . . . . . . . . . . . . . . . . . . . . 15
1-7 Logical view of an outlet solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1-8 Typical Tivoli-managed outlet application infrastructure . . . . . . . . . . . . 19
1-9 The On Demand Operating Environment . . . . . . . . . . . . . . . . . . . . . . . 20
1-10 IBM Automation Blueprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1-11 Tivoli’s availability product structure . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1-12 Typical outlet solution infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2-1 Outlet Inc. network topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3-1 Outlet Inc. Geographical Architecture . . . . . . . . . . . . . . . . . . . . . . . . . 41
3-2 Centralized management method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3-3 Outlet Systems Management Solution overview . . . . . . . . . . . . . . . . . 43
3-4 Logical Configuration of TMR Hub . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3-5 Logical Configuration of TMR Spokes . . . . . . . . . . . . . . . . . . . . . . . . . 51
3-6 Sample profile manager hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3-7 Tivoli Desktops logical configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3-8 MDist2 GUI Node Table dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3-9 Initial endpoint login . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3-10 Normal Endpoint login . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3-11 Tivoli Protocol Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3-12 Logical Architecture for Outlet Inc. Inventory Management . . . . . . . . 92
3-13 Software Distribution architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3-14 Software package distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3-15 IBM Tivoli Monitoring High-level overview . . . . . . . . . . . . . . . . . . . . 104
3-16 Sampling of volatile metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3-17 Heartbeat endpoint registration data flow . . . . . . . . . . . . . . . . . . . . . 110
3-18 Heartbeat monitoring data flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
3-19 Endpoint cache retrieval data flow . . . . . . . . . . . . . . . . . . . . . . . . . . 112
3-20 High-level data flow of the Web Health Console . . . . . . . . . . . . . . . 113
3-21 TMTP architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
3-22 Event Management architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
3-23 Event management flow with Tivoli Enterprise Console . . . . . . . . . . 125
4-1 Outlet Systems Management Solution architecture . . . . . . . . . . . . . . 130
4-2 Install Tivoli Server: Specify Directory Locations . . . . . . . . . . . . . . . . 158
4-3 Install Tivoli Server: server and region names . . . . . . . . . . . . . . . . . . 159
4-4 TM Install: completion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
4-5 Tivoli Desktop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
4-6 Create a new Policy Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
4-7 Set managed resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4-8 Client Install: main window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
4-9 Client Install: Add Clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
4-10 Client Install: Set Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
4-11 Client Install: return to beginning . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4-12 Client Install confirmation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
4-13 Policy Region: ManagedNodes-region . . . . . . . . . . . . . . . . . . . . . . 170
4-14 About Tivoli . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
4-15 Installed Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
4-16 Installed Patches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
4-17 Edit TMR Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
4-18 Administrator Info Changed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
4-19 TEC Console login . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
4-20 Activity Plan Editor launch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
4-21 Change Manager launch dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
4-22 TMTP: Web Health Console Integration . . . . . . . . . . . . . . . . . . . . . 270
5-1 Resource monitor status for endpoint dev01-ep . . . . . . . . . . . . . . . . 290
5-2 Transactions discovered by discovery policy . . . . . . . . . . . . . . . . . . . 326
5-3 Create J2EE Threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
6-1 The automated deployment process . . . . . . . . . . . . . . . . . . . . . . . . . 332
6-2 Activity flow for Outlet Server deployment . . . . . . . . . . . . . . . . . . . . . 337
B-1 Searching for Software Downloads . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Tables
2-1 Network speed comparison for software distribution . . . . . . . . . . . . . . 32
3-1 Suggested naming conventions for Tivoli objects . . . . . . . . . . . . . . . . 61
3-2 Hardware requirements for TMR Servers and managed nodes . . . . . . 63
3-3 Hardware requirements for gateways . . . . . . . . . . . . . . . . . . . . . . . . . 63
3-4 Hardware requirements for endpoints . . . . . . . . . . . . . . . . . . . . . . . . . 64
3-5 Recommended MDist1 repeater settings . . . . . . . . . . . . . . . . . . . . . . . 68
3-6 Recommended MDist2 repeater settings . . . . . . . . . . . . . . . . . . . . . . . 71
3-7 Endpoint Login Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3-8 Standard TCP/IP ports used by Tivoli . . . . . . . . . . . . . . . . . . . . . . . . . 86
3-9 Suggested wcollect settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3-10 Tables for storing Software Distribution status . . . . . . . . . . . . . . . . . 103
4-1 The database environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
4-2 Transaction performance monitoring environments . . . . . . . . . . . . . . 134
4-3 The Hub TMR environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4-4 The Spoke TMR environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4-5 The PoC Regional environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4-6 The Outlet environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4-7 Functional component location details for non-TMF systems . . . . . . . 138
4-8 Hub TMR functional component location details . . . . . . . . . . . . . . . . 138
4-9 Spoke TMR functional component location details . . . . . . . . . . . . . . . 139
4-10 Naming standards for the Outlet Systems Management Solution . . . 141
4-11 DB2 Instance and Database Matrix . . . . . . . . . . . . . . . . . . . . . . . . . 150
4-12 hubtmr TMR Server installation parameters . . . . . . . . . . . . . . . . . . . 157
4-13 Tivoli Management Framework installation roadmap . . . . . . . . . . . . 172
4-14 TEC product and patch installation roadmap . . . . . . . . . . . . . . . . . . 182
4-15 TCM product and patch installation roadmap . . . . . . . . . . . . . . . . . . 188
4-16 IBM Tivoli Monitoring product and patch installation roadmap . . . . . 198
4-17 Web Health Console installation roadmap: GA version . . . . . . . . . . 203
4-18 Web Health Console installation roadmap: LA version . . . . . . . . . . . 203
4-19 TMTP product and patch installation roadmap . . . . . . . . . . . . . . . . . 218
4-20 Options for configCLI.sh script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
4-21 Parameters for the mdist2 RIM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
4-22 Parameters for the tec RIM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
4-23 Parameters for the invdh_1 and inv_query RIMs . . . . . . . . . . . . . . . 239
4-24 Parameters for the planner RIM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
4-25 Parameters for the ccm RIM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
6-1 Profile distribution scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
6-2 Production tasks and related scripts . . . . . . . . . . . . . . . . . . . . . . . . . 335
6-3 APM plan activities and conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
B-1 Installation image download reference . . . . . . . . . . . . . . . . . . . . . . . 647
Examples
3-1 Gateway selection configuration file . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3-2 Changing collector starts and stops . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3-3 Signature file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4-1 Append file /etc/samba/smb.conf . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4-2 Library structure code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4-3 Database created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4-4 Cataloging hubtmr system databases . . . . . . . . . . . . . . . . . . . . . . . . 154
4-5 Verifying databases for the hubtmr system . . . . . . . . . . . . . . . . . . . . 155
4-6 Installing the Hub TMR Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4-7 List of installed Tivoli components generated by wlsinst . . . . . . . . . . . 162
4-8 Managed node list produced by odadmin odlist . . . . . . . . . . . . . . . . . 170
4-9 Installed Tivoli Components and patches . . . . . . . . . . . . . . . . . . . . . . 179
4-10 TEC component installation verification on Hub TMR . . . . . . . . . . . 185
4-11 TEC component verification on spokeTMR . . . . . . . . . . . . . . . . . . . 186
4-12 TCM Components installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
4-13 ITM Components installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
4-14 WebSphere Application Server tracing and logging location . . . . . . . 207
4-15 Verifying TMTP CLI installation and configuration . . . . . . . . . . . . . . 222
4-16 oserv configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
4-17 From the hubtmr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
4-18 From the spoketmr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
4-19 Using the wlsconn command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
4-20 Old https-port file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
4-21 Revised http-port file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
4-22 Verifying inventory profile distribution using wmdist command . . . . . 244
4-23 wmdist -l sample output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
4-24 Command line invocation of Inventory queries - sample output . . . . 244
4-25 tecad_sd.conf sample file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
4-26 Notifications to the TEC server . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
4-27 Querying the status of heartbeating . . . . . . . . . . . . . . . . . . . . . . . . . 258
4-28 ITM for WI Configure_Event_Server task invocation . . . . . . . . . . . . 260
4-29 TEC reception log output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
4-30 TMTP configuration of TEC Server . . . . . . . . . . . . . . . . . . . . . . . . . 267
4-31 TMTP event processing verification . . . . . . . . . . . . . . . . . . . . . . . . . 268
5-1 Creating profile managers with wcrtprfmgr . . . . . . . . . . . . . . . . . . . . 272
5-2 Issuing wcrtprfmgr from the spoketmr system . . . . . . . . . . . . . . . . . . 272
5-3 Subscribing the spoketmr and hubtmr profile managers . . . . . . . . . . . 273
5-4 Verifying endpoint availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
5-5 Creating inventory hardware scanning profile . . . . . . . . . . . . . . . . . . 275
5-6 Distributing an InventoryConfig profile . . . . . . . . . . . . . . . . . . . . . . . . 275
5-7 Using wmdist to determine the status of a distribution . . . . . . . . . . . . 275
5-8 Viewing scan results through wqueryinv . . . . . . . . . . . . . . . . . . . . . . 275
5-9 Creating profiles for software scanning . . . . . . . . . . . . . . . . . . . . . . . 276
5-10 Operating System information from Inventory . . . . . . . . . . . . . . . . . 277
5-11 Creating custom profiles for software scanning . . . . . . . . . . . . . . . . 277
5-12 Signature scanning of pristine system . . . . . . . . . . . . . . . . . . . . . . . 278
5-13 Inventory scan events forwarded to TEC . . . . . . . . . . . . . . . . . . . . . 279
5-14 Linking Jre using the DMLinkJre task . . . . . . . . . . . . . . . . . . . . . . . . 280
5-15 Creating a Tmw2kProfile from the command line . . . . . . . . . . . . . . . 281
5-16 hubtmr-region_MASTER_ITM_linux-ix86 monitoring profile details . . 282
5-17 Monitoring engine listing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
5-18 Defining the ACP profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
5-19 DB2 Instance object creation verification with wlookup . . . . . . . . . . 296
5-20 Default DB2-instances monitoring profile . . . . . . . . . . . . . . . . . . . . . 299
5-21 Active DB2 Instance monitors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
5-22 Default DB2-databases monitoring profile . . . . . . . . . . . . . . . . . . . . 300
5-23 Output from wdmlseng on database server endpoint . . . . . . . . . . . . 303
5-24 Discover WebSphere Resources task invocation . . . . . . . . . . . . . . . 313
5-25 ISWApplicationServer objects and the default profile managers . . . . 315
5-26 wdmlseng - <ep_name> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
5-27 Sample TMTP object names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
5-28 Deploying the J2EE component . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
6-1 Creating the Policy Region structure . . . . . . . . . . . . . . . . . . . . . . . . . 341
6-2 Distributing profiles to regions and stores . . . . . . . . . . . . . . . . . . . . . 341
6-3 Logical subscription hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
6-4 Creating profile managers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
6-5 Subscribing spoketmr profile managers to hubtmr profile managers . 342
A-1 Links to /mnt directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
A-2 create_logical_structure.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
A-3 general_stores.lst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
A-4 maine_stores.lst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
A-5 ohio_stores.lst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
A-6 load_tasks.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
A-7 update_resources.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
A-8 put.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
A-9 get.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
A-10 allow_policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
A-11 select_policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
A-12 gateway.lst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
A-13 login_policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
A-14 after_policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
A-15 sub_ep.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
A-16 ep_login_notif.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
A-17 run_ep_customization_task.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
A-18 create_db2_instance_objects.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
A-19 create_db2_database_objects.sh . . . . . . . . . . . . . . . . . . . . . . . . . . 372
A-20 itm_db2_instance_rm_distrib.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
A-21 itm_db2_database_rm_distrib.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
A-22 create_app_server.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
A-23 itm_was_rm_distrib.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
A-24 was_configure_tec_adapter.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
A-25 was_start_tec_adapter.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
A-26 was_stop_tec_adapter.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
A-27 addtoagentgroup.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
A-28 deployj2ee.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
A-29 do_it . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
A-30 java141_it . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
A-31 Software package for IBM HTTP Server v2 installation . . . . . . . . . . 383
A-32 Response file for IHS 2.0.47 installation . . . . . . . . . . . . . . . . . . . . . . 392
A-33 IHS 2.0.47 installation script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
A-34 IHS 2.0.47 uninstallation script . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
A-35 mq_server.5.3.0.spd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
A-36 Script for creating user and group for WebSphere MQ . . . . . . . . . . 405
A-37 WebSphere MQ Server installation script . . . . . . . . . . . . . . . . . . . . . 405
A-38 Script for uninstallation of WebSphere MQ server . . . . . . . . . . . . . . 407
A-39 Software Package mq_server.5.3.0_fp8.spd . . . . . . . . . . . . . . . . . . 408
A-40 mq_server_538_install.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
A-41 mq_server_538_uninstall.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
A-42 Software package for WAS 5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
A-43 Response file for WAS 5.1 installation . . . . . . . . . . . . . . . . . . . . . . . 438
A-44 Script for installation of WAS 5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
A-45 mod_was_server_JVM_process.sh . . . . . . . . . . . . . . . . . . . . . . . . . 458
A-46 mod_was_pmirm_settings.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
A-47 Enable WAS security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
A-48 WebSphere Application Server security enablement control . . . . . . 462
A-49 WebSphere Application Server 5.1 uninstallation . . . . . . . . . . . . . . . 463
A-50 Software package definition for was_server.5.1.0_fp1 . . . . . . . . . . . 464
A-51 was_appserver_510_fp1 installation script . . . . . . . . . . . . . . . . . . . . 476
A-52 Software package definition for db2_server.8.2.0.spd . . . . . . . . . . . 477
A-53 Response file for DB2 Server v8.2 installation . . . . . . . . . . . . . . . . . 487
A-54 db2_server_820_addusers.sh script to create db2 users . . . . . . . . . 488
A-55 db2_server_820_remusers.sh to remove db2 users . . . . . . . . . . . . . 489
A-56 Software Package for TMTP Agent deployment . . . . . . . . . . . . . . . . 494
A-57 Response file for TMTP Agent deployment . . . . . . . . . . . . . . . . . . . 502
A-58 Software package for Caching Proxy deployment . . . . . . . . . . . . . . 503
A-59 ibmproxy.conf for Outlet Inc. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
A-60 Installation script for WebSphere Caching Proxy . . . . . . . . . . . . . . . 591
A-61 Script to stop the WebSphere Caching Proxy . . . . . . . . . . . . . . . . . 592
A-62 Caching Proxy fixpack 1 spd file . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
A-63 Software package for TimeCard installation and removal . . . . . . . . . 603
A-64 TimeCard installation and setup script . . . . . . . . . . . . . . . . . . . . . . . 613
A-65 db2_store_cfg.sh script to create store databases . . . . . . . . . . . . . . 615
A-66 Store database table definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
A-67 TimeCard application installation script . . . . . . . . . . . . . . . . . . . . . . 618
A-68 ep_customization.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
A-69 Production_Outlet_Plan_v1.0.xml . . . . . . . . . . . . . . . . . . . . . . . . . . 629
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions
are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES
THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
1-2-3®
AIX®
Approach®
AS/400®
BookMaster®
CICS®
Database 2™
DB2 Universal Database™
DB2®
Domino®
Eserver®
eServer™
ETE™
ibm.com®
IBM®
Illustra™
IMS™
Lotus®
MQSeries®
Notes®
OS/2®
PartnerWorld®
Perform™
RAA®
RACF®
Rational®
Redbooks (logo)™
Redbooks™
SLC™
Tivoli Enterprise Console®
Tivoli Enterprise™
Tivoli Management
Environment®
Tivoli®
TME®
WebSphere®
The following terms are trademarks of other companies:
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel
SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM Redbook will help you design, install, tailor, and configure a Tivoli® solution for managing a large Linux® IT infrastructure supporting highly distributed business models such as branch banking or retail outlets.
The primary purpose of this book is to provide an easy-to-follow, step-by-step guide for using the following Tivoli systems management products:
• IBM Tivoli Management Framework v4.1.1
• IBM Tivoli Configuration Manager v4.2 (Inventory, Software Distribution, Activity Planner, Change Manager)
• IBM Tivoli Enterprise™ Console® v3.9
• IBM Tivoli Monitoring v5.1.2
• IBM Tivoli Monitoring for Web Infrastructure: WebSphere® Application Server v5.1.2
• IBM Tivoli Monitoring for Databases: DB2® v5.1
• IBM Tivoli Monitoring for Transaction Performance v5.3
In addition to these core components, particular implementations can be
extended using a variety of Tivoli solutions. To name a few:
• IBM Tivoli Data Warehouse for historical reporting
• IBM Tivoli Service Level Advisor for service level management
• IBM Tivoli Remote Control
• IBM Tivoli Business Systems Manager for business related surveillance
• IBM Tivoli Data Exchange for data synchronization
• ... and many more
You will also read about industry-specific solutions such as:
• IBM Tivoli Configuration Manager for ATMs
• IBM Tivoli Point-of-Sale Manager
For more information about any of these products, refer to the following Web site:
http://www.ibm.com/software/sw-atoz/indexT.html
The information in this book is intended for IT professionals who have conceptual knowledge of the Tivoli product set and who will be responsible for, or take part in, deploying or maintaining a Tivoli systems management solution. Throughout this redbook, we provide references to relevant product documentation. While reading this book, you might want to visit our main Tivoli InfoCenter Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
The information in this book was developed around an example implementation
of a management solution for centralized distribution and management of most
hardware and software resources. We also discuss the provisioning of a sample WebSphere-based application using a DB2 database in a fictitious, Linux-based retail environment.
The information in this book is structured as follows:
• Part 1, “The outlet environment” on page 1
This introductory part discusses the management challenges surrounding a highly distributed environment, including a high-level overview of the related ITIL processes. This is followed by a description of how selected Tivoli products help overcome these challenges, and it ends with a conceptual architecture for the Outlet Systems Management Solution.
– The challenges of managing an outlet environment
– The Outlet Solution overview
– The Outlet Systems Management Solution Architecture
• Part 2, “Management solution implementation” on page 127
This main part of the book provides detailed step-by-step instructions on how to install and configure the various Tivoli components, along with related middleware such as DB2 databases and WebSphere Application Servers. The information is divided into an installation section and a setup and customization section.
– Installing the Tivoli Infrastructure
– Creating profiles, packages, and tasks
• Part 3, “Putting the solution to use” on page 329
In this final part, we demonstrate how to perform the actual rollout of the solution, focusing on automating the deployment. In addition, we briefly discuss the activities involved in the ongoing maintenance of the environment.
Throughout this book, examples are provided. Source code for these is available in Appendix A, “Configuration files and scripts” on page 349. For instructions on how to download a machine-readable copy, see Appendix C, “Additional material” on page 655.
The team that wrote this redbook
This redbook was produced by a team of specialists from around the world
working at the International Technical Support Organization, Austin Center.
Morten Moeller is a Project Leader at the International Technical Support
Organization (ITSO), Austin Center. He applies his extensive field experience as
an IBM Certified IT Specialist to his work at the ITSO where he writes extensively
on all areas of Systems Management. Before joining the ITSO, Morten worked in
the Professional Services Organization of IBM Denmark as a Distributed
Systems Management Specialist where he was involved in numerous projects
designing and implementing systems management solutions for major
customers of IBM Denmark.
Robert Haynes is a Technical Lead in Toronto, Canada. He has 17 years of
experience in Application Development, and has been with IBM Global Services
for the past seven years. He holds a degree in Computer Science from the
University of the West Indies and an MBA from Wilfrid Laurier University,
Waterloo, Canada. He is an IBM Certified Application Developer for DB2 v7 and 8, a Certified PL/SQL Developer for Oracle 9i, and a Principal CLP in Lotus® Notes.
Paul Jacobs is a Technical Services Professional in Toronto, Canada. He graduated from Wilfrid Laurier University, Waterloo, Canada, with a degree in economics and joined IBM shortly after graduating. Paul has been with IBM Global Services for the past six years, specializing in Systems Management.
Fabrizio Salustri is an IT Support Specialist working for the EMEA South
Region within the IBM Global Services. He has been working for IBM since 1996,
and has extensive experience with Tivoli Products. Throughout his career,
Fabrizio has been involved in several projects implementing Tivoli solutions for
important clients of IBM Italy. Before joining Tivoli Support, he worked as a
Certified AIX® System Administrator in AIX Technical Support.
Thanks to the following people for their contributions to this project:
Bryan Chagoly
IBM Tivoli
Austin, Texas
The Editing Team
ITSO
Austin Center, Texas
Ashwin Manekar, Mark Wainwright
IBM SystemHouse
Raleigh, North Carolina
Become a published author
Join us for a two- to six-week residency program! Help write an IBM Redbook
dealing with specific products or solutions, while getting hands-on experience
with leading-edge technologies. You'll team with IBM technical professionals,
Business Partners and/or customers.
Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you'll develop a network of contacts in IBM development labs, and
increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our Redbooks™ to be as helpful as possible. Send us your comments
about this or other Redbooks in one of the following ways:
򐂰 Use the online Contact us review redbook form found at:
ibm.com/redbooks
򐂰 Send your comments in an email to:
[email protected]
򐂰 Mail your comments to:
IBM® Corporation, International Technical Support Organization
Dept. JN9B Building 003 Internal Zip 2834
11400 Burnet Road
Austin, Texas 78758-3493
Part 1
The outlet environment
In this section, we introduce and discuss the challenges of providing IT-based
solutions to large numbers of branches, retail outlets, and so forth.
For completeness, the information in this section will encompass some solutions.
However, the main focus will be on specific challenges of managing an
infrastructure of detached, highly distributed components. Here, the term
detached means that the distributed components are connected to the enterprise
network by low-cost, low speed, and low-reliability network connections, which is
the typical situation found in most outlet implementations today.
The main topics covered in this section are:
򐂰 Chapter 1, “The challenges of managing an outlet environment” on page 3
򐂰 Chapter 2, “The Outlet Solution overview” on page 29
򐂰 Chapter 3, “The Outlet Systems Management Solution Architecture” on
page 37
Chapter 1. The challenges of managing an outlet environment
Many large organizations within the retail and banking industries are looking for
ways to centralize and automate certain aspects of their IT processes. The goal
is to handle the vast majority of all IT administrative tasks from the enterprise
headquarters and to minimize the need for IT staff at each outlet or remote
location. The outlet could be a retail store or branch bank. This will free up the
outlet staff to focus on improving sales and the customer experience. In addition,
centralizing IT might, in many instances, lead to better alignment between
business objectives and IT priorities, and will help the IT organizations to react
faster to the ever-changing business environment.
In this chapter, we discuss the challenges of designing and centrally managing
the IT requirements of hundreds or possibly thousands of remote locations and
their existing and emerging business requirements.
1.1 Complex application infrastructures
The complex infrastructure needed to facilitate modern, distributed retail and
banking solutions has been dictated mostly by requirements for standardization
of client run-time environments. In addition, application run-time technologies
play a major role, because they must ensure platform independence and
seamless integration to the back-end systems, as depicted in Figure 1-1.
Furthermore, making the applications accessible from any device type, from
anywhere in the world by any person on the planet raises some security issues
(authentication, authorization, and integrity) that need addressing.
Figure 1-1 Growing infrastructure complexity
Because of the central role that the Web, application, and database servers,
commonly referred to as resource servers, play within a business and the fact
that they are supported and typically deployed across a variety of platforms
throughout the enterprise, there are several major challenges to managing an
outlet infrastructure, including:
򐂰 Managing resource servers on multiple platforms in a consistent manner from
a central location
򐂰 Defining and maintaining the outlet infrastructure from one central location
򐂰 Monitoring resources (systems, sites, networks, applications, and databases)
to know when problems have occurred or are about to occur
򐂰 Taking corrective action when a problem is detected, independent of the
platform
򐂰 Gathering data across all outlet environments to analyze events, messages,
and metrics
The degree of complexity of the outlet infrastructure system management is
directly proportional to the size of the infrastructure being managed. In its
simplest form, an outlet infrastructure is comprised of a central set of resources
combined with a single Web server and its resources in each outlet. However,
in most country-wide implementations, no matter which country, even this
simple outlet solution can require managing hundreds or even thousands of
resource servers throughout the enterprise.
To add to the complexity, the outlet infrastructure can span many platforms with
different network protocols, hardware, operating systems, and applications. Each
platform possesses its unique and specific systems management needs and
requirements, not to mention varying levels of support for the administrative tools
and interfaces.
Every component in the outlet infrastructure is a potential bottleneck, or even
single point of failure. Each and every one provides specialized services needed
to facilitate the outlet application system. We use the term application systems
deliberately to emphasize the idea that no single component by itself provides a
total solution.
The outlet application is pieced together by a combination of standard
off-the-shelf components and in-house-developed components. The standard
components usually provide general services, such as session control,
authentication and access control, messaging, and database access. The
in-house components add the application logic needed to glue all the different
bits and pieces together to perform specific functions for a particular application
system. On an enterprise level, chances are that many of the custom
components might be promoted to standard status to ensure that specific
company standards or policies are enforced.
At first glance, breaking up the outlet application into many specialized services
might be regarded as counterproductive and very expensive. However,
specialization enables sharing of common components (such as Web,
application, security, and database servers) between more outlet application
systems. Specialization is also key to ensuring availability and performance of
the application system as a whole by allowing for duplication and distribution of
selected components. These specialized components can then meet specific
resource requirements or increase the performance of the application systems
as a whole. In addition, this itemizing of the total solution allows for almost
seamless adoption of new technologies for selected areas without the need to
change the total system.
Whether the components in the outlet system are commercial, standard, or
application-specific, each of them will most likely require other general services,
such as communication facilities, storage space, and processing power. The
computers on which they run need electrical power, shelter from the weather,
access security, and perhaps even cooling.
As it turns out, the outlet application relies on several layers of services that can
be provided in-house or by external companies. This is illustrated in Figure 1-2.
Figure 1-2 Layers of service
As a matter of fact, it is not exactly the outlet application that relies on the
services depicted above. The truth is that individual components such as Web
servers, database servers, application servers, lines, routers, hubs, and switches
each rely on underlying services provided by some other component. You can
break this down even further, but that is beyond this discussion. The point is that
the outlet solution is exactly as solid, robust, and stable as the weakest link of the
chain of services that make up the entire solution. The bottom-line results of an
enterprise can be affected drastically by the quality of its outlet solutions.
There are a number of technologies available to allow for continuing, centralized
monitoring and surveillance of the outlet solution components. These
technologies help manage the IT resources. Some of these technologies can
even be applied to manage the non-IT resources such as power, cooling, and
access control.
However, each layer in any component is specialized and requires different types
of management. In addition, from an IT-management point of view, the top layer
of any component is the most interesting, because it is the layer that provides a
unique service required by a particular component. For a Web server, the top
layer is the HTTP server itself. This is the mission-critical layer, even though it
still needs networking, an operating system, hardware, and power to operate. On
the other hand, for an outlet application server, the mission-critical layer is the
application transactions, which rely on other components implemented on the
server, such as the application server, database client, operating system, power,
and networking. These are considered secondary in this case. This said, all the
underlying services are needed and must operate flawlessly in order for the top
layer to provide its services. It is much like driving a car. You monitor the
speedometer regularly to avoid penalties for violating changing speed limits, but
you check the fuel indicator only from time to time or when the indicator alerts
you to perform preventive maintenance by filling up the tank.
1.2 Managing outlet application systems
Specialized functions require specialized management, and general functions
require general management. Therefore, it is obvious that the management of
the operating system, hardware layer, and networking layer be general because
they are used by most of the outlet infrastructure components. On the other
hand, a management tool for Web application servers might not be very
well-suited for managing the database server.
Up to now, the term managing has been used widely, but not yet explained.
Control over and management of the computer system and its vital components
are critical to the continuing operation of the system and therefore the timely
availability of the services and functions provided by the system. This includes:
򐂰 Controlling both physical and logical access to the system to prevent
unauthorized modifications to the core components
򐂰 Monitoring the availability of the systems as a whole, as well as the
performance and capacity usage of the individual resources, such as disk
space, networking equipment, memory, and processor usage
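
As a minimal illustration of the monitoring point above, a periodic resource
check on an outlet server can be sketched in a few lines of Python using only
the standard library; the 90% disk and load thresholds here are arbitrary
examples, not recommendations:

    import os
    import shutil

    def check_disk(path="/", max_used_pct=90.0):
        usage = shutil.disk_usage(path)
        used_pct = 100.0 * usage.used / usage.total
        return used_pct < max_used_pct, f"{path} is {used_pct:.1f}% full"

    def check_load(max_load=4.0):
        one_min, _, _ = os.getloadavg()   # 1-, 5-, and 15-minute load averages
        return one_min < max_load, f"load average is {one_min:.2f}"

    for ok, message in (check_disk(), check_load()):
        print(("OK: " if ok else "ALERT: ") + message)

In practice such checks run under a monitoring agent that forwards alerts to a
central console rather than printing them locally.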
Of course, these control and monitoring activities have to be performed
cost-effectively, so the cost of controlling any resource does not become higher
than the cost of the resource itself. It does not make much business sense to
spend $1000 to manage a $200 hard disk, unless the data on that hard disk
represents real value to the business in excess of $1000, or a disk outage due to
unavailable capacity or component failure represents a potential loss in excess
of $1000. Planning for recovery of the systems in the event of a disaster also
needs to be addressed. Being without computer systems for days or weeks has
a huge impact on your enterprise’s ability to conduct business.
There still is one important aspect to be covered for successfully managing and
controlling computer systems. We have mentioned various hardware and
software components that collectively provide a service, but which components
are part of the IT infrastructure, where are they, and how do they relate to one
another? A prerequisite for successful management is the detailed knowledge of
which components to manage, how the components interrelate, and how these
components can be manipulated in order to control their behavior.
In addition, now that IT has become an integral part of doing business, it is
equally important to know which commitments we have made with respect to
availability and performance of the outlet solutions, and what commitments our
subcontractors have made to us. For planning and prioritization purposes, it is
vital to combine our knowledge about the components in the infrastructure with
the commitments we have made to assess and manage the impact of
component malfunction or resource shortage. In short, in a modern outlet
environment, one of the most important management tasks is to control and
manage the service catalogue in which all the provisioned services are defined
and described, and the Service Level Agreements in which the commitments of
the IT department are all listed and explained in detail.
For this discussion, we turn to the widely recognized Information Technology
Infrastructure Library (ITIL). The ITIL was developed by the British Government’s
Central Computer and Telecommunications Agency (CCTA), but has over the
past decade or more gained acceptance in the private sector.
One of the reasons behind this acceptance is that most IT organizations, met
with requirements to promise or even guarantee performance and availability,
agree that there is no point in agreeing to deliver a service at a specific level if the
basic tools and processes needed to deploy, manage, monitor, correct, and
report the achieved service level have not been established. ITIL groups all of
these activities into two major areas, Service Delivery and Service Support, as
shown in Figure 1-3 on page 9.
Figure 1-3 The ITIL Service Management disciplines
1.2.1 Service Delivery
The primary objectives of the Service Delivery discipline are proactive and
consist primarily of planning and ensuring that the service is delivered according
to the Service Level Agreement. The processes required for this to happen are
described in the following sections.
Service Level Management
Service Level Management involves managing customer expectations and
negotiating Service Level Agreements. This involves identifying customer
requirements and determining how these can best be met within the
agreed-upon budget, as well as working together with all IT disciplines and
departments to plan and ensure delivery of services. This involves setting
measurable performance targets, monitoring performance, and taking action
when targets are not met.
Financial Management
Financial Management consists of registering and maintaining cost accounts
about the use of IT services and delivering cost statistics and reports to Service
Level Management to help them obtain the correct balance between service cost
and delivery. It also means assisting in pricing the services in the service catalog
and SLAs.
IT Services Continuity Management
IT Services Continuity Management plans for and ensures continuing delivery
of the service with minimum outage by reducing the impact of disasters,
emergencies, and major incidents. This work is done in close collaboration with
the company’s business continuity management, which is responsible for
protecting all aspects of the company’s business, including IT.
Capacity Management
Capacity Management plans and ensures that adequate capacity with the
expected performance characteristics is available to support the service delivery.
It also delivers capacity usage, performance, and workload management
statistics (as well as trend analysis) to Service Level Management.
Availability Management
Availability Management means planning and ensuring the overall availability of
the services and providing management information in the form of availability
statistics, including security violations, to Service Level Management.
This discipline also includes negotiating underpinning contracts with external
suppliers as well as the definition of maintenance windows and recovery times.
1.2.2 Service Support
The disciplines in the Service Support group are concerned with implementing
the plans and providing management information about the levels of service to
be achieved.
Configuration Management
Configuration Management is responsible for registering all components in the IT
service, including customers, contracts, SLAs, hardware and software
components, and maintaining a repository of configured attributes and
relationships between the components.
Service Desk
The Service Desk is the main point of contact for users of the service. When
users report problems, incident records are generated by Service Desk. The
Service Desk also provides statistics to Service Level Management,
demonstrating the service levels achieved.
Incident Management
Incident Management registers incidents, generated from Service Desk or
automatically by the other processes. It also allocates severity, initiates known
solution tasks, and coordinates the efforts of support teams to ensure timely and
accurate problem resolution.
Escalation times are noted in the SLA and are agreed upon between the
customer and the IT department.
Problem Management
Problem Management implements and uses procedures to perform problem
diagnosis, identify solutions, and correct problems. It also registers solutions in
the configuration repository.
Escalation times should be agreed upon internally with Service Level
Management during the SLA negotiation. It also provides problem resolution
statistics to support Service Level Management.
Change Management
Change Management plans and ensures that the impact of a change to any
component of a service is well known and that the implications regarding service
level achievements are minimized. This includes changes to the SLA documents
and the Service Catalog as well as organizational changes and changes to
hardware and software components.
Release Management
Release Management manages the master software repository, reference
models and so forth. Release Management acts, in effect, as the workhorse for
Change Management by orchestrating and deploying approved changes, and
provides management reports regarding deployment.
The key relationships between these disciplines are shown in Figure 1-4 on
page 12.
Figure 1-4 Key relationships between Service Management disciplines
For the remainder of this chapter, we limit our discussion to Capacity,
Availability, and Release Management of the outlet environment. Unlike the
other disciplines, which cover common services provided by the IT organization, the
outlet solutions provide special challenges to management, due to their high
visibility and importance to the bottom line business results, their level of
distribution, and the special security issues that characterize the Internet.
To learn more about ITIL, visit these Web sites:
򐂰 The UK’s Office of Government Commerce
http://www.ogc.gov.uk
򐂰 The IT Service Management Forum
http://www.itSMF.com
1.3 The outlet application infrastructure
In a typical outlet environment, the application infrastructure consists of three
separate, central tiers. As Figure 1-5 shows, the communication between these
tiers is restricted. These tiers form the back-end of the application solution and
normally host the applications accessible from the Internet.
In the distributed outlet environment, the outlets are connected through secure
connections (the enterprise network), and local servers that provide distributed
functions located throughout the regional offices and outlets.
Figure 1-5 A typical outlet application infrastructure
The tiers can be described this way:
򐂰 Regions or outlets are the remote, secured systems that host the
application locally to provide 24x7 access in the outlet. The application users
are restricted to being physically present in the location using hand-held
devices, Point of Service (PoS) systems, PCs or self-service systems such as
kiosks, ATMs, scanners, and so forth.
򐂰 The Demilitarized Zone (DMZ) is the tier accessible by all external and
remote, internal users of the applications. This tier functions as the
gatekeeper to the entire system. Functions such as access control and
intrusion detection are enforced here. The only other part of the intracompany
network that the DMZ can communicate with is the application tier.
򐂰 The Application Tier is usually implemented as a dedicated part of the
network where the application servers reside. End-user requests are routed
from the DMZ to the specific servers in this tier, where they are serviced.
When the applications need to use resources from company-wide databases,
for example, these are requested from the back-end tier, where all the
secured company IT assets reside. As was the case for communication
between the DMZ and the Application Tier, the communication between the
Application Tier and the back-end systems is established through firewalls
using well-known connection ports. This helps ensure that only known
transactions from known machines outside the network can communicate
with the company databases or existing transaction systems such as CICS®
or IMS™. Apart from specific application servers, the Application Tier also
hosts load-balancing devices and other infrastructural components, such as
MQ Servers, needed to implement a given application architecture.
򐂰 The Back-end Tier is where all the vital company resources and IT assets
reside. External access to these resources is only possible through the DMZ
and the Application Tier.
This model architecture is a proven way to provide secure, scalable,
high-availability access to company data with a minimum of exposure to security
violations. However, components such as application servers and infrastructural
resources vary depending upon the nature of the applications, company policies,
the requirements for availability and performance, and the capabilities of the
technologies used.
If you are in the outlet hosting area, or you have to support multiple lines of
business that require strict separation, the conceptual architecture shown in
Figure 1-5 on page 13 can be even more complicated. In these situations, one
or more of the tiers might have to be duplicated to provide the required
separation. In addition, the back-end tier might even be established remotely,
relative to the Application Tier. This is very common when the outlet application
hosting is outsourced to an external vendor, such as IBM Global Services.
To help design the most appropriate architecture for a specific set of e-business
applications, IBM has published a set of e-business patterns that can be used to
speed up the process of developing outlet applications and deploying the
infrastructure to host them.
The concept behind these e-business patterns is to reuse tested and proven
architectures with as little modification as possible. IBM has gathered
experiences from more than 20,000 engagements, compiled these into a set of
guidelines, and associated them with links. A solution architect can start with a
problem and a vision for the solution and then find a pattern that fits that vision.
Then, by drilling down using the patterns process, the architect can further define
the additional functional pieces that the application will need to succeed. Finally,
the architect can build the application using coding techniques outlined in the
associated guidelines.
For more information regarding e-business patterns, refer to the IBM Redbook
Patterns: Service-Oriented Architecture and Web Services, SG24-6303-00, and
related redbooks available for download at IBM Redbooks Web site:
http://www.redbooks.ibm.com
1.3.1 Basic products used to facilitate outlet applications
So far, we have concluded that when building an outlet solution we want to:
򐂰 Provide the user access to the applications and information using a variety of
devices: hand-helds, scanners, kiosks, PCs, and PoS along with an
easy-to-use interface that fulfills the needs of the user and has a common
look-and-feel to it.
򐂰 Use as many standard components as possible to keep costs down and be
able to interchange them seamlessly.
򐂰 Be reliable and available at all times with a minimum of maintenance.
򐂰 Build in unique features or differentiators that make the user choose our
product over those of the competitors.
No matter whether the outlet solution is the front end of an existing system or a
new application developed using modern, state-of-the-art development tools, it
must be characterized by three specific layers of services that work together to
provide the unique functionality necessary to allow the applications to be used in
an Internet environment, as shown in Figure 1-6.
Figure 1-6 Outlet Solution specific service layers
The Presentation layer must be a commonly available tool installed on all the
machines of the users of the outlet solution. It should support modern
development technologies such as XML, JavaScript, and HTML pages. User
access is commonly implemented through a browser interface.
The standard communication protocols which provide connectivity to the Internet
are TCP/IP, HTTP, and HTTPS. These protocols must be supported by both
client and server machines.
The Transformation services are responsible for receiving client requests and
transforming them into business transactions that, in turn, are processed by the
Solution Server. In addition, it is the responsibility of the Transformation service
to receive results from the solution server and convey them back to the client in a
format that can be handled by the browser. In outlet solutions that do not interact
with existing systems, the transformation and solution server services can be
implemented in the same application, but most likely they are split into two or
more dedicated services.
This is a very simple representation of the functions that take place in the
Transformation service. Among other functions that must be performed are
identification, authentication and authorization control, load balancing, and
transaction control. Dedicated servers for each of these functions are usually
implemented to provide a robust and scalable outlet environment. In addition,
some of these are placed in a dedicated network segment, the DMZ. From the
point of view of the outlet owner, the DMZ is fully controlled; client requests are
received by well-known, secure systems and passed on to the enterprise
network, the intranet. This architecture increases security by avoiding
transactions from unknown machines wanting to reach the enterprise network,
and minimizing the exposure of enterprise data to the risk of hacking.
To facilitate secure communication between the DMZ and the intranet, a set of
Web servers is usually implemented, and identification, authentication, and
authorization are typically handled by an LDAP Server.
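
As an illustration of the authentication step, the following sketch performs a
simple bind against a directory server using the third-party Python ldap3
package; the host name and DN layout are invented for the example:

    from ldap3 import Server, Connection

    def authenticate(uid, password):
        server = Server("ldap.example.com", use_ssl=True)    # hypothetical host
        user_dn = f"uid={uid},ou=people,dc=example,dc=com"   # hypothetical DN layout
        conn = Connection(server, user=user_dn, password=password)
        ok = conn.bind()   # a successful simple bind proves the credentials
        conn.unbind()
        return ok

A real deployment would normally search for the user's DN first and enforce
password and lockout policies on the directory side.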
The infrastructure depicted in Figure 1-7 contains all components required to
implement a secure outlet solution, allowing anyone from anywhere to access
and do business with the enterprise.
Figure 1-7 Logical view of an outlet solution
For more information about outlet architectures, please refer to the redbook
Patterns for e-business: User to Business Patterns for Topology 1 and 2 Using
WebSphere Advanced Edition, SG24-5864. Search for it at the IBM Redbooks
Web site:
http://www.redbooks.ibm.com
Tivoli and IBM provide some of the most widely used products to implement the
e-business infrastructure. These are:
򐂰 IBM HTTP Server provides communication and transaction control.
򐂰 Tivoli Access Manager provides identification, authentication, and
authorization.
򐂰 IBM WebSphere Application Server provides Web application hosting,
responsible for the transformation services.
򐂰 IBM WebSphere Edge Server provides Web application firewalling, load
balancing, and Web hosting at the edge of the network.
1.3.2 Managing outlet applications using Tivoli
Even though the e-business patterns help with designing outlet applications by
breaking them down into functional units that can be implemented in different
tiers of the architecture using different hard- and software technologies, the
patterns provide only limited assistance in managing these applications.
Fortunately, this gap is filled by solutions from Tivoli Systems.
When you design the systems management infrastructure needed to manage the
outlet applications, keep in mind that the determining factor for the application
architecture is the nature of the application itself. This determines the application
infrastructure and the technologies you use. However, it does not do any harm if
a solution architect consults with systems management specialists while
designing the application.
The systems management solution has to play more or less by the rules set up
by the application. Ideally, it manages the various application resources without
any impact on the outlet application, while observing company policies on
networking use, security, and so forth.
Management of outlet applications is, therefore, best achieved by establishing
yet another networking tier, parallel to the application tier, in which all systems
management components can be hosted without influencing the applications.
Naturally, because the management applications have to communicate with the
managed resources, the two meet in the network and on the machines hosting
the various outlet application resources.
Using the Tivoli product set, it is recommended that you establish all the central
components in the management tier and have a few proxies and agents present
in the DMZ and application tiers, as shown in Figure 1-8 on page 19.
Figure 1-8 Typical Tivoli-managed outlet application infrastructure
When the management infrastructure is implemented in this fashion, there is minimal
interference between the application and the management systems. The access
to and from the various network segments is manageable, because the
communication flows between a limited number of nodes using well-known
communication ports.
IBM Tivoli management products have been developed with the total
environment in mind. The IBM Tivoli Monitoring product provides the basis for
proactive monitoring, analysis, and automated problem resolution.
1.4 Tivoli product structure
In this section, we discuss how Tivoli solutions provide comprehensive systems
management for the outlet enterprise and how the IBM Tivoli Monitoring for
Transaction Performance product fits into the overall architecture.
In the hectic On Demand Business environments of today, responsiveness,
focus, resilience, variability, and flexibility are key to conducting business
successfully. Most business processes rely heavily on IT systems, so it is fair to
say that the IT systems have to possess the same set of attributes in order to be
able to keep up with the speed of business. To provide an open framework for
the On Demand Business IT infrastructure, IBM has published the On Demand
Blueprint, which defines an On Demand Operating Environment with three major
properties (Figure 1-9):
򐂰 Integration is the efficient and flexible combination of resources including
people, processes, and information, to optimize resources across and beyond
the enterprise.
򐂰 Automation is the capability to dynamically deploy, monitor, manage, and
protect an IT infrastructure to meet business needs with little or no human
intervention.
򐂰 Virtualization means presenting computer resources in ways that allow users
and applications to get value out of them easily, rather than presenting them
in ways dictated by the implementation, geographical location, or physical
packaging.
Figure 1-9 The On Demand Operating Environment
The key motivators for taking steps to align the IT infrastructure with the ideas of
the On Demand Operating Environment are:
򐂰 Align the IT processes with business priorities
Allow your business to dictate how IT operates, and eliminate constraints that
prohibit the effectiveness of your business.
򐂰 Enable business flexibility and responsiveness
Speed is one of the critical determinants of competitive success. IT processes
that are too slow to keep up with the business climate cripple corporate
goals and objectives. Rapid response and nimbleness mean that IT becomes
an enabler of business advantage versus a hindrance.
򐂰 Reduce cost
By increasing the automation in your environment, immediate benefits can be
realized from lower administrative costs and less reliance on human
operators.
򐂰 Improve asset utilization
Use resources more intelligently. Deploy resources on an as-needed,
just-in-time basis, versus a costly and inefficient just-in-case basis.
򐂰 Address new business opportunities
Automation removes slow response times and human error from the cost equation.
New opportunities to serve customers or offer better services will not be
hampered by the inability to mobilize resources in time.
In the On Demand Operating Environment, IBM Tivoli Monitoring for Transaction
Performance plays an important role in the automation area. By providing
functions to determine how well the users of the business transactions, the J2EE
based ones in particular, are served, IBM Tivoli Monitoring for Transaction
Performance supports the process of provisioning adequate capacity to meet
Service Level Objectives, and helps automate problem determination and
resolution.
For more information about the IBM On Demand Operation Environment, please
refer to the redbook On demand Operating Environment: Managing the
Infrastructure, SG24-6634-00.
As part of the On Demand Blueprint, IBM provides specific Blueprints for each of
the three major properties. The IBM Automation Blueprint depicted in Figure 1-10
on page 22 defines the various components needed to provide automation
services for the On Demand Operation Environment.
Figure 1-10 IBM Automation Blueprint
The IBM Automation Blueprint defines groups of common services and
infrastructure that provide consistency across management applications, as well
as enabling integration.
Within the Tivoli product family, there are specific solutions that target the same
five primary disciplines of systems management:
򐂰 Availability
򐂰 Security
򐂰 Optimization
򐂰 Provisioning
򐂰 Policy-based Orchestration
Products within each of these areas have been made available over the years
and, because they are continually enhanced, have become accepted solutions in
enterprises around the world. With these core capabilities in place, IBM has been
able to focus on building applications that take advantage of these solution-silos
to provide true business systems management solutions.
As already described, a typical outlet application depends not only on hardware
and networking, but also on software ranging from the operating system to
middleware such as databases, Web servers, and messaging systems, to the
applications themselves. A suite of solutions such as the IBM Tivoli Monitoring
for... products, enables an IT department to provide consistent availability
management of the entire business system from a central site and using an
integrated set of tools. By using an end-to-end set of solutions built on a common
foundation, enterprises can manage the ever-increasing complexity of their IT
infrastructure with reduced staff and increased efficiency.
Within the availability group in Figure 1-10 on page 22, two specific functional
areas are used to organize and coordinate the functions provided by Tivoli
products. These areas are shown in Figure 1-11.
Figure 1-11 Tivoli’s availability product structure
The lowest level consists of the monitoring products and technologies, such as
IBM Tivoli Monitoring and its resource models. At this layer, Tivoli applications
monitor the hardware and software, then provide automated corrective actions
whenever possible.
At the next level is event correlation and automation. As problems occur that
cannot be resolved at the monitoring level, event notifications are generated and
sent to a correlation engine, such as Tivoli Enterprise Console. The correlation
engine at this point can analyze problem notifications, or events, coming from
multiple components and either automate corrective actions or provide the
necessary information to operators who can initiate corrective actions.
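
The division of labor between the two levels can be sketched as follows. This
is illustrative Python, not the actual Tivoli resource-model or event
interface; the helper commands are stand-ins:

    import subprocess
    import time

    def is_healthy(service):
        # Stand-in health probe: is a process with this name running?
        return subprocess.call(["pgrep", "-x", service]) == 0

    def restart_service(service):
        subprocess.call(["/etc/init.d/" + service, "restart"])

    def send_event(severity, message):
        # Stand-in for forwarding an event to the central correlation engine.
        print(f"EVENT {severity}: {message}")

    def monitor(service):
        if is_healthy(service):
            return
        restart_service(service)   # automated corrective action, attempted locally
        time.sleep(5)
        if not is_healthy(service):
            send_event("CRITICAL", f"{service} is down and a local restart failed")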
Both tiers provide input to the Business Information Services category of the
Blueprint. From a business point-of-view, it is important to know that a
component or related set of application systems has failed as reported by the
monitors in the first layer. Likewise, in the second layer, it is valuable to
understand how a single failure can cause problems in related components. For
example, a router being down could cause database clients to generate errors if
they cannot access the database server. The integration to Business Information
Services is a very important aspect, because it provides an insight into how a
component failure can affect the business as a whole. When the router failure
above occurs, it is important to understand exactly what line of business
applications are affected and how to reduce the impact of that failure on the
business.
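
A root-cause rule of the kind just described can be reduced to a few lines.
The event names and structure below are invented for illustration:

    # While a router_down event is open for a site, database_unreachable
    # events from that site are treated as symptoms, not new problems.
    open_router_outages = set()

    def correlate(event):
        site, kind = event["site"], event["kind"]
        if kind == "router_down":
            open_router_outages.add(site)
            return "root cause"
        if kind == "router_up":
            open_router_outages.discard(site)
            return "clearing"
        if kind == "database_unreachable" and site in open_router_outages:
            return "suppressed"   # symptom of the open router outage
        return "actionable"

    print(correlate({"site": "outlet-0042", "kind": "router_down"}))
    print(correlate({"site": "outlet-0042", "kind": "database_unreachable"}))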
1.5 Managing outlet applications
As we have seen, providing the services of outlet application systems requires
that basic services such as communications, messaging, database, and
application hosting are functional and well-behaved. This should be ensured by
careful management of the infrastructural components using Tivoli tools to
facilitate monitoring, event forwarding, automation, console services, and
business impact visualization.
However, ensuring the availability and performance of the application
infrastructure is not always enough. Web-based applications are implemented in
order to attract business from customers and business partners whom we might
or might not know. Depending on the nature of the data provided by the
application, company policies for security and access control, as well as access
to and use of specific applications, can be restricted to users whose identity can
be authenticated, for example when using an ATM machine. For other types of
users such as supermarket shoppers and kiosk users, there are no requirements
to authenticate to access the application.
In either case, the goal of the application is to provide useful information or
services to the user and, hopefully, attract the user to return later. The service
provided to the user, in terms of functionality, ease of use, and responsiveness of
the application, is critical to the user’s perception of the application’s usefulness. If
the user finds the application useful, there is a fair chance that the user will return
to conduct more business with the application owner.
The usefulness of an application is a very subjective measure, but it seems fair to
assume that an individual’s perception of an application’s usefulness involves, at
the very least:
򐂰 Relevance to current needs
򐂰 Easy-to-understand organization and navigation
򐂰 Logical flow and guidance
򐂰 Integrity of the information (is it trustworthy?)
򐂰 Responsiveness of the application
Naturally, the application owner can influence all of these parameters. The owner
can modify the design, validate the data, and so forth. However, network or
database latency and the capabilities of the user’s system are critical factors that
affect the time it takes for the user to receive a response from the application. To
avoid this becoming an issue that scares users away from the application, the
application provider can:
򐂰 Set the user’s expectations by providing sufficient information up front.
򐂰 Make sure that the back-end transaction performance is as fast as possible,
at a proven, acceptable level.
Neither of these will guarantee that users will return to the application, but
monitoring and measuring the total response time and breaking it down into the
various components will give the application owner an indication of where a
delay, or bottleneck, might be.
To provide consistently good response times from the backend systems, the
application provider can also establish a monitoring system that generates
reference transactions on a scheduled basis. This will give early indications
about availability and impending capacity problems.
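
A reference transaction can be as simple as a timed HTTP request fired on a
schedule. The following standard-library Python sketch measures the response
time of a hypothetical URL and flags failed or slow probes; the URL and the
two-second threshold are examples only:

    import time
    import urllib.request

    def probe(url, slow_after=2.0, timeout=10.0):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                response.read()
                code = response.getcode()
        except OSError as exc:   # urllib network errors derive from OSError
            return {"url": url, "status": "failed", "error": str(exc)}
        elapsed = time.monotonic() - start
        status = "ok" if code == 200 and elapsed <= slow_after else "slow"
        return {"url": url, "status": status, "seconds": round(elapsed, 3)}

    print(probe("http://outlet-app.example.com/health"))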
The needs for real-time monitoring and gathering of reference and historical
data, among others, are addressed by IBM Tivoli Monitoring for Transaction
Performance, which provides the tools necessary for understanding the
relationships between the various components that make up the total response
time of an application, including a breakdown of the back-end service times into
service times for each subtransaction.
1.6 Meeting future challenges today
To help meet the challenges of staying productive and competitive, it seems
obvious that more and more IT-based solutions will be made available to the
users in the outlets, meaning for instance, retail stores or branch banks and
ATMs. For competitive and operational reasons, it is very likely that these
applications will have to be available on a 24x7 basis. This implies that
application hosting is required to be local to the outlets, partly to provide fast
response times to the local users, and partly because of the tremendous costs of
establishing application and networking infrastructures with the required
bandwidth and a 99.999% availability.
A likely generic solution infrastructure looks like the one depicted in Figure 1-12,
supporting either two- or three-tier implementations.
Figure 1-12 Typical outlet solution infrastructure
Whether or not regional gateways are implemented as suggested in Figure 1-12
is a matter of application design and functionality, combined with the need to limit
costs and avoid introducing bottlenecks.
Today, in the highly competitive marketplace where every dime matters, outlets
are traditionally connected to the enterprise systems through low-cost,
low-bandwidth lines to save costs. This allows the outlet businesses to exchange
data with the outlets outside of peak hours, thus keeping costs for networking to
a minimum. Over time it is more than likely that the distributed solutions will be
tightly integrated on a real-time level with the back-end enterprise systems, which
typically will require establishing more advanced networking solutions than the
ones typically implemented today.
The generic outlet solution presented in this publication assumes that the retail
organization or branch bank has a server in each outlet and a low bandwidth
network connection between the outlet and the enterprise or regional office. More
details regarding the sample outlet solution are provided in Chapter 2, “The
Outlet Solution overview” on page 29.
No matter the actual solution, implementing detached, distributed solutions
introduces a number of specific challenges related to application architecture,
data exchange, and, in particular, solution, infrastructure and platform
management.
To improve solution management, the successful solution would:
򐂰 Support the remote install and uninstall of applications over low bandwidth.
򐂰 Support the seamless upgrade of software across 1000s of outlets by
supporting a staged rollout.
򐂰 Support a standard set of software and remote systems management.
򐂰 Support remote problem determination from the enterprise for a wide variety
of applications running at 1000s of branches and retail outlets.
򐂰 Limit the need for IT skill in the outlet.
To ensure data currency and integrity, the successful solution would:
򐂰 Provide reliable, priority-based, schedule-driven data synchronization
supporting multiple data types and product independent APIs.
򐂰 Support push and pull data transmission and synchronization.
򐂰 Achieve maximum efficiency on data replication.
򐂰 Support the hosting of applications at the outlet due to the typically low
bandwidths between the enterprise and the outlet, but tightly integrate these
technologies to the enterprise.
To support user management, the successful solution would:
򐂰 Permit user registry management to be done either centrally or from the
outlets.
To ensure platform consistency, the successful solution would:
򐂰 Be delivered as an out-of-the-box solution which supports a heterogeneous
development environment.
򐂰 Support device-independent applications so that separate applications do not
need to be written for each device type.
Chapter 2. The Outlet Solution overview
This chapter provides a high-level view of the capabilities, interfaces and the
requirements for the Outlet Solution and the related Outlet Systems
Management Solution.
2.1 Introducing Outlet Inc.
The following introduces a fictitious enterprise, Outlet Inc., which we use as the
target organization and for which we architect, design and implement a systems
management solution.
Outlet Inc. is a major retailer in the geography in which it is located.
2.1.1 The Outlet Inc. environment
Outlet Inc. maintains approximately 1400 outlets scattered throughout its
geography. The outlets vary in size; several outlets are considered primary,
serving as network hubs for multiple suboutlets. These hubs are referred to as
regional outlets.
Regional outlets account for about 30%, and suboutlets for roughly 70%, of
the total number of outlets.
Each regional outlet is directly connected through a high-speed connection to the
enterprise technology center.
The suboutlets are connected to the enterprise through a slow, dial-up VPN
connection routed through the regional outlets.
Outlet server and client systems are moving to a Linux platform, which is the new
platform for the Outlet Solution. Outlet Inc. needs a solution to manage the
distributed environment and the new business application.
All regional and suboutlets will be equipped with a United Linux server. This
server will be exploited by the Tivoli Management Framework.
Network configuration
The Outlet Inc. regional outlets are connected to the primary enterprise
technology center through a 64-kbps frame relay backbone network as depicted
in Figure 2-1.
Figure 2-1 Outlet Inc. network topology
The suboutlets are currently connected to the regional ones with 19.2 Kbps
leased lines. Outlet Inc. is working to improve the network capacity. In the near
future, the 19.2-Kbps lines should be upgraded to 64 Kbps, and our project relies
on these changes. In addition, Outlet Inc. might create connections among the
regional outlets.
The speed upgrade of the leased lines is planned in order to improve the
performance throughout the network, especially for software distribution. For
example, assume that Outlet Inc. needs to distribute a 5-MB application to 200
outlets, setting the maximum number of targets per distribution to 50 and
allowing the distribution to consume no more than 70% of the maximum
bandwidth. Table 2-1 on page 32 shows the result using either a 64-Kbps or a
19.2-Kbps line:
Table 2-1 Network speed comparison for software distribution

Distribution properties                            19.2 Kbps     64 Kbps
Size of application                                5 MB          5 MB
Maximum bandwidth allowed                          70%           70%
Maximum number of targets per distribution         50            50
Number of total targets                            200           200
Distribution completion time for 100% of targets   208 minutes   62 minutes
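
The completion times in Table 2-1 can be approximated with a simple model: the
200 targets are served in four sequential waves of 50, and each wave takes the
time needed to push the 5-MB package over a line throttled to 70% of its
nominal speed. The Python sketch below is our own back-of-the-envelope
approximation, not the distribution tool's actual scheduling algorithm:

    import math

    def distribution_minutes(size_mb, line_kbps, bandwidth_cap, per_wave, total):
        size_kbits = size_mb * 1024 * 8                    # 5 MB -> 40960 kbits
        minutes_per_wave = size_kbits / (line_kbps * bandwidth_cap) / 60
        waves = math.ceil(total / per_wave)                # 200 / 50 = 4 waves
        return waves * minutes_per_wave

    for kbps in (19.2, 64):
        print(f"{kbps} Kbps: {distribution_minutes(5, kbps, 0.70, 50, 200):.0f} minutes")
    # prints roughly 203 and 61 minutes, close to the published 208 and 62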
DHCP
Outlet Inc. uses Dynamic Host Configuration Protocol (DHCP). DHCP provides a
way to dynamically lease, on a temporary basis, an IP address to requesting
systems. DHCP is now a very widely used protocol, and Outlet Inc. uses it
throughout most of its distributed environment.
The DHCP server supporting the systems in any outlet is the main outlet server.
The outlet servers themselves have static IP addresses.
DNS
In Outlet Inc., an internal network guarantees the IP-level connectivity for all the
systems through a LAN/WAN link structure and through private addressing
using the 10.0.0.0 class A network.
Different DNS domain servers manage the associations between hostnames and
IP addresses.
Outlets
Outlets will be equipped with one or more Linux server systems and a varying
number of Linux client systems: up to 150 to 200 clients in the regional outlets,
and 10 to 15 clients on average in the suboutlets.
2.1.2 Outlet Solution features
Besides the Solution Management capabilities of the Outlet Solution, the
following capabilities are implemented through features of the middleware
(DB2, WebSphere) components supporting the Outlet Solution.
Data synchronization
Data synchronization between the outlet and the enterprise needs to be reliable
over a very low bandwidth. Data synchronization should be schedule driven,
with exception management capabilities provided. Synchronization needs to be
supported from the enterprise to the outlet and from the outlet to the enterprise.
The solution must support various data types and use product-independent APIs.
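
To make the requirement concrete, a schedule-driven, priority-ordered
synchronization queue with basic exception management can be sketched as
follows; transfer() is a hypothetical stand-in for whatever transport the
middleware actually provides:

    import heapq
    import itertools
    import time

    def transfer(direction, payload):
        # Hypothetical transport call; assumed to raise OSError on failure.
        print(direction, payload)

    class SyncQueue:
        def __init__(self):
            self._queue = []
            self._tiebreak = itertools.count()   # keeps heap entries comparable

        def schedule(self, due, priority, direction, payload):
            # direction: "push" (enterprise -> outlet) or "pull" (outlet -> enterprise)
            heapq.heappush(self._queue,
                           (due, priority, next(self._tiebreak), direction, payload))

        def run_due(self, now=None, retry_delay=300.0):
            now = time.time() if now is None else now
            while self._queue and self._queue[0][0] <= now:
                due, priority, _, direction, payload = heapq.heappop(self._queue)
                try:
                    transfer(direction, payload)
                except OSError:
                    # exception management: requeue rather than lose the update
                    self.schedule(now + retry_delay, priority, direction, payload)

    q = SyncQueue()
    q.schedule(time.time(), 1, "pull", {"table": "sales", "outlet": "0042"})
    q.run_due()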
User management
The enterprise and the outlet need to be able to control access to their systems.
An employee with a valid ID and password will be granted access while all
unauthorized users will be denied access.
Application services
The solution needs to support an infrastructure that enables the organization to
easily create, run, and maintain applications to improve the customer experience.
2.1.3 Current IT infrastructure
The IT infrastructure for the enterprise-outlet organization has the following
characteristics:
򐂰 Servers in the enterprise and in the outlet
򐂰 An outlet server, both hardware and software, is installed and configured at
the enterprise and shipped to the outlet for setup by a technician
򐂰 Poor connectivity between the outlet and the enterprise, the characteristics of
which are low bandwidth, low reliability, and uncertain latency.
򐂰 Inconsistent levels of technology across the outlets
򐂰 Loosely integrated applications in a heterogeneous environment
򐂰 Numerous devices with device dependent applications
򐂰 Legacy device support
򐂰 Web applications
򐂰 Database management systems
2.1.4 Constraints
Like most retailers, Outlet Inc. expects the solution to be supported on both the
Windows® and Linux operating systems.
Retail POS systems need to be run from the outlet because that is the only way
to guarantee the required 99.999% availability. Today, POS cannot be run from
the enterprise. POS systems need to be hosted locally and integrated with the
enterprise.
The Outlet Solution footprint will need to be reasonable given the cost of
hardware installation across 1000s of outlets and the need to support remote
install across low bandwidths.
The Outlet Solution will likely be constrained by the need to phase the rollout due
to its sheer magnitude, meaning the number of applications in the solution,
number of outlets, number of supported devices, and so forth.
2.2 The Outlet Systems Management Solution
Based on the requirements for operational robustness of the Outlet Solution, the
Outlet Systems Management Solution must be designed to meet the specific
requirements and include the capabilities described in the following sections.
2.2.1 Outlet Systems Management Solution requirements
To manage the environment, including middleware infrastructures, in which the
Outlet Solution is implemented, a dedicated Outlet Systems Management
Solution will be established. This must support, at a minimum, the core functional
requirements:
򐂰 Automated installation and customization
򐂰 Remote debugging
򐂰 Interim fix
򐂰 Monitoring
򐂰 Disconnected operations
Automated installation and customization
A new application has been developed for the outlets. The application has been
through extensive user testing and feedback and has been approved for general
deployment. A six-week launch window has been agreed to by the organization.
The IT Administrator creates a change request and attaches relevant
information to it: the launch window, the impact of the change, and a change
package that includes the application, prerequisite information, and installation,
configuration, validation, and recovery information to be deployed to each
outlet.
The Outlet Systems Management Solution must be able to manage automated
installation and configuration of all application and middleware components to
the outlets, if possible without the outlets incurring any downtime.
The installation and customization process must include prerequisite checking of
the required hardware and software. If the software stack is not at the required
level, the system obtains the required levels of the underlying software before
initiating install of the new application. If a hardware upgrade is required, then the
install is put on hold until the required hardware upgrade takes place. Once the
application is installed, an install and configuration check is executed and the
Enterprise is notified of the successful deployment.
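As a minimal sketch of what such a prerequisite check might look like on an
outlet server, the following hypothetical shell script verifies free disk space and a
required middleware package before allowing the installation to proceed. The
threshold, path, and package name are illustrative assumptions, not part of the
actual Outlet Solution:

   #!/bin/sh
   # Hypothetical pre-installation check; all names and thresholds are examples.
   REQUIRED_KB=512000                            # assume 500 MB free is required
   FREE_KB=$(df -k /opt | awk 'NR==2 {print $4}')
   if [ "$FREE_KB" -lt "$REQUIRED_KB" ]; then
       echo "PREREQ FAILED: insufficient disk space in /opt" >&2
       exit 1                                    # nonzero exit puts the install on hold
   fi
   if ! rpm -q db2-client >/dev/null 2>&1; then  # assumed middleware package name
       echo "PREREQ FAILED: required middleware not installed" >&2
       exit 2
   fi
   exit 0                                        # prerequisites satisfied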
Interim fix
Similar to a new application deployment, an interim fix is rolled out to the outlets.
But instead of a mass upgrade, the decision has been made to apply the
upgrade region by region.
Again, a pre-installation check of the software stack and hardware is performed.
If the hardware is deemed to be below standards, a call is automatically placed to
service personnel to upgrade the machine. Once the hardware and software
prerequisites are passed, the interim fix is applied and a postinstall check can be
executed. A notification of success or failure is sent back to the enterprise.
Monitoring
The Outlet Systems Management Solution will allow the enterprise to continually
monitor the health of the outlet systems and their associated applications.
Throughout normal business hours, metrics regarding the responsiveness of
the key application transactions and the status of registered applications and
middleware components are checked to ensure that they are within predefined
limits. This information indicates that they are operational, available, and
performing according to predefined thresholds. If an application or the outlet is
unable to respond, the enterprise is informed of the outage and an investigation
is performed by the IT Administrator.
Remote debugging
The gift registry application is not able to update the gift list with the latest
purchase information. Using available knowledge and tools, the application tries
to recover from the situation itself. Once it has exhausted all possibilities, an alert
is sent to the enterprise to help resolve the problem. The IT Administrator
receives the alert and proceeds to investigate. Using available tooling, the IT
Administrator tries to search the knowledge base for a known recovery action.
Once the knowledge base has been exhausted, the IT Administrator starts a
manual investigation into the problem. By examining additional logs and making
some correlations against these logs, the IT Administrator is able to identify and
resolve the problem with the gift registry application.
Disconnected operations
Network connection to the enterprise is interrupted during a lightning storm.
During this period, the outlet personnel and customers experience no service
degradation; the recovery action occurs in the background, and the users at the
outlet remain unaware of both the disruption and the recovery activities that
take place.
2.2.2 Outlet Systems Management Solution capabilities
Based on the requirements, the following basic capabilities must be included in
the Outlet Systems Management Solution.
Inventorying
The Outlet Systems Management Solution must facilitate gathering and storing
of configuration information across thousands of outlets and hundreds of
thousands of devices. This information should be presented in a consistent
manner.
Remote installation
Remote installation must be supported across all middleware components and
applications. Support for phased rollout should be provided and the footprint
minimized.
Monitoring
The Outlet Systems Management Solution must enable the customer to monitor
application, middleware, operating system, and hardware metrics across
thousands of outlets and hundreds of thousands of devices. This information
should be presented in a consistent manner.
Event handling
Centralized, correlated event handling with automated recovery facilities must
be provided across all middleware components and applications to allow for
automated problem determination and recovery.
Remote problem determination
Remote, end-to-end, problem determination must be supported across all
middleware components and applications.
Remote upgrade
Remote upgrades must be supported across all middleware components and
applications. This requires production level support of the previous version during
the phased rollout across outlets.
Remote uninstall
Remote uninstall capabilities must be provided and phased scheduling across
outlets supported.
Chapter 3. The Outlet Systems Management Solution Architecture
In this chapter, we discuss the architecture of the systems management solution
for Outlet Inc. Refer to 2.2, “The Outlet Systems Management Solution” on
page 34 for a description of the functional requirements.
The architecture addresses only the challenges and needs of managing the IT
infrastructure and applications, not specific details about applications, user
administration, database management, or infrastructure components such as
network and security.
The architecture focuses on issues common to IT infrastructures in retail and
outlet banking environments. The main components used to establish the core
management functions are:
򐂰 Tivoli Management Framework
򐂰 Tivoli Configuration Manager
򐂰 Tivoli Enterprise Console
򐂰 Tivoli Monitoring
򐂰 Tivoli Monitoring for Web Infrastructure
򐂰 Tivoli Monitoring for Databases
򐂰 Tivoli Monitoring for Transaction Performance
3.1 Outlet Inc. requirements
The intent of Outlet Inc. is to implement a centralized systems management
solution that unifies all the tools and instruments used in the central technology
center. This solution also needs to allow the operators to work with the same
modalities independent of the platforms, without needing to know specific
operating system controls and commands.
The solution must allow all the Outlet Inc. distributed systems to be managed in
the same way. In this first phase, however, the focus is on the establishment of
the Linux Outlet Solution server systems.
Another requirement is the independence of the solution from the hardware and
operating system platforms on the target systems. This independence enables
Outlet Inc. to introduce a single management instrument, and to extend its usage
in the future to other departments, even to data processing solutions in
heterogeneous environments such as Microsoft® Windows 2000, Microsoft
Windows NT® and UNIX.
The definition of new hardware platforms and the renewal of software throughout
the organization introduce new management challenges compared to the
previous host-centric perspective, in particular the control and management of a
large distributed environment.
Based on Outlet Inc. requirements described in 2.2, “The Outlet Systems
Management Solution” on page 34 and the selected Tivoli products, the following
main areas will be addressed by the Outlet Systems Management Solution:
򐂰 Software Distribution
– Distribution and installation of software updates concerning data,
applications and systems throughout the Outlet Inc. infrastructure
– Distribution of homegrown software packages and prepackaged software
based on standard installation tools provided by the operating platform
– Possibility of installing software for 300 outlets in parallel
– Synchronous activation of the distributed packages on every outlet
– Ability to define before and after procedures (install, undo, remove,
commit…)
– Possibility of specifying installation prerequisites
– Management of distribution queues, automatic checkpoint and restart
– Locked file management
– Inventory integration, with distributions conditioned on inventory
database queries
– Distribution on system groups, through a good balance of the distribution
weight (distribution parameter configuration)
– Distinction between software installation and activation phases
– Possibility to verify the status of distribution progress, in terms of systems
updates and distribution results
– Possibility to create packages to be distributed, starting from a client
image, taking a snapshot of its software status before and after a manual
installation, and analyzing the differences
– Software products installation using response files (silent installation)
– Distribution of composite packages to targets including conditional
processing
– Definition of reference models to fix hardware and software states
conditions to be met on group of targets
– Versioning check on software packages defined as versionable, applicable
to products and patches
– Using an execution window to limit the planned action between specific
start and stop dates
– Possibility of retrieving a software package from a CD-ROM or a file server
rather than the source host
– Integration with inventory repository data relative to the status of software
package distributions; synchronization of software package status of
endpoints with the repository catalog
򐂰 Centralized hardware and software Inventory
– Analysis of hardware and software configurations of outlet systems
– Possibility of personalizing the inventory for the management of
nonstandard software
– Queries usage as input of software distribution process
– Integration with the software distribution activity
– Possibility of scheduling hardware and software scanning
– Complex and predefined queries
– Possibility of maintaining an historical record of the information collected
and analyzing that data in a report
򐂰 Alarms and thresholds management
– Focal point of correlation for data collected by setting monitors on
distributed resources, and for alarms
– Alarms management of system resources including disk space, CPU
usage, memory availability, and critical processes
– Possibility of intercepting application alarms, provided the application is
able to write to a system log file
– Collection of application and system alerts in a distributed environment
database belonging to the event management focal points
– Ability to manage data collected in the database and to produce reports
– Correlation of collected information, and historical analysis
– Alarms filtering at remote site
– Possibility of setting and activating automatic actions, both at the center
and at remote sites
򐂰 Support staff capabilities
– Access to the Tivoli Console
– Access to the Health Console
– Access to the MDist2 Console
– Access to the Tivoli functionality according to each user’s roles and
responsibilities.
3.2 Physical architecture for Outlet Inc.
In accordance with the requirements defined in the previous section, Outlet Inc.
will adopt a centralized support infrastructure. As shown in Figure 3-1, all outlets
will be managed from the central technology center.
Figure 3-1 Outlet Inc. Geographical Architecture
The centralized management model will incorporate a centrally structured
support organization, where help desk, network, administration and general IT
staff will be combined into a single unit, based in a central location.
Figure 3-2 Centralized management method
All operations, from distribution of software packages to inventory scanning of
systems, can be performed from this single focal point.
In order to satisfy Outlet Inc. requirements and the centralized management
model, a Tivoli command center will be established within Outlet Inc.
3.2.1 Tivoli Command Center
The Command Center is based on the standard Tivoli three-tier architecture with
two levels of TMRs, Hub and Spoke TMRs, as shown in Figure 3-3 on page 43.
For more details on the three-tier Tivoli architecture, refer to the Tivoli Planning
for Deployment Guide, available online at the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
From the welcome page, navigate to Management Framework → Planning for
Deployment Guide → Components of the basic Tivoli architecture.
This architecture allows for a good performance level of control and
management activities, while assuring the scalability of the Center as
management needs grow, for example, through future expansion of the number
of managed outlets.
In fact, every second-level Spoke (TMR) system is a dedicated Tivoli
Management Region that has management responsibility for a specific subset of
distributed outlets.
Figure 3-3 Outlet Systems Management Solution overview
Upstream, second-level TMRs or Spokes will be interconnected to the first-level
TMR (Hub) in order for the HubTMR to become a unique center capable of
controlling and monitoring all the distributed outlets.
Downstream, the Spoke TMRs are connected to a Tivoli Gateway implemented
on a managed node in each of the regions and outlets. The gateways are
responsible for passing requests and responses back and forth between the
endpoints, systems that are the objects of our management actions, and the
TMR servers. Gateways can be arranged in multi-level hierarchies, allowing a
regional gateway to support multiple outlet gateways.
Placing a gateway in each region and outlet helps minimize WAN dependency
and capacity by:
򐂰 Allowing for selective bandwidth configuration.
򐂰 Minimizing the need to communicate with central components.
򐂰 Preloading software images to remote depots during offpeak hours
򐂰 Installing software from local sources
򐂰 Allowing for detached (without connection to the TMR) software installation
򐂰 Uploading of inventory scan data during offpeak hours
򐂰 Consolidating events and issuing corrective actions locally before forwarding
them to the central event manager.
However, we want to emphasize that any systems management solution that
monitors remote components requires a robust networking infrastructure
supporting timely transmission of events, commands, and data.
The architecture will be based on the following Tivoli products:
򐂰 Tivoli Framework v4.1.1
򐂰 IBM Tivoli Configuration Manager v4.2
򐂰 Tivoli Enterprise Console v3.9
򐂰 IBM Tivoli Monitoring v5.1.2
򐂰 IBM Tivoli Monitoring for Web Infrastructure: WAS v5.1.2
򐂰 IBM Tivoli Monitoring for Databases: DB2 v5.1.0
򐂰 IBM Tivoli Monitoring for Transaction Performance v5.3
It is recommended that all the patches that apply to this environment be installed.
For a complete list of patches to be installed, please refer to the product Release
Notes and to Tivoli support Web site:
http://www.tivoli.com/support/patches
Figure 3-3 on page 43 shows the final configuration for the HubTMR production
environment.
Note: The final number of Spoke TMRs increases with the number of outlets
to be managed.
Core components of the Tivoli Command Center
The key components of the Tivoli Management Environment® framework are:
򐂰 Tivoli Management Region Server
– HUB TMR
– Spoke TMRs
򐂰 Tivoli managed node
򐂰 Tivoli gateway
򐂰 Tivoli endpoint
򐂰 Source host
򐂰 TEC server
򐂰 RDBMS server
򐂰 Console server
򐂰 TMTP server
Tivoli Management Region Server
The Tivoli Management Region (TMR) server is the heart of the Tivoli
Management Environment within Outlet Inc. All managed resources in Outlet Inc.
must be assigned to a TMR server. All of the desired Tivoli management
applications must be installed on the TMR servers. Some products can also
include binaries that must be installed on the Tivoli Gateways as well. The TMR
serves as the central workhorse within the systems management solution that
allows all of the various applications and components to communicate and
interoperate.
All TMRs will be able to perform administration such as definition of managed
resources, managing distribution of monitoring profiles, handling event traffic,
enterprise wide software distribution, etc.
To support Outlet Inc. with the services and applications necessary for enterprise
management, there will be two particular levels of Tivoli Management Regions.
To facilitate easy grouping for these management levels, the solution will be
based upon the Hub-Spoke model. This will involve the creation of a top level
management area that is known as the Hub TMR. Once this has been
established, subordinate management levels known as Spoke TMRs can then
be created to support specific parts of the distributed outlet infrastructure.
The primary role of the Hub TMR server is to facilitate a centralized management
server in its own dedicated TMR. The Hub TMR server directly manages a
limited number of resources so that it can be dedicated primarily to the support of
administrators accessing the management functions in the TME®. In addition, it
is the focal point for enterprise wide activities such as configuration and
distribution of monitoring profiles to any server in the environment. The Hub TMR
server forms the central administration point from which all management
functions are performed within a TME. Administrator’s Desktop and TEC
consoles are defined and configured in this TMR. The central event management
server, the TEC server, is implemented on a dedicated managed node contained
within the Hub TMR.
The Spoke TMRs provide the direct control function to all managed resources in
a specific subset of the Tivoli Management Environment. Generally, Spoke TMRs
are not used as entry points for administrators. For ease of maintenance, the
Spoke TMRs should be placed in the same location as the Hub TMR.
Even if there is no limit on the number of managed nodes that can be defined in
a single TMR, it is not recommended to use a large number of managed nodes.
Experience has shown that 150 managed nodes is the maximum number to use
with Spoke TMRs. In this context, each outlet will have a managed node
installed. Keeping this advice in mind, a region should not span more than 150
outlets. Allowing headroom below that 150-outlet ceiling, roughly 100 to 125
outlets per region, eight to ten Spoke TMR environments will be required to
manage 1000 outlets. If the company outgrows that number, Outlet Inc. can
guarantee the scalability by increasing the number of Spoke TMRs.
Tivoli managed node
A managed node is a dedicated system with a supporting role in the TMR.
Managed nodes are used to host special functions such as repeater, gateway,
TEC Server, or the software library, on dedicated systems in order to distribute
the TMR workload among several systems. One managed node can assume
several functions. The main considerations for using managed nodes are
capacity and performance issues.
Tivoli gateway
One gateway must be established in every region and every outlet. The gateway
is a managed node configured to accept communication requests from the
managed systems, endpoints, and forward requests and responses to the TMR.
In addition, all gateways will be configured as both repeaters and collectors in
order to allow us to control and optimize the management traffic (MDist and
MCollect) to and from the TMR servers.
Tivoli endpoint
The endpoint enables a distributed system in a region or an outlet to be managed
from a central site. Endpoints are installed on every system of the outlet
infrastructure that will be managed from the central location, and will connect to
the local gateway.
To facilitate management of the Tivoli Management Infrastructure itself, all
servers related to all TMRs will have endpoints installed, even if they also
assume the role of a managed node.
Source host
The Source host is a dedicated managed node in the Hub TMR used to store all
the software packages and code libraries to be distributed to the outlets with the
Software Distribution component of Tivoli Configuration Manager. For
performance and capacity reasons, the source host should be implemented on a
dedicated managed node.
TEC server
The Tivoli Enterprise Console (TEC) server is used for reception and handling of
all events in the outlet infrastructure. The TEC server will receive events from
most of the Tivoli components that will be installed. However, IBM Tivoli
Monitoring and related components will be the main contributor. Once these
events have been received, they can be collected, correlated and can have rules
applied to them to allow for automation. This automation can be as simple as
removing duplicate events or running elaborate diagnosis programs, should the
same event occur multiple times within a given time period.
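As an illustration, events can also be posted to the TEC server from the
command line of a managed node. The class, severity, source, and slot value
below are illustrative only; the exact options of the wpostzmsg command are
described in the Tivoli Enterprise Console reference documentation:

   # Hedged example: post a test event to the event server
   wpostzmsg -r WARNING -m "POS queue backlog above threshold" \
       hostname=S012345 TEC_Notice LOGFILE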
RDBMS server
The database server is used to host all the databases used by the Tivoli
infrastructure to keep track of various configuration and status information.
Because the Tivoli Framework allows these databases to exist outside the TME
itself, there is no requirement for establishing the database server as a managed
node. However, you should install an endpoint to allow for monitoring of the
system.
Console server
This server is used to facilitate the WebSphere Application Server based
consoles in the Tivoli environment. Similar to the RDBMS server, the console
server does not have to be implemented on a managed node. In the Outlet
Systems Management Solution, the main function of the console server is to host
the ITM Web Health Console.
TMTP server
The TMTP server is a dedicated server system that performs all the
management, control, and coordination functions for monitoring application
availability and transaction performance. Because the Tivoli Monitoring for
Transaction Performance architecture is detached from the TME, this server
does not need to be implemented on a managed node.
Refer to 3.4.1, “Suggested Tivoli hardware requirements” on page 62 and 4.1,
“The Outlet Systems Management Solution” on page 130 for detailed information
about hardware and software configurations, as well as functional placement
used in the Outlet Systems Management Solution.
3.2.2 TMR connections within Outlet Inc.
In the Outlet Systems Management Solution, the management functions will be
performed at the Hub TMR level.
By creating the TMR connection between each of the Spoke TMRs and the
controlling Hub TMR, most resources can be shared securely and efficiently.
Once the Hub TMR is aware of the resources contained within a Spoke TMR it
can treat these as though they are part of its own Tivoli Management
Environment, thus enabling normal management functions such as software
distributions, inventory scans, and so on to be performed.
To make the Hub TMR aware of the resources within the Spoke TMRs, each
Spoke TMR will have a two-way connection, configured with the simple
encryption level, to the Hub TMR.
These are the resources that will be exchanged between TMRs:
򐂰 Administrator
򐂰 Endpoint
򐂰 Endpoint Manager
򐂰 Gateway
򐂰 Managed Node
򐂰 Policy Region
򐂰 Profile Manager
򐂰 Repeater
򐂰 Top Level Policy Region
This list is subject to change as additional products are installed and more
resources are included along with these products.
Connecting TMRs implies an initial exchange and periodic update of resources
contained in the TMRs. The update process, initiated through the wupdate
command or the Desktop, is a pull operation. The TMR server that needs to be
updated requests the information and modifies its local name registry. When a
new resource is created in a local database, it is not automatically pushed to
other interconnected TMRs. Therefore, it is important to update the resources on
a regular basis.
The frequency should be set according to the stability, or frequency of updates,
of the Tivoli environment. If the environment is stable, then updating the
resources once a day might be sufficient.
Updating the resources from different TMRs at the same time is not
recommended. Therefore, the operation should be serialized across the TMRs.
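For example, a scheduled job on the Hub TMR server could refresh the
exchanged resources from each Spoke TMR one at a time. The region names in
this sketch are illustrative:

   # Hedged sketch: serialized resource updates, run once a day from the Hub
   for region in spoke1-region spoke2-region spoke3-region; do
       wupdate -r All "$region"   # pull all exchanged resource types from the Spoke
   done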
For more information about TMR connections, refer to the Tivoli Planning for
Deployment Guide which is available online at the Tivoli Information Center Web
site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
From the welcome page, navigate to Management Framework → Planning for
Deployment Guide.
3.2.3 Tivoli gateway architecture
Within the Outlet Inc. Tivoli environment there will be thousands of endpoints
located throughout the geographies in which Outlet Inc. is represented. Each of
these endpoints will require some element of management, whether receiving
software distributions or being controlled as part of problem determination or
proactive monitoring. With these criteria, we can introduce the concept of
gateways into the discussion. Gateways assume some of the functions of a TMR
server.
By delegating a share of the management processes to gateways, the TMR
servers are freed to service other managed resources. The endpoint gateways in
Outlet Inc. will perform all communications with assigned endpoints without
requiring additional communications with the TMR server. The gateway invokes
endpoint methods on the endpoints or runs gateway methods for the endpoint.
The Tivoli Gateway functionality can be installed onto any system that meets the
basic criteria for a Tivoli managed node. Installing Tivoli Gateway onto existing
systems is both cost-effective and time saving, although some problems could
arise with badly installed systems or in managing a system that already has other
functionality. Therefore, it is necessary to be careful of the load implications on
an existing system.
Gateway positioning within Outlet Inc.
Outlet Inc. should install a Tivoli Gateway in each outlet, more specifically on
each outlet’s application servers. Outlet Inc. will implement these servers to
support and deploy the solution with the appropriate hardware and software
configuration.
The gateways within Outlet Inc. will be positioned by applying a specific set of
rules. These rules help to determine the best strategic positions for the gateways
to best use the network, locations, and Outlet Inc. infrastructure:
򐂰 The number of endpoints supported by a gateway within Outlet Inc. will be up
to 500.
򐂰 Suboutlets must have the same management functionalities as the primary or
regional outlets.
򐂰 The gateway will be mapped to the outlet application server. In large outlets
of more than 80 clients, Outlet Inc. should consider implementing a dedicated
gateway to improve performance at the application and management levels.
3.3 Logical Tivoli architecture for Outlet Inc.
This section describes the logical architecture of the Tivoli solution in Outlet Inc.
Figure 3-4 on page 50 and Figure 3-5 on page 51 show the logical configuration
of TMR Hub and TMR Spoke regions.
Policy regions for Hub and Spoke TMRs are shown. For each application, there
is a Policy Region in the TMR Hub Region and only the Inventory Profiles belong
to both the Hub and Spoke TMRs.
Figure 3-4 Logical Configuration of TMR Hub
Figure 3-5 Logical Configuration of TMR Spokes
The Spoke TMRs’ policy regions and profile managers, containing subscribers
and Inventory profiles, will be exchanged with the Hub TMR.
A subscriber policy region should be defined for each Spoke TMR.
The application policy regions and profile manager hierarchy should be created
in the Hub TMR. The subscriber to the application profile managers is the profile
manager in the subscriber policy region defined in the Spoke TMRs. This allows
the Hub TMR to be the central point of operations for application functions.
Access to the policy regions and resources will be given to the Tivoli
Administrators, according to their roles.
The Hub TMR will contain the following policy regions and collections:
򐂰 Default Hub TMR Policy Region
򐂰 Application policy region:
– Inventory policy region:
• Query Library
– Software distribution policy region:
• Software packages
– Monitoring policy region:
• Monitoring profiles
򐂰 Spoke TMRs’ Collection:
– Spoke1 Policy Region
– Spoke2 Policy Region
– Spoke3 Policy Region
– SpokeN Policy Region
• Profile managers for subscribers, including servers, workstations, and
groups of subscribers¹
• Inventory Policy Subregion including inventory profile manager and
profiles for hardware and software scan
Experience has shown that it is a good approach not to have more than 500
subscribers in a single profile manager, especially for organizational and control
reasons.
For performance reasons, a good practice is to distribute inventory profiles from
the profile manager of the Spoke TMRs. In this way, the RIM object of each
Spoke TMR can access the RDBMS in parallel. In addition, this would avoid
having the RIM object in the Hub TMR perform all the database updates and,
possibly, become a bottleneck.
For each Spoke TMR, we will create a hierarchy of profile managers for
subscribers, as shown in Figure 3-6.
The profile managers will be populated during endpoint registration.
Figure 3-6 Sample profile manager hierarchy
¹ Several profile managers of subscribers will be defined in order to reflect the logical criteria of
grouping the machines (machine names which begin with the same two characters, managed nodes,
endpoints, and so forth).
The purposes of the profile managers shown in Figure 3-6 are:
򐂰 subs_<Group n>_<SpokeTMR>_pm contains all resources (managed nodes and
endpoints) divided into groups of about 50 outlets each.
򐂰 subs_<Server n>_<SpokeTMR>_pm contains all the managed nodes.
򐂰 subs_<Endpoints n>_<SpokeTMR>_pm contains all the endpoints.
When, for example, a monitoring profile needs to be distributed to all servers
belonging to <SpokeTMR>, you can subscribe the subs_<Server
n>_<SpokeTMR>_pm profile managers to the profile.
When any profile is distributed, all targets that are subscribed to it receive the
profile.
All profile distributions should be performed through the profile manager
hierarchy to achieve better performance and scalability.
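For instance, a monitoring profile can be pushed through the hierarchy by
distributing to the subscriber profile manager rather than to individual endpoints.
The names below follow the conventions described in 3.3.2 but are illustrative,
and the profile type (here Tmw2kProfile, used by IBM Tivoli Monitoring) depends
on the product being distributed:

   # Hedged example: distribute to every server subscribed under Spoke1
   wdistrib -l over_all @Tmw2kProfile:dm_cpu_hub_prf \
       @ProfileManager:subs_Server1_spoke1_pm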
3.3.1 Administering the Tivoli Management Environment
A Tivoli administrator is a person with a network user account assigned to
manage one or more policy regions in the Tivoli environment. Tivoli
administrators can perform system management tasks and manage policy
regions in one or more networks. You can delegate system management tasks to
administrators by:
򐂰 Assigning authorization roles
򐂰 Copying policy regions between Desktops
򐂰 Moving policy regions onto an administrator’s Desktop
򐂰 Moving or copying system resources between policy regions
An account for the initial Tivoli administrator, or root administrator, is created
during the installation of the Tivoli Framework. The installation program also
creates a new root administrator when you install each subsequent TMR. The
root administrator has root authority on UNIX® machines and administrator
privileges on Windows machines. After the Framework is installed, the root
administrator can create other Tivoli administrators and grant them authorization
roles in policy regions. Based on these roles, administrators can perform
assigned system management tasks without being root administrators or
requiring access to the superuser UNIX password or the Windows administrator
password.
This ability to compartmentalize delegated authority gives senior administrative
personnel complete control over who can perform specified operations on
different sets of resources. The root administrator can also grant root authority to
subordinate Tivoli administrators. TMRs can have multiple root administrators.
See the wauthadmin command in the Tivoli Framework Reference Manual for
information about changing and displaying root administrators.
Each administrator has his or her own Tivoli Desktop. When the administrator
logs in to Tivoli, the Desktop associated with that login is displayed. The Desktop
displays the policy regions and resources that the administrator can manage and
the Bulletin Board icon from which the administrator can read messages. The
Desktop can also include the Administrator collection icon and the Tivoli
Scheduler.
Authorization roles
After the Framework is installed, the root administrator can add other Tivoli
administrators. When you create an administrator, you select which policy
regions and other resources the administrator will manage and what
authorization roles the administrator will have for each resource. The
authorization role an administrator has for a resource determines what
management tasks an administrator can perform on the resource. Authorization
roles provide a set of privileges to Tivoli resources, or objects.
Authorization roles are mutually exclusive, not hierarchical. Each role provides its
own set of privileges, but a role does not provide the privileges of any other role.
For example, some tasks require the admin authorization role. An
administrator with the super role is not able to perform these tasks unless the
administrator also has the admin role.
An administrator with super or senior roles in the Administrator collection can
create other administrators and assign them authorization roles.
Examples of Tivoli Administrators
The following sections show some examples of Tivoli administrators that could
be created in our Outlet Inc. environment.
Tivoli TMR Administrator
Created with a name of TMR_Admin, this administrator has the role of ultimate
manager of the Tivoli environment. This role has all the privileges needed to
perform any function within Tivoli and, by extension, on all managed systems.
Care should be taken that only a very limited number of users are given this level
of access, and that these users are knowledgeable in the use of Tivoli. This role
is very powerful and much damage can be done in very little time. At least one
user and one backup user should be assigned this role.
The TMR Administrator will be assigned all roles on a TMR-wide basis:
򐂰 Super
򐂰 Senior
򐂰 Admin
򐂰 User
򐂰 Install_client, install_product
򐂰 Backup
򐂰 Restore
Because these roles will be inherited by all resources within the TMR, this
administrator has unlimited access. The TMR Administrator is assigned to all
notice groups.
Note: Tivoli authenticates its users based on their ID and the machine from
which they log in, then permits access on this basis. It is possible for a UNIX
administrator to su to a User ID that has Senior Tivoli privileges and gain
access to Tivoli. It is recommended that Outlet Inc. limit root access to the
TMR Server to a select group of users. An equally effective approach would
be to limit root access on the TMR server and only allow login IDs for Tivoli
super administrators to this server. The super ID could then be mapped to
USER ID@TMR server and permit each Senior Tivoli Administrator his or her
own ID.
Tivoli resource administrators
The resource administrators perform system management functions within the
Tivoli Management Environment. This administrator role will be allowed to:
򐂰 Manage specific policy regions
򐂰 Add managed nodes
򐂰 Develop and schedule tasks
򐂰 Create and distribute profiles
򐂰 Manage notices
Tivoli resource administrators are excluded from tasks that modify the TMR, such
as installing Tivoli products, applying patches to the Tivoli software, or performing
other TMR-specific maintenance.
Tivoli user administrators
The User_Admin role provides limited read-only access within the TMR. This
administrator should only be allowed to browse information and is not intended,
despite the name, to perform any administrative functions for users.
Access and authority is limited to those areas needed to perform only those job
functions within Tivoli. Primarily, User_Admin access will be limited to being able
to display the TEC console, Mdist console, and read notices.
Roles
As additional subprojects are implemented within the Outlet Inc. environment,
additional groups will require controlled access to Tivoli. Therefore, additional
roles might have to be developed, and existing roles might have to be modified to
include the additional needed functions. As new roles are defined, or old ones
are modified, this should be documented. Care needs to be taken that all existing
administrators are modified to match the newer definitions. As a
recommendation, it is easiest to develop all roles using scripts that can be
modified, then reexecuted to develop new administrators.
General guidelines for roles
򐂰 All roles map across TMRs, except for super, which maps to user.
򐂰 Authorization roles are mutually exclusive and not hierarchical. For example,
you do not inherit user authorization from super. You must be assigned both.
򐂰 Authorization roles are inherited from TMR to policy region, from parent policy
region to subordinate policy region, and from policy region to objects
contained within a resource. Authorization roles provide a set of privileges to
the TME resources or objects contained in the resource.
򐂰 An administration role applies to all objects contained in a resource.
򐂰 Administrators can only assign roles that are equal or subordinate to the ones
they possess.
򐂰 TEC event groups are assigned roles on a console (user) by console basis.
To acknowledge or close an event, the user and admin roles must be
assigned to the event group.
򐂰 Keep models as high-level as possible and still maintain the needed
restrictions and safeguards. For example, if you can assign a user at the TMR
level, assign at this level and not below.
򐂰 Create typical administrators and replicate from there.
򐂰 When modifications have to be made, make them to the model and
regenerate the users from there.
򐂰 Develop scripts to generate the model administrators and then reuse them to
create the individual administrators.
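A minimal sketch of such a script, assuming the wcrtadmin options documented
in the Tivoli Framework Reference Manual and an illustrative input file of login
and policy region pairs, might look like this:

   #!/bin/sh
   # Hedged sketch: regenerate resource administrators from a model file.
   # Each input line holds "login policy_region"; all names are illustrative.
   while read login region; do
       wcrtadmin -l "${login}@hubtmr" \
           -r "@PolicyRegion:${region},admin:senior" \
           "${login}_hub_adm"
   done < model_resource_admins.txt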
General administrator recommendations
For an organization which has a large number of multidisciplined administrators
and heterogeneous system resources such as Outlet Inc., the roles assigned to
each administrator must be clearly defined in accordance with the job
responsibility of the specific administrator and the administrative policies of the
organization. In line with these policies, the roles assigned to administrators
must allow them access to the resources for performing their normal
administrative tasks, but restrict them from gaining access to resources for which
they should not have access.
To match the administrators’ roles to their administrative needs, it is important to
understand the tasks and resources that administrators will need to manage.
From these, we will determine the roles that specific administrators should be
assigned.
In an organization like Outlet Inc. it is more than likely that there will be a large
number of local administrators and, accordingly, a large number of Tivoli
administrators.
This number affects the performance of the TMR server because an object is
generated within the Tivoli database each time an administrator is created.
Where an operation requires cross-referencing of database objects against each
administrator in the TMR, depending on the processing power of the TMR server,
these Tivoli operations would require a longer period of time to execute. Thus,
even if the system hosting the TMR server in Outlet Inc. seems to have good
processing power, you should be careful about defining a large number of Tivoli
administrators. A large number of concurrent Tivoli Administrators with Tivoli
Desktops open will cause a large number of daemons or processes to be
spawned; therefore, performance could decrease.
Avoid using default root user
A default root user is created for each TMR server that is installed. This
default root user will have full access to all resources within the TMR. It would
have super, senior, admin, user, install_client, and install_product TMR roles.
As a precaution, this default administrator account should not be used to perform
any day-to-day operations in a TMR. The primary reasons for this are:
򐂰 There is no way of keeping an audit trail of who has been performing
operations using this administrator account.
򐂰 This account has full access on all resources within a TMR. Some tasks that
have to be performed using this administrator are not reversible. They also
cannot be undone by another user.
Single administrator for multiple users
It is possible to create a single administrator for multiple users performing
similar jobs; for example, release management support personnel whose job
function might entail distributing software to remote servers. In this case, they
would require similar authority on resources.
Using the Set Login → Add Login Name function of the Tivoli administrator,
multiple users can log on as the same administrator and use its Desktop to
perform their administrative tasks. When a user needs to be added to or
removed from specific job roles, simply add or remove entries in the Current
login names list (Administrator → Edit Logins) for the specific Tivoli
administrator.
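The same effect can be achieved from the command line with the wsetadmin
command; assuming its -l option for adding login names, as documented in the
Tivoli Framework Reference Manual, a hypothetical invocation is:

   # Hedged example: let a second user log in as the shared administrator
   wsetadmin -l relmgr2@hubtmr swd_hub_adm   # both names are illustrative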
Having multiple users assigned to one administrator does have its drawbacks.
First, only one TEC console can be opened for a single administrator and its
Desktop, which means only one user can have the TEC console open at a time.
Second, the subscription notices will also be shared. This means that if one user
closes a notice after reading it, other users will not be able to read the same
notice.
Therefore, the decision on whether to use one user for one administrator or
multiple users for one administrator depends on the needs and requirements for
users of the same group.
The following questions can help you decide whether to use multiple users for
one administrator or one user for each administrator:
򐂰 Do users need to read notices from the Tivoli Desktops?
򐂰 If users will be reading notices from the Tivoli Desktop, are missed notices an
issue for the user?
򐂰 Do users’ job descriptions require them to monitor events through the TEC
console?
򐂰 If users are monitoring events through the TEC console, do these users all
have to have the TEC console opened on their workstation?
If the answer to all of the questions above is no, then multiple users for one
administrator can be used. However, if the answer to any of these questions is
yes, then one user for each administrator should be used.
Tivoli Desktop considerations
Usage level of the administrator desktop should be carefully considered in large
environments where a large number of administrators are expected to use Tivoli.
For every open window in the Tivoli Desktop, the TMR opens a new network
thread (TCP/IP connection). The use of the navigator is recommended because
it reduces the number of open windows.
An example of network traffic generated by the Tivoli Desktop is 595 KB as a
result of opening a Desktop and three policy regions, one of which contains 1000
objects.
Tests performed in a laboratory environment and IBM client experiences suggest
starting with no more than 30 concurrent active Desktops, regardless of how the
Desktops are to be used: TEC consoles, remote control, and so forth.
Particularly for the TEC consoles, the above consideration is affected by the
following factors we have not yet examined:
򐂰 Number of events received per second
򐂰 Number of events maintained in the database
򐂰 Number of event groups and filters configured
Outlet Inc. scenario
Outlet Inc. will have various classes of operators who will require access to the
Tivoli environment. Operators will have access through the Tivoli Desktop. Some
of the typical operators requiring access are:
򐂰 Help desk operators (first degree support level)
򐂰 Software distribution administrators
򐂰 Technical support
򐂰 Linux support
򐂰 Outlet Solution administrators
For each of these operator classes, two specific Linux users are defined on the
TMRs. Outlet Solution administrators are the only ones whose logins, that is, the
Linux user and password, are defined in the Spoke TMRs. All other users are
defined on the Hub TMR only, and in the current login names of the Tivoli
administrator definition, the Hub TMR managed node is specified
(LoginName@HubTMR).
Example scenario
The following figure shows a hypothetical organization of Tivoli Desktops
mapped to Outlet Inc. requirements:
Figure 3-7 Tivoli Desktops logical configuration
The users in each of the four groups in Figure 3-7 perform similar jobs. We can
set up multiple users for one administrator, ensuring that all users within a group
log in to the same Tivoli Desktop with the same Tivoli administrator.
It is recommended that you have a low number of TEC consoles opened
simultaneously because there is a high rate of information exchange between
the Event Console, the Event Server, and the database.
3.3.2 Tivoli naming conventions for Outlet Inc.
The Tivoli Management Environment is based on an object-oriented data
model. As such, each instance of an object in a TMR must be uniquely
identifiable. Past experience has shown us that it is useful when examining
traces or log files to immediately be able to determine what type of object is
being examined.
Within Outlet Inc., there will be many Tivoli products being implemented in many
parallel projects. This means producing a unique, accurate, flexible and
meaningful naming convention is critical.
This section provides a categorization of all resources that will be implemented
within the Outlet Inc. environment.
Table 3-1 is an easy reference table for all object types and naming conventions:
Table 3-1 Suggested naming conventions for Tivoli objects

  Resources                              Name Conventions
  Policy Region                          name_TMR_pr
  Profile Managers                       type_name_TMR_pm
  Profile Manager Subscribers            subs_name_TMR_pm
  Dataless Profile Manager Subscribers   subs_name_TMR_pm
  Profiles                               type_name_TMR_prf
  Gateways                               gw_hostname (S0nnnntt)
  Managed Nodes                          hostname (S0nnnntt)
  Endpoints                              hostname (W0nnnntt)
  Task Libraries                         name_TMR_tl
  Tasks                                  name_TMR_tsk
  TEC Console                            name_tec
  TEC Event Server                       EventServer (default for TMR)
  File Packages                          swd_name_TMR_fp
  Administrators                         name_TMR_adm
  Jobs                                   name_TMR_job
  Indicator Collections                  name_TMR_ic
  Collections                            name_TMR_col
Type can be one of the following:
򐂰 swd: software distribution
򐂰 inv: inventory
򐂰 dm: monitoring
򐂰 subs: list of subscribers
򐂰 name: name of the TMR, such as Hub or Spoke TMRs, for example:
– hub
– spoke1
– spoke2
– spoke10
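For example, following these conventions, a software package (file package) for
a point-of-sale application defined in the Hub TMR would be named
swd_posapp_hub_fp, its profile manager swd_posapp_hub_pm, and the
corresponding subscriber profile manager in the first Spoke TMR
subs_posapp_spoke1_pm. The application name posapp is purely illustrative.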
3.4 Setup and configuration planning
The purpose of this section is to provide comprehensive and detailed set-up and
configuration planning information about hardware and software, the behavior
and management of endpoints within the environment, and optimized distribution
of data and packages.
With this information, you can get a clear understanding of how the critical
systems are configured and how the environment will operate and need to be set
up for optimal performance and stability.
3.4.1 Suggested Tivoli hardware requirements
The following hardware requirements are for the gateway and endpoint systems
in the Outlet Systems Management Solution. Remember that the Tivoli Gateway
component will be installed and configured on a dedicated server in every outlet,
while the endpoint component will be installed on all servers in the Remote Store
infrastructure that will be managed.
The Tivoli Gateway must be installed on a managed node. The managed node
can be considered a distributed server, taking on the responsibilities of the TMR
server. In the case of the gateway, the scope of responsibilities relates to
communicating with the local endpoints.
In the following sections we highlight both Tivoli Release Notes and suggested
requirements, based on experiences from other large customers similar to Outlet
Inc. In both cases, these requirements only refer to the Tivoli applications; they
do not include other considerations about concurrent applications executing on
the same systems.
TMR Server and managed nodes hardware requirements
Table 3-2 outlines the hardware requirements for the central Tivoli systems, the
TMR Servers and managed nodes.
Table 3-2 Hardware requirements for TMR Servers and managed nodes

  Environment     Release Notes                Suggested Values
  TMR Servers     Pentium® Processor           Pentium III Processor
                  256-512 MB RAM               512 MB RAM
                  200 MB Hard Disk available   300 MB Hard Disk available
                  (without depot)
  Managed nodes   Pentium Processor            Pentium III Processor
                  256-512 MB RAM or more       512 MB RAM
                  200 MB Hard Disk available   512 MB - 2 GB Hard Disk or
                  (without depot)              more available (with depot)
Note: Tivoli can take full advantage of multiprocessor machines. The code
can run effectively in parallel.
In the Outlet Inc. environment, the central managed nodes assume different
roles, and, thus, the hardware requirements will vary. The suggested
configurations apply to the common TMR Server and managed node, with the
note that systems that take an active part in the software distribution process,
the source host and active repeaters, need extra disk space for holding software
packages as they are moved to the endpoints.
For specific details on requirements for supporting servers, such as the console
and RDBMS servers, consult the release notes for the specific components.
Gateway hardware requirements
Table 3-3 outlines the hardware requirements for the Tivoli Gateways.
Table 3-3 Hardware requirements for gateways

  Environment                Release Notes                Suggested Values
  Small / Medium Outlet      Pentium Processor            Pentium III Processor
  (up to 80 Clients)         64 MB RAM                    128 MB RAM
                             200 MB Hard Disk available   1 GB Hard Disk available
                             (without depot)              (with depot)
  Regions and Large Outlet   Pentium Processor            Pentium III Processor
  (more than 80 Clients)     64 MB RAM or more            256 MB RAM
                             200 MB Hard Disk available   1.5 GB Hard Disk or more
                             (without depot)              available (with depot)
In the Outlet Inc. environment, the Application Server and the Management
Server (gateway) are both on the same system. In large outlets, we suggest
considering a dedicated gateway to decrease the processor and network
workload.
Endpoint hardware requirements
Table 3-4 outlines the hardware requirements for the Tivoli endpoints.
Table 3-4 Hardware requirements for endpoints

  Environment                     Release Notes               Suggested Values
  Small / Medium / Large Outlet   Pentium Processor           Pentium III Processor
                                  32 MB RAM                   32 MB RAM or more
                                  30 MB Hard Disk available   30 MB Hard Disk available
                                  (cache included)            or more (cache included)
Note: RAM value is considered as an absolute value, not a sum of the values.
3.4.2 Tivoli repeater architecture
Tivoli provides the Multiplexed Distribution (MDist) service to enable
synchronous distributions of large amounts of data to multiple enterprise targets.
The MDist service is used by a number of Tivoli profile-based applications, such
as Tivoli Software Distribution, to maximize data throughput across large,
complex networks. During a profile distribution to multiple targets, MDist sets up
a distribution tree of communication channels from the source host to repeaters
to targets. MDist limits its own use of the network, as configured through repeater
parameters, to help prevent intense network activity that can stress network
bandwidth for periods of time.
The following section covers the proposed architecture, tuning parameters and
configuration for repeaters within the Outlet Inc. Tivoli Management
Environment.
By default, the TMR server acts as the repeater distribution server for all targets
in the TMR.
Within the HubTMR there will also be a software repository used to store all
applications ready for distribution, which will also be a source for MDist
distributions.
It is proposed that within the Outlet Inc. environment, both Hub and Spoke TMRs
will be defined as repeaters. Every Spoke TMR will be configured to distribute
packages to 40 outlets in parallel. The gateways at the outlet level, both in
primary regions and subsequent outlets, will themselves be configured as
repeaters for the endpoints below them.
It is important to read and understand the gateway architecture and MDist
principles, before reading the rest of the MDist architecture section.
Repeater guidelines
Whether your repeater sites are gateways or managed nodes, knowledge of the
organization’s underlying network topology and network performance
characteristics is required for proper configuration of repeater sites.
The following selection guidelines apply to most environments:
򐂰 If you have a slow network link, such as a WAN connection between subnets,
install a repeater site on each side of the connection.
򐂰 If a machine is often a source client for software distribution, make it a
repeater to enhance performance.
򐂰 If a repeater range contains too many clients in multiple subnets, you can
improve performance by adding a repeater to each subnet.
򐂰 Do not use an unreliable client as a repeater site.
These guidelines can be applied in the future when selecting where repeater
sites in Outlet Inc. should be positioned.
Each repeater can be configured and tuned to match these guidelines using the
wrpt command from a command line. The next section describes the
parameters and gives a brief explanation of their use in the Tivoli Management
Environment.
Repeater parameters and settings
The Tivoli Management Framework provides essentially two implementations of
Multiplexed Distributions, MDist. These are often referred to as MDist1 and
MDist2, where MDist2 provides better efficiency and throughput and more
detailed tuning options. As such, MDist2 is the extended version of MDist1.
For both implementations of MDist, the wrpt command is used to configure and
tune the repeater setup. MDist2 extensions are controlled via the wmdist
command.
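For example, distribution activity and MDist2 repeater settings might be
inspected and tuned as follows. The options shown are assumptions to be
verified against the Framework Reference Manual, and the repeater name is
illustrative:

   wmdist -l                                   # list submitted MDist2 distributions
   wmdist -s gw_S012345 max_sessions_high=40   # assumed MDist2 repeater tuning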
MDist1
Following are the repeater parameters for MDist1.
򐂰 Network Load (net_load)
The net_load parameter sets the ceiling on the network load, in kilobits per
second (Kbps), that each transfer can add to the repeater site’s network.
If you set this parameter to 50, each transfer will limit itself to writing no more
than 50 Kb in a one second period. There is no connection between
independent transfers. As a result, if four transfers are started by four different
applications at the same time, the total network load could be four times the
set value.
Conversely, if the repeater site has two network interfaces, the load
contributed to each network could be half the given value.
You can specify a negative value for net_load to enable this parameter for
each target system, rather than for the entire distribution. For example, if you
set net_load to -25, data transfers to each target are limited to writing no more
than 25 Kb/sec.
򐂰 Network Spacing (net_spacing)
The net_spacing parameter, expressed in milliseconds, specifies a delay to
insert between each write to the network. For most networks, this value
should be zero, indicating no delay. If network monitoring shows that MDist
transfers are causing abnormally high collision rates, the net_spacing
parameter can be used to space the writes more evenly.
Under default settings, using the wrpt -t command to list repeater settings
does not show a value for the net_spacing parameter. A value for
net_spacing is only displayed if you change its default configuration for a
repeater.
򐂰 High-level TCP Time-out (stat_intv)
The stat_intv parameter specifies the time-out value for managed nodes and
endpoints, after which a blocked connection will be considered dead. Some
systems with unreliable networks experience bottlenecks that go undetected
by the operating system TCP/IP stack. The high-level TCP time-out forces a
time-out error on these connections so that the distribution can proceed.
򐂰 Maximum Simultaneous Connections (max_conn)
The max_conn parameter sets the maximum number of TCP connections
that can be open at the same time. Theoretically, a repeater can distribute in
parallel to any number of target nodes. This parameter sets a limit on the
maximum number of connections; therefore, the repeater can distribute in
parallel to subsets of the clients in its range. This prevents the network
connections from exhausting system limits.
If the number of clients in the range is three times the setting of the maximum
connections, then there will be three blocks of parallel transfers, each of
maximum connections. If the number of target clients is greater than the
number of maximum connections, then the repeater must store all of the data
so that it can be sent to the remaining targets after the initial set completes.
The repeater stores the data in memory, up to the value specified for the
mem_max parameter. If the data exceeds mem_max, then the repeater
stores the remainder on disk, up to the value specified for the disk_max
parameter. This means that
if mem_max plus disk_max is not large enough to hold the largest possible
distribution, then you need to configure additional repeater managers.
򐂰 Maximum Memory (mem_max) and Maximum Disk (disk_max)
The mem_max and disk_max parameters specify values for maximum
amounts of memory and disk space system resources that can be allotted to
a repeater during distribution. If target clients accept data at the same rate
that the repeater sends data, the repeater consumes very little memory. If the
repeater has one slow target, the repeater attempts to keep the data flowing
to the fast targets. To service both slow and fast targets, the repeater must
temporarily save data for the slow targets.
The repeater first consumes up to the maximum amount of memory. When
the maximum amount of memory has been used, the repeater starts paging
data to disk. The repeater will consume up to the maximum disk space
specified in disk_max. When both the memory and disk limits are reached,
the repeater stops receiving data. The stop appears as a slow target to the
parent repeater.
Stops can ripple all the way up the distribution hierarchy to the source, which
then must wait.
We suggest setting the sum of these parameters at least equal to the
maximum size of the package to be distributed, on every repeater involved in
a distribution of that package, that is, the Hub TMR, Spoke TMRs, and outlet
gateways.
򐂰 Paging Space, Disk Usage Rate, and Disk Delays
The disk_dir, disk_hiwat, and disk_time parameters specify additional
parameters about temporary paging space, speed with which disk space is
consumed, and time delays between disk allocations. When a repeater
exhausts the maximum memory allocated and starts paging to disk, it slows
down when it crosses the high-water disk level. Above this threshold, it will
only consume pages of disk space at a specified interval. The paging file is
created in an indicated directory.
Table 3-5 outlines the recommended MDist1 values for the various repeater
parameters for the different types of repeaters in the Outlet environment:

Table 3-5 Recommended MDist1 repeater settings

Parameter     HubTMR               SpokeTMR             Gateway
net_load      Default              -5 KB/s              500 KB/s
net_spacing   Default              Default              Default
stat_intv     Default              Default              Default
max_conn      40                   40                   20
mem_max       50000                50000                50000
disk_max      100000               100000               100000
disk_dir      /Hostname/disk_dir   /Hostname/disk_dir   /Hostname/disk_dir
disk_hiwat    80000                80000                80000
disk_time     Default              Default              Default
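As an illustration, the Gateway column of Table 3-5 could be applied from the
command line. This is a minimal sketch, assuming a hypothetical repeater named
outlet-gw1; the exact wrpt keyword syntax should be verified against the
Framework reference:

# Apply the recommended MDist1 gateway settings (hypothetical repeater name)
wrpt outlet-gw1 net_load=500 max_conn=20 mem_max=50000 \
    disk_max=100000 disk_dir=/outlet-gw1/disk_dir disk_hiwat=80000
# List the current repeater settings to verify the change
wrpt -t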
MDist2
Tivoli Framework 3.7 introduced MDist2, an evolution of MDist1 (which remains
available unchanged). MDist2 enables control of the total amount of resources
used by a repeater. This makes distributing data fast and efficient, improving
performance and throughput. This section describes how to configure repeater
parameters for MDist2.
You can set the network load, target network load, number of priority
connections, packet size, debug level, maximum memory, and maximum disk
space. You also can set intervals for how often the database is updated and the
frequency and length of time that a repeater retries unavailable or interrupted
targets.
MDist1 and MDist2 share the same repeater hierarchy but have separate
repeater configuration settings:
򐂰 MDist1 parameters apply per distribution.
򐂰 MDist2 parameters apply to the repeater as a whole. These parameters allow
control of total resources during simultaneous distributions.
Repeater settings are set with the wmdist command:
wmdist -s repeater_name [key=value]
Following are the main repeater parameters for MDist2.
򐂰 Network Load and Target Network Load (net_load, target_netload)
Use the net_load parameter to specify the maximum amount of network
bandwidth that a repeater is allowed to allocate. This value is shared among
all active connections. For example, if there are 10 open connections, with
net_load=500 (default value), each connection receives 50 KBps. MDist2 can
use these 10 open connections for one or more distributions.
The target_netload parameter, used in conjunction with net_load, allows you
to specify the maximum amount of network bandwidth that a repeater can
send to an individual target. For example, suppose net_load is 500 KBps and
target_netload is 100 KBps (default value). If there is only one connection, the
target_netload value takes effect and limits the connection to 100 KBps.
However, if there are 10 connections, the net_load value takes precedence
and limits each connection to 50 KBps. Because 50 KBps is less than the
target_netload of 100 KBps, the target_netload limit is also satisfied.
Both net_load and target_netload can be active at the same time. This is
different from the MDist service’s negative net_load value, which offers no
control on the total bandwidth.
򐂰 Time-out Values (send_timeout)
Use the send_timeout parameter to set the time-out between network writes;
that is, a target is allowed send_timeout seconds to receive a packet from the
network. If a time-out occurs, the distribution remains in the repeater queue,
and the repeater attempts to resend it as set by conn_retry_interval. The
default value is 300 seconds. This parameter replaces the gateway
session_timeout parameter used by MDist.
In comparison, the execute_timeout parameter is the time-out between when
all the data is sent and the method returns. For example, applications can run
scripts after receiving data, but before returning results to the repeater. The
default value is 600 seconds. This parameter replaces the final_timeout
parameter used by MDist.
򐂰 Maximum Priority Connections
Use the max_sessions_high, max_sessions_medium, and
max_sessions_low parameters to control the number of connections that the
repeater can open for each priority (high, medium, or low). If no connections
are available for a given priority, the repeater tries to borrow a connection
from a lower priority. For example, if max_sessions_high=3 and all three
high-priority connections are in use, the repeater will try to use an available
lower-priority connection.
The sum of high, medium, and low connections is the total number of
available connections.
Repeater queues operate on a first in, first out (FIFO) basis. A repeater
assigns a distribution to its queue based on which distribution arrives first, not
when the distribution was submitted. For example, suppose that a medium
priority distribution is using all the medium and low priority connections. If a
high priority distribution arrives, and all high priority connections are in use,
the repeater can use lower-priority connections for this distribution as soon as
a medium or low priority connection finishes. The original medium priority
distribution is blocked until the high priority distribution has finished using the
medium and low priority connections.
These parameters replace the wrpt max_conn command used by MDist.
򐂰 Maximum Memory (mem_max) and Disk Space (disk_max)
Use the disk_max and mem_max parameters to specify the amount of
memory and disk space allocated to the repeater depot. These values are
shared among distributions, unlike the MDist wrpt keyword parameters,
which are targeted per distribution. The default size for disk_max is 500 MB.
This is the total size of the depot. The depot contains both permanent and
temporary distributions. All simultaneous distributions must fit within the depot
space.
If the disk_max value equals zero, no limit is enforced. The depot cannot
exceed the size of the disk. Every distribution flowing through a repeater is
stored at least temporarily in the depot. The depot must be large enough to
hold the largest distribution that you expect to distribute.
The mem_max parameter specifies the amount of memory used to buffer
data being sent to targets. This improves performance by reducing the
number of disk accesses to the depot. The memory is shared among all
active distributions. The default size for mem_max is 64 MB. This is a
memory cache used for all active distributions.
These parameters replace the wrpt mem_max and wrpt disk_max commands
used by MDist.
򐂰 Status Notification Interval (notify_interval)
Use the notify_interval parameter to specify the frequency, in minutes, with
which a repeater reports status. As targets finish, their results are buffered by the
repeater. When the length of time set by notify_interval has elapsed or all the
targets of this distribution have finished, the results are sent to the application
using MDist2. In turn, the application notifies the distribution manager to
update the database. The notify_interval parameter default value is 30,
meaning that each repeater sends status every 30 minutes. Status
information is sent to the database infrequently to cut down on network traffic
and overhead on the TMR server.
򐂰 Connection Retry Interval (conn_retry_interval)
The conn_retry_interval parameter sets the frequency in seconds that a
repeater retries unavailable or interrupted targets. The default value is 900
seconds. Between a gateway and its endpoints, the retry mechanism is to
retry every conn_retry_interval seconds, up to the number of seconds set by
retry_ep_cutoff. Between repeaters, the retry mechanism is to retry every
conn_retry_interval seconds until the distribution's deadline is reached.
Because repeaters do not log in to each other, MDist2 performs an explicit
retry to detect when an unreachable repeater becomes available.
򐂰 Retry Endpoint Cutoff Interval (retry_ep_cutoff)
Use the retry_ep_cutoff parameter to specify the time in seconds that you
want the repeater to continue retrying an unavailable or interrupted endpoint.
A repeater retries unavailable or interrupted targets until the distribution’s
deadline is reached. For example, the repeater only knows that an endpoint is
unavailable when it tries to contact the target (when a connection becomes
available). This information is sent to the distribution manager only after the
length of time set by notify_interval.
If an endpoint is down, the repeater skips the endpoint, leaves the information
in its queue, and waits for the target to log in. The repeater also retries the
target every conn_retry_interval seconds (default value is 900 seconds), up
to the number of seconds set by retry_ep_cutoff (default value is
7200 seconds).
After the number of seconds set by retry_ep_cutoff is reached, the item
remains in the queue until the distribution deadline, waiting for the endpoint to
log in.
򐂰 Packet Size (packet_size)
Use the packet_size parameter to specify the number of bytes written to the
network during each send. The packet size determines how many bytes are
written to the network before the repeater pauses to enforce bandwidth
control. For very slow networks, it is useful to lower the packet size to prevent
flooding the network for long periods of time.
Table 3-6 on page 71 outlines the recommended values for the various MDist2
repeater parameters for the different kinds of repeaters in the Outlet Systems
Management Solution.
Table 3-6 Recommended MDist2 repeater settings

Parameter             HubTMR               SpokeTMR             Gateway(s)
rpt_dir               /opt/tivoli/rpt_dir  /opt/tivoli/rpt_dir  /opt/tivoli/rpt_dir
permanent_storage     FALSE                FALSE                TRUE
disk_max              500 MB               500 MB               1 GB
max_sessions_high     5                    10                   5
max_sessions_medium   10                   20                   10
max_sessions_low      40 (Default)         10                   5
mem_max               64 MB                64 MB                64 MB
send_timeout          300 s                300 s                300 s
execute_timeout       600 s                600 s                600 s
notify_interval       Default              15 min               10 min
conn_retry_interval   1800                 900                  900
retry_ep_cutoff       7200                 7200                 7200
net_load              500 KB/s             500 KB/s             500 KB/s
target_net_load       Default = 0          5 KB/s               0 (Default)
packet_size           16                   16                   16
debug_level           3                    3                    3
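As an illustration, the Gateway(s) column of Table 3-6 could be applied with the
wmdist -s syntax shown earlier. This is a sketch with a hypothetical repeater
name; the units wmdist expects for each value (for example, whether disk_max
and mem_max are given in MB) should be confirmed in the Framework reference:

# Apply the recommended MDist2 gateway settings (hypothetical repeater
# name; value units are assumptions to be confirmed before use)
wmdist -s outlet-gw1 permanent_storage=TRUE disk_max=1000 \
    max_sessions_high=5 max_sessions_medium=10 max_sessions_low=5 \
    mem_max=64 send_timeout=300 execute_timeout=600 \
    conn_retry_interval=900 retry_ep_cutoff=7200 \
    net_load=500 packet_size=16 debug_level=3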
MDist2 provides a console for controlling distributions. An example
showing the Node Table is depicted in Figure 3-8.
Figure 3-8 MDist2 GUI Node Table dialog
The MDist2 console can be installed on a TMR server or a managed node.
Following are some examples showing how to control MDist2 distributions
from the command line:
򐂰 List distributions:
wmdist -l [-I dist_id] [-a] [-v]
-I: Specific distribution ID
-a: Show only active distributions
-v: Verbose output
򐂰 List endpoint status:
wmdist -e dist_id [-t endpoint_id] [state…]
-t: Status for specific endpoint
state…: Filter with specified states
򐂰 Remove status from database:
wmdist -d dist_id [-f]
-f: Force deletion without confirmation prompt
򐂰 Cancel distributions:
wmdist -c dist_id [endpoint…]
򐂰 Pause distributions:
wmdist -p dist_id [endpoint…]
򐂰 Resume distributions:
wmdist -r dist_id [endpoint…]
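For example, an administrator could pause and later resume a running
distribution; the distribution ID below is hypothetical:

wmdist -l -a                 # list all active distributions
wmdist -p 1234567890.42      # pause the distribution (hypothetical ID)
wmdist -r 1234567890.42      # resume it when the outlet link is free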
Following are a few examples of how to control and manage the repeater
depots:
򐂰 List segments:
wdepot repeater_name list [id^version]
򐂰 Add segments:
wdepot repeater_name add id^version [source_host:]file
򐂰 Delete segments:
wdepot repeater_name delete id^version
򐂰 Delete all segments not currently in use:
wdepot repeater_name purge
3.4.3 Optimizing slow link connections
To optimize the use of geographically slow links, such as 64 kbps lines,
configure the Spoke TMR's dispatcher by setting the environment variable
SLOW_LINK to TRUE.
1. To do this, run the following command:
odadmin environ get > /tmp/odadmin.env
2. Then, edit the file /tmp/odadmin.env and add the following line:
SLOW_LINK=TRUE
3. Save the file /tmp/odadmin.env and run the following command:
odadmin environ set < /tmp/odadmin.env
4. To finish, run odadmin reexec on the TMR dispatcher.
This parameter is only available for MDist1 services.
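The four steps can be combined into a single command sequence; this sketch
appends the variable to the environment dump instead of opening an editor:

odadmin environ get > /tmp/odadmin.env       # step 1: dump the environment
echo "SLOW_LINK=TRUE" >> /tmp/odadmin.env    # step 2: add the slow-link flag
odadmin environ set < /tmp/odadmin.env       # step 3: load the new environment
odadmin reexec                               # step 4: restart the dispatcher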
3.4.4 Managing endpoint behavior
As discussed in “Tivoli gateway architecture” on page 48, the endpoint gateways
play an important part in the deployment and management of Tivoli endpoints.
Within the Outlet Systems Management Solution these gateways will perform all
communications with their assigned endpoints without requiring additional
communication with the TMR server.
The gateway invokes endpoint methods on the endpoints or runs gateway
methods for the endpoint.
Although it is a great advantage that a single gateway can manage hundreds,
even thousands, of endpoints, this can also bring its own set of pitfalls, which
must be carefully avoided in large-scale installations such as Outlet Inc. The
endpoints' behavior (logging in, restarting, broadcasting, and so on) must be
carefully controlled to achieve an efficient and self-supporting endpoint
environment.
This section describes the principles and the procedures for endpoint-to-gateway
logins. This discussion is followed up in “Endpoint login policies” on page 77, in
which the architecture for endpoint login in the Outlet Systems Management
Solution is described.
Endpoint login scenarios
Here we outline the following examples of endpoint logins that will be applied to
the Outlet Systems Management Solution:
򐂰 Initial login with select_gateway_policy (a script that provides the Endpoint
Manager with an ordered list of gateways)
򐂰 Subsequent or normal logins (applied to Outlet Inc.)
Initial Endpoint Login with select_gateway_policy defined
Figure 3-9 on page 76 illustrates the endpoint’s initial login process, where a
select_gateway_policy script has been defined. This scenario can be common in
large, multi-site enterprises like the Outlet Inc. environment where thousands of
endpoints are logging in to multiple TMRs.
Figure 3-9 Initial endpoint login
The process that takes place during initial login is:
1. The endpoint attempts to perform the initial login to the assigned gateway
(also known as the intercepting gateway) specified first by the -g option in the
installation procedure.
2. The gateway forwards the login request to the Endpoint Manager at the TMR
Server.
3. The Endpoint Manager refers to the select_gateway_policy defined in the
TMR, gets the candidate gateway for the endpoint, and sends the new login
information to the gateway. The gateway relays the information to the
endpoint.
4. The endpoint logs in to its assigned gateway.
Subsequent or normal logins
Figure 3-10 on page 77 illustrates the flow of data for logins after the initial login.
A normal login usually occurs after the endpoint has already been established as
a member of the TMR.
Figure 3-10 Normal Endpoint login
The endpoint logs into the assigned gateway. The endpoint is immediately
established as a communicating member of the Tivoli network.
Endpoint login policies
In the previous sections the select_gateway_policy was mentioned. This is one
of four endpoint-related policy scripts that can be defined in a TMR. Table 3-7
shows the login policies valid for endpoints, and their time of execution.
Table 3-7 Endpoint Login Policies

Policy Name             Origin                Execution Time
allow_install_policy    Executed by the       Executed when the endpoint
                        Endpoint Manager.     installation begins.
select_gateway_policy   Executed by the       Executed each time an endpoint
                        Endpoint Manager.     needs to be assigned to a gateway.
after_install_policy    Executed by the       Executed directly following the
                        Endpoint Manager.     endpoint's installation and initial
                                              login.
login_policy            Executed by the       Executed each time the endpoint
                        gateway.              logs in.
allow_install_policy
This policy controls which endpoints are allowed to log in to the TMR. You might,
for example, want to prevent endpoints from subnet 26 from logging in to a
specific TMR, in which case you would establish an allow_install_policy to reject
login requests from these endpoints. The default behavior of this policy allows
endpoints to log in unconditionally. You can also use this policy to perform any
pre-login actions you might need.
The TMR Endpoint Manager executes the allow_install_policy as soon as it
receives an endpoint’s login packet from an intercepting gateway. If the policy
exits with a non-zero value, the login process is terminated immediately. If the
policy exits with a zero value, the login process continues.
At Outlet Inc. the allow_install_policy will accept only logins from supported IP
subnets that are specified in the Endpoint Manager configuration file.
This policy will detect duplicate labels, and will reject the endpoint login if the
label already exists in the Endpoint Manager database.
An example of allow_install_policy is provided in A.3.3, “allow_policy” on
page 358.
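As a minimal sketch of such a policy (the accepted subnet is hypothetical, and a
production version would read the supported subnets from the Endpoint Manager
configuration file and also check for duplicate labels):

#!/bin/sh
# Minimal allow_install_policy sketch. $5 is the IP address of the
# endpoint requesting login (see "Endpoint login parameters" later
# in this section).
EP_IP=$5
case "$EP_IP" in
    10.248.*) exit 0 ;;   # hypothetical supported subnet: allow the login
    *)        exit 1 ;;   # non-zero exit terminates the login process
esac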
select_gateway_policy
Executed by the Endpoint Manager, this policy provides an ordered list of
gateways that should be assigned to an endpoint. The select_gateway_policy is
run each time an endpoint's initial login packet is forwarded to the Endpoint
Manager, on migratory login or when an endpoint has become isolated (cannot
locate any gateway to which to log in). The policy overrides the Endpoint
Manager’s default selection process and is recommended for TMRs with multiple
gateways. On initial login, the policy provides a list of primary and alternate
gateways for the endpoint to contact. This list is sent to the endpoint with the
initial login assignment information.
The Endpoint Manager tries to contact each gateway in the order listed in the
policy script. The first gateway contacted is the gateway to which the endpoint is
assigned. The intercepting gateway is also added to the end of the
select_gateway_policy list to ensure that the endpoint has at least one definite
contact. If the gateways listed in the script cannot be contacted, the Endpoint
Manager assigns the intercepting gateway to the endpoint.
Within the Outlet Systems Management Solution the gateways will be selected
based on entries in a configuration file that maps IP addresses to gateways;
for example:
Example 3-1 Gateway selection configuration file
10.248.32.* gateway1
10.248.32.* gateway2
10.248.26.[25-45] gateway3
An example of select_gateway_policy is provided in A.3.4, “select_policy” on
page 361.
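A minimal sketch of how such a policy could consult the mapping file follows; the
file location is hypothetical, and the bracketed range syntax in Example 3-1 would
need parsing beyond the simple shell pattern matching shown here:

#!/bin/sh
# Minimal select_gateway_policy sketch. $5 is the endpoint's IP address;
# the policy writes an ordered list of candidate gateways to stdout.
EP_IP=$5
CFG=/etc/Tivoli/gw_map.cfg   # hypothetical location of the mapping file
while read pattern gateway; do
    case "$EP_IP" in
        $pattern) echo "$gateway" ;;   # emit matching gateways in file order
    esac
done < "$CFG"
exit 0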
after_install_policy
The Endpoint Manager executes the after_install_policy after the endpoint has
successfully been created. For example, this script can be used to subscribe
endpoints to profile managers. Because the script runs before the endpoint’s first
normal login, you cannot use it to run downcalls, commands that communicate
with the endpoint itself. The policy is run after the initial login only; it will not be
run on subsequent logins of an endpoint.
An example of after_install_policy is provided in A.3.7, “after_policy” on
page 365. This example uses the after_install_policy to subscribe endpoints to
the correct profile managers and submit an APM plan, which includes software
distributions and inventory scans against the newly created endpoint.
login_policy
This policy is executed by the gateway and performs any action you need each
time an endpoint logs in to its gateway. For example, this script can be used to
automatically upgrade the endpoint software or to trigger an inventory scan.
Even if the login_policy exits with a non-zero value, the endpoint login does
not fail.
Note: The same login_policy script should be run on all of the gateways in a
TMR.
An example of login_policy is provided in A.3.6, “login_policy” on page 364.
Endpoint login parameters
When an endpoint logs into a gateway there are several parameters passed to
any policy that is run. These parameters are:
$1    The label of the endpoint machine
$2    The object reference of the endpoint machine
$3    The architecture type of the endpoint machine
$4    The object reference of the gateway that the endpoint logged into
$5    The IP address of the endpoint logging in
$6    The region
$7    The dispatcher
$8    The version
All these values can be used by the currently running policy script.
Outlet Inc. endpoint policy and gateway solution
Within the Outlet Systems Management Solution Tivoli implementation there will
be very specific gateway and endpoint policies in place. This solution is not
designed to automatically allow the migration of endpoints between gateways.
The reason for this is that the Outlet Solution needs to maintain a consistent
distribution list for software distribution, even though it will still be possible to
migrate endpoints manually.
On Hub and Spoke TMRs the select_gateway_policy, executed by the Endpoint
Manager, will always return only the gateway of the outlet to which the endpoints
belong. The following guidelines are recommended with the
select_gateway_policy:
򐂰 Defining the subnet masks specific to an outlet is recommended in order to
assign the endpoints to the correct gateway, or outlet server.
򐂰 Avoiding the use of w* (Tivoli) commands inside the policies is recommended.
If needed, launch a new script immediately before the policy script
terminates.
򐂰 Creating gateways on the TMR servers is not recommended, in order to avoid
incorrect assignment of endpoints to these servers; alternatively, protect
these gateways with rigid allow_install_policy scripts.
3.4.5 Managing the TME Infrastructure
As with many other business-critical applications, Tivoli components require
regular maintenance and checks to be performed. This is mainly for the following
two reasons:
򐂰 To check data integrity
򐂰 To ensure safe and stable operation by creating restorable backups
The next two sections explain the importance of regular backups and checking
and repairing the Tivoli object database.
Backups
The Tivoli Management Environment is an extremely volatile environment, with a
great number of changes, additions, and removals occurring almost every day.
With this in mind, Tivoli provides a feature to back up the databases used by each
Tivoli managed node. This includes the TMR servers, gateways, and other
managed nodes. This backup can be scheduled to match the frequency of
changes within the environment.
Before and after each application installation, it is a good practice to back up your
current Tivoli Management Environment database. This will make it easy to go
back to the previous database if you encounter a problem while installing a
particular application.
Placing the Tivoli object database backups under control of Tivoli Storage
Manager is highly recommended for safe archiving and restoration.
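A command-line sketch of such a backup follows, using the Framework's
wbkupdb command; the destination directory is hypothetical, and the option
syntax should be verified against your Framework reference:

# Back up the Tivoli object databases to a hypothetical directory
wbkupdb -d /var/spool/Tivoli/backups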
Checking and repairing Tivoli database integrity
As already mentioned, the Tivoli management database and its integrity are
critical to the operation of the Tivoli Management Environment. Because of this,
checking and repairing the database is of great importance.
By maintaining an error free database, the environment will operate smoothly
and efficiently. The database should be checked on a regular basis. This task
can also be scheduled.
Note: To perform a database check, use the command wchkdb -u to perform a
local database check, or wchkdb -ux to check all remote databases.
Scheduling within the Tivoli Management Environment
With all the tasks and operational areas that can be run within the Tivoli
environment, it is important for Outlet Inc. to automate and schedule these. The
Tivoli Framework scheduler maintains a record of all scheduled jobs. This record
tracks scheduled system management tasks over long periods of time as jobs
are run, completed, and removed from the scheduler queue. Jobs with a
repeating schedule are deleted when the repeat cycle is completed.
However, problems can occur when an administrator who has scheduled a
repeating job leaves the company or moves to another position. In such cases,
the person is removed as a valid Tivoli administrator, but the scheduler still
contains jobs to run in the future with the credentials of the former Tivoli
administrator.
For example, suppose an administrator has a nightly job scheduled to clean out
the print queue and ensure that the print service is still active. After the Tivoli
administrator role representing the person who left the company is deleted, this
job will fail the next time the scheduler runs it, because the administrator role
that created the job is no longer valid. In such situations, remove the job. If you
still need the job performed, reschedule it under a different, active administrator.
3.4.6 Tivoli and RDBMS Integration
All Tivoli Enterprise applications produce or receive data for manipulation,
information and administration. Many of these Tivoli applications rely on the
support of an external relational database management system (RDBMS) to
store the large amount of data that can be received. The RDBMS Interface
Module (RIM) component of the Tivoli Framework is used to access these
databases.
Interfacing to the RDBMS using RIM
The RDBMS Interface Module (RIM) provides a common interface that Tivoli
applications can use to store and retrieve information from a number of relational
databases. By using the RIM interface, a Tivoli Enterprise application can access
any supported database in a common manner, independent of the database.
Storing information in an external database enables an application to take
advantage of the power of relational database technology, such as SQL queries,
to store and retrieve information.
Within the implementation of the Outlet Systems Management Solution, there
are some specific applications that use RIM:
򐂰 Tivoli Framework MDist2 Service
򐂰 Tivoli Enterprise Console (TEC)
򐂰 Tivoli Configuration Manager - Software Distribution
򐂰 Tivoli Configuration Manager - Inventory
The RIM acts as a processing agent for Tivoli-related data which, upon reception
by RIM, is processed and converted into SQL-based queries and instructions.
These SQL-based queries and instructions enable the RDBMS to be populated
with the required information.
The Tivoli Scalable Collection Service (SCS), previously known as MCollect,
performs asynchronous data collection from the endpoints. To store the collected
data, one or more RIM objects can be created. You can configure multiple
RIM objects to write Inventory data in parallel to the configuration repository.
Currently, SCS is only used by the Inventory component of Tivoli Configuration
Manager, to facilitate upload of HW and SW scan data from a large number of
endpoints without overloading network connections or databases. Refer to
“Scalable Collection Service” on page 89 for further details regarding the inner
workings and configuration of SCS.
RDBMS Configuration in Outlet Inc.
The Tivoli environment requires that an RDBMS system be used to provide
permanent storage capabilities for various types of data that can be accessed
from, or interact with, external sources.
An important part of data storage and connection between Tivoli and an RDBMS
is the database system itself. In the Outlet Systems Management Solution, the
database system is intended to reside on an external server, one that only has
the Tivoli endpoint installed. This system will be responsible for storage of:
򐂰 MDist2 distribution status and historical information
򐂰 Events received by the Tivoli Enterprise Console, including event adapters,
distributed monitoring, and software distribution, among many other event
sources
򐂰 Inventory configuration data
򐂰 Software package distribution, status information, and activity plans
The chosen vendor for the RDBMS system in the Outlet Systems Management
Solution is IBM DB2.
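Once a RIM object is defined, its connectivity to the DB2 repository can be
verified with the Framework's wrimtest command; the RIM object name shown
is an assumption:

# Open a test session against the (assumed) 'inventory' RIM object;
# a successful open confirms that RIM can reach the RDBMS
wrimtest -l inventory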
3.5 Network communications and considerations
Tivoli’s distributed architecture is designed to work across a wide variety of
systems throughout the Outlet Inc. network topology. The minimum requirement
for communication protocol support is bidirectional, full-time, interactive TCP/IP
connections.
3.5.1 TCP/IP
The Tivoli Management Environment contains several key systems, which when
installed and configured, make up the Outlet Systems Management Solution.
Within this Framework, there are communication methods and procedures used
to enable the transfer of methods, commands, tasks, object references, and
object calls.
Figure 3-11 on page 84 shows the typical communication paths within a TMR. It
shows that managed nodes (systems running the Tivoli Object Request Server,
oserv) communicate over TCP using, preferably, static IP addresses. The figure
also shows how the communications between the Tivoli gateways are performed
over TCP using static or DHCP-assigned addresses.
Figure 3-11 Tivoli Protocol Communication
It is not necessary for an endpoint to be able to communicate with the TMR
server, because all communication to the endpoints is provided through the
hosting gateways.
TCP/IP Ports
In order to facilitate the correlation between clients and servers, TCP/IP uses
a concept called a port. All TCP/IP connections are attached at each end to one
of these ports.
The port numbers between 0 and 1023, known as trusted ports, are reserved for
standard services such as ftp, rexec, and telnet. These trusted ports are attached
to the server side of the application. At the client side, when an application wants
to open a connection, it normally asks the operating system for a non-reserved
port, one with a number ranging from 1024 to 65535.
In Tivoli, the client/server communication is implemented by the oserv process.
The process runs on the TMR server and on each managed node, providing a
common framework communication across all platforms on which the Tivoli
environment works.
TMR-to-Managed Node communication
The basic communication between a TMR server and its managed nodes is
managed by a mechanism called Inter-ORB communication. The mechanism,
also known as Interdispatcher communication, is the basic interaction between
different oservs.
The oserv that initiates the connection is considered the client, while the oserv
that accepts the request is considered the server. The oserv object dispatcher
port is 94, the listening port. The client port, also known as the ephemeral port, is
assigned by the operating system or selected automatically from a predefined
port range.
When the oserv needs to initiate a data transfer greater than 16 KB, the
mechanism used is called Inter-object Messaging (IOM). Both client and server
sides of an IOM channel are connected using ephemeral ports (ports numbered
above 1023), which are assigned by the operating system or selected
automatically from a predefined port range.
The option set_port_range of the odadmin command can be used to specify and
limit the port range used. This option is especially helpful for firewall
administrators who need to limit the port range availability. A sufficiently large
port range must be specified, so that ports can always be assigned as needed.
An example of the command to specify and limit the port range availability
follows:
odadmin set_port_range 5000-6000
Managed Node-to-Managed Node communication
The communication between managed nodes is implemented the same way as
the TMR server to managed-node communication.
TMR-to-TMR communication
The communication between TMR servers is implemented by a mechanism
called Inter-TMR communication that, from the network communication
perspective, follows the exact same pattern as the Inter-ORB communication.
Endpoint-to-Gateway communication
The endpoint-to-gateway communication is between the gateway daemons
running on the gateway, with the Tivoli Management Agent (TMA) installed on
endpoints.
When a gateway needs to invoke a method to be executed on an endpoint, the
gateway initiates a communication called a downcall. A downcall can be
originated also from any managed node or the TMR server.
When an endpoint needs to invoke a method to be executed on the associated
gateway, it initiates a communication called an upcall.
The default configuration of the listening port for gateways is 9494, and for
endpoints, is 9495.
Communication during Installation
During the installation process, access from the TMR to a system on which a
managed node is being installed uses the rexec, rsh, or ssh services. The Outlet
Systems Management Solution will use ssh. The TCP port numbers used by
rexec, rsh, and ssh are 512, 514, and 22, respectively.
Table 3-8 summarizes the TCP/IP ports used in the Tivoli environment:

Table 3-8 Standard TCP/IP ports used by Tivoli

Connection                     Type                     Server           Client
                                                        listening port   listening port
TMR – Managed Node – TMR       Inter-ORB                94               Range
TMR – Managed Node – TMR       IOM                      Range            Range
Gateway – TEC                  Inter-ORB                94               Range
Managed Node – Managed Node    Inter-ORB                94               Range
Managed Node – Managed Node    IOM                      Range            Range
TMR – TMR                      Inter-ORB                94               Range
Endpoint to Gateway            Initial login broadcast  9494 – gateway   OS choice
Endpoint to Gateway            Normal login broadcast   9494 – gateway   OS choice
Endpoint to Gateway            Method upcall            9494 – gateway   OS choice
Gateway to Endpoint            Method downcall          9495 – endpoint  Range
TMR – Managed Node             rexec for installation   512              OS choice
TMR – Managed Node             rsh for installation     514              OS choice
TMR – Managed Node             ssh for installation     22               OS choice
DNS
The Tivoli service, or daemon, that runs on each client and on the server within
Outlet Inc. must be able to map a machine’s IP address to a host name during
the initial connection between services. This technique is sometimes known as
reverse mapping. Mapping between IP addresses and host names or host
names and IP addresses can be done using one of the three following sources of
data:
򐂰 /etc/hosts file
򐂰 NIS
򐂰 DNS
With the first two data sources, both forward and reverse mapping information is
available. You must have reverse mapping from TMR server to client and from
client to TMR server. If you are using DNS and do not provide the IP address to
host name mapping, the Tivoli client installation will fail.
The DNS server is the outlet server.
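Reverse and forward mapping can be checked with standard tools before a
managed node installation; the address and host name below are hypothetical:

host 10.248.32.5    # reverse mapping: should return the host name
host outlet0042     # forward mapping: should return the IP address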
3.6 Configuration management
This section describes how, when using the described products, you can gain the
ability to track physical assets, implement desired state management, and
control asset investment across large enterprises, enhancing business
competitiveness.
3.6.1 Tivoli Configuration Manager: Inventory
The Inventory component of Tivoli Configuration Manager, referred to here as
Tivoli Inventory, will provide Outlet Inc. with enterprise system configuration
information. By using the capabilities of Tivoli Inventory, Outlet Inc. will be able to
scan automatically for and collect hardware and software configuration
information from distributed systems.
Tivoli Inventory is tightly integrated with Tivoli Software Distribution. An example
of this would be an administrator who, when preparing to distribute a software
upgrade, automatically generates a list of Outlet Inc. systems that meet the new
version's hardware prerequisites.
Tivoli Inventory deployment in Outlet Inc.
A basic purpose of deploying a systems management solution for Outlet Inc. will
be gathering systems configuration data. By gathering this data, Outlet Inc. will
gain an understanding of what is present in their environment, and use this
information as a source for file distributions, asset tracking, service desk analysis
and many other activities. This section describes the Tivoli Inventory solution
intended for Outlet Inc. The solution will provide the ability for the rapid scanning
and collection of both hardware and software information from designated
endpoint targets within the Outlet Inc. Tivoli Management Environment. The first
part of this discussion explains how the Inventory architecture will operate on an
overall basis.
Within the Outlet Systems Management Solution there will be a standardized
build of server systems, which will dictate what software is installed into set local
directories; for example:
򐂰 /opt/outlet/StoreApp
򐂰 /opt/IBMIHS/htdocs/en_US
Because these directories are set paths into which applications are and will be
installed, it is only necessary to scan these areas for software. If it is decided that
other paths need to be scanned, then separate inventory profiles can be created
for different machine configurations. This will enable several profiles to be used
when scanning different paths, depending on the classification of the systems
that require scanning.
Specifying the paths to scan for files is recommended in order to avoid a scan of
the entire disk and to reduce the amount of data.
The inventory profile describes the scanning operation: hardware and software,
selected paths and directories. This profile will be created in a policy region of the
Hub TMR. This profile will be cloned to profile managers in each of the Spoke
TMRs. All the target systems are grouped in profile managers of subscribers
located in the Spoke TMRs.
Each Spoke TMR will have a cloned copy of the same inventory profile residing
in the Inventory policy region.
With this setup, the profile distribution can be performed from either the Hub or
the Spoke TMRs. However, using the Inventory profile from the Spoke TMRs will
optimize performance because the RDBMS can be accessed in parallel through
the RIMs defined in each Spoke TMR, instead of using the HubTMR’s RIM.
In the Outlet Systems Management Solution, the inventory solution will require
that the endpoint data within the Inventory repository is kept up to date. Defining
a scheduled time span with which to scan the endpoints will accomplish this. The
scans should not be too frequent, but often enough to keep Inventory data
accurate enough for use by support staff and administrators.
The software scans should be performed on a more frequent basis than
hardware scans. This allows for installations of software to be picked up on a
regular basis. Hardware scans are not required as frequently because that
information should not change as often. Keep in mind that these scanning
intervals can be changed at any time, to account for changes in business
requirements. Specified Tivoli administrators can also run the scans manually. All
the information retrieved from these scheduled or manual scans will be stored in
the RDBMS.
Under all circumstances, any centrally initiated change to any system should be
followed by a relevant scan to update the Inventory information. This might also
help verify the correct implementation of the change.
Inventory profile distribution
In Outlet Inc. it is assumed that the outlet servers are never switched off; a
switched-off machine is considered an exceptional event.
The general rule-of-thumb is to distribute Inventory scanning profiles on a regular
basis. For workstations and other systems that are used by individuals on a
regular basis, the recommendation is to perform a hardware scan every week to
10 days, and a software scan every one or two days. For servers and service
systems that are under strict, central change control and are physically
protected, scans should be performed after each application of any change to the
system. In addition, periodic scanning for hardware and software should be
performed once a month, or by a frequency determined by company policies. For
workstations and servers, scan status information should be logged to the
standard output file.
Applying these scanning policies will remove the need for a preliminary check of
the active systems prior to implementing a change application.
Performing the Inventory profile distribution from the command line is the
recommended method, redirecting the command output to a file. This file can be
used to verify which systems have not been discovered by Tivoli Inventory, such
as a switched-off or unreachable machine. Optionally, the scans can be
scheduled to run automatically using a Tivoli job.
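As a sketch, such a command-line distribution might look like the following; the
profile and profile manager names are hypothetical, and the exact wdistrib
arguments for Inventory profiles should be verified against the Configuration
Manager reference:

# Distribute an inventory profile and capture the output for later checks
wdistrib -l over_all @InventoryConfig:outlet_hw_scan \
    @ProfileManager:outlet_servers > /var/log/outlet_hw_scan.out 2>&1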
Scalable Collection Service
The Tivoli environment in Outlet Inc. will use the SCS service for Inventory collections.
With SCS, Outlet Inc. will have the possibility of controlling how much data is
collected in the network, as well as when the data is collected. In addition, the
internal workings of SCS allow for highly efficient scanning and storing of vast
amounts of data.
SCS includes the following components:
򐂰 Repeater sites organized into a repeater hierarchy
Repeater hierarchies are systems that use the multiplexed distribution (Mdist)
service. Mdist parameters control the way that information is distributed
throughout a Tivoli environment. A repeater hierarchy is the order in which
information flows from one repeater to the next, and then to the endpoints that
are targets of the distributed data. The SCS uses a collector hierarchy that
mirrors the Mdist repeater hierarchy. SCS sends data upstream through this
hierarchy, in the opposite direction of Mdist distributions.
򐂰 Collectors
Collectors are repeater sites on which SCS has been installed. Specifically, a
collector is a SCS daemon process on either a managed node or gateway
that stores and then forwards data to other collectors or to the Inventory
receiver. A managed node is a collector if SCS has been installed on it and it
is part of the SCS collection hierarchy.
In Outlet Inc. all the managed node gateways will be collectors, with SCS
installed.
Collectors are composed of the following components:
– The depot persistently stores data collected from endpoints or other
collectors. The depot also sends data to other collectors that request it.
– The queues hold the CTOCs (Collection Tables of Contents). The input
queue controls the order in which CTOCs are processed for collection. The
output queue controls the order of the CTOCs as they are sent out from
the collector. The completed, deferred, and error queues hold CTOCs for
completed and deferred data collection and error conditions, respectively.
– A multithreaded scheduler daemon processes input and output queues
and controls data flow through the collector depot.
򐂰 Collection manager
Maintains the collector hierarchy based on repeater hierarchy information
obtained from the Mdist repeater manager.
򐂰 Inventory receiver
This Tivoli Inventory object receives data from collectors and sends the data
to one or more RDBMS RIM objects. The Inventory receiver can be
considered the final collector in an Inventory solution. Like collectors, the
Inventory receiver has a depot and queues. However, the Inventory receiver
uncompresses and decodes the data and sends it to the RIM rather than
requesting collection from an upstream collector.
In Outlet Inc., the HUB and Spoke TMRs will be defined as Inventory
Receivers.
򐂰 The status collector
This component collects, stores, and distributes status information for each
scan endpoint. You can configure the status collector to keep lists of
completed scans, successful scans, failed scans, and error messages. The
status collector maintains this information throughout a scan, so scan status
information is available during the scan.
򐂰 The Inventory configuration repository
This repository stores the data collected by Inventory in a relational database
management system (RDBMS).
򐂰 RIM objects, in addition to the existing Inventory RIM object
These other RIM objects connect Inventory to the RDBMS for access to the
Inventory configuration repository. Multiple RIM objects can be configured for
writing SCS data in parallel to the configuration repository.
In Outlet Inc., each TMR (Hub and Spoke) will have a RIM object. At the
Spoke level, three additional RIM objects will be defined: inventory1,
inventory2, and inventory3.
Figure 3-12 on page 92 shows the architecture for Inventory management.
Figure 3-12 Logical Architecture for Outlet Inc. Inventory Management
When an Inventory profile is created in the Hub TMR server and distributed to the
endpoints defined as subscribers in the Spoke TMR profile manager, all the
inventory traffic will pass through the HubTMR’s RIM host. This will limit
performance by bypassing the RIM hosts that can be defined in the Spoke
TMRs.
For performance reasons, it is better to create Inventory profiles in the Spoke
TMR. The preferred method is to create a main Inventory profile in the HubTMR
and clone it to each of the Spoke TMRs. Using this method, instead of
distributing the profile from the HubTMR to subscribing profiles in the Spoke
TMRs, will ensure both consistency and performance. In this situation, the Spoke
TMR is the Inventory receiver.
Data collection using Inventory and SCS occurs in three major phases:
1. In the first phase, distribute an Inventory profile that has been enabled for
SCS to the endpoints. As each scan completes on each endpoint, Inventory
generates a compressed and encoded RIM data file. The endpoint sends a
CTOC, which contains information about the RIM data file, to the collector on
the gateway that manages the endpoint. The collector daemon queues the
CTOC for processing. The scan on the endpoint then completes. After the
scan completes on all endpoints, the Inventory profile distribution is finished.
2. In the second phase, SCS moves data from each endpoint to a depot in the
gateway collector that controls the endpoint. While the data transfers from the
endpoint to the depot, the CTOC for that data remains in the input queue of
the collector. When the collector has collected all the data, the CTOC moves
to the output queue. The collector then notifies the next collector in the
hierarchy that the data is ready to be collected. The upstream collector places
the CTOC in its input queue, and then collects the data from the downstream
collector. When the data is completely transferred to the upstream collector,
the downstream collector discards the CTOC in its output queue and data in
its depot.
3. In the third phase, the Inventory receiver sends the collected data to any
available RIM object configured to work with it. From the RIM object, the
collected data is sent to the Inventory configuration repository. The Inventory
receiver sends the completion status of each scanned endpoint, as well as
any error information, to the status collector. The status collector changes the
status for each endpoint from pending to successful or failed.
Features of SCS
The following SCS features illustrate the advantages of asynchronous data
collection:
򐂰 Asynchronous data collection and backend processing lead to better
distribution of processing load across more nodes in the TMR, as well as
better use of the network. SCS stages the data at different collectors in the
network as it flows to the RIM, and allows control over the data flow between
successive collectors in the hierarchy.
򐂰 SCS returns scan data as the scan of each endpoint completes. This feature
reduces RIM overload at the end of a distribution.
򐂰 After data has been collected from an endpoint, that endpoint can be
disconnected without affecting the SCS service.
򐂰 The Inventory receiver can write data in parallel to multiple RIM objects.
Multiple RIM objects improve throughput by allowing data for multiple targets
to be written in parallel to the RDBMS.
򐂰 With the Tivoli scheduler, scans can be scheduled to collect data at times
when network traffic is at a minimum.
򐂰 Collectors along the route store collected data in depots. If a failure occurs,
the collector can resend the data when it is back online rather than scanning
all the endpoints again.
򐂰 With SCS, an Inventory profile distribution completes when all endpoints have
been scanned rather than when scan data reaches the configuration
repository. Therefore, in most cases the profile distribution completes (and
users regain access to the graphical user interface or command line
interface) more quickly.
Network Data Flow
SCS provides features to control the flow of SCS data across the network. These
features are helpful if you have slow network links or if you want to specify when
SCS data crosses your network.
SCS provides the following mechanisms to control data flow:
򐂰 Offlinks
Offlinks are the main mechanism that regulates SCS traffic across your
network. You can use offlinks to enable and disable SCS traffic between
collectors at specified times. To use offlinks, install a collector on either side of
the network that requires flow control, then schedule offlinks using the Tivoli
scheduler.
򐂰 Transmission chunk size
Use the wcollect -c option to configure the size of transmission chunks. With
this option, you have control over the size of the SCS data packet that
crosses the inter-object message (IOM) connection. The default transmission
chunk size is 1 MB. If you choose a smaller chunk size, the application data is
sent in smaller fragments. Decreasing transmission chunk size might be
beneficial for slow links because the link is not congested with large block
transmissions of SCS data. You configure transmission chunk size on the
downstream collector.
Note: Offlinks and transmission chunks affect data transmission between
collector nodes, or between a collector and the Inventory receiver. These
mechanisms do not affect transmissions between an endpoint and the
gateway collector.
򐂰 Input Threads
Collectors use input threads to open an IOM session for retrieving data from a
downstream node. The maximum number of input threads for a collector can
be controlled using the wcollect -t command. Increase the maximum
number of input threads to allow more concurrent SCS IOM sessions, or
decrease the number to reduce SCS IOM traffic coming into the collector.
Note: By configuring a collector’s input threads, you can reduce the
gateway load by limiting the number of simultaneously active SCS
sessions to endpoints from that gateway. This also helps reduce SCS
network traffic coming into the gateway.
Table 3-9 shows the principal parameters for the collector, set using the wcollect
command. Refer to the Tivoli Configuration Manager: User’s Guide for Inventory,
which is available online at the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
From the welcome page, navigate to Configuration Manager → User’s Guide
for Inventory → Inventory and SCS Features:
Table 3-9 Suggested wcollect settings

Parameter            Setting
depot_size           50 MB
depot_chunk_size     256 KB
max_input_threads    5
max_input_retries    10
max_output_threads   5
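A sketch of applying some of these settings follows, using only the wcollect
options described in this section; the collector name is hypothetical, and the
unit expected by -c is an assumption:

wcollect -t 5 outlet_gw_collector        # limit input threads (Table 3-9)
wcollect -c 262144 outlet_gw_collector   # 256 KB chunks (assuming bytes)
wcollect -h immediate                    # halt the collector...
wcollect -s                              # ...and restart it so changes apply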
Scheduling offlinks
For offlinks settings, you can decide to collect data from the Outlet Inc.
downstream gateway collectors in a fixed time window during the night, such as
two or three hours per night, without affecting business hours. The following
steps are an overview of the procedure to schedule collections:
1. Create a task that turns off links to a collector, then halts and restarts the
collector so the changes take effect, as in Example 3-2:
Example 3-2 Changing collector starts and stops
wcollect -x offlinks_range_to_prohibit_collection collector
wcollect -h immediate
wcollect -s
where:
offlinks_range_to_prohibit_collection   Specifies the object dispatcher IDs of the
                                        collectors for which links to the specified
                                        collector must be turned off.
collector                               Specifies the collector to which links must
                                        be turned off.
To get the object dispatcher ID of a collector, use the odadmin command and
odlist option.
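For illustration, odadmin odlist output resembles the following; the values shown
here are hypothetical, and the Disp column contains the object dispatcher IDs
used with wcollect -x:

Region        Disp  Flags  Port  IPaddr        Hostname(s)
1588251808       1  ct-      94  10.1.1.10     hubtmr
1588251808       4  ct-      94  10.1.2.11     outlet-gw1
1588251808       5  ct-      94  10.1.2.12     outlet-gw2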
You can list the object dispatcher IDs, separated by commas, as shown in the
following example:
wcollect -x "4,5,6,7" collector
Alternatively, you can use a dash to indicate a range of object dispatcher IDs:
wcollect -x "4-7" collector
You must enclose the range of offlinks in double quotation marks ("").
2. Create a task that turns on the links to all systems for which links were
previously turned off, then halts and restarts the collector so the changes take
effect:
wcollect -x "" collector
wcollect -h immediate
wcollect -s
3. Repeat steps 1 and 2 on each collector for which you want to schedule
collections.
4. Create jobs to run these tasks.
5. Use the Tivoli scheduler to control when and how often to run these jobs.
Collectors can also be controlled by creating a job that starts and stops collectors
using the wcollect -s and -h options. Alternatively, you can schedule a job that
shuts down a collector's output or input queue by setting the wcollect -o or -t
option for that collector to 0.
Note: When a collector is reconfigured, the changes do not take effect until
the collector is restarted.
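As a sketch of the queue-based alternative described above, a pair of scheduled
task bodies could close and later reopen a collector's output queue. In this
sketch, collector is a placeholder, 0 closes the queue, and 5 matches the
max_output_threads value suggested in Table 3-9; each task halts and restarts
the collector so the change takes effect:

Task 1 (close the output queue at the start of business hours):
wcollect -o 0 collector
wcollect -h immediate
wcollect -s

Task 2 (reopen the output queue at the start of the nightly window):
wcollect -o 5 collector
wcollect -h immediate
wcollect -s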
Database size
For the initial scan of an Outlet Solution Server, the rule of thumb is to allow
one MB of data per system. For subsequent scans, the amount of data is minor.
To calculate the size of the Inventory database, we should therefore use the
larger, initial amount of data per machine (1 MB x 1,000 systems = 1 GB).
Signature files
Example 3-3 is an example of a signature file.
Example 3-3 Signature file
<I>,AcroRd32.exe,2316288,Adobe Acrobat Reader,4.0
<I>,ARUSER.EXE,2908320,Action Request System,4.02
<I>,builder,2713148,TME10 TEC Rule Builder (AIX 4.x),3.1
<I>,builder.exe,1865216,TME10 T/EC Rule Builder (WinNT),3.1
<I>,CAFE.EXE,95276,Symantec Cafe,7.50b56
<I>,CCDIST.EXE,123904,Microsoft Plus! For Windows 95,4.70
<I>,CCDIST35.EXE,107008,Microsoft Plus! For Windows 95,4.70
<I>,EXPLORER.EXE,204288,Microsoft Windows Internet Explorer,4.00.950
<I>,explorer.exe,234256,Microsoft Explorer,4.00
<I>,explorer.exe,239888,Microsoft Explorer,5.00
Tailor the signature file to indicate only the signatures for applications that need
to be discovered.
Use the wfilesig command to add or delete software signatures.
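For example, a tailored file for the outlet servers might contain entries only for
the applications Outlet Inc. needs to track. The entries below are hypothetical
(including the file sizes), but follow the <I>,filename,size,product,version format
shown in Example 3-3:

<I>,db2sysc,1843200,IBM DB2 Universal Database,8.1
<I>,startServer.sh,10240,IBM WebSphere Application Server,4.0
<I>,timecard.sh,8192,Outlet TimeCard Application,1.0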
3.7 Release management
The Software Distribution component of Tivoli Configuration Manager, referred to
in the rest of this book as Tivoli Software Distribution, is a core element of the
entire Outlet Systems Management Solution. Tivoli Software Distribution is the
main component which will support Outlet Inc. IT department’s release
management processes.
To better understand the software distribution process, these are some of the
main elements:
򐂰 Distribution Manager is installed on the TMR Server that also acts as the
Software Distribution Server. The Distribution Manager monitors and controls
distributions and updates their status in the database (RDBMS). There is one
distribution manager for each TMR, which keeps track of all the distributions
started in it. In the case of interconnected TMRs, you have more than one
distribution manager, but all share a single database.
򐂰 The Software Package (SP) contains the complete definition of what is to be
distributed, including the related actions to be performed on the target
system. You can create an SP using a GUI or a text editor; in the latter case,
a Software Package Definition (SPD) is created (a skeletal SPD is sketched
after this list). Files to be distributed are stored on the Source Host system.
򐂰 A Software Package Block (SPB) bundles all the resources necessary to
execute the actions in the software package into a standard zipped format. At
distribution, the resources do not need to be collected from the source host;
they are already contained in the SPB, which must reside on the source host.
When an SPB is distributed to an endpoint, it is not stored on the endpoint, but is
unzipped in the target directory. By unpacking the zipped file immediately,
there is no need for additional disk space on the endpoint for the .spb file. A
software package block is created from an SP or an SPD through the build
function, available from the Tivoli Desktop or by using the wconvspo
command.
򐂰 Source Host is the managed node that stores data to be distributed with a
software package (SP). If an SPB is used, the source host contains all the
resources, including files, to be distributed.
򐂰 Gateway Repeater as depot is the intermediate distribution point; it can be
configured as a temporary or permanent depot for the software to be
distributed, and it must be guaranteed a certain amount of free disk space to
store the data. This functionality is provided by MDist2, and allows for
predistributing software packages to the local endpoint gateways that are the
targets of the software distribution.
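The skeletal SPD below illustrates the text-based format referred to in the
Software Package bullet above. The package name, paths, and version strings
are hypothetical, only a few of the many available keywords are shown, and the
exact stanza syntax should be verified against the Software Distribution
reference before use:

"TIVOLI Software Package v4.2 - SPDF"

package
    name = outlet_timecard
    title = "Outlet TimeCard application"
    version = 1.0

    add_directory
        location = /build
        name = timecard
        destination = /opt/timecard
        descend_dirs = y
    end
end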
Figure 3-13 depicts the architecture for software distribution:
Figure 3-13 Software Distribution architecture
The fact that gateways can be defined as hosting depots means that you can
reduce network traffic for frequently distributed data, or use the depots for mass
distributions to a single location.
Among the other interesting features introduced with MDist2 are the following:
򐂰 Asynchronous delivery, or the possibility of checking and transmitting the
status of a single distribution, without waiting for the completion of all targets
򐂰 File caching locally on the repeater, to avoid the complete restart of the
distribution process after network errors
򐂰 Checkpoint and restart from where an interruption occurred
򐂰 Distribution control with GUI
򐂰 Distribution queues with three different types of priority
Tivoli Software Distribution also includes Activity Planner (AP) and the Change
Manager (CCM).
With the Activity Planner, Software Distribution users can plan correlated
activities such as software package distribution operations and Framework tasks
for a group of targets and monitor the status of the submitted plans.
With the Change Manager, Software Distribution users can prepare reference
models containing specific hardware inventory and software distribution
conditions to be met on specific groups of targets. If some of those conditions are
not met, an APM plan is automatically generated to update the subscribers to the
desired state.
3.7.1 Software Distribution in Outlet Inc.
Outlet Inc. will use the functions in Software Distribution to endpoint systems
running on the Linux platform.
The following guidelines apply to creating and maintaining the software
packages to be distributed:
򐂰 Every distribution involves a set of preinstallation and postinstallation
procedures, containing change management commands such as install,
commit, undo, remove, rollback, restart, and so on.
򐂰 The application size is a critical factor when you configure disk space on
remote depots that will store the packages.
򐂰 Installation and activation phases of distributed software should be
distinguished.
This will allow Outlet Inc. to preinstall new versions of components to all the
servers in the infrastructure well in advance of cutover. This activity is usually
time consuming, can involve distributing large software package blocks, and,
optionally, one or more restarts of the target systems.
At cutover time, the new versions of the components can be activated
simultaneously on all servers in the infrastructure with a minimum of effort
and interruption of service, if any.
It should be noted, however, that the use of predistribution and activation is
dictated by the capabilities of the components that you are installing.
Thorough testing is recommended.
򐂰 Locked files must be managed by software distribution. Tivoli Configuration
Manager has built-in support for detecting locked files (files that are in use at
the time of installation) and replacing them with the new version at the next
restart of the system. Software packages can even be defined to
automatically restart the target system if locked files are discovered.
All Outlet Inc. outlets will be involved in the Software Distribution process. Given
the network links, it is crucial to find a solution that best exploits the Tivoli
three-tier architecture and the functionality of the MDist repeater across the
geographical network. We recommend the following:
򐂰 The Software Distribution objects will be Software Package Blocks (SPB).
򐂰 The distribution will be done overnight, using a five to six-hour distribution
window.
򐂰 Every gateway in the outlets will be a repeater, and configured to host a depot
used by Software Distribution.
򐂰 Based on the network configuration and the needs of Outlet Inc., the parallel
distribution will be set to 300 outlets.
Figure 3-14 on page 101 depicts the typical software distribution process:
Figure 3-14 Software package distribution
The main steps of the process are:
1. The preparation of the SPB will be performed at the preparation site.
2. The SPB is loaded to the Source Host.
3. The SPB will be sent to gateways depots.
4. The SPB will be installed in the transactional area in order to handle locked
files, or preinstalled in order to activate the new application versions at a later
point.
5. The SPB will be activated and committed.
Note: Due to the nature of certain basic packages, such as core middleware
components, it does not make sense to install these as undoable, because
these components will be installed while systems are out of service.
To properly design a software distribution scenario, target systems should be
organized in groups that will be populated automatically, or as a result of specific
queries to the inventory database.
Each group is then configured as a subscriber of a software distribution profile,
the software package object. Profiles are grouped with their subscribers in profile
managers, which are finally collected in policy regions.
The Software Distribution operator can perform install, commit, and undo
operations.
3.7.2 Integrating Tivoli Software Distribution with the TEC
Tivoli Software Distribution TEC integration enables Software Distribution to
send events to the Tivoli Enterprise Console’s event server when a Software
Distribution operation is performed.
Software Distribution automatically generates events based on its operations
(successful or failed distributions, commit operations, or removals) and sends
these events to the TEC event server. This provides a means for centrally
collecting Software Distribution events and triggering actions that can be treated
in the same fashion as events from other sources or even be correlated with
other events.
In the Outlet Systems Management Solution, several event classes for Software
Distribution will be used. Below is a list of the high-level event classes that will be
processed by the Tivoli Enterprise Console at Outlet Inc.:
򐂰 Distribution Started (Distribute, Remove, Commit, and so forth)
򐂰 Distribution Completed
򐂰 Distribution Errors
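To give a flavor of how such classes are declared in the rule base, the following
BAROC fragment sketches a hypothetical high-level class; the class name and
slots are illustrative, not the actual class definitions shipped with Software
Distribution:

TEC_CLASS:
    Outlet_SWD_Event ISA EVENT
    DEFINES {
        severity: default = WARNING;
        sp_name: STRING;
        sp_version: STRING;
        operation: STRING;
    };
END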
3.7.3 Integrating Tivoli Software Distribution with Inventory
Tivoli Software Distribution maintains a database that automatically updates the
Tivoli Inventory configuration repository with information about Software
Packages, AutoPacks and Software Package Blocks. The information allows
administrators to view what has been installed, removed or distributed to Tivoli
clients within the environment.
Integration with Tivoli Inventory will provide Outlet Inc. with a facility to create
custom queries, and to obtain information about successful or unsuccessful
distributions.
When the Software Distribution product is installed, a script is executed to create
a unique SWDIST_QUERIES query library and related views and tables.
Table 3-10 describes the tables that are created within the configuration
repository for use by Software Distribution.
Table 3-10 Tables for storing Software Distribution status

INSTALLED_SW_COMPONENT
Stores information about the status of software distribution operations,
specifically about the machine on which an operation was performed. This can
include such things as software package distribution, installation, or removal;
the time at which an operation occurred; the name of the profile that was
distributed or removed; and the result of the operation.

SOFTWARE_COMPONENT
Stores information about software names and versions that have been
distributed or removed, and links this information to profile identifications. It also
stores information about software package identification, relating this
information to a file package, AutoPack, or software package block and the
source host.

SD_CM_STATUS
Stores information about the names and versions of software packages, the
time the last successful action or operation was performed on a software
package, and the status of a software package on a particular machine.
Before Software Distribution can update the database with information, an
inventory scan must be completed for all intended Outlet Inc. target endpoints.
Once information regarding software distributions has been populated into the
Software Distribution database, it can be queried in the normal way using the
Tivoli querying facility. Outlet Inc. will use this information to retrieve specifics
about successful and failed distributions and the endpoints to which they relate.
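As a sketch, a custom query against these tables might resemble the following
SQL. The column names and the 'IC---' status value (installed, committed) are
illustrative assumptions; the actual schema created by the SWDIST_QUERIES
script should be checked in the configuration repository:

SELECT sp_name, sp_version, endpoint_label, last_op_time
FROM SD_CM_STATUS
WHERE status <> 'IC---'
ORDER BY endpoint_label

A query of this shape would list the endpoints on which a package is not yet in
the installed-and-committed state, which is one way to spot failed distributions.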
3.8 Availability and Capacity Management
In the Outlet Systems Management Solution, a number of products will be used
for monitoring the availability and performance of the resources hosted by the
outlet servers. In essence, two distinctly different implementations will be put in
place: one for hardware, operating system, and middleware monitoring, and
another for application monitoring.
The products used for hardware, operating system and middleware are:
򐂰 IBM Tivoli Monitoring
򐂰 IBM Tivoli Monitoring for Databases
򐂰 IBM Tivoli Monitoring for Web Infrastructure
For application monitoring we will use:
򐂰 IBM Tivoli Monitoring for Transaction Performance
The following sections provide high-level descriptions of the architecture and
main components of the tools used for capacity and availability management in
the Outlet Systems Management Solution.
3.8.1 IBM Tivoli Monitoring architecture
Figure 3-15 presents a high-level overview of the interaction between main
components of IBM Tivoli Monitoring 5.1.
Figure 3-15 IBM Tivoli Monitoring High-level overview
The IBM Tivoli Monitoring 5.1 profile contains, among other information, a
resource model. The resource model is a collection of monitors that correlate
among themselves before attempting to perform a corrective action or sending
notification to the central event manager. The IBM Tivoli Monitoring 5.1 profile is
distributed to the endpoints to monitor one or more resources. Examples of
typical resources are: hard disk space, paging space, and process and service.
Based on configuration settings in the IBM Tivoli Monitoring 5.1 profile, the
engine runs on the endpoint and performs the necessary monitoring on the
resources that are specified in the distributed resource models. The Web Health
Console obtains logged data from selected endpoints and displays the health of
the endpoints.
The following sections describe the CIM architecture and the three key
components of IBM Tivoli Monitoring 5.1, which are:
򐂰 Profile
򐂰 Resource models
򐂰 Engine
The Common Information Model
The Common Information Model (CIM) is part of the industry-wide initiative called
Distributed Management Task Force (DMTF). DMTF provides a common model
for the management of environments across multiple vendor-specific products.
The ultimate goal of CIM compliance is to build applications that can extract and
use management data from products released by different vendors, hardware or
software. IBM Tivoli Monitoring 5.1 is CIM-compliant and hence can collect,
store, and analyze management data from other CIM-compliant products. The
definite advantage of using CIM-based resource monitoring is that, when newer
versions of the product whose resources are already being monitored are
released, the monitor is not required to know the implementation details of the
system, but only interacts with it through an already accepted management
interface, the CIM Schema.
In UNIX systems, a CIM-compliant instrumentation called TouchPoint is
embedded in the IBM Tivoli Monitoring 5.1 engine to facilitate the retrieval of
management data. On Windows 2000 systems, the CIM-compliant Microsoft
implementation called Windows Management Instrumentation (WMI) is already
installed as part of the operating system. For a detailed discussion of CIM, refer
to the DMTF Web site:
http://dmtf.org
The IBM Tivoli Monitoring 5.1 engine
To configure the various components of the IBM Tivoli Monitoring 5.1 profile, it is
important to have a basic understanding of the internal workings of the IBM Tivoli
Monitoring 5.1 engine and how the different components interact with each other.
Understanding the internals will also make it easier to understand the impact of
changing the out-of-the-box parameters of the IBM Tivoli Monitoring 5.1 profile.
The following sections briefly describe the six main components that make up the
IBM Tivoli Monitoring 5.1 engine:
򐂰 Analyzer
򐂰 Event aggregator
򐂰 Event correlator
򐂰 Action manager
򐂰 Logger
򐂰 Scheduler
Analyzer
The analyzer component receives the resource model from the IBM Tivoli
Monitoring 5.1 engine and executes the monitoring best practices contained in
the reference part of the model. The reference model gets, through WMI or
TouchPoint, the performance and availability data of all physical and logical
resource instances. The analyzer collects and analyzes the performance data to
verify the service level of the endpoint against a determined class of service.
The best practices, or knowledge paths, are coded in Visual Basic (VB) on
Windows and JavaScript on UNIX; JavaScript is also available on the Windows
platform. The VB or JavaScript code is executed during every cycle. If a
threshold is exceeded, the analyzer sends an indication to the event aggregator.
In a simple case, we would compare one threshold (a resource model property)
to one resource property (the real world) to trigger indications. However, in most
cases, you have to look at a number of thresholds and compare them to the
resource property values to find the root cause of a problem.
Event aggregator
When a service level is exceeded at a determined time, it does not always
represent a problem, but its persistence can represent a real problem. The event
aggregator measures the persistence of the indications generated by the
analyzer. The event aggregator collects all indications coming from all decision
trees currently running, and consolidates them based on the aggregation rules
configured in the profile. The aggregation rules are controlled by the occurrences
and holes that are configured in the IBM Tivoli Monitoring 5.1 profile.
An occurrence refers to a cycle during which an indication occurs for a given
resource model, an above threshold cycle. A hole refers to a cycle during which
an indication does not occur, a below threshold cycle.
If the persistence of an indication meets the configured number of occurrences,
an event is generated and sent to the event correlator. If the profile is configured
to send events to Tivoli Enterprise Console for a particular indication, then the
event aggregator is responsible for sending the event to Tivoli Enterprise
Console. However, all events generated by the event aggregator are sent to the
event correlator, irrespective of whether it is configured to send events to Tivoli
Enterprise Console.
For example, assume that there is a resource property that changes its value
rapidly. The decision tree would be visited every cycle. As part of this, the value
of the resource property is retrieved. In Figure 3-16, the vertical dashed lines
represent the moments of the queries. The point at which the dotted lines meet
the graph are values that are the results of the inquiries. The one horizontal
dashed line represents the threshold. Values above that line are considered
potential problems and will trigger an indication.
Every time the values of the resource properties exceed the thresholds, the
resource model will generate an indication. If the value of the resource property
drops below the threshold for a particular cycle, then no indication is generated
and the event aggregator counts the nonoccurrence of an indication as a hole.
So, if we define an event as four occurrences and one hole when we configure
our profile, then based on the indications generated in Figure 3-16, the event will
be generated when the fourth indication occurs. If we had two consecutive holes,
then the occurrence count would be reset to zero and a clearing event would be
sent, if we configured the profile to send a clearing event.
Tip: The concept of holes can be a bit confusing if you do not realize that the
number of holes specified in your profile represent the acceptable number of
consecutive holes, not the accumulated number of holes over a sliding window
of time.
Figure 3-16 Sampling of volatile metric
Event correlator
The event correlator is responsible for consolidating and correlating the events
generated by the event aggregator between different resource models. It
receives all events from the event aggregator irrespective of whether the event
aggregator has been configured to send events to Tivoli Enterprise Console or
not. It uses a set of static rules to correlate between the different resource
models. This static correlation between resource models is only available for the
Windows resource models.
Action manager
This component is responsible for executing corrective actions consisting of
tasks and built-in actions when a resource model detects a problem. It executes
the actions that the user has associated to a particular indication. There are a
number of ways that the action manager executes the actions. Built-in actions
are implemented as methods of resource models and they are executed through
the WMI. Tivoli tasks are the classic Tivoli tasks that the user can associate with
a particular indication through the IBM Tivoli Monitoring 5.1 profile GUI. The
action manager also generates an ActionResult or a TaskResult indication that is
received by the Tivoli Enterprise Console adapter and then forwarded to the
Tivoli Enterprise Console server.
Logger
The logger is responsible for collecting performance and availability data on the
endpoints and storing these locally. The logger handles multiple resource models
distributed to an endpoint. It also handles the way data is collected: aggregated
data (minimum, maximum, or average values) or raw data. The logger is also
responsible for clearing the oldest records in the database. Nightly, around
midnight, a data purging process is executed, which removes all data more than
24 hours old. You can only view 24 hours of data; however, there are times when
this database has almost 48 hours of data, just prior to the midnight purge. By
default, logging is turned off. The data can also be consolidated to a central
RDBMS for trend analysis and reporting.
On the UNIX platforms, IBM Tivoli Monitoring 5.1 uses an open-source database
from Quadcap Software. Quadcap is a pure Java™ RDBMS designed to be
embedded into applications. For more information and documentation about the
Quadcap database, visit their Web site:
http://www.quadcap.com/home.html
Scheduler
IBM Tivoli Monitoring 5.1 contains a scheduling feature that allows you to
determine:
򐂰 A period within which monitoring should take place
򐂰 Specific scheduling rules
The monitoring period is determined by defining a from and a to date. With the
scheduling rules, you define time periods on specific weekdays during which
monitoring will take place. Any number of rules can be defined, letting you set up
a profile which covers the periods important to you. The scheduled times are
always interpreted as local times. With this ability, you can set up a single rule
that will monitor the same local time period in different time zones. For example,
if your region covers several time zones but you wish to monitor morning
activities in each time zone, create a single rule defining the monitoring period as
between 08:00 and 13:00. We will monitor the same relative period across time
zones. You should note also that all times of events or activities reported from
endpoints or gateways are also logged in the local time of the system from where
they originated.
Thresholds and indications
Indications are sent to signal the occurrence of a situation that might be a
problem. This assessment is based on a number of thresholds configured by an
experienced user. But an indication usually does not depend on only one
threshold value, and the same threshold can be applied to different resource
properties. When you want to adjust the models, it is necessary to understand
the dependencies, and when events are generated, it will be useful to know what
resource properties triggered it.
Resource models
The resource model contains the program scheme necessary to determine what
data is to be accessed from an endpoint at runtime and how this data is to be
handled. In other words, the resource model is equivalent to an implementation
of monitors from previous editions of Tivoli Distributed Monitoring, albeit, using
the object-oriented modeling approach and integration with CIM. Each resource
model obtains resource data from the endpoint to which it is distributed, performs
root cause analysis using a built-in algorithm, and reacts accordingly in the form
of built-in actions or user-defined tasks.
When an IBM Tivoli Monitoring 5.1 profile containing a resource model is
distributed to the endpoint, the following actions take place:
1. Resource model files are unzipped from the resource model specific zip file.
2. MOF files specific to the resource model are compiled.
3. All components of the IBM Tivoli Monitoring 5.1 engine (logger, analyzer,
action manager, event aggregator, and event correlator) are created, loaded
in memory, and started.
Heartbeat monitor
The heartbeat function is enabled on the managed node or gateway of the
endpoints that you want to monitor. The heartbeat monitoring engine has the
ability to forward events to the Tivoli Enterprise Console, Tivoli Business
Systems Manager, and send notices to the Tivoli notice groups. There are three
activities that make up the heartbeat function:
򐂰 Endpoint registration
򐂰 Heartbeat monitoring
򐂰 Viewing endpoint cache
Endpoint registration
When an IBM Tivoli Monitoring 5.1 profile is pushed to an endpoint for the first
time or the IBM Tivoli Monitoring 5.1 engine is restarted on an endpoint, the
information in the endpoint cache is updated when the gateway receives a
message from the endpoint informing the gateway that its IBM Tivoli Monitoring
5.1 engine has been started. Figure 3-17 illustrates the flow of data.
Figure 3-17 Heartbeat endpoint registration data flow
Once the profile has been distributed to the endpoint or the IBM Tivoli Monitoring
5.1 engine has been restarted on the endpoint, it sends an upcall, a method
called register_endpoint, to the gateway, which then registers the endpoint in the
endpoint cache. Additionally, while the engine is active, it sends this
register_endpoint upcall to its gateway every 18 minutes. This behavior is in
place to help track endpoint migration between gateways.
Also note that this upcall occurs regardless of whether the heartbeat is turned on
or turned off.
Heartbeat monitoring
The heartbeat status check of endpoints is initiated from the gateway. Because
of this, the granularity of control is at the gateway level. Either all registered
endpoints on a given gateway are eligible for a heartbeat (the heartbeat is turned
on for that gateway) or none of them are (the heartbeat is turned off on the
gateway). Figure 3-18 shows the flow of data during the heartbeat monitoring
process.
Figure 3-18 Heartbeat monitoring data flow
The gateway issues periodic heartbeat requests, downcalls, to all attached
endpoints. The returned data is stored in the endpoint cache and events are sent
to the configured event targets.
Viewing the endpoint cache
When the heartbeat monitor detects problems with the resource model, IBM
Tivoli Monitoring 5.1 engine, or endpoint, it will send events to Tivoli Enterprise
Console, Tivoli Business Systems Manager, or the Tivoli notice groups. In
addition, the heartbeat information in the endpoint cache can also be viewed
using the wdmmngcache command. Figure 3-19 on page 112 shows the flow of
data when the command is invoked.
Figure 3-19 Endpoint cache retrieval data flow
Web Health Console
The Web Health Console (WHC) is a Web-based GUI that displays the state
information about the deployed resource models and aggregated health
information about endpoints. You can use the Web Health Console to monitor
real-time data or historical data. The distributed profile must be configured to
collect historical data for that data to be accessible by the Web Health Console.
The Web Health Console follows a Model-View-Controller architecture for
retrieving and showing performance data. Each request is executed in a different
thread. The exchange of information among the main objects inside the Web
Health Console is realized through an event-driven architecture. The real-time or
historical data is retrieved using XML. The console displays activity of the
analyzer and aggregator components of the new agent technology.
The console operates independently of the TME environment, but is still
validated and authenticated through the TME. You do not need to have an
endpoint on the machine from which you run the Web Health Console. When you
start the Web Health Console, you are prompted to log in to the Tivoli
Management Region server or managed node using a valid Tivoli user ID.
The Web Health Console feature has been available since Tivoli Distributed
Monitoring for Windows 3.7, when it was developed as a stand-alone Java
client. The Java Health Console for Tivoli Distributed Monitoring (Advanced
Edition) 4.1 works with IBM Tivoli Monitoring 5.1.
In IBM Tivoli Monitoring 5.1, the Health Console is now Web-based only, and
uses the following components, installed automatically with the Health Console.
For further installation information about these components, see 4.5, “Installing
Web Health Console” on page 105:
򐂰 WebSphere Application Server, Advanced Edition Single Server 4.0.2
򐂰 IBM HTTP Server
Web Health Console components
The Web Health Console consists of the following components:
򐂰 Client browser console interface
򐂰 WebSphere Web Health Console component
򐂰 IBM Tivoli Monitoring TMR monitoring components
The high-level data flow of the Web Health Console is shown in Figure 3-20.
Figure 3-20 High-level data flow of the Web Health Console
The Web Health Console has the following features:
򐂰 Very easy startup for end users
– Simple, single Web site access
– No installation for individual users, no space needed
– A lower powered, end-user machine can be used for the Web Health Console
– Low data transfer: only absolutely necessary data is transferred to the
Web Health Console browser. The Web Health Console server offloads a
significant amount of data processing.
򐂰 Single-point upgrade
– Upgrades are made only at the Web Health Console server.
򐂰 Standard IBM Web technology
– The WebSphere Application Server is the underlying application
technology.
– With it, you can perform remote Web-based management of the Web
Health Console server.
ITM in the Outlet Systems Management Solution
In the Outlet Systems Management Solution implementation, IBM Tivoli
Monitoring will be used to monitor basic hardware and operating system
resources, based on the best-practice resource models provided with the
product. Initially, the following resources will be monitored:
򐂰 CPU
򐂰 Process
򐂰 Memory
򐂰 PhysicalDisk
򐂰 FileSystem
򐂰 NetworkInterface
򐂰 Security
As a starting point, default values will be used for all resource models, and event
forwarding to Tivoli Enterprise Console will be enabled.
In addition to the resource monitoring, availability monitoring will be enabled in
the Outlet Systems Management Solution using the heartbeat monitor. Online
surveillance will be enabled for the operators at Outlet Inc. through the
implementation of the Web Health Console.
A further analysis of the monitoring needs, covering the selection of resources to
monitor and the customization of thresholds, holes, and corrective actions, has
to be performed before specific recommendations can be made.
3.8.2 ITM for Databases: DB2
IBM Tivoli Monitoring for Databases is a collection of ITM resource models and
related tasks that provide best practices for database management. These help
ensure the availability and performance of critical applications in an integrated
e-business environment. Its capabilities include:
򐂰 Auto-discovery of the resources to be monitored
򐂰 Problem identification, notification, and correction
򐂰 Automated best practices for management and operations
򐂰 Historical reporting through a centralized data warehouse
In addition to providing extensive resource models and tasks for various
database actions, such as stop, start, and reorg, ITM for Databases provides
operators with access to database actions from the Tivoli Desktop as well as
the command line.
From an architectural point of view, ITM for Databases uses the familiar three-tier
architecture: TMR Server, Gateway, Endpoint. However, because any one
database server can host several, distinct database instances, the normal
endpoint executing on the outlet servers cannot be used directly. Before
downcalls reach the endpoint on the database system, the commands need to
be wrapped into the proper scope, that of a particular instance.
Therefore, when you define database server systems to the Tivoli infrastructure
or elect to discover them automatically, proxy endpoints that are responsible for
defining the proper scope for each database instance are created at the gateway.
The types of proxy endpoints used in the Outlet Systems Management Solution
are DB2InstanceManager and DB2DatabaseManager. Endpoints of these types
should always be used as the target of the operation when distributing
monitoring profiles containing DB2 resource models.
ITM for DB2 in the Outlet Systems Management Solution
As for monitoring the basic system resources, management of DB2 servers,
instances, and databases in the Outlet Systems Management Solution will be
based on the standard resource models provided with IBM Tivoli Monitoring for
Databases: DB2.
As a starting point the following resources will be monitored:
򐂰 Instances resources
– Instance Status
򐂰 Database resources
– Database Status
– Table Activity
3.8.3 ITM for Web Infrastructure: WebSphere Application Server
IBM Tivoli Monitoring for Web Infrastructure: WebSphere Application Server
provides the ability to register application servers and to collect their
configurations in order to manage them. Once registered, resource models
and management functions can be used to:
򐂰 Start, stop, restart, and retrieve the status of your servers, and retrieve the
status of the related virtual hosts and applications.
򐂰 Monitor key performance and availability of virtual hosts and applications run
by each server.
򐂰 Forward Tivoli Monitoring for Web Infrastructure PAC events to the IBM Tivoli
Enterprise Console.
򐂰 Forward Tivoli Monitoring for Web Infrastructure PAC events to Tivoli
Business Systems Manager.
򐂰 Store historical data on Tivoli Enterprise Data Warehouse.
Tivoli Monitoring for Web Infrastructure components provide resource models, or
groups of monitors, that periodically check the status of your application server
components: the application server and its virtual hosts. The status can be either
active (operational) or inactive (nonoperational). You can customize the resource
models to meet your local requirements.
The resource models enable measuring and reporting of the availability and
performance of the virtual hosts run by application server resources, to identify
bottlenecks and potential problems in the Outlet Inc. Web infrastructure. For
example, you can measure:
򐂰 Traffic
򐂰 Access
򐂰 Error conditions
For WebSphere Application Server, some of the available key performance
metrics are:
򐂰 Enterprise JavaBean (EJB) performance
򐂰 Database connection pool performance
򐂰 JVM runtime performance
򐂰 Servlet/JSP performance
IBM Tivoli Monitoring for Web Infrastructure event rules manage the information
presented on your event console. These rules remove duplicate and harmless
events and correlate events to close events that are no longer relevant. IBM
Tivoli Monitoring for Web Infrastructure event reporting functions support
standard Tivoli event filtering, which you can use to reduce the number of events
sent to your events server, in addition to forwarding events to Tivoli Enterprise
Console.
Outlet Systems Management Solution
IBM Tivoli Monitoring for Web Infrastructure in the Outlet Systems Management
Solution will be used only to monitor and manage the WebSphere Application
Server instances on the outlet servers; the implementation is similar to that for
DB2. The endpoint type related to ITM for WebSphere Application Server is
IWebSphere Application Server.
As for monitoring the basic system resources, management of WebSphere
Application Servers, and their logical application servers in the Outlet Systems
Management Solution will be based on the standard resource models provided
with IBM Tivoli Monitoring for Web Infrastructure: WebSphere Application Server.
As a starting point, we will monitor the following resources:
򐂰 WebSphere Application Servers
– ApplicationServerStatus
– DBPools
– JVMRuntime
– ThreadPool
– Transactions
3.8.4 IBM Tivoli Monitoring for Transaction Performance
IBM Tivoli Monitoring for Transaction Performance provides functions to monitor
outlet transaction performance in a variety of situations. Focusing on outlet
transactions, the product provides transaction performance measurement for
various Web-based transactions originating from external systems (systems on
the Internet that are not managed by the organization) against the outlet
transactions or applications that are the target of the performance
measurement. These transactions are referred to in the following pages as Web
transactions, and they are implemented by the Web Transaction Performance
component of IBM Tivoli Monitoring for Transaction Performance.
Web transaction monitoring
In general, the nature of Web transaction performance measurement is random
and generic. There is no way of planning the execution of transactions or the
origin of the transaction initiation, unless other measures have been taken to
record these measurements. When the data from the transaction performance
measurements are being aggregated, they provide information about the random
transaction invocation, without affinity for location, geography, workstation
hardware, browser version, or other parameters that can affect the experience of
the end user. All of these parameters are out of the application provider’s control.
Naturally, both the data gathering and reporting can be set up to handle only
transaction performance measurements from machines that have specific
network addresses, for example, thus limiting the scope of the monitoring to
well-known machines. However the transactions are executed, the sequence is
still random and unplanned.
The monitoring infrastructure used to capture performance metrics of the
average transaction can also be used to measure transaction performance for
specific, preplanned transactions initiated from well-known systems accessing
the outlet applications through the Internet or intranet. To facilitate these kinds of
controlled measurements, certain programs must be installed on the systems
initiating the transactions, and they will have to be controlled by the organization
that wants the measurements. From a transaction monitoring point of view, there
are no differences between monitoring random or controlled transactions; the
same data can be gathered to the same level of granularity. The big difference is
that the monitoring organization knows that the transaction is being executed, as
well as the specifics of the initiating systems.
The main functions provided by IBM Tivoli Monitoring for Transaction
Performance: Web Transaction Performance are:
򐂰 For both unknown and well-known systems:
– Real-time transaction performance monitoring
– Transaction topology breakdown
– Automatic problem identification and baselining
򐂰 For well-known systems with specific programs installed:
– Transaction simulation based on recording and playback
TMTP Architecture
The basic architecture is shown in Figure 3-21 on page 119 and elaborated on in
further sections.
Figure 3-21 TMTP architecture
IBM Tivoli Monitoring for Transaction Performance (TMTP) establishes a
comprehensive transaction decomposition environment that allows users to
visualize the path of problem transactions, isolate problems at their source,
launch the IBM Tivoli Monitoring Web Health Console to repair the problem, and
restore good response time.
TMTP provides the following broad areas of functionality:
򐂰 Transaction definition
The definition of a transaction is governed by the point at which it first comes
in contact with the instrumentation available within this product. This can be
considered the edge definition: each transaction, upon encountering the edge
of the available instrumentation, is defined through policies that define each
transaction's uniqueness specific to the edge it encountered.
򐂰 Distributed transaction monitoring
Once a transaction has been defined at its edge, there is a need for
customers to define the policy that will be used in monitoring this transaction.
This policy should control the monitoring of the transaction across all of the
systems where it executes. To that end, monitoring policies are generic in
nature and can be associated with any group of transactions.
򐂰 Cross system correlation
One of the largest challenges in providing distributed transaction performance
monitoring is the correlation of subtransaction data across a range of systems
for a specified transaction. To that end, TMTP uses an ARM correlator in
order to correlate parent and child transactions.
The components of TMTP share a common infrastructure based on the IBM
WebSphere Application Server.
The first major component is the central Management Server and its database.
The Management Server governs all activities in the transaction monitoring
environment and controls the repository in which all objects and data related to
Web Transaction Performance activity and use are stored.
The other major component is the Management Agent. The Management Agent
provides the underlying communications mechanism and can have additional
functionality implemented on to it.
The following four broad functions can be implemented on a Management Agent:
򐂰 Discovery enables automatic identification of incoming Web transactions that
need to be monitored.
򐂰 Listening provides two components that can listen to real end-user
transactions being performed against the Web servers. These components,
also called listeners, are the Quality of Service and J2EE monitoring
components.
򐂰 Playback provides two components that can robotically play back, or execute,
transactions that have been recorded earlier in order to simulate user activity.
These components are the Synthetic Transaction Investigator and Rational®
Robot/Generic Windows components.
򐂰 Store and Forward can be implemented on one or more agents in your
environment to handle firewall situations.
Whenever a management agent discovers that thresholds are being violated, an
event is sent to the TMTP Management Server, which forwards the event
information to TEC, if configured to perform that task. Naturally, the TMTP
Agents support local event responses as well as the forwarding of clearing
events when the condition raising the event has been resolved.
TMTP in the Outlet Systems Management Solution
In the Outlet Systems Management Solution, a central TMTP Management
Server will be established. The control database will be hosted at the central
RDBMS system, just like all other databases in the Outlet Systems Management
Solution. Naturally, the server will be configured to forward all events to the TEC
server, and to integrate with the Web Health Console.
To facilitate monitoring of the Outlet Solution, TMTP Management Agents will be
installed on all the outlet servers, with the intent to deploy a J2EE policy to
monitor transaction performance of the TimeCard application of the Outlet
Solution.
As a starting point, the J2EE Monitoring Policy will monitor all transactions routed
to http://localhost:9080/TimeCard/*, and events will be generated for all
sub-transactions longer than one second.
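For illustration, the wildcard in that policy would cause request URIs to be
monitored or ignored as follows (the sample URIs are hypothetical):

http://localhost:9080/TimeCard/login         -> matched, monitored
http://localhost:9080/TimeCard/submitHours   -> matched, monitored
http://localhost:9080/Inventory/index        -> not matched, ignored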
3.9 Event management
The main objective of event management is to improve the service to customers
by collecting and analyzing meaningful events from the resources in the Outlet
Solution, in order to identify and resolve, proactively if at all possible, situations
or incidents that could generate problems.
The event management architecture in the Outlet Systems Management
Solution is based on two fundamental elements: the monitoring agents and the
event server. The primary responsibility of the monitoring agents is to identify
suspicious situations, collect information about those situations, and then send
this information to the interested parties. The event server represents the point
of collection, correlation, and processing of the events.
Thanks to the monitoring agents installed on the servers in the Outlet Solution
(the ITM monitoring engine, TMTP agents, the TEC logfile adapter, Inventory,
SWD, and more), it is possible to:
򐂰 Obtain early identification of error conditions
򐂰 Store the alarms in an event database for analysis and further statistics
򐂰 Generate the automatic opening of problem records
The information associated with each event must be defined initially and be
revisited on a periodic basis. Initially, the list of agents that can generate events
must also be identified and analyzed.
The monitoring agents capture an initial set of information and use it to identify
the type of event. Once the event has been identified, the agent can collect
further information that is unique to the specific type of event. Most agents have
the ability to initiate an automatic recovery action, if necessary, which attempts
to correct the problem locally as soon as possible. Subsequently, the event is
forwarded to the Event Manager (the TEC Server).
To limit the costs of processing the events, rules that filter out repeating events
can be established at the event-server level. The rules determine whether an
event is serious enough to need processing, or whether duplicate events can be
eliminated. In a dynamic environment such as the one at Outlet Inc., these rules
are not static and must be reviewed periodically to provide the most efficient
event handling. All the information for the defined event type is formatted and
collected for transmission to the event manager, which is responsible for any
further processing.
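As a sketch of such a filtering rule, a TEC rule along the following lines drops a
newly received duplicate of an event that is already in the event cache. The rule
and class names are hypothetical, and the exact predicate syntax should be
verified against the TEC rule language documentation for your release:

rule: duplicate_filter: (
    event: _event of_class 'Outlet_SWD_Event',
    reception_action: drop_dups: (
        first_duplicate(_event, event: _duplicate),
        drop_received_event
    )
).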
Once a monitoring agent event is received, the event manager verifies that the
event-sending agent is recognized and authorized, and routes the event
appropriately. The event manager correlates the received events, retaining the
history of the most recent events to determine whether there is a pattern that
can aid in deciding the type of action to perform. Based on the analysis, the
event manager can take one or more actions. The recovery can be executed on
the server side or on the agent side, proactively, to prevent problems for the end
users of the resource in question.
Once the event has been analyzed, it must be closed. Independent of the
particular method adopted, closing an event represents the formal statement
that the action (manual or automatic) associated with the event has completed.
Correlation rules can exist for which the closing of one event implies the closing
of other events.
Architecture and implementation
The central Tivoli Enterprise Console (TEC) Server receives events from all the
distributed systems in the Outlet Systems Management Solution and manages
them at the middle level.
Every event generated from any monitoring agent is forwarded to the TEC
Server. The TEC server, in turn, is responsible for the analysis and correlation of
the collected events.
The TEC is the focal point for event management in the Outlet Systems
Management Solution. As such, the TEC Server is responsible for receiving and
processing alarms and events originating from any type of resource in the Outlet
Inc. information system, including Tivoli solutions, applications, infrastructure
components, and more. Once the management rules have been established,
TEC will be able to help manage components automatically in real time, by
receiving and correlating events and, optionally, issuing commands or executing
Tivoli tasks to facilitate corrective actions.
Figure 3-22 on page 123 provides a general depiction of the event management
architecture to be adopted by Outlet Inc.
Figure 3-22 Event Management architecture
Tivoli Enterprise Console
The Tivoli Enterprise Console (TEC) will provide Outlet Inc. with a centralized
point of control, enabling the IT staff to control the Outlet Solution across all
systems. The TEC will act as a central collection point for alarms and events
received from the other Tivoli applications and will help ensure the identification
of impending problems.
The TEC processes thousands of events and alarms daily from network devices,
hardware systems, relational database management systems, Web servers,
applications and Tivoli partner and client applications.
In the Outlet Systems Management Solution the TEC Server will be implemented
in the Hub TMR and support the entire organization. Events will be stored in the
RDBMS located alongside the TEC server.
The implementation of Tivoli Enterprise Console will provide Outlet Inc. with a
complete mission-control view of the health of the entire distributed environment
from a single screen on a desktop. It will also provide specialized views of events
to different administrators, displaying only events relative to their responsibilities,
thereby optimizing staff resources.
TEC Components are:
򐂰 Events are the central units of information within TEC. An event describes any
significant change in the state of a resource, as detected by event adapters
and monitoring agents. The content of an event is a set of name/value pairs,
which are referred to as attributes.
򐂰 Event adapters are processes that typically reside on the same host as a
managed resource. When an adapter receives information from its source,
the adapter formats the information and forwards it to the event server.
򐂰 The event server is the central server that receives and processes all events.
The event server evaluates these events against a set of rules to determine
whether it can respond to or modify the event automatically.
򐂰 The Event Console provides a graphical interface for event display.
򐂰 The TEC Gateway is a process that runs on a gateway. Tivoli adapters send
their events to a TEC gateway, which forwards them to the TEC server
specified in the adapter’s configuration file. TEC gateways can be configured
with a configuration file distributed with an Adapter Configuration Profile
(ACP) using the Adapter Configuration Facility (ACF).
򐂰 RDBMS is a relational database management system that stores events on
behalf of the TEC. Two TEC event server components, the reception log and
event repository, are RDBMS tables accessed through the RIM.
򐂰 Event sources are where the events are generated.
Figure 3-23 on page 125 shows the typical event flow.
Figure 3-23 Event management flow with Tivoli Enterprise Console
The TEC server reception engine receives events, optionally logs them, parses
them against known BAROC class definitions, and then forwards the parsed
events to the rules engine.
To optimize the reception engine, make sure the reception buffer is large enough
to prevent overflow. The default is 500 events.
A healthy and well-configured TEC Server can support a sustained event
processing rate of about 10 events per second, or 36,000 events per hour. This
rate is highly dependent on the complexity and structure of the event rule base.
Obviously, the more complex the rule base, the more processing and
input/output (I/O) overhead is required. Depending on the number of event
sources, this might or might not be a reasonable number of events to expect.
The requirements for database storage are directly proportional to the number of
events. Events can be different sizes. For example, assume that the event size
ranges from 1 KB to 5 KB, with the average event requiring approximately 2 KB.
A database of 100 MB will then allow for storing 20,000 to 100,000 events. In
most cases, 200 MB of RDBMS space is sufficient. However, allocating an
additional ten percent for temporary space is recommended.
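As a worked example under hypothetical assumptions, suppose Outlet Inc.
retains 30 days of events at roughly 1,500 events per day, using the 2 KB
average size from above:

1,500 events/day x 30 days = 45,000 events
45,000 events x 2 KB = 90 MB
90 MB + 10% temporary space = 99 MB

which fits comfortably within the suggested 200 MB allocation.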
To keep the active TEC database as small and responsive as possible, careful
consideration should also be given to which events are sent and how they are
filtered; the closer to the source, the better. In addition, a procedure to manage
the database size, to prevent it from filling up, should be defined.
TEC Server and TEC Console configuration
Every TEC Server can manage approximately 30 TEC Consoles, and every
console can be personalized with a specific view of the events according to the
needs of the users. Each TEC Console can be accessed from different
workstations, but it is not possible to have more than one active instance of the
same console at the same time.
TEC in the Outlet Systems Management Solution
In the architecture for Outlet Inc., only one TEC Server and one TEC Console will
be implemented. In addition, as part of the initial phase, only standard, default
event-processing rules will be defined.
The scope of the initial deployment of the Outlet Solution does not allow for
defining specific rules for event filtering, correlation, and processing customized
for Outlet Inc. However, this task should be performed once the analysis of event
sources and volumes has been conducted.
Part 2. Management solution implementation
Now that the basic architecture for the Outlet Systems Management Solution
has been established, it is time to start implementation.
This section provides step-by-step instructions on how to install and customize
the various Tivoli components in order to implement the Outlet Systems
Management Solution.
The information in this section is presented according to the logical flow of tasks:
򐂰 Chapter 4, “Installing the Tivoli Infrastructure” on page 129
򐂰 Chapter 5, “Creating profiles, packages, and tasks” on page 271
Chapter 4. Installing the Tivoli Infrastructure
We have designed and architected the Tivoli Management infrastructure. Now, it
is time to install and deploy it. There are a few general prerequisites that need to
be established and verified to eliminate installation problems. While the most
general prerequisites are described in 4.2, “Installation planning and preparation”
on page 140, they are not intended to be all-encompassing. Issues such as
network connectivity are specific to a particular environment and can impact the
success of the installation.
This chapter discusses the steps needed to install, configure, and verify that the
Outlet Systems Management Solution environment functions correctly. The
following topics are included:
򐂰 4.1, “The Outlet Systems Management Solution” on page 130, including the
Proof-of-Concept environment, and placement of the Tivoli components.
򐂰 4.2, “Installation planning and preparation” on page 140 describes
preinstallation activities.
򐂰 4.3, “Installation and configuration” on page 147 provides easy-to-use,
step-by-step instructions for installing the various Tivoli product components.
򐂰 4.4, “Postinstallation configuration” on page 223, initial configuration of the
Tivoli environment: defining policies to enforce naming standards, subscribing
endpoints, controlling bandwidth usage, correlating events and so on.
4.1 The Outlet Systems Management Solution
Based on the overall discussion provided in Chapter 3., “The Outlet Systems
Management Solution Architecture” on page 37, a Proof-of-Concept architecture
for the Outlet Systems Management Solution is presented in Figure 4-1.
Figure 4-1 Outlet Systems Management Solution architecture
While developing this Proof-of-Concept architecture our target enterprise, Outlet
Inc., was envisioned as a huge, continent or country-wide retailer or financial
institution, with multiple brands and lines of business (LOB). We assumed that
the enterprise has a four-tier geographical structure including the following:
򐂰 Enterprise main office
򐂰 Multiple regional and country offices connected to the head office
򐂰 Multiple local and state-wide offices connected to the regional offices
򐂰 Multiple outlets connected to the local and state-wide offices
As you can see in Figure 4-1 on page 130, the Outlet Systems Management
Solution architecture follows the traditional Hub-Spoke architecture with
distributed managed nodes and gateways to allow for the lowest possible
bandwidth usage. In addition to the Tivoli Management Environment (TME)
management solution, the architecture includes an environment for database
support for all components as well as an environment for transaction monitoring.
It should be noted that the proposed architecture does not include two major
functional components which should be included to make the solution
production-ready. These two components are:
򐂰 Development TMR, TEC, and DB environments
򐂰 Staging TMR, TEC, and DB environments
From a production point of view, these two environments will be required to
develop and test new Tivoli objects such as software packages, resource models
for monitoring, scanning profiles, event rules and so on, and to ensure that they
will operate flawlessly in the production environment. Development and staging
environments are, however, expected to be copies of the production
environment, so the following information about how to establish the production
environment can easily be applied when implementing the development and
staging environments.
Design and development of procedures for testing, approving, and transferring
objects from the Development and Staging environments into the production
environment are so closely related to the Change Management philosophy of
each organization that they will not be addressed in this book.
4.1.1 Management environments
The TME architecture for the Outlet Systems Management Solution can be
divided into four functionally different Tivoli Management Environment (TME)
based types of environments, supported by two non-TME based environments.
However, the non-TME based environments will be monitored and managed
from the main Hub TMR environment:
򐂰 Non-TME environments
– The database environment
– The transaction performance management environment
򐂰 TME environments
– The Hub TMR environment
– Multiple Spoke TMR environments
– Multiple Regional environments
– Multiple Outlet environments
For all of these environments, the following technical and logistical prerequisites
are assumed to have been implemented prior to establishing the Outlet Systems
Management Solution:
򐂰 The operating platform for all systems is UnitedLinux 1.0 SP3.
򐂰 Every system involved in the systems management solution must satisfy the
Tivoli products’ hardware and software requirements, as described in 3.4.1,
“Suggested Tivoli hardware requirements” on page 62.
򐂰 Every system involved in the systems management solution (TMRs,
managed node, and endpoints) must be connected to the TCP/IP Network.
򐂰 All systems will have Domain Name Server (DNS) availability to perform IP
address resolution, including reverse mapping.
򐂰 WAN and LAN connections must be active and tested, and Outlet Inc.’s staff
will have to ensure that they remain permanently operational.
In the following sections, we give brief descriptions of each of the functional
environments, and describe the roles and responsibilities of each system in the
architecture.
The database environment
The purpose of the database environment is to provide RDBMS support for the
various solution components to store persistent data for specific management
tasks.
The final design of the database environment will be determined by a multitude
of factors that relate directly to client practices and policies as well as the
anticipated load on the systems. In a real implementation, the customer’s
database administrator (DBA) would be consulted for advice on how to configure
and manage the production RDBMS environment.
In the Outlet Systems Management Solution, we have decided to establish
RDBMS support for all systems management components in a single, dedicated
DB2 server named rdbms and provide access to this server through DB2 Client
code installed where necessary.
From a strictly functional perspective and without taking performance into
account, the DB2 Client code will have to be installed on all TME Servers owning
RIM objects, as well as on the TMTP Server. In our design, most RIM objects are
owned by the Hub TMR Server, some are replicated on the Spoke TMR servers,
and the TEC Server owns a single RIM. These systems are the only TME
Servers that require installation of the DB2 Client.
Table 4-1 details the systems involved in the database environment and their
functional roles.
Table 4-1 The database environment

򐂰 RDBMS Server
  Main software components: DB2 Server
  Functional role: Provide persistent storage in an RDBMS for all systems
  management components in the Outlet Systems Management Solution.
򐂰 Hub TMR Server, Spoke TMR servers, TEC Server, TMTP Server
  Main software components: DB2 Client
  Functional role: Establish access to the persistent RDBMS-based storage
  provided by the RDBMS Server.
The transaction performance management environment
The purpose of the transaction performance management environment is to
provide capabilities to monitor business transactions as they are executed in any
Web server in the infrastructure.
As already discussed in Chapter 1, “The challenges of managing an outlet
environment” on page 3, the availability and performance of business
transactions should be the primary indicator of the overall health of the
infrastructure.
The transaction monitoring environment in the Outlet Systems Management
Solution is established by a dedicated server, tmtpsrv, hosting the Tivoli
Monitoring for Transaction Performance Server. Agents will be installed on each
of the systems hosting the WebSphere Application Server in the Outlets. The
central database environment provides persistent storage facilities to the TMTP
Server, and in case of threshold violations, events are sent to the Tivoli
Enterprise Console Server for processing.
This implementation provides for basic monitoring capabilities of real-time,
back-end application transaction response times for applications hosted in the
WebSphere environment.
Table 4-2 displays the systems that are part of the transaction performance
environment and their functional roles.
Table 4-2 Transaction performance monitoring environments

򐂰 TMTP Server
  Main software components: Tivoli Monitoring for Transaction Performance
  Server
  Functional role: Provide central functions to manage the transaction
  performance management environment.
򐂰 ClientXX
  Main software components: Tivoli Monitoring for Transaction Performance
  J2EE Agent
  Functional role: Target of management/monitoring.
The transaction monitoring solution can be extended by deploying a number of
Windows-based systems that automatically execute prerecorded transactions on
a schedule. This provides additional transaction availability monitoring
capabilities; however, this is not considered part of the Outlet Systems
Management Solution. Refer to the IBM Tivoli Monitoring for Transaction
Performance User’s Guide for more information.
The Hub TMR environment
The intended use of the Hub TMR environment is to provide the central,
enterprise wide functions and facilities to be used by all of the Spoke TMR
environments. As such, the Hub TMR environment becomes the focal point for
managing the remote store infrastructure.
All master objects, including software packages, resource models, scanning
profiles, profile managers, and so on, are owned by, and maintained from, the
Hub TMR environment. All systems in the Hub TMR environment rely on the RDBMS
environment for persistent data storage.
Besides providing the main Tivoli Management Framework (TMF) functions for
managing authorizations, enforcing policies, distributing profiles, and so on, the
Hub TMR environment includes dedicated systems for event handling and
correlation, software distribution and inventorying, as well as console operations.
In the design of the Outlet Systems Management Solution architecture, it was
decided to use dedicated systems for these functions for the following reasons:
򐂰 Performance
The TEC Server, used for event management, quickly becomes a bottleneck,
unless placed on a dedicated system. The same is true for the primary
system responsible for importing inventory scans into the RDBMS.
򐂰 Capacity
The storage requirements for hosting the master code repository on the
srchost can become relatively significant, and justify a dedicated system.
򐂰 Reuse
The Web-based consoles used in the Tivoli environment can be implemented
on existing systems running WebSphere Application Server.
Table 4-3 displays the systems that are part of the Hub TMR environment and
their functional roles.
Table 4-3 The Hub TMR environment

򐂰 Hub TMR Server
  Main software components: Tivoli Management Framework
  Functional role: Establish the central focal point for monitoring and
  management of the Remote Store infrastructure.
򐂰 Event Server
  Main software components: Tivoli Enterprise Console
  Functional role: Provide event management capabilities for the entire
  Remote Store infrastructure.
򐂰 Source Host
  Main software components: Tivoli Configuration Manager
  Functional role: Provide central functions for configuration and release
  management processes that apply to the entire Remote Store
  infrastructure.
򐂰 Console Server
  Main software components: Tivoli Web-based consoles
  Functional role: Provide access to centrally located Web-based consoles to
  be used for monitoring and managing the Remote Store infrastructure by
  local and remote operators and administrators.
The Spoke TMR Environment
The main role of the Spoke TMR environment is to act as a proxy for the Hub
TMR environment by taking ownership of a subset of the managed resources
(TME endpoints) on behalf of the Hub TMR Server.
As the number of managed systems (TME endpoints) in the remote store
infrastructure increases, and the organizational responsibilities become more
and more dispersed, more Spoke TMR environments can be added to help
reduce the load on the Hub TMR Server and allow for easy distribution of
management functionality and authorizations.
Through an automated process, the Hub and Spoke TMR Servers exchange
information about all the management objects in the TME environment, thus
allowing the Spoke TMR Server to forward events, status information, inventory
scans and so forth to the components owned by the Hub TMR environment.
They also enable the components in the Hub TMR environment to issue
management actions against the managed objects (TME endpoints, and so on)
owned by the Spoke TMR environment. In addition, management actions against
managed resources owned by a particular Spoke TMR Server can be executed
from the Spoke TMR Server itself, thus allowing for distributing management
authority for a subset of the remote store infrastructure.
In the Outlet Systems Management Solution, we use only one Spoke TMR
environment to demonstrate the use of Spoke TMR systems while keeping the
complexity to a minimum.
Table 4-4 shows the systems that are part of the Spoke TMR environment and
their functional roles.

Table 4-4 The Spoke TMR environments

򐂰 Spoke TMR Server
  Main software components: Tivoli Management Framework
  Functional role: Provide a proxy focal point for endpoint interactions for a
  subset of the Remote Store infrastructure.
The regional environments
In the Outlet Systems Management Solution architecture we assumed that a fast
network connection exists between the central site and a limited number of
distributed regional offices. The fast connection allows for optimizing the
distribution of management objects, primarily software packages, by establishing
remote depots that always contain copies of the most used distributions. The
depot is self-maintained, based on a size limit and a least-recently-used
algorithm, but packages can be preloaded to depots prior to distribution to the
endpoints.
Another use of the managed node in the regional environment is to take
responsibility for hosting the definitions of managed resources (TME endpoints)
for a particular region. By defining the Regional Server as the primary gateway,
and the Spoke TMR Server as the backup gateway, fault tolerance is built into
the Outlet Systems Management Solution to help avoid single points of failure in
the management solution.
Table 4-5 on page 137 displays the systems that are part of the Regional
environment and their functional roles.
Table 4-5 The PoC Regional environments

򐂰 Regional Server
  Main software components: Tivoli Management Framework
  Functional role: Provide gateway and remote depot functionality to provide
  fault tolerance and limit bandwidth usage.
The Outlet environments
In the Outlet Systems Management Solution architecture, the Outlet and
Regional environments share similar responsibilities. Neither is strictly required,
although both are highly recommended, to allow for timely installation of software
packages in a low-bandwidth environment. By establishing a managed node and
a depot as close to the target systems as possible, meaning in the store or at a
subregional office, we can move copies of the profiles and installation images
within close proximity of the targets in advance, using a predetermined fraction of
the available bandwidth during working hours, and enable fast profile deployment
and detached software installations. The term detached software installation is
used to denote a scheduled software installation that can be invoked at a time
when the TMR server (the network connection) is not available to the managed
resource (the TME endpoint).
Table 4-6 displays the systems that are part of the Outlet environment and their
functional roles.
Table 4-6 The Outlet environments

򐂰 Outlet Server
  Main software components: Tivoli Management Framework
  Functional role: Provide gateway and remote depot functionality to provide
  fault tolerance and limit bandwidth usage.
򐂰 ClientXX
  Main software components: WebSphere Application Server, DB2 Server,
  and Tivoli Management Agents
  Functional role: Target of management/monitoring.
4.1.2 Functional component locations
Based on the solution component selection, the locations of the various
components in the environment are detailed in the following tables:
򐂰 Non-TMR servers: Table 4-7 on page 138
򐂰 The hubtmr environment: Table 4-8 on page 138
򐂰 The spoketmr environment: Table 4-9 on page 139
Non-TME related components
Table 4-7 details the location of functional components for the non-TME based
systems.
Table 4-7 Functional component location details for non-TMF systems

򐂰 Database Server
  IP address: 10.1.1.3
  Hostname: rdbms.demo.tivoli.com
  Software components: UnitedLinux 1.0 SP3, DB2 UDB 8.2
򐂰 TMTP Server
  IP address: 10.2.1.1
  Hostname: tmtpsrv.demo.tivoli.com
  Software components: UnitedLinux 1.0 SP3, WAS 5.1 Server, DB2 8.2
  Client, TMTP V5.2 Server
Hub TMR components
Table 4-8 details the location of functional components in the Hub TMR
environment.
Table 4-8 Hub TMR functional component location details

򐂰 Hub TMR Server
  IP address: 10.1.1.1
  Hostname: hubtmr.demo.tivoli.com
  Software components: UnitedLinux 1.0 SP3, DB2 Client v8.2, TMF 4.1.1,
  JRE130, JRIM411, JavaHelp, JCF411, mdist2gui, TEC39JRE,
  TEC_UI_SRVR, TECJCONSOLE, ACF, JCF41, InventoryServer,
  InventoryGateway, swdis, swdisgw, swdisjps, apm, ccm, TMNT_3.6.2,
  ITMCmptSvcs, ITMWAS, DB2ECC_V2R2
򐂰 Event Server
  IP address: 10.1.1.2
  Hostname: tec.demo.tivoli.com
  Software components: UnitedLinux 1.0 SP3, DB2 Client v8.2, TMF 4.1.1,
  JRE130, JCF411, TEC39JRE, TEC_SERVER, TECJCONSOLE
򐂰 Source Host Server
  IP address: 10.1.1.4
  Hostname: srchost.demo.tivoli.com
  Software components: UnitedLinux 1.0 SP3, TMF 4.1.1, JRE130, JCF411,
  swdis
򐂰 Console Server
  IP address: 10.1.1.5
  Hostname: console.demo.tivoli.com
  Software components: UnitedLinux 1.0 SP3, TME Web Health Console
Spoke TMR components
Table 4-9 details the location of functional components in the Spoke TMR
environments.
Table 4-9 Spoke TMR functional component location details
򐂰 Spoke TMR Server
  IP address: 10.1.1.6
  Hostname: spoketmr.demo.tivoli.com
  Software components: UnitedLinux 1.0 SP3, DB2 Client v8.2, TMF 4.1.1,
  JRE130, JCF411, JRIM41, TEC39JRE, ACF, JCF41, InventoryServer,
  InventoryGateway, swdisgw, swdisjps, apm, TMNT_3.6.2, ITMCmptSvcs,
  ITMWAS, DB2ECC_V2R2
򐂰 Region Server
  IP address: 10.2.0.10
  Hostname: region01.demo.tivoli.com
  Software components: UnitedLinux 1.0 SP3, TMF 4.1.1, JRE130, JCF411,
  TEC39JRE, ACF, InventoryGateway, swdisgw, TMNT_3.6.2, ITMCmptSvcs,
  ITMWAS, DB2ECC_V2R2
򐂰 Outlet Server
  IP address: 10.2.1.10
  Hostname: outlet01.demo.tivoli.com
  Software components: UnitedLinux 1.0 SP3, TMF 4.1.1, JRE130, JCF411,
  TEC39JRE, ACF, InventoryGateway, swdisgw, TMNT_3.6.2, ITMCmptSvcs,
  ITMWAS, DB2ECC_V2R2
򐂰 Client0x
  IP address: 10.2.1.x
  Hostname: client02.demo.tivoli.com
  Software components: UnitedLinux 1.0 SP3, WAS 5.1 Server, DB2 Server
  8.2, TMTP 5.3 MA
4.2 Installation planning and preparation
Besides architecting and designing the systems management solution, a few
prerequisites have to be verified prior to installing the Tivoli infrastructure. The
main items which need to be in place are:
򐂰 “Create naming standards for all Tivoli related objects”
򐂰 “Operating platform preparation” (required for all systems participating in
the Outlet Systems Management Solution environment)
򐂰 “Enabling SMB server on the srchost server”
򐂰 “Establishing a code library”
4.2.1 Create naming standards for all Tivoli related objects
Table 4-10 outlines the naming standards that will be applied to the Tivoli objects
in the Outlet Systems Management Solution environment.
Table 4-10 Naming standards for the Outlet Systems Management Solution
򐂰 TMR Servers
  Example: <hostname> (descriptive name)
򐂰 Managed nodes
  Example: <hostname> (descriptive name)
򐂰 Gateways
  Example: <hostname>-gw
򐂰 Endpoints
  Example: <hostname>-ep
򐂰 Policy regions
  Examples: descriptive names; names are enforced by the Tivoli products
  creating the policy regions:
  <TMR>_region_<REGION>_PR for profiles
  <TMR>_region_<REGION>_EP for endpoints
򐂰 Profile Managers
  Examples: <Policy-Region>_PM_<type>, where type denotes the type of
  profiles hosted (INV, ITM, SWD, or TEC), and
  <Policy-Region>_PMS_<opsys>, where opsys denotes the operating
  system of the endpoints subscribed
򐂰 Profiles: no specific standards will be applied
򐂰 Task Libraries: no specific standards will be applied
򐂰 Tasks: no specific standards will be applied
򐂰 Query Libraries: no specific standards will be applied
򐂰 Queries: no specific standards will be applied
򐂰 InventoryConfig
  Examples: <Policy-Region>_INV_HW for hardware scanning,
  <Policy-Region>_INV_SW for software scanning, and
  <Policy-Region>_INV_CUSTOM for customized functionality
򐂰 SoftwarePackage
  Example: <product_name>^<version>
򐂰 Tmw2kProfiles: no specific standards will be applied
򐂰 ACF Profiles: not used
򐂰 RIMs: names are enforced by the Tivoli products using the various RIM
  objects
򐂰 APM plans: no specific standards will be applied
4.2.2 Operating platform preparation
Important: The following activities have to be performed for all systems that
will be part of the Outlet Systems Management Solution.
Prior to performing the initial operating system customization, you should
carefully review the setup with your local security administration. The
proposed customization has been designed for convenience in setting up the
proof-of-concept environment, and allows access to facilities that your local
security policies might not allow.
The minimum requirement is that a central user can perform remote logins.
In a production environment, we recommend using ssh wherever possible.
To set up the basic functionality that will allow the user root to perform the
required actions on our UnitedLinux systems, you should consider performing
these steps:
1. Allow root to login to the system.
In the file /etc/pam.d/login, comment out the line:
auth required pam_securetty.so
2. Allow remote logins to the system.
In the file /etc/pam.d/rlogin, comment out the line:
auth required pam_securetty.so
3. Enable the user root to open a telnet session. In the file /etc/securetty add
one line for each session you will allow. To allow three simultaneous root
sessions, you need three similar lines:
pts/0
pts/1
pts/2
4. Enable ftp. This procedure enables the local users, including the root user, to
open ftp sessions and write in the local file system:
a. In the file /etc/vsftpd.conf file, uncomment the following two lines by
removing the preceding hash (#) character:
local_enable=yes
write_enable=yes
b. Allow the user root to open an ftp session. In the file /etc/ftpusers file,
comment out the line that specifies root.
# root
c. Allow anonymous ftp logins to the system. In the file /etc/pam.d/vsftpd,
comment out the line:
auth sufficient pam_ftp.so
5. Enable the ftp server to be started by the inetd daemon. In the file
/etc/inetd.conf uncomment the line:
vsftpd
Issue the rcinetd restart command to activate the change.
6. Enable automatic start of the inetd daemon on system restart. From the
directory /etc/rc.d/rc3.d, create two symbolic links, K19inetd and S04inetd,
both pointing to the inetd init script:
cd /etc/rc.d/rc3.d
ln -s ../inetd K19inetd
ln -s ../inetd S04inetd
7. Restart your system.
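Some of the edits in steps 1 through 4 can also be scripted. The following is a
minimal, illustrative sketch, assuming the file locations and line contents shown
above and a version of sed that supports in-place editing; review it against your
local security policies before running it:
#!/bin/bash
# Relax login restrictions for the proof-of-concept systems.
# Keep a .orig backup of every file that is modified.
for f in /etc/pam.d/login /etc/pam.d/rlogin; do
    cp $f $f.orig
    # Comment out the pam_securetty entry to allow root logins (steps 1-2).
    sed -i 's|^auth.*pam_securetty\.so|#&|' $f
done
# Enable local users and write access for the ftp server (step 4a).
cp /etc/vsftpd.conf /etc/vsftpd.conf.orig
sed -i 's|^#[[:space:]]*\(local_enable=yes\)|\1|' /etc/vsftpd.conf
sed -i 's|^#[[:space:]]*\(write_enable=yes\)|\1|' /etc/vsftpd.conf
# Allow root to open ftp sessions (step 4b).
cp /etc/ftpusers /etc/ftpusers.orig
sed -i 's|^root$|# root|' /etc/ftpusers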
Preparing the Tivoli environment
The following applies only to systems which assume the role of Tivoli Server or
managed node.
Before you can use the Tivoli Desktop or commands, you must set up the Tivoli
environment variables. You can manually run one of the scripts provided by Tivoli
Management Framework, or add the following lines to the /etc/profile.local file on
the UnitedLinux operating system. The file to modify can differ based upon the
UNIX OS you are using.
if [ -f /etc/Tivoli/setup_env.sh ]; then
. /etc/Tivoli/setup_env.sh
fi
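To verify that the environment has been set up, open a new login shell and
check one of the variables defined by the setup script, for example BINDIR,
which points to the directory containing the Tivoli binaries:
echo $BINDIR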
4.2.3 Enabling SMB server on the srchost server
Prior to establishing the code library, enable either the NFS or the SMB server
components on the srchost system. Either of these servers will enable access to
the srchost file systems from other systems, thereby allowing for software
installations using the images that reside on the srchost system. Because the
SMB server, also known as the samba server, component will allow access from
Windows systems, we will use that in the Outlet Systems Management Solution.
Complete the following steps to enable and configure the samba server on the
srchost system:
1. Logon to the srchost system as root.
2. Verify that the directory /img has been created.
3. Create the credentials for the user root.
smbpasswd -a root <password>
4. Edit the file /etc/samba/smb.conf. Append the lines in Example 4-1 to the file:
Example 4-1 Lines to append to /etc/samba/smb.conf
[image]
comment = code images
path = /img
browsable = Yes
printable = No
read only = No
create mask = 0775
directory mask = 775
5. Stop the samba server using the command rcsmb stop.
6. Allow the samba server to be started on demand by the inetd daemon, by
appending the following lines to the file /etc/inetd.conf:
netbios-ns dgram udp wait root /usr/sbin/nmbd nmbd
netbios-ssn stream tcp nowait root /usr/sbin/smbd smbd
7. Start the samba server manually, using the command rcsmb start, or restart
the inetd service with the rcinetd restart command.
8. Now, from other systems in the infrastructure that have an smb client
installed, typically UnitedLinux or Windows based systems, you can access
the /img file system on the srchost using the following command:
smbmount //srchost/image <mount-point> -o username=root,password=<password>
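Optionally, before mounting, you can verify that the share is being exported. As
a quick check (a sketch, assuming the smbclient utility is installed on the
system), list the shares offered by srchost:
smbclient -L //srchost -U root
The listing should include the image share defined in Example 4-1.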
4.2.4 Establishing a code library
Prior to installation, it is a good practice to create a library of all the installation
images to be used. This will help ease the installation process and ensure that
the same level of code is used throughout. In addition, a central code library will
help ensure that local modifications, if any, are applied to all systems involved.
The natural location of the code library is the Source Host system.
The library structure in Example 4-2 allows for differentiating installation images
between versions and platforms for the same components. The proposed
structure will also allow for differentiating the file read-write authorities to facilitate
central logging and response file generation.
Example 4-2 Library structure code
<img_dir>/code/<vendor>/<product>/<version.release.level>/[all | <component> |
<patchID>]/<platform>
<img_dir>/spb/<vendor>/<product>/<version.release.level>/[all | <component> |
<patchID>]/<platform>
<img_dir>/tools/<vendor>/<product>/<version.release.level>/[all | <component> |
<patchID>]/<platform>
<img_dir>/rsp/<vendor>/<product>/<version.release.level>/[ all | <component> |
<patchID> ]/<platform>/[ template | <hostname> ]
<img_dir>/logs/<vendor>/<product>/<version.release.level>/[ all | <component> |
<patchid>]/<hostname>
where:
<platform> is one of: generic, aix, hpux, solaris, linux-i386, linux-s390,
win-ix86, and so on.
In the proposed library structure, the intended use of the various high-level
directories is as follows:
򐂰 code
This structure contains installation files for the various products. The
installation image for IBM WebSphere Application Server for Linux on Intel®
is in the directory:
/img/code/websphere/was/510/server/Linux-IX86
FixPack1 would be found in:
/img/code/websphere/was/510/FixPack1/generic
The systems administrators have to be able to modify the content of this
directory structure. All potential users of installation images should have
read-execute (755) access to all the files.
The use of the code directory will, under normal conditions, be restricted to
developing and testing new software packages. In the Outlet Systems
Management Solution, we assumed that the installation images will be built
into software package blocks, which in turn will be distributed as a
self-contained installation package to the target systems.
򐂰 spb
This directory will contain the production-ready software packages that will be
distributed to the target systems. During the software package build process,
the installation images from the code directory will be embedded into the
software package blocks, which then will be stored in the spb path.
򐂰 tools
This directory is the location where locally developed installation scripts and
helper files are stored. To be able to differentiate between versions and
components the same structure as used for installation images is used, and
the file permissions should be the same (755).
If you develop a script to install a Tivoli managed node using default
installation parameters, the location would be:
/img/tools/tivoli/TMF/411/MN/Linux-IX86/instMN.sh
򐂰 rsp
Most installation procedures today allow for providing basic information to be
used during installation of a product in a separate response file, which can be
prepared in advance. This allows the installation to be performed unattended
and automatically, without requiring an operator to provide installation specific
details at install time. The rsp directory is intended to contain host-specific
response files for installing products and components. These response files
will usually be generated as part of the installation procedure based on
templates prepared by the system administrator.
The location of the response file for installing IBM DB2 UDB Server v8.2
Enterprise Edition on a system named db2srv would be:
/img/rsp/db2/udb-ee/820/server/Linux-IX86/db2_server_820.rsp
򐂰 logs
To keep logs available for review or auditing, it is a good practice to establish
a log directory structure. All logs from installation activities are held here. To
allow the user IDs used to perform installations to create and modify the log
files, these should have read-write access to this directory structure.
Naturally, administrators need to be able to maintain the space, so they also
need read-write permissions as a minimum. The recommended permissions
are 766.
The location of the log files from the installation of IBM Tivoli Monitoring for
Transaction Performance v5.3 Server on a host named tmtpsrv might be
found in:
/img/logs/tivoli/TMTP/530/server/generic/install.log
Note that the recommendation is to establish a common root, or base, directory
for all of the above directories. In the above examples, this root directory is
known as /img. Establishing a common base directory will allow for easy control
of, and access to, the central code library from remote machines using NFS or
SMB mounts.
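Creating the parallel trees for a new product is repetitive and lends itself to
scripting. The following is a minimal sketch, assuming the /img base directory
used in the examples above; the vendor, product, version, component, and
platform values are placeholders to adapt for each product:
#!/bin/bash
# Create the code library skeleton for one product/version/platform.
BASE=/img
V=tivoli; P=TMF; R=411; C=all; PL=generic   # placeholder values
for tree in code spb tools rsp; do
    mkdir -p $BASE/$tree/$V/$P/$R/$C/$PL
    chmod 755 $BASE/$tree/$V/$P/$R/$C/$PL
done
# The logs tree is organized per host and needs read-write access (766).
mkdir -p $BASE/logs/$V/$P/$R/$C
chmod 766 $BASE/logs/$V/$P/$R/$C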
4.3 Installation and configuration
This section details the steps to be performed to install and configure the
necessary components in the environment as described in “Establishing an
installation roadmap” on page 147.
4.3.1 Establishing an installation roadmap
Once the prerequisites have been verified, the installation activities need to be
sequenced in a way which will allow for a smooth installation path. Most of the
Tivoli components that need to be installed have inherent relationships which
require the installation activities to be serialized accordingly.
The following list outlines the main activities for establishing the systems
management framework according to the architecture described in Chapter 3,
“The Outlet Systems Management Solution Architecture” on page 37. This
method will allow for central execution and control of management activities
related to configuration, release, change, availability, performance, and incident
management of the outlet environments.
This list is the installation roadmap, the order in which you install and configure
the components. Each activity links to sections with further detail.
1. 4.3.2, “Installing a working database environment” on page 149.
a. “Prepare for DB2 installation” on page 149
b. “Install and configure DB2 Server” on page 150
c. “Creating DB2 instances and databases” on page 150
d. “Verifying DB2 Server Installation” on page 151
e. “Installing DB2 Client” on page 152
f. “Verifying DB2 client operation” on page 154
2. 4.3.3, “Establishing the TME infrastructure” on page 155.
a. “Installing and configuring the Hub TMR Server” on page 156
b. “Installing and configuring Hub TMR managed nodes” on page 162
c. “Installing and configuring the Spoke TMR server” on page 170
d. “Installing and configuring Spoke managed nodes” on page 171
3. Install and configure TME-based products.
a. “Install and configure TMF 4.1.1 Components and Patches” on page 171
b. 4.3.4, “Tivoli Enterprise Console v3.9” on page 181
c. 4.3.5, “Installing Tivoli Configuration Manager” on page 187
d. 4.3.6, “IBM Tivoli Monitoring installation” on page 197
e. 4.3.8, “IBM Tivoli Monitoring for Web Infrastructure” on page 209
f. “Installing WebSphere Application Server v5.1” on page 204
g. 4.3.9, “IBM Tivoli Monitoring for databases” on page 213
4. Install and configure non-TME products.
a. 4.3.10, “TMTP Management Server” on page 216
5. 4.4, “Postinstallation configuration” on page 223
a. 4.4.1, “Framework customization” on page 223
b. “Connect TMRs and define resource exchange interval” on page 226
c. “Setup gateways and repeaters” on page 228
d. 4.4.2, “Enabling MDIST2” on page 230
e. 4.4.4, “Configuring the Tivoli Enterprise Console” on page 234
f. 4.4.5, “Customizing the Inventory” on page 238
g. 4.4.6, “Configuring Software Distribution” on page 245
h. 4.4.7, “Enabling the Activity Planner” on page 247
i. 4.4.8, “Enabling Change Manager” on page 252
j. 4.4.10, “IBM Tivoli Monitoring for Web Infrastructure” on page 259
k. 4.4.11, “IBM Tivoli Monitoring for Databases” on page 262
l. 4.4.3, “Enabling Tivoli End-User Web Interfaces” on page 232
m. 4.4.9, “IBM Tivoli Monitoring configuration” on page 255
n. 4.4.12, “IBM Tivoli Monitoring for Transaction Performance” on page 266
Naturally, each of these installation activities in the roadmap includes multiple
subtasks, some of which cannot be performed until related components have
been implemented.
Sections 4.3.2, “Installing a working database environment” on page 149 through
4.4, “Postinstallation configuration” on page 223 describe the main installation
and configuration tasks in this roadmap. These tasks establish the core
functionality for each component or component group, and the subtasks related
to basic cross-component configuration. They are necessary to meet security
and organizational requirements, and are required to enable certain functions in
the proposed architecture.
These functions will be explained and described in 4.4, “Postinstallation
configuration” on page 223.
In summary, the steps required to build the Outlet Systems Management
Solution include:
– “Installing a working database environment” on page 149
– “Installing and configuring the Hub TMR Server” on page 156
– “Installing and configuring Hub TMR managed nodes” on page 162
– “Installing and configuring the Spoke TMR server” on page 170
– “Installing and configuring Spoke managed nodes” on page 171
– “Install and configure TMF 4.1.1 Components and Patches” on page 171
– 4.3.5, “Installing Tivoli Configuration Manager” on page 187
– 4.3.4, “Tivoli Enterprise Console v3.9” on page 181
– 4.3.6, “IBM Tivoli Monitoring installation” on page 197
– 4.3.8, “IBM Tivoli Monitoring for Web Infrastructure” on page 209
– 4.3.9, “IBM Tivoli Monitoring for databases” on page 213
– 4.3.10, “TMTP Management Server” on page 216
4.3.2 Installing a working database environment
The first step in the installation roadmap is to install and configure the database
environment. The database environment consists of a DB2 UDB Enterprise
Edition Server v8.2 on the rdbms system, and DB2 Clients v8.2 on the hubtmr,
spoketmr, tec, and tmtpsrv systems.
In the Outlet Systems Management Solution, the DB2 server will provide
database services to the entire Tivoli environment. In addition, the DB2 server
will contain a Tivoli endpoint in order to facilitate monitoring of the databases
using Tivoli Monitoring for DB2.
To begin, set up a DB2 database server on the rdbms machine. This includes
the installation and configuration of DB2.
Prepare for DB2 installation
To install the DB2 software, perform the following steps:
1. Download DB2 Universal Database™ Enterprise Server Edition v8.2 for Linux
and unpack the installation image to the code directory on the srchost system.
The suggested location is:
<img_dir>/code/db2/udb-ee/820/server/all/Linux-IX86
2. Make the installation image accessible from the rdbms system using NFS or
SMB mount:
smbmount //srchost/img /mnt -o username=root,password=<password>
Install and configure DB2 Server
For documentation on DB2 server installation, refer to the DB2 Information
Center Web site:
http://publib.boulder.ibm.com/infocenter/db2help/index.jsp
Navigate to Installing → DB2 Universal Database for Linux, UNIX, and
Windows → DB2 Servers → DB2 UDB Enterprise Server Edition
(non-partitioned) → Linux → Starting the DB2 Setup wizard.
For documentation on system requirements, refer to the DB2 Information Center
Web site and navigate to Installing → DB2 Universal Database for Linux,
UNIX, and Windows → DB2 Servers → DB2 UDB Enterprise Server Edition
(non-partitioned) → Linux → Installation requirements.
1. Start the Installation Wizard and follow the instructions in the DB2 Information
Center.
2. The DB2 initial installation asks you whether or not to create an instance and
an administration server. Choose yes for both. It then prompts you for the
information needed to set up an instance, an administration server, and a
fenced user ID (user name, password, home directory, and so forth). These
fields are straightforward, but must be filled out for the DB2 environment to be
set up fully.
Creating DB2 instances and databases
The Tivoli infrastructure requires several databases to support its operation.
Table 4-11 outlines the required and recommended setup of database instances
and databases.
Table 4-11 DB2 Instance and Database Matrix

򐂰 Tivoli Inventory
  Instance name/owner: db2inv; TCP port: 60004
  Database name (related RIM): inv_db (invdh_1, inv_query)
򐂰 MDIST2
  Instance name/owner: db2mdist; TCP port: 60008
  Database name (related RIM): mdist_db (mdist2)
򐂰 TCM - Software Distribution, Activity Planner, and Change Manager
  Instance name/owner: db2swd; TCP port: 60012
  Database names (related RIM): swd_db (n/a), apm_db (planner),
  ccm_db (ccm)
򐂰 Tivoli Enterprise Console
  Instance name/owner: db2tec; TCP port: 60016
  Database name (related RIM): tec_db (tec)
򐂰 TMTP
  Instance name/owner: db2tmtp; TCP port: 60020
  Database name (related RIM): tmtp_db (n/a)
Creating DB2 instances
Create the required database instances using the following commands. The
database instances must be created while logged in as root.
1. Create the instance user and set its password:
useradd -g db2grp1 -s /bin/bash -m -d /home/<instance_name> <instance_name>
passwd <instance_name>
2. Create the DB2 instance:
/opt/IBM/db2/V8.1/instance/db2icrt -a server -p <port_number> -u db2fenc1
-s ese <instance_name>
Note: We assumed that the fenced user to be used by the instance
(db2fenc1) and the instance group (db2grp1) have been created prior to
creating this instance. If a default instance was created during DB2 Server
installation, these prerequisites will have been defined.
3. Repeat the above commands to create each of the required instances.
Tip: Specifying the port number while creating the instance will prevent
DB2 connectivity problems later. Each instance will reserve four ports. On
UnitedLinux, the DB2 ports typically start at 60000 and increment up by one.
For example:
db2icrt -a server -p 60004 -u db2fenc1 db2inv
This command adds the following four lines to the /etc/services file:
DB2_db2inv      60004/tcp
DB2_db2inv_1    60005/tcp
DB2_db2inv_2    60006/tcp
DB2_db2inv_END  60007/tcp
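Because the same two commands are repeated for every instance in Table 4-11,
they can be wrapped in a small loop. This is an illustrative sketch only, assuming
the db2grp1 group and the db2fenc1 fenced user already exist as noted above;
set each instance owner’s password afterwards with passwd:
#!/bin/bash
# Create all five DB2 instances from Table 4-11 in one pass.
for spec in db2inv:60004 db2mdist:60008 db2swd:60012 db2tec:60016 db2tmtp:60020
do
    inst=${spec%:*}
    port=${spec#*:}
    useradd -g db2grp1 -s /bin/bash -m -d /home/$inst $inst
    /opt/IBM/db2/V8.1/instance/db2icrt -a server -p $port -u db2fenc1 -s ese $inst
done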
Creating the databases
To create the databases, perform the following steps:
1. Create the required databases using the following commands. Each DB2
database must be created while logged in as the instance owner.
su - <instance_name>
db2 create database <database_name>
2. Repeat these commands to create each of the required databases.
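If you prefer to create all of the databases from Table 4-11 in one pass, the
following commands (a sketch, assuming the instances above were created
successfully and that each instance owner’s login profile initializes the DB2
environment) can be run directly from root:
su - db2inv -c "db2 create database inv_db"
su - db2mdist -c "db2 create database mdist_db"
su - db2swd -c "db2 create database swd_db"
su - db2swd -c "db2 create database apm_db"
su - db2swd -c "db2 create database ccm_db"
su - db2tec -c "db2 create database tec_db"
su - db2tmtp -c "db2 create database tmtp_db"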
Verifying DB2 Server Installation
To verify the server installation, perform the following steps:
1. While you are logged in as root and from the command line on the rdbms
server, use the following commands to verify that the database has been
created:
su - <instance_owner>
db2 list database directory
You should see the database for this instance listed, similar to the results in
Example 4-3:
Example 4-3 Database created
Database alias                      = INV_DB
Database name                       = INV_DB
Local database directory            = /home/db2inst1
Database release level              = a.00
Comment                             =
Directory entry type                = Indirect
Catalog database partition number   = 0
2. Repeat this procedure for all instances.
Installing DB2 Client
Preparing the DB2 clients on the hubtmr, spoketmr, tec, and tmtpsrv systems
requires completion of the following steps on each system:
1. “DB2 client installation”
2. “Establishing a connection to the DB2 Server Instances”
3. “Cataloging databases”
4. “Verifying DB2 client operation”
DB2 client installation
To install the DB2 client on the hubtmr, spoketmr, tec, and tmtpsrv systems,
make the installation image available to each system using smbmount (as shown
in “Prepare for DB2 installation” on page 149), start the Installation Wizard, and
follow the instructions listed in the DB2 Information Center.
For documentation on DB2 client installation, refer to the DB2 Information Center
Web site:
http://publib.boulder.ibm.com/infocenter/db2help/index.jsp
Navigate to Installing → DB2 Universal Database for Linux, UNIX, and
Windows → DB2 clients → UNIX.
For documentation on system requirements, from the same Web site navigate to
Installing → DB2 Universal Database for Linux, UNIX, and Windows → DB2
clients → UNIX → Linux → Installation requirements.
Establishing a connection to the DB2 Server Instances
Once the DB2 client has been installed, each client needs to catalog the DB2
Server instances which host the databases accessed from that client system.
In the Outlet Systems Management Solution, the tec system only uses the
tec_db database hosted by the db2tec instance, the tmtpsrv system uses the
tmtp_db database hosted by the db2tmtp instance, and the hubtmr system
requires access to all the remaining databases and instances. Refer to
Table 4-11 on page 150 for detailed information.
򐂰 To make an instance known to a client, issue the db2 catalog tcpip node
command from the db2inst1 user, created on the client system during DB2
client installation.
򐂰 When logged in as root, issue the following commands:
su - db2inst1
db2 catalog tcpip node <instancename> remote rdbms server <portnumber>
where instancename and portnumber refers to the instance specific
information found in Table 4-11 on page 150.
򐂰 To catalog the db2tec instance on the tec system, the command would be:
db2 catalog tcpip node db2tec remote rdbms server 60016
򐂰 To catalog the db2tmtp instance on the tmtpsrv system, the command is:
db2 catalog tcpip node db2tmtp remote rdbms server 60020
򐂰 The commands to catalog the remaining instances at the hubtmr are:
db2 catalog tcpip node db2inv remote rdbms server 60004
db2 catalog tcpip node db2mdist remote rdbms server 60008
db2 catalog tcpip node db2swd remote rdbms server 60012
Cataloging databases
Similar to the DB2 instances, each database accessed from a client has to be
cataloged. The process resembles that for cataloging instances; however, the
command to use is db2 catalog database, which has to be invoked from the
db2inst1 user.
1. While logged in as root, issue the following commands to catalog a database:
su - db2inst1
db2 catalog database <dbname> as <dbname> at node <instancename>
authentication server
dbname and instancename refer to the database specific information found in
Table 4-11 on page 150.
2. To catalog the tec_db database from the tec system, the command is:
db2 catalog database tec_db as tec_db at node db2tec authentication server
3. To catalog the tmtp_db database from the tmtpsrv system, the command is:
db2 catalog database tmtp_db as tmtp_db at node db2tmtp authentication
server
4. To catalog the required databases at the hubtmr system, the commands are
as in Example 4-4:
Example 4-4 Cataloging hubtmr system databases
db2 catalog database inv_db as inv_db at node db2inv authentication server
db2 catalog database mdist_db as mdist_db at node db2mdist authentication
server
db2 catalog database swd_db as swd_db at node db2swd authentication server
db2 catalog database apm_db as apm_db at node db2swd authentication server
db2 catalog database ccm_db as ccm_db at node db2swd authentication server
Verifying DB2 client operation
To verify the DB2 Client installation and configuration we need to establish a
connection to the newly defined databases. Since no database tables have been
created at this point, the verification cannot select actual data.
1. To connect to a database, issue the db2 connect command from the db2inst1
user.
2. While logged in as root, issue the following commands:
su - db2inst1
db2 connect to <dbname> user <instancename> using <password>
dbname and instancename refer to the database specific information found in
Table 4-11 on page 150. password is the password assigned to the instance
user on the rdbms server.
3. To verify the connection from the tec system to the tec_db database, issue
the following command:
db2 connect to tec_db user db2tec using <password>
4. To verify the connection from the tmtpsrv system to the tmtp_db database,
issue the following command:
db2 connect to tmtp_db user db2tmtp using <password>
5. To verify database access for the hubtmr system, try the following commands,
as in Example 4-5:
Example 4-5 Verifying databases for the hubtmr system
db2 connect to mdist_db user db2mdist using <password>
db2 connect to inv_db user db2inv using <password>
db2 connect to swd_db user db2swd using <password>
db2 connect to apm_db user db2swd using <password>
db2 connect to ccm_db user db2swd using <password>
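After each successful test, close the connection again before testing the next
database:
db2 connect reset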
4.3.3 Establishing the TME infrastructure
Establishing the Tivoli Management Environment infrastructure is the second
major activity in the installation roadmap.
Establishing the TME Infrastructure consists of several steps. These steps are
covered in the following sections and include:
1. “Preparing the Tivoli Framework installation media” on page 156
2. “Installing and configuring the Hub TMR Server” on page 156
3. “Installing and configuring Hub TMR managed nodes” on page 162
4. “Installing and configuring the Spoke TMR server” on page 170
5. “Installing and configuring Spoke managed nodes” on page 171
6. “Install and configure TMF 4.1.1 Components and Patches” on page 171
The reason for postponing the installation and configuration of the Tivoli
Framework components until after the TMR Server environments have been
established is that these activities have to be performed on all of the TMR
Servers and managed nodes. The Tivoli Framework provides functions to
perform installation on multiple systems at the same time.
For reference information and more detailed descriptions of the various steps
performed in the following sections, refer to the Tivoli Management Framework
Enterprise Installation Guide v4.1.1, GC32-0804-01, which is available online at
the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
For details on functional roles of each of the systems involved, refer to Table 4-8
on page 138.
Preparing the Tivoli Framework installation media
Prior to installation of the Tivoli Framework, make sure that the installation media
have been copied to the <img_dir>/code/tivoli/TMF/411/all/generic
directory on the srchost system, and that the /img directory on the srchost
system has been mounted as /mnt on the local system using smbmount:
smbmount //srchost/img /mnt -o username=root,password=<password>
To determine which installation images you need, and from where they can be
downloaded, refer to Table B-1 on page 647.
Installing and configuring the Hub TMR Server
The first step is to install and configure the TMR Server, sometimes referred to
simply as the Tivoli Server, on the hubtmr system. The steps involved are:
1. ”Installing the Hub TMR Server”
2. “Verifying the Hub TMR installation” on page 161
Installing the Hub TMR Server
Install the TMR server on machine hubtmr and configure it by following the
instructions in the Tivoli Management Framework Enterprise Installation Guide,
in the section entitled “Installing on a UNIX operating system.”
In the Tivoli Information Center, this publication is available by selecting
Management Framework → Enterprise Installation Guide → Installing a
Tivoli Server → Overview of Installing a UNIX Tivoli Server.
1. Run the following commands in Example 4-6 on hubtmr to install and
configure the Hub TMR Server.
Example 4-6 Installing the Hub TMR Server
mkdir -p /usr/local/Tivoli/install_dir
DOGUI=no
export DOGUI
cd /usr/local/Tivoli/install_dir
<path>/WPREINST.SH
Here, <path> points to the directory containing the file WPREINST.SH, for
example /mnt/code/tivoli/TMF/411/all/generic.
2. Now, you can install the TMR Server directly from a command line or use the
GUI installation method provided by Tivoli. In either case, use the values for
the installation parameters shown in Table 4-12 on page 157.
Table 4-12 hubtmr TMR Server installation parameters
򐂰 Encryption level: DES
򐂰 Region Name: hubtmr-region
򐂰 Server Name: hubtmr
򐂰 Directory structure: use the default directory structure as suggested by the
  installation
򐂰 Allow the installation to create missing directories: checked
3. To start the command line installation, use the following command:
./wserver -c <path> -a hubtmr -d @EL@=DES AutoStart=1 CreatePaths=1
RN=hubtmr-region
Here, <path> points to the installation directory, for example
/mnt/code/tivoli/TMF/411/all/generic.
The command displays a list of actions that will take place during the
installation and issues a confirmation prompt. To continue the installation
process, type y then press the Enter key. Status information displays on your
terminal as the installation proceeds.
Alternatively, use this command to initiate the GUI installer:
./wserver -c /mnt/code/tivoli/TMF/411/all/generic/CD1
You will be presented with the Install Tivoli Server dialog box, as depicted in
Figure 4-2 on page 158.
Figure 4-2 Install Tivoli Server: Specify Directory Locations
4. In this dialog box, do the following:
a. Accept the default values, and enable or check the first of the Server
install options: When installing create ‘Specified Directories’ if
missing.
b. Click Set to continue.
The dialog shown in Figure 4-3 on page 159 continues the installation.
Figure 4-3 Install Tivoli Server: server and region names
Do the following:
5. Leave the License Key field blank. Since version 4.1, it is no longer needed.
6. Select DES as the Encryption Level.
7. Enter the TMR Server Name. For the hubtmr system in the Outlet Systems
Management Solution the TMR Server Name is hubtmr.
8. Note that the Region Name has been pre-filled by the installation program.
9. Click Install to proceed.
10.You are presented with a confirmation dialog box. Verify that all the
parameters are correct, and click Continue Install.
The installation will start. During the installation, if your DISPLAY environment
variable is set, the Tivoli Desktop will open. Note that the installation is not
complete until you see the dialog box shown in Figure 4-4 on page 160.
Important: Do not use the Tivoli Desktop until the installation is complete.
During installation you might receive a message stating /etc/init.d/xinetd: No
such file or directory. This message is generated if the xinetd rpm is not
installed, however this is not a problem because it is not required by Tivoli.
Figure 4-4 TM Install: completion
11.Now you can use the Tivoli Desktop depicted in Figure 4-5 on page 161.
Figure 4-5 Tivoli Desktop
12.Close the Tivoli Desktop by selecting Desktop → Exit.
Verifying the Hub TMR installation
To verify the Hub TMR installation, start the Tivoli Desktop, invoke a Tivoli
command, or do both. The Tivoli Desktop start procedure described in this
section will start the Tivoli Desktop on your local system, and the Tivoli command
procedure will list the installed Tivoli components. Successful execution of either
or both procedures indicates that the TMR Server installation was successful.
1. For both verification procedures, initialize the Tivoli environment by sourcing
the file /etc/Tivoli/setup_env.sh:
. /etc/Tivoli/setup_env.sh
This will define the Tivoli environment variables for the current shell. However,
if you completed the procedure described in “Preparing the Tivoli
environment” on page 143, you will not have to set up the Tivoli environment
every time a new shell is started.
2. Start the Tivoli Desktop. This verification process starts the Tivoli Desktop
from a shell.
If the DISPLAY environment variable is not set, you can start the Tivoli
Desktop by performing the following steps:
a. Set your DISPLAY environment variable to point to the IP address of your
local system, and export the variable. For example:
DISPLAY=9.35.29.35:0.0
export DISPLAY
b. Initialize the Tivoli environment variables by sourcing the setup_env.sh
script:
. /etc/Tivoli/setup_env.sh
c. Start the Tivoli Desktop by issuing the tivoli command.
The Tivoli Desktop will now be started, indicating that your TMR server is
operational.
3. Invoke a Tivoli command.
Use the following command to execute a Tivoli command to verify the
installation of Tivoli Server on hubtmr.
wlsinst -a
You will get output similar to what is shown in Example 4-7, indicating that
only the Tivoli Management Framework has been installed.
Example 4-7 List of installed Tivoli components generated by wlsinst
hubtmr:/ # wlsinst -a
*--------------------------------------------------------------------*
Product List
*--------------------------------------------------------------------*
Tivoli Management Framework 4.1.1
Installing and configuring Hub TMR managed nodes
As indicated in Table 4-8 on page 138, we need to install managed nodes on the
srchost and tec systems in the central Hub TMR environment, because each of
these servers will assume active roles in the Outlet Systems Management
Solution.
Prior to installing the managed nodes, defining a new policy region in which they
will reside is recommended.
The activities to install managed nodes for the Hub TMR environment include:
1. “Creating the ManagedNodes-region policy region” on page 163
2. “Installing managed nodes” on page 164
3. “Verifying the managed node installation” on page 170
Creating the ManagedNodes-region policy region
Creation of policy regions is performed from the TMR Server, the hubtmr system
in our example. Either the Tivoli Desktop or command line can be used:
1. Using Tivoli Desktop, perform the following steps:
a. Create a new policy region on the TMR Server. In this case, the hubtmr
system. Select the menu option Create → Region.
b. Enter ManagedNodes-region as the region name in the Create Policy
Region dialog box, as shown in Figure 4-6.
Figure 4-6 Create a new Policy Region
c. Click Create & Close to complete the creation.
2. Allow resources of the type managed node to be defined in the newly created
policy region:
a. Double-click the new ManagedNodes-region policy region icon.
b. Select the menu option Properties → Managed Resources. This will
open the Set Managed Resources dialog box shown in Figure 4-7 on
page 164.
Figure 4-7 Set managed resources
c. Select Managed Node from the Available Resources list and click the
left-arrow icon to move it to the Current Resources list.
d. Click Set & Close.
3. Using the command line:
Issue the following command in a shell in which the Tivoli environment has
been sourced:
wcrtpr -a Root_hubtmr-region -m ManagedNode ManagedNodes-region
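As a quick follow-up check, a sketch using the names from this example environment: list the registered policy regions and confirm that the new one is present.
# wlookup -ar lists all registered resources of the given type.
wlookup -ar PolicyRegion | grep ManagedNodes-region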
Installing managed nodes
Installation of managed nodes is performed from the TMR Server, the hubtmr
system in our case, and requires that the root user from the hubtmr system can
authenticate remotely with the target systems. See step 2 on page 142 for
details.
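Before starting the installation, it can save time to confirm that remote authentication actually works. The following is a minimal pre-flight sketch, assuming SSH access to the targets was set up as described in that step:
# Verify that root can reach each target non-interactively;
# BatchMode makes ssh fail fast instead of prompting for a password.
for host in tec srchost; do
    ssh -o BatchMode=yes root@$host true && echo "$host: OK"
done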
For the Outlet Systems Management Solution Hub TMR environment we install
managed nodes on the tec and srchost systems. Either the Tivoli Desktop or
command line can be used to initiate the installation, and it is required that the
installation image is accessible from the hubtmr system.
1. Using Tivoli Desktop, select the menu option Create → Managed Node. The
Client Install dialog box depicted in Figure 4-8 on page 165 will be displayed.
Figure 4-8 Client Install: main window
2. Supply the following parameters:
– TMR Installation Password
Leave this field blank if you did not specify a password during the TMR
server installation.
– Check Use SSH
– Default Access Method
Click Account, use root and the proper password.
– Click Add Clients. The Add Client dialog box, as shown in Figure 4-9 on
page 166 is displayed.
Figure 4-9 Client Install: Add Clients
3. Enter the name of the desired client system, and select the access method. In
our case, we want to add two new systems. Do the following:
a. Enter srchost in the Add Client field.
b. Check Use Default Access Method.
c. Click Add.
d. Enter tec in the Add Client field.
e. Click Add & Close.
This will bring you back to the main Client Install dialog box shown in
Figure 4-8 on page 165. Now the client names appear in the lower part of the
dialog box.
4. Click Select Media to inform the Tivoli installer about where to find the
installation images. The File Browser dialog box shown in Figure 4-10 on
page 167 will be displayed.
Figure 4-10 Client Install: Set Path
5. Select the directory in which the Framework installation images can be found,
for example /mnt/code/tivoli/TMF/411/all/generic/CD1.
6. Click Set Media & Close. You are once again returned to the Client Install
dialog box, shown in Figure 4-11 on page 168.
Figure 4-11 Client Install: return to beginning
7. Click Install to start the installation.
The selected client systems will be contacted by the TMR server and the
Client Install Confirmation dialog box in Figure 4-12 on page 169 will appear.
Figure 4-12 Client Install confirmation
8. Click Continue Install when you have checked for errors.
At the end of the client installation, both clients should be included in
ManagedNodes-region, as shown in Figure 4-13 on page 170.
Figure 4-13 Policy Region: ManagedNodes-region
9. Alternatively, install the tec and srchost managed nodes from the command
line. Run the following command to create the tec and srchost managed nodes:
wclient -c <path> -d -p ManagedNodes-region -U root -y @AutoStart@=1 srchost tec
<path> points to the location of the installation media relative to the hubtmr
system, for example: /mnt/code/tivoli/TMF/411/all/generic/CD1.
Verifying the managed node installation
From the hubtmr system, issue the odadmin odlist command, and verify that
the newly installed managed nodes show up in the dispatcher list, as shown in
Example 4-8:
Example 4-8 Managed node list produced by odadmin odlist
hubtmr:/ # odadmin odlist
Region          Disp  Flags  Port  IPaddr     Hostname(s)
1393424439         1  ct-      94  10.1.1.1   hubtmr.demo.tivoli.com,hubtmr
                   2  ct-      94  10.1.1.2   tec.demo.tivoli.com,tec
                   3  ct-      94  10.1.1.4   srchost.demo.tivoli.com,srchost
Note: Notice the c in the flags column, indicating that the systems are
connected.
Installing and configuring the Spoke TMR server
Installation and configuration of the Spoke TMR systems in the Outlet Systems
Management Solution is similar to installation of the Hub TMR environment, as
described in previous sections.
For an overview of the functional roles of the systems in the Spoke TMR
environment, refer to Table 4-4 on page 136. For the Regional and Outlet
environments related to the Spoke TMR, see Table 4-5 on page 137 and
Table 4-6 on page 137.
To summarize, the Spoke TMR environment consists of the Spoke TMR Server
(spoketmr), and two managed nodes, a regional gateway and repeater
(region01) and an outlet gateway and repeater (outlet01).
Install the TMR server on spoketmr, and configure it as described in “Installing
and configuring the Hub TMR Server” on page 156.
Installing and configuring Spoke managed nodes
Set up spoke managed nodes for the Outlet Systems Management Solution
environment. This includes installation and configuration of both the region01
and outlet01 managed nodes.
The installation and configuration steps are similar to those in “Installing and
configuring Hub TMR managed nodes” on page 162, but instead of the tec and
srchost system names, use region01 and outlet01 as the clients in the
ManagedNodes-region.
At the end of the client installation, both clients region01 and outlet01 should be
members of the ManagedNodes-region in the spoke TMR environment.
Install and configure TMF 4.1.1 Components and Patches
This section discusses the installation of additional Tivoli Management
Framework 4.1.1 components and patches on TMR servers and managed
nodes. The preparation steps include the following:
1. “TMF 4.1.1 Component installation media preparation” on page 171
2. “Determining TMF component locations” on page 172
TMF 4.1.1 Component installation media preparation
Obtain the installation images for the following components, and unpack them to
the proper location on the srchost server in the <img_dir>/code directory
structure:
򐂰 Tivoli Management Framework v4.1.1 2 of 2
<img_dir>/code/tivoli/TMF/411/all/generic/CD2
򐂰 Tivoli Framework Patch 4.1.1-LCF-0008 (build 08/12)
<img_dir>/code/tivoli/TMF/411/LCF0008/generic
򐂰 Tivoli Framework Patch 4.1.1-TMF-0011 (build 05/06)
<img_dir>/code/tivoli/TMF/411/TMF0011/generic
򐂰 Tivoli Framework Patch 4.1.1-TMF-0021 (build 09/20)
<img_dir>/code/tivoli/TMF/411/TMF0021/generic
򐂰 Tivoli Java Client Framework 4.1.1 Patch JCF411-0001
<img_dir>/code/tivoli/TMF/411/JCF0001/generic
To determine which installation images you need, and from where they can be
downloaded, refer to Table B-1 on page 647.
Note: The patches listed here were those that applied to the Outlet Systems
Management Solution environment at the time we wrote this book. For the
latest information, visit the Tivoli support site when you plan to set up your own
environment.
The installation of components and patches will be initiated from the TMR
Servers, hubtmr and spoketmr. Prior to starting the installation, you should make
sure that these two systems can access the installation images. Use the
smbmount command to map the /img directory on the srchost system to the /mnt
directory on each of the TMR servers:
smbmount //srchost/img /mnt -o username=root,password=<password>
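A quick way to confirm that the share is mounted and the images are visible (paths as used in this example environment):
# The mount table should list the srchost share on /mnt, and the
# Framework CD2 images should be present under the code tree.
mount | grep srchost
ls /mnt/code/tivoli/TMF/411/all/generic/CD2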
Determining TMF component locations
Table 4-13 on page 172 outlines which components and patches need to be
installed on which systems.
Table 4-13 Tivoli Management Framework installation roadmap

component/patch                            hubtmr  tec  srchost  spoketmr  region01  outlet01
Java 1.3 for Tivoli                           x     x      x        x         x         x
Tivoli Java RDBMS Interface Module
(JRIM) 4.1.1                                  x                     x
JavaHelp 1.0 for Tivoli                       x
Tivoli Java Client Framework 4.1.1            x     x      x        x         x         x
Distribution Status Console, Version
4.1.1                                         x
Tivoli Management Framework SSLB
Version 1.1                                   x     x      x        x         x         x
Tivoli Framework Patch
4.1.1-LCF-0008 (build 08/12)                  x     x      x        x         x         x
Tivoli Framework Patch
4.1.1-TMF-0011 (build 05/06)                  x                     x
Tivoli Framework Patch
4.1.1-TMF-0021 (build 09/20)                  x     x      x        x         x         x
Tivoli Java Client Framework 4.1.1
Patch JCF411-0001                             x     x      x        x         x         x
Tivoli MDist 2 Graphical User Interface
4.1.1 002 Patch (a)                           x

a. This patch is delivered as part of the Tivoli Framework Patch 4.1.1-TMF-0011
Installing TMF components and patches
As usual, Framework components and patches can be installed using either the
Tivoli Desktop or a command line.
In this section, we show the command line-based installation, from which you
can extract the information you need to perform the installation using the Tivoli
Desktop installation GUI.
For information about how to navigate the Tivoli Desktop installation GUI, refer to
the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Management Framework →Enterprise Installation
Guide →Installing Tivoli Products and Patches.
The following instructions are two-tiered. The first set of instructions shows how
to install the components on the hubtmr and related systems. The second set
shows the commands for installing the same components on the spoketmr and
related systems. A consolidated script sketch of the Hub TMR installs follows the
command list below.
From the command line, the following commands install Tivoli Management
Framework components and patches:
򐂰 Java 1.3 for Tivoli
a. Run this command from hubtmr to install Java 1.3 for Tivoli on the hubtmr, tec, and srchost systems:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/CD2/JAVA -i JRE130 -y @CreatePaths@=1 hubtmr tec srchost
b. Run this command from spoketmr to install Java 1.3 for Tivoli on the spoketmr, region01, and outlet01 systems:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/CD2/JAVA -i JRE130 -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 Tivoli Java RDBMS Interface Module (JRIM) 4.1.1
a. Run this command from hubtmr to install Tivoli Java RDBMS Interface Module (JRIM) 4.1.1 on the hubtmr system:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/CD2/JAVA -i JRIM411 -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Tivoli Java RDBMS Interface Module (JRIM) 4.1.1 on the spoketmr system:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/CD2/JAVA -i JRIM411 -y @CreatePaths@=1 spoketmr
򐂰 JavaHelp 1.0 for Tivoli
a. Run this command from hubtmr to install JavaHelp 1.0 for Tivoli on the hubtmr system:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/CD2/JAVA -i JHELP41 -y @CreatePaths@=1 hubtmr
򐂰 Tivoli Java Client Framework 4.1.1
a. Run this command from hubtmr to install Tivoli Java Client Framework 4.1.1 on the hubtmr, tec, and srchost systems:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/CD2/JAVA -i JCF411 -y @CreatePaths@=1 hubtmr tec srchost
b. Run this command from spoketmr to install Tivoli Java Client Framework 4.1.1 on the spoketmr, region01, and outlet01 systems:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/CD2/JAVA -i JCF411 -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 Distribution Status Console, Version 4.1.1
a. Run this command from hubtmr to install Distribution Status Console, Version 4.1.1 on the hubtmr system:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/CD2/JAVA -i MDIST2GU -y @CreatePaths@=1 hubtmr
򐂰 Tivoli Management Framework SSLB Version 1.1
a. Run this command from hubtmr to install Tivoli Management Framework SSLB Version 1.1 on the hubtmr, tec, and srchost systems:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/CD2/SSL -i SSLB -y @CreatePaths@=1 hubtmr tec srchost
b. Run this command from spoketmr to install Tivoli Management Framework SSLB Version 1.1 on the spoketmr, region01, and outlet01 systems:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/CD2/SSL -i SSLB -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 Tivoli Framework Patch 4.1.1-LCF-0008
a. Run this command from hubtmr to install Tivoli Framework Patch 4.1.1-LCF-0008 on the hubtmr, tec, and srchost systems:
wpatch -c /mnt/code/tivoli/TMF/411/LCF0008/generic -i 411LCF08 -y @CreatePaths@=1 hubtmr tec srchost
b. Run this command from spoketmr to install Tivoli Framework Patch 4.1.1-LCF-0008 on the spoketmr, region01, and outlet01 systems:
wpatch -c /mnt/code/tivoli/TMF/411/LCF0008/generic -i 411LCF08 -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 Tivoli Framework Patch 4.1.1-TMF-0011
a. Run this command from hubtmr to install Tivoli Framework Patch 4.1.1-TMF-0011 on the hubtmr system:
wpatch -c /mnt/code/tivoli/TMF/411/TMF0011/generic -i 411TMF11 -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Tivoli Framework Patch 4.1.1-TMF-0011 on the spoketmr system:
wpatch -c /mnt/code/tivoli/TMF/411/TMF0011/generic -i 411TMF11 -y @CreatePaths@=1 spoketmr
򐂰 Tivoli Framework Patch 4.1.1-TMF-0021
a. Run this command from hubtmr to install Tivoli Framework Patch 4.1.1-TMF-0021 on the hubtmr, tec, and srchost systems:
wpatch -c /mnt/code/tivoli/TMF/411/TMF0021/generic -i 411TMF21 -y @CreatePaths@=1 hubtmr tec srchost
b. Run this command from spoketmr to install Tivoli Framework Patch 4.1.1-TMF-0021 on the spoketmr, region01, and outlet01 systems:
wpatch -c /mnt/code/tivoli/TMF/411/TMF0021/generic -i 411TMF21 -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 Tivoli Java Client Framework 4.1.1 Patch JCF411-0001
a. Run this command from hubtmr to install Tivoli Java Client Framework 4.1.1 Patch JCF411-0001 on the hubtmr, tec, and srchost systems:
wpatch -c /mnt/code/tivoli/TMF/411/JCF0001/generic -i JCF411 -y @CreatePaths@=1 hubtmr tec srchost
b. Run this command from spoketmr to install Tivoli Java Client Framework 4.1.1 Patch JCF411-0001 on the spoketmr, region01, and outlet01 systems:
wpatch -c /mnt/code/tivoli/TMF/411/JCF0001/generic -i JCF411 -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 Tivoli MDist 2 Graphical User Interface 4.1.1 002 Patch
a. Run this command from hubtmr to install Tivoli MDist 2 Graphical User Interface 4.1.1 002 Patch on the hubtmr system:
wpatch -c /mnt/code/tivoli/TMF/411/TMF0011/generic/411MD202 -i 411MD202 -y @CreatePaths@=1 hubtmr
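If you prefer to run the Hub TMR installations in a single pass, the commands above can be collected into a small script. The following is only a sketch using the paths and component indexes from this example environment; verify each index against your own media before running it:
#!/bin/sh
# Batch the Hub TMR component and patch installs shown above.
CD2=/mnt/code/tivoli/TMF/411/all/generic/CD2
PATCHES=/mnt/code/tivoli/TMF/411
ALL="hubtmr tec srchost"
winstall -c $CD2/JAVA -i JRE130   -y @CreatePaths@=1 $ALL
winstall -c $CD2/JAVA -i JRIM411  -y @CreatePaths@=1 hubtmr
winstall -c $CD2/JAVA -i JHELP41  -y @CreatePaths@=1 hubtmr
winstall -c $CD2/JAVA -i JCF411   -y @CreatePaths@=1 $ALL
winstall -c $CD2/JAVA -i MDIST2GU -y @CreatePaths@=1 hubtmr
winstall -c $CD2/SSL  -i SSLB     -y @CreatePaths@=1 $ALL
wpatch -c $PATCHES/LCF0008/generic -i 411LCF08 -y @CreatePaths@=1 $ALL
wpatch -c $PATCHES/TMF0011/generic -i 411TMF11 -y @CreatePaths@=1 hubtmr
wpatch -c $PATCHES/TMF0021/generic -i 411TMF21 -y @CreatePaths@=1 $ALL
wpatch -c $PATCHES/JCF0001/generic -i JCF411   -y @CreatePaths@=1 $ALL
wpatch -c $PATCHES/TMF0011/generic/411MD202 -i 411MD202 -y @CreatePaths@=1 hubtmr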
Verifying Framework component installation
To verify the installation of all the components and patches, check the
information in the Tivoli catalog for all installed components and patches. We
start with the instructions for using the Tivoli Desktop. The instructions for using
the command line start with step a on page 179.
򐂰 Using Tivoli Desktop
a. From the Tivoli Desktop, navigate to Desktop → About, which will
display the About Tivoli dialog box shown in Figure 4-14 on page 177.
Figure 4-14 About Tivoli
b. Press the Installed Products button to get the detailed list of installed
components, as in Figure 4-15 on page 178.
Figure 4-15 Installed Products
c. For each component, highlight it and press the Patches button to view the
patches that have been applied to that specific component. Figure 4-16 on
page 179 shows the patches installed for Tivoli Management Framework.
Figure 4-16 Installed Patches
d. Use the Close button on all the previous windows to return to the Tivoli
Desktop.
򐂰 Using the command line
a. Use the wlsinst -ah command from the hubtmr system to obtain a list of
all the installed components and patches in the TMR. Check the output,
which should be similar to the list shown in Example 4-9.
Example 4-9 Installed Tivoli Components and patches
hubtmr:/mnt/code/tivoli # wlsinst -ah
*---------------------------------------------------------------------------*
        Product List
*---------------------------------------------------------------------------*
Tivoli Management Framework 4.1.1
        hubtmr      linux-ix86
        srchost     linux-ix86
        tec         linux-ix86
Tivoli Java Client Framework 4.1
        hubtmr      linux-ix86
Tivoli Java Client Framework 4.1.1
        hubtmr      linux-ix86
        srchost     linux-ix86
        tec         linux-ix86
Java 1.3 for Tivoli
        hubtmr      linux-ix86
        srchost     linux-ix86
        tec         linux-ix86
Tivoli Java RDBMS Interface Module (JRIM) 4.1
        hubtmr      linux-ix86
Tivoli Java RDBMS Interface Module (JRIM) 4.1.1
        hubtmr      linux-ix86
JavaHelp 1.0 for Tivoli
        hubtmr      linux-ix86
Tivoli Management Framework SSLB Version 1.1
        hubtmr      linux-ix86
        srchost     linux-ix86
        tec         linux-ix86
Distribution Status Console, Version 4.1.1
        hubtmr      linux-ix86
*----------------------------------------------------------------------------*
        Patch List
*----------------------------------------------------------------------------*
Tivoli Framework Patch 4.1.1-LCF-0008 (build 08/12)
        hubtmr      linux-ix86
        srchost     linux-ix86
        tec         linux-ix86
Tivoli Framework Patch 4.1.1-TMF-0011 (build 05/06)
        hubtmr      linux-ix86
Tivoli Framework Patch 4.1.1-TMF-0021 (build 09/20)
        hubtmr      linux-ix86
        srchost     linux-ix86
        tec         linux-ix86
Tivoli Java Client Framework 4.1.1 Patch JCF411-0001
        hubtmr      linux-ix86
        srchost     linux-ix86
        tec         linux-ix86
Tivoli MDist 2 Graphical User Interface 4.1.1 002 Patch
        hubtmr      linux-ix86
b. To verify the installations in the spoke TMR environment, issue the
wlsinst -ah command from the spoketmr system.
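A rough automated variant of this check is sketched below. The product names are taken from Example 4-9, so adjust the list to match the components you expect on the TMR you are checking:
# Flag any expected product or patch missing from the Tivoli registry.
for product in \
  "Tivoli Management Framework 4.1.1" \
  "Java 1.3 for Tivoli" \
  "Tivoli Java Client Framework 4.1.1" \
  "Tivoli Management Framework SSLB Version 1.1"; do
  wlsinst -ah | grep -q "$product" || echo "MISSING: $product"
done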
At this point, all the TMR Servers and managed nodes are ready to receive the
specialized components they need to fulfill their roles within the Outlet Systems
Management Solution.
Before you start using the newly installed Tivoli Framework functions, you should
complete the post-installation configuration steps described in 4.4.1, “Framework
customization” on page 223.
4.3.4 Tivoli Enterprise Console v3.9
The role of the Tivoli Enterprise Console (TEC) Server is to receive and process
all events generated from all systems in the Outlet Systems Management
Solution infrastructure.
Preparing for TEC installation
The preparation steps for installing the IBM Tivoli Enterprise Console v3.9
product include the following:
1. “Preparing installation media for TEC”
2. “Determining TEC Component locations” on page 182.
Preparing installation media for TEC
To prepare the media, complete these tasks:
1. Obtain the installation images for the following components, and unpack them
to the proper location on the srchost server in the <img_dir>/code directory
structure:
– IBM Tivoli Enterprise Console (TME New Installations) V3.9 Multi Int
English
<img_dir>/code/tivoli/TEC/3.9.0/all/generic/NewInstall
– IBM Tivoli Enterprise Console Version 3.9.0 LA Interim Fix 03
<img_dir>/code/tivoli/TEC/3.9.0/TEC-0003LA/generic/Wizard
– IBM Tivoli Enterprise Console Version 3.9.0 Fix Pack 2
<img_dir>/code/tivoli/TEC/3.9.0_fp02/all/generic
To determine which installation images you need, and from where they can be
downloaded, refer to Table B-1 on page 647.
2. Make the installation image accessible from the tec system using NFS or
SMB mount:
smbmount //srchost/img /mnt -o username=root,password=<password>
Determining TEC Component locations
Table 4-14 outlines which TEC components to install where, in accordance with
the functional roles of the systems described in 4.1.1, “Management
environments” on page 131.
Table 4-14 TEC product and path installation roadmap

components                                        hubtmr  tec  srchost  spoketmr  region01  outlet01
IBM Tivoli Enterprise Console JRE 3.9                x     x               x         x         x
IBM Tivoli Enterprise Console Server 3.9                   x
IBM Tivoli Enterprise Console User Interface
Server 3.9                                           x
IBM Tivoli Enterprise Console Console 3.9            x
IBM Tivoli Enterprise Console Adapter
Configuration Facility 3.9                           x                     x         x         x
IBM Tivoli Enterprise Console JRE Fix Pack 2         x     x               x         x         x
IBM Tivoli Enterprise Console Server Fix Pack 2            x
IBM Tivoli Enterprise Console User Interface
Server Fix Pack 2                                    x
IBM Tivoli Enterprise Console Console Fix Pack 2     x
IBM Tivoli Enterprise Console Adapter
Configuration Facility Fix Pack 2                    x                     x         x         x
Note: The Tivoli Enterprise Console JRE is an installable component and is a
requirement for the event server, the UI server, the event console, and the
Adapter Configuration Facility. This component makes the Java Runtime
environment available for use. As appropriate, the installation wizard installs
and uninstalls the Tivoli Enterprise Console JRE automatically. However, if
you are using the Tivoli Management Framework tools, you must perform
these operations manually.
Installing the TEC
The TEC components can be installed either through the Installation Wizard,
through the Tivoli Desktop, or from the command line.
In this section we show the command line-based installation, from which you can
extract the information needed to perform the installation using the Tivoli Desktop
installation GUI. For information about how to navigate the Tivoli Desktop
installation GUI, please refer to the IBM Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Management Framework →Enterprise Installation
Guide →Installing Tivoli Products and Patches.
For information about how to install Tivoli Enterprise Console components using
the Installation Wizard, refer to Enterprise Console →Installation Guide →
Installing, upgrading, and uninstalling using the Installation
Wizard →Using the Installation Wizard in the IBM Tivoli Information Center.
Important: You should be aware that the Installation Wizard performs both
initial installation and customization steps, so if you use the Installation
Wizard, some of the steps described in 4.4.4, “Configuring the Tivoli
Enterprise Console” on page 234 are obsolete.
To install the products and patches for the Tivoli Enterprise Console, perform the following steps:
򐂰 IBM Tivoli Enterprise Console JRE 3.9
a. Run this command from hubtmr to install IBM Tivoli Enterprise Console JRE 3.9 on the hubtmr and tec systems:
winstall -c /mnt/code/tivoli/TEC/3.9.0/all/generic/NewInstall/cdrom -i TECJRE -y @CreatePaths@=1 hubtmr tec
b. Run this command from spoketmr to install IBM Tivoli Enterprise Console JRE 3.9 on the spoketmr, region01, and outlet01 systems:
winstall -c /mnt/code/tivoli/TEC/3.9.0/all/generic/NewInstall/cdrom -i TECJRE -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 IBM Tivoli Enterprise Console Server 3.9
a. Run this command from hubtmr to install IBM Tivoli Enterprise Console Server 3.9 on the tec system:
winstall -c /mnt/code/tivoli/TEC/3.9.0/all/generic/NewInstall/cdrom -i SERVER -y @CreatePaths@=1 tec
򐂰 IBM Tivoli Enterprise Console User Interface Server 3.9
a. Run this command from hubtmr to install IBM Tivoli Enterprise Console User Interface Server 3.9 on the hubtmr system:
winstall -c /mnt/code/tivoli/TEC/3.9.0/all/generic/NewInstall/cdrom -i UI_SRVR -y @CreatePaths@=1 hubtmr
򐂰 IBM Tivoli Enterprise Console Console 3.9
a. Run this command from hubtmr to install IBM Tivoli Enterprise Console Console 3.9 on the hubtmr system:
winstall -c /mnt/code/tivoli/TEC/3.9.0/all/generic/NewInstall/cdrom -i CONSOLE -y @CreatePaths@=1 hubtmr
򐂰 IBM Tivoli Enterprise Console Adapter Configuration Facility 3.9
a. Run this command from hubtmr to install IBM Tivoli Enterprise Console Adapter Configuration Facility 3.9 on the hubtmr system:
winstall -c /mnt/code/tivoli/TEC/3.9.0/all/generic/NewInstall/cdrom -i ACF -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install IBM Tivoli Enterprise Console Adapter Configuration Facility 3.9 on the spoketmr, region01, and outlet01 systems:
winstall -c /mnt/code/tivoli/TEC/3.9.0/all/generic/NewInstall/cdrom -i ACF -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 3.9.0 Tivoli Enterprise Console JRE Fix Pack 2
a. Run this command from hubtmr to install 3.9.0 Tivoli Enterprise Console JRE Fix Pack 2 on the hubtmr and tec systems:
wpatch -c /mnt/code/tivoli/TEC/3.9.0_fp02/all/generic/TME -i 390JREFP -y @CreatePaths@=1 hubtmr tec
b. Run this command from spoketmr to install 3.9.0 Tivoli Enterprise Console JRE Fix Pack 2 on the spoketmr, region01, and outlet01 systems:
wpatch -c /mnt/code/tivoli/TEC/3.9.0_fp02/all/generic/TME -i 390JREFP -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 3.9.0 Tivoli Enterprise Console Server Fix Pack 2
a. Run this command from hubtmr to install 3.9.0 Tivoli Enterprise Console Server Fix Pack 2 on the tec system:
wpatch -c /mnt/code/tivoli/TEC/3.9.0_fp02/all/generic/TME -i 390SVRFP -y @CreatePaths@=1 tec
򐂰 3.9.0 Tivoli Enterprise Console User Interface Server Fix Pack 2
a. Run this command from hubtmr to install 3.9.0 Tivoli Enterprise Console User Interface Server Fix Pack 2 on the hubtmr system:
wpatch -c /mnt/code/tivoli/TEC/3.9.0_fp02/all/generic/TME -i 390UISFP -y @CreatePaths@=1 hubtmr
򐂰 3.9.0 Tivoli Enterprise Console Console Fix Pack 2
a. Run this command from hubtmr to install 3.9.0 Tivoli Enterprise Console Console Fix Pack 2 on the hubtmr system:
wpatch -c /mnt/code/tivoli/TEC/3.9.0_fp02/all/generic/TME -i 390CONFP -y @CreatePaths@=1 hubtmr
򐂰 3.9.0 Tivoli Enterprise Console ACF Fix Pack 2
a. Run this command from hubtmr to install 3.9.0 Tivoli Enterprise Console ACF Fix Pack 2 on the hubtmr system:
wpatch -c /mnt/code/tivoli/TEC/3.9.0_fp02/all/generic/TME -i 390ACFFP -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install 3.9.0 Tivoli Enterprise Console ACF Fix Pack 2 on the spoketmr, region01, and outlet01 systems:
wpatch -c /mnt/code/tivoli/TEC/3.9.0_fp02/all/generic/TME -i 390ACFFP -y @CreatePaths@=1 spoketmr region01 outlet01
Verifying the TEC installation
Because the TEC components have not been customized yet, and therefore are
not operational, our installation verification will be limited to looking into the Tivoli
Registry to verify that the components have been registered correctly.
The verification can be performed from the Tivoli Desktop, similar to the
procedure described in “Verifying Framework component installation” on
page 176, or using the command line, as described in the following:
To verify the installation of the TEC components in each individual TMR
environment, use the wlsinst -h -s"<component>" command. Example 4-10 shows
the verification steps performed in the Outlet Systems Management Solution Hub
TMR environment.
Example 4-10 TEC component installation verification on Hub TMR
wlsinst -h -s"IBM Tivoli Enterprise Console JRE 3.9"
IBM Tivoli Enterprise Console JRE 3.9
        hubtmr      linux-ix86
        tec         linux-ix86
wlsinst -h -s"IBM Tivoli Enterprise Console Console 3.9"
IBM Tivoli Enterprise Console Console 3.9
        hubtmr      linux-ix86
wlsinst -h -s"IBM Tivoli Enterprise Console Server 3.9"
IBM Tivoli Enterprise Console Server 3.9
        tec         linux-ix86
wlsinst -h -s"IBM Tivoli Enterprise Console User Interface Server 3.9"
IBM Tivoli Enterprise Console User Interface Server 3.9
        hubtmr      linux-ix86
wlsinst -h -s"IBM Tivoli Enterprise Console Adapter Configuration Facility 3.9"
IBM Tivoli Enterprise Console Adapter Configuration Facility 3.9
        hubtmr      linux-ix86
wlsinst -h -s"3.9.0 Tivoli Enterprise Console JRE Fix Pack 2"
3.9.0 Tivoli Enterprise Console JRE Fix Pack 2
        hubtmr      linux-ix86
        tec         linux-ix86
wlsinst -h -s"3.9.0 Tivoli Enterprise Console Console Fix Pack 2"
3.9.0 Tivoli Enterprise Console Console Fix Pack 2
        hubtmr      linux-ix86
wlsinst -h -s"3.9.0 Tivoli Enterprise Console Server Fix Pack 2"
3.9.0 Tivoli Enterprise Console Server Fix Pack 2
        tec         linux-ix86
wlsinst -h -s"3.9.0 Tivoli Enterprise Console ACF Fix Pack 2"
3.9.0 Tivoli Enterprise Console ACF Fix Pack 2
        hubtmr      linux-ix86
wlsinst -h -s"3.9.0 Tivoli Enterprise Console User Interface Server Fix Pack 2"
3.9.0 Tivoli Enterprise Console User Interface Server Fix Pack 2
        hubtmr      linux-ix86
Similarly, Example 4-11 shows the verification steps performed in the Outlet
Systems Management Solution Spoke TMR environment.
Example 4-11 TEC component verification on spoke TMR
wlsinst -h -s"IBM Tivoli Enterprise Console JRE 3.9"
IBM Tivoli Enterprise Console JRE 3.9
        outlet01    linux-ix86
        region01    linux-ix86
        spoketmr    linux-ix86
wlsinst -h -s"IBM Tivoli Enterprise Console Adapter Configuration Facility 3.9"
IBM Tivoli Enterprise Console Adapter Configuration Facility 3.9
        outlet01    linux-ix86
        region01    linux-ix86
        spoketmr    linux-ix86
wlsinst -h -s"3.9.0 Tivoli Enterprise Console JRE Fix Pack 2"
3.9.0 Tivoli Enterprise Console JRE Fix Pack 2
        outlet01    linux-ix86
        region01    linux-ix86
        spoketmr    linux-ix86
wlsinst -h -s"3.9.0 Tivoli Enterprise Console ACF Fix Pack 2"
3.9.0 Tivoli Enterprise Console ACF Fix Pack 2
        outlet01    linux-ix86
        region01    linux-ix86
        spoketmr    linux-ix86
Note: To verify installation, you might find it easier to use the general
command wlsinst -ah to list all products and patches on all systems in each
TMR. However, doing this will result in much more output, through which you
will need to scan.
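If you want something between the per-component queries and the full wlsinst -ah listing, a small loop over the expected component names works too. This is only a sketch; the names are taken from the examples above:
# Query each expected TEC component in turn; a missing component
# simply produces no host entries in the output.
for comp in \
  "IBM Tivoli Enterprise Console JRE 3.9" \
  "IBM Tivoli Enterprise Console Adapter Configuration Facility 3.9"; do
  wlsinst -h -s"$comp"
done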
Before you start using the newly installed Tivoli Enterprise Console functions,
you should complete the post-installation configuration steps described in 4.4.4,
“Configuring the Tivoli Enterprise Console” on page 234.
4.3.5 Installing Tivoli Configuration Manager
This section details the installation of the Inventory, Software Distribution, Activity
Planner, and Change Manager components of the Tivoli Configuration Manager
v4.2.1 product.
Preparing for TCM installation
The preparation steps for installing the IBM Tivoli Configuration Manager v4.2.1
product include the following:
1. “Preparing installation media for TCM”
2. “Determining TCM Component locations” on page 188
Preparing installation media for TCM
Obtain the installation images for the following components, and unpack them to
the proper location on the srchost server in the <img_dir>/code directory
structure:
򐂰 Tivoli Management Framework v4.1.1 Compatibility
<img_dir>/code/tivoli/TMF/411/all/generic/Compatibility
򐂰 IBM Tivoli Configuration Manager Installation v4.2.1, Multiplatform, Brazilian
Portuguese, French, German, International English, Italian, Japanese,
Korean, Simplified Chinese, Spanish
<img_dir>/code/tivoli/TCM/421/all/generic/cd5
򐂰 IBM Tivoli Configuration Manager v4.2.1, Multiplatform, Brazilian Portuguese,
French, German, International English, Italian, Japanese, Korean, Simplified
Chinese, Spanish
<img_dir>/code/tivoli/TCM/421/all/generic/cd1
򐂰 IBM Tivoli Configuration Manager Web Gateway v4.2.1, Multiplatform,
Brazilian Portuguese, French, German, International English, Italian,
Japanese, Korean, Simplified Chinese, Spanish
<img_dir>/code/tivoli/TCM/421/all/generic/cd4
򐂰 IBM Tivoli Configuration Manager Windows v4.2.1, Brazilian Portuguese,
French, German, International English, Italian, Japanese, Korean, Simplified
Chinese, Spanish
<img_dir>/code/tivoli/TCM/421/all/generic/Windows
򐂰 IBM Tivoli Configuration Manager Documentation v4.2.1, International
English,
<img_dir>/code/tivoli/TCM/421/all/generic/documentation
򐂰 IBM Tivoli Configuration Manager, Version 4.2.1, Fix Pack 4.2.1-TCM-FP01
(U498122)
<img_dir>/code/tivoli/TCM/421_fp01/all/generic
To determine which installation images you need, and from where they can be
downloaded, refer to Table B-1 on page 647.
Make the installation image accessible from the hubtmr and spoketmr systems
using NFS or SMB mount:
smbmount //srchost/img /mnt -o username=root,password=<password>
Determining TCM Component locations
Table 4-15 outlines which TCM related components to install where, in
accordance with the functional roles of the systems described in “Management
environments” on page 131.
Table 4-15 TCM product and patch installation roadmap

components                                  hubtmr  tec  srchost  spoketmr  region01  outlet01
Tivoli Java Client Framework 4.1 (a)           x                     x
Tivoli Java RDBMS Interface Module
(JRIM) 4.1 (b)                                 x                     x
Tivoli Scalable Collection Service 4.1.1       x                     x         x         x
Tivoli Inventory Server 4.2.1                  x                     x
Tivoli Inventory Gateway 4.2.1                 x                     x         x         x
Tivoli Software Distribution 4.2.1             x           x         x
Tivoli Software Distribution Gateway 4.2.1     x                     x         x         x
Tivoli Software Distribution Software
Package Editor 4.2.1                           x           x
Activity Planner, Version 4.2.1                x                     x
Change Manager, Version 4.2.1                  x
Tivoli Scalable Collection Service 4.1.1
Fix Pack 4.1.1-CLL-FP01                        x     x     x         x         x         x
Tivoli Inventory Server 4.2.1 Fix Pack
4.2.1-INV-FP01                                 x                     x
Tivoli Inventory Gateway 4.2.1 Fix Pack
4.2.1-INVGW-FP01                               x                     x         x         x
Tivoli Inventory Server 4.2.1 Interim Fix
4.2.1-INV-0006                                 x                     x
Tivoli Inventory Gateway 4.2.1 Interim Fix
4.2.1-INVGW-0006                               x                     x         x         x
Tivoli Software Distribution 4.2.1 Fix Pack
4.2.1-SWDSRV-FP01                              x           x
Tivoli Software Distribution Gateway 4.2.1
Fix Pack 4.2.1-SWDGW-FP01                      x                     x         x         x
Tivoli Software Distribution Software
Package Editor 4.2.1 Fix Pack
4.2.1-SWDJPS-FP01                              x           x
Activity Planner Fix Pack 4.2.1-APM-FP01       x                     x
Change Manager Fix Pack 4.2.1-CCM-FP01         x

a. Will be found in the Compatibility folder in the TMF installation path
b. Tivoli Java RDBMS Interface Module (JRIM) 4.1 is a prerequisite for Activity Planner
Installing the TCM components
The Tivoli Configuration Manager components can be installed through the
Installation Wizard, through the Tivoli Desktop, or from the command line.
In the following, we show the command line-based installation, from which you
can deduce the information needed to perform the installation using the Tivoli
Desktop installation GUI. For information about how to navigate the Tivoli
Desktop installation GUI, please refer to the IBM Tivoli Information Center Web
site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Management Framework → Enterprise Installation
Guide → Installing Tivoli Products and Patches.
For information about how to install Tivoli Configuration Manager components
using the Installation Wizard, refer to Configuration Manager → Installation
Guide → Component Installation Prerequisites → IBM Tivoli Configuration
Manager InstallShield Wizard in the IBM Tivoli Information Center.
Important: You should be aware that the Installation Wizard performs both
initial installation and customization steps, so if this installation method is
used, some of the steps described in 4.4.5, “Customizing the Inventory” on
page 238 and related sections are obsolete.
To install the products and patches for the Tivoli Configuration Manager through
the command line interface, perform the following steps:
򐂰 Tivoli Java Client Framework 4.1
a. Run this command from hubtmr to install Tivoli Java Client Framework 4.1 on the hubtmr system:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/Compatibility -i JCF41 -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Tivoli Java Client Framework 4.1 on the spoketmr system:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/Compatibility -i JCF41 -y @CreatePaths@=1 spoketmr
򐂰 Tivoli Java RDBMS Interface Module (JRIM) 4.1
a. Run this command from hubtmr to install Tivoli Java RDBMS Interface Module (JRIM) 4.1 on the hubtmr system:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/Compatibility -i JRIM -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Tivoli Java RDBMS Interface Module (JRIM) 4.1 on the spoketmr system:
winstall -c /mnt/code/tivoli/TMF/411/all/generic/Compatibility -i JRIM -y @CreatePaths@=1 spoketmr
򐂰 Scalable Collection Service, Version 4.1.1
a. Run this command from hubtmr to install Scalable Collection Service, Version 4.1.1 on the hubtmr system:
wpatch -c /mnt/code/tivoli/TCM/421/all/generic/cd1/MCOLLECT -i MCOLLECT -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Scalable Collection Service, Version 4.1.1 on the spoketmr, region01, and outlet01 systems:
wpatch -c /mnt/code/tivoli/TCM/421/all/generic/cd1/MCOLLECT -i MCOLLECT -y @CreatePaths@=1 spoketmr region01 outlet01
Note: The installation of the first component, Scalable Collection Service, Version 4.1.1, is a required patch install to enable the Inventory installation.
򐂰 Inventory, Version 4.2.1
a. Run this command from hubtmr to install Inventory, Version 4.2.1 on the hubtmr system:
winstall -c /mnt/code/tivoli/TCM/421/all/generic/cd1/INVENTORY -i 421_INV -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Inventory, Version 4.2.1 on the spoketmr system:
winstall -c /mnt/code/tivoli/TCM/421/all/generic/cd1/INVENTORY -i 421_INV -y @CreatePaths@=1 spoketmr
򐂰 Inventory Gateway, Version 4.2.1
a. Run this command from hubtmr to install Inventory Gateway, Version 4.2.1 on the hubtmr system:
winstall -c /mnt/code/tivoli/TCM/421/all/generic/cd1/INVENTORY -i 421_GW_F -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Inventory Gateway, Version 4.2.1 on the spoketmr, region01, and outlet01 systems:
winstall -c /mnt/code/tivoli/TCM/421/all/generic/cd1/INVENTORY -i 421_GW_F -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 Software Distribution, Version 4.2.1
a. Run this command from hubtmr to install Software Distribution, Version 4.2.1 on the hubtmr and srchost systems:
winstall -c /mnt/code/tivoli/TCM/421/all/generic/cd1/SWD -i SWDIS -y @CreatePaths@=1 hubtmr srchost
b. Run this command from spoketmr to install Software Distribution, Version 4.2.1 on the spoketmr system:
winstall -c /mnt/code/tivoli/TCM/421/all/generic/cd1/SWD -i SWDIS -y @CreatePaths@=1 spoketmr
򐂰 Software Distribution Gateway, Version 4.2.1
a. Run this command from hubtmr to install Software Distribution Gateway, Version 4.2.1 on the hubtmr system:
winstall -c /mnt/code/tivoli/TCM/421/all/generic/cd1/SWD -i SWDISGW -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Software Distribution Gateway, Version 4.2.1 on the spoketmr, region01, and outlet01 systems:
winstall -c /mnt/code/tivoli/TCM/421/all/generic/cd1/SWD -i SWDISGW -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 Software Distribution Software Package Editor, Version 4.2.1
a. Run this command from hubtmr to install Software Distribution Software Package Editor, Version 4.2.1 on the hubtmr and srchost systems:
winstall -c /mnt/code/tivoli/TCM/421/all/generic/cd1/SWD -i SWDISJPS -y @CreatePaths@=1 hubtmr srchost
򐂰 Activity Planner, Version 4.2.1
a. Run this command from hubtmr to install Activity Planner, Version 4.2.1 on the hubtmr system:
winstall -c /mnt/code/tivoli/TCM/421/all/generic/cd1/SWD -i APM -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Activity Planner, Version 4.2.1 on the spoketmr system:
winstall -c /mnt/code/tivoli/TCM/421/all/generic/cd1/SWD -i APM -y @CreatePaths@=1 spoketmr
򐂰 Change Manager, Version 4.2.1
a. Run this command from hubtmr to install Change Manager, Version 4.2.1 on the hubtmr system:
winstall -c /mnt/code/tivoli/TCM/421/all/generic/cd1/SWD -i CCM -y @CreatePaths@=1 hubtmr
򐂰 Scalable Collection Service 4.1.1, Fix Pack 4.1.1-CLL-FP01
a. Run this command from hubtmr to install Scalable Collection Service 4.1.1, Fix Pack 4.1.1-CLL-FP01 on the hubtmr, tec, and srchost systems:
wpatch -c /mnt/code/tivoli/TCM/411/CCLFP01/generic -i 411CLLFP -y @CreatePaths@=1 hubtmr tec srchost
b. Run this command from spoketmr to install Scalable Collection Service 4.1.1, Fix Pack 4.1.1-CLL-FP01 on the spoketmr, region01, and outlet01 systems:
wpatch -c /mnt/code/tivoli/TCM/411/CCLFP01/generic -i 411CLLFP -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 Inventory, Version 4.2.1, Fix Pack 4.2.1-INV-FP01
a. Run this command from hubtmr to install Inventory, Version 4.2.1, Fix Pack 4.2.1-INV-FP01 (U498122 - 2004/06) on the hubtmr system:
wpatch -c /mnt/code/tivoli/TCM/421_fp01/all/generic/cd1/images/INV -i 421INVFP -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Inventory, Version 4.2.1, Fix Pack 4.2.1-INV-FP01 (U498122 - 2004/06) on the spoketmr system:
wpatch -c /mnt/code/tivoli/TCM/421_fp01/all/generic/cd1/images/INV -i 421INVFP -y @CreatePaths@=1 spoketmr
򐂰 Inventory Gateway, Version 4.2.1, Fix Pack 4.2.1-INVGW-FP01
a. Run this command from hubtmr to install Inventory Gateway, Version 4.2.1, Fix Pack 4.2.1-INVGW-FP01 (U498122 - 2004/06) on the hubtmr system:
wpatch -c /mnt/code/tivoli/TCM/421_fp01/all/generic/cd1/images/INV -i 421LCFFP -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Inventory Gateway, Version 4.2.1, Fix Pack 4.2.1-INVGW-FP01 (U498122 - 2004/06) on the spoketmr, region01, and outlet01 systems:
wpatch -c /mnt/code/tivoli/TCM/421_fp01/all/generic/cd1/images/INV -i 421LCFFP -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 Inventory, Version 4.2.1, Interim Fix 4.2.1-INV-0006
a. Run this command from hubtmr to install Inventory, Version 4.2.1, Interim Fix 4.2.1-INV-0006 on the hubtmr system:
wpatch -c /mnt/code/tivoli/TCM/421/INV0006/generic -i 421INV06 -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Inventory, Version 4.2.1, Interim Fix 4.2.1-INV-0006 on the spoketmr system:
wpatch -c /mnt/code/tivoli/TCM/421/INV0006/generic -i 421INV06 -y @CreatePaths@=1 spoketmr
򐂰 Inventory Gateway, Version 4.2.1, Interim Fix 4.2.1-INVGW-0006
a. Run this command from hubtmr to install Inventory Gateway, Version 4.2.1, Interim Fix 4.2.1-INVGW-0006 on the hubtmr system:
wpatch -c /mnt/code/tivoli/TCM/421/INV0006/generic -i 421LCF06 -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Inventory Gateway, Version 4.2.1, Interim Fix 4.2.1-INVGW-0006 on the spoketmr, region01, and outlet01 systems:
wpatch -c /mnt/code/tivoli/TCM/421/INV0006/generic -i 421LCF06 -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 Software Distribution, Version 4.2.1, Fix Pack 4.2.1-SWDSRV-FP01
a. Run this command from hubtmr to install Software Distribution, Version 4.2.1, Fix Pack 4.2.1-SWDSRV-FP01 (U498122 - 2004/06) on the hubtmr and srchost systems:
wpatch -c /mnt/code/tivoli/TCM/421_fp01/all/generic/cd1/images/SWD -i SWDFP1 -y @CreatePaths@=1 hubtmr srchost
򐂰 Software Distribution Gateway, Version 4.2.1, Fix Pack 4.2.1-SWDGW-FP01
a. Run this command from hubtmr to install Software Distribution Gateway, Version 4.2.1, Fix Pack 4.2.1-SWDGW-FP01 (U498122 - 2004/06) on the hubtmr system:
wpatch -c /mnt/code/tivoli/TCM/421_fp01/all/generic/cd1/images/SWD -i SDGWFP1 -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Software Distribution Gateway, Version 4.2.1, Fix Pack 4.2.1-SWDGW-FP01 (U498122 - 2004/06) on the spoketmr, region01, and outlet01 systems:
wpatch -c /mnt/code/tivoli/TCM/421_fp01/all/generic/cd1/images/SWD -i SDGWFP1 -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 Software Distribution Software Package Editor, Version 4.2.1, Fix Pack 4.2.1-SWDJPS-FP01
a. Run this command from hubtmr to install Software Distribution Software Package Editor, Version 4.2.1, Fix Pack 4.2.1-SWDJPS-FP01 (U498122 - 2004/06) on the hubtmr and srchost systems:
wpatch -c /mnt/code/tivoli/TCM/421_fp01/all/generic/cd1/images/SWD -i SDJPFP1 -y @CreatePaths@=1 hubtmr srchost
򐂰 Activity Planner, Version 4.2.1, Fix Pack 4.2.1-APM-FP01
a. Run this command from hubtmr to install Activity Planner, Version 4.2.1, Fix Pack 4.2.1-APM-FP01 (U498122 - 2004/06) on the hubtmr system:
wpatch -c /mnt/code/tivoli/TCM/421_fp01/all/generic/cd1/images/SWD -i APMFP1 -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install Activity Planner, Version 4.2.1, Fix Pack 4.2.1-APM-FP01 (U498122 - 2004/06) on the spoketmr system:
wpatch -c /mnt/code/tivoli/TCM/421_fp01/all/generic/cd1/images/SWD -i APMFP1 -y @CreatePaths@=1 spoketmr
򐂰 Change Manager, Version 4.2.1, Fix Pack 4.2.1-CCM-FP01
a. Run this command from hubtmr to install Change Manager, Version 4.2.1, Fix Pack 4.2.1-CCM-FP01 (U498122 - 2004/06) on the hubtmr system:
wpatch -c /mnt/code/tivoli/TCM/421_fp01/all/generic/cd1/images/SWD -i CCMFP1 -y @CreatePaths@=1 hubtmr
Verifying the TCM component installation
As was the case for the TEC components, the TCM environment has not yet
been customized and is therefore not operational at this point. The installation
verification will be limited to looking into the Tivoli Registry to verify that the
components have been registered correctly.
The verification can be performed from the Tivoli Desktop, similar to the
procedure described in “Verifying Framework component installation” on
page 176, or using the command line, as described in the following:
To verify the installation of the TCM components in each individual TMR
environment, use the wlsinst -ah command. Example 4-12 shows the verification
information related to the TCM components installed in the Outlet Systems
Management Solution Hub TMR environment.
Example 4-12 TCM Components installed
hubtmr:~ # wlsinst -ah | more
*-----------------------------------------------------------------------------*
        Product List
*-----------------------------------------------------------------------------*
Inventory Gateway, Version 4.2.1
        hubtmr      linux-ix86
Inventory, Version 4.2.1
        hubtmr      linux-ix86
Tivoli Java RDBMS Interface Module (JRIM) 4.1
        hubtmr      linux-ix86
Tivoli Java RDBMS Interface Module (JRIM) 4.1.1
        hubtmr      linux-ix86
Activity Planner, Version 4.2.1
        hubtmr      linux-ix86
Change Manager, Version 4.2.1
        hubtmr      linux-ix86
Software Distribution, Version 4.2.1
        hubtmr      linux-ix86
        srchost     linux-ix86
Software Distribution Gateway, Version 4.2.1
        hubtmr      linux-ix86
Software Distribution Software Package Editor, Version 4.2.1
        hubtmr      linux-ix86
        srchost     linux-ix86
*-----------------------------------------------------------------------------*
        Patch List
*-----------------------------------------------------------------------------*
Activity Planner, Version 4.2.1, Fix Pack 4.2.1-APM-FP01 (U498122 - 2004/06)
        hubtmr      linux-ix86
Change Manager, Version 4.2.1, Fix Pack 4.2.1-CCM-FP01 (U498122 - 2004/06)
        hubtmr      linux-ix86
Inventory, Version 4.2.1, Interim Fix 4.2.1-INV-0006
        hubtmr      linux-ix86
Inventory, Version 4.2.1, Fix Pack 4.2.1-INV-FP01 (U498122 - 2004/06)
        hubtmr      linux-ix86
Inventory Gateway, Version 4.2.1, Interim Fix 4.2.1-INVGW-0006
        hubtmr      linux-ix86
Inventory Gateway, Version 4.2.1, Fix Pack 4.2.1-INVGW-FP01 (U498122 - 2004/06)
        hubtmr      linux-ix86
Software Distribution Gateway, Version 4.2.1, Fix Pack 4.2.1-SWDGW-FP01
(U498122 - 2004/06)
        hubtmr      linux-ix86
Software Distribution Software Package Editor, Version 4.2.1, Fix Pack
4.2.1-SWDJPS-FP01 (U498122 - 2004/06)
        hubtmr      linux-ix86
Software Distribution, Version 4.2.1, Fix Pack 4.2.1-SWDSRV-FP01 (U498122 -
2004/06)
        hubtmr      linux-ix86
        srchost     linux-ix86
Scalable Collection Service, Version 4.1.1
        hubtmr      linux-ix86
To verify the TCM component installation in the spoke TMR environment, issue
the wlsinst -ah command from the spoketmr system.
Before you start using the newly installed Tivoli Configuration Manager
functions, complete the post-installation configuration steps described in
4.4.5, “Customizing the Inventory” on page 238 through 4.4.8, “Enabling
Change Manager” on page 252.
4.3.6 IBM Tivoli Monitoring installation
This procedure describes how to install IBM Tivoli Monitoring v5.1.2 in the Data
Center to enable the monitoring of resources in the Outlet Systems Management
Solution environment. The Web Health Console Installation is also included in
this step.
Preparing for the ITM Installation
The preparation steps for installing the IBM Tivoli Monitoring v5.1.2 product
include the following:
1. “Preparing installation media for ITM”
2. “Determining ITM Component locations” on page 198.
Preparing installation media for ITM
Obtain the installation images for the following components, and unpack them to
the proper location on the srchost server in the <img_dir>/code directory
structure:
򐂰 Tivoli Management Framework v4.1.1 Compatibility
<img_dir>/code/tivoli/TMF/411/all/generic/Compatibility
򐂰 Tivoli Monitoring Install V5.1.2 Multi Int Engl Braz Port French Italian German
Spanish Japanese Korean Simp Chin Trad Chin
<img_dir>/code/tivoli/ITM/512/all/generic
򐂰 Tivoli Monitoring V5.1.2 Multi Int English
<img_dir>/code/tivoli/ITM/512/all/generic
򐂰 Tivoli Monitoring Tools V5.1.2 Multi Int English Braz Port French Italian
German Spanish Japanese Korean Simp Chin Trad Chin
<img_dir>/code/tivoli/ITM/512/tools/generic
򐂰 Tivoli Monitoring Web Health Console V5.1.2 Fixpack 6 Multi Int Engl Braz
Port French Italian German Spanish Japanese Korean Simp Chin Trad Chin
<img_dir>/code/tivoli/ITM/512/WHCFP06/generic
򐂰 Obtain Web Health Console for WebSphere 5.1 from your local Tivoli Support
organization
򐂰 Fix Pack (5.1.2-ITM-FP02) for IBM Tivoli Monitoring 5.1.2
<img_dir>/code/tivoli/ITM/512_fp02/all/generic
򐂰 IBM Tivoli Monitoring Component Services 5.1.1 Fix Pack 2
<img_dir>/code/tivoli/ITM/511/ICSFP02/generic
To determine which installation images you need, and from where they can be
downloaded, refer to Table B-1 on page 647.
Make the installation image accessible from the hubtmr and spoketmr systems
using NFS or SMB mount:
smbmount //srchost/img /mnt -o username=root,password=<password>
Determining ITM Component locations
Table 4-16 outlines which ITM related components to install where, in
accordance with the functional roles of the systems described in 4.1.1,
“Management environments” on page 131.
Table 4-16 IBM Tivoli Monitoring product and patch installation roadmap

components                                 hubtmr  tec  srchost  spoketmr  region01  outlet01  console
IBM Tivoli Monitoring 5.1.2                   x                     x         x         x
IBM Tivoli Monitoring Web Health
Console 5.1.1 (a)                                                                                 x
IBM Tivoli Monitoring Component
Services 5.1.1 (b)                            x                     x         x         x
IBM Tivoli Monitoring for Web
Infrastructure 5.1.2 - WAS                    x                     x         x         x
IBM Tivoli Monitoring for Database
5.1.1 - DB2 Component Software 5.1.0          x                     x         x         x
IBM Tivoli Monitoring 5.1.2 Fix Pack 2        x                     x         x         x
IBM Tivoli Monitoring Web Health
Console 5.1.1 Fix Pack 6                                                                          x
IBM Tivoli Monitoring Component
Services 5.1.1 Fix Pack 2                     x                     x         x         x
IBM Tivoli Monitoring for Web
Infrastructure 5.1.2 - WAS Fix Pack 2         x                     x         x         x
IBM Tivoli Monitoring for Database
5.1.1 - DB2 Component Software 5.1.0
Fix Pack 5                                    x                     x         x         x

a. Uses the installation wizard. See the IBM Tivoli Monitoring Users Guide for more information
b. ITMCS511 is a patch install
Installing the IBM Tivoli Monitoring components
The IBM Tivoli Monitoring components can be installed through the Installation
Wizard, through the Tivoli Desktop, or from the command line.
In the following, we show the command line-based installation, from which you
can deduce the information needed to perform the installation using the Tivoli
Desktop installation GUI. For information about how to navigate the Tivoli
Desktop installation GUI, refer to the IBM Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Management Framework → Enterprise Installation
Guide → Installing Tivoli Products and Patches.
For information about how to install IBM Tivoli Monitoring components using the
Installation Wizard, refer to the manual IBM Tivoli Monitoring User's Guide,
Version 5.1.2, SH19-4569-03.
Important: You should be aware that the Installation Wizard performs both
initial installation and customization steps, so if this installation method is
used, some of the steps described in 4.4.9, “IBM Tivoli Monitoring
configuration” on page 255 and related sections can be obsolete.
To install the products and patches for IBM Tivoli Monitoring through the
command line interface, perform the following steps:
򐂰 IBM Tivoli Monitoring, Version 5.1.2
a. Run this command from hubtmr to install IBM Tivoli Monitoring, Version 5.1.2 on the hubtmr system:
winstall -c /mnt/code/tivoli/ITM/512/all/generic/ITM -i DM512 -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install IBM Tivoli Monitoring, Version 5.1.2 on the spoketmr, region01, and outlet01 systems:
winstall -c /mnt/code/tivoli/ITM/512/all/generic/ITM -i DM512 -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 IBM Tivoli Monitoring, Version 5.1.2 - Fix Pack 2
a. Run this command from hubtmr to install IBM Tivoli Monitoring, Version 5.1.2 - Fix Pack 2 on the hubtmr system:
wpatch -c /mnt/code/tivoli/ITM/512/ITMFP02/generic/cdrom -i 512ITM02 -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install IBM Tivoli Monitoring, Version 5.1.2 - Fix Pack 2 on the spoketmr, region01, and outlet01 systems:
wpatch -c /mnt/code/tivoli/ITM/512/ITMFP02/generic/cdrom -i 512ITM02 -y @CreatePaths@=1 spoketmr region01 outlet01
Verifying IBM Tivoli Monitoring installation
As was the case for other Tivoli components, the ITM environment has not yet
been customized and is therefore not operational at this point. The installation
verification will be limited to looking into the Tivoli Registry to verify that the
components have been registered correctly.
The verification can be performed from the Tivoli Desktop, similar to the
procedure described in “Verifying Framework component installation” on
page 176, or using the command line, as in Example 4-13 on page 201.
To verify the installation of the ITM components in each individual TMR
environment, use the wlsinst -h -s"<component>" command. Example 4-13 shows
the verification information related to the ITM components installed in the Outlet
Systems Management Solution Hub TMR environment.
Example 4-13 ITM Components installed
hubtmr:~ # wlsinst -h -s"IBM Tivoli Monitoring, Version 5.1.2"
IBM Tivoli Monitoring, Version 5.1.2
        hubtmr      linux-ix86
hubtmr:~ # wlsinst -h -s"IBM Tivoli Monitoring, Version 5.1.2 - fix pack 2"
IBM Tivoli Monitoring, Version 5.1.2 - fix pack 2
        hubtmr      linux-ix86
To verify the ITM component installation in the spoke TMR environment, issue
the wlsinst -ah or wlsinst -h -s<Component> command from the spoketmr.
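The spoke-side output should resemble the hub-side output in Example 4-13, with
a host name and interpreter type listed for each system in the spoke TMR. The
following is a hedged sketch assuming our environment:

spoketmr:~ # wlsinst -h -s"IBM Tivoli Monitoring, Version 5.1.2"
IBM Tivoli Monitoring, Version 5.1.2
spoketmr
linux-ix86
region01
linux-ix86
outlet01
linux-ix86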
Before you start using the newly installed IBM Tivoli Monitoring functions,
complete the post-installation configuration steps described in 4.4.5,
“Customizing the Inventory” on page 238 through 4.4.9, “IBM Tivoli Monitoring
configuration” on page 255.
4.3.7 IBM Tivoli Monitoring Web Health Console
This procedure describes how to install the IBM Tivoli Monitoring v5.1.2 Web
Health Console (WHC) to enable the surveillance of resources in the Outlet
Systems Management Solution environment.
The preparation steps for installing the IBM Tivoli Monitoring v5.1.2 Web
Health Console include the following:
1. “Preparing the installation media for WHC” on page 202
2. “Determining WHC Component locations” on page 202
Before embarking on this task, you should determine whether you wish to install
the generally available (GA) Web Health Console v5.1.1, which requires
WebSphere Application Server v4, or the updated Web Health Console for
WebSphere Application Server v5.1, an update that is available from Tivoli
Support.
Note that during installation of the generally available WHC, WebSphere
Application Server v4 is installed under the covers. For the updated version,
you are required to install WebSphere Application Server v5.1 manually.
Preparing the installation media for WHC
Obtain the installation images for the following components, and unpack them to
the proper location on the srchost server in the <img_dir>/code directory
structure:
򐂰 IBM Tivoli Monitoring Web Health Console V5.1.1 for Windows, Linux Braz
Port, French, Italian, German, Spanish, Japanese, Korean, Simp Chin, Trad
Chin, Int English
<img_dir>/code/tivoli/ITM/511/WHC/generic
򐂰 Tivoli Monitoring Web Health Console V5.1.2 Fixpack 6 Multi Int Engl Braz
Port French Italian German Spanish Japanese Korean Simp Chin Trad Chin
<img_dir>/code/tivoli/ITM/512/WHCFP06/generic
If you plan to install the Web Health Console on top of an existing WebSphere
Application Server v5.1 you should obtain the images for Web Health Console for
WebSphere 5.1 from your local Tivoli Support organization. In addition, you
should obtain the installation images for WebSphere Application Server v5.1.
򐂰 WebSphere Application Server v5.1 for Linux, (Brazilian Portuguese, French,
German, Italian, Japanese, Korean, Spanish, US English)
<img_dir>/code/websphere/was/510/server/Linux-IX86
򐂰 WebSphere Application Server V5.1.1 for Linux, Fixpack 1, English, Brazilian
Portuguese, French, German, Italian, Spanish, Japanese, Korean, Simplified
Chinese, Traditional Chinese
<img_dir>/code/websphere/was/510_fp1/appserver/Linux-IX86
To determine which installation images you need, and from where they can be
downloaded, refer to Table B-1 on page 647.
Make the installation image accessible from the console system using NFS or
SMB mount:
smbmount //srchost/img /mnt -o username=root,password=<password>
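If you prefer NFS over SMB, a minimal equivalent sketch (assuming the srchost
system exports the image directory as /img) is:

mount -t nfs srchost:/img /mnt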
Determining WHC Component locations
Table 4-17 on page 203 outlines which GA version WHC components to install
where, in accordance with the functional roles of the systems described in 4.1.1,
“Management environments” on page 131. Refer to Table 4-18 on page 203 for
information regarding components to install for the Limited Availability version of
WHC that will install on WebSphere Application Server v5.1.
Table 4-17 Web Health Console installation roadmap: GA version
IBM Tivoli Monitoring Web Health Console 5.1.1
install on: console
IBM Tivoli Monitoring Web Health Console 5.1.1 Fix Pack 6
install on: console
Table 4-18 Web Health Console installation roadmap: LA version
IBM Tivoli Monitoring Web Health Console LA Version for WAS 5.1 (a)
install on: console
WebSphere Application Server v5.1
install on: console (if not already present)
WebSphere Application Server V5.1.1 for Linux, Fixpack 1
install on: console (if not already present)
a. Obtain the WHC LA version for WAS 5.1 from Tivoli Support
Installing WHC components
The following sections contain information about installing both the General
Availability (GA) version 5.1.1 of the Web Health Console and the Limited
Availability (LA) WHC for WebSphere Application Server 5.1.
Installing WHC 5.1.1 GA
The IBM Tivoli Monitoring Web Health Console GA Version components are to
be installed through the Installation Wizard provided with the product. During
installation, a copy of WebSphere Application Server v4.01 Express will be
installed.
To install IBM Tivoli Monitoring Web Health Console 5.1.1, run the following
setup executable from the console system to launch the installation wizard and
follow the instructions in the wizard panels:
<img_dir>/code/tivoli/ITM/511/WHC/generic/setuplinux.bin
Refer to the manual IBM Tivoli Monitoring User's Guide, Version 5.1.2,
SH19-4569-03 for additional information.
To apply ITM Web Health Console Fix Pack 6, run the following command from
the console system:
<img_dir>/code/tivoli/ITM/512/WHCFP06/generic/ITMWHS/disk1/5.1.1-WHC-FixPack6_lin.bin
Installing WHC LA for WebSphere Application Server 5.1
Installation of the WHC for WebSphere Application Server 5.1 requires that the
WebSphere Application Server be installed prior to installation of the Web Health
Console.
Installing WebSphere Application Server v5.1
Prior to installing the LA version of the Tivoli Web Health Console for WAS 5.1,
you must install WebSphere Application Server v5.1 on the console system.
To install WebSphere Application Server 5.1 on the console system, open a
command shell from the desktop, and perform the following steps:
1. Ensure that the DISPLAY variable has been set.
export DISPLAY=<local ip-address>:0
<local ip-address> is the local IP address of the system, for example 10.1.2.3.
2. Add the mqm and mqbrkrs groups:
groupadd mqbrkrs
groupadd mqm
3. Add the new user ID mqm:
useradd -s /bin/bash -g mqm -m -d /home/mqm mqm
4. Add root to the mqm and mqbrkrs groups:
usermod -G mqm,mqbrkrs root
Attention: Log in appropriately so that root picks up its secondary user
groups.
When the root user on all platforms, except Windows platforms, does not
belong to the mqbrkrs and mqm user groups, errors occur when installing
the embedded messaging feature.
On many systems, such as UnitedLinux, if you telnet in and issue the id
command or the groups command, you cannot see the groups mqm or
mqbrkrs, even though root has been assigned to them. Solve this problem in
one of two ways:
1. Use the ssh command to log into the server.
2. Issue the su - command.
After using one of these methods, verify the required groups with the
id command or the groups command:
# id
uid=0(root) gid=0(root) groups=0(root)
# su -
# id
uid=0(root) gid=0(root)
groups=0(root),64(pkcs11),101(mqm),102(mqbrkrs)
5. Start the installation:
<img_dir>/code/WebSphere/was/510/appserver/Linux-IX86/install
6. Follow the prompts until installation is complete.
For additional information, refer to the IBM WebSphere Information Center Web
site:
http://publib.boulder.ibm.com/infocenter/ws51help/index.jsp
Navigate to WebSphere Application Server, Version 5.1.x →Installing.
WHC LA for WebSphere Application Server 5.1 installation
The IBM Tivoli Monitoring (ITM) Web Health Console (WHC) 5.1.1 was not
originally developed to be used with WebSphere Application Server v5. However,
these instructions walk you through exactly what is necessary to manually install
the application.
Note: Even though you are installing this software manually, please obtain the
ITM WHC 5.1.1 installation CDs or downloads, as well as ITM WHC 5.1.1
Fixpack 6 or 7. These CDs or downloads have information that might be
needed later. The ITM WHC 5.1.1 documentation, release notes, and so forth,
should all still closely apply.
To manually set up Web Health Console on an existing WebSphere Application
Server v5.1 perform the following steps:
1. Install in a single server WebSphere 5 environment.
a. Open a Web browser to load the WebSphere Application Server
Administration Console at the following URL:
http://machinename:9090/admin
Log in using any username, or leave the username blank.
b. From the WebSphere Administration Console, install dm.ear. Accept all
defaults for installation, and save the WebSphere Application Server
master configuration after the install is complete.
i. Navigate through Node →Applications →Enterprise Applications.
ii. Click the install button and give the path to the dm.ear file from the
patch bundle.
iii. Accept all defaults; do not check anything; and click Finish.
iv. Select Save at the top of the page.
v. Click the Save button.
c. In the WebSphere Administration Console, set the classloader policy for
dm.war and DM Web Health Console.ear to PARENT_LAST as follows:
i. Navigate to Node →Applications →Enterprise Applications →DM
Web Health Console.
ii. Change classloader mode to PARENT_LAST and click OK.
iii. Select the link Save at the top of the page.
iv. Click the Save button.
v. Navigate to Node →Applications →Enterprise Applications →DM
Web Health Console →Web Modules →dm.war.
vi. Change classloader mode to PARENT_LAST and click OK.
vii. Select Save at the top of the page.
viii. Click the Save button.
2. Modify the tracing and logging location.
During a normal installation, the WHC installer modifies a file to point to the
correct location for the WHC log files; this entry is set at install time
because the installer does not know in advance under which drive to put the
files, or whether it is a UNIX path or a Windows drive. Because you are
installing manually, you need to modify this file yourself to point to the
proper location for log file output.
a. On each server in the cluster (or the single machine), edit the file
dm.war/WEB-INF/classes/PDLog.properties
Set file.fileDir for the appropriate platform in the Define FileHandlers
section, as shown in Example 4-14.
Example 4-14 WebSphere Application Server tracing and logging location
#-----------------------------------------------------------------------
# Define FileHandlers
#-----------------------------------------------------------------------
file.fileDir=/opt/Tivoli/AMW/logs/
(Use whatever drive you like, but try to keep the path the same.)
b. Restart the WebSphere Application Server.
i. Stop the WebSphere server using the following command:
<WAS_INSTALL_DIR>/bin/stopServer.sh <serverName>
ii. Start the WebSphere server using the following command:
<WAS_INSTALL_DIR>/bin/startServer.sh <serverName>
3. SSL Considerations (Optional if you use SSL)
There are two SSL connections in use with the ITM WHC. The first is from the
browser to the ITM WHC server. The second is from the ITM WHC server to the
ITM TME server. The first SSL connection works exactly as it is described in
the ITM WHC documentation. The second SSL connection works with the
instructions found in the included SSL document.
4. UNIX Graphing Display
The original WebSphere Application Server 4 WHC had a headless issue: if no
X Window server was running on the UNIX machine when the WHC WebSphere
Application Server was started, graphs would not display properly. This
problem only occurred on UNIX machines and only when the graphics server was
not started first.
The included document called ITM511_release_notes.rtf describes this issue
in detail and how to solve it. The details are on page 8, item #2.
5. External WHC Launch
WHC Fixpack 5 added a feature to externally launch the WHC using a URL.
When enough information is passed to it in the form of URL arguments,
the WHC automatically logs in the user and displays endpoint information.
This is not a required piece of the WHC. In fact, only one Tivoli application
(Tivoli Monitoring for Transaction Performance, TMTP) uses it to launch
the WHC, so you can skip this step if you are not interested.
Steps to install the LaunchITM application:
a. Open a Web browser to load the WebSphere Application Server
Administration Console at the following URL:
http://machinename:9090/admin
Log in using any username, or leave the username blank.
b. From the Administrative Console, install LaunchITM.ear. Accept all
defaults for installation, and save the WebSphere Application Server
master configuration after the install is complete.
i. Navigate through Node → Applications → Enterprise Applications
and click the install button. Provide the path to the LaunchITM.ear
file from the patch bundle.
ii. Accept all defaults. Do not check anything. Click Finish.
iii. Select the link Save at the top of the page.
iv. Click the Save button.
c. On each server in the cluster (or single machine), edit the file ITM
Launcher.ear\LaunchITM.war\jsp\LaunchWHC.jsp
i. Search for the string: String serverName = scheme + "://" + serverHostname;
ii. Append the port (the WebSphere internal HTTP transport port, 9080) to
serverName, so the string reads as follows:
String serverName = scheme + "://" + serverHostname + ":9080";
d. Follow Step 2b on page 207 to restart the WebSphere Application Server.
Now the ITM Web Health Console can be launched from the TMTP user
interface. In TMTP, the launch is configured in the Web Health Console
settings of the TMTP user interface. The user is requested to enter the Web
Health Server name in the following format:
http://host_computer_name(:port)
host_computer_name is the fully qualified host name of the computer that hosts
the WHC, and port is the WebSphere Application Server internal HTTP transport
port. The default port is 9080.
Other information required to be set includes the TME host name, TME user name,
and TME password. For further instructions on launching from TMTP, refer to the
TMTP User's Guide.
Verifying WHC installation
Opening the Web Health Console from a Web browser verifies the installation:
1. Connect to WHC.
– WHC on WebSphere Application Server v4
Connection to the original WebSphere Application Server v4 WHC server
from a Web browser was usually to the URL:
http://machine_name/itmwhc
This is only valid if it is installed on the default port 80. If WebSphere
Application Server is installed on a different port, you need to include
that port in the URL address:
http://machine_name:port/itmwhc
– WHC on WebSphere Application Server 5
The WebSphere Application Server 5 version of the Web Health Console
needs to connect to dmwhc, rather than itmwhc. Also, in WebSphere
Application Server 5 installation, the internal HTTP transport port is
configured by default to use port 9080. Thus, by default you would need to
use port 9080 to connect.
http://machine_name:9080/dmwhc
If you have manually changed the port to something other than 9080, adjust
the URL accordingly.
If you have enabled SSL security, use:
https://machine_name:SSL_PORT/dmwhc
2. Supply the following:
User: Tivoli user ID
Password: Password associated with the Tivoli user ID
Host: The name of the managed node to which you want to connect. In the
Outlet Systems Management Solution, the hubtmr system is a good choice.
3. Click OK to connect to the Web Health Console.
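As an optional scripted spot check, you can request the WHC application from
the command line. This is a hedged sketch that assumes curl is available on the
system and that the WHC runs on the console system with the WebSphere
Application Server 5 defaults:

# Expect an HTTP 200 (or a redirect) from the WHC Web application
curl -k -s -o /dev/null -w "%{http_code}\n" http://console:9080/dmwhc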
4.3.8 IBM Tivoli Monitoring for Web Infrastructure
This section demonstrates how to install IBM Tivoli Monitoring for Web
Infrastructure v5.1.2 in order to enable the monitoring of WebSphere Application
Server v5.x resources in the Outlet Systems Management Solution.
For detailed information about IBM Tivoli Monitoring for Web Infrastructure
related topics, refer to the information available at the IBM Tivoli Information
Center:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Web Infrastructure.
Preparing for the installation
The preparation steps for installing the IBM Tivoli Monitoring for Web
Infrastructure v5.1.2 and the related WebSphere Application Server feature
include the following:
1. ”Preparing the media for IBM Tivoli Monitoring”
2. “Determining component locations” on page 211
Preparing the media for IBM Tivoli Monitoring
Obtain the installation images for the following components, and unpack them to
the proper location on the srchost server in the <img_dir>/code directory
structure:
򐂰 Tivoli Monitoring for Web Infrastructure
Install V5.1.2 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl Braz
Port French Italian German Spanish Japan Korea Simp Chin Trad Chin
<img_dir>/code/tivoli/ITMWI/512/all/generic
򐂰 Tivoli Monitoring for Web Infrastructure
Install 2 V5.1.2 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl
Braz Port French Italian German Spanish Japan Korea Simp Chin Trad Chin
<img_dir>/code/tivoli/ITMWI/512/all/generic
򐂰 Tivoli Monitoring for Web Infrastructure
Install 3 V5.1.2 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl
Braz Port French Italian German Spanish Japan Korea Simp Chin Trad Chin
<img_dir>/code/tivoli/ITMWI/512/all/generic
򐂰 Tivoli Monitoring for Web Infrastructure
WebSphere Application Server Component Software V5.1.2 AIX HP-UX
Linux Solaris Win2000 WinNT WinXP Int Engl
<img_dir>/code/tivoli/ITMWI/512/WAS/generic
򐂰 Tivoli Monitoring for Web Infrastructure
Documentation V5.1.2 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int
Engl
<img_dir>/code/tivoli/ITMWI/512/all/generic/documentation
򐂰 5.1.2-WAS-FP02 Tivoli Monitoring for Web Infrastructure: WebSphere
Application Server
<img_dir>/code/tivoli/ITMWI/512_fp02/all/generic
In addition, you will need the images for ITM Component Services, which were
downloaded during the preparation for the ITM installation as described in
“Preparing installation media for ITM” on page 197.
To determine which installation images you need, and from where they can be
downloaded, please refer to Table B-1 on page 647.
Make the installation image accessible, using an NFS or SMB mount, from the
systems on which the installation commands will run (hubtmr and spoketmr):
smbmount //srchost/img /mnt -o username=root,password=<password>
Determining component locations
Table 4-16 on page 198 outlines which ITM for Web Infrastructure components to
install where, in accordance with the functional roles of the systems described in
4.1.1, “Management environments” on page 131.
The summarized version of the information from Table 4-16 on page 198 is that
the ITM for WI components must be installed on all TMR Servers and Tivoli
Gateways in the Outlet Systems Management Solution infrastructure. In our
environment, this boils down to the following systems: hubtmr, spoketmr,
regionXX, and outletXX.
Installing IBM Tivoli Monitoring for WI
The IBM Tivoli Monitoring for Web Infrastructure components can be installed
either through the Tivoli Desktop or from the command line.
In this section we show the command-line-based installation, from which you
will be able to deduce the information needed to perform the installation using
the Tivoli Desktop installation GUI. For information about how to navigate the
Tivoli Desktop installation GUI, please refer to the IBM Tivoli Information
Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Management Framework →Enterprise Installation
Guide →Installing Tivoli Products and Patches.
For specific information about how to install IBM Tivoli Monitoring for Web
Infrastructure components, refer to the information available at the IBM Tivoli
Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Web Infrastructure →Installation and Setup
Guide →Installing or upgrading the product manually in an existing Tivoli
environment.
To install the products and patches for IBM Tivoli Monitoring for Web
Infrastructure through the command line interface, perform the following
steps:
򐂰 IBM Tivoli Monitoring Component Services, Version 5.1.1
a. Run this command from hubtmr to install IBM Tivoli Monitoring Component
Services, Version 5.1.1 on the hubtmr system:
wpatch -c <img_dir>/code/tivoli/ITM/511/all/generic/ITMCS -i ITMCS511 -y
@CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install IBM Tivoli Monitoring
Component Services, Version 5.1.1 on the spoketmr, region01 and
outlet01 systems:
wpatch -c <img_dir>/code/tivoli/ITM/511/all/generic/ITMCS -i ITMCS511 -y
@CreatePaths@=1 spoketmr region01 outlet01
򐂰 IBM Tivoli Monitoring for Web Infrastructure: WAS 5.1.2
a. Run this command from hubtmr to install IBM Tivoli Monitoring for Web
Infrastructure - WebSphere Application Server 5.1.2 on hubtmr from your
specific product path:
winstall -c <img_dir>/code/tivoli/ITMWI/512/all/generic/PRODUCT -i
ITMWAS -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install IBM Tivoli Monitoring for Web
Infrastructure - WebSphere Application Server 5.1.2 on spoketmr,
region01 and outlet01 from your specific product path:
winstall -c <img_dir>/code/tivoli/ITMWI/512/all/generic/PRODUCT -i
ITMWAS -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 IBM Tivoli Monitoring for Web Infrastructure: WAS 5.1.2 FP02
a. Run this command from hubtmr to install IBM Tivoli Monitoring for Web
Infrastructure 5.1.2 - WebSphere Application Server - Fix Pack 02 on
hubtmr:
wpatch -c <img_dir>/code/tivoli/ITMWI/512_fp02/all/generic -i 512WAS02
-y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install IBM Tivoli Monitoring for Web
Infrastructure 5.1.2 - WebSphere Application Server - Fix Pack 02 on
spoketmr, region01 and outlet01:
wpatch -c <img_dir>/code/tivoli/ITMWI/512_fp02/all/generic -i 512WAS02
-y @CreatePaths@=1 spoketmr region01 outlet01
Verifying IBM Tivoli Monitoring for WI installation
Verification steps can be found at the IBM Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Web Infrastructure →Installation and Setup
Guide →Completing the installation of the product →Verifying the
installation of the product.
In addition to the verification steps described in the Information Center, you
can check for the existence of a new policy region by the name Monitoring for
WebSphere Application Server. You can also use the wlookup command to verify
the installation of Tivoli Monitoring for Web Infrastructure.
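As a hedged command-line sketch, run the following from the TMR server to list
all policy regions and filter for the one created by the product:

wlookup -ar PolicyRegion | grep "Monitoring for WebSphere Application Server"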
4.3.9 IBM Tivoli Monitoring for Databases
This section demonstrates how to install IBM Tivoli Monitoring for Databases
v5.1.1 in order to enable the monitoring of DB2 Server v8 resources in the Outlet
Systems Management Solution.
For detailed information about IBM Tivoli Monitoring for Databases related
topics, refer to the information available at the IBM Tivoli Information Center
Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Databases.
Preparing for installation
The preparation steps for installing the IBM Tivoli Monitoring for Databases
v5.1.1 product and the related DB2 feature include the following:
1. “Preparing installation media: DB2”
2. “IBM Tivoli Monitoring for Databases: DB2 component locations” on page 214
Preparing installation media: DB2
Obtain the installation images for the following components, and unpack them to
the proper location on the srchost server in the <img_dir>/code directory
structure:
򐂰 Tivoli Monitoring for Databases
Install V5.1.1 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl Braz
Port French Italian German Spanish Japan Korea Simp Chin Trad Chin
<img_dir>/code/tivoli/ITMDB/511/all/generic
򐂰 Tivoli Monitoring for Databases
Install 2 V5.1.1 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl
Braz Port French Italian German Spanish Japan Korea Simp Chin Trad Chin
<img_dir>/code/tivoli/ITMDB/511/all/generic
򐂰 Tivoli Monitoring for Databases: Documentation V5.1.1 AIX HP-UX Linux
Solaris Win2000 WinNT WinXP Int Engl
<img_dir>/code/tivoli/ITMDB/511/all/generic
򐂰 Tivoli Monitoring for Databases: DB2 Component Software V5.1 AIX HP-UX
Linux Solaris Win2000 WinNT WinXP Int Engl
<img_dir>/code/tivoli/ITMDB/510/all/generic
򐂰 IBM Tivoli Monitoring for Databases 5.1.0 - DB2 Component Software 5.1.0
Fix Pack 5
<img_dir>/code/tivoli/ITMDB/510_fp05/all/generic
In addition, you will need the images for ITM Component Services which were
downloaded during the preparation for ITM installation as described in “Preparing
installation media for ITM” on page 197.
To determine which installation images you need, and from where they can be
downloaded, refer to Table B-1 on page 647.
Make the installation image accessible, using an NFS or SMB mount, from the
systems on which the installation commands will run (hubtmr and spoketmr):
smbmount //srchost/img /mnt -o username=root,password=<password>
IBM Tivoli Monitoring for Databases: DB2 component locations
Table 4-16 on page 198 outlines which ITM for Databases related components to
install where, in accordance with the functional roles of the systems described in
4.1.1, “Management environments” on page 131.
The summarized version of the information from Table 4-16 on page 198 is that
the ITM for Databases components must be installed on all TMR Servers and
Tivoli Gateways in the Outlet Systems Management Solution infrastructure. In
our environment, this boils down to the following systems: hubtmr, spoketmr,
regionXX, and outletXX.
ITM for DB: DB2 Component installation and configuration
The IBM Tivoli Monitoring for Databases components can be installed either
through the Tivoli Desktop or from the command line.
In the following we show the command-line-based installation, from which you
will be able to deduce the information needed to perform the installation using
the Tivoli Desktop installation GUI. For information about how to navigate the Tivoli
Desktop installation GUI, please refer to the IBM Tivoli Information Center at
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp and navigate to
Management Framework →Enterprise Installation Guide →Installing Tivoli
Products and Patches.
For specific information about how to install IBM Tivoli Monitoring for Databases
components please refer to the information available at the IBM Tivoli
Information Center at
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp and navigate to
Monitoring for Databases →Installation and Setup Guide →Installing or
upgrading the product manually in an existing Tivoli environment.
To install the products and patches for IBM Tivoli Monitoring for Databases
through the command line interface, perform the following steps:
򐂰 IBM Tivoli Monitoring Component Services, Version 5.1.1
a. If this step was not performed previously, run the following command from
hubtmr to install IBM Tivoli Monitoring Component Services, Version 5.1.1
on the hubtmr system:
wpatch -c <img_dir>/code/tivoli/ITM/511/all/generic/ITMCS -i ITMCS511 -y
@CreatePaths@=1 hubtmr
b. If this step was not performed previously, run this command from spoketmr
to install IBM Tivoli Monitoring Component Services, Version 5.1.1 on the
spoketmr, region01 and outlet01 systems:
wpatch -c <img_dir>/code/tivoli/ITM/511/all/generic/ITMCS -i ITMCS511 -y
@CreatePaths@=1 spoketmr region01 outlet01
򐂰 IBM Tivoli Monitoring for Databases 5.1.0 - DB2
a. Run this command from hubtmr to install IBM Tivoli Monitoring for
Databases, Version 5.1.0 - DB2 on hubtmr from your specific product path:
winstall -c <img_dir>/code/tivoli/ITMDB/510/all/generic/PRODUCT -i
DB2ECC22 -y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install IBM Tivoli Monitoring for
Databases, Version 5.1.0 - DB2 on spoketmr, region01 and outlet01 from
your specific product path:
winstall -c <img_dir>/code/tivoli/ITMDB/510/all/generic/PRODUCT -i
DB2ECC22 -y @CreatePaths@=1 spoketmr region01 outlet01
򐂰 IBM Tivoli Monitoring for Databases 5.1.0 - DB2 FP05
a. Run this command from hubtmr to install IBM Tivoli Monitoring for
Databases - DB2, Version 5.1.0 - Fix Pack 05 on hubtmr:
wpatch -c <img_dir>/code/tivoli/ITMDB/510_fp05/all/generic -i DB2ECC22
-y @CreatePaths@=1 hubtmr
b. Run this command from spoketmr to install IBM Tivoli Monitoring for
Databases - DB2, Version 5.1.0 - Fix Pack 05 on spoketmr, region01 and
outlet01 from your specific product path:
wpatch -c <img_dir>/code/tivoli/ITMDB/510_fp05/all/generic -i DB2ECC22
-y @CreatePaths@=1 spoketmr region01 outlet01
Verifying the IBM Tivoli Monitoring for Databases installation
Verification steps can be found at the IBM Tivoli Information Center at
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp and navigate to
Monitoring for Databases →Installation and Setup Guide →Completing the
installation of the product →Verifying the installation of the product.
In addition to the verification steps described in the Information Center, you
can check for the existence of a new policy region by the name Monitoring for
Databases. As described in the previous section, you can also use the wlookup
command to verify the installation of Tivoli Monitoring for Databases.
4.3.10 TMTP Management Server
This section details the installation of the Transaction Monitoring components of
the Outlet Systems Management Solution.
Preparing the TMTP installation
The Tivoli Monitoring for Transaction Performance (TMTP) components can be
installed either through the Installation Wizard or from the command line.
In this section, we show the command-line-based installation, from which you
will be able to deduce the information needed to perform the installation using
the Installation Wizard. For information about how to navigate the Wizard
installation GUI, please refer to the IBM Tivoli Information Center at
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp and navigate
to Monitoring for Transaction Performance → Installation and
Configuration Guide → Installing a management server → Typical
installation: management server.
For the Outlet Systems Management Solution, we chose the Custom
installation, which requires you to manually provide the following prerequisite
components:
򐂰 A working WebSphere Application Server v5.1 - or later - to host the TMTP
Management Server application on the tmtpsrv system
򐂰 An empty DB2 database to be used as a permanent data store for TMTP
In addition to providing these prerequisite components, the installation media
needs to be acquired. In summary, the preparation steps for installing the IBM
Tivoli Monitoring for Transaction Performance v5.3 product include the following:
1. ”Preparing installation media for TMTP”
2. “Determining TMTP Component locations” on page 218
3. “Establishing the TMTP WebSphere Application Server platform” on page 218
4. “Creating the TMTP DB2 database” on page 218
Preparing installation media for TMTP
Obtain the installation images for the following components, and unpack them to
the proper location on the srchost server in the <img_dir>/code directory
structure:
򐂰 IBM Tivoli Monitoring for Transaction Performance, Version 5.3.0: Web
Transaction Performance Component Software Management Server (1 of 2)
<img_dir>/code/tivoli/TMTP/530/server/generic
򐂰 IBM Tivoli Monitoring for Transaction Performance, Version 5.3.0: Web
Transaction Performance Component Software Management Server (2 of 2)
<img_dir>/code/tivoli/TMTP/530/server/generic
򐂰 IBM Tivoli Monitoring for Transaction Performance, Version 5.3.0: Web
Transaction Performance Component Software Management Agent, Store
and Forward
<img_dir>/code/tivoli/TMTP/530/agents/generic
򐂰 IBM Tivoli Monitoring for Transaction Performance, Version 5.3.0: Rational
Robot, Warehouse Enablement Pack, Tivoli Intelligent Orchestrator, TMTP
5.1 to TMTP 5.3 Upgrade
<img_dir>/code/tivoli/TMTP/530/components/generic
򐂰 IBM Tivoli Monitoring for Transaction Performance, Version 5.3.0:
Documentation CD (English only) UNIX
<img_dir>/code/tivoli/TMTP/530/all/unix/documentation
To determine which installation images you need, and from where they can be
downloaded, refer to Table B-1 on page 647.
Make the installation image accessible from the tmtpsrv system using NFS or
SMB mount:
smbmount //srchost/img /mnt -o username=root,password=<password>
Determining TMTP Component locations
Table 4-19 outlines which TMTP related components to install where, in
accordance with the functional roles of the systems described in “Management
environments” on page 131.
Table 4-19 TMTP product and patch installation roadmap
Tivoli Monitoring for Transaction Performance v5.3: Management Server
install on: tmtpsrv
Tivoli Monitoring for Transaction Performance v5.3: Command Line Interface
install on: hubtmr
Tivoli Monitoring for Transaction Performance v5.3: Web Transaction Monitoring
Management Agent
install on: outlet01
Establishing the TMTP WebSphere Application Server platform
A working WebSphere Application Server environment is required for a custom
installation of the TMTP Management Server. For a description of how to install
WebSphere Application Server v5 on the tmtpsrv system, refer to “Installing
WebSphere Application Server v5.1” on page 204.
Creating the TMTP DB2 database
Prior to the TMTP Management Server custom installation, the database to be
used must exist and be totally empty. In addition, the database must be
accessible from the tmtpsrv system, at a minimum requiring us to install the
DB2 Client component on the tmtpsrv system. Both of these activities are
generally described in the previous sections. Refer to “Creating DB2 instances
and databases” on page 150 and “DB2 client installation” on page 152.
Regarding the creation of the database for TMTP, refer to the IBM Tivoli
Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Transaction Performance → Installation and
Configuration Guide → Preparing an existing database for configuration
with the management server → Preparing an existing DB2 database for the
specific procedures and parameters to be used when defining the DB2 database.
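As a minimal sketch of that procedure in our environment, the following
commands create and catalog the database; the instance name db2tmtp, the port
60009, and the database name tmtp are assumptions for illustration, so use the
parameters determined by the InfoCenter procedure:

# On the rdbms system, as the owning instance user (instance name assumed)
su - db2tmtp
db2 create database tmtp
# On the tmtpsrv system, make the database reachable through the DB2 client
# (host name and port number are examples)
db2 catalog tcpip node db2tmtp remote rdbms.demo.tivoli.com server 60009
db2 catalog database tmtp as tmtp at node db2tmtp authentication server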
Important: Ensure the tmtp database is empty, including default tablespaces
such as the systoolspace. Failing to do this can result in an error similar to The
selected database is not empty, delete any tables or tablespaces...
Installing the TMTP Management Server
Prior to initiating the TMTP Management Server installation, you might want to review the
information available at the IBM Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Transaction Performance → Installation and
Configuration Guide → Installing a management server → Custom
installation: management server.
This section demonstrates the steps we performed to install the TMTP
Management Server in the Outlet Systems Management Solution environment
on the tmtpsrv system:
1. To install TMTP Server, execute the following command from the installation
media downloaded to the <img_dir>/code/tivoli/TMTP/530/server/generic
directory:
setup_MS_lin.bin -W tempBrowsePanel.active=true
You will now be guided through a series of dialog boxes in which you can
specify the required parameters for the initial configuration of the TMTP
Management Server.
a. Ensure that the Perform Embedded Installation Using CD-ROMS
check-box is not checked.
b. Click I accept the terms in the license agreement.
c. Change the Directory Name to: /opt/IBM/tivoli/MS.
d. Configure SSL communication.
Temporary SSL keys are provided with the TMTP installation image.
Follow the instructions in the TMTP Installation Guide (available at the
InfoCenter) to replace these keys with permanent ones. If you would
prefer to use the provided keys, use the following information to complete
the setup dialog:
Key File Password: changeit
Trust File Password: changeit
e. Accept the defaults, and click Install.
2. Wait until the installation process has completed.
3. Restart the WebSphere Application Server instance named server1.
Verifying the TMTP Server installation
To verify the installation of the TMTP Server, make sure that the WebSphere
Application Server named server1 has been started, and open the TMTP
Console in your browser using one of the following URLs:
non-SSL: http://tmtpsrv:9080/tmtpUI
SSL: https://tmtpsrv:9445/tmtpUI
The TMTP logon page should be displayed. You can log in with the user ID and
password used during the installation. For the Outlet Systems Management
Solution, we used the user root.
Note: The TMTP Server installation enables Global Security for the
WebSphere server, if it was not previously enabled. This requires you to log
on with a valid user ID and password, for example the user root, to access
the WebSphere Application Server Administrative Console.
Setting up the TMTP command line interface
TMTP provides a command line interface that is very useful for automating the
setup. The command line interface is provided with the installation media under
the tio directory, because it was originally developed to allow automated
provisioning of TMTP Management Agents through Tivoli Intelligent
Orchestrator.
Preparing
The TMTP Command Line Interface code is delivered as part of the TMTP v5.3
installation images, and can be found on the CD-ROM named IBM Tivoli
Monitoring for Transaction Performance, Version 5.3.0: Rational Robot,
Warehouse Enablement Pack, Tivoli Intelligent Orchestrator, TMTP 5.1 to TMTP
5.3 Upgrade. The image was downloaded to the
<img_dir>/code/tivoli/TMTP/530/components/generic directory as part of the
TMTP Management Server installation preparation.
Determining TMTP CLI locations
Table 4-19 on page 218 outlines which TMTP components to install where, in
accordance with the functional roles of the systems described in 4.1.1,
“Management environments” on page 131. The table shows that the TMTP
Command Line Interface will be installed on the hubtmr system, because it is
from this system that we issue configuration and operation commands to the
TMTP Server through scripts or Tivoli tasks.
Installing and configuring
The following steps outline the procedure to install and configure the TMTP CLI:
1. Copy ITMTP_MA.tcdriver from the tio directory of the installation media to
/opt/IBM/tivoli/tmtp_cli on the hubtmr system. Create the directory, if it does
not already exist.
2. Unpack ITMTP_MA.tcdriver using the jar command as shown:
jar xvf ITMTP_MA.tcdriver
3. Configure the TMTP Command Line Interface options using the configCLI.sh
script:
/opt/IBM/tivoli/tmtp_cli/scripts/configCLI.sh MSServer MSUserName
MSPassword MSPort SSLEnabled PropertiesFileName CLIKeyFile
CLIKeyFilePassword
Table 4-20 describes the options you must pass to the configCLI.sh script.
Table 4-20 Options for configCLI.sh script
MSServer: The fully qualified host name of the management server, for
example: ims.ibm.com
MSUser: A valid administrator username for the management server, for
example: root
MSUserPassword: The password for the administrator username, for
example: smartway
MSPortWithAuth: The secure management server communication port, for
example: 9446
MSPortWithoutAuth: The nonsecure management server communication port,
for example: 9082
MSSSLEnabled: A flag indicating whether security is enabled, for
example: true
PropertiesFileName: The name of the properties file to be created, for
example: tmtpcli
CLIKeyFile: The path to the CLI key file to be used, for example:
/home/thinkcontrol/repository/tivoli/ITMTP/cli/config/agent.jks
CLIKeyFilePassword: The CLI key file password, for example: changeit
Note: The TMTP CLI requires Java 1.4.1 or later. The default Java version
used by UnitedLinux v1.0 is 1.3.1. If necessary, you can modify the
tmtpcli.sh script (/opt/IBM/tivoli/tmtp_cli/cli) to point to the correct Java
version as follows:
򐂰 Modify the script by commenting out the “look for java...”
section and specifying the JAVACMD value directly:
# look for java (borrowed from ant)
#if [ -z "$JAVACMD" ] ; then
# if [ -n "$JAVA_HOME" ] ; then
# if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
# IBM's JDK on AIX uses strange locations for the executables
# JAVACMD=$JAVA_HOME/jre/sh/java
# else
# JAVACMD=$JAVA_HOME/bin/java
# fi
# else
# JAVACMD=java
# fi
#fi
JAVACMD=/opt/IBMJava2-141/bin/java
In the Outlet Systems Management Solution we used the following TMTP CLI
configuration:
/opt/IBM/tivoli/tmtp_cli/scripts/configCLI.sh 9.3.5.206 root smartway 9446
true tmtpcli /opt/IBM/tivoli/tmtp_cli/cli/config/agent.jks changeit
Verifying TMTP CLI installation and configuration
To verify that the TMTP CLI has been installed and configured properly, run the
following command from the hubtmr system:
/opt/IBM/tivoli/tmtp_cli/cli/tmtpcli.sh -VerifyManagementServer -Console
Tip: The -Console option instructs the script to display the results on the
screen instead of only in the log file (/var/ibm/tivoli/common/BWM/logs).
Make sure that the last line of the output contains the word Success, as shown in
Example 4-15:
Example 4-15 Verifying TMTP CLI installation and configuration
/opt/IBM/tivoli/tmtp_cli/cli/tmtpcli.sh -VerifyManagementServer -Console
java version "1.4.1"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.1) Classic VM
(build 1.4.1, J2RE 1.4.1 IBM build cxia321411-20040301 (JIT enabled: jitc))
Command = VerifyManagementServer
Retrieving document at '/opt/IBM/tivoli/tmtp_cli/cli/wsdl//PolicyManager.wsdl'.
Retrieving document at '/opt/IBM/tivoli/tmtp_cli/cli/wsdl//CLI.wsdl'.
Nov 24, 2004 4:41:36 PM org.apache.wsif.logging.MessageLogger logIt
WARNING: WSIF0006W: Multiple WSIFProvider found supporting the same namespace
URI 'http://schemas.xmlsoap.org/wsdl/soap/'. Found
('org.apache.wsif.providers.soap.apacheaxis.WSIFDynamicProvider_ApacheAxis,
org.apache.wsif.providers.soap.apachesoap.WSIFDynamicProvider_ApacheSOAP')
Nov 24, 2004 4:41:36 PM org.apache.wsif.logging.MessageLogger logIt
INFO: WSIF0007I: Using WSIFProvider
'org.apache.wsif.providers.soap.apacheaxis.WSIFDynamicProvider_ApacheAxis' for
namespaceURI 'http://schemas.xmlsoap.org/wsdl/soap/'
Status = BWMCR1300I Success
Note: As shown in Example 4-15, you might receive a warning indicating that
multiple namespace providers exist. This message can be ignored.
4.4 Postinstallation configuration
This section describes the postinstallation configuration for each product needed
to customize the systems management tools to the Outlet Systems Management
Solution.
4.4.1 Framework customization
Initial postinstallation customization of the Tivoli Framework involves the
following steps:
1. ”Setup administrator roles”
2. “Setup and configure SSL communication” on page 225
3. “Connect TMRs and define resource exchange interval” on page 226
Setup administrator roles
Most TMF-based products create specific administrative roles during installation.
These roles normally must be assigned to an administrator before that
administrator can perform specific, product-related operations. Therefore,
remember to check the roles after the installation, and associate them with the
Tivoli Administrators according to your policies.
To verify the available roles, run the wgetadmin command, or use the graphical
function: right-click the Administrator icon and select the pop-up menu item
Edit TMR Roles. The dialog shown in Figure 4-17 on page 224 will be displayed:
Figure 4-17 Edit TMR Roles
Select the roles you want to associate with the particular administrator in the
Available Roles column, and move them to the Current Roles column using the
left arrow button located between the two columns.
When you click Change and Close to save the modifications, the confirmation
dialog shown in Figure 4-18 will appear.
Figure 4-18 Administrator Info Changed
You can simply dismiss this dialog, but remember that the modifications will
not take effect until the administrator in question next logs in.
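As a hedged command-line sketch, you can review an administrator's roles with
the wgetadmin command; the administrator name in the second command is an
example for our hub TMR:

# Display the properties, including TMR roles, of the current administrator
wgetadmin
# Display the properties of a specific administrator (name is an example)
wgetadmin Root_hubtmr-region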
Setup and configure SSL communication
To allow for secure communication between managing systems (managed
nodes) in the Outlet Systems Management Solution, we have to install and
enable Secure Sockets Layer (SSL) communications. This requires setup and
configuration of SSL on all the managed nodes in the hub and spoke TMRs.
On all non-Linux managed nodes, you must install the SSL-A package to enable
SSL connections. On Linux managed nodes, SSL is enabled by default; you only
need to install the SSL-B package for management of security keys. If the SSL-A
package is not installed on a managed node, the managed node can only accept
non-SSL connections from clients. For detailed information about how to enable
SSL in your Tivoli environment, refer to the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Management Framework → Enterprise Installation Guide →
Installing Tivoli Products and Patches → Installing MDist 2 Components →
Installing Secure Sockets Layer
After installing the SSL packages and restarting all systems, set the network
security level on each managed node on which you want to enable SSL
communications. This setting determines how the managed node logs in to the
server. Use the odadmin command from each of the TMR Servers to achieve this:
odadmin set_network_security SSL all
Specify the cipher list on each managed node to dictate the strength of the
encryption used by SSL.
odadmin set_ssl_ciphers "05040A030609" all
Note: Restart the managed node for the changes to take effect. For more
information about the odadmin and oserv commands, refer to the Tivoli
Management Framework Reference Manual.
To restart the nodes, run these commands on the TMR server:
1. odadmin shutdown clients
2. odadmin reexec 1
3. odadmin start clients
You can run the odadmin odinfo command to verify that the settings have taken
effect. The output will be similar to the output in Example 4-16 on page 226.
Example 4-16 oserv configuration information
hubtmr:/ # odadmin odinfo 1
Tivoli Management Framework (tmpbuild) #1 Sun Sep 19 17:27:56 CDT 2004
(c) Copyright IBM Corp. 1990, 2004. All Rights Reserved.
Region = 1393424439
Dispatcher = 1
Interpreter type = linux-ix86
Database directory = /var/spool/Tivoli/hubtmr.db
Install directory = /usr/local/Tivoli/bin
Inter-dispatcher encryption level = DES
Kerberos in use = FALSE
Remote client login allowed = TRUE
Install library path =
/usr/local/Tivoli/lib/linux-ix86:/usr/local/Tivoli/install-dir/iblib/linux-ix86:/usr/lib:/usr/ucblib
Force socket bind to a single address = FALSE
Perform local hostname lookup for IOM connections = FALSE
Use Single Port BDT = FALSE
Use communication channel check = FALSE
Communication check timeout = default (180 secs)
Communication check response timeout = default (180 secs)
Oserv connection validation timeout = 300
Port range = (not restricted)
Single Port BDT service port number = default (9401)
Network Security = SSL
SSL Ciphers = 05040A030609
ALLOW_NAT = FALSE
State flags in use = TRUE
State checking in use = TRUE
State checking every 180 seconds
Dynamic IP addressing allowed = FALSE
Transaction manager will retry messages 4 times.
Connect TMRs and define resource exchange interval
In the Tivoli Management Framework, each Tivoli Management Region (TMR)
operates independently, managing only the resources that have been defined to
it. However, it is also possible to connect TMRs to allow for cross-TMR actions,
an activity that is necessary in the Outlet Systems Management Solution to allow
centrally managed resources such as software packages and resource models
to be distributed to the managed systems in the outlets.
To connect the hub and spoke TMRs, use either the graphical functions
available, as described in the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Management Framework → User’s Guide → Tivoli regions and
interregion connections → Making a secure region connection → Desktop.
As an alternative, you can use the wconnect command to achieve this from the
command line, as outlined in the following steps:
1. Look up the TMR numbers using the odadmin odlist command from the
hubtmr (Example 4-17) and from the spoketmr (Example 4-18).
Example 4-17 From the hubtmr
hubtmr:~ # odadmin odlist
Region       Disp Flags Port  IPaddr     Hostname(s)
1393424439      1  ct-    94  10.1.1.1   hubtmr.demo.tivoli.com,hubtmr
                2  ct-    94  10.1.1.2   tec.demo.tivoli.com,tec
                                9.3.5.214  9.3.5.214
                3  ct-    94  10.1.1.4   srchost.demo.tivoli.com,srchost
Example 4-18 From the spoketmr
spoketmr:~ # odadmin odlist
Region       Disp Flags Port  IPaddr     Hostname(s)
1282790711      1  ct-    94  10.1.1.6   spoketmr.demo.tivoli.com,spoketmr
                                9.3.5.236  9.3.5.236
                2  ct-    94  10.2.0.10  region01.demo.tivoli.com,region01
                                9.3.5.242  9.3.5.242
                3  ct-    94  10.2.1.10  outlet01.demo.tivoli.com,outlet01
                                9.3.5.211  9.3.5.211
2. To enable the connection between the TMRs, you have to run the wconnect
command from both sides of the secure connection:
hubtmr: wconnect -s spoketmr 1282790711
spoketmr: wconnect -s hubtmr 1393424439
3. Once the TMRs have been connected, you should exchange information
about the resources between them. Use the wupdate command from each
TMR as shown:
hubtmr: wupdate -r All spoketmr-region
spoketmr: wupdate -r All hubtmr-region
4. Finally, you can use the wlsconn command to verify your settings as in
Example 4-19.
Example 4-19 Using the wlsconn command
spoketmr:~ # wlsconn hubtmr-region
Name:   hubtmr-region
Server: hubtmr.demo.tivoli.com
Region: 1393424439
Mode:   two_way
Port:   94

Resource Name    Last Exchange
-------------    -------------
ManagedNode      Fri Oct 29 11:01:56 2004
Repeater         Tue Nov  9 15:48:36 2004
Gateway          Tue Nov  9 15:46:47 2004
Endpoint         Thu Dec  9 01:28:24 2004
EventServer      Fri Oct 29 14:05:19 2004
TaskLibrary      Tue Nov 16 09:19:46 2004
Tmw2kProfile     Fri Nov 19 10:47:36 2004
5. Once the connection has been established, you will want to create a task to
automatically exchange resources at specific intervals.
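A hedged sketch of such automation follows; the script body can be run by a
Tivoli task or a cron job on the hubtmr, and assumes the standard location of
the Tivoli environment setup script on our Linux systems:

#!/bin/sh
# Refresh all exchanged resources from the spoke region
. /etc/Tivoli/setup_env.sh
wupdate -r All spoketmr-region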
Setup gateways and repeaters
Each managed node or TMR server in the Tivoli Management Environment
can assume the role of a repeater and a gateway. By default, gateways are
automatically configured as repeaters for their client endpoints.
Gateways
The gateway functionality allows the managed node to serve as a gateway for
endpoints, thus allowing endpoint logins and assuming responsibility for passing
packets back and forth between endpoints and the management server systems.
In the Outlet Systems Management Solution, all managed nodes in the spoke
TMRs will be defined as gateways, in order for them to accept logins from local
endpoints.
A gateway with the default configuration is created on a managed node using the
wcrtgate command:
wcrtgate -h <managed node>
If special configuration is needed, for example to allow communications to pass
firewalls, use the wgateway command to set specific configuration options for the
gateways. For more details on gateway configuration, refer to the Tivoli
Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Management Framework → Planning for Deployment Guide →
Endpoints and gateways.
In the Outlet Systems Management Solution, gateways will be established on all
TMR Servers and all managed nodes belonging to spoke TMRs with names
equal to the managed node itself with the string -gw appended. We used the
following commands:
wcrtgate -h hubtmr -n hubtmr-gw
wcrtgate -h spoketmr -n spoketmr-gw
wcrtgate -h <outlet_server_name> -n <outlet_server_name>-gw
Once a managed node has been assigned the gateway functionality, it is important
to define login policies to help control which endpoints are allowed to connect
to the gateway. See 6.5, “Creating endpoint policies” on page 343 for more details.
Repeaters
The repeater functionality allows the managed node to act as a fan-out node for
mdist distributions, typically software distributions, setting aside a number of
system resources such as disk space, bandwidth, and so on to be used
specifically for handling distributions. In the outlet environment, this ensures that
a software package to be distributed to servers in the same region only travels
across the network path between the central management location and the
region once, and then is distributed to the individual stores.
The storage set aside for a repeater is also known as a software distribution
depot, in which software packages can be preloaded. This enables Outlet Inc. to
use network bandwidth at offpeak hours to distribute large software packages to
the remote sites ahead of time for use.
The optimal repeater configuration depends on a number of factors ranging from
available bandwidth on both sides of the repeater, other network usage, number
of downstream systems, available resources at the repeater system, business
policies and so on. Refer to “Repeater guidelines” on page 65 for a more detailed
discussion.
By default, any TMR server is a repeater, and it is assigned the special WAN role
that ensures that any distribution to a different subnet will pass through this
repeater.
Gateways will, by default, become repeaters for all endpoints connected to the
gateway.
In the Outlet Systems Management Solution we specifically defined a repeater
on the srchost system, since all other required repeaters were installed by
default. To create a repeater, use the wrpt command:
wrpt -n <server_name>
Because all additional servers in the Outlet infrastructure will become gateways
by default, the repeaters will be created automatically. However, adjusting
repeater configuration parameters might be needed.
Among others, these can include network usage, disk usage, and other
capacity-related parameters. Most of these are described in “Repeater
parameters and settings” on page 65.
To enable multicast distributions in Outlet Systems Management Solution, the
following command is issued from the hubtmr against all managed nodes that
have the gateway enabled:
wmdist -s <managed node> -C noprompt endpoint_multicast=TRUE
gateway_multicast=TRUE
For the srchost system, which does not host a gateway, we used:
wmdist -s <managed node> -C noprompt gateway_multicast=TRUE
4.4.2 Enabling MDIST2
Enablement of MDist 2, which is heavily used by Tivoli Configuration Manager
components, requires the completion of the following activities:
1. “Creating the mdist_db database”
2. “Cataloging the mdist_db database”
3. “Running the MDIST admin and schema scripts” on page 231
4. “Creating the mdist2 RIM object” on page 231
5. “Verifying MDist2 operation” on page 232
More details regarding the activities involved in configuring Mdist 2 are available
in the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Management Framework → Enterprise Installation Guide →
Installing Tivoli Products and Patches → Installing MDist 2 Components.
Creating the mdist_db database
Use the DB2 command create database mdist_db from the desired instance
user to create the database for the MDist 2 subsystem. This step has probably
already been performed as part of the database installation and configuration
described in “Creating DB2 instances and databases” on page 150.
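If the database does not yet exist, the following minimal sketch, run on the rdbms
system and assuming db2mdist is the instance user set up earlier, creates and
verifies it:
   su - db2mdist                      # switch to the DB2 instance owner
   db2 "create database mdist_db"     # create the MDist 2 database
   db2 "connect to mdist_db"          # quick local sanity check
   db2 terminate                      # end the CLP session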
Cataloging the mdist_db database
From the hubtmr system, issue the following commands to catalog the mdist_db
database residing on the rdbms system:
1. Access the db2 client:
su - db2inst1
2. Define the instance on the rdbms system hosting the mdist_db database:
catalog tcpip node db2mdist remote rdbms.demo.tivoli.com server 60008
Note: The instance name db2mdist and related port number 60008 were
determined when creating the database instances as described in
“Creating DB2 instances and databases” on page 150.
3. Catalog the mdist_db database to make it accessible from the hubtmr
system:
catalog database mdist_db as mdist_db at node db2mdist authentication
server
The mdist_db database should now be accessible from the hubtmr system.
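To confirm the catalog entries, a short sketch such as the following can be used
from the hubtmr system; the connect test prompts for the db2mdist user's password:
   su - db2inst1
   db2 list node directory                    # should show the db2mdist node entry
   db2 list database directory                # should show the mdist_db alias
   db2 "connect to mdist_db user db2mdist"    # prompts for the password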
Running the MDIST admin and schema scripts
The Distribution Status Console will be used both by Inventory and Software
Distribution. To complete its configuration, you must run the admin and schema
scripts provided by the installation. These scripts are located in the
$BINDIR/TME/MDIST2/sql directory.
Remember, these are sample scripts that must be customized by your DBA
before you execute them on the rdbms machine.
For DB2 these scripts are called:
mdist_db2_admin.sql
mdist_db2_schema.sql
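A minimal sketch of running the customized scripts on the rdbms machine, assuming
they have been copied to /tmp (a hypothetical location) after DBA review:
   su - db2mdist
   db2 connect to mdist_db
   db2 -tvf /tmp/mdist_db2_admin.sql    # -t: statements end with ';', -v: echo, -f: read file
   db2 -tvf /tmp/mdist_db2_schema.sql
   db2 terminate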
Creating the mdist2 RIM object
Once the database and the tables are created, you must create a RIM object
called mdist2, which the MDist 2 subsystem will use to store distribution requests
and status information permanently.
We used the parameters in Table 4-21:
Table 4-21   Parameters for the mdist2 RIM
Parameter                       Value
Database Vendor:                DB2
RIM Host:                       hubtmr
Database Home:                  /opt/IBM/db2/V8.1
Database Name:                  mdist_db
Database User ID:               db2mdist
Database Password:              ********
Verify Password:                ********
Server ID:                      tcpip
Instance Home (for DB2 only):   ~db2inst1
Instance Name (for DB2 only):   db2inst1
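If the mdist2 RIM object was not created during installation, a sketch of a manual
creation using the Table 4-21 values and the same wcrtrim flags shown later for
the Inventory RIM objects follows; treat it as an assumption and verify the flags
against the Tivoli Management Framework Reference Manual before use:
   wcrtrim -v DB2 -h hubtmr -d mdist_db -u db2mdist -H /opt/IBM/db2/V8.1 -s tcpip -I ~db2inst1 -t db2inst1 mdist2
   wrimtest -l mdist2     # confirm the new RIM object can reach the database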
Verifying MDist2 operation
To verify the operation of the MDist 2 subsystem, open the Distribution Status
Console from the Tivoli Desktop, or use the wmdist -l command as described in
“Verifying Inventory operation” on page 243.
4.4.3 Enabling Tivoli End-User Web Interfaces
Tivoli Management Framework provides access to Web-enabled Tivoli
Enterprise Applications from a browser. To install the Tivoli Web Interfaces for
WebSphere Application Server on the console server, follow the instructions on
the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Management Framework → Enterprise Installation Guide →
Enabling Tivoli Web Interfaces → Installing Web access for WebSphere
Application Server, Version 5.
Important: The default port used by WebSphere 5 for SSL is 9443. As a
result, you must modify the TivoliFRW.war file with this port (instead of port
443) as described in step 10 of the Installation Guide instructions.
It is important to know that these instructions reference a tool named assembly.sh.
This tool has been replaced by the WebSphere Application Server Toolkit as
of Version 5.1.
Instructions for modifying the TivoliFRW.war file are included here for your
reference.
To modify the TivoliFRW.war file, perform the following steps. If you want to
modify the file using a text editor, use the information immediately following these
steps.
1. Install the WebSphere Application Server Toolkit (ASTK) on the console
server and start it. This tool is stored on the WebSphere Application
Server CDs.
2. Click File → Import.
3. Select WAR file and click Next.
4. Enter the full path to the WAR file, for example:
/opt/IBM/WebSphere/AppServer/installableApps/TivoliFRW.war
5. Click New for project name.
6. Enter TivoliFRW as the Project Name, accept the default Project Location and
select Finish.
7. Click Finish to import the war file.
8. Click the + beside TivoliFRW to expand it.
9. Expand WebContent.
10.Expand WEB-INF.
11.Open web.xml by double-clicking it or right-clicking and selecting open.
12.Click https-port under Context Parameters.
13.Change the Value under Details to 9443 (from 443).
14.Click the X beside Web Deployment Descriptor to close the web.xml file.
15.Click Yes to save the changes.
16.Right-click TivoliFRW and Export.
17.Select WAR File and click Next.
18.Leave the Web Project as TivoliFRW.
19.In the Destination field, enter: <was_home>/installableApps/TivoliFRW.war
Alternatively, you can edit the https-port value with a text editor after you have
installed the TivoliFRW.war application in WebSphere. The information is found
in the web.xml file in the
/opt/IBM/WebSphere/AppServer/installedApps/console/TivoliFRW_war.ear/TivoliFRW.war/WEB-INF
directory.
1. Change the following section from Example 4-20 to Example 4-21 on
page 234.
Example 4-20 Old https-port file
<context-param id="ContextParam_1100051871242">
<param-name>https-port</param-name>
<param-value>443</param-value>
<description>Identifies port number used by HTTPS.</description>
</context-param>
Example 4-21 Revised https-port file
<context-param id="ContextParam_1100051871242">
<param-name>https-port</param-name>
<param-value>9443</param-value>
<description>Identifies port number used by HTTPS.</description>
</context-param>
To verify your setup, make sure that the WebSphere Application Server on the
console system has been started, and direct your Web browser to the following
URL:
https://<console system>:9443/TivoliFRW/webapp
4.4.4 Configuring the Tivoli Enterprise Console
Customizing the Tivoli Enterprise Console after initial installation requires
completing the following activities:
1. “Creating the database for TEC”
2. “Cataloging the tec_db database” on page 235
3. “Creating the tec RIM object” on page 235
4. “Executing the TEC admin and schema scripts” on page 236
5. “Assigning TEC Roles to the Administrator” on page 236
6. “Configuring the TEC Console” on page 237
7. “Verifying TEC operation” on page 238
More details regarding the activities involved in configuring TEC are available on
the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Enterprise Console → Installation Guide → Configuring the
event database.
In addition to these steps, maintaining event class definitions and rulebases is an
on-going activity.
Creating the database for TEC
Use the DB2 command create database tec_db from the desired instance user
to create the database for the Tivoli Enterprise Console. This step has probably
already been performed as part of the database installation and configuration
described in “Creating DB2 instances and databases” on page 150.
Cataloging the tec_db database
From the tec system, issue the following commands to catalog the tec_db
database residing on the rdbms system.
1. Access the db2 client:
su - db2inst1
2. Define the instance on the rdbms system hosting the tec_db database:
catalog tcpip node db2tec remote rdbms.demo.tivoli.com server 60016
Note: The instance name db2tec and related port number 60016 were
determined when creating the database instances as described in
“Creating DB2 instances and databases” on page 150.
3. Catalog the tec_db database to make it accessible from the tec system:
catalog database tec_db as tec_db at node db2tec authentication server
The tec_db database should now be accessible from the TEC TMR system.
Creating the tec RIM object
During the Desktop installation you were prompted to supply parameters for
configuration of the tec RIM object. We used these parameters in Table 4-22 to
configure the tec RIM object.
Table 4-22   Parameters for the tec RIM
Attribute                       Value
Database Vendor                 DB2
Database Home                   /opt/IBM/db2/V8.1
Database Name                   tec_db
Database User ID                db2tec
Database Password               ********
Verify Password                 ********
RIM host                        tec
Server ID                       tcpip
Instance Home (for DB2 only)    ~db2inst1
Instance Name (for DB2 only)    db2inst1
If the tec RIM object was not created during installation, for example because the
installation was run from a command line or the configuration information
provided was wrong, the RIM can be created manually. The scripts to create the
database remove any existing RIM definition named tec and recreate it based on
the information entered when prompted.
Once both the tec_db DB2 database and the tec RIM have been created, test the
validity of the tec RIM object using the wrimtest -l tec command.
Executing the TEC admin and schema scripts
The execution of admin and schema scripts is a required step that creates the
database structure to store TEC events in your RDBMS machine.
You must use the Installation Wizard to create your scripts. However, when using
the Wizard, it is up to you whether to generate and execute, or generate only. In
our environment, we executed the Installation Wizard on the tec system,
then elected generate only. After that step, we moved the scripts to the rdbms
server machine for execution.
1. To start the Installation Wizard, use the script called tec_install.sh from the
Installation Assistant CD-ROM. For more details, refer to the information in
the Tivoli Information Center Web site noted on page 234.
Important: During our installation, we identified a problem with TEC 3.9.0
installation on a 4.1.1 Framework. The Installation Assistant provided with
the original CD-ROM is not able to recognize Framework 4.1.1. To solve
this problem, make sure you have the new Installation Assistant code
provided with LA Interim Fix 3.9.0-TEC-0003LA.
Contact client support to get the above limited availability (LA) code.
2. Once the tec_admin and tec_schema scripts have been generated and
moved to the rdbms system, execute them to create the database objects
required by TEC. Remember to log in as the DB2 instance user (db2tec in our
case) assigned to the tec database before executing the scripts.
Assigning TEC Roles to the Administrator
To allow the default administrator to use and operate TEC, you have to assign
the required TEC roles to the administrator. The following outlines the steps
required to achieve this, and detailed descriptions regarding this activity are
available in the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Enterprise Console → Installation Guide → Installing,
upgrading, and uninstalling using the Tivoli Management Framework
tools → Postinstallation tasks
1. To assign TEC roles to the administrator, start the Tivoli Desktop and
double-click the Administrators icon.
2. Right-click the icon representing the Administrator to which you want to
assign TEC Roles (Root_hubtmr-region in our case) and select Edit
TMR Roles.
3. Verify that the following roles are listed under Current Roles:
– user
– RIM_view
– RIM_update
– senior
Configuring the TEC Console
You can start the TEC Console from any system on which it has been installed.
In our case, that is either the tec or hubtmr system.
1. To start the TEC Console, execute the tec_console executable found in the
$BINDIR/bin directory. You will be presented with the login dialog shown in
Figure 4-19:
Figure 4-19 TEC Console login
2. Once the console opens, you must follow the instructions in the Tivoli
Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Enterprise Console → User’s Guide → Configuring the Tivoli
Enterprise Console product → Creating an event Console to customize
the console to your particular needs.
Verifying TEC operation
You can run a simple command from the TEC server machine to test the event
processing:
wpostemsg -m TEST_MESSAGE -r WARNING EVENT EVENT
The event will be shown in the TEC Console as soon as it is processed.
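If the event does not appear, one way to confirm that it at least reached the
event server is to inspect the reception log directly on the tec system; this is
only a sketch, and wtdumprl output can be large:
   wtdumprl | tail -20    # show the most recent entries in the reception log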
4.4.5 Customizing the Inventory
Customizing the Inventory component of Tivoli Configuration Manager after initial
installation requires the completion of the following activities:
1. “Assigning Inventory Roles to the Administrator”
2. “Creating the inv_db database” on page 238
3. “Cataloging the inv_db database” on page 239
4. “Creating invdh_1 and inventory RIM objects” on page 239
5. “Executing the Inventory admin and schema scripts” on page 241
6. “Creating Inventory query libraries” on page 241
7. “Loading signatures file for software scan” on page 242 (optional)
8. “Enable Inventory integration to TEC” on page 242
9. “Verifying Inventory operation” on page 243
More details regarding the activities involved in configuring the Inventory
database are available in the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Configuration Manager → Planning and Installation Guide →
Working With Repositories and Queries.
Assigning Inventory Roles to the Administrator
Follow these steps to assign Inventory roles to an administrator:
1. From Tivoli Desktop, double-click the Administrators icon.
2. Right-click the Root_hubtmr-region icon and select Edit TMR Roles.
3. Verify that the following roles are listed under Current Roles:
– Inventory_edit
– Inventory_end_user
– Inventory_scan
– Inventory_view
4. Select Change & Close.
Creating the inv_db database
Use the DB2 command create database inv_db from the desired instance user
at the rdbms server in order to create the database for Inventory. This step has
probably already been performed as part of the database installation and
configuration described in “Creating DB2 instances and databases” on page 150.
Cataloging the inv_db database
From the hubtmr system, issue the following commands to catalog the inv_db
database residing on the rdbms system.
1. Access the db2 client:
su - db2inst1
2. Define the instance on the rdbms system hosting the inv_db database:
catalog tcpip node db2inv remote rdbms.demo.tivoli.com server 60004
Note: The instance name db2inv and related port number 60004 were
determined when creating the database instances as described in
“Creating DB2 instances and databases” on page 150.
3. Catalog the inventory database to make it accessible from the hubtmr
system:
catalog database inv_db as inv_db at node db2inv authentication server
The inv_db database should now be accessible from the hubtmr system.
Creating invdh_1 and inventory RIM objects
For performance reasons, Inventory can use multiple RIM objects for handling
scanned hardware and software data. The minimal configuration requires two
RIM objects, one for reading and one writing, both pointing to the same DB2
database. During the Desktop installation, you were prompted to supply
parameters for configuration of the two RIM objects. We used the parameters
shown in Table 4-23 to configure both Inventory RIM objects.
Table 4-23   Parameters for the invdh_1 and inv_query RIMs
Parameter                       Value
Data Handler Host               hubtmr
MDist 2 Callback Host           hubtmr
Database Vendor                 DB2
RIM Host                        hubtmr
Database ID                     inv_db
Database Home                   /opt/IBM/db2/V8.1
Server ID                       tcpip
User Name                       db2inv
Instance Home (for DB2 only)    ~db2inst1
Instance Name (for DB2 only)    db2inst1
Note: Be aware of the following information when working with RIM objects.
• The documentation states that RIM hosts should have been created
automatically during installation of the Inventory component. We have
observed inconsistent results in the creation of RIM hosts. Below are
instructions for adding and testing each RIM object required for
Inventory to function properly.
• The wrimtest command will fail if the default password for the RIM object
is different from the password set manually when creating User IDs. The
Tivoli Configuration Manager Planning and Installation Guide Chapter 3,
Page 25 lists the default passwords. The wsetrimpw command is described
in the Tivoli Management Framework Reference Manual. If a RIM object
was created by Tivoli, use the command wsetrimpw as follows to set the
password for the RIM object to the desired value.
wsetrimpw inv_query tivoli ABCDEF
In this case, the RIM object that is modified is inv_query, and the password
is changed from tivoli to ABCDEF, which is the password for the user invtiv,
the default user of the inv_query RIM object.
To verify the RIM objects for Inventory, run these commands on the TMR server
(hubtmr).
1. Test the RIM object invdh_1 to see if it was already created. After running this
command, you should see information about the RIM host on the screen with
no errors: User name, database name home directory, and so forth.
wrimtest -l invdh_1
a. If the RIM object has not been created, create RIM object invdh_1 using
the wcrtrim command as shown:
wcrtrim -v DB2 -h rdbms -d inv_db -u invtiv -H /usr/opt/db2_08_01/ -s tcpip -I /home/db2inst1 -t db2inst1 invdh_1
RDBMS password: ******** (the password for invtiv)
b. After creation, verify that the RIM object was created successfully by once
again running wrimtest.
wrimtest -l invdh_1
2. Test the RIM object inv_query to see if it was already created or not. After
running this command, you should see information about the RIM host on the
screen with no errors: user name, database name home directory, and so on.
wrimtest -l inv_query
a. If it was not created, create the RIM object inv_query.
wcrtrim -v DB2 -h rdbms -d inv_db -u invtiv -H /usr/opt/db2_08_01/ -s tcpip -I /home/db2inst1 -t db2inst1 inv_query
RDBMS password: ******** (the password for invtiv)
b. After creation, verify that the RIM object was created successfully by once
again running wrimtest.
wrimtest -l inv_query
3. Create the two RIM objects on the spoketmr system using the wcrtrim
commands shown in step 1 and step 2. This will provide access to the
inventory database directly from the spoketmr system, instead of directing all
database requests through the hubtmr.
Executing the Inventory admin and schema scripts
To define the various database objects needed by Configuration Manager's
Inventory components, two scripts are provided: an admin script and a schema
script, which set up the required database objects. These scripts are located in:
$BINDIR/../generic/inv/SCRIPTS/RDBMS
Remember, these are sample scripts that must be customized by your DBA
before you execute them on the rdbms machine.
For DB2 these scripts are called:
inv_db2_admin.sql
inv_db2_schema.sql
Note: These scripts attempt to drop database objects such as tables before
they create them. Thus, for these commands you will see errors stating the
specified tables do not exist. These are normal and are not problems with the
script.
Creating Inventory query libraries
From the hubtmr system and logged in as root, execute the inventory_query and
subscription_query scripts respectively to create the query libraries used by
Inventory.
$BINDIR/../generic/inv/SCRIPTS/QUERIES/inventory_query.sh hubtmr-region
$BINDIR/../generic/inv/SCRIPTS/QUERIES/subscription_query.sh hubtmr-region
During task execution, you will see scrolling text indicating that the queries are
being created.
Loading signatures file for software scan
A signature is the set of properties - such as the name, date, and size of one or
more files - that uniquely identifies a specific software application. Tivoli
Inventory provides a default set of signatures that can be used to collect
information about the software installed on the managed systems in the Outlet
Systems Management Solution. These default signatures are stored in the
SWSIGS.INI file in the $BINDIR/../generic/inv/SIGNATURES directory. To use
these signatures, you must import them into the inventory databases using the
winvsig command as shown in the following example:
winvsig -a -f $BINDIR/../generic/inv/SIGNATURES/SWSIGS.INI
This command installs the signatures in the configuration repository in the
SWARE_SIG table. You can edit the signatures provided with Inventory, delete
them, or you can add your own signatures. For example, you can add a signature
that is not currently provided with Inventory or for an application that was
developed in-house. Inventory uses signatures to determine which software
applications are installed on the machines you scan.
For further details about the use of signatures please refer to the Tivoli
Information Center at
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp and navigate
to Configuration Manager → User’s Guide for Inventory → Collecting
custom information with Inventory → Using Signatures.
Enable Inventory integration to TEC
If you want to send Inventory events to one or more Tivoli Enterprise Console
servers, you must import the BAROC files into Tivoli Enterprise Console.
Inventory attempts to send events to the Tivoli Enterprise Console servers in the
order that they are specified.
To import the Inventory BAROC files into Tivoli Enterprise Console, you must
perform the following steps:
1. Create a new rule base. Give it a descriptive name, Inventory, for example.
2. Copy the default classes from the Tivoli Enterprise Console rules base to the
new rule base named Inventory.
3. Import the Inventory BAROC file tecad_inv.baroc from the
$BINDIR/TME/INVENTORY directory.
4. Compile the Inventory rule base.
5. Load the Inventory rule base into the Tivoli Enterprise Console server.
6. Stop the Tivoli Enterprise Console server.
7. Restart the Tivoli Enterprise Console server.
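As a command-line sketch of steps 1 through 7, assuming the rule base is to be
created under the hypothetical path /usr/local/Tivoli/rulebases/inventory, the
sequence might look like this:
   wrb -crtrb -path /usr/local/Tivoli/rulebases/inventory Inventory   # step 1
   wrb -cprb -overwrite Default Inventory                             # step 2
   wrb -imprbclass $BINDIR/TME/INVENTORY/tecad_inv.baroc Inventory    # step 3
   wrb -comprules Inventory                                           # step 4
   wrb -loadrb Inventory                                              # step 5
   wstopesvr                                                          # step 6
   wstartesvr                                                         # step 7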
For details about this procedure, see the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Enterprise Console → User’s Guide → Configuring the Tivoli
Enterprise Console product → Configuring the event server → Managing
rule bases.
If your environment is complex and contains multiple Tivoli Management Regions
or Tivoli Enterprise Console servers, such as the Outlet Systems Management
Solution environment, the Inventory instances in each spoke TMR need to know
the location of the TEC server.
Perform the following steps from the spoketmr system:
1. Make the EventServer class visible from Inventory by running the following
commands before connecting the two Tivoli regions:
wregister -i -r EventServer
wupdate -r EventServer hubtmr-region
2. This is an optional step. If more than one EventServer is present in the
environment, create the file $DBDIR/tecad_inv.conf on the inventory server.
This file should contain the following line:
ServerLocation=EventServer#server-region
server-region is the event server to which the events must be sent.
3. Run the wsetinvglobal -l <options> -t <options>
@InventoryConfig:<profile_name> command to verify your configuration.
Consult the User’s Guide for Inventory for details on the wsetinvglobal
command.
Verifying Inventory operation
To verify your inventory setup, you can create an InventoryConfig profile,
customize your hardware and software scan settings for PC or UNIX, then
distribute them to an endpoint. To get further details on how to setup an
InventoryConfig profile, you can refer to the section Creating an Inventory Profile
in Tivoli Configuration Manager - User’s Guide for Inventory.
Once you distribute an InventoryConfig profile, an MDist 2 operation will take
place first to associate a distribution ID with the distribution.
1. From the hubtmr, run the wmdist -la command to get the status of a scan. Pay
particular attention to the time the scan has been running. In our environment,
our scan for this profile took less than one minute to complete. If the scan is
sitting at 0% completed for a time much longer than this, something might be
wrong and needs to be analyzed.
The output will be similar to Example 4-22.
Example 4-22 Verifying inventory profile distribution using wmdist command
wmdist -la
Name Distribution ID Targets Completed Successful Failed
InventoryScan 1600263082.1 1 1(0%) 1(0%) 0( 0%)
HardwareScan 1600263082.2 1 1(0%) 1(0%) 0( 0%)
2. If the distribution is not shown in the above output, re-execute wmdist with
only the -l option. You will get output similar to Example 4-23.
Example 4-23 wmdist -l sample output
wmdist -l|tail -10
Name Distribution ID Targets Completed Successful Failed
InventoryScan 1600263082.1 1 1(100%) 1(100%) 0( 0%)
HardwareScan 1600263082.2 1 1(100%) 1(100%) 0( 0%)
3. If the distribution is completed on your target, the inventory data collection
could still be running. To check the status of the scan operation, you can use
the wgetscanstat command as shown:
wgetscanstat -a -p -f
4. When you get the output message “No scans using the Inventory status
collector are in progress.”, you can use either wqueryinv <EP_name> or
wqueryinv -s <EP_name>, based on whether you want a hardware or software
scan. You will get output similar to Example 4-24.
Example 4-24 Command line invocation of Inventory queries - sample output
hubtmr:/etc # wqueryinv hubtmr-2nd-ep
Query Name: INVENTORY_HWARE
TME_OBJECT_LABEL,TME_OBJECT_ID,COMPUTER_SYS_ID,COMPUTER_SCANTIME,COMPUTER_MODEL
,COMPUTER_ALIAS,SYS_SER_NUM,OS_NAME,OS_TY
PE,PROCESSOR_MODEL,PROCESSOR_SPEED,PHYSICAL_TOTAL_KB,PHYSICAL_FREE_KB,TOTAL_PAG
ES,FREE_PAGES,PAGE_SIZE,VIRT_TOTAL_KB,VIR
T_FREE_KB
hubtmr-2nd-ep,1393424439.6.522+#TMF_Endpoint::Endpoint#,9A7FDB68-1DD1-11B2-8381
-8705A1D7E615,2004-11-01 13:21:17.000000,
VMware, Inc. VMware Virtual Platform,hubtmr,VMware-56 4d 1a ce 20 a4 1f
76-6,UnitedLinux 1.0 (i586) VERSION = 1.0 PATCHL
EVEL = 3 ,LINUX,Pentium 4,3000,514804,196352,125,47,4096,1036152,1034848
4.4.6 Configuring Software Distribution
After installation, the Software Distribution component of Tivoli Configuration
Manager is ready for use. However, some additional configuration steps are
required to facilitate integration between Software Distribution and related Tivoli
components, such as Inventory and TEC. These activities are:
• “Enabling SWD-Inventory integration”
• “Enabling SWD and TEC Integration”
Because Software Distribution relies on the basic administrator roles (super,
admin, senior, and user) of the Tivoli Management Framework, there is no need
to define specific roles for Software Distribution.
Enabling SWD-Inventory integration
Tivoli Software Distribution provides facilities to automatically define software
signatures for Inventory based on definitions in software packages. The
signatures will be defined at the time of import of software packages, and only if
the software package specifically defines one or more files to be used as
signatures. By default, this feature is disabled.
In order to enable this automatic software signature definition, issue the wswdinv
-y command on each managed node from which you will import software
packages.
In our case, software package import is only allowed from the managed node
srchost, so the command only needs to be executed there.
Enabling SWD and TEC Integration
Just like the Inventory component, Software Distribution (SWD) can send events
to the Tivoli Enterprise Console whenever specific Software Distribution actions
occur. Perform the following steps to facilitate the integration between SWD and
TEC:
1. Ensure the integration is enabled. The integration is enabled by default;
otherwise, use the wswdmgr command to enable the integration, as shown in
step 2 on page 246.
2. Register the Software Distribution event classes on the Tivoli Enterprise
Console server.
3. Configure the event server, if necessary.
4. Create the Software Distribution Console.
For more information about integrating SWD and TEC, refer to the Tivoli
Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Configuration Manager → User’s Guide for Software
Distribution → Integrating the Tivoli Enterprise Console.
The following procedure demonstrates the steps required to enable SWD to send
events to the TEC server in the Outlet Systems Management Solution.
1. Before connecting the Tivoli management regions, run the wregister
command to register the resource for the event server (EventServer) on all
the Software Distribution servers from which you want events sent and where
you want the EventServer class visible. Use the following command:
wregister -i -r EventServer
2. After you have installed Software Distribution, the Tivoli Enterprise Console
integration is enabled by default. If it is not enabled, use the wswdmgr
command to set the value of the is_swd_tec_enabled key to true. To display
the current setting, use the wswdmgr -s command on the Software
Distribution server as follows:
wswdmgr -s is_swd_tec_enabled=true
3. Software Distribution event classes are defined in the tecad_sdnew.baroc file,
located in the $BINDIR/TME/SWDIS/SPO directory on the Software Distribution
server.
To set a rule base to manage events and to install the Software Distribution
event classes on the event server, the swdistecsrvr_inst.sh script file is
provided and is located in the $BINDIR/TME/SWDIS/SCRIPTS directory.
Copy the following files from the Software Distribution server to a temporary
directory (for this example, /tmp) on the tec system:
$BINDIR/TME/SWDIS/SCRIPTS/swdistecsrvr_inst.sh
$BINDIR/TME/SWDIS/SPO/tecad_sdnew.baroc
4. Run the swdistecsrvr_inst.sh script as follows, specifying the path (/tmp) of
the tecad_sdnew.baroc file using the -w option:
sh /tmp/swdistecsrvr_inst.sh -b <your_rule> -s aix270 -u root -p
<your_password> -t <your_console> -w /tmp/tecad_sdnew.baroc
Note: If you are running this script in a Linux environment, you might need
to change the value of a variable inside the script to avoid an error such as
the following:
ECO2045W: "Tec_Console" is not a valid console.
This error is caused by the wrong case of a variable set inside the script.
The case must be changed to match the following:
TECCONSOLE="tec_console"
5. Specify the event server where Software Distribution events should be sent.
The event server is specified by defining the ServerLocation key in the
tecad_sd.conf file located in the following path:
$BINDIR/TME/SWDIS/SPO/tecad_sd.conf
A sample file is shown in Example 4-25.
Example 4-25 tecad_sd.conf sample file
#######################################################
#
# Configuration file for Software Distribution 4.x
#
#######################################################
#
# Put the event server location here
ServerLocation=@EventServer
ConnectionMode=connection_oriented
RetryInterval=1
NO_UTF8_CONVERSION=NO
ServerPort=0
6. Complete the process by creating the Software Distribution console, as
described in the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Configuration Manager → User’s Guide for Software
Distribution → Integrating the Tivoli Enterprise Console → Creating the
Software Distribution Console.
4.4.7 Enabling the Activity Planner
Enabling the Activity Planner requires the completion of the following activities:
1. “Assigning APM roles to administrators” on page 248
2. “Creating the apm_db database” on page 248
3. “Cataloging the apm_db database” on page 248
4. “Running the database admin and schema scripts for APM” on page 249
5. “Defining the planner RIM object” on page 249
6. “Registering plug-ins to APM” on page 251
7. “Managing Linux versions” on page 251
8. “Verifying Activity Planner operations” on page 251
Detailed information about enabling APM can be located in the Tivoli Information
Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Configuration Manager → User’s Guide for Deployment
Services.
Assigning APM roles to administrators
Use the following procedure to verify and assign APM roles to the Tivoli
administrator named Root_hubtmr-region:
1. From the Tivoli Desktop, double-click the Administrators icon.
2. Right-click the Root_hubtmr-region icon and select Edit TMR Roles.
3. Verify that the following roles are listed under Current Roles.
– APM_Admin
– APM_Edit
– APM_Manage
– APM_View
4. Select Change & Close.
Creating the apm_db database
Use the DB2 command create database apm_db from the desired instance user
to create the database for the Activity Planner. This step has probably already
been performed as part of the database installation and configuration described
in “Creating DB2 instances and databases” on page 150.
Cataloging the apm_db database
From the hubtmr system, issue the following commands to catalog the apm_db
database residing on the rdbms system.
1. Access the db2 client:
su - db2inst1
2. Define the instance on the rdbms system hosting the apm_db database:
catalog tcpip node db2swd remote rdbms.demo.tivoli.com server 60012
Note: The instance name db2swd and related port number 60012 were
determined when creating the database instances as described in
“Creating DB2 instances and databases” on page 150.
Because the database instance named db2swd hosts multiple databases for
the Tivoli Configuration Manager components, chances are that the node
db2swd might have already been cataloged with the db2 client running at the
hubtmr system. Optionally, use the command db2 list node directory to
verify the existence of the catalog entry.
3. Catalog the apm_db database to make it accessible from the hubtmr system:
catalog database apm_db as apm_db at node db2swd authentication server
The apm_db database should now be accessible from the hubtmr system.
Running the database admin and schema scripts for APM
Once the apm_db database has been created, you can run the scripts to create
the database objects in the newly created database. These scripts are located in
$BINDIR/TME/APM/SCRIPTS/. Remember, these are sample scripts that must
be customized by your DBA before you execute them on the rdbms machine.
For DB2, the database object creation scripts are called:
plans_db2_admin.sql
plans_db2_schema.sql
Note: These scripts attempt to drop database objects such as tables before
they create them. As a result, for these commands you will see errors stating
the specified tables do not exist. These error messages are normal and are
not problems with the script.
Defining the planner RIM object
Based on the way you installed the APM component, you might be prompted to
provide information that will be used by the installation process to create the RIM
object for the Activity Planner by the name of planner.
We used the parameters shown in Table 4-24 to configure the planner RIM:
Table 4-24   Parameters for the planner RIM
Parameter                   Value
Database Vendor             DB2
RIM Host                    hubtmr
Database ID                 apm_db
Server ID                   tcpip
DB_UserName                 db2swd
APM_UserName                tivapm
APM_Password                ********
Database Home               /opt/IBM/db2/V8.1
Instance Name (DB2 only)    db2inst1
Instance Home (DB2 only)    ~db2inst1
During the installation of the Activity Planner server component, if it does not
already exist, a new user called tivapm is created on the operating system. This
user does not have any particular privileges, but it is necessary to create a
dedicated Tivoli administrator for that login. As a consequence, on the Tivoli
Desktop, a new administrator called swd_admin_region-name_region is created
and associated with this user. The Activity Planner engine requires the creation
of such a user and of the new administrator. When the engine performs
operations, it must authenticate itself with the Tivoli Management Framework,
and it uses this Administrator to do that.
The management of the password of this new user is very important. The Activity
Planner engine maintains an encrypted copy of this password internally. The
password maintained by the engine must always be synchronized with the
password of the operating system user. If the password of the user is changed on
the operating system, the password maintained by the engine must be changed
accordingly using the wsetapmpw command.
Note: Remember that the default password for planner RIM is planner. To
get the RIM working, change the password using the wsetrimpw command as
shown:
wsetrimpw planner planner <new pass>
1. If the planner RIM object was not created during installation, use the wcrtrim
command to create it.
2. Once the planner RIM object and the database to which it refers have been
created with the schema scripts, verify the connection using the wrimtest -l
planner command.
Registering plug-ins to APM
The Activity Planner needs the inventory scan, Software Distribution, and task
library plug-ins registered in order to use those components. The scripts that can
be used to register the plug-ins to the Activity Planner are located in the
$BINDIR/TME/APM/SCRIPTS/ Tivoli directory, and are called:
reg_swd_plugin.sh
reg_inv_plugin.sh
reg_tl_plugin.sh
Software Distribution plug-in
Inventory Scan plug-in
Task Libraty plug-in
On the hubtmr system, execute each of these scripts to register the plug-ins.
To verify that the plug-ins have been registered, you can run this command:
wapmplugin -l
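A sketch of the registration sequence on the hubtmr system, assuming the scripts
take no arguments (check each script's header before running them):
   cd $BINDIR/TME/APM/SCRIPTS
   sh reg_swd_plugin.sh     # register the Software Distribution plug-in
   sh reg_inv_plugin.sh     # register the Inventory Scan plug-in
   sh reg_tl_plugin.sh      # register the Task Library plug-in
   wapmplugin -l            # list the registered plug-ins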
Managing Linux versions
Depending on the version of Linux you have installed, the Activity Planner engine
might not start. In this case, you must define the APM_KERNEL_LINUX and
APM_THREADS_FLAG variables to the Tivoli environment.
APM_KERNEL_LINUX
The default value for Linux kernel is 2.2.5. If you are using UnitedLinux with
Kernel Version 2.4.19, add the following variable to the Tivoli environment using
the odadmin environ set command:
APM_KERNEL_LINUX=2.4.19
APM_THREADS_FLAG
If you are using Linux Advanced Server set the LD_ASSUME_KERNEL variable
in the .profile file for the root user to 2.2.5. If the APM engine still does not start,
add the following variable to the Tivoli environment using the odadmin environ
set command:
APM_THREADS_FLAG=native
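One way the variables might be added, sketched here with /tmp/tivoli.env as a
hypothetical scratch file, is to dump the current object dispatcher environment,
append the new settings, and feed the result back to odadmin environ set, which
reads from standard input:
   odadmin environ get > /tmp/tivoli.env             # dump the current environment
   echo "APM_KERNEL_LINUX=2.4.19" >> /tmp/tivoli.env
   echo "APM_THREADS_FLAG=native" >> /tmp/tivoli.env
   odadmin environ set < /tmp/tivoli.env             # load the updated environment
Then try starting the Activity Planner engine again.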
Verifying Activity Planner operations
The APM installation creates a Tivoli Administrator that is used for APM
operations. This Administrator is called swd_admin_tmrserver-region, and the
user tivapm is added as its login name. In addition, all the APM and RIM roles
are added by the installation by default.
To verify your APM installation, run the Activity Plan Monitor or Activity Plan
Editor GUI from the Tivoli Desktop. A login dialog box similar to the one shown in
Figure 4-20 on page 252 will be displayed:
Figure 4-20 Activity Plan Editor launch
The default user is tivapm and the password was defined during the APM
installation. Press OK to start the Activity Plan Editor desktop.
4.4.8 Enabling Change Manager
Enabling the Change Manager requires the completion of the following activities:
1. “Assigning CCM Roles to the Administrator”
2. “Creating the ccm_db database” on page 253
3. “Cataloging the ccm_db database” on page 253
4. “Running the database admin and schema scripts for CCM” on page 253
5. “Defining the CCM RIM object” on page 254
6. “Registering plug-ins to CCM” on page 255
Detailed information about enabling Change Manager can be located in the Tivoli Information
Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Configuration Manager → User’s Guide for Deployment
Services.
Assigning CCM Roles to the Administrator
Use the following procedure to verify and assign CCM roles to the Tivoli
administrator named Root_hubtmr-region:
1. From the Tivoli Desktop, double-click the Administrators icon.
2. Right-click the Root_hubtmr-region icon and select Edit TMR Roles.
3. Verify that the following roles are listed under Current Roles:
– CCM_Admin
– CCM_Edit
– CCM_Manage
– CCM_View
– RIM_update
– RIM_view
– user
4. Select Change & Close.
Creating the ccm_db database
Use the DB2 command create database ccm_db from the desired instance user
to create the database for the Change Manager. This step has probably already
been performed as part of the database installation and configuration described
in “Creating DB2 instances and databases” on page 150.
Cataloging the ccm_db database
From the hubtmr system, issue the following commands to catalog the ccm_db
database residing on the rdbms system.
1. Access the db2 client:
su - db2inst1
2. Define the instance on the rdbms system hosting the ccm_db database:
catalog tcpip node db2swd remote rdbms.demo.tivoli.com server 60012
Note: The instance name db2swd and related port number 60012 were
determined when creating the database instances as described in
“Creating DB2 instances and databases” on page 150.
Because the database instance named db2swd hosts multiple databases
used by Tivoli Configuration Manager components, chances are that the node
db2swd might have already been cataloged with the db2 client running at the
hubtmr system. Optionally, you can use the db2 list node directory
command to verify the existence of the catalog entry.
3. Catalog the ccm_db database to make it accessible from the hubtmr system:
catalog database ccm_db as ccm_db at node db2swd authentication server
The ccm_db database should now be accessible from the hubtmr system.
Running the database admin and schema scripts for CCM
Once the CCM database has been created you can run the scripts to create the
database objects needed to store CCM data. These scripts are located in the
$BINDIR/TME/CCM/SCRIPTS/ directory.
Remember, these are sample scripts that must be customized by your DBA
before their execution on the rdbms machine.
For DB2, the database object creation scripts are called:
ccm_db2_admin.sql
ccm_db2_schema.sql
Note: These scripts attempt to drop database objects such as tables before
they create them. Thus, for these commands you will see errors stating the
specified tables do not exist. These are normal and are not problems with the
scripts.
Defining the CCM RIM object
Based on the way you installed the CCM component, you might be asked to
provide parameters to be used by the installation process to create the CCM RIM
object, which is used by the Change Manager to store information.
We used these parameters in Table 4-25 to configure the RIM:
Table 4-25   Parameters for the ccm RIM
Parameter                    Value
Database Vendor              DB2
RIM Host                     hubtmr
Database ID                  ccm_db
Server ID                    tcpip
DB_UserName                  db2swd
Database Home                /opt/IBM/db2/V8.1
Instance Name (DB2 only)     db2inst1
Instance Home (DB2 only)     ~db2inst1
If the CCM RIM object was not created during installation, use the wcrtrim
command to create it.
Once the CCM RIM object and the database to which it refers have been created
using the schema scripts, verify the connection with the wrimtest -l ccm
command.
Note: Remember that the default password for CCM RIM is tivoli. To get the
RIM working, you need to change the password using the wsetrimpw
command as shown below:
wsetrimpw ccm tivoli <new pass>
Registering plug-ins to CCM
The Change Manager needs the inventory scan and Software Distribution
plug-ins registered in order to use those components. The scripts that can be
used to register the plug-ins to the Change Manager are located in the directory:
/usr/local/Tivoli/bin/aix4-r1/TME/CCM/SCRIPTS/, and are called:
reg_swd_plugin.sh       Software Distribution plug-in
reg_invscan_plugin.sh   Inventory Scan plug-in
1. Execute each of these scripts to register the plug-ins.
2. To verify that the plug-ins have been registered, run the wccmplugin -l
command.
Verifying the Change Manager operation
To verify your CCM installation, run the Change Manager GUI from the Tivoli
Desktop. A login dialog box similar to the one shown in Figure 4-21 will be
displayed:
Figure 4-21 Change Manager launch dialog
The default user is root and the password is the one defined for the root user.
Press OK to display the Change Manager Desktop.
4.4.9 IBM Tivoli Monitoring configuration
To set up and configure IBM Tivoli Monitoring for use in the Outlet Systems
Management Solution environment, the following steps must be completed:
1. “Assigning ITM Administrator Roles” on page 256
2. “IBM Tivoli Monitoring event and heartbeat message reception” on page 256
3. “Enabling endpoint heartbeating” on page 258
Assigning ITM Administrator Roles
Before using IBM Tivoli Monitoring it is necessary to assign the proper roles to
the Tivoli Administrator. This has to be performed on the TMR server in each
TMR - the hubtmr and spoketmr systems in our setup.
From the Tivoli Desktop:
1. Double-click the Administrators icon.
2. Right-click the Root_hubtmr-region icon and select Edit TMR Roles.
3. Make sure that the following roles are listed under Current Roles:
– itm_whc_user
– itm_tasks
4. Select Change & Close.
IBM Tivoli Monitoring event and heartbeat message reception
To enable forwarding of Tivoli Monitoring events on a Tivoli Enterprise Console
(TEC), you need to import the Tivoli Monitoring BAROC files into the rule base
used by the Tivoli Enterprise Console server.
Even though scripts (dmae_tec_inst.sh) and tasks
(DMCreateRuleAndLoadBaroc) are provided to accomplish this task, the files
required for successful completion are not available at the TEC Server machine,
because ITM has not been installed there. Since the ITM class and rule-set files
are located only on the systems where ITM was installed (hubtmr and spoketmr),
the files need to be copied to the TEC system for processing; consequently, for
the Outlet Systems Management Solution the tasks and scripts provided with the
ITM product will not work.
To enable ITM event and heartbeat message reception in TEC, perform the steps
outlined in this section. For full information about performing each step of the
procedure, refer to the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Enterprise Console → User’s Guide → Configuring the Tivoli
Enterprise Console product → Configuring the event server → Managing
rule bases.
To enable monitoring and message reception, perform the following steps:
1. Select an existing rule base or create a new rule base to contain the Tivoli
Monitoring BAROC files.
2. Copy all the *.baroc and *.rls files from the $BINDIR/TMNT_TEC directory at
the hubtmr or spoketmr server to the rule base directory on the tec system
identified in the previous step.
3. Import the required BAROC files from the rule base directory into the rule
base. It is important that you import the files in the following order:
a. The Tmw2k.baroc file
b. BAROC files for all the resource models for which you want events sent
to TEC
For the Outlet Systems Management Solution we imported the following
ITM-specific BAROC files:
• DMXPhysicalDisk.baroc
• DMXCpu.baroc
• DMXFile.baroc
• DMXFileSystem.baroc
• DMXMemory.baroc
• DMXNetworkInterface.baroc
• DMXNetworkRPCNFS.baroc
• DMXProcess.baroc
• DMXSecurity.baroc
c. The hb_events.baroc file to enable heartbeat messages
4. Import the required heartbeat rules file into the rule base to enable the
support of heartbeat messages. The file is called hb_events.rls. For more
information about the rules contained in this file, see “Understanding the Tivoli
Enterprise Console rules” in the IBM Tivoli Monitoring User’s Guide 5.1.2.
5. Import the required clearing event rules file into the rule base to enable
clearing events to close the error events to which they relate. The file is called
dmae_events.rls. For more information about the rules contained in this file,
see “Understanding the Tivoli Enterprise Console rules” in the IBM Tivoli
Monitoring User’s Guide 5.1.2.
6. Compile and load the rule base.
7. Stop and restart the Tivoli Enterprise Console server.
The TEC server is now ready to receive Tivoli Monitoring events from the
monitoring sources whose corresponding BAROC files you have imported into
the active rule base. To see the events sent by Tivoli Monitoring from the
Tivoli Enterprise Console main dialog box, click the All icon.
Note: The BAROC files available with this version of Tivoli Monitoring can also
be used with Tivoli Distributed Monitoring (Advanced Edition) 4.1 or with Tivoli
Distributed Monitoring for Windows 3.7 Patch 3.
Enabling endpoint heartbeating
Heartbeats for endpoints are controlled by the gateways serving each endpoint.
Therefore this function is controlled on a managed-node level. The command
used to control heartbeating is wdmheartbeat. Refer to IBM Tivoli Monitoring
User’s Guide 5.1.2 for details on the syntax.
1. Before defining the heartbeat frequency, we need to configure the heartbeat
engine on all gateways, and because new configurations require restart of the
heartbeat engine, we start by stopping the heartbeat engine on all gateways.
wdmmn -stop -m all -h
2. Next we want notifications and events to be sent to the TEC server in the
event the heartbeat function discovers that an endpoint is down, and we want
the heartbeat engine to start automatically if it goes down. These additional
configuration items in Example 4-26 are set using the wdmconfig command.
Example 4-26 Notifications to the TEC server
wdmconfig -m all -D heartbeat.send_events_to_notice=true
wdmconfig -m all -D heartbeat.send_events_to_tec=true
wdmconfig -m all -D heartbeat.tec_server=EventServer
wdmconfig -m all -D heartbeat.reboot_engine_if_down=true
3. Finally, we are ready to enable heartbeating in the Outlet Systems
Management Solution. We enable heartbeating every 60 seconds for all
endpoints controlled by all gateways in each TMR by issuing the following
command from both the hubtmr and the spoketmr systems:
wdmheartbeat -m all -s 60
Note that all heartbeat enablement commands should be issued from both
the hubtmr and the spoketmr systems.
4. Use the wdmheartbeat -m all -q command to verify the status of the
heartbeat function. The output shown in Example 4-27 shows the heartbeat
status from spoketmr.
Example 4-27 Querying the status of heartbeating
spoketmr:/usr/local/Tivoli/bin/linux-ix86 # wdmheartbeat -m all -q
Processing ManagedNode spoketmr...
HeartBeat processor status: STARTED, frequency: 60 (secs).
Processing ManagedNode region01...
HeartBeat processor status: STARTED, frequency: 60 (secs).
Processing ManagedNode outlet01...
HeartBeat processor status: STARTED, frequency: 120 (secs).
Processing ManagedNode hubtmr...
HeartBeat processor status: STARTED, frequency: 60 (secs).
4.4.10 IBM Tivoli Monitoring for Web Infrastructure
Enabling the monitoring of WebSphere servers through IBM Tivoli Monitoring for
Web Infrastructure (WI) requires a few post-installation steps before it can be
used. These are:
1. ”Assigning IBM Tivoli Monitoring for WI Roles to the Administrator”
2. “Defining IBM Tivoli Monitoring for WI events and rules to TEC” on page 259
3. “Updating Web Health Console files for IBM Tivoli Monitoring for WI” on
page 262
Detailed information about enabling IBM Tivoli Monitoring for Web Infrastructure
can be located in the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Web Infrastructure → Installation and Setup
Guide.
Assigning IBM Tivoli Monitoring for WI Roles to the
Administrator
Before monitoring WebSphere Application Server, it is necessary to assign the
proper roles to the Tivoli Administrator. For additional information, refer to the
instructions in the IBM Tivoli Monitoring for Web Infrastructure: WebSphere
Application Server User’s Guide under the section titled “Setting authorization
roles.”
From the Tivoli Desktop, perform the following steps:
1. Double-click the Administrators icon.
2. Right-click the Root_hubtmr-region icon and select Edit TMR Roles.
3. Verify that the following roles are listed under Current Roles:
– websphereappsvr_super
– websphereappsvr_admin
– websphereappsvr_user
4. Select Change & Close.
Defining IBM Tivoli Monitoring for WI events and rules to TEC
To set up your Tivoli Enterprise Console event server to process IBM WebSphere
Application Server events, you need to copy the BAROC and rule set files to the
TEC Server machine, import the appropriate TEC class and rule set files into a
TEC rule base, compile and load the rule base, and restart the TEC Server.
The class files that must be copied to the tec system and imported into the
rule base are:
• WebSphere_MQ_Channel.baroc
• WebSphere_MQ_QueueManager.baroc
• WebSphere_MQ_Queue.baroc
• itmwas_events.baroc
• itmwas_dm_events.baroc
And the following rule sets are also required:
• itmwas_events.rls
• itmwas_monitors.rls
Detailed information about the classes and rule sets are available in the Tivoli
Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Web Infrastructure → Reference Guide.
All of these steps have already been performed (see “IBM Tivoli Monitoring event
and heartbeat message reception” on page 256), but IBM Tivoli Monitoring for
Web Infrastructure also provides a task that automates the process. The name of the
task is Configure_Event_Server, and it can be found in the WebSphere Event
Tasks task library.
The task can be invoked from the Tivoli Desktop through the WebSphere Event
Tasks task library, found in the Monitoring for WebSphere Application Server
policy region. The task can also be executed from the command line, as shown
in Example 4-28, where we are extending the rule base that was built for
ITM, as described in “IBM Tivoli Monitoring event and heartbeat message
reception” on page 256.
Example 4-28 ITM for WI Configure_Event_Server task invocation
hubtmr:/ # wruntask -t Configure_Event_Server -l "WebSphere Event Tasks" -h tec
-m 300 -o 15 -a UPDATE -a Production -a "" -a "" -a RESTART
############################################################################
Task Name:      Configure_Event_Server
Task Endpoint:  tec (ManagedNode)
Return Code:    0
------Standard Output------
IZY9035I Configuring the event server.
IZY9001I The Configure Event Server task is preparing the rule base.
IZY9010I Rule base Production was compiled successfully.
IZY9047I Importing classes.
IZY9008I IBM Tivoli Monitoring classes are already installed.
IZY9016I Imported classes in file itmwas_events.baroc into rule base Production.
IZY9016I Imported classes in file itmwas_dm_events.baroc into rule base Production.
IZY9026I Importing rules.
IZY9018I Replacing rules in file itmwas_events.rls in rule base Production.
IZY9043I Imported rules in file itmwas_events.rls into rule base Production.
IZY9018I Replacing rules in file itmwas_monitors.rls in rule base Production.
IZY9043I Imported rules in file itmwas_monitors.rls into rule base Production.
IZY9010I Rule base Production was compiled successfully.
IZY9027I Installing event sources.
IZY9021I Event source WAS already exists.
IZY9021I Event source TMNT already exists.
IZY9029I Activating the rule base.
IZY9024I Rule base Production was loaded.
IZY9030I The event server was stopped.
The Tivoli Enterprise Console Server is initializing...
The Tivoli Enterprise Console Server is running.
IZY9031I Event server started.
IZY9039I The Configure Event Server task completed successfully.
------Standard Error Output------
############################################################################
As shown, the task requires four parameters, which are all described in the Tivoli
Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Web Infrastructure → Reference Guide → IBM
WebSphere Application Server tasks → Configure_Event_Server.
Note: We specified a task execution time-out value of 300 seconds to allow
enough time for the rule base changes to complete. The default task
execution time-out value is 60 seconds, which is insufficient for this task.
Updating Web Health Console files for IBM Tivoli Monitoring for WI
This procedure describes how to update the class files for the Web Health
Console to support IBM Tivoli Monitoring for Web Infrastructure. The class files
specify the standard text that the console displays. Whenever you add or
upgrade components for your installation of IBM Tivoli Monitoring for Web
Infrastructure, you must perform this procedure.
1. Copy the contents of the HCONSOLE directory of catalog files from the IBM
Tivoli Monitoring for Web Infrastructure, Version 5.1.2, Component Software
CD to the Web Health Console resources directory,
<WASHOME>/installedApps/dm.ear/dm.war/WEB-INF/classes/com/tivoli/DmForNt/resources,
on the system hosting the Web Health Console (the console system for the
Outlet Systems Management Solution).
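For illustration, assuming the Component Software CD is mounted at /mnt/cdrom (the mount point is an assumption of this sketch), the copy can be performed with a command similar to the following:
# Copy the message catalog files into the Web Health Console resources directory
cp -R /mnt/cdrom/HCONSOLE/* \
  <WASHOME>/installedApps/dm.ear/dm.war/WEB-INF/classes/com/tivoli/DmForNt/resources/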
4.4.11 IBM Tivoli Monitoring for Databases
Enabling the monitoring of DB2 servers through IBM Tivoli Monitoring for
Databases requires a few post-installation steps before it can be used. These
are:
1. “Assigning IBM Tivoli Monitoring for DB2 Roles to the Administrator”
2. “Defining IBM Tivoli Monitoring for DB2 events and rules to TEC”
3. “Updating Web Health Console files for IBM Tivoli Monitoring for DB2”
4. “Link the Monitoring for DB2 policy region on root Desktop”
Each of these is described in detail in the following sections.
Detailed information about enabling IBM Tivoli Monitoring for Databases can be
located in the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Databases → Installation and Setup Guide.
Assigning IBM Tivoli Monitoring for DB2 Roles to the Administrator
Before monitoring DB2, it is necessary to give the administrators the proper
roles. From the Tivoli Desktop, perform the following steps:
1. Double-click the Administrators icon.
2. Right-click the Root_hubtmr-region icon and select Edit TMR Roles.
3. Verify that the following roles are listed under Current Roles:
– db2_user
– db2_dba
4. Select Change & Close.
Link the Monitoring for DB2 policy region on root Desktop
The installation creates a new policy region called Monitoring for DB2, which by
default is not visible on the Tivoli Desktop. You can link it to the root
administrator’s Desktop, where it serves as a container for similar managed
resources that share one or more common policies. To set up the link, run the
following command:
wln @PolicyRegion:"Monitoring for DB2#hubtmr-region"
/Administrators/Root_hubtmr-region
Note: If you have interconnected TMRs, you must use the above command to
reference the Monitoring for DB2 policy region instead of referencing the
object with the path format /Regions/”Monitoring for DB2”. Otherwise, you
will get an error message similar to the following:
FRWAE0134E Object label Monitoring for DB2 is ambiguous. Matching objects are:
1393424439.1.1211#TMF_PolicyRegion::GUI#
1282790711.1.1084#TMF_PolicyRegion::GUI#
For the Outlet Systems Management Solution, we also created a subregion
called DB2 Database Servers within the Monitoring for DB2 policy region to
contain all of the DB2 related objects.
Defining IBM Tivoli Monitoring for DB2 events and rules to TEC
This section provides information about setting up the Tivoli Enterprise Console
(TEC) for use with IBM Tivoli Monitoring for Databases: DB2.
To set up your Tivoli Enterprise Console event server to process events related
to DB2 servers generated by ITM for Databases, you need to copy the necessary
BAROC and rule set files to the TEC Server machine, import the appropriate
TEC class and rule set files into a TEC rule base, compile and load the rule base,
and restart the TEC Server.
The classes that must be copied to the tec system and imported into the
rule base are:
򐂰 DB2_Event.baroc
򐂰 DB2Agents.baroc
򐂰 DB2HostThroughput.baroc
򐂰 DB2CpuUtilization.baroc
򐂰 DB2DatabaseStatus.baroc
򐂰 DB2InstanceStatus.baroc
򐂰 DB2ApplyReplication.baroc
򐂰 DB2BufferPool.baroc
򐂰 DB2BufferPoolExtStorage.baroc
򐂰 DB2CatalogCache.baroc
򐂰 DB2Cursor.baroc
򐂰 DB2DirectIO.baroc
򐂰 DB2FCMActivity.baroc
򐂰 DB2LockWaits.baroc
򐂰 DB2Locks.baroc
򐂰 DB2Logging.baroc
򐂰 DB2PackageCache.baroc
򐂰 DB2ReplicationCapture.baroc
򐂰 DB2SAPTablespaceUsageStatus.baroc
򐂰 DB2SQLStatementActivity.baroc
򐂰 DB2Sorts.baroc
򐂰 DB2TableActivity.baroc
򐂰 DB2TableApplyReplication.baroc
No rule sets are used.
For a listing of event classes and events, see the Tivoli Information Center Web
site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Databases → Reference Guide.
All of these steps have been performed previously; see “IBM Tivoli Monitoring
event and heartbeat message reception” on page 256. However, IBM Tivoli
Monitoring for Databases provides a task that automates the process. The name
of the task is ECC_Configure_TEC_Classes, and it can be found in the
DB2ManagerAdminTasks task library.
You can invoke the task in two ways:
򐂰 From the Tivoli Desktop through the DB2ManagerAdminTasks task library,
found in the Monitoring for DB2 policy region
򐂰 From the command line (as shown in this section)
Note: The ECC_Configure_TEC_Classes task must be run from a managed
node on which IBM Tivoli Monitoring for Databases has been installed. The
target of the execution (the -h parameter) must point to the TEC server
machine (tec in our case). The task can only be used to create a
new rule base; it cannot be used to modify your production rule base.
1. For the Outlet Systems Management Solution we executed the task:
wruntask -t ECC_Configure_TEC_Classes -l DB2ManagerAdminTasks -h tec -a
"ITMfDB2" -a "Production" -a DB2 -a N -m 600
We specified a task execution time-out value of 600 seconds to allow enough
time for the rule base changes to complete. The default task execution
time-out value is 60 seconds, which is insufficient for this task.
The task requires four parameters, which are all described in the Tivoli
Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Databases → Reference Guide → DB2
Manager Admin Tasks → ECC_Configure_TEC_Classes.
Note: During task execution you might receive one or more of the error
messages below:
򐂰 ECO3042E Can not import
hubtmr:/usr/local/Tivoli/bin/generic/ITM/DB2/TECClasses/DB2_Event.baroc
to rule base:
򐂰 Error::ECO:0001:0196 The class file named DB2_Event.baroc is
already defined in the specified rule base.
These messages only advise you that the specified BAROC file has already
been imported into the rule base.
2. To verify that the classes have been successfully loaded, you can execute
this sample command:
wpostemsg -m "Testing ITM for Databases: DB2 Component Software baroc
files" -r FATAL DB2_High_TimePerStatement DB2_Event
The output of the wtdumprl command should list the event as PROCESSED, as in
Example 4-29.
Example 4-29 TEC reception log output
1~685~65537~1100683885(Nov 17 03:31:25 2004)
### EVENT ###
DB2_High_TimePerStatement;source=DB2_Event;severity=FATAL;msg=Testing ITM for
Databases: DB2 Component Software baroc files;origin=10.1.1.2;END
### END EVENT ###
PROCESSED
Updating Web Health Console files for IBM Tivoli Monitoring for DB2
This procedure describes how to update the class files for the Web Health
Console to support IBM Tivoli Monitoring for Databases. The class files specify
the standard text that the console displays. Whenever you add or upgrade
components for your installation of IBM Tivoli Monitoring for Databases, you
must perform this procedure.
1. Copy the contents of the HCONSOLE directory of catalog files from the IBM
Tivoli Monitoring for Databases, Version 5.1.0: DB2, Component Software CD
to the Web Health Console resources directory,
<WASHOME>/installedApps/dm.ear/dm.war/WEB-INF/classes/com/tivoli/DmForNt/resources,
on the system hosting the Web Health Console (the console system for the
Outlet Systems Management Solution).
4.4.12 IBM Tivoli Monitoring for Transaction Performance
Enabling the monitoring of transactions through IBM Tivoli Monitoring for
Transaction Performance (TMTP) requires a few post-installation steps before it
can be used. These are:
1. “Configuring TEC to work with TMTP”
2. “Integrating the Web Health Console” on page 269
Detailed information about enabling Tivoli Monitoring for Transaction
Performance can be located in the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Transaction Performance → Installation and
Configuration Guide and Monitoring for Transaction Performance →
Problem Determination Guide.
Configuring TEC to work with TMTP
For Tivoli Enterprise Console to receive events from IBM TMTP, verify that the
TransPerf.baroc file has been loaded into the active rule base and the TEC
server has been restarted. The TransPerf.baroc file is located in the
<MS_install>/config/tec directory on the TMTP server machine. See the
instructions provided in “IBM Tivoli Monitoring event and heartbeat message
reception” on page 256 for details on how to import a new BAROC file into a rule
base.
Use the following steps to configure Tivoli Enterprise Console:
1. Perform the following steps on the management server, the tmtpsrv system in
our case, to configure it to work with Tivoli Enterprise Console:
a. Edit the eif.conf file, located in the <MS_install>/config directory.
b. Define the Tivoli Enterprise Console event server by setting the
ServerLocation property to the fully qualified host name of the TEC server.
c. If the event server is located on a UNIX computer, set ServerPort=0. The
default value is 5529, which is valid for Microsoft Windows.
The two affected sections of the eif.conf file used in the Outlet Systems
Management Solution are shown in Example 4-30.
Example 4-30 TMTP configuration of TEC Server
###############################################################################
#ServerLocation=host
#
#Specifies the name of the host on which the event server is installed. The
#value of this field must be one of the formats shown in the table below,
#depending on whether the adapter is a TME adapter or a non-TME adapter, and
#whether the event server is part of an interconnected Tivoli management region:
#
#Adapter Type                   Format
#----------------------------------------------------
#TME                            @EventServer
#
#TME in an interconnected
#Tivoli management region       @EventServer#region_name
#
#non-TME                        host_name or IP_address
#
#The ServerLocation keyword is optional and not used when the TransportList
#keyword is specified.
#
#Note: The ServerLocation keyword defines the path and name of the file for
#logging events, instead of the event server, when used with the TestMode
#keyword.
###############################################################################
#
# NOTE: SET THE VALUE BELOW AS SHOWN IN THIS EXAMPLE TO CONFIGURE TEC EVENTS
#
# Example: ServerLocation=marx.tivlab.austin.ibm.com
#
ServerLocation=tec.demo.tivoli.com
###############################################################################
#ServerPort=number
#
#Specifies the port number on a non-TME adapter only on which the event server
#listens for events. Set this keyword value to zero (0), the default value,
#unless the portmapper is not available on the event server, which is the case
#if the event server is running on Microsoft Windows or the event server is a
#Tivoli Availability Intermediate Manager (see the following note). If the port
#number is specified as zero (0) or it is not specified, the port number is
#retrieved using the portmapper.
#
#The ServerPort keyword is optional and not used when the TransportList keyword
#is specified.
###############################################################################
#ServerPort=5529
#commented by Fabrizio. Must be set to 0 if the TEC server runs on Linux
ServerPort=0
d. Shut down and restart the management server.
2. Run the following command on the Tivoli Enterprise Console event server to
add IBM Tivoli Monitoring for Transaction Performance to the Tivoli object
database that event consoles use to create event filters:
wcrtsrc -l TMTP TMTP
3. To verify that the setup is complete, you can run the following sample
command:
wpostemsg -m Test -r FATAL TMTP-MS-Event TMTP-Event
Running wtdumprl from the TEC server machine, you should see output
similar to Example 4-31.
Example 4-31 TMTP event processing verification
1~94~65537~1100112031(Nov 10 12:40:31 2004)
### EVENT ###
TMTP-MS-Event;source=TMTP-Event;severity=FATAL;msg=Test;origin=10.1.1.2;END
### END EVENT ###
PROCESSED
For further instructions about setting up the connections between TEC and
TMTP, refer to the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Transaction Performance → Administrator’s
Guide → Configuring communications in the monitoring environment →
Configuring Tivoli Enterprise Console to work with IBM Tivoli Monitoring
for Transaction Performance.
Integrating the Web Health Console
The Web Health Console Fixpack 5 added a feature to launch the WHC
externally using a URL. When enough information is passed to it in the form of
URL arguments, the WHC automatically logs the user in and displays endpoint
information. This is not a required piece of the WHC; in fact, only one Tivoli
application, TMTP, uses it to launch the WHC, so you can easily skip this step if
you are not interested.
To install the Launch ITM application, perform the following steps:
1. Open a Web browser to load the WebSphere Application Server
Administrative Console using this URL:
http://console:9090/admin
2. In the WebSphere Application Server Administrative Console, install
LaunchITM.ear. Accept all defaults for installation. Save the WebSphere
Application Server master configuration after the install is complete.
a. Click Applications → Enterprise Applications.
b. Click the Install button and provide the path to the LaunchITM.ear file from
the patch bundle.
c. Accept all defaults (do not check anything) and click Finish.
d. Select the link Save at the top of the page.
e. Click the Save button.
f. Exit from the WebSphere Application Server Administrative Console.
3. Edit the file /opt/IBM/WebSphere/AppServer/installedApps/console/ITMLauncher.ear/LaunchITM.war/jsp/LaunchWHC.jsp:
a. Find the line containing the following:
String serverName = scheme + "://" + serverHostname;
b. Add the port to the serverName, so the line reads exactly as follows:
String serverName = scheme + "://" + serverHostname + ":9080";
4. Restart the WebSphere Application Server.
5. Launch the TMTP Administrative Console from your browser:
https://tmtpsrv:9445/tmtpUI/
6. Click System Administration → Configure User Settings.
7. Enter the requested information similar to what is shown in Figure 4-22 on
page 270:
Figure 4-22 TMTP: Web Health Console Integration
For further instructions to launch the Web Health Console from TMTP, refer to
the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Transaction Performance → User’s Guide and
Monitoring for Transaction Performance → Problem Determination Guide.
Chapter 5. Creating profiles, packages, and tasks
For the Outlet Systems Management Solution, we assumed that all the deployed
systems will be delivered with the correct level of operating system installed and
configured to connect to the network without intervention. Once a system is
active, it is the task of the deployment team to install the required software
components to support the role of the system, and to facilitate monitoring to
help ensure smooth operation.
This chapter discusses the definition of the Tivoli objects, profiles, packages,
and tasks that must be created to fulfill this mission.
The common goal for the activities described in this chapter is to enable
automatic deployment through the use of an Activity Plan Monitor (APM)
plan, which will be deployed automatically when new endpoints are discovered.
For each major, managed component, the logical flow of activities is:
1. Core component customization
2. Monitoring policy definition and profile creation
3. Discovery (if applicable)
However, before embarking on these tasks, we need to define the overall profile
manager structure to which the managed systems will subscribe.
5.1 Defining the logical structure of the environment
As described in 3.3, “Logical Tivoli architecture for Outlet Inc.” on page 49, the
managed systems will be subscribed to profile managers organized in
hierarchies. These hierarchies will allow us to target all systems with similar
attributes (location, architecture, applications) in a single operation.
All master profiles are kept in the hubtmr environment to which profile managers
at the spoke TMRs will be subscribed. To allow for easy assignment of
authorizations to manipulate the objects, each type of profile is kept in a separate
profile manager. The profile managers we need at the hubtmr are all stored in the
hubtmr-region. The ones we will need for the Outlet Systems Management
Solution are:
hubtmr-region_MASTER_PM_INV    To be used for Inventory profiles
hubtmr-region_MASTER_PM_TEC    To be used for ACP profiles
hubtmr-region_MASTER_PM_ITM    To be used for Monitoring profiles
hubtmr-region_MASTER_PM_SWD    To be used for software packages
We will need a dataless profile manager to which the endpoints to be managed
will be subscribed. In the Outlet Systems Management Solution, we create the
hubtmr-region_MASTER_PMS_linux-ix86 profile manager for this purpose.
1. Create the profile managers from the hubtmr system using the wcrtprfmgr
command as demonstrated in Example 5-1.
Example 5-1 Creating profile managers with wcrtprfmgr
wcrtprfmgr hubtmr-region hubtmr-region_MASTER_PM_INV
wcrtprfmgr hubtmr-region hubtmr-region_MASTER_PM_ITM
wcrtprfmgr hubtmr-region hubtmr-region_MASTER_PM_SWD
wcrtprfmgr hubtmr-region hubtmr-region_MASTER_PM_TEC
wcrtprfmgr hubtmr-region hubtmr-region_MASTER_PMS_linux-ix86
wsetpm -d @ProfileManager:hubtmr-region_MASTER_PMS_linux-ix86
2. Create the main hierarchy at the spoke TMRs and subscribe the new profile
managers to the one on the hubtmr. From the spoketmr system, issue the
following commands, as in Example 5-2.
Example 5-2 Issuing wcrtprfmgr from the spoketmr system
wcrtprfmgr spoketmr-region spoketmr-region_MASTER_PM_INV
wcrtprfmgr spoketmr-region spoketmr-region_MASTER_PM_ITM
wcrtprfmgr spoketmr-region spoketmr-region_MASTER_PM_SWD
wcrtprfmgr spoketmr-region spoketmr-region_MASTER_PM_TEC
wcrtprfmgr spoketmr-region spoketmr-region_MASTER_PMS_linux-ix86
wsetpm -d @ProfileManager:spoketmr-region_MASTER_PMS_linux-ix86
3. After exchanging resources between the TMR environments using the
wupdate -r ProfileManager spoketmr-region command, you can subscribe
the spoketmr profile managers to the hubtmr profile managers as in
Example 5-3:
Example 5-3 Subscribing the spoketmr and hubtmr profile managers
wsub @ProfileManager:hubtmr-region_MASTER_PM_INV
@ProfileManager:spoketmr-region_MASTER_PM_INV
wsub @ProfileManager:hubtmr-region_MASTER_PM_ITM
@ProfileManager:spoketmr-region_MASTER_PM_ITM
wsub @ProfileManager:hubtmr-region_MASTER_PM_SWD
@ProfileManager:spoketmr-region_MASTER_PM_SWD
wsub @ProfileManager:hubtmr-region_MASTER_PM_TEC
@ProfileManager:spoketmr-region_MASTER_PM_TEC
5.2 Deploying management endpoints
To test the monitoring profiles and software packages we will create, we need
at least one test system defined as an endpoint. In this section, we
introduce a new system, dev01, which will be the target of the management
operations while we develop and test profiles.
In addition, a new policy region for holding only endpoint definitions is created as
a subregion of the main hubtmr-region.
wcrtpr -s /Regions/hubtmr-region -m Endpoint hubtmr-region_EP
1. To manage the dev01 system, the Tivoli Management Agent, known as the
endpoint, needs to be installed. This action is performed from the hubtmr
system itself using the winstlcf command as shown:
winstlcf -j -g hubtmr:9494 -l 9495 -n dev01-ep -r hubtmr-region_EP -Y
'dev01 root <password>'
The winstlcf command installs a TMA on the dev01 system and assigns the
gateway on the hubtmr system as the default gateway for the new endpoint
labeled dev01-ep. Authentication is gained through ssh (the -j parameter)
using the root user and the valid password for that user. Finally, upon
installation, the new endpoint is defined to the Tivoli environment in the
hubtmr-region_EP policy region.
2. To verify that the endpoint has been defined, use the wep <ep_label> status
command, and verify that the returned message indicates that the endpoint is
alive, as shown in Example 5-4:
Example 5-4 Verifying endpoint availability
hubtmr:/ # wep dev01-ep status
dev01-ep is alive.
3. Finally, the new endpoint, dev01-ep, needs to be subscribed to the
hubtmr-region_MASTER_PMS_linux-ix86 profile manager, which in turn is
subscribed to all the resource-specific profile managers hosting the
monitoring profiles:
wsub @ProfileManager:hubtmr-region_MASTER_PMS_linux-ix86 @Endpoint:dev01-ep
5.3 Creating monitoring profiles
Now that the logical structure has been established, and a test system is
available, we can define the profiles used to monitor the systems. For the Outlet
Systems Management Solution we define profiles for:
򐂰 Inventory scanning
򐂰 System Resource Monitoring
򐂰 Event forwarding
򐂰 WebSphere Monitoring
򐂰 Database Monitoring
5.3.1 Inventory scanning
In the Outlet Systems Management Solution, we operate with separate
InventoryConfig profiles for hardware and software scanning, and will also
create a CUSTOM profile that can be used for implementation-specific purposes.
The steps required to create these profiles are:
1. Create an InventoryConfig profile using the wcrtprf command.
2. Set event and distribution options using the wsetinvglobal command.
3. Set hardware scan options using the wsetinvunixhw command.
4. Set software scan options using the wsetinvunixsw command.
5. Optionally use the wsetinvunixfiles command to add/modify scanning
behavior.
5.3.2 Hardware scanning
Use the following steps to create a profile for HW scanning.
1. To create an InventoryConfig profile for scanning hardware of the endpoints in
the Outlet Systems Management Solution, use the commands shown in
Example 5-5 on page 275.
Example 5-5 Creating inventory hardware scanning profile
export profname=hubtmr-region_MASTER_INV_HW
export prof=@InventoryConfig:$profname
wcrtprf @ProfileManager:hubtmr-region_MASTER_PM_INV InventoryConfig $profname
wsetinvglobal -l TEC -u REPLACE $prof
wsetinvunixhw -t Y -u Y $prof
wsetinvunixsw -p NO $prof
wsetinvpchw -t N -u N $prof
wsetinvpcsw -r NO $prof
2. The above example uses default values for the hardware scan. You might
want to apply specific customization through the wsetinvunixhw command in
order to speed up processing or add additional information of your choice.
3. To test the newly created InventoryConfig profile, use the wdistinv command
and record the distribution ID. In Example 5-6, the distribution ID is 389.
Example 5-6 Distributing an InventoryConfig profile
wdistinv @InventoryConfig:hubtmr-region_MASTER_INV_HW @Endpoint:dev01-ep
Distribution ID: 1393424439.389
Scan ID: 62
4. Now use the wmdist command to determine if the scan has completed, as in
Example 5-7.
Example 5-7 Using wmdist to determine the status of a distribution
hubtmr:/ # wmdist -l -i 389
Name                        Distribution ID Targets Completed Successful Failed
hubtmr-region_MASTER_INV_HW 1393424439.389  1       1(100%)   1(100%)    0( 0%)
5. Finally, to see the results of the scan, use the wqueryinv command as shown
in Example 5-8:
Example 5-8 Viewing scan results through wqueryinv
hubtmr:/ # wqueryinv -d "
" dev01-ep
Query Name: INVENTORY_HWARE
TME_OBJECT_LABEL
TME_OBJECT_ID
COMPUTER_SYS_ID
COMPUTER_SCANTIME
COMPUTER_MODEL
COMPUTER_ALIAS
SYS_SER_NUM
OS_NAME
OS_TYPE
PROCESSOR_MODEL
PROCESSOR_SPEED
PHYSICAL_TOTAL_KB
PHYSICAL_FREE_KB
TOTAL_PAGES
FREE_PAGES
PAGE_SIZE
VIRT_TOTAL_KB
VIRT_FREE_KB
dev01-ep
1393424439.29.522+#TMF_Endpoint::Endpoint#
D54DE1FC-1DD1-11B2-A431-F38473C2E1D8
2005-02-07 15:42:33.000000
VMware, Inc. VMware Virtual Platform
dev01
VMware-56 4d 66 f3 c9 b7 c7 25-5
UnitedLinux 1.0 (i586) VERSION = 1.0 PATCHLEVEL = 3
LINUX
Pentium 4
3000
514804
156492
125
38
4096
1036152
1036152
5.3.3 System software scan
To create the software scan profile, perform the following tasks:
1. Similar to creating InventoryConfig profiles for hardware scanning, create the
profile for scanning for default software using the commands shown in
Example 5-9.
Example 5-9 Creating profiles for software scanning
export profname=hubtmr-region_MASTER_INV_SW
export prof=@InventoryConfig:$profname
wcrtprf @ProfileManager:hubtmr-region_MASTER_PM_INV InventoryConfig $profname
wsetinvglobal -l TEC -u REPLACE $prof
wsetinvunixhw -t N -u N $prof
wsetinvunixsw -p BOTH $prof
2. To see the results of the scan, use the wqueryinv command as shown in
Example 5-10:
Example 5-10 Operating System information from Inventory
hubtmr:/ # wqueryinv -d "
" -q @OS_QUERY dev01-ep
Query Name: OS_QUERY
TME_OBJECT_LABEL
TME_OBJECT_ID
COMPUTER_SYS_ID
OS_NAME
OS_TYPE
OS_MAJOR_VERS
OS_MINOR_VERS
OS_SUB_VERS
OS_INST_DATE
dev01-ep
1393424439.29.522+#TMF_Endpoint::Endpoint#
D54DE1FC-1DD1-11B2-A431-F38473C2E1D8
UnitedLinux 1.0 (i586) VERSION = 1.0 PATCHLEVEL = 3
LINUX
2
4
21-251-default
5.3.4 Custom scan
Creating a custom scan profile is similar to creating the InventoryConfig
profiles for hardware and software scanning.
1. Create the profile for custom scanning using the commands shown in
Example 5-11.
Example 5-11 Creating custom profiles for software scanning
export profname=hubtmr-region_MASTER_INV_CUSTOM
export prof=@InventoryConfig:$profname
wcrtprf @ProfileManager:hubtmr-region_MASTER_PM_INV InventoryConfig $profname
wsetinvglobal -l TEC -u REPLACE $prof
wsetinvunixhw -t N -u N $prof
wsetinvunixsw -s BOTH -p NO -x N $prof
wsetinvunixfiles -e -t EXCLUDE -d +"*/tmp" $prof
2. When scanning for signatures, remember to load the default signatures as
described in “Loading signatures file for software scan” on page 242.
Optionally, add your own using the winvsig -a command.
Example 5-12 on page 278 shows the standard output, slightly reformatted,
from a software signature scan performed on a pristine UnitedLinux system.
Example 5-12 Signature scanning of pristine system
hubtmr:/ # wqueryinv -s dev01-ep
Query Name: INVENTORY_SWARE
TME_OBJECT_LABEL
TME_OBJECT_ID
COMPUTER_SYS_ID
SWARE_DESC
SWARE_VERS
SWARE_NAME
SWARE_SIZE
RECORD_TIME
dev01-ep
1393424439.29.522+#TMF_Endpoint::Endpoint#
D54DE1FC-1DD1-11B2-A431-F38473C2E1D8
Java Support
5.2.0.2
profdb
612
2005-02-07 21:10:31.173377
dev01-ep
1393424439.29.522+#TMF_Endpoint::Endpoint#
D54DE1FC-1DD1-11B2-A431-F38473C2E1D8
VERITAS Perl for VRTSvcs
1.1.1
extralibs.ld
1
2005-02-07 21:10:31.175888
dev01-ep
1393424439.29.522+#TMF_Endpoint::Endpoint#
D54DE1FC-1DD1-11B2-A431-F38473C2E1D8
OrbixOTM
1.0c
README
784
2005-02-07 21:10:31.183458
dev01-ep
1393424439.29.522+#TMF_Endpoint::Endpoint#
D54DE1FC-1DD1-11B2-A431-F38473C2E1D8
HP OpenView OmniBack II Cell Server
A.03.10
README
2962
2005-02-07 21:10:31.184393
dev01-ep
1393424439.29.522+#TMF_Endpoint::Endpoint#
D54DE1FC-1DD1-11B2-A431-F38473C2E1D8
OrbixOTM
1.0c
README
1143
2005-02-07 21:10:31.185708
dev01-ep
1393424439.29.522+#TMF_Endpoint::Endpoint#
D54DE1FC-1DD1-11B2-A431-F38473C2E1D8
Network Computing System 1.5.1
4.3.3.0
VERSION
21
2005-02-07 21:10:31.193596
3. To verify that the expected events have been sent to TEC, go to the tec
system and issue the wtdumprl command. At the bottom of the list, you will
see messages similar to Example 5-13.
Example 5-13 Inventory scan events forwarded to TEC
1~1443~65537~1107833205(Feb 07 21:26:45 2005)
### EVENT ###
Inv_Dist_Start;scan_id=69;ip_name="hubtmr-region_MASTER_INV_CUSTOM";dist_start_
time="Mon Feb 7 20:27:14 2005";num_targets=1;END
### END EVENT ###
PROCESSED
1~1444~65537~1107833321(Feb 07 21:28:41 2005)
### EVENT ###
Inv_Dist_Complete;scan_id=69;ip_name="hubtmr-region_MASTER_INV_CUSTOM";dist_ela
psed_time="0 Days 0 Hours 1 Minutes 55
Seconds";number_success=1;number_failed=0;END
### END EVENT ###
PROCESSED
Notice that the scan_id which is reported by the wdistinv command also
appears in the event information.
5.3.5 Defining OS and HW Monitoring Profiles
Monitoring the use and performance of operating system (OS) and hardware
resources on the endpoints requires setting up one or more IBM Tivoli Monitoring
profiles, also known as Tmw2kProfiles.
The executables that implement the OS and HW monitoring capabilities are
based on Java. Before distributing any Tmw2kProfile, each endpoint needs to
have the proper level of the Java Runtime Environment (JRE) linked to the
endpoint itself. How to do this is discussed in “Linking endpoints to a JRE” on
page 280.
Next we define the specific monitors that collectively implement the monitoring
policies of Outlet Systems Management Solution. This is discussed in “Creating
a profile for monitoring system resources” on page 281. Finally, when the
prerequisites are in place, we can implement the monitoring policies by
distributing the Tmw2kProfile to the endpoints to which it applies, as shown in
“Distributing the monitoring profile” on page 288.
Linking endpoints to a JRE
Various components of IBM Tivoli Monitoring require Java Runtime Environment
(JRE), Version 1.3.x. For the Outlet Systems Management Solution, this JRE is
installed by default on all the systems that will be introduced into the environment
in the /usr/lib/java directory.
Because the correct JRE is already installed, we need to link the JRE to the IBM
Tivoli Monitoring engine at each endpoint using the DMLinkJre task from the IBM
Tivoli Monitoring Tasks task library. We can assume that all the instances have
JRE installed in the same location; therefore, we can use a general command of
the following form (substituting the label of the target endpoint):
wruntask -t DMLinkJre -l "IBM Tivoli Monitoring Tasks" -h <endpoint_label> -a
"/usr/lib/java" -m 120
Your output from this task will be similar to what is shown in Example 5-14:
Example 5-14 Linking Jre using the DMLinkJre task
hubtmr:/ # wruntask -t DMLinkJre -l "IBM Tivoli Monitoring Tasks" -h dev01-ep
-a "/usr/lib/java" -m 120
############################################################################
Task Name:      DMLinkJre
Task Endpoint:  dev01-ep (Endpoint)
Return Code:    0
------Standard Output------
****************************************************************
java version "1.3.1_04"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.1_04-b02)
Java HotSpot(TM) Client VM (build 1.3.1_04-b02, mixed mode)
****************************************************************
JRE is correctly installed.
JRE is a suitable version.
The JRE has been successfully linked.
------Standard Error Output------
Warning: Uncertified JRE build version
############################################################################
As an alternative, especially if the JRE has not been installed, you can use the
wdmdistrib -J command. For information about this command, see the Tivoli
Management Framework Reference Manual.
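As a purely hypothetical illustration (we have not validated this exact syntax; check the reference manual before use), combining -J with a profile distribution might look like:
# Hypothetical: link the JRE at /usr/lib/java while distributing the profile
wdmdistrib -J /usr/lib/java -p hubtmr-region_MASTER_ITM_linux-ix86 dev01-ep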
For additional information, refer to “Linking to an existing Java Runtime
Environment” in IBM Tivoli Monitoring for Web Infrastructure - Installation and
Setup Guide Version 5.1.2, SH19-4569-03.
Creating a profile for monitoring system resources
Now that the prerequisites for the IBM Tivoli Monitoring engine has been
established on out test endpoint, we have to define which resources to monitor,
and how to monitor them. In the Outlet Systems Management Solution we will
use the best practices provided as defaults for the following resource types:
򐂰 CPU
򐂰 FileSystem
򐂰 Memory
򐂰 NetworkInterface
򐂰 PhysicalDisk
򐂰 Process
򐂰 Security
1. To create a Tmw2kProfile to monitor these resources, use the GUI or
commands from the command line, similar to the ones shown in
Example 5-15.
Example 5-15 Creating a Tmw2kProfile from the command line
export profname=hubtmr-region_MASTER_ITM_linux-ix86
export prof=@Tmw2kProfile:$profname
wcrtprf @ProfileManager:hubtmr-region_MASTER_PM_ITM Tmw2kProfile $profname
wdmeditprf -P $profname -add DMXCpu
wdmeditprf -P $profname -add DMXFileSystem -AddPar
FileSystemsToMonitorDiffThres '/dev/sda2 | - | 85 | 1000 | - | 15'
wdmeditprf -P $profname -add DMXMemory
wdmeditprf -P $profname -add DMXNetworkInterface
wdmeditprf -P $profname -add DMXPhysicalDisk
wdmeditprf -P $profname -add DMXProcess
wdmeditprf -P $profname -add DMXSecurity
wdmeditprf -P $profname -Tec broadcast -S EventServer#hubtmr-region -TBSM no
# Set default push parameters for Tmw2kProfile to all levels
TMW2KOID=`wlookup -or Tmw2kProfile $profname`
idlcall $TMW2KOID _set_default_push_params '{ force_all FALSE }'
In Example 5-15, we used defaults for all the added resource models except
DMXFileSystem. For this resource model, we added a special threshold
for the /dev/sda2 file system, allowing this file system to become 85% used
instead of the default 80%. In addition, we monitor that at least 1000 KB and
15% of the file system space remains available.
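As an aside, the FileSystemsToMonitorDiffThres value is a pipe-separated list of fields. Based on the values we used, the fields appear to map as shown below; this mapping is our reading of the example, not a statement of the official parameter syntax:
# '/dev/sda2 | - | 85 | 1000 | - | 15'
#  file system | (default) | % used threshold | KB available | (default) | % available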
The last wdmeditprf line in Example 5-15 is used to tell the monitoring profile
which types of events to generate and where to send them. In this case, we
forward events to the TEC server in the hubtmr-region. Event forwarding to
TBSM is disabled.
Finally, in the last two lines shown in Example 5-15 on page 281 we set the
distribution defaults for the profile to automatically send the profile to all levels
of subscribers and overwrite any local modifications at any subscriber level.
2. To verify the settings in the newly created monitoring profile, you can use the
wdmeditprf -P <profilename> -list and wdmeditprf -P <profilename>
-print <resourcemodel> <property> commands. These were used to create
the listing presented in Example 5-16.
Example 5-16 hubtmr-region_MASTER_ITM_linux-ix86 monitoring profile details
Resource Model          Enable
DMXCpu                  YES
DMXFileSystem           YES
DMXMemory               YES
DMXNetworkInterface     YES
DMXPhysicalDisk         YES
DMXProcess              YES
DMXSecurity             YES

DMXCpu
Cycle Time (sec): 60
Tasks:
    Low_IdleCPUUsage
    High_SysCPUUsage
Data Logging Settings:
    Enable              NO
    Aggregate Data      YES
    Aggregation Period  15
    Minimum             NO
    Maximum             NO
    Average             YES
    Historical Period   720
Parameters:
    None
Schedule:
    Scheduled Always
Schedule Rules:
    None
Thresholds:
    Threshold Name      Threshold Value
    IdleCPUTimeThr      10.000000 %
    SysCPUTimeThr       80.000000 %
Send TEC Settings
    Send TEC Events     YES
    Mode:               BroadCast
    Event Server(s)     EventServer#hubtmr-region
    Send TBSM:          134897824

DMXFileSystem
Cycle Time (sec): 120
Tasks:
    LowKAvail
    LowPercSpcAvail
    FragmentedFileSystem
    LowPercInodesAvail
Data Logging Settings:
    Enable              NO
    Aggregate Data      YES
    Aggregation Period  15
    Minimum             NO
    Maximum             NO
    Average             YES
    Historical Period   720
Parameters:
    IgnoredFileSystems:None
    FileSystemsToMonitorDiffThres:/dev/sda2 | - | 85 | 1000 | - | 15
Schedule:
    Scheduled Always
Schedule Rules:
    None
Thresholds:
    Threshold Name      Threshold Value
    PrcUsedInodes       80.000000 %
    PrcAvailKspace      15.000000 %
    PrcUsedKspace       85.000000 %
    AvailableSpace      7000.000000 %
    PrcAvailInodes      20.000000 %
Send TEC Settings
    Send TEC Events     YES
    Mode:               BroadCast
    Event Server(s)     EventServer#hubtmr-region
    Send TBSM:          134897824

DMXMemory
Cycle Time (sec): 60
Tasks:
    LowStorage
    Thrashing
    LowSwap
Data Logging Settings:
    Enable              NO
    Aggregate Data      YES
    Aggregation Period  15
    Minimum             NO
    Maximum             NO
    Average             YES
    Historical Period   720
Parameters:
    None
Schedule:
    Scheduled Always
Schedule Rules:
    None
Thresholds:
    Threshold Name      Threshold Value
    AvailVirtualStorage 40.000000 %
    SwapSpacePrc        30.000000 %
    PageOutRate         400.000000 %
    PageInRate          400.000000 %
Send TEC Settings
    Send TEC Events     YES
    Mode:               BroadCast
    Event Server(s)     EventServer#hubtmr-region
    Send TBSM:          134897824

DMXNetworkInterface
Cycle Time (sec): 150
Tasks:
    HighOutErrorPacks
    HighInputErrPacks
    InterfaceNotEnabled
    HighPacktsCollision
    InterfaceNotOperat
    IntStatUnknown
Data Logging Settings:
    Enable              NO
    Aggregate Data      YES
    Aggregation Period  15
    Minimum             NO
    Maximum             NO
    Average             YES
    Historical Period   720
Parameters:
    communityName:public
Schedule:
    Scheduled Always
Schedule Rules:
    None
Thresholds:
    Threshold Name          Threshold Value
    PercPacketCollisionThr  10.000000 %
    PercOutPacketErrThr     10.000000 %
    PercInPacketErrThr      20.000000 %
Send TEC Settings
    Send TEC Events     YES
    Mode:               BroadCast
    Event Server(s)     EventServer#hubtmr-region
    Send TBSM:          134897824

DMXPhysicalDisk
Cycle Time (sec): 120
Tasks:
    HighPhysicalDiskReadBytes
    HighPhysicalDiskWriteBytes
    HighPhysicalPercentDiskTime
    HighPhysicalDiskXferRate
Data Logging Settings:
    Enable              NO
    Aggregate Data      YES
    Aggregation Period  15
    Minimum             NO
    Maximum             NO
    Average             YES
    Historical Period   720
Parameters:
    None
Schedule:
    Scheduled Always
Schedule Rules:
    None
Thresholds:
    Threshold Name      Threshold Value
    HighDiskBytes       1572864.000000 %
    HighPercentUsage    90.000000 %
Send TEC Settings
    Send TEC Events     YES
    Mode:               BroadCast
    Event Server(s)     EventServer#hubtmr-region
    Send TBSM:          134897824

DMXProcess
Cycle Time (sec): 60
Tasks:
    ProcessKilledOrNotExisting
    ProcessHighCPU
    ProcessStopped
    HighZombieProcesses
Data Logging Settings:
    Enable              NO
    Aggregate Data      YES
    Aggregation Period  15
    Minimum             NO
    Maximum             NO
    Average             YES
    Historical Period   720
Parameters:
    processes:basename=lcfd
              basename=syslogd
Schedule:
    Scheduled Always
Schedule Rules:
    None
Thresholds:
    Threshold Name      Threshold Value
    HighZombieProcess   20.000000 %
    HighCPUUsed         60.000000 %
Send TEC Settings
    Send TEC Events     YES
    Mode:               BroadCast
    Event Server(s)     EventServer#hubtmr-region
    Send TBSM:          134897824

DMXSecurity
Cycle Time (sec): 120
Tasks:
    IllegalOwner
    DuplicatedAccount
    WrongMode
    HighLoggingNumber
    SuspectSuperGroup
    NotRegularRootAccount
    FileNotExisting
    PasswdNull
    SuspectSuperUser
    IllegalGroup
Data Logging Settings:
    Enable              NO
    Aggregate Data      YES
    Aggregation Period  15
    Minimum             NO
    Maximum             NO
    Average             YES
    Historical Period   720
Parameters:
    AlternativeOwners:root
    Superusers:root
    AlternativeGroups:sys
                      security
    Users: root | 10
    FilesList:/etc/passwd | -rw-r--r-- | root | root
              /etc/group | -rw-r--r-- | root | root
    Supergroups:root
Schedule:
    Scheduled Always
Schedule Rules:
    None
Thresholds:
    None
Send TEC Settings
    Send TEC Events     YES
    Mode:               BroadCast
    Event Server(s)     EventServer#hubtmr-region
    Send TBSM:          134897824
Distributing the monitoring profile
To use the hubtmr-region_MASTER_ITM_linux-ix86 profile, we need to
subscribe the dev01-ep endpoint, or rather the profile manager
hubtmr-region_MASTER_PMS_linux-ix86 to which the endpoint subscribes, to
the hubtmr-region_MASTER_PM_ITM profile manager.
1. Use the wsub command as shown:
wsub @ProfileManager:hubtmr-region_MASTER_PM_ITM
@ProfileManager:hubtmr-region_MASTER_PMS_linux-ix86
2. Now we can distribute the profile:
wdmdistrib -p hubtmr-region_MASTER_ITM_linux-ix86 dev01-ep
3. To verify that the monitors are executing, choose one of two methods:
a. Use the wdmlseng command to generate output similar to what is shown in
Example 5-17.
Example 5-17 Monitoring engine listing
hubtmr:/ # wdmlseng -e dev01-ep -verbose
Forwarding the request to the endpoint:
dev01-ep
1393424439.29.522+#TMF_Endpoint::Endpoint#
The following profiles are running:
hubtmr-region_MASTER_ITM_linux-ix86#hubtmr-region
DMXCpu: Running
High_SysCPUUsage 100 %
Low_IdleCPUUsage 100 %
DMXFileSystem: Running
LowPercInodesAvail 100 %
LowPercSpcAvail 100 %
FragmentedFileSystem 100 %
LowKAvail 100 %
DMXMemory: Running
LowStorage 100 %
Thrashing 100 %
LowSwap 100 %
DMXNetworkInterface: Running
HighPacktsCollision 100 %
HighOutErrorPacks 100 %
HighInputErrPacks 100 %
InterfaceNotEnabled 100 %
InterfaceNotOperat 100 %
IntStatUnknown 100 %
DMXSecurity: Running
NotRegularRootAccount 100 %
PasswdNull 100 %
IllegalGroup 100 %
DuplicatedAccount 100 %
SuspectSuperUser 100 %
IllegalOwner 100 %
SuspectSuperGroup 100 %
HighLoggingNumber 100 %
FileNotExisting 100 %
WrongMode 100 %
DMXPhysicalDisk: Running
HighPhysicalPercentDiskTime 100 %
HighPhysicalDiskXferRate 100 %
HighPhysicalDiskWriteBytes 100 %
HighPhysicalDiskReadBytes 100 %
DMXProcess: Running
HighZombieProcesses 100 %
ProcessHighCPU 100 %
ProcessKilledOrNotExisting 100 %
ProcessStopped 100 %
b. Another way of verifying that the expected resource models are running is
to open the Web Health Console (http://console:9080/dmwhc). Look at
the details page for the dev01-ep endpoint, as in Figure 5-1.
Figure 5-1 Resource monitor status for endpoint dev01-ep
5.3.6 Defining TEC profiles
In order to deploy the TEC Logfile Adapter used by IBM Tivoli Monitoring for Web
Infrastructure, we need to define an Adapter Configuration Profile (ACP) to be
distributed to our endpoints.
Because we are not applying any customization to monitor specific logfiles at this
point, we will use the defaults as defined in the tecad_logfile_linux-ix86
configuration file delivered with the product. For more information about the
Adapter Configuration Profiles, refer to the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Enterprise Console → Adapters Guide.
Defining the ACP
For the Outlet Systems Management Solution, the ACP profile resides in the
hubtmr-region_MASTER_PM_TEC profile manager, and we named it
hubtmr-region_MASTER_ACP_linux-ix86. While you can use the GUI to create
the profile, we used commands to achieve the same goal, as shown in
Example 5-18.
Example 5-18 Defining the ACP profile
wcrtprf @ProfileManager:hubtmr-region_MASTER_PM_TEC ACP
hubtmr-region_MASTER_ACP_linux-ix86
# Add a logfile record
waddac tecad_logfile_linux-ix86 hubtmr-region_MASTER_ACP_linux-ix86
# Set default push parameters for ACP Profile to all levels
ACPOID=`wlookup -or ACP hubtmr-region_MASTER_ACP_linux-ix86`
idlcall $ACPOID _set_default_push_params '{ force_all FALSE }'
Distributing the ACP
To apply a monitoring policy to an endpoint, we have to subscribe the endpoint,
or the profile manager to which it subscribes, to the profile manager hosting the
profile.
1. Because the test system (dev01-ep) has already been subscribed to the
hubtmr-region_MASTER_PMS_linux-ix86 profile manager, we settle for
subscribing this profile manager to the hubtmr-region_MASTER_PM_TEC
profile manager hosting our ACP profile:
wsub @ProfileManager:hubtmr-region_MASTER_PM_TEC
@ProfileManager:hubtmr-region_MASTER_PMS_linux-ix86
2. Now we can distribute the ACP profile:
wdistrib -m @ACP:hubtmr-region_MASTER_ACP_linux-ix86 @Endpoint:dev01-ep
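To get a quick indication that the logfile adapter was actually started on the endpoint, a generic process check (an ordinary shell command, not a Tivoli command) can be run on dev01:
# Look for the running TEC logfile adapter process
ps -ef | grep tecad_logfile | grep -v grep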
5.3.7 Building the software package for deploying DB2 Server
At this point, we have established the dev01 test system, on which the basic
operating system and hardware resources are being monitored, and whose
availability is managed through the heartbeat function. It is now time to apply
components that will enable this system to perform some useful work, in terms
of hosting databases and applications.
The outlet servers in the Outlet Solution will host a WebSphere Application
Server based application that requires access to a local DB2 database, so we
have to develop a software package to install and customize the DB2 Server and
related components from a central site. Because all the prerequisite components
for installing DB2 are provided by the operating system, the only package to be
developed is for IBM DB2 UDB Server v8.2.
In this section, we have assumed that you have downloaded all the software
images in the proper structure on the srchost system. Refer to Appendix B,
“Obtaining the installation images” on page 645 for details on how and where to
get your copy.
In addition, all the software definitions, response files, scripts and so forth, are
available online. Refer to Appendix C, “Additional material” on page 655 for
details on how to get your machine-readable copies. For reference, we have also
included the files that make up the software package for DB2 in A.7.7, “DB2
Server v8.2” on page 476 where you can find more details and descriptions of
the DB2 Server software package.
Before defining software packages, we need to make sure that the endpoints on
which we intend to install software components are subscribed to the
hubtmr-region_MASTER_PM_SWD profile manager. As previously shown, the
wsub command does the trick, and since we are still in testing mode, the only
profile manager that needs to be subscribed is
hubtmr-region_MASTER_PMS_linux-ix86.
wsub @ProfileManager:hubtmr-region_MASTER_PM_SWD
@ProfileManager:hubtmr-region_MASTER_PMS_linux-ix86
If you plan to use the software package definitions provided with this book, you
should perform the following setup steps:
1. Define your base directory for software distributions and scripts. We used the
following:
base directory             /mnt
code directory             /mnt/code
response file directory    /mnt/rsp
log directory              /mnt/logs
tools and scripts          /mnt/tools
software packages          /mnt/spd
2. Copy the do_it and java141_it scripts from the online material (see
Appendix C., “Additional material” on page 655) to your
<img_dir>/tools/common/UNIX directory. These two scripts are used by most
software packages.
The do_it script provides consistent logging of scripts and executables
launched from the software package, and java141_it is used to invoke
installation procedures that require a Java 2 environment. Note that we
have assumed that the package IBMJava2-141.rpm has been installed on all
target systems during basic operating system installation.
Listings of do_it and java141_it are available in the section “do_it” on
page 382 and “java141_it” on page 382 respectively.
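To give a feel for the approach, here is a minimal sketch of a do_it-style wrapper. This is our illustration of the concept, not the actual listing from page 382, and the /mnt/logs location simply follows the directory layout defined above:
#!/bin/sh
# Minimal do_it-style wrapper (illustrative sketch): run the given command
# and append all of its output to a per-command log file.
LOGDIR=/mnt/logs
mkdir -p $LOGDIR
LOGFILE=$LOGDIR/`basename $1`.log
echo "`date`: starting $*" >> $LOGFILE
"$@" >> $LOGFILE 2>&1
RC=$?
echo "`date`: finished $* (rc=$RC)" >> $LOGFILE
exit $RC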
IBM DB2 UDB Server v8.2
The first software component we install is IBM DB2 Server v8.2. To set up the
software package for installation and removal of the DB2 Server for Linux,
perform the following steps:
1. Download the installation archive C58S8ML-db2udbes82.tar to the
<img_dir>/code/db2/udb-ee/820/server/Linux-IX86 directory on the
srchost system.
2. Unpack the installation archive using
<img_dir>/code/db2/udb-ee/820/server/Linux-IX86 as the parent directory.
3. Copy the db2_install and db2_uninstall scripts from the
<img_dir>/code/db2/udb-ee/820/server/Linux-IX86/db2/linux directory to
the <img_dir>/tools/db2/udb-ee/820/server/Linux-IX86 directory.
4. Copy the custom scripts for installation and uninstallation:
– db2_server_820_addusers.sh
– db2_server_820_remusers.sh
Paste them to the <img_dir>/tools/db2/udb-ee/820/server/Linux-IX86
directory.
5. The scripts have been designed to create the required group and user to
enable monitoring. Review and modify the db2_server_820_addusers.sh and
db2_server_820_remusers.sh scripts to suit your requirements.
6. Copy or create the response file db2_server_820.rsp to your image server in
the <img_dir>/rsp/db2/udb-ee/820/server/Linux-IX86 directory.
Note: The response file used will not create a default instance in the newly
installed DB2 server.
Review the response file and adjust it to the particulars of your environment.
7. Copy or create the software package description file db2_server.8.2.0.spd to
your image server in the <img_dir>/spd/db2/udb-ee/820/server/Linux-IX86
directory.
8. Build and test the software package db2_server_820:
wimpspo -c hubtmr-region_MASTER_PM_SWD -f
@ManagedNode:<srchost>:<img_dir>/spd/db2/udb-ee/820/server/Linux-IX86/db2_s
erver.8.2.0.spd db2_server^8.2.0
9. Convert the software package to a software package block:
wconvspo -t build -o -p <img_dir>/code/tivoli/SPB/db2_server.8.2.0.spb
db2_server^8.2.0
10.Distribute the software package:
winstsp -f @SoftwarePackage:db2_server^8.2.0 @Endpoint:dev01-ep
To check the status of the distribution, you can use wmdist -e
<distributionId> to see how the package is transferred from the srchost to
the target.
11.Check the result of the distribution by looking in the logfile in
<img_dir>/logs/db2_server_820.log.
For listings of all the files used, except the installation image, refer to A.7.7, “DB2
Server v8.2” on page 476.
5.3.8 Creating a DB2 instance for the Outlet Solution
To facilitate a permanent data store for the Outlet Solution in the newly installed
DB2 system, we need to create an instance, a database, and the tables used by
the application.
Ideally, for the Outlet Solution TimeCard application, the database instance,
databases, and tables needed to support the application should be created as
part of the application deployment itself. However, the script used within the
software package for database manipulation can be invoked manually.
For testing purposes, use the script shown in “db2_store_cfg.sh” on page 615
along with the related file “Table.ddl” on page 618 to create a database instance
named db2inst1 on the dev01 system. This will allow us to develop and test
monitoring profiles for DB2.
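For orientation, the core of what such a script does boils down to a handful of DB2 commands. The following is a rough sketch only; the fenced user, instance owner, and installation path (/opt/IBM/db2/V8.1 for a DB2 UDB v8.2 installation) are assumptions, and db2_store_cfg.sh may differ in detail:
# Create the db2inst1 instance (run as root on dev01)
/opt/IBM/db2/V8.1/instance/db2icrt -u db2fenc1 db2inst1
# Start the instance and create the store database as the instance owner
su - db2inst1 -c "db2start"
su - db2inst1 -c "db2 create database store"
# Create the application tables from the DDL file
su - db2inst1 -c "db2 connect to store && db2 -tf Table.ddl"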
5.3.9 Defining DB2 monitoring objects
This section outlines the tasks involved in defining the proxy-endpoint objects for
DB2 Instances and Databases.
Creating Tivoli DB2 objects
IBM Tivoli Monitoring for DB2 operates with two distinct types of objects, a
DB2InstanceManager object and a DB2DatabaseManager object. These are
both proxy-endpoints used to set the correct scope of operation for the
management operations.
Both object types will be created ultimately on the gateway supporting the
endpoint at which the database resides.
Important: Remember that when subscribing endpoints to profile managers
containing profiles that include DB2 resource models, the subscribers have
to be of the type DB2InstanceManager or DB2DatabaseManager.
Both DB2InstanceManager and DB2DatabaseManager objects can be created
manually or through a discovery process. If you use the discovery method, the
databases and instances have to exist first. This is not the case if the manual
command-line method is used.
In the following sections, we present a few examples of manually defining DB2
objects for the store database in the db2inst1 instance on the dev01 system.
However, before we reach that point, we need to discuss the discovery process.
The DB2 Discovery object
You can use the DB2Discovery object to automatically create multiple
DB2InstanceManager objects. This enables administrators to create DB2
instance objects for all DB2 servers simultaneously.
Discovery searches specified endpoints for DB2 instances that are not already
managed. When an instance is encountered, the DB2 discovery process
automatically creates a DB2InstanceManager object. The DB2 discovery
process runs from within the policy region where the instance objects are added.
A default DB2Discovery object is automatically stored in the
DB2Manager-DefaultPolicyRegion upon installation of the IBM Tivoli Monitoring
for Databases: DB2 product. Therefore, a DB2Discovery object must be
created in the specific policy region where servers are to be discovered.
Note: To delete a DB2Discovery object, you must unsubscribe it from all
profile managers.
Creating DB2 instance objects manually
A DB2InstanceManager object is an instantiation of a specialized Tivoli object
representing a partitioned or non-partitioned database server. This section
describes how to create an instance of a DB2 database server object (DB2
Server object) on the Tivoli Desktop so that administrators can monitor server
activity and simplify redundant tasks.
Note: If you delete and then recreate a DB2 instance, you must delete and
recreate the corresponding DB2InstanceManager from the IBM Tivoli
Monitoring for Databases: DB2 product. If you delete a DB2InstanceManager
object, you must also unsubscribe the object from all profile managers to
which it is subscribed. In addition, when you delete a DB2InstanceManager
object, you also delete any child objects associated with the object.
DB2InstanceManager objects can be created manually, as shown in A.4.1,
“create_db2_instance_objects.sh” on page 371, or using the DB2Discovery
object, which was created in the DB2 Servers policy region on the Tivoli Desktop
during installation. Refer to IBM Tivoli Monitoring for Databases: DB2
User’s Guide, Version 5.1.0, SC23-4726-00, for further details on using the
DB2Discovery object.
1. For creation of the DB2InstanceManager object representing the db2inst1
instance created on the dev01 system in the DB2 Database Servers policy
region, we used the following command:
wcdb2inst -l db2inst1@dev01-ep -e dev01-ep -i db2inst1 -p "DB2 Database
Servers#hubtmr-region"
2. Upon successful creation, we subscribe the newly created
DB2InstanceManager to the profile manager that will be the target of task
executions and monitoring profile distributions:
wsub @ProfileManager:DB2Manager-Instances#hubtmr-region
@DB2InstanceManager:db2inst1@dev01-ep
After the DB2 instance objects have been created, the DB2InstanceManager
icons should be displayed in the DB2 Database Servers policy region.
Verifying DB2InstanceManager objects
To verify the creation of the DB2InstanceManager object using the command
line, use the well-known wlookup command as in Example 5-19.
Example 5-19 DB2 Instance object creation verification with wlookup
hubtmr:~ # wlookup -ar DB2InstanceManager
db2inv@rdbms-ep     1393424439.1.1683#DB2InstanceManager#
db2mdist@rdbms-ep   1393424439.1.1685#DB2InstanceManager#
db2inst1@dev01-ep   1393424439.1.1898#DB2InstanceManager#
db2swd@rdbms-ep     1393424439.1.1684#DB2InstanceManager#
db2tec@rdbms-ep     1393424439.1.1682#DB2InstanceManager#
db2tmtp@rdbms-ep    1393424439.1.1686#DB2InstanceManager#
Creating DB2 database objects manually
This section describes how to create a DB2DatabaseManager object to allow
administrators to manage and run tasks on the DB2 databases from the Tivoli
Desktop.
To access and manage a DB2 database from the Tivoli Desktop, you must create
a DB2DatabaseManager object.
The DB2DatabaseManager object must be installed on a system for which a
DB2InstanceManager object has already been defined. When you create the
DB2DatabaseManager object, it becomes a database endpoint in the policy
region. Subscribe the endpoint to profiles in profile managers in the same way as
any other managed resource.
DB2DatabaseManager objects can be created manually, as exemplified in A.4.2,
“create_db2_database_objects.sh” on page 372, or with the GUI.
1. For the store database in the db2inst1 instance on dev01-ep, we used the
following command to create the database object in the Tivoli environment:
wcdb2dbs -l "store@db2inst1@dev01-ep" -i "db2inst1@dev01-ep" -d STORE -p
"DB2 Database Servers#hubtmr-region"
2. We subscribed the database to the profile manager DB2Manager-Databases:
wsub @ProfileManager:DB2Manager-Databases#hubtmr-region
@DB2DatabaseManager:store@db2inst1@dev01-ep
Verifying the DB2DatabaseManager object
After the DB2DatabaseManager object has been created, the
DB2DatabaseManager icons should be displayed in the DB2 Database Servers
Policy Region, and as subscribers in the DB2Manager-Databases profile
manager. You can also verify the creation using the wlookup -ar
DB2DatabaseManager command.
5.3.10 Creating the db2ecc user ID
As already mentioned, the db2ecc user ID must be created on each endpoint in
the Tivoli management region where DB2 resources will be managed and
monitored.
The db2ecc user ID's primary operating system group must be the group defined
as the SYSADM group for each instance to be managed.
Tip: The current SYSADM group setting in the database manager configuration
can be checked from the command line by running the following command:
db2 get dbm config
You will find a line in the output similar to this:
SYSADM group name                 (SYSADM_GROUP) = DB2GRP1
The db2ecc user is created automatically during installation of the
db2_server^8.2.0 software package.
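For reference, the user-creation step performed by that package can be approximated by the following sketch. The group name db2grp1 is an assumption; use the SYSADM_GROUP value reported by db2 get dbm config on the managed instance:

#!/bin/sh
# Approximate sketch of the db2ecc user creation. The SYSADM group name is
# an assumption; take it from 'db2 get dbm config' on the managed instance.
SYSADM_GROUP=db2grp1

# Create the group if it does not exist yet
getent group "$SYSADM_GROUP" >/dev/null || groupadd "$SYSADM_GROUP"
# Create db2ecc with the SYSADM group as its primary group
id db2ecc >/dev/null 2>&1 || useradd -g "$SYSADM_GROUP" -m db2ecc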
5.3.11 Creating profiles for Tivoli Monitoring for Databases
As part of the IBM Tivoli Monitoring for Databases:DB2 installation, a top-level
policy region called Monitoring for DB2 is created and linked to the root desktop.
It contains a number of resources that can be used to administer DB2 resources,
including another policy region, DB2 Database Servers, to provide a placeholder
for your database objects.
Once the DB2 objects DB2InstanceManager and DB2DatabaseManager have
been created, you must subscribe them to the proper profile managers to be able
to distribute profiles containing specific DB2 resource models to the two types of
DB2 objects.
You might want to create new profile managers to differentiate the
DB2InstanceManager objects and the DB2DatabaseManager objects within the
DB2 Database Servers policy region. Inside these, you can create one or more
Tmw2kProfiles, add the desired resource models, and distribute them to the DB2
objects.
In the Outlet Systems Management Solution, we chose to use the profile
managers DB2Manager-Instances and DB2Manager-Databases provided
during installation.
Monitoring profiles for DB2 monitoring are normal Tmw2kProfiles, and they are
manipulated accordingly. However, because different logical endpoints are the
targets of the distributions, you should avoid mixing resource models with
different scopes in the same monitoring profile.
Note: Remember that the subscribers for DB2 profile managers must be DB2
objects and not endpoints. For example, if you create a Tmw2kProfile
containing resource models for database activities, the subscribers must be
DB2DatabaseManager objects.
Instance monitoring profiles
In the Outlet Systems Management Solution we chose, as a starting point, to use
the standard monitoring profile for DB2 instances. The default profile provided
with the ITM for DB2 installation, named DB2_instances, resides in the
DB2Manager-Instances profile manager.
1. Example 5-20 on page 299 shows the default content of the DB2_instances
profile. The information was generated using a combination of wdmeditprf
commands.
Example 5-20 Default DB2_instances monitoring profile
Resource Model        Enable
DB2InstanceStatus     YES

DB2InstanceStatus
  Parameters:
    LoggedMetrics_Conn:   numPctConnectionsExecuting
    LoggedMetrics_Status: strCounterResetTimestamp, numdb2Status
  Cycle Time (sec): 1800
  Schedule:        Scheduled Always
  Schedule Rules:  None
  Thresholds:
    Threshold Name                    Threshold    Value %
    High_PctConnectionsExecuting      75.000000    %
  Indications:
    Indication                        Occurrences Holes SendTEC SendTBSM Severity Clearing
    DB2_Down_Status                   1           0     YES     YES      CRITICAL YES
    DB2_High_PctConnectionsExecuting  3           2     NO      NO       WARNING  NO
  Tasks:
    DB2_Down_Status
    DB2_High_PctConnectionsExecuting
  Data Logging Settings:
    Enable:             NO
    Aggregate Data:     NO
    Aggregation Period: 15
    Minimum:            NO
    Maximum:            NO
    Average:            YES
    Historical Period:  720
  Send TEC Settings:
    Send TEC Events: YES
    Mode:            BroadCast
    Event Server(s): EventServer#hubtmr-region
2. To distribute the DB2_instances profile to the db2inst1@dev01-ep
DB2InstanceManager endpoint, we used the following command:
wdmdistrib -p DB2_instances#hubtmr-region -M over_all_no_merge
@DB2InstanceManager:db2inst1@dev01-ep
Example 5-21 shows how the wdmlseng command was used to verify the
status of the monitoring.
Example 5-21 Active DB2 Instance monitors
hubtmr:~ # wdmlseng -e dev01-ep
Forwarding the request to the endpoint:
dev01-ep 1393424439.29.522+#TMF_Endpoint::Endpoint#
The following profiles are running:
hubtmr-region_MASTER_ITM_linux-ix86#hubtmr-region
DMXCpu: Running
DMXFileSystem: Running
DMXMemory: Running
DMXNetworkInterface: Running
DMXSecurity: Running
DMXPhysicalDisk: Running
DMXProcess: Running
db2inst1@dev01-ep.DB2_instances.dup@DB2Manager-Instance#hubtmr-region
DB2InstanceStatus: Running
Database monitoring profiles
Similar to the instance monitoring, specific monitoring policies can be set up in
customized profiles for monitoring databases and database activity.
In the Outlet Systems Management Solution we used the default
DB2_databases profile found in the DB2Manager-Databases profile manager.
The content of the default profile is shown in Example 5-22.
Example 5-22 Default DB2_databases monitoring profile
Resource Model        Enable
DB2DatabaseStatus     YES
DB2TableActivity      YES

DB2DatabaseStatus
  Parameters:
    LoggedMetrics_DMS:  numPctSpaceUsedDMS
    SMS_TbspNames_List: *
    LoggedMetrics_App:  numPctConnectionsUsed
    TbspNames_List:     *
    DMS_TbspNames_List: *
    LoggedMetrics_Status, LoggedMetrics_Config, LoggedMetrics_Tbsp,
    LoggedMetrics_Gateway:
                        strLastBackupTimestamp, strRestorePending,
                        numTableSpaceStatus, numCurrentConnections,
                        numMostRecentConnectResponseTime, numConnectionErrors,
                        numConnWaitingForHostCurrent
  Cycle Time (sec): 600
  Schedule:        Scheduled Always
  Schedule Rules:  None
  Thresholds:
    Threshold Name                   Threshold          Value %
    Old_LastBackupTimestamp          2.000000           %
    High_PctConnectionsUsed          80.000000          %
    High_ConnectionErrors            100.000000         %
    High_CurrentConnections          50.000000          %
    High_SpaceUsedDMSTablespace      85.000000          %
    High_SpaceUsedSMSTablespace      50000000.000000    %
    High_MostRecentConnectResponse   5.000000           %
    High_ConnWaitingForHost          30.000000          %
  Indications:
    Indication                          Occurrences Holes SendTEC SendTBSM Severity Clearing
    DB2_High_CurrentConnections         3           2     NO      NO       WARNING  NO
    DB2_High_SpaceUsedDMSTablespace     1           0     NO      NO       CRITICAL YES
    DB2_High_SpaceUsedSMSTablespace     1           0     NO      NO       CRITICAL YES
    DB2_True_RestorePending             1           0     YES     YES      CRITICAL YES
    DB2_High_ConnectionErrors           1           0     YES     YES      WARNING  YES
    DB2_High_MostRecentConnectResponse  3           2     YES     YES      WARNING  NO
    DB2_High_ConnWaitingForHost         3           2     YES     YES      WARNING  NO
    DB2_Old_LastBackupTimestamp         2           0     NO      NO       CRITICAL NO
    DB2_High_PctConnectionsUsed         3           1     YES     YES      WARNING  YES
    DB2_False_TablespaceNormalStatus    2           1     YES     YES      WARNING  NO
  Tasks:
    DB2_High_CurrentConnections
    DB2_High_SpaceUsedDMSTablespace
    DB2_High_SpaceUsedSMSTablespace
    DB2_True_RestorePending
    DB2_High_ConnectionErrors
    DB2_High_MostRecentConnectResponse
    DB2_High_ConnWaitingForHost
    DB2_Old_LastBackupTimestamp
    DB2_High_PctConnectionsUsed
    DB2_False_TablespaceNormalStatus
  Data Logging Settings:
    Enable:             NO
    Aggregate Data:     NO
    Aggregation Period: 15
    Minimum:            NO
    Maximum:            NO
    Average:            YES
    Historical Period:  720
  Send TEC Settings:
    Send TEC Events: YES
    Mode:            BroadCast
    Event Server(s): EventServer#hubtmr-region

DB2TableActivity
  Parameters:
    TableNames_List:  NULL
    LoggedMetrics:    numRowsReadRate, numRowsWrittenRate
    SchemaNames_List: *
  Cycle Time (sec): 1800
  Schedule:        Scheduled Always
  Schedule Rules:  None
  Thresholds:   None
  Indications:  None
  Tasks:        None
  Data Logging Settings:
    Enable:             NO
    Aggregate Data:     NO
    Aggregation Period: 15
    Minimum:            NO
    Maximum:            NO
    Average:            YES
    Historical Period:  720
  Send TEC Settings:
    Send TEC Events: YES
    Mode:            BroadCast
    Event Server(s): EventServer#hubtmr-region
To distribute the DB2_databases profile to the store@db2inst1@dev01-ep
DB2DatabaseManager endpoint to start monitoring database availability and
performance, we used the following command:
wdmdistrib -p DB2_databases#hubtmr-region -M over_all_no_merge
@DB2DatabaseManager:store@db2inst1@dev01-ep
Example 5-23 shows how the wdmlseng command was used to verify the status
of the monitoring.
Example 5-23 Output from wdmlseng on database server endpoint
hubtmr:~ # wdmlseng -e dev01-ep
Forwarding the request to the endpoint:
dev01-ep 1393424439.29.522+#TMF_Endpoint::Endpoint#
The following profiles are running:
hubtmr-region_MASTER_ITM_linux-ix86#hubtmr-region
DMXCpu: Running
DMXFileSystem: Running
DMXMemory: Running
DMXNetworkInterface: Running
DMXSecurity: Running
DMXPhysicalDisk: Running
DMXProcess: Error
store@db2inst1@dev01-ep.DB2_databases.dup@DB2Manager-Database#hubtmr-region
DB2TableActivity: Running
DB2DatabaseStatus: Running
db2inst1@dev01-ep.DB2_instances.dup@DB2Manager-Instance#hubtmr-region
DB2InstanceStatus: Running
5.3.12 Deploying WebSphere Application Server
In the Outlet Systems Management Solution the systems in the outlets will host a
WebSphere application. For convenience, we will develop software packages to
install and customize WebSphere Application Server and related components
from a central site. The installation of WebSphere Application Server requires
the correct level of WebSphere Message Queuing, with the correct fixpacks
installed, and an existing IBM HTTP Server v2, all of which we will install as
separate packages. The software packages needed for deploying WebSphere
Application Server are:
- IBM HTTP Server v2.0.47
- WebSphere Message Queuing Server V5.3
- WebSphere Message Queuing Server V5.3 Fixpack 8
- WebSphere Application Server v5.1
- WebSphere Application Server v5.1 Fixpack 1
- WebSphere Caching Proxy v5.1
- WebSphere Caching Proxy v5.1 Fixpack 1
In the following sections, it is assumed that all the software images have been
downloaded in the proper structure on the srchost system.
Refer to Appendix B, “Obtaining the installation images” on page 645 for details
on how and where to get your copy. In addition, all the software package
definitions, response files, scripts and so on are available online. Refer to
Appendix C, “Additional material” on page 655 for details on how to get your
machine-readable copies. For reference, we have also included all of these files
in Appendix A, “Configuration files and scripts” on page 349 where you can find
more details and descriptions of each software package.
Before defining software packages, we need to make sure that the endpoints on
which we want to install software components are subscribed to the
hubtmr-region_MASTER_PM_SWD profile manager. As previously shown, the
wsub command does the trick. Because we are still in testing mode, the only
profile manager that needs to be subscribed is
hubtmr-region_MASTER_PMS_linux-ix86.
wsub @ProfileManager:hubtmr-region_MASTER_PM_SWD
@ProfileManager:hubtmr-region_MASTER_PMS_linux-ix86
If you plan to use the software package definitions provided with this book, you
should perform the following setup steps:
1. Define your base directory for software distributions and scripts. We used
   the following directories (a sketch that creates this tree follows these
   steps):
   base directory:           /mnt
   code directory:           /mnt/code
   response file directory:  /mnt/rsp
   log directory:            /mnt/logs
   tools and scripts:        /mnt/tools
   software packages:        /mnt/spd
2. Copy the do_it and java141_it scripts from the online material (see
   Appendix C, “Additional material” on page 655) to your
   <img_dir>/tools/common/UNIX directory. These two scripts are used by most
   software packages. The do_it script provides consistent logging of scripts
   and executables launched from the software package. The java141_it script is
   used to invoke installation procedures, such as the one for IBM HTTP Server,
   that require a Java2 environment. Note that we have assumed that the package
   IBMJava2-141.rpm has been installed on all target systems.
   Listings of do_it and java141_it are available at “do_it” on page 382 and
   “java141_it” on page 382, respectively.
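As a convenience, the directory tree from step 1 can be created in one pass. The following is a minimal sketch, assuming the /mnt base directory we chose:

#!/bin/sh
# Minimal sketch: create the distribution directory tree used in this
# chapter under the /mnt base directory.
for d in code rsp logs tools spd; do
    mkdir -p /mnt/$d
done
# Destination for the do_it and java141_it helper scripts
mkdir -p /mnt/tools/common/UNIX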
IBM HTTP Server v2.0.47
The first software component we install is IBM HTTP Server v2.0.47. Because
we will not use the IBM HTTP Server v1.3 provided with WebSphere, the correct
level of the HTTP Server must be installed in order for the WebSphere
Application Server installation program to successfully generate the HTTP
plug-in.
To set up the software package for installation and removal of IBM HTTP Server
v2.0.47 for Linux, we performed the following steps:
1. Copy or create the software package description file ihs_server.2.0.47.spd
to your image server in the <img_dir>/spd/ibm/ihs/2047/all/Linux-IX86
directory.
2. Download the installation archive HTTPServer.linux.2047.tar to the
<img_dir>/code/ibm/ihs/2047/all/Linux-IX86 directory on the srchost
system.
3. Copy the custom scripts for installation and uninstallation:
– ihs_2047_linuxi386_install.sh
– ihs_2047_linuxi386_uninstall.sh
Paste them to the <img_dir>/tools/ibm/ihs/2047/all/Linux-IX86 directory.
4. Copy the ihs_server_2047.rsp response file to your image server in the
<img_dir>/rsp/ibm/ihs/2047/all/Linux-IX86 directory.
5. Build and test the software package ihs_server.2.0.47.
wimpspo -c hubtmr-region_MASTER_PM_SWD -f
@ManagedNode:<srchost>:<img_dir>/spd/ibm/ihs/2047/all/Linux-IX86/ihs_server.2.0.47.spd ihs_server^2.0.47
6. Convert the software package to a software package block:
wconvspo -t build -o -p <img_dir>/code/tivoli/SPB/ihs_server.2.0.47.spb
ihs_server^2.0.47
7. Distribute the software package:
winstsp -f @SoftwarePackage:ihs_server^2.0.47 @Endpoint:dev01-ep
To check the status of the distribution, you can use the wmdist -e
<distributionId> command to see how the package is being transferred from the
srchost to the target.
8. Check the logfile in <img_dir>/logs/ihs_server_2047.log
For listings of all the files used, except the installation image, refer to A.7.2, “IBM
HTTP Server v2.0.47” on page 383.
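The import, convert, and distribute sequence in steps 5 through 7 repeats, with different names, for every package in the remainder of this chapter. A wrapper along the following lines can reduce typing; it is a sketch only, and the function and variable names are our own, not part of the product:

#!/bin/sh
# Hypothetical wrapper around the recurring package deployment sequence.
# SRCHOST, IMG_DIR, and the endpoint name follow this chapter's conventions.
SRCHOST=srchost
IMG_DIR=/mnt
PM=hubtmr-region_MASTER_PM_SWD
EP=dev01-ep

deploy_package() {
    spd=$1      # path to the .spd file, relative to $IMG_DIR
    pkg=$2      # package name, for example ihs_server^2.0.47
    spb=$3      # software package block file name

    # Import the software package definition
    wimpspo -c $PM -f "@ManagedNode:$SRCHOST:$IMG_DIR/$spd" "$pkg" || return 1
    # Convert it to a software package block
    wconvspo -t build -o -p "$IMG_DIR/code/tivoli/SPB/$spb" "$pkg" || return 1
    # Distribute it to the test endpoint
    winstsp -f "@SoftwarePackage:$pkg" "@Endpoint:$EP"
}

deploy_package spd/ibm/ihs/2047/all/Linux-IX86/ihs_server.2.0.47.spd \
               "ihs_server^2.0.47" ihs_server.2.0.47.spb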
WebSphere Message Queuing Server V5.3
Next, we install another prerequisite for WebSphere Application Server:
WebSphere Message Queuing Server v5.3. Even though the WebSphere
Application Server v5.1 installation includes the MQ component, maintenance
and updates are made easier by installing this component as a standalone
product.
To use the software package for WebSphere Message Queuing v5.3 that we
used, perform the following steps:
1. Copy or create the software package description file mq_server.5.3.0.spd to
your image server in the
<img_dir>/spd/websphere/mq/530/server/Linux-IX86 directory.
2. Download the compressed installation image and place it in the
<img_dir>/code/websphere/mq/530/server/Linux-IX86 directory on the
srchost system with a name of C48UBML_WASMQ-Linux-5.3.0.2.tar.gz.
3. Copy the custom scripts for installation and uninstallation:
– mq_users.sh
– mq_server_53_install.sh
– mq_server_53_uninstall.sh
Paste them to the <img_dir>/tools/websphere/mq/530/server/Linux-IX86
directory.
The mq_users.sh script is used to create the required user and group on the
local system prior to installing WebSphere Message Queuing Server.
4. Build and test the software package mq_server^5.3.0:
wimpspo -c hubtmr-region_MASTER_PM_SWD -f
@ManagedNode:<srchost>:<img_dir>/spd/websphere/mq/530/server/Linux-IX86/mq_server.5.3.0.spd mq_server^5.3.0
5. Convert the software package to a software package block:
wconvspo -t build -o -p <img_dir>/code/tivoli/SPB/mq_server.5.3.0.spb
mq_server^5.3.0
6. Distribute the software package:
winstsp -f @SoftwarePackage:mq_server^5.3.0 @Endpoint:dev01-ep
To check the status of the distribution, you can use the wmdist -l -i
<distribution id> to see the status of the software package distribution.
7. Check the logfile in <img_dir>/logs/mq_server_530.log
For listings of all the files used, except the installation image, refer to A.7.3,
“WebSphere Message Queuing Server v5.3” on page 395.
WebSphere Message Queuing Server V5.3 Fixpack 8
To apply the latest necessary fixes to the WebSphere Message Queuing Server,
we also have to install Fixpack 8. A special software package has been
developed for this purpose.
To use the software package for WebSphere Message Queuing v5.3 fixpack 8,
perform the following steps:
1. Copy or create the software package description file
mq_server.5.3.0_fp8.spd to your image server in the
<img_dir>/spd/websphere/mq/530_fp8/server/Linux-IX86 directory.
2. Download the compressed installation image and place it in the
<img_dir>/code/websphere/mq/530_fp8/server/Linux-IX86 directory on the
srchost system with a name of WASMQ-CDS8-Linux-U497537.gskit.tar.gz.
3. Copy the custom scripts for installation and uninstallation:
– mq_server_538_install.sh
– mq_server_538_uninstall.sh
Paste them to <img_dir>/tools/websphere/mq/530_fp8/server/Linux-IX86.
4. Build and test the software package mq_server^5.3.0_fp8:
wimpspo -c hubtmr-region_MASTER_PM_SWD -f
@ManagedNode:<srchost>:<img_dir>/spd/websphere/mq/530_fp8/server/Linux-IX86/mq_server.5.3.0_fp8.spd mq_server^5.3.0_fp8
5. Convert the software package to a software package block:
wconvspo -t build -o -p <img_dir>/code/tivoli/SPB/mq_server.5.3.0_fp8.spb
mq_server^5.3.0_fp8
6. Distribute the software package:
winstsp -f @SoftwarePackage:mq_server^5.3.0_fp8 @Endpoint:dev01-ep
To check the status of the distribution, you can use wmdist -l -i
<distribution id> to see the status of the software package distribution.
7. Check the logfile in <img_dir>/logs/mq_server_530_fp8.log.
For listings of all the files used, except the installation image, refer to A.7.4,
“WebSphere Message Queuing Server v5.3 Fixpack 8” on page 408.
WebSphere Application Server v5.1
Now we are finally ready to install WebSphere Application Server. The
prerequisite components are available, but in addition to the basic installation of
WebSphere Application Server, we have to apply customization to enable
performance metric monitoring. This will enable resource models for WebSphere
Application Server to receive data.
To use the software package for WebSphere Application Server v5.1 that we
used, you should perform the following steps:
1. Copy or create the software package description file was_server.5.1.0.spd
to your image server in the
<img_dir>/spd/websphere/was/510/appserver/Linux-IX86 directory.
2. Download the WebSphere Application Server v5.1 installation image for Linux
and place it in the
<img_dir>/code/websphere/was/510/appserver/Linux-IX86 directory on the
srchost system with a name of C53IPML-WAS-510-LinuxIX86.tar.
3. Copy the was_appserver_510.rsp response file to your image server in the
<img_dir>/rsp/websphere/was/510/appserver/Linux-IX86 directory.
4. Copy the custom scripts for installation and uninstallation:
   – was_appserver_510_install.sh
   – mod_was_server_JVM_process.sh
   – mod_was_pmirm_settings.sh
   – was_appserver_510_uninstall.sh
   Paste them to the
   <img_dir>/tools/websphere/was/510/appserver/Linux-IX86 directory.
The mod_was...sh scripts are used to update the WebSphere Application
Server configuration to enable monitoring by IBM Tivoli Monitoring for Web
Infrastructure. See “Preparing WebSphere Application Server for monitoring”
on page 312 for more details on the specifics of these settings.
5. Build and test the software package was_appserver^5.1.0.
wimpspo -c hubtmr-region_MASTER_PM_SWD -f
@ManagedNode:<srchost>:<img_dir>/spd/websphere/was/510/appserver/Linux-IX86
/was_server.5.1.0.spd was_appserver^5.1.0
6. Convert the software package to a software package block:
wconvspo -t build -o -p <img_dir>/code/tivoli/SPB/was_server.5.1.0.spb
was_appserver^5.1.0
7. Distribute the software package:
winstsp -f @SoftwarePackage:was_appserver^5.1.0 @Endpoint:dev01-ep
To check the status of the distribution, you can use the wmdist -l -i
<distribution id> to see the status of the software package distribution.
8. Check the logfile in <img_dir>/logs/was_appserver_510.log.
For listings of all the files used, except the installation image, please refer to
A.7.5, “WebSphere Application Server v5.1” on page 417.
WebSphere Application Server v5.1 Fixpack 1
The final software package that needs to be installed on our test system is the
WebSphere Application Server v5.1 Fixpack 1. Creation of the package follows
the normal procedure:
1. Copy or create the software package description file
was_server.5.1.0_fp1.spd to your image server in the
<img_dir>/spd/websphere/was/510_fp1/appserver/Linux-IX86 directory.
2. Download the compressed WebSphere Application Server v5.1 Fixpack 1
installation image for Linux and place it in the
<img_dir>/code/websphere/was/510_fp1/appserver/Linux-IX86 directory on
the srchost system with a name of was51_fp1_linux.tar.gz.
3. Build and test the software package was_appserver^5.1.0_fp1:
wimpspo -c hubtmr-region_MASTER_PM_SWD -f
@ManagedNode:<srchost>:<img_dir>/spd/websphere/was/510_fp1/appserver/Linux-IX86/was_server.5.1.0_fp1.spd was_appserver^5.1.0_fp1
4. Convert the software package to a software package block:
wconvspo -t build -o -p <img_dir>/code/tivoli/SPB/was_server.5.1.0_fp1.spb
was_appserver^5.1.0_fp1
5. Distribute the software package:
winstsp -f @SoftwarePackage:was_appserver^5.1.0_fp1 @Endpoint:dev01-ep
To check the status of the distribution, you can use the wmdist -l -i
<distribution id> to see the status of the software package distribution.
6. Check the logfile in <img_dir>/logs/was_appserver_510_fp1.log.
For listings of all the files used, except the installation image, refer to A.7.6,
“WebSphere Application Server v5.1 Fixpack 1” on page 464.
WebSphere Caching Proxy v5.1
Now it is time to install WebSphere Caching Proxy v5.1. The prerequisite
components are available, and customization will be applied in the
ibmproxy.conf file.
To use the software package that we used for WebSphere Caching Proxy v5.1,
perform the following steps:
1. Copy or create the software package description file
wses_cachingproxy.5.1.0.spd to your image server in the
<img_dir>/spd/websphere/edge/510/cachingproxy/Linux-IX86 directory.
2. Download the WebSphere Caching Proxy v5.1 installation image for Linux
and place it in the
<img_dir>/code/websphere/edge/510/cachingproxy/Linux-IX86 directory on the
srchost system with a name of EdgeComponents51-Linux.tar.
3. Copy the ibmproxy.conf configuration file to your image server in the
<img_dir>/rsp/websphere/edge/510/cachingproxy/Linux-IX86 directory.
4. Copy the custom scripts for installation and uninstallation:
– wses_51_cachingproxy_install.sh
– wses_51_cachingproxy_uninstall.sh
Paste them to the
<img_dir>/tools/websphere/edge/510/cachingproxy/Linux-IX86 directory.
5. Build and test the software package wses_cachingproxy^5.1.0:
wimpspo -c hubtmr-region_MASTER_PM_SWD -f
@ManagedNode:<srchost>:<img_dir>/spd/websphere/edge/510/cachingproxy/Linux-IX86/wses_cachingproxy.5.1.0.spd wses_cachingproxy^5.1.0
6. Convert the software package to a software package block:
wconvspo -t build -o -p
<img_dir>/code/tivoli/SPB/wses_cachingproxy.5.1.0.spb
wses_cachingproxy^5.1.0
7. Distribute the software package:
winstsp -f @SoftwarePackage:wses_cachingproxy^5.1.0 @Endpoint:dev01-ep
To check the status of the distribution, you can use the wmdist -l -i
<distribution id> command to see the status of the software package
distribution.
8. Check the logfile in <img_dir>/logs/wses_cachingproxy_510.log
For listings of all the files used, except the installation image, please refer to
A.7.9, “WebSphere Caching Proxy v5.1” on page 503.
WebSphere Caching Proxy v5.1 Fixpack 1
The final software package that needs to be installed on our test system is the
WebSphere Caching Proxy v5.1 Fixpack 1. Creation of the package follows the
normal procedure:
1. Copy or create the software package description file
wses_cachingproxy.5.1.0_fp1.spd to your image server in the
<img_dir>/spd/websphere/edge/510_fp1/cachingproxy/Linux-IX86
directory.
2. Download the compressed WebSphere Caching Proxy v5.1 Fixpack 1
installation image for Linux and place it in the
<img_dir>/code/websphere/edge/510_fp1/cachingproxy/Linux-IX86
directory on the srchost system with a name of
EdgeCachingProxy511-Linux.tar.
3. Build and test the software package wses_cachingproxy^5.1.0_fp1:
wimpspo -c hubtmr-region_MASTER_PM_SWD -f
@ManagedNode:<srchost>:<img_dir>/spd/websphere/edge/510_fp1/cachingproxy/Li
nux-IX86/wses_cachingproxy.5.1.0_fp1.spd wses_cachingproxy^5.1.0_fp1
4. Convert the software package to a software package block:
wconvspo -t build -o -p
<img_dir>/code/tivoli/SPB/wses_cachingproxy.5.1.0_fp1.spb
wses_cachingproxy^5.1.0_fp1
5. Distribute the software package:
winstsp -f @SoftwarePackage:wses_cachingproxy^5.1.0_fp1 @Endpoint:dev01-ep
To check the status of the distribution, you can use the wmdist -l -i
<distribution id> command to see the status of the software package
distribution.
6. Check the logfile in <img_dir>/logs/wses_cachingproxy_510_fp1.log.
For listings of all the files used - except the installation image - please refer to
A.7.10, “WebSphere Caching Proxy v5.1 Fixpack 1” on page 592.
5.3.13 Enabling WebSphere Application Server monitoring
Now that WebSphere Application Server and the basic monitoring engine have
been installed on the endpoint, it is time to discuss how we monitor the
WebSphere Application Server and related resources.
Because one WebSphere Application Server can host multiple logical application
server instances, we need to create unique instances of the
IWSApplicationServer object, which will be hosted on the gateway supporting the
WebSphere Application Server system. When distributing monitoring profiles, it
is important to use the IWSApplicationServer object as the target for the
distribution rather than the normal endpoint representing the system.
The steps necessary to facilitate monitoring of WebSphere Application Server
resources are:
1. “Preparing WebSphere Application Server for monitoring” on page 312
2. “Creating WebSphere Application Server Objects” on page 313
3. “Configuring the Tivoli Enterprise Adapter for local systems” on page 315
4. “Distributing Profiles” on page 315
Preparing WebSphere Application Server for monitoring
To allow the WebSphere resource models to receive performance and availability
metrics from the WebSphere Application server, the WebSphere Application
Server needs to be configured to gather and report these metrics.
Note: This step has been incorporated into the was_appserver^5.1.0 software
package, using the scripted method described in step 1 on page 313.
If performed manually, the following steps must be completed to configure the
local endpoints to support IBM Tivoli Monitoring for Web Infrastructure:
WebSphere Application Server:
1. Start the WebSphere Administrative Console in your browser. Use the
following URL: http://localhost:9090/admin.
2. Expand Servers in the hierarchical tree in the left pane of the IBM
WebSphere Administrative Console.
3. Click Application Servers.
4. Select the Application Server: server1.
5. Click Process Definition.
6. Click Java Virtual Machine.
7. Enter -XrunpmiJvmpiProfiler in the Generic JVM arguments field.
8. Click Apply.
9. Click OK.
10.Go back to the Configuration tab for the Application Server.
11.Click Performance Monitoring Service.
12.Select the Custom radio button in the Initial Specification level section.
13.Find jvmRuntimeModule in the list box and change it to jvmRuntimeModule=X.
14.Click Apply.
15.Click OK.
16.Expand the Troubleshooting section in the hierarchical navigation tree in the
pane to the left.
17.Click PMI Request Metrics.
18.Click the check box beside Enable in the Request Metrics field.
19.Click Save in the message at the top above General Properties.
20.Click Save in the Save to Master Configuration pane.
To apply the modifications to the WebSphere Application Server configuration
automatically, complete the following steps.
1. Run the mod_was_server_JVM_process.sh script (described in
“mod_was_server_JVM_process.sh” on page 458) to configure performance
metrics reporting.
A machine-readable version of this script can be obtained from the online
material accompanying this book. Refer to Appendix C, “Additional material”
on page 655 for details on how to obtain your copy.
2. Execute the mod_was_pmirm_settings.sh script shown in
“mod_was_pmirm_settings.sh” on page 459 to enable PMI reporting for your
WebSphere Application Server.
Note: Whether the performance metrics reporting was enabled manually or
automatically, the changes do not take effect until you restart the
application server. Start the application server, or restart it if it is
currently running.
Creating WebSphere Application Server Objects
The WebSphere Application Server object can be created in one of three ways:
- From the command line, using the wwebsphere -CreateAppServer command to
  create an IWSApplicationServer (IBM WebSphere Application Server Version 5
  only) object
- Using the Tivoli Framework GUI
- Using the Discover_WebSphere_Resources task located in the WebSphere
  Application Server Utility Tasks task library
Ultimately, all three methods issue wwebsphere commands behind the scenes.
To create the IWSApplicationServer object for the logical server server1 hosted
by the WebSphere Application Server installed on the dev01 system, we used
the task invocation parameters shown in Example 5-24.
Example 5-24 Discover WebSphere Resources task invocation
hubtmr:/ # wruntask -t Discover_WebSphere_Resources -l "WebSphere Application
Server Utility Tasks" -h @ManagedNode:hubtmr -a "WebSphere Application Servers"
-a /opt/IBM/WebSphere/AppServer -a dev01-ep -m 300 -o15
############################################################################
Task Name:     Discover_WebSphere_Resources
Task Endpoint: hubtmr (ManagedNode)
Return Code:   0
------Standard Output------
############################################################################
Return Code:   0
IZY1137I Creating policy region WebSphere Application Servers
IZY1138I The policy region already exists. No need to create it.
############################################################################
IZY1144I Discovering endpoint dev01-ep
############################################################################
Task Name:     Discover_endpoint
Task Endpoint: dev01-ep (Endpoint)
Return Code:   0
------Standard Output------
IZY1148I Creating application server AppSvr@dev01@dev01@server1
IZY1149I Adding AppSvr@dev01@dev01@server1 to IBM Tivoli Monitoring for Web
Infrastructure: WebSphere Application Servers application servers subscriber
list
IZY1150I Adding AppSvr@dev01@dev01@server1 to IBM Tivoli Monitoring for Web
Infrastructure: WebSphere Application Servers Application Servers With Servlets
subscriber list
IZY1002I Task complete
------Standard Error Output------
############################################################################
IZY1002I Task complete
------Standard Error Output------
############################################################################
For more details on the parameters for the Discover_WebSphere_Resources
task, visit the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Web Infrastructure → Reference Guide →
WebSphere Application Server Utility Tasks →
Discover_WebSphere_Resources.
The discovery task requires the WebSphere Application Server to be active. If
this causes a problem, as an alternative, you can create the
IWSApplicationServer object directly using the wwebsphere -CreateAppServer command, as
shown in A.5.1, “create_app_server.sh” on page 376.
Whether you create the IWSApplicationServer object manually or by using the
create_app_server.sh script, you might want to subscribe it to the default profile
managers used by the Discover WebSphere Resources task. The commands in
Example 5-25 on page 315 accomplish this:
Example 5-25 IWSApplicationServer objects and the default profile managers
wsub "@ProfileManager:WebSphere Application Servers#hubtmr-region"
@IWSApplicationServer:AppSvr@dev01@dev01@server1
wsub "@ProfileManager:WebSphere Application Servers With
Servlets#hubtmr-region" @IWSApplicationServer:AppSvr@dev01@dev01@server1
Configuring the Tivoli Enterprise Adapter for local systems
The Tivoli Enterprise Console adapter is used to ensure that general WebSphere
Application Server messages of types FATAL, ERROR, AUDIT, WARNING, and
TERMINATE are forwarded to the Tivoli Enterprise Console. The Tivoli
Enterprise Console adapter is also self-reporting; you can see the adapter status
events in your console.
We used the sample script, was_configure_tec_adapter.sh, shown in A.5.3,
“was_configure_tec_adapter.sh” on page 378 to install and configure the WAS
TEC Adapter using the Configure_WebSphere_TEC_Adapter task from the
WebSphere Event Tasks task library.
For more details on the parameters for the Configure_WebSphere_TEC_Adapter
task, visit the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Web Infrastructure → Reference Guide →
WebSphere Application Server Utility Tasks →
Configure_WebSphere_TEC_Adapter.
After configuring the TEC Adapter, it must be stopped and restarted. Sample
tasks, Stop_WebSphere_TEC_Adapter and Start_WebSphere_TEC_Adapter, are
provided in the WebSphere Event Tasks task library for this purpose. The sample
scripts was_start_tec_adapter.sh and was_stop_tec_adapter.sh, shown in A.5.4,
“was_start_tec_adapter.sh” on page 379 and A.5.5, “was_stop_tec_adapter.sh” on
page 379, show how to invoke these tasks automatically.
Distributing Profiles
As part of the IBM Tivoli Monitoring for Web Infrastructure: WebSphere
Application Server installation, a top-level policy region called Monitoring for
WebSphere Application Server is created and linked to the root desktop. It
contains a number of resources that can be used to administer WebSphere
resources.
Once the WebSphere Application Server IWSApplicationServer objects have
been created, you must distribute one or more profiles containing specific
WebSphere resource models to the IWSApplicationServer object.
Use the standard wdmdistrib command, but remember to specify the
IWSApplicationServer object as the target of the distribution.
To distribute the WebSphere Application Servers Profile provided with the IBM
Tivoli Monitoring for WI installation to the dev01 system, we used the following
command:
wdmdistrib -p "WebSphere Application Servers Profile#hubtmr-region"
-M over_all_no_merge @IWSApplicationServer:AppSvr@dev01@dev01@server1
Attention: If you distribute a Tmw2kProfile containing WebSphere Resource
Models to the wrong subscriber, such as a normal endpoint instead of an
IWSApplicationServer object, you will receive the following error message
when running wdmlseng -e EP_NAME:
WebSphereAS_Transactions_10: Unable to start (150)
To correct this problem, you must remove the profile from the endpoint using
the wdmeng command, then redistribute the profile to the correct targets.
If you failed to enable PMI Metrics gathering on the WebSphere Application
Server, you will receive the following error status when running the wdmlseng -e
<ep_name> command:
Example 5-26 Output from wdmlseng -e <ep_name>
WebSphereAS_Transactions_10: Failing (54)
WebSphereAS_JVMRuntime_10: Failing (54)
WebSphereAS_ApplicationServerStatus_10: Running
WebSphereAS_DBPools_10: Failing (54)
WebSphereAS_ThreadPool_10: Failing (54)
Refer to “Preparing WebSphere Application Server for monitoring” on page 312
for instructions on how to complete this task.
If the PMI Metrics gathering is configured correctly and you still encounter
problems, some of the attributes of the IWSApplicationServer object might be
incorrect, especially if you created the IWSApplicationServer object manually.
To recover, perform the following steps:
1. Remove the profile (wdmeng -e <endpoint> -p <profile> -delete).
2. Stop the engine (wdmcmd -stop -e <endpoint>).
3. Update/verify the IWSApplicationServer object attributes using this script:
$BINDIR/../generic_unix/TME/WSAPPSVR/tasks/itmwas_set_object_props.sh
4. Restart the engine (wdmcmd -restart -e <endpoint>).
5. Redistribute the profile (wdistrib -l over_all <profile>
<IWSApplicationServer object>).
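Combined into a single script, the recovery could look like the following sketch; the endpoint, profile, and object names are placeholders taken from our test environment:

#!/bin/sh
# Sketch of the recovery sequence above for one endpoint. All names are
# placeholders from the test environment described in this chapter.
EP=dev01-ep
PROFILE="WebSphere Application Servers Profile#hubtmr-region"
APPSVR="@IWSApplicationServer:AppSvr@dev01@dev01@server1"

wdmeng -e $EP -p "$PROFILE" -delete        # 1. remove the profile
wdmcmd -stop -e $EP                        # 2. stop the engine
# 3. update/verify the IWSApplicationServer object attributes
$BINDIR/../generic_unix/TME/WSAPPSVR/tasks/itmwas_set_object_props.sh
wdmcmd -restart -e $EP                     # 4. restart the engine
wdistrib -l over_all "$PROFILE" "$APPSVR"  # 5. redistribute the profile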
5.3.14 Creating a software package for TimeCard Store Server
Now, we are ready to install the TimeCard application on the sample store server
system. Installation will allow us to test it and to create and develop packages for
transaction monitor deployment.
The application is a WebSphere application that uses its own local database.
This database, as well as the instance that hosts it, will be created and
distributed as part of the package mentioned in 5.3.8, “Creating a DB2 instance
for the Outlet Solution” on page 294.
In addition to the jar file that contains the application code, a properties file also
has to be distributed to the target system.
To set up the software package for installation and removal of the TimeCard
application, copy files from the online material accompanying this book (see
Appendix C, “Additional material” on page 655) using the following steps:
1. Copy the installation files and directories from the
/mnt/code/outlet/TimeCard/510/all/generic directory to the
<img_dir>/code/outlet/TimeCard/510/all/generic directory on the srchost
system.
2. Copy the custom scripts for installation and uninstallation from the
/mnt/tools/outlet/TimeCard/510/all/generic directory to the
<img_dir>/tools/outlet/TimeCard/510/all/generic directory on the srchost
system.
3. Review and customize the scripts to suit your environment. The files are:
– timecard_install.sh
– db2_store_cfg.sh
– Table.ddl
– timecard_install.jacl
4. Copy or create the software package description file timecard.5.1.0.spd to
your image server in the <img_dir>/spd/outlet/TimeCard/510/all/generic
directory. The spd file we used can be found in
/mnt/spd/outlet/TimeCard/510/all/generic.
5. Build and test the software package timecard^5.1.0:
wimpspo -c hubtmr-region_MASTER_PM_SWD -f
@ManagedNode:<srchost>:<img_dir>/spd/outlet/TimeCard/510/all/generic/timeca
rd.5.1.0.spd timecard^5.1.0
6. Convert the software package to a software package block:
wconvspo -t build -o -p <img_dir>/code/tivoli/SPB/timecard.5.1.0.spb
timecard^5.1.0
7. Distribute the software package:
winstsp -f @SoftwarePackage:timecard^5.1.0 @Endpoint:dev01-ep
8. To check the status of the package distribution, you can use the wmdist -l -i
<distribution id> command.
5.3.15 Installing the TMTP Management Agent
It is recommended that the TMTP Management Agent be installed using a silent
installation. This will allow the deployment of large numbers of Management
Agents with minimal manual effort. The agent could also be deployed using the
manual installation method described in the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Transaction Performance → Installation and
Configuration Guide → Installing a management agent → Installing a
management agent.
Determining TMTP Management Agent locations
Table 4-19 on page 218 outlines which TMTP-related components to install and
where, in accordance with the functional roles of the systems described in
“Management environments” on page 131. The table shows that TMTP
management agents will be installed on the outletXX systems, the targets of the
monitoring. Naturally, the management agent code can also be installed on other
WebSphere Application Server systems, such as the Web Health Console, in the
event you want to monitor the transactions on this server.
Silent installation
To perform a silent installation of the TMTP Management Agent, edit the
provided response file MA.opt with the required settings for your environment
and run the following command from the directory containing the Agent
installation code:
./setup_MA_lin.bin -silent -is:javaconsole -is:silent -options path/MA.opt
A sample response file is provided in “MA.rsp” on page 502. Notice that a few of
the parameters have been commented out because they apply only to Windows
management agents.
Note: Be aware that during installation of the TMTP Management Agent, the
agent attempts to register with the TMTP Server. If the TMTP Server is not
available, the Management Agent installation will fail.
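Because of this, an installation wrapper may want to verify that the server answers before launching the installer. The following is a minimal sketch, assuming the TMTP Server URL used elsewhere in this book and a placeholder path for the option file:

#!/bin/sh
# Sketch: verify the TMTP Server is reachable, then run the silent install.
# The URL and option-file path are placeholders for your environment.
TMTP_URL=https://tmtpsrv:9445/
OPTFILE=/mnt/rsp/tivoli/TMTP/530/agents/generic/MA.opt

if wget -q --no-check-certificate -O /dev/null "$TMTP_URL"; then
    ./setup_MA_lin.bin -silent -is:javaconsole -is:silent -options "$OPTFILE"
else
    echo "TMTP Server not reachable; aborting agent installation" >&2
    exit 1
fi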
To build your own software package for deployment of TMTP Agents in the
Outlet infrastructure, perform the following steps:
1. Download the TMTP v5.3 installation images, and copy the contents of the
Agents subdirectory of the CD image named IBM Tivoli Monitoring for
Transaction Performance, Version 5.3.0: Web Transaction Performance
Component Software Management Agent, Store and Forward installation
archive to the <img_dir>/code/tivoli/TMTP/530/agents/generic directory on
the srchost system. Refer to Appendix B, “Obtaining the installation images”
on page 645 for advice on how to obtain a copy of the TMTP v5.3 installation
images.
2. Create a response file for Agent installation, or copy the one provided in
/mnt/rsp/tivoli/TMTP/530/agents/generic/MA.rsp in the online material
accompanying this book. The location of the response file on your image
server (srchost) should be
<img_dir>/rsp/tivoli/TMTP/530/agents/generic/MA.rsp.
Review the response file and adjust to the particulars of your environment.
3. Copy or create the software package description file tmtp_agent.5.3.0.spd to
your image server in the <img_dir>/spd/tivoli/TMTP/530/agents/generic
directory.
4. Build and test the software package tmtp_agent^5.3.0:
wimpspo -c hubtmr-region_MASTER_PM_SWD -f
@ManagedNode:<srchost>:<img_dir>/spd/tivoli/TMTP/530/agents/generic/tmtp_ag
ent.5.3.0.spd tmtp_agent^5.3.0
5. Convert the software package to a software package block:
wconvspo -t build -o -p <img_dir>/code/tivoli/SPB/tmtp_agent.5.3.0.spb
tmtp_agent^5.3.0
6. Distribute the software package:
winstsp -f @SoftwarePackage:tmtp_agent^5.3.0 @Endpoint:dev01-ep
To check the status of the distribution, you can use the wmdist -e
<distributionId> to see how the package is transferred from the srchost to
the target.
7. Check the result of the distribution by looking in the logfile in
<img_dir>/logs/tmtp_agent_530.log.
For listings of all the files used, except the installation image, refer to A.7.8,
“TMTP Agent v5.3” on page 494.
Verifying the installation
To verify the installation and operation of the newly installed TMTP Management
Agent, perform the following steps:
1. Open the TMTP Console at https://tmtpsrv:9445/tmtpUI/.
2. Navigate to System Administration → Work with Agents and verify that the
newly installed management agent is online.
5.3.16 Configuring TMTP to monitor WebSphere
After the TMTP Management Agents have been deployed, it is necessary to
perform a number of configuration steps to enable the performance monitoring.
Many of these configuration steps can be performed from both the TMTP
Administrative Console, and the Command Line Interface. The CLI is very useful
for creating scripts to automatically configure new Management Agents as they
are deployed.
Where relevant in this section, we will show examples based on the TMTP CLI
installation on the hubtmr system, described in “Setting up the TMTP command
line interface” on page 220. Currently, tmtpcli does not support all
operations from the command line. The location of the tmtpcli executable on the
hubtmr system is /opt/IBM/tivoli/tmtp_cli/cli/tmtpcli.sh.
Remember that TMTP is not a TMF component; it has its own infrastructure. To
facilitate transaction monitoring in the Outlet Systems Management Solution,
it is necessary to create some basic objects, similar to endpoints, profile
managers, and monitoring profiles. To set up basic transaction monitoring of
the Outlet Solution, the following tasks must be performed:
1. “Deploying the J2EE component to the agent” on page 321
2. “Creating an Agent Group” on page 322
3. “Add agents to the Agent Group” on page 323
4. “Creating Schedules” on page 323
5. “Creating a policy group” on page 324
6. “Creating and deploying a discovery policy” on page 324
7. “Creating and deploying a listening policy” on page 325
All of these steps, except for steps 1 and 3, must be performed only once. In the
Outlet Systems Management Solution we will deploy the J2EE component to add
agents to agent groups upon successful installation of the TMTP Agent on each
outlet server. However, these steps are included here to allow for a smooth flow
of the description of the customization process.
Before you start defining the basic TMTP objects, it is recommended that you
develop a naming standard for all TMTP objects. This allows for clear and easy
identification of a particular object’s scope and purpose. We suggest the
following:
<organization name>_<application name> | <short_description>_[<subtype of
object>]_<type of object>
Some sample object names are shown in Example 5-27.
Example 5-27 Sample TMTP object names
Outlet_TimeCard_J2EE_Discovery
Outlet_Always_J2EE_Schedule
Outlet_All-Outlet-Servers_Agent_Group
Outlet_TimeCard_J2EE-Servlet_Listening
Outlet_TimeCard_J2EE-EJB_Listening
Outlet_TimeCard_J2EE_PolicyGroup
In general, the names you use should be as self-explanatory as possible.
Deploying the J2EE component to the agent
TMTP allows you to deploy different monitoring components to endpoints to
facilitate specialized transaction monitoring. One of these components, the
J2EE component, enables an agent to monitor transactions in WebSphere
applications. Before monitoring can be activated, this component has to be
deployed to the TMTP Agent.
Note: In the production environment, this activity will be carried out
immediately after the successful installation of the TMTP Agent on the Outlet
servers.
The following steps describe how to deploy the J2EE monitoring component,
either from the TMTP Console or with the TMTP CLI:
- TMTP Administrative Console
  a. Click System Administration → Work with Agents.
  b. Select the agent to which the J2EE listener is to be distributed.
  c. Select Deploy J2EE from the drop down list and click Go.
- Command line
  – If WebSphere Security at the endpoint is not enabled:
./tmtpcli.sh -DeployJ2ee -AgentName agentname -ServerType WebSphere
-Version 5.0 -ServerHome washome -ServerName websphere_appservername
-NodeName websphere_nodename -AutoRestart -Console
– If WebSphere Security is enabled:
./tmtpcli.sh -DeployJ2ee -AgentName agentname -ServerType WebSphere
-Version 5.0 -ServerHome washome -ServerName websphere_appservername
-NodeName websphere_nodename -AutoRestart -IsSecure -AdminUserName
adminusername -AdminPassword adminpassword -Console
To deploy the J2EE component to the dev01.demo.tivoli.com TMTP agent
on the dev01 system, using a secure WebSphere Application Server, we
used the following command in Example 5-28:
Example 5-28 Deploying the J2EE component
./tmtpcli.sh -DeployJ2ee \
-AgentName dev01.demo.tivoli.com \
-ServerType WebSphere \
-Version 5.0 \
-ServerHome /opt/IBM/WebSphere/AppServer \
-ServerName server1 \
-NodeName dev01 \
-AutoRestart \
-IsSecure \
-AdminUserName root \
-AdminPassword smartway \
-Console
Creating an Agent Group
Agent groups are used to group agents into one logical group to which the
sample monitoring policies apply. This is very similar to the profile manager and
subscription schema of the Tivoli Framework.
Perform the following actions to create an agent group for the Outlet Servers
running the TimeCard application:
- From the TMTP Console:
  a. Click Configuration → Work with Agent Groups → Create New.
  b. Enter a name and description for the new group.
  c. Optionally, select the agents to subscribe to this group.
  d. Click Finish.
- Using the TMTP Command Line Interface:
a. Use the tmtpcli -CreateAgentGroup command from the command line at
the hubtmr system to create a new agent group:
./tmtpcli.sh -CreateAgentGroup groupname -Agents
To create an agent group for the Outlet Systems Management Solution we
used the following command:
./tmtpcli.sh -CreateAgentGroup Outlet_TimeCard_Servers_AgentGroup
-Agents -Console
Note: The -Console option instructs the TMTP command line interface
to display the output on the console, in addition to storing it in the
default log.
Add agents to the Agent Group
For the purpose of discussing the TMTP customization process, we add the
dev01.demo.tivoli.com agent to the agent group at this point. In the Outlet
Solution production environment, this will be performed automatically upon
successful installation of the TMTP Agent on each Outlet Server.
The agents can be added either through the TMTP Console or with the TMTP
command line interface:
- TMTP Console
  a. Click Configuration → Work with Agent Groups.
  b. Select the agent group.
  c. Select Edit from the drop down list and click Go.
  d. Select the agents that should be members of this group and click Finish.
- Command line
  a. Use the tmtpcli -AddToAgentGroup command from the command line of the
     hubtmr system to add an agent to an agent group:
./tmtpcli.sh -AddToAgentGroup groupname -Agents agent1 agent2...agentn -Console
To add a new agent in the Outlet Systems Management Solution, we used
the following command:
./tmtpcli.sh -AddToAgentGroup Outlet_TimeCard_Servers_AgentGroup -Agents
dev01.demo.tivoli.com -Console
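In the production environment this step would be scripted. The following is a sketch of such a post-install hook, run on the hubtmr system with the new agent name as its argument; the script itself is hypothetical:

#!/bin/sh
# Hypothetical post-install hook: add a newly installed agent, passed as $1,
# to the Outlet agent group. The CLI path matches the hubtmr installation.
AGENT=${1:?usage: $0 agent-name}

/opt/IBM/tivoli/tmtp_cli/cli/tmtpcli.sh -AddToAgentGroup \
    Outlet_TimeCard_Servers_AgentGroup -Agents "$AGENT" -Console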
Creating Schedules
Schedules are used to determine when monitoring should be active, and, for
playback schedules especially, when to execute a simulated interaction with a
site.
To create schedules for the Outlet Solution, perform the following tasks:
1. Create a Schedule for the J2EE Policies.
a. Select Work with Schedules → Discovery or Listening from the drop
down list and click Create New.
b. Enter a name for the schedule, for example:
Outlet_TimeCard_Always_Schedule
c. Accept the remaining defaults for start, stop and duration, and click OK.
Creating a policy group
Just as agent groups are used to group similar agents, policy groups are used to
group related policies that will be applied to agent groups simultaneously.
Follow these steps to create a policy group for the Outlet Solution:
1. Select Work with Policy Groups.
2. Click the Create New button.
3. Enter a name for the new policy group, for example:
Outlet_TimeCard_All_PolicyGroup
4. Select OK.
Creating policies
Now, all the basic constructs needed for creating and deploying policies are in
place.
J2EE monitoring policies, also known as listening policies, cannot be defined
until the basic transactions have been discovered. Therefore, to facilitate
monitoring of the J2EE TimeCard application, we need first to create a discovery
policy. After successful discovery, we can create the listening policy for the
specific transaction pattern.
The discovery process only has to be executed once for each specific
transaction. UIn the Outlet Solution case, we only have to perform one discovery
before deploying the listening policy to all the application servers in the Outlet
Solution.
Creating and deploying a discovery policy
This section outlines the steps required to create a discovery policy for the
TimeCard application.
To create a J2EE Servlet discovery policy, perform the following steps:
1. Select Work with Discovery Policies → J2EE Servlet from the drop down
list and click Create New
2. Supply the following values:
   URI Filter:  .*TimeCardWebProject/servlet/TimeCardServlet.*
   User Name:   .*
   Sample Rate: 100 percent
   Click Next.
Note: The parameters supplied are matched as regular expressions, so the .*
notation allows any number of characters before and after the specified
string. With .* in the User Name field, we will discover transactions from
all users.
3. Choose the Outlet_TimeCard_Always_Schedule schedule created
previously and click Next.
4. Select the appropriate agent group, for example,
   Outlet_TimeCard_Servers_AgentGroup, and click Next.
5. Enter a name for this J2EE Servlet discovery policy, for example
Outlet_TimeCard_J2EEServlet_Discover.
6. Click Finish.
Creating and deploying a listening policy
To create a new listening policy, the discovery policy you created in the previous
section must have first discovered some transactions. To generate traffic to be
discovered, direct your browser to the newly installed TimeCard application.
Because the discovery policy has been defined in such a way that only URL
requests that include the string TimeCardWebProject/servlet/TimeCardServlet
will be discovered, you will need to access, for example:
http://dev01:9080/TimeCardWebProject/servlet/TimeCardServlet.
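To produce a handful of matching requests quickly, a small loop will do. The following sketch is merely illustrative; the URL is the one above and the request count is arbitrary:

#!/bin/sh
# Illustrative sketch: generate traffic so the discovery policy has
# transactions to record. The request count is arbitrary.
URL=http://dev01:9080/TimeCardWebProject/servlet/TimeCardServlet

i=0
while [ $i -lt 20 ]; do
    wget -q -O /dev/null "$URL"
    i=$((i + 1))
done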
Wait approximately 15 minutes. The transaction history will have been uploaded
to the TMTP Server, and you can continue.
Meanwhile you might visit the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Monitoring for Transaction Performance → Administrator’s
Guide → Monitoring transactions with discovery and listening policies to
obtain detailed information about listening policy creation.
The next step is to define the listening policy for the TimeCard application by
using the discovered transactions.
1. Click Work with Discovery Policies.
2. Select the applicable discovery policy from the list on the right, for
   example, Outlet_TimeCard_J2EEServlet_Discover.
3. Select View Discovered Transactions from the list and click Go.
You can see a list of discovered transactions similar to that in Figure 5-2.
Figure 5-2 Transactions discovered by discovery policy
4. Select the transaction for which you would like to create a listening policy.
Choose Create Listening Policy From from the list, and click Go.
5. Enter 20 in the Sample Rate field.
6. Select Aggregate and Instance.
7. Click Next.
8. Select Performance from the list, and click the Create button.
9. Enter the desired threshold data and select TEC in both Event Response
boxes, as shown in Figure 5-3 on page 327. Click Apply.
Figure 5-3 Create J2EE Threshold
10.Click Configure J2EE Settings on the left side.
11.Click the check box beside the newly created J2EE threshold.
12.Under Advanced Listening Settings, select Low from the list beside
Choose Default Configuration.
Tip: Under normal circumstances, specify a Low configuration. Only when
you want to diagnose a performance problem should you increase the
configuration to Medium or High.
13. Click the Enable Intelligent Event Generation check box.
14. Enter 1 in the Time interval field.
15. Enter 1 in the Percentage Failed field.
16. Click Next.
17. Do not enter anything on the Configure ARM Settings screen, and click Next.
18. Choose the Outlet_TimeCard_Always_Schedule listening policy schedule, created in "Creating Schedules" on page 323.
   a. Select the Outlet_TimeCard_Servers_AgentGroup agent group, and click Next.
   b. Select the Outlet_TimeCard_All_PolicyGroup and click Next.
   c. Enter a name and description for this listening policy, for example Outlet_TimeCard_J2EE_Listen.
   d. Click Finish.
The listening profile will now be distributed to all the agents in the agent group,
and the transaction monitoring setup is complete.
Part 3
Putting the solution to use
Until this point, we have focused on installing and configuring the central components of the Outlet Systems Management Solution that will enable Outlet Inc. to manage its huge, widely distributed infrastructure.
In this part, we take a closer look at how the Outlet Systems Management Solution benefits Outlet Inc. The main topic is:
򐂰 Chapter 6, “Deployment” on page 331
Chapter 6. Deployment
A few final tasks must be performed to allow for fully automated deployment of software and monitoring profiles to the Outlet Servers. These tasks are:
1. 6.2, "Automating tasks" on page 333
2. 6.3, "Creating the deployment activity plan" on page 335
3. 6.4, "Defining the logical structure of the Outlet Inc. environment" on page 340
4. 6.5, "Creating endpoint policies" on page 343
5. 6.6, "Installing endpoints" on page 345
6.1 Automating deployment
The process that facilitates automatic deployment of software packages and
monitoring profiles is based on the endpoint policy support in the Tivoli
Management Framework. This allows for executing specialized scripts when the
endpoints log in, and provides support for handling the first login (initial login) as
a special case. Outlet Systems Management Solution will use those capabilities,
as shown in Figure 6-1.
Figure 6-1 The automated deployment process
The outline of the process is:
1. An endpoint is installed on the outlet server and attempts to log in to the hubtmr system (specified during installation).
2. The allow_login policy script is executed at the hubtmr, and uses the
<store>.lst files and the gateway.lst file to verify the eligibility of the endpoint
to log in.
3. The select_gateway policy script takes control, still at the hubtmr, in order to
identify the primary gateway for the endpoint. The gateway.lst file controls
this, and at the end, both the endpoint and the gateway are notified. The
endpoint logs off from the hubtmr.
4. The endpoint now attempts to log on to the assigned gateway, and the
login_policy script is executed.
5. The gateway realizes that this is the first successful login, and activates the after_login policy script, which in the Outlet Systems Management Solution is responsible for:
   a. Subscribing the endpoint to the correct profile managers
   b. Distributing hardware and software inventory scans
   c. Distributing the basic operating system and hardware monitoring profile
   d. Submitting an activity plan against the endpoint
6. The APM plan executes and installs software and monitoring capabilities on the endpoint.
To implement this setup, all we need, in addition to the bits and pieces developed in Chapter 5, "Creating profiles, packages, and tasks" on page 271, are procedures and scripts for:
1. 6.2, "Automating tasks" on page 333
2. 6.3, "Creating the deployment activity plan" on page 335
3. 6.4, "Defining the logical structure of the Outlet Inc. environment" on page 340
4. 6.5, "Creating endpoint policies" on page 343
5. 6.6, "Installing endpoints" on page 345
6. APM plan invocation
6.2 Automating tasks
In the Outlet Systems Management Solution, we want to automate the manual procedures related to distributing profiles to subscribing endpoints, as described in 5.3, "Creating monitoring profiles" on page 274.
Naturally, they could all be scripted and put into one huge script referenced from the after_login policy script. We chose another approach: embedding the profile distributions in the APM plan to allow for flow control, so that profiles for monitoring are only distributed if the monitored component is installed successfully. Because the activity plan can reference software packages as well as tasks, all the monitoring profile distribution functions needed must be scripted, and the scripts must then be related to tasks.
For the Outlet Systems Management Solution, the profile distribution scripts in
Table 6-1 on page 334 have been created:
Table 6-1 Profile distribution scripts

Script                         Purpose
ep_customization.sh            Submits the APM plan against a specific endpoint. For details, refer to A.8.1, "ep_customization.sh" on page 628.
create_DB2_instance_objects    Creates a DB2 Instance object. Code is available in A.4.1, "create_db2_instance_objects.sh" on page 371.
itm_DB2_instance_rm_distrib    Distributes DB2 Instance-related resource models. For details, see A.4.3, "itm_db2_instance_rm_distrib.sh" on page 373.
create_DB2_database_objects    Creates a DB2 Database object. See A.4.2, "create_db2_database_objects.sh" on page 372 for details.
itm_DB2_database_rm_distrib    Distributes DB2 Database-related resource models. For details, see A.4.4, "itm_db2_database_rm_distrib.sh" on page 374.
itm_WAS_rm_distrib             Distributes WebSphere Application Server-related resource models. Source code is available in A.5.2, "itm_was_rm_distrib.sh" on page 377.
Because it is efficient to automate the exchange of object definitions between the hub and spoke TMRs, the script update_TMR_Resources has been developed as well. For details, refer to A.2.4, "update_resources.sh" on page 357.
To define the scripts as tasks in the TME environment, we created a task library named Production_TL using the wcrttlib command. In the Outlet Systems Management Solution, we created it in the hubtmr-region Policy Region using the following command:
wcrttlib Production_TL hubtmr-region
To create tasks within the task library, use the wsettask command. To ease the
job of creating all the required tasks, we developed and used the script described
in A.2.3, “load_tasks.sh” on page 356. Table 6-2 on page 335 outlines the tasks
created and their related scripts.
Table 6-2 Production tasks and related scripts

Task                           Script
ep_customization               ep_customization.sh
create_DB2_instance_objects    create_db2_instance_objects.sh
itm_DB2_instance_rm_distrib    itm_db2_instance_rm_distrib.sh
create_DB2_database_objects    create_db2_database_objects.sh
itm_DB2_database_rm_distrib    itm_db2_database_rm_distrib.sh
itm_WAS_rm_distrib             WAS/itm_was_rm_distrib.sh
update_TMR_Resources           update_resources.sh
update_Resources_from_SPOKE    update_resources.sh
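To illustrate, a single task definition from A.2.3, "load_tasks.sh" on page 356 looks like this:

   wsettask -t ep_customization -l Production_TL -i linux-ix86 hubtmr \
      /mnt/code/tivoli/scripts/apps/APM/ep_customization.sh

The arguments after -i name the interpreter type, the managed node that holds the script, and the script file itself.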
Note: In a production environment, the update_TMR... tasks should be
scheduled to run periodically through the definition of a Tivoli job. Refer to the
wcrtjob and wschedjob commands for details on how to script the creation of
jobs.
6.3 Creating the deployment activity plan
At this point, our dev01 test system has all the software components required for
the Outlet Servers, and will now serve as our reference system.
The deployment plan that has to be built to distribute software packages and monitoring profiles to the endpoints in the Outlet Systems Management Solution must reference both the software packages developed in Chapter 5, "Creating profiles, packages, and tasks" on page 271 and the tasks for automated profile distribution described in 6.2, "Automating tasks" on page 333.
6.3.1 Building the reference model
To create a basic plan, we start by creating a reference model that includes all
the software packages distributed to the dev01 system. The purpose of building
the reference model with Change Manager is to use this as the starting point for
building the final activity plan that eventually will be distributed to all new
endpoints from the after_install endpoint policy.
Unfortunately, Change Manager does not provide command line access to all of the functions available. As a result, to create a reference model based on the dev01 system, we have to open the CCM GUI (using the wccmgui command from the hubtmr system) and create a new reference model from there. The steps to do this are:
1. Start the CCM GUI from the hubtmr. Invoke the wccmgui command from a
command line.
2. Log in to CCM. Use the user ID root and the normal password.
3. Create a new reference model from a target. From the toolbar select File →
Create reference model from target.
4. Select the endpoint to be the source of the operation.
   a. Set the Target Type to Endpoint.
   b. Specify the name of the target. We used dev01-ep.
   c. Check the Software box.
   d. Press OK.
Now the dev01-ep system is being inspected, and the reference model is built. At this point you can modify the model to fit your exact needs. Because the dev01-ep system has the exact set of software components we need for the Outlet Systems Management Solution, we did not make any modifications.
5. To name the reference model, select Edit → Properties, and provide your
own name for the reference model. We used Outlet_RefModel_v1.
6. Before we can build the activity plan that will implement the reference model,
one or more targets must be applied. From the toolbar select Edit →
Subscribers → Add → Static subscriber. Specify the endpoints you want to
use. Because this plan is never executed against the selected endpoint, we
used srchost-ep.
7. Save the resulting reference model by selecting File → Export, and provide a name for the *.xml file to hold the exported version of the reference model.
8. Finally, we are ready to build an activity plan based on the newly generated
Reference Model. Select Activities → Submit, and provide a name for the
activity plan to be created. Do not check the Full synchronization check box,
but continue immediately by pressing OK.
9. When the activity plan dialog box is shown, select Import activity plan and
press OK.
10. Select File → Exit to close the CCM GUI.
More details regarding the use of the Change Manager are available at the Tivoli
Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Configuration Manager → User’s Guide for Deployment
Services → Using Change Manager.
6.3.2 Completing the activity plan
Having built the basic activity plan for all software to be distributed, we are ready to assign conditions and targets, and to add additional activities that deploy monitoring profiles as part of the plan.
The logical flow we want to implement in the activity plan is depicted in
Figure 6-2.
Figure 6-2 Activity flow for Outlet Server deployment
1. Like CCM, the Activity Planner component of Tivoli Configuration Manager does not provide command line access to all functions, so we have to start the GUI in edit mode, using the wapmgui ed command from the hubtmr system.
2. The logon process is similar to that of CCM, but the user ID to be specified must be the tivapm user, which was created during installation and is described in 4.4.7, "Enabling the Activity Planner" on page 247.
3. To start working with the activity plan created from CCM, select File → Open
and select the recently imported plan.
The basic activity plan generated by CCM contains neither relationships nor
additional task activities, so we have to add these. For details on how to perform
this job, refer to the Tivoli Information Center Web site:
http://publib.boulder.ibm.com/infocenter/tiv3help/index.jsp
Navigate to Configuration Manager → User’s Guide for Deployment
Services → Using Activity Planner.
To customize the activity plan to meet the needs of the Outlet Systems
Management Solution the following tasks must be completed:
1. “Defining targets” on page 338
2. “Adding task activities” on page 338
3. “Setting conditions” on page 339
Defining targets
To ensure that the software package activities currently in the activity plan will be issued against the correct target systems, set the target property of each activity to $(TARGET_LIST). The TARGET_LIST variable is built on the fly when the plan is submitted, thus guaranteeing that the activities will be issued against the correct targets, and not the srchost-ep we used to generate the plan.
Adding task activities
To include object discovery and monitoring profile distributions, we added the following task activities, all referring to the tasks we created in 6.2, "Automating tasks" on page 333:
򐂰 Discover_WebSphere_Resources
򐂰 ITM_WAS_AppSvr_RM_Distribution
򐂰 Create_DB2_Instance_Objects
򐂰 ITM_DB2_Database_RM_Distribution
򐂰 Create_DB2_Database_Objects
򐂰 ITM_DB2_Instance_RM_Distribution
The tasks all require different input parameters. Make sure that the correct
parameters are set up in the activity properties dialog.
Tip: A variable such as $(EP) can be used to add the endpoint label
dynamically. The variable then has to be passed to the activity plan when
submitted. For example:
wsubpln -t $ep_label -f $plan_name -o -VEP=$ep_label -VMN=$TMR_NAME
Finally, to ensure that the task activities will be issued against the correct target, set the targets of all the task activities to $(MN); at execution time this will be translated to the value of the variable passed to the plan. Remember that the tasks manipulate Tivoli objects and must be executed on a managed node.
Setting conditions
To ensure that the activities are performed in accordance with our planned
expectations, depicted in Figure 6-2 on page 337, we have to add conditions to
each activity.
To set conditions for an activity using the APM GUI, right-click the activity, and
select Conditions. A conditions property dialog is displayed, in which you can
define the desired conditions for the activity to be started. Table 6-3 outlines the
conditions we defined for the APM plan for deploying the Outlet Servers.
Table 6-3 APM plan activities and conditions

Activity                                       Condition
ihs_server^2.0.47#hubtmr-region[0]             ST(db2udb_server^8.2.0#hubtmr-region[0])
mq_server^5.3.0#hubtmr-region[0]               ST(ihs_server^2.0.47#hubtmr-region[0])
mq_server^5.3.0.fp8#hubtmr-region[0]           ST(mq_server^5.3.0#hubtmr-region[0])
was_appserver^5.1.0#hubtmr-region[0]           ST(mq_server^5.3.0_fp8#hubtmr-region[0])
was_appserver^5.1.0_fp1#hubtmr-region[0]       ST(was_appserver^5.1.0#hubtmr-region[0])
db2udb_server^8.2.0#hubtmr-region[0]           (none)
timecard^5.1.0#hubtmr-region[0]                ST(was_appserver^5.1.0_fp1#hubtmr-region[0]) AND ST(db2udb_server^8.2.0#hubtmr-region[0])
wses_cachingproxy^5.1.0#hubtmr-region[0]       (none)
wses_cachingproxy^5.1.0_fp1#hubtmr-region[0]   ST(wses_cachingproxy^5.1.0#hubtmr-region[0])
tmtp_agent^5.3.0#hubtmr-region[0]              (none)
Discover_WebSphere_Resources                   SA(was_appserver^5.1.0_fp1#hubtmr-region[0])
ITM_WAS_AppSvr_RM_Distribution                 SA(Discover_WebSphere_Resources)
Create_DB2_Instance_Objects                    SA(timecard^5.1.0#hubtmr-region[0])
ITM_DB2_Database_RM_Distribution               SA(Create_DB2_Instance_Objects)
Create_DB2_Database_Objects                    SA(Create_DB2_Instance_Objects)
ITM_DB2_Instance_RM_Distribution               SA(Create_DB2_Database_Objects)
If you want to work with the activity plan we created for the Outlet Systems
Management Solution, use File → Import, and select the file
Production_Outlet_Plan_v1.0.xml provided in the online material. A listing of
this file is available in A.8.2, “Production_Outlet_Plan_v1.0.xml” on page 629. As
an alternative to the GUI-based import operation, you can use the command line
option:
wimpln -e -f <xml file>
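For the plan provided in the online material, the import command thus becomes:

   wimpln -e -f Production_Outlet_Plan_v1.0.xml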
6.4 Defining the logical structure of the Outlet Inc. environment
As we described in 3.3, "Logical Tivoli architecture for Outlet Inc." on page 49, the managed systems will be subscribed to profile managers that are organized in hierarchies that allow targeting all systems with similar attributes (such as location, architecture, or applications) in a single operation.
In addition, as we described in “Defining the logical structure of the environment”
on page 272, all master profiles are kept in the hubtmr environment to which
profile managers at the spoke TMRs will be subscribed. To allow for easy
assignment of authorities to manipulate the objects, each type of profile is kept in
a separate profile manager. Refer to “Defining the logical structure of the
environment” on page 272 for details about the master profile managers residing
in the hub- and spoketmr environments.
To provide a logical hierarchy in which to subscribe the outlet endpoints, some kind of structure must be established. This structure must allow for targeting any operation (profile distribution, software update, and so on) to groups of identical servers throughout the Outlet Inc. environment.
To allow for this, the following Policy Region structure should be created, as shown in Example 6-1.

Example 6-1 Creating the Policy Region structure

<REGION>
   <TMR>-region_<REGION>_EP
   <TMR>-region_<REGION>_PR
      <TMR>-region_<REGION>_<STORE>_PR   (one for each store)

At both the region and the store level, we will need profile managers to hold the various profiles to be distributed, with subscriptions to the similar profile manager on the higher level, as in Example 6-2.
Example 6-2 Distributing profiles to regions and stores

<[region | store]_policy_region_name>_PMS_linux-ix86
<[region | store]_policy_region_name>_PM_INV
<[region | store]_policy_region_name>_PM_TEC
<[region | store]_policy_region_name>_PM_ITM
<[region | store]_policy_region_name>_PM_SWD
In Example 6-2, <store_policy_region_name>_PMS_linux-ix86 is used to subscribe endpoints. This profile manager subscribes, in turn, to all the other store-level profile managers, such as <store_policy_region_name>_PM_INV.
To allow for central distribution of profiles, the profile managers will be arranged in resource-specific hierarchies, as shown in Example 6-3.
Example 6-3 Logical subscription hierarchy
hubtmr_region_MASTER_PM_INV
   |
   V
spoketmr_region_MASTER_PM_INV
   |
   V
spoketmr_region_<REGION>_PM_INV
   |
   V
spoketmr_region_<REGION>_<STORE>_PM_INV
   |
   V
spoketmr_region_<REGION>_<STORE>_PMS_linux-ix86
   |
   V
endpoints
Before creating the objects related to regions and stores, the TMR server to which the region will belong must be prepared. This involves creating the top-level, object-specific profile managers within the spoke TMR and subscribing these to the hubtmr profile managers created in 5.1, "Defining the logical structure of the environment" on page 272.
1. For each spoke TMR, issue the commands in Example 6-4 to create the profile managers:
Example 6-4 Creating profile managers
wcrtprfmgr <spoketmr>-region <spoketmr>-region_MASTER_PM_INV
wcrtprfmgr <spoketmr>-region <spoketmr>-region_MASTER_PM_ITM
wcrtprfmgr <spoketmr>-region <spoketmr>-region_MASTER_PM_SWD
wcrtprfmgr <spoketmr>-region <spoketmr>-region_MASTER_PM_TEC
wcrtprfmgr <spoketmr>-region <spoketmr>-region_MASTER_PMS_linux-ix86
wsetpm -d @ProfileManager:<spoketmr>-region_MASTER_PMS_linux-ix86
2. After having exchanged resources between the TMR environments using the
wupdate -r ProfileManager <spoketmr>-region command, you can
subscribe the spoketmr profile managers to the hubtmr profile managers, as
in Example 6-5.
Example 6-5 Subscribing spoketmr profile managers to hubtmr profile managers
wsub @ProfileManager:hubtmr-region_MASTER_PM_INV
@ProfileManager:<spoketmr>-region_MASTER_PM_INV
wsub @ProfileManager:hubtmr-region_MASTER_PM_ITM
@ProfileManager:<spoketmr>-region_MASTER_PM_ITM
wsub @ProfileManager:hubtmr-region_MASTER_PM_SWD
@ProfileManager:<spoketmr>-region_MASTER_PM_SWD
wsub @ProfileManager:hubtmr-region_MASTER_PM_TEC
@ProfileManager:<spoketmr>-region_MASTER_PM_TEC
Having established the top-level TMR objects, we are ready to create the region
and store-related objects. You can create the subscription hierarchy for each
store in every region manually using the wcrtpr, wcrtprfmgr, and wsub commands. However, creating this structure manually for each individual store would be error
prone and cumbersome; it needs to be scripted. A sample script to create the
structure for a region and its related stores is provided in A.2.1,
“create_logical_structure.sh” on page 351.
This script uses a plain input file containing a single store name on each line. Observe that the store files are also used by the allow_login policy (see "Allow_login policy" on page 344), so they should not be deleted after use. New stores can be added to a region by simply adding a new line to the file. Examples of store lists can be found in A.2.2, "Store lists" on page 355.
To create the structure for the NORTH region, which includes the stores N1, N2, and N3, simply create a file named NORTH.lst with the following content:
N1
N2
N3
Invoke the create_logical_structure.sh script from the TMR server to which the region will belong, with the following parameters:
./create_logical_structure.sh NORTH linux-ix86 /tmp/NORTH.lst
Note: The file containing the list of stores must be available to both the hub
TMR Server and the Spoke TMR server to which the region belongs.
Therefore you should save it in a shared file system that can be accessed
from both systems.
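One simple way to satisfy this is an NFS mount. As a sketch, assuming the srchost system exports the /mnt tree (both the exporting host and the path are assumptions for this environment):

   # On each TMR server that needs access to the store lists
   mkdir -p /mnt
   mount -t nfs srchost:/mnt /mnt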
6.5 Creating endpoint policies
The login policies are defined in the TMR environment and are used, among other purposes, to automatically apply controls and customization related to endpoint systems during the login process, when an endpoint connects to the Tivoli Management Environment. When implementing the login policies, the Tivoli Framework differentiates between the initial (first time) login of previously unknown endpoints and subsequent logins, when the endpoint system is already known to the TMR.
All in all, there are four types of login policies:
allow_login      Verifies whether an unknown endpoint is allowed to log in to the TMR.
select_gateway   Helps control which gateway each new endpoint logs in to.
login            Applies checks and automated functions, such as subscriptions, each time an endpoint logs in.
after_login      Applies checks and automated functions after an endpoint has performed its first successful login.
The login policies must be maintained in both the hubtmr and all spoketmr
environments. In order to facilitate this, it is good practice to place the policy
scripts in a shared file system accessible from all the TMR servers. However,
while this helps with version control across TMRs, it does not automatically ensure that the same policies are active on all TMRs at any one point in time.
The policy scripts need to be imported into each TMR by means of the wputeppol
command.
Naturally, this task can be scripted to achieve the desired level of automation. In A.3.1, "put.sh" on page 357 and A.3.2, "get.sh" on page 358, you will find sample scripts used to import and export endpoint policies.
6.5.1 Allow_login policy
In the Outlet Systems Management Solution, we decided to let the hub TMR act as an intercepting gateway for all endpoints. As a result, the hub TMR is responsible for collecting all the initial login requests, satisfying those coming from endpoints that will be part of its own region, and redirecting the others to the proper spoke TMR.
The allow_login policy is used to accomplish the first initial step: verifying whether or not an endpoint is eligible for logging in. In the sample allow_login policy provided with this book, this determination is performed by checking if the store component of the endpoint label is present in the store_lists files and the gateway.lst file.
For the Outlet Systems Management Solution, the allow_login policy has been configured to determine whether the endpoint connecting to the TMR server is allowed to log in. The store_lists files are consulted in order to verify the relationship based on the endpoint's label.
In addition, a final check verifies that the first three characters of the endpoint label are assigned to a gateway in the gateway.lst file, which is used to control the assignment of gateways to specific regions and stores.
The allow_login policy script is shown in A.3.3, “allow_policy” on page 358. The
control files are provided in A.2.2, “Store lists” on page 355 and A.3.5,
“gateway.lst” on page 364.
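These checks rely on simple shell string handling. As a sketch, this is how the policy scripts carve the pieces out of an endpoint label (the cut commands are taken from the policy scripts in A.3; the label nyc01_NORTH_N2-ep is a made-up example that follows the naming standard in 6.6, "Installing endpoints" on page 345):

   ep_label=nyc01_NORTH_N2-ep
   EP_ID=`echo $ep_label | cut -c 1-3`                       # -> nyc
   REGION=`echo $ep_label | cut -d "_" -f2`                  # -> NORTH
   STORE=`echo $ep_label | cut -d "_" -f3 | cut -d "-" -f1`  # -> N2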
6.5.2 Select_gateway policy
When the endpoint passes the allow_login policy, the select_gateway policy runs. The gateway.lst file is consulted again to identify a gateway for the endpoint and return the proper Tivoli object ID. The result of the assignment is redirected to a log file.
For a listing of the select_gateway policy developed for the Outlet Systems
Management Solution, refer to A.3.4, “select_policy” on page 361.
6.5.3 Login policy
The login policy developed for the Outlet Systems Management Solution, shown in A.3.6, "login_policy" on page 364, only logs the endpoint label to a file in order to track that the endpoint passed through this script when logging in.
6.5.4 After_login policy
The after_login policy script is executed once, after the initial login process. For the Outlet Systems Management Solution, this policy handles subscriptions, profile distributions, and installation of software packages to new endpoints.
In summary, the after_login policy developed for the Outlet Systems Management Solution performs the following:
1. Subscribing the endpoint to the correct profile managers using the sub_ep.sh script.
2. Distributing hardware and software Inventory scanning profiles, as well as the basic (OS and HW) ITM monitoring profile, through execution of the ep_login_notif.sh script.
3. Executing the run_ep_customization_task.sh script to run the ep_customization task, which submits the activity plan to install software, discover and create WAS and DB2 monitoring objects, and distribute the remaining monitoring profiles.
Note: The reason for implementing the ep_customization as a task is to
provide asynchronous execution of the majority of the deployment
operations. This will offload the TMR Server and allow it to process logins
from other endpoints.
A complete listing of the after_login policy and related scripts is available in A.3.7, "after_policy" on page 365 ff.
6.6 Installing endpoints
To be able to manage the servers in the Outlet Solution, the Tivoli Management
Agent, otherwise known as the endpoint, needs to be installed on all systems.
The existence of the endpoint on an Outlet server is the bootstrap for enabling
the entire installation and management process.
To facilitate automation, it is necessary to define and enforce a strict naming
standard for the endpoints. The naming standard chosen for endpoint labels for
the Outlet Systems Management Solution is:
<client>_<region>_<store>-ep
This will enable the policy scripts to subscribe the endpoints to the correct profile
managers based on several factors, but primarily the geographical hierarchy.
Note: Based on the needs of the organization, naming standards and
requirements for subscription policies should be considered carefully before
setting up a production environment.
For UNIX-based systems, the endpoint installation is always initiated from the command line of the TMR Server using some sort of remote access mechanism, such as rexec or ssh. In this section, we assume that UnitedLinux has been installed on the Outlet Servers and that ssh has been enabled in accordance with the description in 4.2.2, "Operating platform preparation" on page 142.
The command used to install an endpoint is winstlcf; for ssh, the parameters are as shown:
winstlcf -j -g <hubtmr>:9494 -l 9495 -n <hostname>_<region>_<store>-ep -r \
   <tmr>-<region>-<store>_EP -Y '<hostname> root <password>'
The winstlcf command shown installs the endpoint on the target system and assigns the hubtmr system as the intercepting gateway for the initial login. Authentication to execute commands on the target system initiated from the hub TMR is gained through ssh (the -j parameter), using the root user and the valid password for that user.
To provide ease of use and consistency, the winstlcf command should be scripted in any production environment, and perhaps even built into a Tivoli task; a sketch of such a wrapper follows. However, this does not change the fact that endpoint installation is a manual process that must be executed by the Tivoli Administrators whenever a new system is going to be included in the environment.
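A minimal sketch of such a wrapper, built directly on the winstlcf invocation above (the argument handling, and the practice of passing the root password on the command line, are assumptions to be hardened before production use):

   #!/bin/sh
   # install_ep.sh - hypothetical winstlcf wrapper
   # Usage: ./install_ep.sh <hostname> <region> <store> <root_password>
   HOST=$1; REGION=$2; STORE=$3; PASSWD=$4
   winstlcf -j -g <hubtmr>:9494 -l 9495 \
      -n ${HOST}_${REGION}_${STORE}-ep \
      -r <tmr>-${REGION}-${STORE}_EP \
      -Y "$HOST root $PASSWD"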
Once the endpoint is installed, it will attempt to log in through the intercepting gateway specified on the winstlcf command. At that point, the endpoint policy scripts take control.
Part 4
Appendixes
These appendixes cover the following topics:
򐂰 Appendix A, “Configuration files and scripts” on page 349
򐂰 Appendix B, “Obtaining the installation images” on page 645
򐂰 Appendix C, “Additional material” on page 655
Appendix A. Configuration files and scripts
All files referenced in this Appendix are available online for download. See
Appendix C, “Additional material” on page 655 for details on how to obtain your
copy.
A.1 Additional material contents
The additional material is a zipped file that, when unpacked, provides all the files described here. The files are organized in the following directory structure:
򐂰 <unpack_dir>/code
Parent directory for installable code, put your downloaded images in this
directory
򐂰 <unpack_dir>/code/tivoli/scripts
Scripts developed specifically for the Outlet Systems Management Solution
򐂰 <unpack_dir>/rsp
Parent directory for response files
򐂰 <unpack_dir>/spd/
Parent directory for software package definition files
򐂰 <unpack_dir>/tools
Parent directory for Outlet Systems Management Solution specific scripts and
control files used by software packages
Once you have downloaded the file, unpack the online material to a directory of
your choice using gunzip. Upon unpacking, you should create a series of
symbolic links that reference the unpack directory to the /mnt directory, as in
Example A-1.
Example: A-1 Links to /mnt directory
if [ ! -d /mnt ]; then mkdir /mnt; fi
ln -s /<unpack_dir>/code /mnt/code
ln -s /<unpack_dir>/rsp /mnt/rsp
ln -s /<unpack_dir>/spd /mnt/spd
ln -s /<unpack_dir>/tools /mnt/tools
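For reference, the unpack step itself can be as simple as the following sketch (the archive name and target directory are assumptions; see Appendix C for the actual file name):

   mkdir -p /opt/outlet
   cd /opt/outlet
   gunzip -c /tmp/<additional_material>.tar.gz | tar xvf -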
Important: When unpacking the online material using Windows tools, configure them to avoid automatic CRLF conversion. Failure to do this will make the files unreadable from a UNIX system after they have been sent through binary FTP.
A.2 TMR setup and maintenance-related files
These files are used to perform initial setup of the TMR environment to facilitate
automated distribution of software and monitoring profiles to new endpoints. The
endpoint policies described in A.3, “Policies and related tasks and scripts” on
page 357 rely on the successful execution of the scripts in this section.
In the unpacked online material, all of these files can be found in the /mnt/code/tivoli/scripts directory.
A.2.1 create_logical_structure.sh
The script in Example A-2 is used to create the logical structure of policy regions and profile managers for the Outlet Systems Management Solution.
Example: A-2 create_logical_structure.sh
#!/bin/sh
#set -x
#########################################################################
#
# Date: November 2004
#------------------------------------------------------------------------
# DISCLAIMER: This script is provided AS IS
#             No official support will be provided from IBM support
#------------------------------------------------------------------------
# USAGE:   ./create_logical_structure.sh GEO INTERP STORE_LIST_FILE
# Example: ./create_logical_structure.sh REGION01 linux-ix86 /tmp/REGION01_STORES.lst
#
# GEO        = The geographical region to be created (e.g. CENTRAL)
# INTERP     = The interpreter type for default ITM, SWD and Inventory
#              profiles to create.
# STORE_LIST = Full path to a file containing a list of stores to create.
#              This file should contain a list of stores, one on each
#              line. Lines beginning with a # will be ignored.
#              These files will also be used by the login policies
#              to identify the appropriate gateway for each endpoint.
#              Please DO NOT DELETE these files after running the script.
#              To add additional stores, add the new store to the existing
#              file for that region and comment out the stores that were
#              created previously.
#
#########################################################################
GEO=$1
INTERP=$2
STORE_LIST=$3

# Check for the correct command line options
if [ $# -ne 3 ]; then
   echo " "
   echo "This script will read an input file containing a list of stores"
   echo "and create the logical policy region and profile structures"
   echo " "
   echo "Usage:   ./create_logical_structure.sh GEO INTERP STORE_LIST_FILE"
   echo "Example: ./create_logical_structure.sh REGION01 linux-ix86 /tmp/REGION01_STORES.lst"
   echo " "
   exit 1
fi

# Check for the store list file specified by the user
if [ ! -f "$STORE_LIST" ]; then
   echo " "
   echo "Unable to locate specified input file: $STORE_LIST, aborting."
   echo "Please specify a valid file name including the full path."
   echo "e.g. /tmp/store_file.lst"
   echo " "
   exit 2
fi

#------------------------------------------------------------------------
# Setup Tivoli Environment
#------------------------------------------------------------------------
echo "Setting Tivoli Environment"
. /etc/Tivoli/setup_env.sh > /dev/null

#------------------------------------------------------------------------
# Setup Script Variables
#------------------------------------------------------------------------
echo " "
echo "*************************************************"
echo "      You may wish to run a database             "
echo "      backup before running this script.         "
echo "*************************************************"
echo " "
echo "Do you wish to create the Policy Region, Profile Manager, Query Library and Task Library structures? (y/n) (Default 'n'):"
read FRW

DESTID=`wlookup -ar ManagedNode | grep \.348 | grep -v $TMR | awk '{print $2}'`
SRCID=`wlookup ServerManagedNode`
ADMIN=`wls -l /Administrators | grep $TMR | grep Root'_' | awk '{print $2}'`
TMRPR=`echo $ADMIN | cut -d '_' -f2`

#------------------------------------------------------------------------
# Creating Policy Regions, Profile Managers and Profiles
#------------------------------------------------------------------------
#if [ $FRW = "y" ]; then
if [ "${FRW:-n}" = "y" ]; then
   echo "*************************************************"
   echo "       Creating Policy Region structure          "
   echo "*************************************************"
   wlookup -ar PolicyRegion | grep "$TMRPR" > /dev/null
   RC=$?
   if [ $RC -ne 0 ]; then
      wcrtpr -a $ADMIN $TMRPR
   else
      echo "Policy Region $TMRPR exists, don't need to create it"
   fi
   wlookup -ar PolicyRegion | grep $TMRPR'_'EP > /dev/null
   RC=$?
   if [ $RC -ne 0 ]; then
      wcrtpr -s /Regions/$TMRPR -m Endpoint $TMRPR'_'EP
   else
      echo "Policy Region ${TMRPR}_EP exists, don't need to create it"
   fi

   # Create the Main PolicyRegion per each GEO
   wcrtpr -s /Regions/$TMRPR $GEO
   # Create a PolicyRegion for Endpoints for each GEO
   wcrtpr -s /Regions/$TMRPR/$GEO -m Endpoint $TMRPR'_'$GEO'_'EP
   # Create a PolicyRegion for Profiles for each GEO
   wcrtpr -s /Regions/$TMRPR/$GEO $TMRPR'_'$GEO'_'PR

   # From this point forward, cycle through the list of stores and create the objects
   echo "*************************************************"
   echo " Link Monitoring for DB2 PR to $ADMIN Desktop    "
   echo "*************************************************"
   wln /Library/PolicyRegion/"Monitoring for DB2" /Administrators/$ADMIN
   wcrtpr -s /Regions/"Monitoring for DB2" "DB2 Database Servers"

   echo "*************************************************************"
   echo " Cycling through $STORE_LIST and creating associated objects "
   echo "*************************************************************"
   echo " "

   # Check the STORE_LIST file for data and abort if empty
   LINE_COUNT=`wc -l $STORE_LIST | cut -c1-8`
   if [ $LINE_COUNT = 0 ]; then
      echo "Input file $STORE_LIST is empty."
      echo "Exiting..."
      exit 3
   fi

   grep "^[^#]" $STORE_LIST | {
   while read STORE; do
      echo "Creating objects for ${GEO}_${STORE}_${INTERP}"
      wcrtpr -s @PolicyRegion:$TMRPR'_'$GEO'_'PR -m ACP -m InventoryConfig -m \
         ProfileManager -m QueryLibrary -m SoftwarePackage -m TaskLibrary -m \
         Tmw2kProfile $TMRPR'_'$GEO'_'$STORE'_'PR

      echo "*************************************************"
      echo "           Creating Profile Managers             "
      echo "*************************************************"
      wcrtprfmgr @$TMRPR'_'$GEO'_'$STORE'_'PR $TMRPR'_'$GEO'_'$STORE'_'PMS'_'$INTERP
      wsetpm -d @ProfileManager:$TMRPR'_'$GEO'_'$STORE'_'PMS'_'$INTERP
      wcrtprfmgr @$TMRPR'_'$GEO'_'$STORE'_'PR $TMRPR'_'$GEO'_'$STORE'_'PM'_'INV
      wcrtprfmgr @$TMRPR'_'$GEO'_'$STORE'_'PR $TMRPR'_'$GEO'_'$STORE'_'PM'_'TEC
      wcrtprfmgr @$TMRPR'_'$GEO'_'$STORE'_'PR $TMRPR'_'$GEO'_'$STORE'_'PM'_'ITM
      wcrtprfmgr @$TMRPR'_'$GEO'_'$STORE'_'PR $TMRPR'_'$GEO'_'$STORE'_'PM'_'SWD

      echo "*************************************************"
      echo "             Creating Task Library               "
      echo "*************************************************"
      wcrttlib $TMRPR'_'$GEO'_'$STORE'_'TL $TMRPR'_'$GEO'_'$STORE'_'PR

      echo "*************************************************"
      echo "             Creating Query Library              "
      echo "*************************************************"
      wcrtqlib $TMRPR'_'$GEO'_'$STORE'_'PR $TMRPR'_'$GEO'_'$STORE'_'QL

      #--------------------------------------------------------------------
      # Subscriptions to PMS...
      #--------------------------------------------------------------------
      echo "*************************************************"
      echo "       Subscribing PMS to Profile Managers       "
      echo "*************************************************"
      wsub @ProfileManager:$TMRPR'_'$GEO'_'$STORE'_'PM'_'INV \
         @ProfileManager:$TMRPR'_'$GEO'_'$STORE'_'PMS'_'$INTERP
      wsub @ProfileManager:$TMRPR'_'$GEO'_'$STORE'_'PM'_'TEC \
         @ProfileManager:$TMRPR'_'$GEO'_'$STORE'_'PMS'_'$INTERP
      wsub @ProfileManager:$TMRPR'_'$GEO'_'$STORE'_'PM'_'ITM \
         @ProfileManager:$TMRPR'_'$GEO'_'$STORE'_'PMS'_'$INTERP
      wsub @ProfileManager:$TMRPR'_'$GEO'_'$STORE'_'PM'_'SWD \
         @ProfileManager:$TMRPR'_'$GEO'_'$STORE'_'PMS'_'$INTERP
   done
   }
else
   echo "Exiting..."
fi
exit 0
A.2.2 Store lists
The sample store list files in the /mnt/code/tivoli/scripts/store_lists directory
control the creation of profile managers and policy regions that represent the
physical infrastructure of Outlet Inc. in the Outlet Systems Management Solution.
Each file represents the stores within a region.
In addition, these files are also used by the endpoint policies to determine if the
endpoint should be allowed to login and identify the appropriate gateway.
Do not delete the store list files after creating the logical structures. To add more
stores in the same region, simply add the store to the store list file for that region,
and comment out the stores that have already been created (using a #).
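For example, a region file that has been through one run and is about to receive one new store might look like this (the store names are illustrative):

   # created in the first run
   #N1
   #N2
   N4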
Example A-3, Example A-4 on page 356, and Example A-5 on page 356 are
examples of store list files.
general_stores.lst
Example: A-3 general_stores.lst
STORE
maine_stores.lst
Example: A-4 maine_stores.lst
2531
ohio_stores.lst
Example: A-5 ohio_stores.lst
0000
0001
A.2.3 load_tasks.sh
The script in Example A-6 is used to create and populate the Production_TL task library, used primarily to deploy monitoring profiles to new endpoints. The tasks created within the Production_TL task library reference the scripts described in the subsequent sections.
Example: A-6 load_tasks.sh
wsettask -t Update_TMR_Resources -l Production_TL -i linux-ix86 hubtmr \
   /mnt/code/tivoli/scripts/update_resources.sh
wsettask -t Update_Resources_from_SPOKE -l Production_TL -i linux-ix86 hubtmr \
   /mnt/code/tivoli/scripts/update_resources.sh
wsettask -t ep_customization -l Production_TL -i linux-ix86 hubtmr \
   /mnt/code/tivoli/scripts/apps/APM/ep_customization.sh
wsettask -t itm_WAS_rm_distrib -l Production_TL -i linux-ix86 hubtmr \
   /mnt/code/tivoli/scripts/apps/WAS/itm_was_rm_distrib.sh
wsettask -t Create_DB2_instance_objects -l Production_TL -i linux-ix86 hubtmr \
   /mnt/code/tivoli/scripts/apps/DB2/create_db2_instance_objects.sh
wsettask -t Create_DB2_database_objects -l Production_TL -i linux-ix86 hubtmr \
   /mnt/code/tivoli/scripts/apps/DB2/create_db2_database_objects.sh
wsettask -t itm_DB2_database_rm_distrib -l Production_TL -i linux-ix86 hubtmr \
   /mnt/code/tivoli/scripts/apps/DB2/itm_db2_database_rm_distrib.sh
wsettask -t itm_DB2_instance_rm_distrib -l Production_TL -i linux-ix86 hubtmr \
   /mnt/code/tivoli/scripts/apps/DB2/itm_db2_instance_rm_distrib.sh
The load_tasks.sh script creates the tasks listed in Table 6-2 on page 335, which are referenced from the endpoint policies.
A.2.4 update_resources.sh
The script in Example A-7 is used to update resources between TMRs, and is referenced from the Update_TMR_Resources and Update_Resources_from_SPOKE tasks in the Production_TL Task Library. It is suggested to schedule this script to execute automatically on all spoke TMRs, for example every 120 minutes, to automatically inform the hubtmr about changes to the infrastructure.
Example: A-7 update_resources.sh
#!/bin/sh
wupdate -r All All
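To schedule it, a crontab entry on each spoke TMR server along the following lines could be used; this is a sketch, where the two-hour interval follows the suggestion above and the log file location is an assumption:

   # Single crontab line: source the Tivoli environment, then exchange resources
   0 */2 * * * . /etc/Tivoli/setup_env.sh && /mnt/code/tivoli/scripts/update_resources.sh >> /tmp/update_resources.log 2>&1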
A.3 Policies and related tasks and scripts
The files in this section implement the automated deployment of software
packages and monitoring profiles to new Outlet Servers upon the initial login,
using the capabilities of the endpoint policies in the Tivoli Management
Framework.
The scripts and supporting files that implement the endpoint policies can be
found in the /mnt/code/tivoli/scripts/policies directory after unpacking the
online material.
As a prerequisite for using these scripts, the logical structure and the
Production_TL task library must have been created, using the files provided in
A.2, “TMR setup and maintenance-related files” on page 351.
A.3.1 put.sh
Example A-8 is a support script used to easily load the endpoint policies into the
TMR.
Example: A-8 put.sh
echo "Installing allow policy..."
wputeppol allow < allow_policy
echo "Installing login policy..."
wputeppol login < login_policy
echo "Installing after policy..."
wputeppol after < after_policy
echo "Installing select policy..."
wputeppol select < select_policy
echo "Updating epmgr..."
wepmgr update
echo "Done!"
A.3.2 get.sh
Example A-9 is a support script used to easily unload the endpoint policies from
a TMR.
Example: A-9 get.sh
wgeteppol allow > allow_policy_tmr
wgeteppol login > login_policy_tmr
wgeteppol after > after_policy_tmr
wgeteppol select > select_policy_tmr
A.3.3 allow_policy
The script in Example A-10 implements the allow_login endpoint policy, used to verify whether a new endpoint is authorized to connect to the TMR. This implementation of the allow_login policy only allows endpoints to proceed with the login process if their host name is matched in the gateway.lst file (see A.3.5, "gateway.lst" on page 364). In addition, the store list files (see "general_stores.lst" on page 355) are consulted to determine the correct gateway.
Example: A-10 allow_policy
#!/bin/sh
# -------------------------------------------------------------------------
# Description: Allow policy
#   1) Verifies that a duplicate Endpoint does not exist:
#      - duplicate ---> exit 6
#   2) Allows the Endpoint to log in only if its branch gateway exists
#
# Input: requires the file "/Tivoli_$HOST/comune/gateway.lst", which
#        contains the list of all the gateways of the local TMR
#
# Exit: exit 0 = gateway found
#       exit 6 = gateway not found, or duplicate endpoint
#
# Version 1.0 09/12/2002 by Fabrizio Salustri, Giuseppe Grammatico,
# Enzo Randazzo
# -------------------------------------------------------------------------
#
# Please do not remove the below Tivoli comments
# --- Start of Tivoli comments ---
#
# The following are the command line arguments passed to this script
# from the Endpoint Manager.
#
# $1 - The label of the endpoint machine
# $2 - The object reference of the endpoint machine
# $3 - The architecture type of the endpoint machine
# $4 - The object reference of the gateway that the endpoint logged into
# $5 - The ip address of the endpoint logging in.
# $6 - region
# $7 - dispatcher
# $8 - version
# $9 - The inventory id of the endpoint logging in.
#
# The following command line argument will be passed to this script
# from the Endpoint Manager, when compiled with the MULTIPROTO flag turned on
#
# $10 - The protocol of the endpoint logging in.
#       TCPIP -> TCP/IP
#       IPX   -> IPX/SPX
#
# The normal exit code of 0 from the allow_install_policy will allow the
# endpoint's initial login to proceed. (If the label of this endpoint is
# in use, though, this login won't complete.)
#
# An exit code of 10 also will allow this login to proceed and, if this
# endpoint's label matches the label of an existing endpoint, a unique label
# will be created for this endpoint.
#
# An exit code of 6 will cause this login to be ignored.
#
# Exiting the allow_install_policy with any other non-zero exit status will
# stop this endpoint's initial (or orphaned) login.
#
# The environment variable LCF_LOGIN_STATUS is also set by the epmgr.
# A value of 2 indicates the endpoint is isolated. That is, it was unable
# to contact its assigned gateway. Isolated endpoints are automatically
# migrated to another gateway unless the select_gateway_policy terminates
# with a non-zero exit status. Other LCF_LOGIN_STATUS values are:
# 0 Initial login   (allow_install_policy, select_gateway_policy, after_install_policy)
# 2 Isolated login  (select_gateway_policy)
# 3 Migratory login (select_gateway_policy)
# 7 Orphaned login  (allow_install_policy, select_gateway_policy, after_install_policy)
#
# The allow_install_policy will have these environment variables set if
# there is already an existing endpoint with the same label as the endpoint
# which is attempting to login:
#   LCF_DUPL_OBJECT   object id of existing endpoint
#   LCF_DUPL_ADDRESS  network address of existing endpoint
#   LCF_DUPL_LOGIN    timestamp of existing endpoint's first normal login
#   LCF_DUPL_GATEWAY  object id of existing endpoint's gateway
#   LCF_DUPL_INV_ID   inventory id of existing endpoint
#   LCF_DUPL_INTERP   interp (architecture type) of existing endpoint
#
# The initial login will fail for an endpoint whose label matches the label
# of an existing endpoint, unless allow_install_policy is exited with code 10.
#
# Also note that during the execution of allow_install and select_gateway
# policy scripts, the endpoint does not yet formally exist. For this reason,
# the endpoint object reference will have a value of OBJECT_NIL and the
# object dispatcher number will be 0. The endpoint label will have the value
# suggested by the endpoint (or the user value lcfd -n) but is not guaranteed
# to become the final endpoint label. It will become the final endpoint label
# if this value is not already taken by another endpoint.
# --- End of Tivoli comments ---

ep_label=$1
POL_DIR="/mnt/code/tivoli/scripts/policies"
LISTGW="$POL_DIR/gateway.lst"
LOG_FILE="$DBDIR/../policies/logs/allow_install.log"
STORE_FILE_DIR="$POL_DIR/../store_lists"

#
# Check if the Endpoint already exists
#   - if it exists, then exit code 6 + log
#
if [ "$LCF_DUPL_OBJECT" != "" ]
then
   data=`date +"%d/%m/%Y %H:%M:%S"`
   echo "$data Endpoint: $ep_label already exists. Exit_Code 6" >>$LOG_FILE
   echo "$data Additional Information follows:" >>$LOG_FILE
   echo "$data LCF_DUPL_OBJECT: $LCF_DUPL_OBJECT" >>$LOG_FILE
   echo "$data LCF_DUPL_ADDRESS: $LCF_DUPL_ADDRESS" >>$LOG_FILE
   echo "$data LCF_DUPL_LOGIN: $LCF_DUPL_LOGIN" >>$LOG_FILE
   echo "$data LCF_DUPL_GATEWAY: $LCF_DUPL_GATEWAY" >>$LOG_FILE
   echo "$data LCF_DUPL_INV_ID: $LCF_DUPL_INV_ID \n" >>$LOG_FILE
   # Duplicate endpoint: refuse the login, as stated in the header comments
   exit 6
fi

#
# Extract the EP_ID and the store name from the endpoint label
#
EP_ID=`echo $ep_label | cut -c 1-3`
STORE=`echo $ep_label | cut -d "_" -f3 | cut -d "-" -f1`

#
# Find the store in the store list files to ensure the provided region and
# store are valid, and search the gateway.lst for the assigned gateway
#
grep $STORE $STORE_FILE_DIR/*.lst
if [ $? -eq 0 ] ; then
   echo "Login allowed for $ep_label" >>$LOG_FILE
else
   GATEWAY=$(grep $EP_ID $LISTGW)
   if [ $? -gt 0 ]; then
      date=`date "+%d/%m/%Y %H:%M:%S"`
      echo $date "Endpoint does not follow naming convention and was not found in $LISTGW. Login not allowed for $ep_label." >>$LOG_FILE
      exit 1
   else
      echo "Endpoint does not follow naming convention, but found in $LISTGW. Login allowed for $ep_label." >>$LOG_FILE
   fi
fi
A.3.4 select_policy
Example A-11 implements the select_gateway endpoint policy, and is used to determine which gateway to assign to an endpoint. The gateway.lst file (see A.3.5, "gateway.lst" on page 364) holds the control information needed to assign the correct gateway.
Example: A-11 select_policy
#!/bin/sh
#
# Select login policy
#
# Looks up the branch id in the LISTGW file, created with the command
# "wlookup -ar Gateway"
#   - found:     the endpoint is attached to this gateway
#   - not found: login refused
#
# Version 1.0 09/12/2002 by Fabrizio Salustri, Giuseppe Grammatico,
# Enzo Randazzo
# -------------------------------------------------------------------------
# Please do not remove the below Tivoli comments
# --- Start of Tivoli comments ---
#
# The following are the command line arguments passed to this script
# from the Endpoint Manager.
#
# $1 - The label of the endpoint machine
# $2 - The object reference of the endpoint machine
# $3 - The architecture type of the endpoint machine
# $4 - The object reference of the gateway that the endpoint logged into
# $5 - The ip/ipx address of the endpoint logging in (refer to parameter
#      $10 to determine the protocol of the endpoint).
# $6 - region
# $7 - dispatcher
# $8 - version
# $9 - The inventory id of the endpoint logging in.
#
# The following command line argument will be passed to this script
# from the Endpoint Manager, when compiled with the MULTIPROTO flag turned on
#
# $10 - The protocol of the endpoint logging in.
#       TCPIP -> TCP/IP
#       IPX   -> IPX/SPX
#
# The environment variable LCF_LOGIN_STATUS is also set by the epmgr.
# A value of 2 indicates the endpoint is isolated. That is, it was unable
# to contact its assigned gateway. Isolated endpoints are automatically
# migrated to another gateway unless the select_gateway_policy terminates
# with a non-zero exit status. Other LCF_LOGIN_STATUS values are:
# 0 Initial login   (allow_install_policy, select_gateway_policy, after_install_policy)
# 2 Isolated login  (select_gateway_policy)
# 3 Migratory login (select_gateway_policy)
# 7 Orphaned login  (allow_install_policy, select_gateway_policy, after_install_policy)
#
# Also note that during the execution of allow_install and select_gateway
# policy scripts, the endpoint does not yet formally exist. For this reason,
# the endpoint object reference will have a value of OBJECT_NIL and the
# object dispatcher number will be 0. The endpoint label will have the value
# suggested by the endpoint (or the user value lcfd -n) but is not guaranteed
# to become the final endpoint label. It will become the final endpoint label
# if this value is not already taken by another endpoint.
# --- End of Tivoli comments ---

ep_label=$1
POL_DIR="/mnt/code/tivoli/scripts/policies"
LISTGW="$POL_DIR/gateway.lst"
LOG_FILE="$DBDIR/../policies/logs/select_gateway.log"
# You must change STORE_FILE_DIR to point to the directory containing your store files
STORE_FILE_DIR="$POL_DIR/../store_lists"

#
# Select the first three characters of the endpoint label to identify
# the type of endpoint
#
EP_ID=`echo $ep_label | cut -c 1-3`
REGION=`echo $ep_label | cut -d "_" -f2`
STORE=`echo $ep_label | cut -d "_" -f3 | cut -d "-" -f1`

#
# Find the store in the store list files to ensure the provided region and
# store are valid, and search the gateway.lst for the assigned gateway
#
grep $STORE $STORE_FILE_DIR/*.lst
if [ $? -eq 0 ] ; then
   GATEWAY=$(grep $STORE $LISTGW)
   if [ $? -eq 0 ]; then
      GATEWAY_OID=$(echo $GATEWAY | cut -d'#' -f1 | awk '{print $2}')
      GATEWAY_NAME=$(echo $GATEWAY | awk '{print $1}')
      echo "Gateway for $ep_label is $GATEWAY_NAME" >>$LOG_FILE
   else
      date=`date "+%d/%m/%Y %H:%M:%S"`
      echo $date "Gateway not found for Endpoint $ep_label" >>$LOG_FILE
      exit 6
   fi
else
   GATEWAY=$(grep $EP_ID $LISTGW)
   if [ $? -eq 0 ]; then
      GATEWAY_OID=$(echo $GATEWAY | cut -d'#' -f1 | awk '{print $2}')
      GATEWAY_NAME=$(echo $GATEWAY | awk '{print $1}')
      echo "Endpoint does not follow naming convention, but found in $LISTGW" >>$LOG_FILE
      echo "Gateway for $ep_label is $GATEWAY_NAME" >>$LOG_FILE
   else
      date=`date "+%d/%m/%Y %H:%M:%S"`
      echo $date "Gateway not found for Endpoint $ep_label" >>$LOG_FILE
      exit 7
   fi
fi
echo $GATEWAY_OID
A.3.5 gateway.lst
The file in Example A-12 is used to correlate stores and regions with their related gateways.
Example: A-12 gateway.lst
## TMR Gateways
hubtmr-gw 1393424439.1.594#TMF_Gateway::Gateway#
spoketmr-gw 1282790711.1.1146#TMF_Gateway::Gateway# tmtreg

## Regional Gateways
#TEXAS01-gw $TMR.x.xx#TMF_Gateway::Gateway#
#OHIO01-gw $TMR.x.xx#TMF_Gateway::Gateway#
region01-gw 1282790711.2.23#TMF_Gateway::Gateway# stadev

## Store Gateways
#TEXAS_1234-gw $TMR.x.xx#TMF_Gateway::Gateway#
#OHIO_4321-gw $TMR.x.xx#TMF_Gateway::Gateway#
region_store-gw 1282790711.3.23#TMF_Gateway::Gateway# STORE

## Assumptions:
##
## 1 - Endpoint labels must be set at machine installation time to the
##     following format: <hostname>_<region>_<store>-ep
## 2 - Each store has a unique number (e.g. 1234 does not appear in other regions)
A.3.6 login_policy
The endpoint policy in Example A-13 is used for all endpoint logins apart from the initial login performed by new endpoints. The following implementation only logs the login, but customized processing can be applied.
Example: A-13 login_policy
#!/bin/sh
# -------------------------------------------------------------------------
# Description: Login script
# - a file named $ep_label is created in the $LOG_FILE directory on the
#   gateway. Its last modification date/time is used to determine when
#   the endpoint performed its last normal login
#
# Version 1.0 09/12/2002 by Fabrizio Salustri, Giuseppe Grammatico, Enzo
# Randazzo
# -------------------------------------------------------------------------
#
# Please do not remove the below Tivoli comments
# --- Start of Tivoli comments ---
#
# The following are the command line arguments passed to this script
# from the Gateway.
#
# $1 - The label of the endpoint machine
# $2 - The object reference of the endpoint machine
# $3 - The architecture type of the endpoint machine
# $4 - The object reference of the gateway that the endpoint logged into
# $5 - The ip/ipx address of the endpoint logging in (refer to parameter
#      $9 to determine the protocol of the endpoint).
# $6 - region
# $7 - dispatcher
# $8 - version
#
# The following command line argument will be passed to this script
# from the Endpoint Manager, after the MULTIPROTO flag will be turned on
#
# $9 - The protocol of the endpoint logging in.
#      TCPIP -> TCP/IP
#      IPX   -> IPX/SPX
#
# --- End of Tivoli comments ---
ep_label=$1
LOG_FILE="$DBDIR/../policies/logs/login_policy"
touch "$LOG_FILE/$ep_label"
exit 0
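Because the policy only touches one file per endpoint, the time of each endpoint's last normal login can be read back from the file timestamps. A minimal check of this kind, run on the gateway, might look as follows:
#!/bin/sh
# List the login touch files, newest first; the modification time of each
# file is the time of that endpoint's last normal login
ls -lt $DBDIR/../policies/logs/login_policy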
A.3.7 after_policy
This implementation of the after_login endpoint policy in Example A-14 on page 366
is used to perform initial configuration, including subscribing the endpoint to the
profile managers in the logical structure, submitting an initial inventory scan, and
finally starting a task that activates the APM plan for the particular endpoint.
Example: A-14 after_policy
#!/bin/sh
# Please do not remove the below Tivoli comments
# --- Start of Tivoli comments ---
#
# The following are the command line arguments passed to this script
# from the Endpoint Manager.
#
# $1 - The label of the endpoint machine
# $2 - The object reference of the endpoint machine
# $3 - The architecture type of the endpoint machine
# $4 - The object reference of the gateway that the endpoint logged into
# $5 - The ip/ipx address of the endpoint logging in (refer to parameter
#      $10 to determine the protocol of the endpoint).
# $6 - region
# $7 - dispatcher
# $8 - version
# $9 - The unique id (inventory id) of the endpoint logging in.
# $10 - The protocol of the endpoint logging in.
#       TCPIP -> TCP/IP
#       IPX   -> IPX/SPX
#
# The environment variable LCF_LOGIN_STATUS is also set by the epmgr.
# A value of 2 indicates the endpoint is isolated. That is, it was unable
# to contact its assigned gateway. Isolated endpoints are automatically
# migrated to another gateway unless the select_gateway_policy terminates
# with a non-zero exit status. Other LCF_LOGIN_STATUS values are:
# 0 Initial login   (allow_install_policy, select_gateway_policy,
#                    after_install_policy)
# 2 Isolated login  (select_gateway_policy)
# 3 Migratory login (select_gateway_policy)
# 7 Orphaned login  (allow_install_policy, select_gateway_policy,
#                    after_install_policy)
#
# Also note that during the execution of allow_install and select_gateway
# policy scripts, the endpoint does not yet formally exist. For this reason,
# the endpoint object reference will have a value of OBJECT_NIL and the
# object dispatcher number will be 0. The endpoint label will have the value
# suggested by the endpoint (or the user value lcfd -n) but is not guaranteed
# to become the final endpoint label. It will become the final endpoint label
# if this value is not already taken by another endpoint.
# --- End of Tivoli comments ---
ep_label=$1
INTERP=$3
POL_DIR="/mnt/code/tivoli/scripts/policies"
SCRIPTS_LOC="/mnt/code/tivoli/scripts"
LOG_FILE="$DBDIR/../policies/logs/after_login_policy.log"
echo "Checking INTERP" >> $LOG_FILE
echo $INTERP >> $LOG_FILE
echo "Calling sub_ep.sh script" >> $LOG_FILE
$POL_DIR/sub_ep.sh $ep_label $LOG_FILE
echo "Calling ep_login_notif.sh script" >> $LOG_FILE
$POL_DIR/ep_login_notif.sh $ep_label $INTERP
#echo "Calling ep_customization.sh script" >> $LOG_FILE
#$SCRIPTS_LOC/apps/APM/ep_customization.sh $ep_label
echo "Calling ep_customization task" >> $LOG_FILE
$POL_DIR/run_ep_customization_task.sh $ep_label >> $LOG_FILE
echo "After script execution completed... Exiting" >> $LOG_FILE
exit 0
A.3.8 sub_ep.sh
This script in Example A-15 is called from the after_login policy to perform initial
subscription of new endpoints.
Example: A-15 sub_ep.sh
#!/bin/sh
ep_label=$1
LOG_FILE=$2
REGION=`echo $ep_label | cut -d "_" -f2`
STORE=`echo $ep_label | cut -d "_" -f3 | cut -d "-" -f1`
policy_region=$(wlookup -ar PolicyRegion|grep $REGION'_'EP|awk '{print $1}')
profile_manager=$(wlookup -ar ProfileManager|grep $STORE'_'PMS|awk '{print $1}')
wupdate -r Endpoint All
# Link the endpoint to the appropriate policy region
wln @Endpoint:$ep_label @PolicyRegion:$policy_region
if [ $? -eq 0 ]; then
   echo "Endpoint $ep_label linked to Policy Region $policy_region successfully." >> $LOG_FILE
   wpostemsg -m "Endpoint $ep_label completed initial login and successfully linked to Policy Region $policy_region." -r HARMLESS EVENT EVENT
else
   echo "Endpoint $ep_label failed to link to Policy Region $policy_region." >> $LOG_FILE
   exit 5
fi
# Subscribe the endpoint to the appropriate profile manager
wsub -r @$profile_manager @Endpoint:$ep_label
if [ $? -eq 0 ]; then
   echo "Endpoint $ep_label subscribed to Profile Manager $profile_manager successfully." >> $LOG_FILE
else
   echo "Endpoint $ep_label failed to subscribe to Profile Manager $profile_manager." >> $LOG_FILE
   echo "Either the profile manager or the endpoint is unreachable." >> $LOG_FILE
   exit 6
fi
A.3.9 ep_login_notif.sh
This script in Example A-16 is used by the after_login policy to initiate the
initial hardware and software inventory scans for new endpoints and to
distribute the base ITM profile to them.
Example: A-16 ep_login_notif.sh
#!/bin/sh
set -x
ep_label=$1
INTERP=$2
POL_DIR="/mnt/code/tivoli/scripts/policies"
JRE_DIR="$POL_DIR/../../ITM/5.1.2/base/generic/Tools/JRE"
LOG_FILE="$DBDIR/../policies/logs/wdistinv.log"
REGION=`echo $ep_label | cut -d "_" -f2`
STORE=`echo $ep_label | cut -d "_" -f3 | cut -d "-" -f1`
policy_region=$(wlookup -ar PolicyRegion|grep $REGION'_'EP|awk '{print $1}')
TMR_NAME=$(idlattr -tgv `wlookup ServerManagedNode` label string |sed -e "s/\"//g")
# Check if the EP is running...
echo "Sleeping for 5 minutes waiting for the EP to complete the first normal login, please wait..." >> $LOG_FILE
sleep 300
echo "Sleep complete!" >> $LOG_FILE
echo "Entering Inventory section..." >> $LOG_FILE
wupdate -r Endpoint All >> $LOG_FILE
echo "wupdate complete!" >> $LOG_FILE
echo "Checking if the EP is alive..." >> $LOG_FILE
wep $ep_label status >> $LOG_FILE
RC=$?
if [ $RC -eq 0 ] ; then
   # Distribute the Hardware inventory profile
   echo "$ep_label" >> $LOG_FILE
   wdistinv @InventoryConfig:HW @Endpoint:$ep_label >> $LOG_FILE
   EC=$?
   if [ $EC -eq 0 ] ; then
      wpostemsg -m "Endpoint $ep_label started HW scan" -r HARMLESS EVENT EVENT
   else
      inventory_config=$(wlookup -ar InventoryConfig|grep "HW"|grep $TMR|grep $TMR_NAME|awk '{print $1}')
      wdistinv @InventoryConfig:$inventory_config @Endpoint:$ep_label >> $LOG_FILE
      RC=$?
      if [ $RC -eq 0 ] ; then
         wpostemsg -m "Endpoint $ep_label started HW scan" -r HARMLESS EVENT EVENT
      else
         echo "Failed to submit the HW inventory scan." >> $LOG_FILE
         exit 1
      fi
   fi
   # Distribute the Software inventory profile
   echo "$ep_label" >> $LOG_FILE
   wdistinv @InventoryConfig:SW @Endpoint:$ep_label >> $LOG_FILE
   EC=$?
   if [ $EC -eq 0 ] ; then
      wpostemsg -m "Endpoint $ep_label started SW scan" -r HARMLESS EVENT EVENT
   else
      inventory_config=$(wlookup -ar InventoryConfig|grep "SW"|grep $TMR|grep $TMR_NAME|awk '{print $1}')
      wdistinv @InventoryConfig:$inventory_config @Endpoint:$ep_label >> $LOG_FILE
      RC=$?
      if [ $RC -eq 0 ] ; then
         wpostemsg -m "Endpoint $ep_label started SW scan" -r HARMLESS EVENT EVENT
      else
         echo "Failed to submit the SW inventory scan." >> $LOG_FILE
         exit 2
      fi
   fi
else
   wpostemsg -m "Endpoint $ep_label completed initial login to its gateway but it is not running, Inventory HW and SW distributions not started" -r CRITICAL EVENT EVENT
   exit 3
fi
echo "Sleeping 5 minutes waiting for inventory scans to complete" >> $LOG_FILE
sleep 300
echo "Sleep complete!" >> $LOG_FILE
# Check if the EP is running...
echo "Entering ITM section..." >> $LOG_FILE
wep $ep_label status >> $LOG_FILE
RC=$?
if [ $RC -eq 0 ] ; then
   # Distribute the base ITM profile including the JRE
   ITM_PRF_NAME=`wlookup -ar Tmw2kProfile |grep $STORE | grep $INTERP| awk '{print $1}'`
   wdmdistrib -p $ITM_PRF_NAME -J $JRE_DIR -l -e -w -i @Endpoint:$ep_label >> $LOG_FILE
   RC=$?
   if [ $RC -eq 0 ] ; then
      wpostemsg -m "Endpoint $ep_label started ITM Profile distribution named $ITM_PRF_NAME" -r HARMLESS EVENT EVENT
   else
      echo "ITM Profile Distribution named $ITM_PRF_NAME failed for endpoint $ep_label" >> $LOG_FILE
      exit 4
   fi
else
   wpostemsg -m "Endpoint $ep_label completed initial login to its gateway but it is not running, ITM profile distribution not started" -r CRITICAL EVENT EVENT
   exit 5
fi
A.3.10 run_ep_customization_task.sh
This script in Example A-17 on page 371 is called from the after_login policy to
start an asynchronous task that deploys the APM plan, which is responsible for
software installation and profile deployment for the new endpoint.
Example: A-17 run_ep_customization_task.sh
#!/bin/sh
ep_label=$1
LOG_FILE=/tmp/ep_customization_task.out
wruntask -t ep_customization -l Production_TL -h @ManagedNode:hubtmr -a $ep_label -m 600 >> $LOG_FILE
A.4 DB2 monitoring deployment-related scripts
These files are used to automate the creation and deployment of DB2-related
monitoring resources. All the files can be found in the
/mnt/tivoli/scripts/apps/DB2 directory. They are all referenced from the script
used to create the Production_TL task library described in A.2.3, “load_tasks.sh”
on page 356.
A.4.1 create_db2_instance_objects.sh
This script in Example A-18 creates DB2InstanceManager objects related to a
new endpoint.
In the APM plan for deployment, a task using this script is executed immediately
after installation of the DB2 Server software package.
Example: A-18 create_db2_instance_objects.sh
#!/bin/sh
# This script creates an IBM Tivoli Monitoring for Databases for DB2 instance
# resource. We started from these assumptions:
#
# instance name is the default one installed with DB2, "db2inst1"
# We created the "DB2 Database Servers" Policy Region on both the TMR hub and
# the TMR spoke(s)
ep_label=$1
SCRIPTS_LOC="/mnt/code/tivoli/scripts"
LOG_FILE="$SCRIPTS_LOC/apps/DB2/logs/create_db2_instance_objects.log"
wcdb2inst -l "db2inst1@$ep_label" \
          -e $ep_label \
          -i db2inst1 \
          -p "DB2 Database Servers" >> $LOG_FILE
EC=$?
if [ $EC -eq 0 ] ; then
   echo "Instance object for EP $ep_label successfully created" >> $LOG_FILE
else
   TMR_NAME=$(idlattr -tgv `wlookup ServerManagedNode` label string |sed -e "s/\"//g")
   policy_region=`wlookup -ar PolicyRegion |grep $TMR|grep $TMR_NAME| grep "DB2 Database Servers"| awk '{print $1 " " $2 " " $3}'`
   wcdb2inst -l "db2inst1@$ep_label" \
             -e $ep_label \
             -i db2inst1 \
             -p "$policy_region" >> $LOG_FILE
   RC=$?
   if [ $RC -eq 0 ]; then
      echo "Instance object for EP $ep_label successfully created" >> $LOG_FILE
   else
      echo "Failed to create Instance object for EP $ep_label" >> $LOG_FILE
      exit 1
   fi
fi
A.4.2 create_db2_database_objects.sh
This script in Example A-19 creates DB2DatabaseManager objects related to a
new endpoint.
In the APM plan for deployment, a task using this script is executed immediately
after installation of the DB2 Server software package.
Example: A-19 create_db2_database_objects.sh
#!/bin/sh
# This script creates an IBM Tivoli Monitoring for Databases for DB2 database
# resource. We started from these assumptions:
#
# instance name is the default one installed with DB2, "db2inst1"
# database name is "STORE", and each machine that will be installed as EP must
# have this DB created under the "db2inst1" instance. The STORE database is
# used by the TimeCard application
# We created the "DB2 Database Servers" Policy Region on both the TMR hub and
# the TMR spoke(s)
ep_label=$1
ep=$(echo $ep_label|cut -d "-" -f1)
SCRIPTS_LOC="/mnt/code/tivoli/scripts"
LOG_FILE="$SCRIPTS_LOC/apps/DB2/logs/create_db2_database_objects.log"
TMR_NAME=$(idlattr -tgv `wlookup ServerManagedNode` label string |sed -e "s/\"//g")
wcdb2dbs -l "store@db2inst1@$ep" \
         -i "db2inst1@$ep_label" \
         -d STORE \
         -p "DB2 Database Servers" >> $LOG_FILE
EC=$?
if [ $EC -eq 0 ] ; then
   echo "DB object for EP $ep_label successfully created" >> $LOG_FILE
else
   PR=$(wlookup -ar PolicyRegion|grep "DB2 Database Servers#$TMR_NAME-region"|grep $TMR|grep $TMR_NAME|awk '{print $1 " " $2 " " $3}')
   wcdb2dbs -l "store@db2inst1@$ep" \
            -i "db2inst1@$ep_label" \
            -d STORE \
            -p "$PR" >> $LOG_FILE
   RC=$?
   if [ $RC -eq 0 ]; then
      echo "DB object for EP $ep_label successfully created" >> $LOG_FILE
   else
      echo "Failed to create DB object for EP $ep_label" >> $LOG_FILE
      exit 1
   fi
fi
A.4.3 itm_db2_instance_rm_distrib.sh
This script in Example A-20 is used to distribute DB2 instance-related
monitoring profiles to the newly created endpoints.
In the APM plan for deployment, a task using this script is executed immediately
after installation of the DB2 Server software package.
Example: A-20 itm_db2_instance_rm_distrib.sh
#!/bin/sh
# This script is used by a Tivoli Task to distribute the ITM for Databases for
# DB2 Resource Model(s) to the InstanceManager resource discovered on an EP
# We created and customized a Tmw2kProfile to include specific RMs.
# Before the ITM RM distribution, the script subscribes the InstanceManager
# resource to the ProfileManager containing the Tmw2kProfile to distribute
#
# We hard coded the Profile Manager name to "DB2Manager-Instances"
ep_label=$1
ep=$(echo $ep_label | cut -d "-" -f1)
SCRIPTS_LOC="/mnt/code/tivoli/scripts"
LOG_FILE="$SCRIPTS_LOC/apps/DB2/logs/itm_db2_instance_rm_distrib.log"
# Setting variables for instance object(s)
INST_PM="DB2Manager-Instances"
INST_RM="DB2_instances"
INST_SUB=$(wlookup -ar DB2InstanceManager|grep db2inst1|grep $ep_label|awk '{print $1}')
# Subscribing DB2 resources to their ProfileManagers
wsub -r @ProfileManager:$INST_PM @DB2InstanceManager:$INST_SUB
EC=$?
if [ $EC -eq 0 ] ; then
   echo "Instance object for EP $ep_label successfully subscribed to $INST_PM"
else
   TMR_NAME=$(idlattr -tgv `wlookup ServerManagedNode` label string |sed -e "s/\"//g")
   prof_mgr=`wlookup -ar ProfileManager |grep $TMR|grep $TMR_NAME| grep "DB2Manager-Instances"| awk '{print $1}'`
   wsub -r @ProfileManager:$prof_mgr @DB2InstanceManager:$INST_SUB
   RC=$?
   if [ $RC -eq 0 ]; then
      echo "Instance object for EP $ep_label successfully subscribed to $prof_mgr" >> $LOG_FILE
   else
      echo "Failed to subscribe Instance object to $prof_mgr for EP $ep_label" >> $LOG_FILE
      exit 1
   fi
fi
# Distributing ITM DB2 RMs to the specified resources
wdmdistrib -p $INST_RM -e -w -i @DB2InstanceManager:$INST_SUB >> $LOG_FILE
A.4.4 itm_db2_database_rm_distrib.sh
This script in Example A-21 distributes monitoring profiles for DB2 databases to
newly created endpoints.
In the APM plan for deployment, a task using this script is executed immediately
after installation of the DB2 Server software package.
Example: A-21 itm_db2_database_rm_distrib.sh
#!/bin/sh
# This script is used by a Tivoli Task to distribute the ITM for Databases for
# DB2 Resource Model(s) to the InstanceManager resource discovered on an EP
# We created and customized a Tmw2kProfile to include specific RMs.
# Before the ITM RM distribution, the script subscribes the InstanceManager
# resource to the ProfileManager containing the Tmw2kProfile to distribute
#
# We hard coded the Profile Manager name to "DB2Manager-Instances"
ep_label=$1
ep=$(echo $ep_label | cut -d "-" -f1)
SCRIPTS_LOC="/mnt/code/tivoli/scripts"
LOG_FILE="$SCRIPTS_LOC/apps/DB2/logs/itm_db2_instance_rm_distrib.log"
# Setting variables for instance object(s)
INST_PM="DB2Manager-Instances"
INST_RM="DB2_instances"
INST_SUB=$(wlookup -ar DB2InstanceManager|grep db2inst1|grep $ep_label|awk '{print $1}')
# Subscribing DB2 resources to their ProfileManagers
wsub -r @ProfileManager:$INST_PM @DB2InstanceManager:$INST_SUB
EC=$?
if [ $EC -eq 0 ] ; then
   echo "Instance object for EP $ep_label successfully subscribed to $INST_PM"
else
   TMR_NAME=$(idlattr -tgv `wlookup ServerManagedNode` label string |sed -e "s/\"//g")
   prof_mgr=`wlookup -ar ProfileManager |grep $TMR|grep $TMR_NAME| grep "DB2Manager-Instances"| awk '{print $1}'`
   wsub -r @ProfileManager:$prof_mgr @DB2InstanceManager:$INST_SUB
   RC=$?
   if [ $RC -eq 0 ]; then
      echo "Instance object for EP $ep_label successfully subscribed to $prof_mgr" >> $LOG_FILE
   else
      echo "Failed to subscribe Instance object to $prof_mgr for EP $ep_label" >> $LOG_FILE
      exit 1
   fi
fi
# Distributing ITM DB2 RMs to the specified resources
wdmdistrib -p $INST_RM -e -w -i @DB2InstanceManager:$INST_SUB >> $LOG_FILE
A.5 WebSphere Application Server monitoring
deployment files
The files described in this section are used to automate the creation and
deployment of WebSphere Application Server monitoring resources. All the files
can be found in the /mnt/tivoli/scripts/apps/WAS directory.
The scripts described in the following sections are all referenced from the script
used to create the Production_TL task library described in A.2.3, “load_tasks.sh”
on page 356.
A.5.1 create_app_server.sh
This script in Example A-22 creates IWSApplicationServer objects related to a
new endpoint.
In the APM plan for deployment, a task using this script is executed immediately
after installation of the WebSphere Application Server software package.
Example: A-22 create_app_server.sh
#!/bin/sh
#
# Usage: ./create_app_server.sh <serverName> <cellName> <nodeName> <hostname> <ep_label>
if [ $# -ne 5 ] ; then
   echo "Usage: ./create_app_server.sh <serverName> <cellName> <nodeName> <hostname> <ep_label>"
   exit 1
fi
SERVER=$1
CELL=$2
NODE=$3
HOSTNAME=$4
EP_LABEL=$5
wwebsphere -CreateAppServer \
           -serverName $SERVER \
           -cellName $CELL \
           -nodeName $NODE \
           -version 5.1 \
           -rootDirectory /opt/IBM/WebSphere/AppServer \
           -hostname $HOSTNAME \
           -endpoint $EP_LABEL \
           -policyRegion "WebSphere Application Servers"
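For illustration, a hypothetical invocation could look as follows; all five values are placeholders rather than values taken from the solution:
# Hypothetical invocation of create_app_server.sh; all arguments are placeholders
./create_app_server.sh server1 store-cell store-node pos1.store.local pos1_TEXAS_1234-ep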
A.5.2 itm_was_rm_distrib.sh
This script in Example A-23 is used to distribute WebSphere Application Server
monitoring profiles to the newly created endpoints.
In the APM plan for deployment, a task using this script is executed immediately
after installation of the WebSphere Application Server software package.
Example: A-23 itm_was_rm_distrib.sh
#!/bin/sh
# This script is used by a Tivoli Task to distribute the ITM for
# WebInfrastructure WAS Resource Model(s) to the IWSApplicationServer resource
# discovered on an EP
# We created and customized a Tmw2kProfile to include specific RMs.
# Before the ITM RM distribution, the script subscribes the
# IWSApplicationServer resource to the ProfileManager containing the
# Tmw2kProfile to distribute
#
# We used the default Profile Manager created during the product installation,
# called "WebSphere Application Servers"
ep_label=$1
ep=$(echo $ep_label | cut -d "-" -f1)
SCRIPTS_LOC="/mnt/code/tivoli/scripts"
LOG_FILE="$SCRIPTS_LOC/apps/WAS/logs/was_rm_distrib.log"
TMR_NAME=$(idlattr -tgv `wlookup ServerManagedNode` label string |sed -e "s/\"//g")
# Setting variable for AppServer objects
WAS_PM=$(wlookup -ar ProfileManager|grep "WebSphere Application Servers#$TMR_NAME-region"|grep $TMR|grep $TMR_NAME|awk '{print $1 " " $2 " " $3}')
WAS_RM=$(wlookup -ar Tmw2kProfile|grep "WebSphere Application Servers Profile"|grep $TMR|grep $TMR_NAME|awk '{print $1 " " $2 " " $3 " " $4}')
WAS_SUB=$(wlookup -ar IWSApplicationServer|grep $ep|grep AppSvr|awk '{print $1}')
# Subscribing AppServer resource to its ProfileManagers
wsub -r @ProfileManager:"$WAS_PM" @IWSApplicationServer:$WAS_SUB
RC=$?
if [ $RC -eq 0 ] ; then
   echo "$WAS_SUB subscribed to Profile Manager $WAS_PM"
else
   WAS_PM="WebSphere Application Servers"
   WAS_RM="WebSphere Application Servers Profile"
   wsub -r @ProfileManager:"$WAS_PM" @IWSApplicationServer:$WAS_SUB
   echo "$WAS_SUB subscribed to Profile Manager $WAS_PM"
fi
# Distributing ITM WAS RM to the specified resource
wdmdistrib -p "$WAS_RM" -e -w -i @IWSApplicationServer:$WAS_SUB >> $LOG_FILE
A.5.3 was_configure_tec_adapter.sh
This script in Example A-24 is used to configure the TEC adapter executing on
an IWSApplicationServer object in the Outlet Systems Management Solution.
Example: A-24 was_configure_tec_adapter.sh
#!/bin/sh
# Usage: ./was_configure_tec_adapter.sh IWSApplicationServer
if [ $# -ne 1 ] ; then
   echo "Usage: ./was_configure_tec_adapter.sh IWSApplicationServer"
   echo "Example: ./was_configure_tec_adapter.sh AppSvr@<nodeName>@<cellName>@<serverName>"
   exit 1
fi
SERVER=$1
wruntask -t Configure_WebSphere_TEC_Adapter \
         -l "WebSphere Event Tasks" \
         -h @IWSApplicationServer:$SERVER \
         -a "" -a EventServer \
         -a Y \
         -a Y \
         -a Y \
         -a N \
         -a N \
         -m 300
A.5.4 was_start_tec_adapter.sh
This script in Example A-25 is used to start the TEC adapter executing on an
IWSApplicationServer object.
Example: A-25 was_start_tec_adapter.sh
#!/bin/sh
# Usage: ./was_start_tec_adapter.sh IWSApplicationServer
if [ $# -ne 1 ] ; then
   echo "Usage: ./was_start_tec_adapter.sh IWSApplicationServer"
   echo "Example: ./was_start_tec_adapter.sh AppSvr@<nodeName>@<cellName>@<serverName>"
   exit 1
fi
SERVER=$1
wruntask -t Start_WebSphere_TEC_Adapter \
         -l "WebSphere Event Tasks" \
         -h @IWSApplicationServer:$SERVER \
         -m 300
A.5.5 was_stop_tec_adapter.sh
This script in Example A-26 is used to stop the TEC adapter executing on an
IWSApplicationServer object.
Example: A-26 was_stop_tec_adapter.sh
#!/bin/sh
# Usage: ./was_stop_tec_adapter.sh IWSApplicationServer
if [ $# -ne 1 ] ; then
   echo "Usage: ./was_stop_tec_adapter.sh IWSApplicationServer"
   echo "Example: ./was_stop_tec_adapter.sh AppSvr@<nodeName>@<cellName>@<serverName>"
   exit 1
fi
SERVER=$1
wruntask -t Stop_WebSphere_TEC_Adapter \
         -l "WebSphere Event Tasks" \
         -h @IWSApplicationServer:$SERVER \
         -m 300
A.6 TMTP monitoring deployment scripts
The files described in this section are used to automate the creation and
deployment of TMTP-related monitoring resources. All the files can be found in
the /mnt/tivoli/scripts/apps/TMTP directory. They are all referenced from the
script used to create the Production_TL task library described in A.2.3,
"load_tasks.sh" on page 356.
A.6.1 addtoagentgroup.sh
The following script in Example A-27 is used to add a new TMTP agent to an
existing agent group. This script must be executed on a system where the TMTP
command line interface has been installed. In the Outlet Systems Management
Solution, the hubtmr system is used.
In the APM plan for deployment, a task using this script is executed immediately
after installation of the TMTP Agent software package.
Example: A-27 addtoagentgroup.sh
#!/bin/sh
# Usage: ./addtoagentgroup.sh <agentGroup> <agentName>
GROUPNAME=$1
AGENTNAME=$2
./tmtpcli.sh -AddToAgentGroup $GROUPNAME -Agents $AGENTNAME -Console
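A hypothetical invocation, with placeholder group and agent names, would be:
# Hypothetical invocation; agent group and agent name are placeholders
./addtoagentgroup.sh StoreAgents pos1_TEXAS_1234-ep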
A.6.2 deployj2ee.sh
The following script in Example A-28 is used to deploy the J2EE monitoring
component to a TMTP Agent. This script must be executed on a system where
the TMTP command line interface has been installed. In the Outlet Systems
Management Solution, the hubtmr system is used.
In the APM plan for deployment, a task using this script is executed immediately
after installation of the TMTP Agent software package.
Example: A-28 deployj2ee.sh
#!/bin/sh
# Usage: ./deployj2ee.sh <agentName> <WASHome> <ServerName> <nodeName> <security_enabled>
AGENTNAME=$1
WASHOME=$2
SERVERNAME=$3
NODENAME=$4
SECURITY=$5
# ADMINUSERNAME and ADMINPASSWORD are expected to be set in the environment
# when deploying to a WebSphere cell with security enabled
if [ $SECURITY != "TRUE" ]; then
   ./tmtpcli.sh -DeployJ2ee -AgentName $AGENTNAME -ServerType WebSphere \
      -Version 5.0 -ServerHome $WASHOME -ServerName $SERVERNAME \
      -NodeName $NODENAME -AutoRestart -Console
else
   ./tmtpcli.sh -DeployJ2ee -AgentName $AGENTNAME -ServerType WebSphere \
      -Version 5.0 -ServerHome $WASHOME -ServerName $SERVERNAME \
      -NodeName $NODENAME -AutoRestart -IsSecure \
      -AdminUserName $ADMINUSERNAME -AdminPassword $ADMINPASSWORD -Console
fi
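A hypothetical invocation for a server without WebSphere security enabled, with all values being placeholders, could be:
# Hypothetical invocation; agent, WAS home, server, and node are placeholders
./deployj2ee.sh pos1_TEXAS_1234-ep /opt/IBM/WebSphere/AppServer server1 store-node FALSE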
A.7 Software Packages and related files
The following files constitute everything needed, except for code images, to
implement software packages that deploy and remove the components used in
the Outlet Systems Management Solution.
In the unpacked online material, the files can be found in the following
directories:
- Software Packages
  /mnt/spd/<vendor>/<product>/<version>/<component>/<platform>
- Response files
  /mnt/rsp/<vendor>/<product>/<version>/<component>/<platform>
- Scripts and related files
  /mnt/tools/<vendor>/<product>/<version>/<component>/<platform>
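For example, using the component path given for IBM HTTP Server v2.0.47 in A.7.2 (ibm/ihs/2047/all/Linux-IX86), the three locations resolve as follows:
# Locations for the IBM HTTP Server v2.0.47 package, following the scheme above
SPD_DIR=/mnt/spd/ibm/ihs/2047/all/Linux-IX86      # software package definition
RSP_DIR=/mnt/rsp/ibm/ihs/2047/all/Linux-IX86      # response file
TOOLS_DIR=/mnt/tools/ibm/ihs/2047/all/Linux-IX86  # scripts and related files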
A.7.1 Common scripts
These common scripts are used by the software packages to set up specific
environments.
In the online material, all these files are located in the following path relative to
the parent directory of the type of file: /common/UNIX.
do_it
This script in Example A-29 is used to invoke any executable that is started by
a software package. This allows for control of the logging and easier debugging.
Example: A-29 do_it
#!/bin/sh
doit () {
   echo "Executing command: " $@
   $@
   rc=$?
   echo "-- result was: "$rc
   return $rc;
}
echo $0 "received parms:" $@
## do the install
doit $@
exit $?
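As an illustration, wrapping a command prints the command line and its return code, which is what ends up in the distribution logs; the tar file path below is a placeholder:
# Hypothetical invocation; the tar file path is a placeholder
./do_it /bin/tar -xvf /swdist/ihs_2047/ihs_2047.tar -C /swdist/ihs_2047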
java141_it
The script in Example A-30 is used to set up a JRE 1.4.1 environment for
installation procedures that require Java 2.
Example: A-30 java141_it
#!/bin/sh
doit () {
   echo "Executing command: " $@
   $@
   rc=$?
   echo "-- result was: "$rc
   return $rc;
}
## parm 1 is the fully qualified path to Java 1.4 home directory
##        (/opt/IBMJava2-141)
## parm 2 is the fully qualified path to the setup.jar
##        (/swdist/ihs_2047/IHS-2.0.47.1)
## parm 3 is the fully qualified path to the response file
##        (/swdist/ihs_2047/ihs_2047.rsp)
## parm 4 is the installation path (/opt/IBMIHS)
echo $0 "received parms:" $@
javahome=$1
javaroot=$javahome
javabin=$javahome/bin
export JAVA_HOME=$javahome
export JAVA_ROOT=$javaroot
export JAVA_BINDIR=$javabin
export PATH=$JAVA_HOME:$JAVA_BINDIR:$PATH
shift
## do the install
# doit "$JAVA_BINDIR/java -jar $2/setup.jar -silent -options $3"
doit "$JAVA_BINDIR/java " $@
exit $?
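The software package in Example A-31 calls this wrapper with the Java home followed by the actual java arguments. A stand-alone equivalent, using the paths from the parameter comments above, would be:
# Invocation matching the "Silent Install" step of the IHS software package
./java141_it /opt/IBMJava2-141 -jar /swdist/ihs_2047/IHS-2.0.47.1/setup.jar \
   -silent -options /swdist/ihs_2047/ihs_2047.rsp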
A.7.2 IBM HTTP Server v2.0.47
The following files are used to create the software package for IBM HTTP Server
v2.0.47.
In the online material, all these files are located in the following path relative to
the parent directory of the type of file: /ibm/ihs/2047/all/Linux-IX86.
ihs_server.2.0.47.spd
Example A-31 is the software package definition for IBM HTTP Server v2.0.47.
Example: A-31 Software package for IBM HTTP Server v2 installation
"TIVOLI Software Package v4.2 - SPDF"
package
name = t_ihs_server
title = "IBM HTTP Server v2.0.47 for Linux"
version = 2.0.47
web_view_mode = hidden
undoable = o
committable = o
history_reset = y
save_default_variables = n
creation_time = "2004-11-18 22:56:18"
last_modification_time = "2004-11-18 23:04:20"
default_variables
work_dir = $(root_dir)/$(prod_name)
prod_name = ihs_2047
log_dir = $(root_dir)/log/$(prod_name)
img_dir = $(work_dir)/IHS-2.0.47.1
bin_dir = $(root_dir)/bin/$(os_family)
java_home = /opt/IBMJava2-141
test = false
root_dir = /swdist
inst_dir = /opt/IBMIHS
home_path = /mnt
prod_path = ibm/ihs/2047/all/$(os_name)-$(os_architecture)
src_base = $(home_path)/code
rsp_base = $(home_path)/rsp
tool_base = $(home_path)/tools
tools_dir = $(tool_base)/$(prod_path)
svrlog_dir= $(home_path)/logs/$(prod_path)
end
log_object_list
location = $(log_dir)
unix_user_id = 0
unix_group_id = 0
unix_attributes = rwx,rx,
end
source_host_name = srchost
move_removing_host = y
no_check_source_host = y
lenient_distribution = y
default_operation = install
server_mode = all
operation_mode = preferably_not_transactional,preferably_undoable,force
log_path = /mnt/logs/ihs_server_2.0.47.log
post_notice = y
before_as_uid = 0
skip_non_zero = n
after_as_uid = 0
no_chk_on_rm = y
log_host_name = srchost
versioning_type = none
package_type = refresh
stop_on_failure = y
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = common
translate = n
destination = $(root_dir)/bin
descend_dirs = y
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
generic_container
caption = "on install"
stop_on_failure = y
condition = "$(operation_name) == install"
check_disk_space
volume = /,500M
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = n
replace_if_newer = n
remove_if_modified = n
location = $(rsp_base)
name = $(prod_path)
translate = n
destination = $(root_dir)/$(prod_name)
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
add_file
replace_if_existing = y
replace_if_newer = y
remove_if_modified = n
name = ihs_server_2047.rsp
translate = n
destination = $(prod_name).rsp
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
end
execute_user_program
caption = unpack
condition = "$(test) == false OR $(test) == first"
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "/bin/tar -xvf $(work_dir)/$(prod_name).tar -C $(work_dir)"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_unpack.out
error_file = $(log_dir)/$(prod_name)_unpack.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(src_base)/$(prod_path)/HTTPServer.linux.2047.tar
translate = n
destination = $(work_dir)/$(prod_name).tar
compression_method = stored
rename_if_locked = n
end
end
end
end
execute_user_program
caption = "Silent Install"
transactional = n
during_install
path = $(bin_dir)/java141_it
arguments = "$(java_home) -jar $(img_dir)/setup.jar -silent -options $(work_dir)/$(prod_name).rsp"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_install.out
error_file = $(log_dir)/$(prod_name)_install.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = n
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
execute_user_program
caption = "remove inst-directories"
condition = "NOT ( $(test) == true OR $(test) == first)"
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "rm -r $(img_dir)"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_cleanup.out
error_file = $(log_dir)/$(prod_name)_cleanup.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
execute_user_program
caption = "Start server"
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "$(inst_dir)/bin/apachectl start"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_start.out
error_file = $(log_dir)/$(prod_name)_start.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
end
generic_container
caption = "on remove"
stop_on_failure = y
condition = "$(operation_name) == remove"
execute_user_program
caption = "stop server"
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = "$(inst_dir)/bin/apachectl stop"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_stop.out
error_file = $(log_dir)/$(prod_name)_stop.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
execute_user_program
caption = "silent remove"
transactional = n
during_remove
path = $(bin_dir)/java141_it
arguments = "$(java_home) -jar $(inst_dir)/_uninst/uninstall.jar -silent"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_uninstall.out
error_file = $(log_dir)/$(prod_name)_uninstall.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
execute_user_program
caption = "remove directories"
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = "rm -r $(inst_dir)"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove.out
error_file = $(log_dir)/$(prod_name)_remove.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
end
end
ihs_server_2047.rsp
Example A-32 is the response file controlling the customization of IBM HTTP
Server v2.0.47.
Example: A-32 Response file for IHS 2.0.47 installation
####################################################################################
# Response file for IHS installation on UNIX platforms.
#
# selectLocale.lang   - {en,fr,de,it,ja,ko,pt_BR,zh,es, or zh_TW}
# ihs.installLocation - The install location for IHS
# doc.active          - IHS Documentation (defaults to true)
# security.active     - IHS SSL Security (defaults to true)
#
####################################################################################
-W selectLocale.lang=en
-P ihs.installLocation=/opt/IBMIHS
#-P doc.active=false
#-P security.active=false
ihs_2047_linuxi386_install.sh
Example A-33 on page 393 is the script for installing IBM HTTP Server v2.0.47.
Example: A-33 IHS 2.0.47 installation script
#!/bin/sh
doit () {
   echo "Executing command: " $1
   $1
   rc=$?
   echo "-- result was: "$rc
   return $rc;
}
## parm 1 is the fully qualified path to Java 1.4 home directory
##        (/opt/IBMJava2-141)
## parm 2 is the fully qualified path to the setup.jar
##        (/swdist/ihs_2047/IHS-2.0.47.1)
## parm 3 is the fully qualified path to the response file
##        (/swdist/ihs_2047/ihs_2047.rsp)
## parm 4 is the installation path (/opt/IBMIHS)
echo $0 "received parms:" $@
javahome=$1
javaroot=$javahome
javabin=$javahome/bin
export JAVA_HOME=$javahome
export JAVA_ROOT=$javaroot
export JAVA_BINDIR=$javabin
export PATH=$JAVA_HOME:$JAVA_BINDIR:$PATH
## do the install
doit "$JAVA_BINDIR/java -jar $2/setup.jar -silent -options $3"
## start the apache server
doit "$4/bin/apachectl start"
## remove image directory
doit "rm -r $2"
exit $?
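Using the example locations from the parameter comments, a hypothetical invocation would be:
# Hypothetical invocation, using the example locations from the comments above
./ihs_2047_linuxi386_install.sh /opt/IBMJava2-141 /swdist/ihs_2047/IHS-2.0.47.1 \
   /swdist/ihs_2047/ihs_2047.rsp /opt/IBMIHS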
ihs_2047_linuxi386_uninstall.sh
Example A-34 on page 394 uninstalls IBM HTTP Server v2.0.47.
Example: A-34 IHS 2.0.47 uninstallation script
#!/bin/sh
doit () {
   echo "Executing command: " $1
   $1
   rc=$?
   echo "-- result was: " $rc;
}
## parm 1 is the fully qualified path to Java 1.4 home directory
##        (/opt/IBMJava2-141)
## parm 2 is the fully qualified path to the setup.jar
##        (/swdist/ihs_2047/IHS-2.0.47.1)
## parm 3 is the fully qualified path to the response file
##        (/swdist/ihs_2047/ihs_2047.rsp)
## parm 4 is the installation directory (/opt/IBMIHS)
echo $0 " received parms: " $@
javahome=$1
javaroot=$javahome
javabin=$javahome/bin
export JAVA_HOME=$javahome
export JAVA_ROOT=$javaroot
export JAVA_BINDIR=$javabin
export PATH=$JAVA_HOME:$JAVA_BINDIR:$PATH
## stop the apache server
doit "$4/bin/apachectl stop"
## uninstall
doit "$JAVA_BINDIR/java -jar $4/_uninst/uninstall.jar -silent"
## remove filesystems
doit "rm -r $4"
exit $?
A.7.3 WebSphere Message Queuing Server v5.3
The following files are used to install and configure WebSphere Message
Queuing Server v5.3 and related fixpacks.
In the online material, all these files are located in the following path relative to
the parent directory of the type of file: /websphere/mq/530/server/Linux-IX86.
mq_server.5.3.0.spd
Example A-35 is the software package for installation of WebSphere Message
Queuing Server v5.3. Besides installing the product, the software package also
configures the required users and groups.
Example: A-35 mq_server.5.3.0.spd
"TIVOLI Software Package v4.2.1 - SPDF"
package
name = t_mq_server
title = "MQServer V5.3"
version = 5.3.0
web_view_mode = hidden
undoable = o
committable = o
history_reset = n
save_default_variables = n
creation_time = "2004-12-02 17:03:20"
last_modification_time = "2004-12-02 17:03:20"
default_variables
prod_name = mq_server_530
work_dir = $(root_dir)/$(prod_name)
log_dir = $(root_dir)/log/$(prod_name)
img_dir = $(work_dir)/image
bin_dir = $(root_dir)/bin/$(os_family)
gz_file = C48UBML_WASMQ-Linux-5.3.0.2.tar.gz
tar_file = MQ53Server_LinuxIntel.tar
root_dir = /swdist
home_path = /mnt
prod_path = websphere/mq/530/server/$(os_name)-$(os_architecture)
src_dir = $(home_path)/code/$(prod_path)
rsp_dir = $(home_path)/rsp/$(prod_path)
tool_base = $(home_path)/tools
tools_dir = $(tool_base)/$(prod_path)
unpack = yes
cleanup = yes
end
log_object_list
location = $(log_dir)
unix_user_id = 0
unix_group_id = 0
unix_attributes = rwx,rx,
end
source_host_name = srchost
move_removing_host = y
no_check_source_host = y
lenient_distribution = n
default_operation = install
server_mode = all
operation_mode = not_transactional
log_path = /mnt/logs/mq_server_530.log
post_notice = y
before_as_uid = 0
skip_non_zero = n
after_as_uid = 0
no_chk_on_rm = y
log_host_name = srchost
versioning_type = none
package_type = refresh
sharing_control = none
stop_on_failure = y
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = common
translate = n
destination = $(root_dir)/bin
descend_dirs = y
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
generic_container
caption = "on install"
stop_on_failure = y
condition = "$(operation_name) == install"
check_disk_space
volume = /,250M
end
execute_user_program
caption = "Create mqm IDs"
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = $(work_dir)/mqusers.sh
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_users.out
error_file = $(log_dir)/$(prod_name)_users.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(tools_dir)/mqusers.sh
translate = n
destination = $(work_dir)/mqusers.sh
compression_method = stored
rename_if_locked = n
end
end
end
end
execute_user_program
caption = uncompress
condition = "$(unpack) == yes"
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "/bin/gunzip -dcfv $(work_dir)/$(prod_name).tar.gz"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_uncompress.out
error_file = $(log_dir)/$(prod_name)_uncompress.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(src_dir)/$(gz_file)
translate = n
destination = $(work_dir)/$(prod_name).tar.gz
compression_method = stored
rename_if_locked = n
end
end
end
end
execute_user_program
caption = uncompress
condition = "$(unpack) == yes"
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "/bin/tar -xvf $(work_dir)/$(prod_name).tar"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_unpack1.out
error_file = $(log_dir)/$(prod_name)_unpack1.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
add_directory
stop_on_failure = n
condition = "$(unpack) == yes"
add = y
replace_if_existing = n
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = $(prod_path)
translate = n
destination = $(work_dir)/image
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
execute_user_program
caption = unpack
condition = "$(unpack) == yes"
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "/bin/tar -C $(img_dir) -xvf $(work_dir)/$(tar_file)"
inhibit_parsing = n
working_dir = $(img_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_unpack2.out
error_file = $(log_dir)/$(prod_name)_unpack2.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
execute_user_program
caption = "Accept MQ License"
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "$(img_dir)/mqlicense.sh -accept"
inhibit_parsing = n
working_dir = $(img_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_license.out
error_file = $(log_dir)/$(prod_name)_license.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,9
success = 10,65535
end
end
end
install_rpm_package
caption = "MQServer V5"
rpm_options = -vv
rpm_install_type = install
rpm_install_force = n
rpm_install_nodeps = n
rpm_remove_nodeps = n
rpm_report_log = y
rpm_file
image_dir = $(img_dir)
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = MQSeriesSDK-5.3.0-2
rpm_package_file = MQSeriesSDK-5.3.0-2.i386.rpm
end
rpm_file
image_dir = $(img_dir)
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = MQSeriesRuntime-5.3.0-2
rpm_package_file = MQSeriesRuntime-5.3.0-2.i386.rpm
end
rpm_file
image_dir = $(img_dir)
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = MQSeriesServer-5.3.0-2
rpm_package_file = MQSeriesServer-5.3.0-2.i386.rpm
end
rpm_file
image_dir = $(img_dir)
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = MQSeriesSamples-5.3.0-2
rpm_package_file = MQSeriesSamples-5.3.0-2.i386.rpm
end
rpm_file
image_dir = $(img_dir)
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = MQSeriesJava-5.3.0-2
rpm_package_file = MQSeriesJava-5.3.0-2.i386.rpm
end
end
execute_user_program
caption = "Set purchased license units"
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "/opt/mqm/bin/setmqcap 4"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_purchase.out
error_file = $(log_dir)/$(prod_name)_purchase.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
execute_user_program
caption = cleanup
condition = "$(cleanup) == yes"
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "/bin/rm -r $(work_dir)"
inhibit_parsing = n
working_dir = $(root_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_cleanup.out
error_file = $(log_dir)/$(prod_name)_cleanup.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
end
end
mqusers.sh
Example A-36 is the script for creating users and groups required by WebSphere
Message Queuing Server v5.3.
Example: A-36 Script for creating user and group for WebSphere MQ
#!/bin/sh
#
# create group and user for mq
#
groupadd mqm
useradd -g mqm -m -d /home/mqm mqm
exit 0
mq_server_53_install.sh
Example A-37 is the installation procedure for WebSphere Message Queuing
Server v5.3.
Example: A-37 WebSphere MQ Server installation script
#!/bin/sh
doit () {
   echo " about to execute: " $1
   $1
   rc=$?
   echo " result was: "$rc
   return $rc
}
imgdir=$1
instdir=$2
## validate
if [ ! ${imgdir} ]; then
   echo " image directory NOT specified"
   exit 4
else
   if [ ! -d ${imgdir} ]; then
      echo " image directory \"${imgdir}\" does not exist"
      exit 4
   else
      echo " using image directory \"${imgdir}\""
   fi
fi
if [ ! ${instdir} ]; then
   echo " installation directory NOT specified"
   exit 4
else
   echo " using installation directory \"${instdir}\""
fi
#
# create group and user for mq
mqusr="mqm"
mqgrp="mqm"
#
if [ x`grep ${mqgrp}: /etc/group` = x ]; then
   echo " creating group: \"${mqgrp}\""
   doit "groupadd ${mqgrp}"
fi
# add mq user
if [ x`grep ${mqusr}: /etc/passwd` = x ]; then
   echo " adding user: \"${mqusr}\""
   doit "useradd -g ${mqgrp} -m -d /home/${mqusr} ${mqusr}"
fi
# accept licenses
echo " accepting license"
doit "$imgdir/mqlicense.sh -accept"
### install rpm packages
echo " installing RPM packages"
cmd="/bin/rpm -ivh --force"
doit " ${cmd}
   ${imgdir}/MQSeriesSDK-5.3.0-2.i386.rpm
   ${imgdir}/MQSeriesRuntime-5.3.0-2.i386.rpm
   ${imgdir}/MQSeriesServer-5.3.0-2.i386.rpm
   ${imgdir}/MQSeriesSamples-5.3.0-2.i386.rpm
   ${imgdir}/MQSeriesJava-5.3.0-2.i386.rpm
   ${imgdir}/MQSeriesMan-5.3.0-2.i386.rpm
"
### set installed purchased licenses
echo " setting purchased licenses to: \"4\""
doit "${instdir}/bin/setmqcap 4"
exit 0
mq_server_53_uninstall.sh
Example A-38 is the uninstallation procedure for WebSphere Message Queuing
Server v5.3.
Example: A-38 Script for uninstallation of WebSphere MQ server
#!/bin/sh
doit () {
   echo " about to execute: " $1
   $1
   rc=$?
   echo " -- result was: "$rc
   return $rc
}
imgdir=$2
instdir=$1
## validate
if [ ! ${instdir} ]; then
   echo " installation directory NOT specified"
   exit 4
else
   if [ ! -d ${instdir} ]; then
      echo " installation directory \"${instdir}\" does not exist"
      exit 4
   else
      echo " using installation directory \"${instdir}\""
   fi
fi
#
#
# remove group and user for mq
mqusr="mqm"
mqgrp="mqm"
# delete user (only if it exists)
if [ x`grep ${mqusr}: /etc/passwd` != x ]; then
   echo " removing user \"${mqusr}\""
   doit "userdel -r ${mqusr}"
fi
# delete group
if [ x`grep ${mqgrp}: /etc/group` != x ]; then
   echo " removing group \"${mqgrp}\""
   doit "groupdel ${mqgrp}"
fi
### remove rpm packages
echo " removing rpm packages"
# rpm -e takes the names of installed packages, not .rpm file paths
cmd="/bin/rpm -evv "
doit "${cmd}
   MQSeriesSDK-5.3.0-2
   MQSeriesRuntime-5.3.0-2
   MQSeriesServer-5.3.0-2
   MQSeriesSamples-5.3.0-2
   MQSeriesJava-5.3.0-2
   MQSeriesMan-5.3.0-2
"
### remove install directory
if [ -d ${instdir} ]; then
   echo " removing installation directory \"${instdir}\""
   doit "/bin/rm -r ${instdir}"
fi
if [ -d /var/mqm ]; then
   echo " removing installation directory \"/var/mqm\""
   doit "/bin/rm -r /var/mqm"
fi
exit 0
A.7.4 WebSphere Message Queuing Server v5.3 Fixpack 8
To install Fixpack 8 for WebSphere Message Queuing Server v5.3, the files in
this section are provided.
In the online material, all these files are located in the following path relative to
the parent directory of the type of file:
/websphere/mq/530_fp8/server/Linux-IX86.
mq_server.5.3.0_fp8.spd
Example A-39 is the software package for installing and removing Fixpack 8 for
WebSphere Message Queuing Server v5.3.
Example: A-39 Software Package mq_server.5.3.0_fp8.spd
"TIVOLI Software Package v4.2.1 - SPDF"
package
name = t_mq_server
title = "MQServer V5.3 fixpack 8"
version = 5.3.0_fp8
web_view_mode = hidden
undoable = o
committable = o
history_reset = n
save_default_variables = n
creation_time = "2004-11-24 01:10:26"
last_modification_time = "2004-11-24 01:10:26"
default_variables
prod_name = mq_server_530_fp08
work_dir = $(root_dir)/$(prod_name)
log_dir = $(root_dir)/log/$(prod_name)
img_dir = $(work_dir)/image
bin_dir = $(root_dir)/bin/$(os_family)
tar_file = U497537.gskit.tar
gz_file = WASMQ-CDS8-Linux-U497537.gskit.tar.gz
root_dir = /swdist
home_path = /mnt
prod_path = websphere/mq/530_fp8/server/$(os_name)-$(os_architecture)
src_dir = $(home_path)/code/$(prod_path)
rsp_dir = $(home_path)/rsp/$(prod_path)
tool_base = $(home_path)/tools
tools_dir = $(tool_base)/$(prod_path)
unpack = yes
cleanup = yes
end
log_object_list
location = $(log_dir)
unix_user_id = 0
unix_group_id = 0
unix_attributes = rwx,rx,
end
source_host_name = srchost
move_removing_host = y
no_check_source_host = y
lenient_distribution = n
default_operation = install
server_mode = all
operation_mode = not_transactional
log_path = /mnt/logs/mq_server_530_fp08.log
post_notice = y
before_as_uid = 0
skip_non_zero = n
after_as_uid = 0
no_chk_on_rm = y
log_host_name = srchost
versioning_type = none
package_type = patch
sharing_control = none
stop_on_failure = y
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = common
translate = n
destination = $(root_dir)/bin
descend_dirs = y
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
generic_container
caption = "on install"
stop_on_failure = y
condition = "$(operation_name) == install"
check_disk_space
volume = /,250M
end
execute_user_program
caption = uncompress
condition = "$(unpack) == yes"
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "/bin/gunzip -dvf $(work_dir)/$(prod_name).tar.gz"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_uncompress.out
error_file = $(log_dir)/$(prod_name)_uncompress.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(src_dir)/$(gz_file)
translate = n
destination = $(work_dir)/$(prod_name).tar.gz
compression_method = stored
rename_if_locked = n
end
end
end
end
add_directory
stop_on_failure = n
condition = "$(unpack) == yes"
add = y
replace_if_existing = n
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = $(prod_path)
translate = n
destination = $(work_dir)/image
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
execute_user_program
caption = unpack
condition = "$(unpack) == yes"
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "/bin/tar -C $(img_dir) -xvf $(work_dir)/$(prod_name).tar"
inhibit_parsing = n
working_dir = $(img_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_unpack.out
error_file = $(log_dir)/$(prod_name)_unpack.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
install_rpm_package
caption = "MQServer V5"
rpm_options = -vv
rpm_install_type = install
rpm_install_force = n
rpm_install_nodeps = n
rpm_remove_nodeps = n
rpm_report_log = y
rpm_file
image_dir = $(img_dir)
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = MQSeriesSDK-U497537-5.3.0-8
rpm_package_file = MQSeriesSDK-U497537-5.3.0-8.i386.rpm
end
rpm_file
image_dir = $(img_dir)
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = MQSeriesRuntime-U497537-5.3.0-8
rpm_package_file = MQSeriesRuntime-U497537-5.3.0-8.i386.rpm
end
rpm_file
image_dir = $(img_dir)
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = MQSeriesServer-U497537-5.3.0-8
rpm_package_file = MQSeriesServer-U497537-5.3.0-8.i386.rpm
end
rpm_file
image_dir = $(img_dir)
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = MQSeriesSamples-U497537-5.3.0-8
rpm_package_file = MQSeriesSamples-U497537-5.3.0-8.i386.rpm
end
rpm_file
image_dir = $(img_dir)
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = MQSeriesJava-U497537-5.3.0-8
rpm_package_file = MQSeriesJava-U497537-5.3.0-8.i386.rpm
end
end
execute_user_program
caption = cleanup
condition = “$(cleanup) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “/bin/rm -r $(work_dir)”
inhibit_parsing = n
working_dir = $(root_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_cleanup.out
error_file = $(log_dir)/$(prod_name)_cleanup.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
end
end
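Every execute_user_program stanza in these package definitions runs its command through the $(bin_dir)/do_it wrapper, which the common tools directory places under $(root_dir)/bin. The wrapper itself is not reproduced in this excerpt. A minimal sketch of such a wrapper, assuming all it has to do is log the command, run it, and hand the exit code back to Software Distribution, could look like this:

#!/bin/sh
## do_it - minimal wrapper sketch (an assumption for illustration,
## not the tool actually shipped with the solution):
## log the command, execute it, and propagate its exit code
echo "do_it: executing: $@"
"$@"
rc=$?
echo "do_it: exit code: $rc"
exit $rc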
mq_server_538_install.sh
Example A-40 is the installation script for WebSphere Message Queuing
Server v5.3 Fixpack 8.
Example: A-40 mq_server_538_install.sh
#!/bin/sh
doit () {
echo "
about to execute: " $1
$1
rc=$?
echo "
-- result was: "$rc
return $rc
}
imgdir=$1
## validate
if [ ! ${imgdir} ]; then
echo " image directory NOT specified"
exit 4
else
if [ ! -d ${imgdir} ]; then
echo " image directory \"${imgdir}\" does not exist"
exit 4
else
echo "
using image directory \"${imgdir}\""
fi
fi
### install rpm packages
echo "installing rpm packages"
cmd="/bin/rpm -ivh "
doit " ${cmd}
${imgdir}/MQSeriesSDK-U497537-5.3.0-8.i386.rpm
${imgdir}/MQSeriesRuntime-U497537-5.3.0-8.i386.rpm
${imgdir}/MQSeriesServer-U497537-5.3.0-8.i386.rpm
${imgdir}/MQSeriesSamples-U497537-5.3.0-8.i386.rpm
${imgdir}/MQSeriesJava-U497537-5.3.0-8.i386.rpm
${imgdir}/MQSeriesMan-U497537-5.3.0-8.i386.rpm
"
exit 0
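The script takes the image directory as its only argument. To exercise it manually on an outlet server, assuming the rpm images were already unpacked into the package's image directory (the path below is illustrative; the real value comes from the img_dir package variable), the invocation would be similar to:

./mq_server_538_install.sh /swdist/mq_server_538/image/linuxi386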
mq_server_538_uninstall.sh
Example A-41 is the script for removing WebSphere Message Queuing Server
v5.3 Fixpack 8.
Example: A-41 mq_server_538_uninstall.sh
#!/bin/sh
doit () {
echo "
about to execute: " $1
$1
rc=$?
echo "
-- result was: "$rc
return $rc
}
### remove rpm packages
## note: rpm -e operates on installed package names, not on the .rpm files
echo "removing rpm packages"
cmd="/bin/rpm -evv "
doit " ${cmd}
MQSeriesSDK-U497537-5.3.0-8
MQSeriesRuntime-U497537-5.3.0-8
MQSeriesServer-U497537-5.3.0-8
MQSeriesSamples-U497537-5.3.0-8
MQSeriesJava-U497537-5.3.0-8
MQSeriesMan-U497537-5.3.0-8
"
exit 0
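Because the script always exits 0, a failed rpm erase is not reflected in its exit code. After running it, the removal can be verified with a standard rpm query, for example:

/bin/rpm -qa | grep MQSeries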
A.7.5 WebSphere Application Server v5.1
The files in this section are used to define and build a software package capable
of installing and uninstalling WebSphere Application Server v5.1, including basic
configuration of performance metrics settings and security.
In the online material, all these files are located in the following path relative to
the parent directory of the type of file:
/websphere/was/510/appserver/Linux-IX86.
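Once these files have been built into a software package block, the package can be exercised outside the TMR as well. As a sketch only, assuming the disconnected command line is installed on the target and the block was built as was_appserver.5.1.0.spb (file name and path are illustrative; verify the option syntax against the Software Distribution reference), the package could be applied with:

wdinstsp -D unpack=yes -D cleanup=yes /mnt/spb/was_appserver.5.1.0.spb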
was_appserver.5.1.0.spd
Example A-42 is the software package definition for the WebSphere Application
Server 5.1 package.
Example: A-42 Software package for WAS 5.1
“TIVOLI Software Package v4.2.1 - SPDF”
package
name = was_appserver
title = “WebSphere Application Server Enterprise Edition v5.1”
version = 5.1.0
web_view_mode = hidden
undoable = n
committable = o
history_reset = y
save_default_variables = n
creation_time = “2004-12-02 19:11:03”
last_modification_time = “2004-12-02 19:11:03”
default_variables
wasuser = root
waspassword = smartway
wasserver = server1
inst_ems = no
work_dir = $(root_dir)/$(prod_name)
prod_name = was_appserver_510
log_dir = $(root_dir)/log/$(prod_name)
img_base = $(work_dir)/image
img_dir = $(img_base)/linuxi386
bin_dir = $(root_dir)/bin/$(os_family)
root_dir = /swdist
inst_dir = /opt/IBM/WebSphere/AppServer
tar_file = C53IPML-WAS-510-LinuxIX86.tar
rsp_file = was_appserver_510.rsp
home_path = /mnt
prod_path = websphere/was/510/appserver/$(os_name)-$(os_architecture)
src_base = $(home_path)/code
src_dir = $(src_base)/$(prod_path)
rsp_base = $(home_path)/rsp
tool_base = $(home_path)/tools
tools_dir = $(tool_base)/$(prod_path)
unpack = yes
cleanup = yes
enable_pmi = yes
enable_security = yes
end
log_object_list
location = $(log_dir)
unix_user_id = 0
unix_group_id = 0
unix_attributes = rwx,rx,
end
source_host_name = srchost
move_removing_host = y
no_check_source_host = y
lenient_distribution = y
default_operation = install
server_mode = all
operation_mode = not_transactional,force
log_path = /mnt/logs/was_appserver_510.log
post_notice = y
before_as_uid = 0
skip_non_zero = n
after_as_uid = 0
no_chk_on_rm = y
log_host_name = srchost
versioning_type = none
package_type = refresh
sharing_control = none
stop_on_failure = y
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = common
translate = n
destination = $(root_dir)/bin
descend_dirs = y
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(src_base)
name = $(prod_path)
translate = n
destination = $(root_dir)/$(prod_name)
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
generic_container
caption = “on install”
stop_on_failure = y
condition = "$(operation_name) == install"
check_disk_space
volume = /,1500M
end
add_directory
stop_on_failure = n
condition = “$(unpack) == yes”
add = y
replace_if_existing = n
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = $(prod_path)
translate = n
destination = $(img_base)
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = $(prod_path)
translate = n
destination = $(root_dir)/$(prod_name)
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = was_appserver_510_install.sh
translate = n
destination = $(prod_name)_install.sh
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = mod_was_server_JVM_process.sh
translate = n
destination = mod_was_server_JVM_process.sh
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = mod_was_pmirm_settings.sh
translate = n
destination = mod_was_pmirm_settings.sh
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = enable_was_security.sh
translate = n
destination = enable_was_security.sh
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = enable_was_security.jacl
translate = n
destination = enable_was_security.jacl
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
end
execute_user_program
caption = “create users and groups”
condition = “$(inst_ems) == yes”
transactional = y
during_install
path = $(bin_dir)/do_it
arguments = $(work_dir)/$(prod_name)_install.sh
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_prepare.out
error_file = $(log_dir)/$(prod_name)_prepare.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = y
retry = 1
exit_codes
success = 9,9
success_reboot_now = 0,0
warning = 1,8
failure = 10,65535
end
end
end
execute_user_program
caption = unpack
condition = “$(unpack) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "/bin/tar -xvf $(work_dir)/$(prod_name).tar -C $(img_base)"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_unpack.out
error_file = $(log_dir)/$(prod_name)_unpack.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(src_dir)/$(tar_file)
translate = n
destination = $(work_dir)/$(prod_name).tar
compression_method = stored
rename_if_locked = n
end
end
end
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(rsp_base)
name = $(prod_path)
translate = n
destination = $(root_dir)/$(prod_name)
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(rsp_file)
translate = n
destination = $(prod_name).rsp
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
end
execute_user_program
caption = “Silent Install”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "$(img_dir)/install -options $(work_dir)/$(prod_name).rsp"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_install.out
error_file = $(log_dir)/$(prod_name)_install.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,1
failure = 2,65535
end
end
end
execute_user_program
caption = “jvm process settings”
condition = “$(enable_pmi) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "$(work_dir)/mod_was_server_JVM_process.sh $(inst_dir)/config/cells/$(computer_name)/nodes/$(computer_name)/servers/$(wasserver)/server.xml"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_install_jvm_settings.out
error_file = $(log_dir)/$(prod_name)_install_jvm_settings.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,1
failure = 2,65535
end
end
end
execute_user_program
caption = “pmirm settings”
condition = “$(enable_pmi) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "$(work_dir)/mod_was_pmirm_settings.sh $(inst_dir)/config/cells/$(computer_name)/pmirm.xml"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_install_pmirm_settings.out
error_file = $(log_dir)/$(prod_name)_install_pmirm_settings.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,1
failure = 2,65535
end
end
end
execute_user_program
caption = “enable security”
condition = “$(enable_security) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "$(work_dir)/enable_was_security.sh $(work_dir)/enable_was_security.jacl"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_enable_security.out
error_file = $(log_dir)/$(prod_name)_enable_security.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,1
failure = 2,65535
end
end
end
execute_user_program
caption = cleanup
condition = “$(cleanup) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “/bin/rm -r $(img_dir)”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_cleanup.out
error_file = $(log_dir)/$(prod_name)_cleanup.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,127
failure = 128,65535
end
end
end
execute_user_program
caption = “Start server”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "$(inst_dir)/bin/startServer.sh $(wasserver) -user $(wasuser) -password $(waspassword)"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_startserver.out
error_file = $(log_dir)/$(prod_name)_startserver.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
warning = 1,110
success = 111,111
failure = 1,65535
end
end
end
end
generic_container
caption = “on remove”
stop_on_failure = y
condition = "$(operation_name) == remove"
execute_user_program
caption = “stop server”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = "$(inst_dir)/bin/stopServer.sh $(wasserver) -user $(wasuser) -password $(waspassword)"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_stopserver.out
error_file = $(log_dir)/$(prod_name)_stopserver.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,246
failure = 247,65535
end
end
end
execute_user_program
caption = “silent remove”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = “$(inst_dir)/_uninst/uninstall -silent”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_uninstall.out
error_file = $(log_dir)/$(prod_name)_uninstall.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,7
failure = 8,65535
end
end
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = $(prod_path)
translate = n
destination = $(root_dir)/$(prod_name)
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = was_appserver_510_uninstall.sh
translate = n
destination = $(prod_name)_uninstall.sh
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_attributes = rwx,rx,rx
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
end
execute_user_program
caption = “remove groups and users”
condition = “$(inst_ems) == yes”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = $(work_dir)/$(prod_name)_uninstall.sh
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove_users.out
error_file = $(log_dir)/$(prod_name)_remove_users.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
success = 1,65535
end
end
end
execute_user_program
caption = “remove inst_dir”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = “/bin/rm -r $(inst_dir)”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove.out
error_file = $(log_dir)/$(prod_name)_remove.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,127
success = 128,65535
end
end
end
execute_user_program
caption = “remove mqm_dir”
condition = “$(inst_ems) == yes”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = “/bin/rm -r /var/mqm”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove_mqm.out
error_file = $(log_dir)/$(prod_name)_remove_mqm.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
success = 1,65535
end
end
end
execute_user_program
caption = “remove wemps_dir”
condition = “$(inst_ems) == yes”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = “/bin/rm -r /var/wemps”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove_wemps.out
error_file = $(log_dir)/$(prod_name)_remove_wemps.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
success = 1,65535
end
end
end
execute_user_program
caption = “remove work_dir”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = “/bin/rm -r $(work_dir)”
inhibit_parsing = n
working_dir = $(root_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove_workdir.out
error_file = $(log_dir)/$(prod_name)_remove_workdir.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
success = 1,65535
end
end
end
end
end
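Note that the default_variables block above ships literal credentials, including waspassword. These defaults are intended to be overridden per distribution rather than edited into the package. As an illustrative sketch, with the option syntax to be verified against the Software Distribution reference, an install from the TMR could override them like this:

winstsp -D waspassword=MyPassw0rd -D wasserver=server1 @SoftwarePackage:was_appserver.5.1.0 outlet-ep-01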
was_appserver_510.rsp
Example A-43 is the response file controlling the installation and basic
customization of WebSphere Application Server 5.1.
Example: A-43 Response file for WAS 5.1 installation
# *******************************************
# Response file for WebSphere Application Server 5.1 Install
#
# Please follow the comments to use the response file and
# understand the various options. You must carefully complete
# or change the various values. If the values are not completed
# properly, the install may be unsuccessful.
#
# NOTE: This file is for silent install only.
#
# IMPORTANT: ALL VALUES MUST BE ENCLOSED IN DOUBLE QUOTES ( "" ).
# *******************************************
# *******************************************
# This value is required.
# PLEASE DO NOT REMOVE THIS VALUE.
# *******************************************
-W setupTypes.selectedSetupTypeId="custom"
# *******************************************
# Below is the beginning of the response file that needs to be
# filled in by the user.
# *******************************************
# *******************************************
# The below value specifies silent install. This value
# indicates that the install will be silent.
# *******************************************
-silent
# *******************************************
# WebSphere Application Server Install Location
#
# Please specify the destination directory for the WebSphere Application
# Server installation. You will need to change this for UNIX
# platforms. As an example for AIX, the value may be
# "/usr/WebSphere/AppServer"
# *******************************************
-P wasBean.installLocation="$(inst_dir)"
# *******************************************
# IBM HTTP Server Install Location
#
# Please specify the destination directory for the IBM HTTP Server
# installation. This value will need to be completed if you
# choose to install IBM HTTP Server. If you choose to not install IBM
# HTTP Server, then this value is not required. You will need to change
# the default value below for UNIX platforms. As an example for AIX, the
# value may be "/usr/IBMHTTPServer"
# *******************************************
-P ihsFeatureBean.installLocation="/opt/IBMIHS"
# *******************************************
# Below are the features that you may choose to install.
# Set the following values to "true" or "false," depending upon whether
# you want to install the following features or not.
#
# NOTE: The default settings for features in this response file
# detail the defaults for a typical installation.
# *******************************************
# *******************************************
# Install Server
# *******************************************
-P serverBean.active=”true”
# *******************************************
#
# Begin Features for Administration
#
# *******************************************
# *********
# Install Administration
# *********
-P adminBean.active=”true”
# *********
# The next 2 features are part of Administration. In order for any of these
# features to be installed, the property to install Administration denoted
# above must be set to "true."
# *********
# *********
# Install Admin Scripting
# *********
-P adminScriptingFeatureBean.active=”true”
# *********
# Install Administrative Console
# *********
-P adminConsoleFeatureBean.active=”true”
# *******************************************
#
# End Features for Administration
#
# *******************************************
# *******************************************
#
# Begin Features for Application Assembly and Deployment Tools
#
# *******************************************
# *********
# Install Application Assembly and Deployment Tools
# *********
-P applicationAndAssemblyToolsBean.active=”true”
# *********
# The next 3 features are part of Application Assembly and Deployment
# Tools. In order for any of these features to be installed,
# the property to install Application And Assembly Tools denoted
# above must be set to "true."
# *********
# *********
# Install Deploy Tool
# *********
-P deployToolBean.active=”true”
# *********
# Install Ant Utilities
# *********
-P antUtilityBean.active=”true”
# *******************************************
#
# End Features for Application Assembly and Deployment Tools
#
# *******************************************
# *******************************************
#
# Begin Features for Embedded Messaging
#
# *******************************************
# *********
# Install Embedded Messaging
# *********
-P mqSeriesBean.active=”false”
# *********
# The next three features are for Embedded Messaging. In order to install
# any of the following three subfeatures, the property to install Embedded
# Messaging denoted above must be set to "true."
#
# IMPORTANT NOTE: If you do not want to install Embedded Messaging, please
# ensure all of the following options are set to "false" as well as the above
# option.
# *********
# *********
# Install Embedded Messaging Server and Client
#
# You may only install the Embedded Messaging Server and Client or the Embedded
# Messaging Client below. If you set the Server and Client to "true," please
# ensure that the Client only option below is set to "false." The same applies
# if you set the Client only option to "true," please ensure the Server and
# Client option is set to "false."
# *********
-P mqSeriesServerBean.active=”false”
# *********
# Embedded Messaging Server and Client install location
#
# If you choose to install Embedded Messaging Server and Client above, please
# specify an install location below for Windows platforms only.
# The directory may not be configured by the user for UNIX platforms
# as it is predetermined.
# *********
-P mqSeriesServerBean.installLocation=”/opt/IBM/WebSphereMQ”
# *********
# Install Embedded Messaging Client only
# *********
-P mqSeriesClientBean.active=”false”
# *********
# Embedded Messaging Client Only install location
#
# If you choose to install Embedded Messaging Client only above, please
# specify an install location below for Windows platforms only.
# The directory may not be configured by the user for UNIX platforms
# as it is predetermined.
# *********
-P mqSeriesClientBean.installLocation=”/opt/IBM/WebSphereMQ”
# *********
# Install Message-driven beans Samples
# *********
-P mqSeriesSamplesBean.active=”false”
# *******************************************
#
# End Features for Embedded Messaging
#
# *******************************************
# *******************************************
# Install IHS WebServer 1.3.28
# *******************************************
-P ihsFeatureBean.active=”false”
# *******************************************
#
# Begin Features for Web Server Plugins
#
# *******************************************
# *********
# Install Web Server Plugins
# *********
-P pluginBean.active=”true”
# *********
# The next 5 features are part of Web Server Plugins.
# In order for any of these features to be installed,
# the property to install Web Server Plugins denoted
# above must be set to "true."
# *********
# *********
# Install IBM HTTP Server v1.3 Plugin
# *********
-P ihsPluginBean.active=”false”
# *********
# Install IBM HTTP Server v2.0 Plugin
# *********
-P ihs20PluginBean.active=”true”
# *********
# Install Apache Web Server v1.3 Plugin
# *********
-P apachePluginBean.active=”false”
# *********
# Install Apache Web Server v2.0 Plugin
# *********
-P apache20PluginBean.active=”false”
# *********
# Install Microsoft Internet Information Services (IIS) Plugin
# *********
-P iisPluginBean.active=”false”
# *********
# Install iPlanet Web Server Plugin
# *********
-P iplanet60PluginBean.active=”false”
# *********
# Install Domino Web Server Plugin
# *********
-P dominoPluginBean.active=”false”
# *******************************************
#
# End Features for Web Server Plugins
#
# *******************************************
# *******************************************
# Install Samples
# *******************************************
-P samplesBean.active=”false”
# *******************************************
#
# Begin Features for Performance and Analysis Tools
#
# *******************************************
# *********
# Install Performance And Analysis Tools
# *********
-P performanceAndAnalysisToolsBean.active=”true”
# *********
# The next 3 features are part of Performance And Analysis
# Tools. In order for any of these features to be installed,
# the property to install Performance And Analysis Tools denoted
# above must be set to “true.”
# *********
# *********
# Install Tivoli Performance Viewer
# *********
-P tivoliPerfBean.active=”true”
# *********
# Install Dynamic Cache Monitor
# *********
-P DCMBean.active=”true”
# *********
# Install Performance Servlet
# *********
-P performanceServletBean.active=”true”
# *********
# Install Log Analyzer
# *********
-P logAnalyzerBean.active=”true”
# *******************************************
#
# End Features for Performance and Analysis Tools
#
# *******************************************
# *******************************************
# Install Javadocs
# *******************************************
-P javadocBean.active=”false”
# *******************************************
# Please enter a node name and hostname for this installation.
# The node name is used for administration, and must be unique
# within its group of nodes (cell). The hostname is the DNS name
# or IP address for this computer. You must replace the
# "DefaultNode" with the node name that you want the default node
# to be and "127.0.0.1" with a resolvable hostname or IP address
# for your machine.
#
# Warning:
# 1. If you are migrating now or plan to do so after
# installation, enter the same node name as the previous version.
# 2. If you are performing coexistence, enter a unique node name.
# *******************************************
-W nodeNameBean.nodeName=”$(hostname)”
-W nodeNameBean.hostName=”$(hostname)”
# *******************************************
# Begin Installing Services
#
# The following are to install Services for IHS and WebSphere
# Application Server on Windows. Using Services, you can start and
# stop services, and configure startup and recovery actions.
# You can ignore these or comment them out for other Operating Systems.
# *******************************************
-W serviceSettingsWizardBean.active=”true”
# *********
# The next 2 options are part of Installing Services.
# In order for any of these to be set to "true,"
# the property to install Services denoted above must be set
# to "true."
# *********
# *********
# Install the IHS service
# *********
-W serviceSettingsWizardBean.ihsChoice=”false”
# *********
# Install the WebSphere Application Server service
# *********
-W serviceSettingsWizardBean.wasChoice=”true”
# *********
# If you chose to install a service above, then you must
# specify the User Name and Password which are required to
# install the Services. The current user must be admin or must
# have admin authority to install a Service. Also the username
# which is given here must have "Log On as a Service" authority
# for the service to run properly.
# *********
# *********
# Replace YOUR_USER_NAME with your username.
# *********
-W serviceSettingsWizardBean.userName=”$(wasuser)”
# *********
# Replace YOUR_PASSWORD with your valid password.
# *********
-W serviceSettingsWizardBean.password=”$(waspassword)”
# *******************************************
#
# End Installing Services
#
# *******************************************
# *******************************************
# Set any or all of the following to false if the launcher
# icon is not to be installed. These settings will only affect
# an install in which the corresponding product component
# is also selected for install.
# *******************************************
-P StartServerIconBean.active="true"
-P StopServerIconBean.active="true"
-P AdminConsolIconBean.active="true"
-P SamplesGalleryIconBean.active="false"
-P TivoliPerfIconBean.active="true"
-P infoCenterIconBean.active="true"
-P firstStepsIconBean.active="false"
-P logAnalyzerIconBean.active="true"
# *******************************************
# Change the path to the prerequisite checker configuration
# file only if a new file has been provided. This can be a
# relative path or an absolute path. Make sure both the
# prereqChecker.xml and prereqChecker.dtd files are present at the provided path.
# *******************************************
-W osLevelCheckActionBean.configFilePath=”waspc/prereqChecker.xml”
# *******************************************
# Begin Plugin Config File Location
#
# If you chose to install plugins above, then you will
# need to specify the fully qualified path, including
# the config file name, for the plugins you selected. If you want to
# install the plugin, you must specify this path, otherwise the
# installer will fail to install the plugins properly. Also, the
# value must be included in double quotes.
# *******************************************
# *********
# IBM HTTP Server Plugin v1.3 Config File Location
# *********
-W defaultIHSConfigFileLocationBean.value=
# *********
# IBM HTTP Server Plugin v2.0 Config File Location
# *********
-W defaultIHS20ConfigFileLocationBean.value=”/opt/IBMIHS/conf/httpd.conf”
# *********
# Apache Web Server v1.3 Config File Location
# *********
-W defaultApacheConfigFileLocationBean.value=
# *********
# Apache Web Server v2.0 Config File Location
# *********
-W defaultApache20ConfigFileLocationBean.value=
# *********
# iPlanet Web Server Config File Location
# *********
-W defaultIPlanetConfigFileLocationBean.value=
# *********
# Begin Domino Web Server Plugin Config File Locations
#
# The Notes.jar and names.nsf locations are required
# for the Domino Plugin. Please be sure to enter values in
# double quotes for both of these files.
# *********
# *********
# Domino Notes.jar File Location
# *********
-W dominoPanelBean.notesJarFile=
# *********
# Domino names.nsf File Location
# *********
-W dominoPanelBean.namesFile=
# *********
# End Domino Web Server Plugin Config File Locations
# *********
# *******************************************
#
# End Plugin Config File Location
#
# *******************************************
# *******************************************
# Product Registration Tool
#
# To launch the Product Registration Tool, please
# change the value to "true." This is only for
# GUI install.
# *******************************************
-W launchPRTBean.active=”false”
# *******************************************
# Install Default App
#
# Please specify if you would like to install the
# Default App by setting the value to "true" or "false."
# *******************************************
-W installSampleAppSequenceBean.active=”false”
# *******************************************
# First Steps
#
# If you would like the First Steps to display at the end
# of the installation, please change the value to "true."
# *******************************************
-W firstStepsSequenceBean.active=”false”
# *******************************************
# Installation Verification Tool (IVT)
#
# Please specify if you would like to run the Installation
# Verification Tool by setting the value to "true" or "false."
# *******************************************
-W installIVTAppSequenceBean.active=”true”
# ***********************************************************
# **
# ** Support for Silent Coexistence
# **
# ** NOTE:
# ** 1. You must uncomment and modify the properties in
# ** this section for silent coexistence to work properly.
# ** 2. You can not perform migration and coexistence at
# ** the same time.
# ***********************************************************
# ***********************************************************
# Tell the installer that you want to perform coexistence
# ***********************************************************
#-W coexistenceOptionsBean.doCoexistence=”true”
# ***********************************************************
# Set this property if you want to modify the default IHS
# and IHS Admin ports
# ***********************************************************
#-W coexistencePanelBean.useIhs=”true”
# ***********************************************************
# The new value for the Bootstrap Port
# ***********************************************************
#-W coexistencePanelBean.bootstrapPort=”2810”
# ***********************************************************
# The new values for the IHS and IHS Admin ports
# NOTE: These values are only used if
# coexistencePanelBean.useIhs is set to "true"
# ***********************************************************
#-W coexistencePanelBean.ihsPort=”81”
#-W coexistencePanelBean.ihsAdminPort=”8009”
# ***********************************************************
# The new values for the HTTP and HTTPs transports.
# ***********************************************************
#-W coexistencePanelBean.httpTransportPort=”9086”
#-W coexistencePanelBean.httpsTransportPort=”9044”
# ***********************************************************
# The new values for the admin console and secure admin
# console ports.
# ***********************************************************
#-W coexistencePanelBean.adminConsolePort=”9091”
#-W coexistencePanelBean.secureAdminConsolePort=”9444”
# ***********************************************************
# The new values for the csivServerAuthListener and
# the csivMultiAuthListener ports.
# NOTE: You can usually leave these set to 0
# ***********************************************************
#-W coexistencePanelBean.csivServerAuthListenerAddr=”0”
#-W coexistencePanelBean.csivMultiAuthListenerAddr=”0”
# ***********************************************************
# The new value for the sasSSLServerAuth port.
# ***********************************************************
#-W coexistencePanelBean.sasSSLServerAuthAddr=”0”
# ***********************************************************
# The new values for the JMS Server Direct Address,
# JMS Server Security, and JMS Server QueuedAddress ports
# ***********************************************************
#-W coexistencePanelBean.jmsServerDirectAddress=”5569”
#-W coexistencePanelBean.jmsServerSecurityPort=”5567”
#-W coexistencePanelBean.jmsServerQueuedAddress=”5568”
# ***********************************************************
# The new value for the soap connector address port
# ***********************************************************
#-W coexistencePanelBean.soapConnectorAddress=”8881”
# ***********************************************************
# **
# ** Support for Silent Migration
# **
# ** NOTE:
# ** 1. You must uncomment and modify EVERY property
# ** in this section for silent migration to work properly.
# ** 2. You can not perform migration and coexistence at
# ** the same time.
# **
# ***********************************************************
# ***********************************************************
# The installer must be informed that you wish to operate on
# a previous version, so you must tell it that one is present
# by uncommenting the next line.
# ***********************************************************
# -W previousVersionDetectedBean.previousVersionDetected=”true”
# ***********************************************************
# Direct the installer to operate on a specific previous version by
# uncommenting the next line and entering one of these values:
#
#   Value      Edition
#   *****      *******
#   AE         WAS Advanced Edition (V3.x, V4.0.x)
#   advanced   AE
#   AEs        WAS Advanced Single Server Edition (V4.0.x)
#   standard   WAS Standard Edition (V3.x)
#
# Note:
# For migration from WAS V5.0.x, this field is not used. So simply
# set previousVersionPanelBean.selectedVersionEdition to "<NONE>".
# ************************************************************
# -W previousVersionPanelBean.selectedVersionEdition=”AEs”
# ************************************************************
# Specify the location where the previous version is installed.
# ************************************************************
# -W previousVersionPanelBean.selectedVersionInstallLocation="/opt/IBM/WebSphere/AppServer"
# ************************************************************
# Specify the path to the configuration file for the
# previous version. Configuration filenames are:
#
#   Value          previousVersionPanelBean.selectedVersionEdition
#   *****          ***********************************************
#   admin.config   AE
#   admin.config   advanced
#   server-cfg     AEs
#   server-cfg     standard
#
# Note:
# For migration from WAS V5.0.x, this field is not used. So simply
# set previousVersionPanelBean.selectedVersionConfigFile to "<NONE>".
# ************************************************************
# -W previousVersionPanelBean.selectedVersionConfigFile="/opt/IBM/WebSphere/AppServer/config/server-cfg.xml"
# ************************************************************
# Specify the version number of the previous version:
# 5.0.2, 5.0.1, 5.0.0, 4.0, 4.0.1, 3.5, etc.
# ************************************************************
# -W previousVersionPanelBean.previousVersionSelected=”4.0”
# ************************************************************
# Uncomment the below line to indicate that you wish to
# migrate the previous version.
# ************************************************************
# -W previousVersionPanelBean.migrationSelected=”true”
# ************************************************************
# Specify the directory where migration will backup
# information about the previous version.
# ************************************************************
# -W migrationInformationPanelBean.migrationBackupDir=”/tmp/migrationbackup”
# ************************************************************
# Specify the directory where migration logs will be stored.
# ************************************************************
# -W migrationInformationPanelBean.migrationLogfileDir=”/tmp/migrationlogs”
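The package's "Silent Install" step launches the InstallShield installer from the unpacked image directory with this response file. To test the response file by hand after the package variables have been substituted into it, the same silent installation can be started directly; the paths below are illustrative and follow the default_variables shown earlier:

cd /swdist/was_appserver_510/image/linuxi386
./install -options /swdist/was_appserver_510/was_appserver_510.rsp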
was_appserver_510_install.sh
Example A-44 is the installation script for WebSphere Application
Server 5.1.
Example: A-44 Script for installation of WAS 5.1
#!/bin/sh
## this is a basic script to create the users and groups required for
## successful installation of WebSphere Application Server v5.1 on Linux
##
##
## To make this script ready for production, review paths, user and group names
## and add logging....
doit () {
echo "Executing command: " $1
$1
rc=$?
echo "-- result was: " $rc
return $rc;
}
build_group_list ()
{
action=$1
grp_list=$2
case $action in
add)
new_group_list=$grp
;;
remove)
new_group_list=""
;;
esac
##
##
##echo $grp_list | tr " " "\n" | while read GROUP;
##do
for GROUP in $grp_list;
do
##echo "read group $GROUP"
case $action in
add)
if [ x$GROUP = x$grp ]; then
# return, already there
echo "Returning - user $usr already member of group $grp"
new_group_list=""
return 4
fi
new_group_list=$new_group_list,$GROUP
;;
remove)
if [ x$GROUP != x$grp ]; then
if [ x$new_group_list = x ]; then
new_group_list=$GROUP
else
new_group_list=$new_group_list,$GROUP
fi
##else
##echo "omitting group '$grp' target to be removed"
fi
esac
##echo "Working group list is:" $new_group_list
done
##echo "New group list:" $new_group_list
return 0
}
user_mod () {
##
##
##
##echo "Received: "$@
usrfile=/etc/passwd
grpfile=/etc/group
action=$1
usr=$2
grp=$3
##echo "inspecting groups for ${action}'ing '$usr' to/from '$grp'"
## get current secondary groups
a=`grep $usr:x: $usrfile | awk -F: '{ print $1 }'`
if [ x$a = x ]; then
echo "User '$usr' does not exist"
return 4
fi
a=`grep $grp:x: $grpfile | awk -F: '{ print $1 }'`
if [ x$a = x ]; then
echo "Group '$grp' does not exist"
return 4
fi
a=`grep :$usr $grpfile | awk -F: '{ print $1 }'`
b=`grep ,$usr $grpfile | awk -F: '{ print $1 }'`
old_group_list=`echo $a $b | tr "\n" " "`
##echo "current group list:" $old_group_list
tst=`echo $old_group_list | tr " " "\n" | grep $grp`
case $action in
add)
# return if currently a member
if [ x$tst = x$grp ]; then
echo "'$usr' is already a member of '$grp'"
return 0
fi
;;
remove)
# return if currently not a member
if [ x$tst = x ]; then
echo "'$usr' is NOT a member of '$grp'"
return 0
fi
;;
esac
build_group_list "$action" "$old_group_list"
rc=$?
if [ $rc = 0 ]; then
cmd="usermod -G $new_group_list $usr"
## echo "About to execute: "$cmd
doit "$cmd"
rc=$?
fi
return $rc
}
usr_mod_group () {
func=$1
grp=$2
usr=$3
grpfile=/etc/group
cp $grpfile $grpfile-backup
sedfile=sed.tmp
tmpfile=tmp.tmp
newfile=new.tmp
old=`grep $grp:x: $grpfile`
# is group already defined
if [ "x$old" = "x" ]; then
return 0
fi
# is user a member of group
tst=`echo $old | grep $usr`
unset new
case $func in
remove)
if [ x$tst != x ]; then
new=`echo $old | awk -F"$usr" '{print $1 $2}'`
fi
;;
add)
if [ x$tst = x ]; then
new=$old,$usr
fi
;;
esac
if [ x$new != x ]; then
echo "/$old/ c $new" > $sedfile
sed -f $sedfile $grpfile > $tmpfile
sed -e 's/,,/,/g' $tmpfile > $newfile
sed -e 's/:,/:/g' $newfile > $grpfile
rm $newfile
rm $tmpfile
rm $sedfile
fi
return $?
}
set_rc () {
if [ $1 -gt $2 ]; then
return $1
fi
return $2
}
echo $0 " received parms: " $@
mqmgrp=mqm
brkgrp=mqbrkrs
old_rc=0
## add group mqm
doit "groupadd $mqmgrp"
set_rc $? $old_rc
old_rc=$?
## add group mqbrkrs
doit "groupadd $brkgrp"
set_rc $? $old_rc
old_rc=$?
##add user mqm and make member of both groups
doit "useradd -g $mqmgrp -G $brkgrp -m mqm"
set_rc $? $old_rc
old_rc=$?
## add root to mqm group
user_mod "add" "root" $mqmgrp
set_rc $? $old_rc
old_rc=$?
user_mod "add" "root" $brkgrp
set_rc $? $old_rc
old_rc=$?
exit $old_rc
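The script takes no arguments: it creates the mqm and mqbrkrs groups, creates the mqm user, and adds root to both groups, as required by the embedded messaging components. A quick manual check after running it:

./was_appserver_510_install.sh
id mqm
grep -E "^(mqm|mqbrkrs):" /etc/group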
mod_was_server_JVM_process.sh
Example A-45 sets up customization to enable monitoring of JVM processes.
Example: A-45 mod_was_server_JVM_process.sh
#!/bin/sh
ifile=$1
tfile="l"
bfile=$ifile.bkp
## create backup
cp $ifile $bfile
## find line with string '<services xmi:type="pmiservice:PMIService"' and set line numbers
target=`sed -n -e '/<services xmi:type="pmiservice:PMIService"/ =' $ifile`
prev=$[${target}-1]
next=$[${target}+1]
## write sed edit commands to temp file
echo "1,$prev {" > $tfile
echo "p" >> $tfile
echo "}" >> $tfile
echo "$target {" >> $tfile
echo "i\<services xmi:type=\"pmiservice:PMIService\" xmi:id=\"PMIService_1\" enable=\"false\" initialSpecLevel=\"beanModule=N:cacheModule=N:connectionPoolModule=N:j2cModule=N:jvmRuntimeModule=X:orbPerfModule=N:servletSessionsModule=N:systemModule=N:threadPoolModule=N:transactionModule=N:webAppModule=N:webServicesModule=N:wlmModule=N:wsgwModule=N\"/>" >> $tfile
echo "}" >> $tfile
echo "$next,$ {" >> $tfile
echo "p" >> $tfile
echo "}" >> $tfile
## replace content of ifile
sed -n -f $tfile $bfile > $ifile
## find line with string '<jvmEntries xmi:id="' and set line numbers
target=`sed -n -e '/<jvmEntries xmi:id="/ =' $ifile`
prev=$[${target}-1]
next=$[${target}+1]
## write sed edit commands to temp file
echo "1,$prev {" > $tfile
echo "p" >> $tfile
echo "}" >> $tfile
echo "$target {" >> $tfile
echo "i\<jvmEntries xmi:id=\"JavaVirtualMachine_1\" verboseModeClass=\"false\" verboseModeGarbageCollection=\"false\" verboseModeJNI=\"false\" initialHeapSize=\"0\" maximumHeapSize=\"0\" runHProf=\"false\" debugMode=\"false\" debugArgs=\"-Djava.compiler=NONE -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=7777\" genericJvmArguments=\"-XrunpmiJvmpiProfiler\" disableJIT=\"false\">" >> $tfile
echo "}" >> $tfile
echo "$next,$ {" >> $tfile
echo "p" >> $tfile
echo "}" >> $tfile
## replace content of ifile
## the second pass must read the result of the first pass, not the
## pristine backup, or the first edit would be lost
cp $ifile $bfile.2
sed -n -f $tfile $bfile.2 > $ifile
rm $bfile.2
## cleanup
rm $tfile
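The script expects a single argument, the server.xml of the application server being customized, exactly as passed by the "jvm process settings" stanza of the package definition. Run by hand it would look like this, where the cell and node name myhost is illustrative (on a default single-server installation both follow the host name):

./mod_was_server_JVM_process.sh /opt/IBM/WebSphere/AppServer/config/cells/myhost/nodes/myhost/servers/server1/server.xml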
mod_was_pmirm_settings.sh
Example A-46 is the script for enabling PMI data gathering in
WebSphere Application Server 5.1.
Example: A-46 mod_was_pmirm_settings.sh
#!/bin/sh
ifile=$1
tfile="l"
bfile=$ifile.bkp
## create backup
cp $ifile $bfile
## find line with string '<pmirm:' and set line numbers
target=`sed -n -e '/<pmirm:/ =' $ifile`
prev=$[${target}-1]
next=$[${target}+1]
## write sed edit commands to temp file
echo "1,$prev {" > $tfile
echo "p" >> $tfile
echo "}" >> $tfile
echo "$target {" >> $tfile
echo "i\<pmirm:PMIRequestMetrics xmi:version=\"2.0\" xmlns:xmi=\"http://www.omg.org/XMI\" xmlns:pmirm=\"http://www.ibm.com/websphere/appserver/schemas/5.0/pmirm.xmi\" xmi:id=\"PMIRequestMetrics_1\" enable=\"true\" enableARM=\"false\" traceLevel=\"HOPS\"/>" >> $tfile
echo "}" >> $tfile
echo "$next,$ {" >> $tfile
echo "p" >> $tfile
echo "}" >> $tfile
## replace content of ifile
sed -n -f $tfile $bfile > $ifile
## cleanup
rm $tfile
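As with the previous script, the single argument is the file to edit, in this case the cell-level pmirm.xml, matching the "pmirm settings" stanza of the package definition (the cell name myhost is again illustrative):

./mod_was_pmirm_settings.sh /opt/IBM/WebSphere/AppServer/config/cells/myhost/pmirm.xml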
enable_was_security.sh
Example A-47 is the script for enabling security in WebSphere
Application Server 5.1.
Example: A-47 Enable WAS security
#!/bin/sh
jaclfile=$1
if [ ! -r $jaclfile ]; then
echo "File \"${jaclfile}\" does not exist"
exit 256
fi
was_home=/opt/IBM/WebSphere/AppServer
hostname=`hostname`
servername=$(wasserver)
wasuser=$(wasuser)
waspassword=$(waspassword)
curdir=`pwd`
wsadmin="${was_home}/bin/wsadmin.sh -conntype SOAP -host localhost -port 8880 -username ${wasuser} -password ${waspassword} -f ${jaclfile}"
doit () {
echo "Executing command: " $1
$1
rc=$?
echo "-- result was: " $rc
return $rc;
}
# enable security using the Jacl script, regenerate the plug-in
# configuration, and restart the server
doit "${was_home}/bin/setupCmdLine.sh "
doit "${was_home}/bin/startServer.sh ${servername} -user ${wasuser} -password ${waspassword}"
doit "${wsadmin}"
doit "${was_home}/bin/GenPluginCfg.sh -node.name ${hostname}"
doit "${was_home}/bin/stopServer.sh ${servername} -user ${wasuser} -password ${waspassword}"
doit "${was_home}/bin/startServer.sh ${servername} -user ${wasuser} -password ${waspassword}"
exit 0
enable_was_security.jacl
Example A-48 is the Jacl control file for WebSphere Application
Server security enablement.
Example: A-48 WebSphere Application Server security enablement control
# Usage: wsadmin -conntype SOAP -host <HOSTNAME> -port 8880 -user wasuser
#        -password <PASSWORD> -f enable_was_security.jacl
# Sets the active user registry to the Local OS registry and enables
# global security.
set wasserver "$(wasserver)"
set wasuser "$(wasuser)"
set waspassword "$(waspassword)"
puts " "
puts " "
puts "LocalOSUserRegistry REGISTRY Configuration"
### Set active user registry to Local OS registry
set regName "LocalOS"
set security_item [$AdminConfig list Security]
### List all the user registries defined
set user_regs [$AdminConfig list UserRegistry]
### Find the one that starts with the name we set at the beginning of the script
foreach user_reg $user_regs { if {[regexp $regName $user_reg]} { set new_user_reg $user_reg; break }}
set attrs6 [subst {{activeUserRegistry $new_user_reg} {enabled true} {enforceJava2Security true}}]
## Modify the user registry attribute for the security object
$AdminConfig modify $security_item $attrs6
puts " "
puts " "
puts "Set the properties for Custom User Registry"
## Set the properties for Custom User Registry
#set curClassName "com.ibm.websphere.security.FileRegistrySample"
set serverId $wasuser
set serverPassword $waspassword
##set properties [list [list [list name "usersFile"] [list value "${was_home}/properties/users.prop"]] [list [list name "groupsFile"] [list value "${was_home}/properties/groups.prop"]]]
##set attrs7 [subst {{customRegistryClassName $curClassName} {serverId $serverId} {serverPassword $serverPassword} {properties [list $properties]}}]
set attrs7 [subst {{serverId $wasuser} {serverPassword $waspassword}}]
$AdminConfig modify $new_user_reg $attrs7
$AdminConfig save
exit
was_appserver_510_uninstall.sh
Example A-49 is the script to use for uninstalling WebSphere Application
Server 5.1.
Example: A-49 WebSphere Application Server 5.1 uninstallation
#!/bin/sh
## this is a basic script run when uninstalling WebSphere Application Server;
## it deletes the embedded messaging (WebSphere MQ) user and groups
## and performs the appropriate cleanup functions
##
## To make this script ready for production, review paths, user and group
## names and add logging....
doit () {
echo "Executing command: " $1
$1
rc=$?
echo "-- result was: " $rc
return $rc;
}
usr_mod_group () {
func=$1
grp=$2
usr=$3
grpfile=/etc/group
cp $grpfile $grpfile-backup
sedfile=sed.tmp
tmpfile=tmp.tmp
newfile=new.tmp
old=`grep $grp:x: $grpfile`
if [ "x$old" = "x" ]; then
return 0
fi
case $func in
remove)
new=`echo $old | awk -F"$usr" '{print $1 $2}'`
;;
add)
new=$old,$usr
;;
esac
echo "/$old/ c $new" > $sedfile
sed -f $sedfile $grpfile > $tmpfile
sed -e 's/,,/,/g' $tmpfile > $newfile
sed -e 's/:,/:/g' $newfile > $grpfile
rm $newfile
rm $tmpfile
rm $sedfile
return $?
}
echo $0 " received parms: " $@
doit "userdel -r mqm"
doit "groupdel mqbrkrs"
doit "groupdel mqm"
exit 0
A.7.6 WebSphere Application Server v5.1 Fixpack 1
The files in this section define and build a software package capable of installing
and uninstalling WebSphere Application Server v5.1 fixpack 1.
In the online material, all these files are located in the following path relative to
the parent directory of the type of file:
/websphere/was/510_fp1/appserver/Linux-IX86.
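Before the package can be distributed, the definition must be imported into a
Tivoli profile manager and built into a software package block. The following
sketch uses the standard Software Distribution commands; the profile manager
name, build path, and endpoint label are assumptions, and the exact option
syntax may differ between Configuration Manager releases:

## import the definition and build the package block (names are examples)
wimpspo -c pm-packages -f was_appserver.5.1.0_fp1.spd \
-t build -p /swdist/spb/was_appserver.5.1.0_fp1.spb was_appserver.5.1.0_fp1
## install the package on a target endpoint
winstsp @SoftwarePackage:was_appserver.5.1.0_fp1 @Endpoint:outlet-srv1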
was_appserver.5.1.0_fp1.spd
Example A-50 is the software package definition file.
Example: A-50 Software package definition for was_appserver.5.1.0_fp1
“TIVOLI Software Package v4.2.1 - SPDF”
package
name = was_appserver
title = “WebSphere Application Server Enterprise Edition v5.1 fixpack 1”
version = 5.1.0_fp1
web_view_mode = hidden
undoable = o
committable = o
history_reset = y
save_default_variables = n
creation_time = “2004-11-25 02:56:40”
last_modification_time = “2004-11-25 02:56:40”
default_variables
wasuser = root
waspassword = smartway
wasserver = server1
work_dir = $(root_dir)/$(prod_name)
prod_name = was_appserver_510_fp1
log_dir = $(root_dir)/log/$(prod_name)
img_dir = $(work_dir)/image
bin_dir = $(root_dir)/bin/$(os_family)
root_dir = /swdist
was_dir = /opt/IBM/WebSphere/AppServer
inst_dir = /opt/IBM/WebSphere/AppServer
gz_file = was51_fp1_linux.tar.gz
tar_file = C53IPML-WAS-510-LinuxIX86.tar
home_path = /mnt
prod_path = websphere/was/510_fp1/appserver/$(os_name)-$(os_architecture)
src_base = $(home_path)/code
src_dir = $(src_base)/$(prod_path)
tool_base = $(home_path)/tools
tools_dir = $(tool_base)/$(prod_path)
unpack = yes
cleanup = yes
end
log_object_list
location = $(log_dir)
unix_user_id = 0
unix_group_id = 0
unix_attributes = rwx,rx,
end
source_host_name = srchost
move_removing_host = y
no_check_source_host = y
lenient_distribution = y
default_operation = install
server_mode = all
operation_mode = preferably_not_transactional,preferably_undoable,force
log_path = /mnt/logs/was_appserver_510_fp1.log
post_notice = y
before_as_uid = 0
skip_non_zero = n
after_as_uid = 0
no_chk_on_rm = y
log_host_name = srchost
versioning_type = none
package_type = patch
sharing_control = none
stop_on_failure = y
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = common
translate = n
destination = $(root_dir)/bin
descend_dirs = y
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(src_base)
name = $(prod_path)
translate = n
destination = $(root_dir)/$(prod_name)
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
generic_container
caption = “on install”
stop_on_failure = y
condition = "$(operation_name) == install"
check_disk_space
volume = /,500M
end
execute_user_program
caption = uncompress
condition = “$(unpack) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “/bin/gunzip -dvf $(work_dir)/$(prod_name).tar.gz”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_uncompress.out
error_file = $(log_dir)/$(prod_name)_uncompress.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(src_dir)/$(gz_file)
translate = n
destination = $(work_dir)/$(prod_name).tar.gz
compression_method = stored
rename_if_locked = n
end
end
end
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(src_base)
name = $(prod_path)
translate = n
destination = $(img_dir)
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
execute_user_program
caption = unpack
condition = “$(unpack) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “/bin/tar -C $(img_dir) -xvf
$(work_dir)/$(prod_name).tar”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_unpack.out
error_file = $(log_dir)/$(prod_name)_unpack.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
execute_user_program
caption = “stop server”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “$(was_dir)/bin/stopServer.sh $(wasserver) -user
$(wasuser) -password $(waspassword)”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_stopserver.out
error_file = $(log_dir)/$(prod_name)_stopserver.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
success = 246,246
success = 247,247
failure = 1,245
end
end
end
execute_user_program
caption = “Silent Install”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “$(work_dir)/$(prod_name)_install.sh install
$(was_dir) $(img_dir)”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_install.out
error_file = $(log_dir)/$(prod_name)_install.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(tools_dir)/was_appserver_510_fp1_install.sh
translate = n
destination = $(work_dir)/$(prod_name)_install.sh
compression_method = stored
rename_if_locked = n
end
end
end
end
execute_user_program
caption = “Start server”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “$(was_dir)/bin/startServer.sh $(wasserver) -user
$(wasuser) -password $(waspassword)”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_startserver.out
error_file = $(log_dir)/$(prod_name)_startserver.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
execute_user_program
caption = cleanup
condition = “$(cleanup) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “/bin/rm -r $(work_dir)”
inhibit_parsing = n
working_dir = $(root_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_cleanup.out
error_file = $(log_dir)/$(prod_name)_cleanup.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,127
failure = 0,65535
end
end
end
end
generic_container
caption = “on remove”
stop_on_failure = y
condition = "$(operation_name) == remove"
execute_user_program
caption = “stop server”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = “$(was_dir)/bin/stopServer.sh $(wasserver) -user
$(wasuser) -password $(waspassword)”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_stopserver.out
error_file = $(log_dir)/$(prod_name)_stopserver.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
success = 246,246
success = 247,247
failure = 1,245
end
end
end
execute_user_program
caption = "Silent uninstall"
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = "$(work_dir)/$(prod_name)_install.sh uninstall
$(was_dir) $(img_dir)"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_uninstall.out
error_file = $(log_dir)/$(prod_name)_uninstall.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(tools_dir)/was_appserver_510_fp1_install.sh
translate = n
destination = $(work_dir)/$(prod_name)_install.sh
compression_method = stored
rename_if_locked = n
end
end
end
end
execute_user_program
caption = “Start server”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = “$(was_dir)/bin/startServer.sh $(wasserver) -user
$(wasuser) -password $(waspassword)”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_startserver.out
error_file = $(log_dir)/$(prod_name)_startserver.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
end
end
was_appserver_510_fp1_install.sh
Example A-51 is the install script.
Example: A-51 was_appserver_510_fp1 installation script
#!/bin/sh
function=$1
if [ x${function} = x ]; then
echo "you have to specify function: 'install' or 'uninstall' "
exit 4
fi
WAS_HOME=$2
if [ x${WAS_HOME} = x ]; then
echo "you have to specify the WAS install directory ( /opt/WebSphere/AppServer )"
exit 4
fi
IMG_DIR=$3
if [ x${IMG_DIR} = x ]; then
echo "you have to provide the path to the directory holding the updateSilent.sh script"
exit 4
fi
. ${WAS_HOME}/bin/setupCmdLine.sh
${IMG_DIR}/updateSilent.sh \
-installDir "${WAS_HOME}" \
-fixpack \
-fixpackDir "${IMG_DIR}/fixpacks" \
-${function} \
-fixPackID was51_fp1_linux \
-skipIHS \
-skipMQ
exit $?
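The script can also be run by hand with the same three positional parameters
the software package passes to it: the function, the WebSphere Application
Server home directory, and the directory holding updateSilent.sh. A sketch
with assumed paths:

## apply the fixpack manually (paths are assumptions)
./was_appserver_510_fp1_install.sh install /opt/IBM/WebSphere/AppServer \
/swdist/was_appserver_510_fp1/image
## the same script removes the fixpack when passed uninstall
./was_appserver_510_fp1_install.sh uninstall /opt/IBM/WebSphere/AppServer \
/swdist/was_appserver_510_fp1/image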
A.7.7 DB2 Server v8.2
The following files in this section are used to define and build a software package
which installs and uninstalls IBM DB2 UDB Server v8.2 Enterprise Edition.
In the online material, all these files are located in the following path relative to
the parent directory of the type of file: /db2/udb-ee/820/server/Linux-IX86.
db2_server.8.2.0.spd
Example A-52 is the software package for DB2 Server installation.
Example: A-52 Software package definition for db2_server.8.2.0.spd
“TIVOLI Software Package v4.2.1 - SPDF”
package
name = t_db2_server
title = “DB2 Universal Database Server Enterprise Edition v8.2”
version = 8.2.0
web_view_mode = hidden
undoable = n
committable = n
history_reset = n
save_default_variables = n
creation_time = “2004-11-30 18:07:20”
last_modification_time = “2004-11-30 19:15:03”
default_variables
work_dir = $(root_dir)/$(prod_name)
prod_name = db2_server_820
log_dir = $(root_dir)/log/$(prod_name)
img_dir = $(work_dir)/image/009_ESE_LNX_32_NLV
bin_dir = $(root_dir)/bin/$(os_family)
root_dir = /swdist
inst_dir = /opt/IBM/db2/V8.1
tar_file = C58S8ML-db2udbes82.tar
rsp_file = db2_server_820.rsp
home_path = /mnt
prod_path = db2/udb-ee/820/server/$(os_name)-$(os_architecture)
src_base = $(home_path)/code
src_dir = $(src_base)/$(prod_path)
rsp_base = $(home_path)/rsp
rsp_dir = $(rsp_base)/$(prod_path)
tool_base = $(home_path)/tools
tools_dir = $(tool_base)/$(prod_path)
unpack = yes
cleanup = yes
end
log_object_list
location = $(log_dir)
unix_user_id = 0
unix_group_id = 0
unix_attributes = rwx,rx,rx
end
source_host_name = srchost
move_removing_host = y
no_check_source_host = y
lenient_distribution = n
default_operation = install
server_mode = all,force
operation_mode = not_transactional
log_path = /mnt/logs/db2_server_820.log
post_notice = y
before_as_uid = 0
skip_non_zero = n
after_as_uid = 0
no_chk_on_rm = y
log_host_name = srchost
versioning_type = swd
package_type = refresh
sharing_control = none
stop_on_failure = y
add_directory
stop_on_failure = y
add = y
replace_if_existing = n
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = common
translate = n
destination = $(root_dir)/bin
descend_dirs = y
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_group = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = n
replace_if_newer = n
remove_if_modified = n
location = $(src_base)
name = $(prod_path)
translate = n
destination = $(work_dir)
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_group = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
generic_container
caption = "on install"
stop_on_failure = y
condition = "$(operation_name) == install"
generic_container
caption = "on unpack"
stop_on_failure = y
condition = "$(unpack) == yes"
add_directory
stop_on_failure = n
add = y
replace_if_existing = n
replace_if_newer = n
remove_if_modified = n
location = $(src_base)
name = $(prod_path)
translate = n
destination = $(work_dir)/image
descend_dirs = n
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
execute_user_program
caption = unpack
condition = “$(unpack) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “/bin/tar -C $(work_dir)/image -xvf
$(work_dir)/$(prod_name).tar”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_unpack.out
error_file = $(log_dir)/$(prod_name)_unpack.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(src_dir)/$(tar_file)
translate = n
destination = $(work_dir)/$(prod_name).tar
compression_method = stored
rename_if_locked = n
end
end
end
end
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = n
replace_if_newer = n
remove_if_modified = n
location = $(rsp_base)
name = $(prod_path)
translate = n
destination = $(work_dir)
descend_dirs = n
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = y
is_signature = n
compression_method = stored
rename_if_locked = n
add_file
replace_if_existing = n
replace_if_newer = n
remove_if_modified = n
name = db2_server_820.rsp
translate = n
destination = $(prod_name).rsp
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = y
is_signature = n
compression_method = stored
rename_if_locked = n
end
end
execute_user_program
caption = “Install”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “$(img_dir)/db2setup -r $(work_dir)/$(prod_name).rsp
-l $(log_dir)/$(prod_name)_db2inst.log”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_install.out
error_file = $(log_dir)/$(prod_name)_install.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
execute_user_program
caption = “Update DB2 License”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “$(inst_dir)/adm/db2licm -a
$(img_dir)/db2/license/db2ese.lic”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_update_license.out
error_file = $(log_dir)/$(prod_name)_update_license.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
execute_user_program
caption = “create monitoring user db2ecc”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = $(work_dir)/$(prod_name)_addusers.sh
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_addusers.out
error_file = $(log_dir)/$(prod_name)_addusers.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(tools_dir)/db2_server_820_addusers.sh
translate = n
destination = $(work_dir)/$(prod_name)_addusers.sh
compression_method = stored
rename_if_locked = n
end
end
end
end
execute_user_program
caption = “cleanup image directory”
condition = “$(cleanup) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “/bin/rm -r $(img_dir)”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_cleanup.out
error_file = $(log_dir)/$(prod_name)_cleanup.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
end
generic_container
caption = “on remove”
stop_on_failure = y
condition = "$(operation_name) == remove"
execute_user_program
caption = “DB2 remove”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = $(work_dir)/$(prod_name)_remusers.sh
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove.out
error_file = $(log_dir)/$(prod_name)_remove.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(tools_dir)/db2_deinstall
translate = n
destination = $(work_dir)/db2_deinstall
compression_method = stored
rename_if_locked = n
end
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(tools_dir)/db2_server_820_remusers.sh
translate = n
destination = $(work_dir)/$(prod_name)_remusers.sh
compression_method = stored
rename_if_locked = n
end
end
end
end
end
end
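Once imported and built, the package is driven through its install and remove
operations from the TMR server. A hedged sketch using the standard Software
Distribution commands; the endpoint label is an example:

## install, then later remove, the DB2 package on an endpoint
winstsp @SoftwarePackage:t_db2_server.8.2.0 @Endpoint:outlet-db1
wremovsp @SoftwarePackage:t_db2_server.8.2.0 @Endpoint:outlet-db1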
db2_server_820.rsp
Example A-53 is the response file used to control the installation of DB2 Server
v8.2.
Example: A-53 Response file for DB2 Server v8.2 installation
*-----------------------------------------------------
* Generated response file used by the DB2 Setup wizard
* generation time: 11/13/04 1:43 PM
*-----------------------------------------------------
* Product Installation
LIC_AGREEMENT = ACCEPT
PROD = ENTERPRISE_SERVER_EDITION
INSTALL_TYPE = CUSTOM
COMP = SQL_PROCEDURES
COMP = DB2_CONTROL_CENTER
COMP = CONFIGURATION_ASSISTANT
COMP = DB2_ENGINE
COMP = CONNECT
COMP = DB2_SAMPLE_DATABASE_SOURCE
COMP = CLIENT_APPLICATION_ENABLER
COMP = JAVA_SUPPORT
COMP = REPLICATION
COMP = COMMUNICATION_SUPPORT_TCPIP
*-----------------------------------------------------
* Das properties
*-----------------------------------------------------
DAS_CONTACT_LIST = LOCAL
* DAS user
DAS_USERNAME = dasusr1
DAS_GROUP_NAME = dasadm1
DAS_HOME_DIRECTORY = /home/dasusr1
DAS_PASSWORD = 245255345290348217331337
ENCRYPTED = DAS_PASSWORD
*-----------------------------------------------------
* Instance properties
*-----------------------------------------------------
INSTANCE = inst1
inst1.TYPE = ese
inst1.WORDWIDTH = 32
* Instance-owning user
inst1.NAME = db2inst1
inst1.GROUP_NAME = db2grp1
inst1.HOME_DIRECTORY = /home/db2inst1
inst1.PASSWORD = 245255345290348217331337
ENCRYPTED = inst1.PASSWORD
inst1.AUTOSTART = YES
inst1.AUTHENTICATION = SERVER
inst1.SVCENAME = db2c_db2inst1
inst1.PORT_NUMBER = 50001
inst1.FCM_PORT_NUMBER = 60000
inst1.MAX_LOGICAL_NODES = 4
* Fenced user
inst1.FENCED_USERNAME = db2fenc1
inst1.FENCED_GROUP_NAME = db2fgrp1
inst1.FENCED_HOME_DIRECTORY = /home/db2fenc1
inst1.FENCED_PASSWORD = 245255345290348217331337
ENCRYPTED = inst1.FENCED_PASSWORD
*-----------------------------------------------------
* Installed Languages
*-----------------------------------------------------
LANG = EN
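The response file can be validated outside the software package by feeding it
to db2setup directly, mirroring the Install program in the package. A minimal
sketch, assuming the installation image has been unpacked to
/swdist/db2_server_820/image and that the log path is an arbitrary choice:

## run the DB2 silent installation by hand (paths are assumptions)
cd /swdist/db2_server_820/image/009_ESE_LNX_32_NLV
./db2setup -r /swdist/db2_server_820/db2_server_820.rsp \
-l /tmp/db2_server_820_db2inst.log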
db2_server_820_addusers.sh
Example A-54 is the script used to create system groups and users required by
the DB2 Server installation.
Example: A-54 db2_server_820_addusers.sh script to create db2 users
#!/bin/sh
##
## this is a basic script that adds the db2ecc user to a system
## where DB2 Server has been installed.
##
## the db2ecc user is required for Tivoli Monitoring for Databases
##
doit () {
echo "Executing command: " $1
$1
rc=$?
echo "-- result was: " $rc
return $rc;
}
db2grp=db2grp1
password=smartway
## define group
echo " ensuring existence of db2 user group"
doit "groupadd $db2grp"
## define user db2ecc
echo " creating monitoring user"
doit "useradd -g $db2grp -m db2ecc -p $password"
exit 0
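After the script has run, the definitions can be checked with standard
commands, for example:

## confirm that the group and the monitoring user exist
grep db2grp1 /etc/group
id db2ecc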
db2_server_820_remusers.sh
Example A-55 is the script for removing all system user and group definitions
prior to removing DB2 Server.
Example: A-55 db2_server_820_remusers.sh to remove db2 users
#!/bin/sh
## this is a basic script to remove a DB2 UDB Server
## and perform the appropriate cleanup functions:
##
## Functions performed:
##
## - Add root to db2admin and db2users group to allow root to perform admin
##   functions
##   note: existing additional group relationships for root will be
##   overwritten
##
## - for each instance: db2stop force, drop instance, delete instance user
##
## - delete default fence user and group (db2fenc1:db2fgrp1)
##
## - stop db2admin server
## - delete default db2admin user and group (dasusr1:dasadm1)
##
## - uninstall db2 rpm packages
##
## - remove default installation file system (/opt/IBM/db2)
##
## To make this script ready for production, review paths, user and group
## names and add logging....
doit () {
echo "Executing command: " $1
$1
rc=$?
echo "-- result was: " $rc
return $rc;
}
build_group_list ()
{
action=$1
grp_list=$2
case $action in
add)
new_group_list=$grp
;;
remove)
new_group_list=""
;;
esac
##
##echo $grp_list | tr " " "\n" | while read GROUP;
##do
for GROUP in $grp_list;
do
##echo "read group $GROUP"
case $action in
add)
if [ x$GROUP = x$grp ]; then
# return, already there
echo "Returning - user $usr already member of group $grp"
new_group_list=""
return 4
fi
new_group_list=$new_group_list,$GROUP
;;
remove)
if [ x$GROUP != x$grp ]; then
if [ x$new_group_list = x ]; then
new_group_list=$GROUP
else
new_group_list=$new_group_list,$GROUP
fi
##else
##echo "omitting group '$grp' targeted to be removed"
fi
esac
##echo "Working group list is:" $new_group_list
done
##echo "New group list:" $new_group_list
return 0
}
user_mod () {
##
##echo "Received: "$@
usrfile=/etc/passwd
grpfile=/etc/group
action=$1
usr=$2
grp=$3
##echo "inspecting groups for ${action}'ing '$usr' to/from '$grp'"
## get current secondary groups
a=`grep $usr:x: $usrfile | awk -F: '{ print $1 }'`
if [ x$a = x ]; then
echo "User '$usr' does not exist"
return 4
fi
a=`grep $grp:x: $grpfile | awk -F: '{ print $1 }'`
if [ x$a = x ]; then
echo "Group '$grp' does not exist"
return 4
fi
a=`grep :$usr $grpfile | awk -F: '{ print $1 }'`
b=`grep ,$usr $grpfile | awk -F: '{ print $1 }'`
old_group_list=`echo $a $b | tr "\n" " "`
##echo "current group list:" $old_group_list
tst=`echo $old_group_list | tr " " "\n" | grep $grp`
case $action in
add)
# return if currently a member
if [ x$tst = x$grp ]; then
echo "'$usr' is already a member of '$grp'"
return 0
fi
;;
remove)
# return if currently not a member
if [ x$tst = x ]; then
echo "'$usr' is NOT a member of '$grp'"
return 0
fi
;;
esac
build_group_list "$action" "$old_group_list"
rc=$?
if [ $rc = 0 ]; then
cmd="usermod -G $new_group_list $usr"
## echo "About to execute: "$cmd
doit "$cmd"
rc=$?
fi
return $rc
}
echo $0 " received parms: " $@
db2root=/opt/IBM/db2
db2path=$db2root/V8.1
db2ipath=$db2path/instance
fncusr=db2fenc1
admusr=dasusr1
eccusr=db2ecc
db2grp=db2grp1
fncgrp=db2fgrp1
admgrp=dasadm1
## allow root to issue db2 commands
if [ x`grep ${db2grp}: /etc/group` != x ]; then
echo "allow root to perform db2 functions"
##doit "usermod -G $db2grp,$admgrp root"
##user_mod_group "add" $db2grp "root"
user_mod "add" "root" $db2grp
fi
if [ x`grep ${admgrp}: /etc/group` != x ]; then
echo "allow root to perform db2 admin functions"
user_mod "add" "root" $admgrp
fi
## stop and delete all instances and instance users
if [ -f $db2ipath/db2ilist ]; then
instances=`$db2ipath/db2ilist`
for i in $instances ; do
if [ x`grep ${i}: /etc/passwd` != x ]; then
echo "removing instance $i - including instance user"
. /home/$i/.profile
doit "db2stop force"
doit "$db2ipath/db2idrop $i"
doit "userdel -r $i"
fi
done
fi
## delete fence user
if [ x`grep ${fncusr}: /etc/passwd` != x ]; then
echo "deleting fence user"
doit "userdel -r $fncusr"
fi
## delete monitoring user
if [ x`grep ${eccusr}: /etc/passwd` != x ]; then
echo "deleting monitoring user"
doit "userdel -r $eccusr"
fi
## delete user groups
echo "delete db2 instance groups"
if [ x`grep ${db2grp}: /etc/group` != x ]; then
doit "groupdel $db2grp"
fi
if [ x`grep ${fncgrp}: /etc/group` != x ]; then
doit "groupdel $fncgrp"
fi
## stop and delete the admin server
echo " stop and delete admin server"
if [ x`grep ${admusr}: /etc/passwd` != x ]; then
. /home/dasusr1/.profile
doit "db2admin stop"
doit "$db2ipath/dasdrop"
doit "userdel -r $admusr"
fi
## delete admin group
echo "deleting admin group"
if [ x`grep ${admgrp}: /etc/group` != x ]; then
doit "groupdel $admgrp"
fi
## uninstall product rpm's
echo "removing rpms"
doit "./db2_deinstall"
## remove file system
echo "removing filesystem"
doit "rm -r $db2root"
##usr_mod "remove" "root" $db2grp
exit 0
A.7.8 TMTP Agent v5.3
The files in this section define and build a software package capable of
installing and uninstalling the TMTP Monitoring Agent v5.3.
In the online material, all these files are located in the following path relative to
the parent directory of the type of file: /tivoli/TMTP/530/agents/generic.
tmtp_agent.5.3.0.spd
Example A-56 is the software package definition for TMTP Monitoring Agent
v5.3.
Example: A-56 Software Package for TMTP Agent deployment
“TIVOLI Software Package v4.2.1 - SPDF”
package
name = t_tmtp_agent
title = “TMTP Management Agent v5.3”
version = 5.3.0
web_view_mode = hidden
undoable = n
committable = o
history_reset = y
save_default_variables = n
creation_time = “2004-12-02 19:11:03”
last_modification_time = “2004-12-02 19:11:03”
default_variables
inst_ems = no
work_dir = $(root_dir)/$(prod_name)
prod_name = tmtp_agent_530
log_dir = $(root_dir)/log/$(prod_name)
img_dir = $(work_dir)/image
bin_dir = $(root_dir)/bin/$(os_family)
root_dir = /swdist
inst_dir = /opt/IBM/Tivoli/MA
rsp_file = MA.rsp
home_path = /mnt
prod_path = tivoli/tmtp/530/agents/generic
src_base = $(home_path)/code
src_dir = $(src_base)/$(prod_path)
rsp_base = $(home_path)/rsp
tool_base = $(home_path)/tools
tools_dir = $(tool_base)/$(prod_path)
unpack = yes
cleanup = yes
end
log_object_list
location = $(log_dir)
unix_user_id = 0
unix_group_id = 0
unix_attributes = rwx,rx,
end
source_host_name = srchost
move_removing_host = y
no_check_source_host = y
lenient_distribution = y
default_operation = install
server_mode = all
operation_mode = not_transactional,force
log_path = /mnt/logs/tmtp_agent_530.log
post_notice = y
before_as_uid = 0
skip_non_zero = n
after_as_uid = 0
no_chk_on_rm = y
log_host_name = srchost
versioning_type = swd
package_type = refresh
sharing_control = none
stop_on_failure = y
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = common
translate = n
destination = $(root_dir)/bin
descend_dirs = y
remove_empty_dirs = n
is_shared = y
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(src_base)
name = $(prod_path)
translate = n
destination = $(root_dir)/$(prod_name)
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
generic_container
caption = “on install”
stop_on_failure = y
condition = "$(operation_name) == install"
check_disk_space
volume = /,1500M
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(src_dir)
name = MA
translate = n
destination = $(img_dir)
descend_dirs = y
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(rsp_base)
name = $(prod_path)
translate = n
destination = $(work_dir)
descend_dirs = n
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(rsp_file)
translate = n
destination = $(prod_name).rsp
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
end
execute_user_program
caption = install
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “$(img_dir)/setup_MA_lin.bin -is:silent -is:log
$(log_dir)/$(prod_name)_product_install.log -silent -options
$(work_dir)/$(prod_name).rsp”
inhibit_parsing = n
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_install.out
error_file = $(log_dir)/$(prod_name)_install.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = n
reporting_stderr_on_server = n
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
execute_user_program
caption = cleanup
condition = “$(cleanup) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “rm -r $(img_dir)”
inhibit_parsing = n
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_install_cleanup.out
error_file = $(log_dir)/$(prod_name)_install_cleanup.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = n
reporting_stderr_on_server = n
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
end
generic_container
caption = “on remove”
stop_on_failure = y
condition = "$(operation_name) == remove"
execute_user_program
caption = remove
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = “$(inst_dir)/_uninst53/uninstall.bin -is:silent
-is:log $(log_dir)/$(prod_name)_product_remove.log -silent”
inhibit_parsing = n
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove.out
error_file = $(log_dir)/$(prod_name)_remove.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = n
reporting_stderr_on_server = n
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
execute_user_program
caption = cleanup
condition = “$(cleanup) == yes”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = “rm -r $(work_dir)”
inhibit_parsing = n
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove_cleanup.out
error_file = $(log_dir)/$(prod_name)_remove_cleanup.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = n
reporting_stderr_on_server = n
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
end
end
MA.rsp
Example A-57 is the response file controlling the installation of the TMTP
Monitoring Agent v5.3.
Example: A-57 Response file for TMTP Agent deployment
-W msConnection.hostName="tmtpsrv"
-W msConnection.userName="root"
-W msConnection.password="smartway"
-W msConnection.sslValue="true"
-W msConnection.copyKeyFile="true"
-W msConnection.protocol="https"
-W msConnection.portNumber="9446"
-W msConnection.epKeyStore="$(img_dir)/keyfiles/agent.jks"
-W msConnection.epKeyPass="changeit"
-W msConnection.maPort="1976"
#-W serviceUser.user="TMTPAgent"
#-W serviceUser.password="tivoli"
-W logSettings.logLevel="ALL"
-W logSettings.consoleOut="false"
-W uninstall_reboot.rebootSelected="true"
#-P ma.installLocation="C:/Program Files/Tivoli/MA"
-P ma.installLocation="$(inst_dir)"
### -P ma.installLocation="/opt/IBM/Tivoli/MA"
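The same response file drives a manual silent installation of the Management
Agent, mirroring the install program in the software package. This sketch
assumes the image is unpacked under /swdist/tmtp_agent_530/image and that the
$(...) variables in the response file have first been replaced with real paths:

## run the Management Agent installer by hand (paths are assumptions)
/swdist/tmtp_agent_530/image/setup_MA_lin.bin -is:silent \
-is:log /tmp/tmtp_agent_530_product_install.log \
-silent -options /swdist/tmtp_agent_530/tmtp_agent_530.rsp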
A.7.9 WebSphere Caching Proxy v5.1
The following files in this section are used to define and build a software package
capable of installing and uninstalling WebSphere Caching Proxy v5.1.
In the online material, all these files are located in the following path relative to
the parent directory of the type of file:
/websphere/edge/510/cachingproxy/Linux-IX86.
wses_cachingproxy.5.1.0.spd
Example A-58 is the software package definition file for installing WebSphere
Caching Proxy v5.1.
Example: A-58 Software package for Caching Proxy deployment
“TIVOLI Software Package v4.2.1 - SPDF”
package
name = t_wses_cackingproxy
title = “WAS Caching Proxy V5.1.0”
version = 5.1.0
web_view_mode = hidden
undoable = n
committable = n
history_reset = n
save_default_variables = n
creation_time = “2004-11-29 18:06:10”
last_modification_time = “2004-11-29 18:06:10”
default_variables
root_dir = /swdist
prod_name = wses_cachingproxy_510
work_dir = $(root_dir)/$(prod_name)
log_dir = $(root_dir)/log/$(prod_name)
img_dir = $(work_dir)/image
bin_dir = $(root_dir)/bin/$(os_family)
inst_dir = /opt/ibm/edge
tar_file = EdgeComponents51-Linux.tar
home_path = /mnt
prod_path = websphere/edge/510/cachingproxy/$(os_name)-$(os_architecture)
src_base = $(home_path)/code
src_dir = $(src_base)/$(prod_path)
rsp_base = $(home_path)/rsp
tool_base = $(home_path)/tools
tools_dir = $(tool_base)/$(prod_path)
unpack = yes
cleanup = yes
user = root
password = smartway
conf_dir = /opt/ibm/edge/cp/etc/en_US
LoadURL = RS-AppServer.demo.tivoli.com
end
log_object_list
location = $(log_dir)
unix_user_id = 0
unix_group_id = 0
unix_attributes = rwx,rx,
end
source_host_name = srchost
move_removing_host = y
no_check_source_host = y
lenient_distribution = n
default_operation = install
server_mode = all
operation_mode = not_transactional
log_path = /mnt/logs/wses_cachingproxy_510.log
post_notice = y
before_as_uid = 0
skip_non_zero = n
after_as_uid = 0
no_chk_on_rm = y
log_host_name = srchost
versioning_type = swd
package_type = refresh
sharing_control = none
stop_on_failure = y
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = common
translate = n
destination = $(root_dir)/bin
descend_dirs = y
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
generic_container
caption = “on install - unpack”
stop_on_failure = y
condition = “$(operation_name) == install”
check_disk_space
volume = /,250M
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(rsp_base)
name = $(prod_path)
translate = n
destination = $(root_dir)/$(prod_name)
descend_dirs = n
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
add_file
replace_if_existing = y
replace_if_newer = y
remove_if_modified = n
name = ibmproxy.conf
translate = n
destination = $(prod_name).conf
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
end
add_directory
stop_on_failure = y
condition = “$(unpack) == yes”
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(src_base)
name = $(prod_path)
translate = n
destination = $(img_dir)
descend_dirs = n
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
execute_user_program
caption = unpack
condition = “$(unpack) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “/bin/tar -C $(img_dir) -xvf
$(work_dir)/$(prod_name).tar”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_unpack.out
error_file = $(log_dir)/$(prod_name)_unpack.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(src_dir)/$(tar_file)
translate = n
destination = $(work_dir)/$(prod_name).tar
compression_method = stored
rename_if_locked = n
end
end
end
end
end
generic_container
caption = “on remove - stop”
stop_on_failure = y
condition = "$(operation_name) == remove"
execute_user_program
caption = “stop server”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = “$(work_dir)/$(prod_name)_remove.sh”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove.out
error_file = $(log_dir)/$(prod_name)_remove.err
output_file_append = y
error_file_append = y
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(tools_dir)/wses_cachingproxy_510_remove.sh
translate = n
destination = $(work_dir)/$(prod_name)_remove.sh
compression_method = stored
rename_if_locked = n
end
end
end
end
end
install_rpm_package
caption = “WAS Caching Proxy”
rpm_options = -vv
rpm_install_type = install
rpm_install_force = n
rpm_install_nodeps = y
rpm_remove_nodeps = y
rpm_report_log = y
rpm_file
image_dir = $(img_dir)/admin
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = WSES_Admin_Runtime-5.1.0-0
rpm_package_file = WSES_Admin_Runtime-5.1.0-0.i686.rpm
end
rpm_file
image_dir = $(img_dir)/cp
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = WSES_CachingProxy_msg_en_US-5.1.0-0
rpm_package_file = WSES_CachingProxy_msg_en_US-5.1.0-0.i686.rpm
end
rpm_file
image_dir = $(img_dir)/icu
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = WSES_ICU_Runtime-5.1.0-0
rpm_package_file = WSES_ICU_Runtime-5.1.0-0.i686.rpm
end
rpm_file
image_dir = $(img_dir)/cp
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = WSES_CachingProxy-5.1.0-0
rpm_package_file = WSES_CachingProxy-5.1.0-0.i686.rpm
end
rpm_file
image_dir = $(img_dir)/doc
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = WSES_Doc_en_US-5.1.0-0
rpm_package_file = WSES_Doc_en_US-5.1.0-0.i686.rpm
end
end
generic_container
caption = “on install - after rpm install”
stop_on_failure = y
condition = “$(operation_name) == install”
execute_user_program
caption = “configure proxy server”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “$(work_dir)/$(prod_name)_setup.sh $(user) $(password)
$(work_dir)/$(prod_name).conf $(conf_dir)”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_setup.out
error_file = $(log_dir)/$(prod_name)_setup.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(tools_dir)/wses_cachingproxy_510_setup.sh
translate = n
destination = $(work_dir)/$(prod_name)_setup.sh
compression_method = stored
rename_if_locked = n
end
end
end
end
execute_user_program
caption = cleanup
condition = “$(cleanup) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “/bin/rm -r $(work_dir)”
inhibit_parsing = n
working_dir = $(root_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_cleanup.out
error_file = $(log_dir)/$(prod_name)_cleanup.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
end
generic_container
caption = “on remove”
stop_on_failure = y
condition = "$(operation_name) == remove"
execute_user_program
caption = cleanup
condition = “$(cleanup) == yes”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = "/bin/rm -r $(inst_dir)"
inhibit_parsing = n
working_dir = $(root_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove.out
error_file = $(log_dir)/$(prod_name)_remove.err
output_file_append = y
error_file_append = y
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
end
end
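Because the Caching Proxy components are delivered as RPM packages, a
successful installation can be verified directly against the RPM database,
for example:

## list the Edge Server packages installed by the software package
rpm -qa | grep WSES
rpm -q WSES_CachingProxy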
ibmproxy.conf
Example A-59 is used to provide basic customization of the WebSphere Caching
Proxy. From the software package, it is referenced as a response file.
Example: A-59 ibmproxy.conf for Outlet Inc.
#
#
#
#
#
#
#
#
#
#
(C) COPYRIGHT International Business Machines Corp. 1997, 2000, 2001
All Rights Reserved
Licensed Materials - Property of IBM
US Government Users Restricted Rights - Use, duplication or
disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
===================================================================== #
This is the default configuration file for the
Appendix A. Configuration files and scripts
513
#
IBM WebSphere Edge Server Web Traffic Express
#
proxy server.
#
#
TABLE OF CONTENTS
#
=================
#
- Basic directives
#
- Process control
#
- Logging directives
#
* log file directives
#
* log archive directives
#
* log filtering directives
#
- Method directives
#
- Content presentation directives
#
* Welcome pages directives
#
* Directory browsing directives
#
* CGI program directives
#
* Content type directives
#
- Error Message directives
#
- API directives
#
- User authentication and document protection
#
- Mapping rules
#
- Performance directives
#
- Timeout directives
#
- Proxy directives
#
- Proxy caching directives
#
- Proxy cache garbage collection directives
#
- Advanced proxy and caching directives
#
- Remote Cache Access (RCA) directives
#
- SNMP directives
#
- Icon directives
#
- Cache agent directives
#
- PICS Filtering directives
#
- Miscellaneous directive
#
- System Plug-in directives
#
# ===================================================================== #
# ===================================================================== #
#
#
Basic directives
#
# ===================================================================== #
#
#
#
514
ServerRoot directive:
Set this to point to the directory where you unpacked this
Implementing a Tivoli Solution for Central Management of Large Distributed Environments
#
distribution, or wherever you want ibmproxy to have its “home”.
#
#
Default: /opt/ibm/edge/cp/server_root
#
Syntax:
ServerRoot
<path>
ServerRoot /opt/ibm/edge/cp/server_root
#
HeaderServerName directive:
#
#
This directive allows the administrator to modify the name of the
#
proxy server contained within the http header.
#
#
The default server name is IBM-PROXY-WTE-US/VersionNumber.
#
#
Syntax:
HeaderServerName
<headerservername>
#
# HeaderServerName <none>
#
#
#
#
#
#
#
#
#
Hostname directive:
Specify the fully qualified hostname, including the domain. You can
use an alias (if you have one set up) instead of the machine’s real
host name so that clients will not be able to see the real host name.
Default:
Syntax:
<host name default defined in DNS>
Hostname
<fully qualified host name>
#
BindSpecific directive:
#
#
Allows a multi-homed system to run a different server on
#
each IP address.
#
#
Default: off
#
Syntax:
BindSpecific <on | off>
BindSpecific off
#
Port directive:
#
#
Port used by the server.
#
NOTE: If the server is not started by root, you must use a port
#
above 1024; good choices are 8000, 8001, 8080.
#
#
Default: 80
#
Syntax:
Port <num>
Port 8080
Appendix A. Configuration files and scripts
515
#
AdminPort directive:
#
#
This port may be used by the administrator for access to the
#
server status pages or configuration forms. Requests to this
#
port will not be queued with all other incoming requests on the
#
standard port(s) defined with the Port directive. However, they
#
will go through the normal access control and request-mapping
#
rules (Pass, Exec, Protect, etc).
#
#
The administration port must not be the same as the standard
#
port(s) defined with the Port directive.
#
#
Default: 8008
#
Syntax:
AdminPort <num>
AdminPort
8008
# ===================================================================== #
#
#
Process control directives
#
# ===================================================================== #
516
#
#
#
#
#
#
#
#
UserId
UserId directive:
#
#
#
#
#
#
#
#
GroupId
GroupId directive:
#
#
#
#
#
NoBG directive:
Specify the user name/number to which the server changes
before accessing files, if the server was started as root.
Default:
Syntax:
nobody
UserId <user name/number>
nobody
Specify the group name/number the server changes to
before accessing files, if the server was started as root.
Default:
Syntax:
nobody
nobody
nogroup (for the SuSE Linux distribution)
GroupId <group name/number>
Normally, when the server is started, the process forks to go
into the background, and the first process exits. If you turn on
NoBG, the main process will not go to the background. If you’re
Implementing a Tivoli Solution for Central Management of Large Distributed Environments
#
#
#
#
#
NoBG off
using init to start the server, it may be useful to set this
to ON (and then init can respawn the server if it fails).
#
#
#
#
#
#
#
#
#
#
#
#
PidFile directive:
Default:
Syntax:
off
NoBG <on | off>
When the server process starts, it will record its process id
(“pid”) in a file, for use by the “ibmproxy -restart” command. This
directive specifies the location for that file. If you are
running multiple instances of the server on a single system, each
should have its own PidFile.
Default:
Syntax:
<server-root>/ibmproxy-pid
If no ServerRoot directive is given, then the default for
the PidFile is /tmp/ibmproxy-pid
PidFile <path-to-pid-file-into>
# ===================================================================== #
#
#
Logging directives
#
# ===================================================================== #
#
#   ==============================
#   *** log file directives ***
#   ==============================
#
#   If you want logging, specify locations for your logs:
#
#   AccessLog      - used for logging local document requests
#   ProxyAccessLog - used for logging proxy requests
#   CacheAccessLog - used for logging hits on proxy cache
#   ErrorLog       - used for logging any errors
#   EventLog       - used for logging initialization events
#
#   NOTE: To enable logging of requests to the proxy cache, the
#   following must be defined:
#     Caching MUST be turned ON (default is OFF)
#     CacheAccessLog MUST be defined
#
#   Defaults: AccessLog      /opt/ibm/edge/cp/server_root/logs/local
#             ProxyAccessLog /opt/ibm/edge/cp/server_root/logs/proxy
#             CacheAccessLog /opt/ibm/edge/cp/server_root/logs/cache
#             ErrorLog       /opt/ibm/edge/cp/server_root/logs/error
#             EventLog       /opt/ibm/edge/cp/server_root/logs/event
#   Syntax:   <directive> <fullpath-filename>
#
CacheAccessLog /opt/ibm/edge/cp/server_root/logs/cache
ProxyAccessLog /opt/ibm/edge/cp/server_root/logs/proxy
AccessLog      /opt/ibm/edge/cp/server_root/logs/local
ErrorLog       /opt/ibm/edge/cp/server_root/logs/error
EventLog       /opt/ibm/edge/cp/server_root/logs/event
# LogFileFormat:
#
# Specify the format of the access log files.
# By default, logs are displayed in the NCSA Common Log Format.
# Specify “combined” to get the NCSA Combined Log Format instead.
# Entries in the combined format are the same as those in the
# common format with the addition of fields for the referring URL,
# User Agent, and Cookie (if present in the request). Certain site
# analysis tools, such as IBM’s Site Analyzer, require proxy logs
# to be in Combined format.
#
# Default: common
# Syntax: LogFileFormat <common | combined>
#
# LogFileFormat common
# LogToSyslog directive:
#
#   In addition to logging access request information to the server logs,
#   you can send log entries to the UNIX syslog daemon.
#
#   Default: off
#   Syntax:  LogToSyslog <on | off>
#
LogToSyslog off
# NoLog directive:
#
#   Suppress access log entries for hosts matching a given IP address
#   or hostname. Wild cards "*" may be used. This directive may be used
#   multiple times within the configuration file.
#
#   Default: <none>
#   Syntax:  NoLog <hostnames and IP addresses>
#
#   NOTE: DNS-Lookup may need to be turned ON.
#
#   Example:
#   NoLog 128.141.*.*
#   NoLog *.location.company.com
#   NoLog *.*.*.com
# LogVirtualHostName directive:
#
#   Log the virtual host name into the log files. The virtual host name
#   will be merged into the URL or placed at the beginning of each record.
#   For the record of an HTTP 1.0 request without a HOST header, the IP
#   address of the server will be recorded into the log file.
#
#   Default: <none>
#   Syntax:  LogVirtualHostName <none | merged | beginning>
#
#   NOTE: DNS-Lookup may need to be turned ON.
#
#   Example:
#   LogVirtualHostName merged
#   LogVirtualHostName beginning
# MaxLogFileSize directive:
#
# MaxLogFileSize specifies the maximum size for each log file. When
# a log file exceeds the specified size, it is closed, and a new file
# is opened for writing. A version number is appended to logfile names
# to distinguish log files of the same type created on the same day.
# The unit specified can be B (bytes), K (kilobytes), M (megabytes),
# or G (gigabytes).
#
# NOTE: Recommended size should be no more than one fourth of swap
# space.
#
# Default: <none>
# Syntax:  MaxLogFileSize <size> <B | K | M | G>
#
# Example:
# MaxLogFileSize 100 M
#
#   ==============================
#   *** log archive directives ***
#   ==============================
#
# LogArchive directive:
#
#   Specify the type of archive processing to use.
#
#   Default: Purge
#   Syntax:  LogArchive <Compress | Purge | none>
#
LogArchive Purge
# Compress directives:
#
#   If you specified "Compress" for LogArchive, specify:
#     - the age at which the log files should be compressed
#     - the age at which the log files should be deleted
#     - the compress command you want executed against the
#       log archive files
#
#   Default: 0
#   Syntax:  CompressAge <num>
#   Syntax:  CompressDeleteAge <num>
#
#   Default: <none>
#   Syntax:  CompressCommand <compress-command>
#
CompressAge 1
CompressDeleteAge 7
# CompressCommand tar -cf /logarchs/log%%DATE%%.tar %%LOGFILES%% ; gzip /logarchs/log%%DATE%%.tar
# CompressCommand tar -cf /logarchs/log%%DATE%%.tar %%LOGFILES%% ; compress /logarchs/log%%DATE%%.tar
# CompressCommand zip -q /logarchs/log%%DATE%%.tar %%LOGFILES%%
# Purge directives:
#
#   If you specified "Purge" for LogArchive,
#   specify the age and size (in megabytes) limits
#   at which time the files should be purged.
#
#   Syntax:  PurgeAge <num>
#   Default: PurgeAge 7
#   Syntax:  PurgeSize <num>
#   Default: PurgeSize 0
#
PurgeAge 7
PurgeSize 0
#
#   ====================================
#   *** log filtering directives ***
#   ====================================
#
# AccessLogExcludeUserAgent directive:
#
#   A filter to exclude request URLs from user-agents that match a
#   given template.
#
#   Default: Requests from Network Dispatcher's HTTP and WTE advisors
#            will not be logged
#   Syntax:  AccessLogExcludeUserAgent <User-Agent template>
#
AccessLogExcludeUserAgent IBM_Network_Dispatcher_HTTP_Advisor
AccessLogExcludeUserAgent IBM_Network_Dispatcher_WTE_Advisor
#
# AccessLogExcludeURL, AccessLogExcludeMethod,
# AccessLogExcludeReturnCode and AccessLogExcludeMimeType directives:
#
#   Access log entries may be filtered to exclude:
#     * requests matching a given URL template
#     * requests of a given method
#     * requests with a given return code range (200s, 300s, 400s, 500s)
#     * requests for files of a given mime type
#
#   Default: <none>
#   Syntax:  AccessLogExcludeURL        <URL template>
#            AccessLogExcludeMethod     <GET | PUT | POST | DELETE>
#            AccessLogExcludeReturnCode <200 | 300 | 400 | 500>
#            AccessLogExcludeMimeType   <text/html | text/plain |
#                                        text/other | image/gif |
#                                        image/jpeg | image/(other) |
#                                        application/* |
#                                        audio/* | video/* |
#                                        (other)/(other)>
#
#   Example:
#   AccessLogExcludeURL *.gif
#   AccessLogExcludeURL /Freebies/*
#   AccessLogExcludeMethod PUT
#   AccessLogExcludeMethod POST
#   AccessLogExcludeReturnCode 300
#   AccessLogExcludeReturnCode 400
#   AccessLogExcludeMimeType text/html
#   AccessLogExcludeMimeType text/plain
# ===================================================================== #
#
#   Method directives
#
# ===================================================================== #
#
#   HTTP Methods that you do or don't want your server to accept.
#
#   NOTE: Please reference online documentation/help to specify or
#   create other methods.
#
#   Default: Enable  GET
#            Enable  HEAD
#            Enable  POST
#            Enable  TRACE
#            Enable  OPTIONS
#            Enable  CONNECT
#            Disable PUT
#            Disable DELETE
#   Syntax:  Enable <method>
#            Disable <method>
#
Enable  GET
Enable  HEAD
Enable  POST
Enable  TRACE
Enable  OPTIONS
Enable  CONNECT
Disable PUT
Disable DELETE
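#
#   Editor's illustration (not in the shipped file): to accept uploads
#   from trusted clients you would instead code, e.g.:
#   Enable PUT
#   Enable DELETE
#   and restrict those methods with a Protect directive (see the
#   protection section later in this file).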
# ===================================================================== #
#
#   Content presentation directives
#
# ===================================================================== #
#
#   ==============================
#   *** Welcome pages ***
#   ==============================
#
# Welcome directive:
#
#   Specifies the default file name to use when only a directory name is
#   specified in the URL. Many Welcome directives may be defined, with the
#   one defined earliest having precedence.
#
#   Multi-homed servers can use the IP-address-template parameter to
#   specify an address template which restricts the server to displaying
#   a specific welcome page based on which address template matches.
#
#   NOTE: the address of the server's network connection is compared
#   to the template, not the address of the requesting client.
#
#   Defaults: Welcome.html, welcome.html, index.html, Frntpage.html
#   Syntax:   Welcome file-name [IP-address-template]
#
#   Example:
#   Welcome letsgo.html
#   Welcome CustomerA.html 9.67.106.79
Welcome Welcome.html
Welcome welcome.html
Welcome index.html
Welcome Frntpage.html
#
#   =====================================
#   *** Directory browsing directives ***
#   =====================================
#
#   These directives control directory listings as follows:
#     * Enable/disable or selective directory browsing
#     * Configure/disable readme feature for directory browsing
#     * Control the appearance of the directory listing
#     * Define the maximum width of the description text
#     * Define the maximum & minimum width of the filename field
#
#   Default: DirShowIcons        on
#            DirShowDate         on
#            DirShowSize         on
#            DirShowDescription  on
#            DirShowCase         on
#            DirShowHidden       on
#            DirShowBytes        off
#   Syntax:  <directive> <on | off>
#
#   Default: DirShowMaxDescrLength 25
#            DirShowMaxLength      25
#            DirShowMinLength      15
#   Syntax:  <directive> <num>
#
DirShowIcons          on
DirShowDate           on
DirShowSize           on
DirShowDescription    on
DirShowCase           on
DirShowHidden         on
DirShowBytes          off
DirShowMaxDescrLength 25
DirShowMaxLength      25
DirShowMinLength      15
# FTPDirInfo directive:
#
#   FTP servers may generate a welcome or description message for
#   a directory. This can optionally be displayed as part of FTP
#   directory listings; in addition, you can control where it will
#   be displayed. The following options are available:
#
#   FTPDirInfo top    - display the welcome message at the top of the
#                       page, before the listing of files in the directory.
#   FTPDirInfo bottom - display the welcome message at the bottom of the
#                       page, after the listing of files in the directory.
#   FTPDirInfo off    - do not display the welcome message from the
#                       FTP server.
#
#   Note that this directive gives no control over the content of the
#   message itself; that message will be generated by the FTP server
#   being contacted.
#
#   Default: FTPDirInfo top
#   Syntax:  FTPDirInfo <top | bottom | off>
#
#   Example: don't display welcome messages from FTP servers
#   FTPDirInfo off
FTPDirInfo top
# DirBackgroundImage directive:
#
#   This directive allows applying a background image to directory
#   listings generated by the proxy. Directory listings are generated
#   when browsing FTP sites through the proxy.
#
#   The background image should be given as an absolute path. If the
#   image is located at another server, the background image must
#   be specified as a full URL.
#
#   If no background image is specified, a plain white background will
#   be used.
#
#   Default: no background image specified
#   Syntax:  DirBackgroundImage /some/image.jpg
#
#   Example: use /images/corplogo.png as background graphic
#   DirBackgroundImage /images/corplogo.png
#
#   Example: use /graphics/embossed.gif on Web server www.somehost.com as
#   background graphic
#   DirBackgroundImage http://www.somehost.com/graphics/embossed.gif
#
#   =====================================
#   *** CGI program directives ***
#   =====================================
#
# InheritEnv and DisInheritEnv directives:
#
#   InheritEnv    - Specify which environment variables are
#                   inherited by CGI programs.
#   DisInheritEnv - Specify which environment variables are
#                   disinherited by CGI programs.
#
#   By default (if neither statement is used), all environment
#   variables are inherited by CGI programs. If you include any
#   InheritEnv directive, only those environment variables
#   specified on InheritEnv directives will be inherited. To
#   exclude individual environment variables from being inherited,
#   use the DisInheritEnv directive.
#
#   NOTE: Refer to the WTE Programming Guide for a list
#   of the CGI-specific environment variables.
#
#   Default: <none>
#   Syntax:  InheritEnv    <variable>
#   Syntax:  DisInheritEnv <variable>
#
#   Example:
#   InheritEnv PATH
#   InheritEnv LANG=ENUS
#   DisInheritEnv PATH
#   DisInheritEnv LANG
#
#   =====================================
#   *** Content type directives ***
#   =====================================
#
# imbeds directive:
#
#   Controls Server Side Include processing for output that has
#   Content-type: text/x-ssi-html.
#
#   Default: off SSIOnly
#   Syntax:  imbeds <on | off | files | cgi | noexec> <html | SSIOnly>
#
#   parm1: on      - process files, CGIs and SSI #exec CGI
#          off     - never use SSI
#          files   - process files and SSI #exec CGI
#          cgi     - process CGIs and SSI #exec CGI
#          noexec  - process files, CGIs but not SSI #exec CGI
#   parm2: html    - also process Content-type: text/html
#          SSIOnly - process Content-type: text/x-ssi-html only
#
imbeds on SSIOnly
# SuffixCaseSense directive:
#
#   Specify whether case sensitivity for suffixes is on or off.
#
#   Default: off
#   Syntax:  SuffixCaseSense <on | off>
#
#   NOTE: This directive should be placed BEFORE any AddType or
#   AddEncoding directives.
#
SuffixCaseSense off
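#
#   Editor's note (illustrative): with SuffixCaseSense off, a request for
#   logo.GIF matches the AddType .gif entry below; the explicit .JPG/.jpg
#   AddType pairs further down only matter when this is set on.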
# AcceptAnything directive:
#
#   If this is set to ON, documents will be served to the client even
#   if the MIME type of the document does not match an Accept:
#   header sent by the client. If this is OFF, documents with a MIME
#   type that the client does not state it understands will cause
#   the client to see an error page instead.
#
#   Default: ON
#   Syntax:  AcceptAnything <on | off>
#
AcceptAnything on
# AddType directive:
#
#   Map a suffix to the content-type of a file.
#
#   Defaults: see list below
#   Syntax:   AddType <.suffix> <representation> <encoding> <quality>
#             where <quality> is optional
#
# Application-specific types
AddType .bcpio    application/x-bcpio              binary  1.0  # Old binary CPIO
AddType .cpio     application/x-cpio               binary  1.0  # POSIX CPIO
AddType .gtar     application/x-gtar               binary  1.0  # Gnu tar
AddType .bin      application/octet-stream         binary  1.0  # Uninterpreted binary
AddType .class    application/octet-stream         binary  1.0  # Java applet or application
AddType .dms      application/octet-stream         binary  1.0
AddType .exe      application/octet-stream         binary  0.8  # MSDOS/OS2/WIN executables
AddType .dll      application/octet-stream         binary  0.8  # OS2/WIN executables
AddType .lha      application/octet-stream         binary  0.8  # LHArc
AddType .lzh      application/octet-stream         binary  0.8  # Compressed data
AddType .oda      application/oda                  binary  1.0
AddType .pdf      application/pdf                  binary  1.0
AddType .ai       application/postscript           8bit    0.5  # Adobe Illustrator
AddType .PS       application/postscript           8bit    0.8  # PostScript
AddType .eps      application/postscript           8bit    0.8
AddType .ps       application/postscript           8bit    0.8
AddType .rtf      application/rtf                  7bit    1.0  # RTF
AddType .sit      application/stuffit              binary  1.0  # Mac StuffIt compressor
AddType .csh      application/x-csh                7bit    0.5  # C-shell script
AddType .dvi      application/x-dvi                binary  1.0  # TeX DVI
AddType .hdf      application/x-hdf                binary  1.0  # NCSA HDF data file
AddType .latex    application/x-latex              8bit    1.0  # LaTeX source
AddType .nc       application/x-netcdf             binary  1.0  # Unidata netCDF data
AddType .cdf      application/x-netcdf             binary  1.0
AddType .js       application/x-javascript         8bit    1.0  # Java script
AddType .sh       application/x-sh                 7bit    0.5  # Shell-script
AddType .shar     application/x-shar               8bit    1.0  # Shell archive
AddType .sv4cpio  application/x-sv4cpio            binary  1.0  # SVR4 CPIO
AddType .sv4crc   application/x-sv4crc             binary  1.0  # SVR4 CPIO with CRC
AddType .tcl      application/x-tcl                7bit    0.5  # TCL-script
AddType .tex      application/x-tex                8bit    1.0  # TeX source
AddType .texi     application/x-texinfo            7bit    1.0  # Texinfo
AddType .texinfo  application/x-texinfo            7bit    1.0  # Texinfo
AddType .t        application/x-troff              7bit    0.5  # Troff
AddType .roff     application/x-troff              7bit    0.5  # Troff
AddType .tr       application/x-troff              7bit    0.5  # Troff
AddType .man      application/x-troff-man          7bit    0.5  # Troff with man macros
AddType .me       application/x-troff-me           7bit    0.5  # Troff with me macros
AddType .ms       application/x-troff-ms           7bit    0.5  # Troff with ms macros
AddType .src      application/x-wais-source        7bit    1.0  # WAIS source
AddType .prs      application/x-freelance          binary  1.0  # Lotus Freelance
AddType .pre      application/vnd.lotus-freelance  binary  1.0  # Lotus Freelance
AddType .prz      application/vnd.lotus-freelance  binary  1.0  # Lotus Freelance
AddType .lwp      application/vnd.lotus-wordpro    binary  1.0  # Lotus Word Pro
AddType .sam      application/vnd.lotus-wordpro    binary  1.0  # Lotus Word Pro
AddType .apr      application/vnd.lotus-approach   binary  1.0  # Lotus Approach
AddType .vew      application/vnd.lotus-approach   binary  1.0  # Lotus Approach
AddType .123      application/vnd.lotus-1-2-3      binary  1.0  # Lotus 1-2-3
AddType .wk1      application/vnd.lotus-1-2-3      binary  1.0  # Lotus 1-2-3
AddType .wk3      application/vnd.lotus-1-2-3      binary  1.0  # Lotus 1-2-3
AddType .wk4      application/vnd.lotus-1-2-3      binary  1.0  # Lotus 1-2-3
AddType .org      application/vnd.lotus-organizer  binary  1.0  # Lotus Organizer
AddType .or2      application/vnd.lotus-organizer  binary  1.0  # Lotus Organizer
AddType .or3      application/vnd.lotus-organizer  binary  1.0  # Lotus Organizer
AddType .scm      application/vnd.lotus-screencam  binary  1.0  # Lotus Screencam
AddType .ppt      application/vnd.microsoft-powerpoint  binary  1.0  # MS PowerPoint
AddType .pac      application/x-ns-proxy-autoconfig     binary  1.0  # Netscape proxy Autoconfig files
AddType .hqx      application/mac-binhex40         binary  1.0  # Macintosh BinHex format
AddType .bsh      application/x-sh                 7bit    0.5  # Bourne shell script
AddType .ksh      application/x-ksh                7bit    0.5  # K-shell script
AddType .pcl      application/x-pcl                7bit    0.5
AddType .wk1      application/x-123                binary  1.0

# Audio files
AddType .snd      audio/basic                      binary  1.0  # Audio
AddType .au       audio/basic                      binary  1.0
AddType .mid      audio/midi                       binary  1.0
AddType .midi     audio/midi                       binary  1.0
AddType .kar      audio/midi                       binary  1.0
AddType .mpga     audio/mpeg                       binary  1.0
AddType .mp2      audio/mpeg                       binary  1.0
AddType .mp3      audio/mpeg                       binary  1.0
AddType .aiff     audio/x-aiff                     binary  1.0
AddType .aifc     audio/x-aiff                     binary  1.0
AddType .aif      audio/x-aiff                     binary  1.0
AddType .ra       audio/x-realaudio                binary  1.0
AddType .ram      audio/x-pn-realaudio             binary  1.0
AddType .rpm      audio/x-pn-realaudio-plugin      binary  1.0
AddType .wav      audio/x-wav                      binary  1.0  # Windows+ WAVE format

AddType .pdb      chemical/x-pdb                   binary  0.8
AddType .xyz      chemical/x-pdb                   binary  0.8

# Graphic (image) types
AddType .bmp      image/bmp                        binary  1.0  # OS/2 bitmap format
AddType .ras      image/x-cmu-raster               binary  1.0
AddType .gif      image/gif                        binary  1.0  # GIF
AddType .ief      image/ief                        binary  1.0  # Image Exchange fmt
AddType .jpg      image/jpeg                       binary  1.0  # JPEG
AddType .JPG      image/jpeg                       binary  1.0
AddType .JPE      image/jpeg                       binary  1.0
AddType .jpe      image/jpeg                       binary  1.0
AddType .JPEG     image/jpeg                       binary  1.0
AddType .jpeg     image/jpeg                       binary  1.0
AddType .png      image/png                        binary  1.0  # Portable Network Graphics
AddType .tif      image/tiff                       binary  1.0  # TIFF
AddType .tiff     image/tiff                       binary  1.0
AddType .pnm      image/x-portable-anymap          binary  1.0  # PBM Anymap format
AddType .pbm      image/x-portable-bitmap          binary  1.0  # PBM Bitmap format
AddType .pgm      image/x-portable-graymap         binary  1.0  # PBM Graymap format
AddType .ppm      image/x-portable-pixmap          binary  1.0  # PBM Pixmap format
AddType .rgb      image/x-rgb                      binary  1.0
AddType .xbm      image/x-xbitmap                  binary  1.0  # X bitmap
AddType .xpm      image/x-xpixmap                  binary  1.0  # X pixmap format
AddType .xwd      image/x-xwindowdump              binary  1.0  # X window dump (xwd)

# "Multipart" (containers)
AddType .tar      multipart/x-tar                  binary  1.0  # 4.3BSD tar
AddType .ustar    multipart/x-ustar                binary  1.0  # POSIX tar
AddType .zip      multipart/x-zip                  binary  1.0  # PKZIP

# Text file types
AddType .css      text/css                         8bit    1.0  # W3C Cascading Style Sheets
AddType .html     text/html                        8bit    1.0  # HTML
AddType .htm      text/html                        8bit    1.0  # HTML on PCs
AddType .c        text/plain                       7bit    0.5  # C source
AddType .h        text/plain                       7bit    0.5  # C headers
AddType .C        text/plain                       7bit    0.5  # C++ source
AddType .cc       text/plain                       7bit    0.5  # C++ source
AddType .hh       text/plain                       7bit    0.5  # C++ headers
AddType .java     text/plain                       7bit    0.5  # Java source
AddType .m        text/plain                       7bit    0.5  # Objective-C source
AddType .f90      text/plain                       7bit    0.5  # Fortran 90 source
AddType .txt      text/plain                       7bit    0.5  # Plain text
AddType .cxx      text/plain                       7bit    0.5  # C++
AddType .for      text/plain                       7bit    0.5  # Fortran
AddType .mar      text/plain                       7bit    0.5  # MACRO
AddType .log      text/plain                       7bit    0.5  # logfiles
AddType .com      text/plain                       7bit    0.5  # scripts
AddType .sdml     text/plain                       7bit    0.5  # SDML
AddType .list     text/plain                       7bit    0.5  # listfiles
AddType .lst      text/plain                       7bit    0.5  # listfiles
AddType .def      text/plain                       7bit    0.5  # definition files
AddType .conf     text/plain                       7bit    0.5  # definition files
AddType .         text/plain                       7bit    0.5  # files with no extension
AddType .rtx      text/richtext                    7bit    1.0  # MIME Richtext format
AddType .tsv      text/tab-separated-values        7bit    1.0  # Tab-separated values
AddType .etx      text/x-setext                    7bit    0.9  # Struct Enchanced Txt
AddType .asm      text/x-asm                       7bit    1.0  # ASM source
AddType .sgm      text/x-sgml                      8bit    1.0  # SGML source
AddType .sgml     text/x-sgml                      8bit    1.0  # SGML source
AddType .htmls    text/x-ssi-html                  8bit    1.0  # Server-side includes
AddType .shtml    text/x-ssi-html                  8bit    1.0  # Server-side includes
AddType .uil      text/x-uil                       8bit    1.0
AddType .uu       text/x-uuencode                  8bit    1.0

# Video formats
AddType .mpg      video/mpeg                       binary  1.0
AddType .mpe      video/mpeg                       binary  1.0
AddType .mpeg     video/mpeg                       binary  1.0
AddType .qt       video/quicktime                  binary  1.0  # QuickTime
AddType .mov      video/quicktime                  binary  1.0
AddType .avi      video/x-msvideo                  binary  1.0  # MS Video for Windows
AddType .movie    video/x-sgi-movie                binary  1.0  # SGI movieplayer
AddType .mjpg     video/x-motion-jpeg              binary  1.0

# "WWW" - internal - types
AddType .mime     www/mime                         binary  1.0  # Internal -- MIME is not recursive

# Extension types
AddType .ice      x-conference/x-cooltalk          binary  1.0
AddType .wrl      x-world/x-vrml                   binary  1.0  # VRML

# When all else fails...
AddType *.*       application/octet-stream         binary  0.1  # Try to guess
AddType *         application/octet-stream         binary  0.1  # Try to guess
# AddEncoding directive:
#
#   Map a suffix to a MIME content encoding.
#   These are usually extra suffixes that modify the base file.
#
#   Defaults: <none>
#   Syntax:   AddEncoding <.suffix> <encoding>
AddEncoding .Z   x-compress  1.0  # Compressed data
AddEncoding .gz  x-gzip      1.0  # Compressed data
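#
#   Editor's note (illustrative): encodings stack on top of content types,
#   so with the entries above a request for manual.ps.gz would be served
#   as application/postscript with content encoding x-gzip.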
# ===================================================================== #
#
#   Error Message directives
#
# ===================================================================== #
# ErrorPage directive:
#
#   Specifies the html file name to be returned by the server to the
#   client when a specific error occurs.
#
#   NOTE: Please see online documentation for a list of keywords.
#
#   Default: <none>
#   Syntax:  ErrorPage <keyword> </path/filename.html>
#
#   Example:
#   ErrorPage scriptstart /errorpages/scriptstart.html
#
ErrorPage badrange          /errorpages/badrange.htmls
ErrorPage badredirect       /errorpages/badredirect.htmls
ErrorPage badrequest        /errorpages/badrequest.htmls
ErrorPage badscript         /errorpages/badscript.htmls
ErrorPage baduser           /errorpages/baduser.htmls
ErrorPage blocked           /errorpages/blocked.htmls
ErrorPage byrule            /errorpages/byrule.htmls
ErrorPage cacheexp          /errorpages/cacheexp.htmls
ErrorPage cachenoconn       /errorpages/cachenoconn.htmls
ErrorPage cachenotopened    /errorpages/cachenotopened.htmls
ErrorPage cacheonly         /errorpages/cacheonly.htmls
ErrorPage connectfail       /errorpages/connectfail.htmls
ErrorPage deletefailed      /errorpages/deletefailed.htmls
ErrorPage dirbrowse         /errorpages/dirbrowse.htmls
ErrorPage dirnobrowse       /errorpages/dirnobrowse.htmls
ErrorPage dotdot            /errorpages/dotdot.htmls
ErrorPage expectfailed      /errorpages/expectfailed.htmls
ErrorPage ftpanonloginrej   /errorpages/ftpanonloginrej.htmls
ErrorPage ftpauth           /errorpages/ftpauth.htmls
ErrorPage ftpbad220         /errorpages/ftpbad220.htmls
ErrorPage ftphpanonloginrej /errorpages/ftphpanonloginrej.htmls
ErrorPage ftphpbad220       /errorpages/ftphpbad220.htmls
ErrorPage ftphploginrej     /errorpages/ftphploginrej.htmls
ErrorPage ftphpnoconnect    /errorpages/ftphpnoconnect.htmls
ErrorPage ftphpnoresponse   /errorpages/ftphpnoresponse.htmls
ErrorPage ftphpnosocket     /errorpages/ftphpnosocket.htmls
ErrorPage ftphpunreshost    /errorpages/ftphpunreshost.htmls
ErrorPage ftploginrej       /errorpages/ftploginrej.htmls
ErrorPage ftploginreq       /errorpages/ftploginreq.htmls
ErrorPage ftpnoconnect      /errorpages/ftpnoconnect.htmls
ErrorPage ftpnodata         /errorpages/ftpnodata.htmls
ErrorPage ftpnoresponse     /errorpages/ftpnoresponse.htmls
ErrorPage ftpnosocket       /errorpages/ftpnosocket.htmls
ErrorPage ftpunrechost      /errorpages/ftpunrechost.htmls
ErrorPage ftpunreshost      /errorpages/ftpunreshost.htmls
ErrorPage hpforbidden       /errorpages/hpforbidden.htmls
ErrorPage httpnodata        /errorpages/httpnodata.htmls
ErrorPage httpnoforward     /errorpages/httpnoforward.htmls
ErrorPage httpnosend        /errorpages/httpnosend.htmls
ErrorPage httpunreshost     /errorpages/httpunreshost.htmls
ErrorPage ipmask            /errorpages/ipmask.htmls
ErrorPage ipmaskproxy       /errorpages/ipmaskproxy.htmls
ErrorPage methoddisabled    /errorpages/methoddisabled.htmls
ErrorPage multifail         /errorpages/multifail.htmls
ErrorPage noaccess          /errorpages/noaccess.htmls
ErrorPage noacl             /errorpages/noacl.htmls
ErrorPage nocachenoconn     /errorpages/nocachenoconn.htmls
ErrorPage noentry           /errorpages/noentry.htmls
ErrorPage noformat          /errorpages/noformat.htmls
ErrorPage nohostheader      /errorpages/nohostheader.htmls
ErrorPage noopen            /errorpages/noopen.htmls
ErrorPage nopartner         /errorpages/nopartner.htmls
ErrorPage norep             /errorpages/norep.htmls
ErrorPage notallowed        /errorpages/notallowed.htmls
ErrorPage notauthorized     /errorpages/notauthorized.htmls
ErrorPage notfound          /errorpages/notfound.htmls
ErrorPage notmember         /errorpages/notmember.htmls
ErrorPage olproxnocontact   /errorpages/olproxnocontact.htmls
ErrorPage openfailed        /errorpages/openfailed.htmls
ErrorPage originbadresp     /errorpages/originbadresp.htmls
ErrorPage preconfail        /errorpages/preconfail.htmls
ErrorPage proxybadurl       /errorpages/proxybadurl.htmls
ErrorPage proxyfail         /errorpages/proxyfail.htmls
ErrorPage proxynotauth      /errorpages/proxynotauth.htmls
ErrorPage proxynotmember    /errorpages/proxynotmember.htmls
ErrorPage putfailed         /errorpages/putfailed.htmls
ErrorPage rchunkerror       /errorpages/rchunkerror.htmls
ErrorPage rchunkmemory      /errorpages/rchunkmemory.htmls
ErrorPage scriptinterr      /errorpages/scriptinterr.htmls
ErrorPage scriptio          /errorpages/scriptio.htmls
ErrorPage scriptnocomm      /errorpages/scriptnocomm.htmls
ErrorPage scriptnotfound    /errorpages/scriptnotfound.htmls
ErrorPage scriptnovari      /errorpages/scriptnovari.htmls
ErrorPage scriptstart       /errorpages/scriptstart.htmls
ErrorPage servermaperror    /errorpages/servermaperror.htmls
ErrorPage setuperror        /errorpages/setuperror.htmls
ErrorPage throttled         /errorpages/throttled.htmls
ErrorPage unknownmethod     /errorpages/unknownmethod.htmls
ErrorPage outputtimeout     /errorpages/outputtimeout.htmls
# ===================================================================== #
#
#   API directives
#
# ===================================================================== #
# API directives enabling plug-ins distributed with this product should appear
# before any user-defined plug-ins. Add user-defined plug-ins only to the end
# of this section (see Placeholder comment below).
# ServerInit directive:
#
#   Specify a customized application function you want the server
#   to call during the server's initialization routines. This code
#   will be executed before any client requests are read.
#
#   Default: <none>
#   Syntax:  ServerInit </path/file:function_name>
#
#   Example:
#   ServerInit /opt/ibm/edge/cp/samples/libsample_plugin.so:svr_init
PreExit directive:
#
#
Specify a customized application function you want the server to
#
call during the User PreExit step. This code will be executed after
#
a client request has been read but before any other processing
#
occurs. You can call the goserve module during this step.
#
#
Default: <none>
#
Syntax:
PreExit </path/file:function_name>
#
# Example:
# PreExit /opt/ibm/edge/cp/samples/libsample_plugin.so:pre_exit
# Authentication directive:
#
#   Specify a customized application function you want the server to
#   call during the Authentication step. This code will be executed based
#   on the authentication scheme. Currently, only Basic authentication
#   is supported.
#
#   Default: <none>
#   Syntax:  Authentication <type> </path/file:function_name>
#
#   Example:
#   Authentication BASIC /opt/ibm/edge/cp/samples/libsample_plugin.so:basic_authen
# NameTrans directive:
#
#   Specify a customized application function you want the server
#   to call during the Name Translation step. This code would supply
#   the mechanism for translating the virtual path in the request to
#   the physical path on the server, mapping URLs to specific objects.
#
#   Default: <none>
#   Syntax:  NameTrans </URL> </path/file:function_name> <IP_address_template>
#
#   Example:
#   NameTrans /index.html /opt/ibm/edge/cp/samples/libsample_plugin.so:trans_url
# Authorization directive:
#
#   Specify a customized application function you want the server
#   to call during the Authorization step. This code would verify
#   that the requested object can be served to the client.
#
#   Default: <none>
#   Syntax:  Authorization </URL> </path/file:function_name>
#
#   Example:
#   Authorization /index.html /opt/ibm/edge/cp/samples/libsample_plugin.so:auth_url
# ObjectType directive:
#
#   Specify a customized application function you want the server
#   to call during the Object Type step. This code would locate the
#   requested object in the file system and identify its MIME type.
#
#   Default: <none>
#   Syntax:  ObjectType </URL> </path/file:function_name>
#
#   Example:
#   ObjectType /index.html /opt/ibm/edge/cp/samples/libsample_plugin.so:obj_type
# PostAuth directive:
#
#   Specify a customized application function you want the server to
#   call during the User PostAuth step. This code will be executed after
#   the authorization/authentication and objecttype (if any) steps.
#
#   Default: <none>
#   Syntax:  PostAuth </path/file:function_name>
#
#   Example:
#   PostAuth /opt/ibm/edge/cp/samples/libsample_plugin.so:post_auth
# Service directive:
#
#   Specify a customized application function you want the server
#   to call during the Service step. This code would service the client
#   request. For example, it sends the file or runs the CGI program.
#
#   Default: <none>
#   Syntax:  Service </URL> </path/file:function_name> <IP_address_template>
#
#   Example:
#   Service /index.html /opt/ibm/edge/cp/samples/libsample_plugin.so:serve_req
# Transmogrifier directive:
#
#   Specify a customized application function you want the server
#   to call during the Transmogrifier step. This code would provide four
#   application functions: an open function, a write function, a close
#   function, and an error function.
#
#   Default: <none>
#   Syntax:  Transmogrifier </path/file:open_function_name:write_function_name:close_function_name:error_function_name>
#
#   Example:
#   Transmogrifier /opt/ibm/edge/cp/samples/libsample_plugin.so:my_open:my_write:my_close:my_error
# TransmogrifiedWarning directive:
#
#   When a customized application function is specified that you want the
#   proxy to call during the Transmogrifier step, WTE will send a warning
#   message (e.g. "Warning: 214 hostname") to the client informing it that
#   the data has been transmogrified. In some cases, a browser that
#   receives this message will fail with an 'unknown error' message. This
#   directive allows the administrator to disable the warning message sent
#   from the proxy.
#
#   Default: <YES>
#   Syntax:  transmogrifiedwarning <YES | NO>
# Log directive:
#
#   Specify a customized application function you want the server
#   to call during the Log step. This code would supply logging and
#   other processing you want performed after the connection has
#   been closed.
#
#   Default: <none>
#   Syntax:  Log </URL> </path/file:function_name>
#
#   Example:
#   Log /index.html /opt/ibm/edge/cp/samples/libsample_plugin.so:log_url
# Error directive:
#
#   Specify a customized application function you want the server
#   to call during the Error step. This code would execute only when
#   an error is encountered to provide customized error routines.
#
#   Default: <none>
#   Syntax:  Error </URL> </path/file:function_name>
#
#   Example:
#   Error /index.html /opt/ibm/edge/cp/samples/libsample_plugin.so:error_rtns
# PostExit directive:
#
#   Specify a customized application function you want the server
#   to call during the Post-exit step. This code will be executed
#   regardless of the return codes from previous steps. It allows
#   you to clean up any resources allocated to process the request.
#
#   Default: <none>
#   Syntax:  PostExit </path/file:function_name>
#
#   Example:
#   PostExit /opt/ibm/edge/cp/samples/libsample_plugin.so:post_exit
# ServerTerm directive:
#
#   Specify a customized application function you want the server
#   to call during the Server Termination step. This code would
#   execute when an orderly shutdown occurs and allows you to
#   release resources allocated by a PreExit application function.
#
#   Default: <none>
#   Syntax:  ServerTerm </path/file:function_name>
#
#   Example:
#   ServerTerm /opt/ibm/edge/cp/samples/libsample_plugin.so:shut_down
# Midnight directive:
#
#   Specify a customized application function you want the server
#   to call at midnight.
#
#   Default: Midnight /usr/lib/archive.so:begin
#   Syntax:  Midnight </path/file:function_name>
Midnight /usr/lib/archive.so:begin
#
# Following are directives used for Transaction Quality of Service
#
# For ServerInit there are three parameters:
#   debug         - Should be set only when asked by IBM Service.
#                   This issues many debug information lines of output.
#   useqoscookies - Specify this if cookies should be sent from
#                   the proxy to the backend server and from the
#                   proxy back to the client.
#   eventlog      - Specify this if classification events should be
#                   written to the EventLog file. This is helpful
#                   for verifying classification results.
#
# Note: TQoS debug info goes into the TraceLog file
#       TQoS classification info goes into the EventLog file
#       TQoS error info goes into the ErrorLog file
#==========================================================================
#
# QualityofService on
# ServerInit /opt/ibm/edge/tqos/lib/libtqosplugin.so:tqos_server_init useqoscookies,eventlog
# PostAuth /opt/ibm/edge/tqos/lib/libtqosplugin.so:tqos_postaut
# Transmogrifier /opt/ibm/edge/tqos/lib/libtqosplugin.so:tqos_topen:tqos_twrite:tqos_tclose:tqos_terror
# ServerTerm /opt/ibm/edge/tqos/lib/libtqosplugin.so:tqos_server_term
#
# The following directives are used by the Content Distribution Framework.
#
# CdfAware
#   This directive is used to mark this WTE instance as being part of a
#   Content Distribution Framework.
#
# CdfRestartFile
#   WTE maintains a mapping table that associates a requested URL to its
#   filename on the webserver. This file is used for persistent storage of
#   this table so as to preserve its contents during restarts.
#
# ServerInit
#   This directive is required to load the Content Distribution Agent,
#   which is distributed as a plugin.
# =============================================================
CdfAware yes
CdfRestartFile /opt/ibm/edge/cdist/data/cdfRestartFile
# ServerInit /opt/ibm/edge/cdist/lib/libcpagent.so:cdca_Start
#
# Following are examples of the API directives required by the
# plug-ins distributed with this product. Uncomment (and edit,
# if necessary) these directives to include support for each
# desired plug-in.
#
# These plug-in API directives should be included in this order.
# Plug-ins may not work correctly together if reordered.
#
# ===== ICP Plug-in =====
# Uncomment ServerInit to enable an ICP server. Uncomment PreExit to enable an ICP client.
# ServerInit /opt/ibm/edge/cp/lib/plugins/icp/libicp_plugin.so:icpServer
# PreExit /opt/ibm/edge/cp/lib/plugins/icp/libicp_plugin.so:icpClient
# ===== CBR Plug-in =====
# ServerInit /opt/ibm/edge/lb/servers/lib/libndcbr.so:ndServerInit
# PostAuth /opt/ibm/edge/lb/servers/lib/libndcbr.so:ndPostAuth
# PostExit /opt/ibm/edge/lb/servers/lib/libndcbr.so:ndPostExit
# ServerTerm /opt/ibm/edge/lb/servers/lib/libndcbr.so:ndServerTerm

# ===== LDAP Plug-in =====
# ServerInit /opt/ibm/edge/cp/lib/plugins/pac/libpacwte.so:pacwte_auth_init /etc/paccp.conf
# Authorization http://protectarea /opt/ibm/edge/cp/lib/plugins/pac/libpacwte.so:pacwte_auth_policy
# ServerTerm /opt/ibm/edge/cp/lib/plugins/pac/libpacwte.so:pacwte_shutdown

# ===== JSP Plug-in =====
# Service /WES_External_Adapter /opt/ibm/edge/cp/lib/plugins/dynacache/libdyna_plugin.so:exec_dynacmd

# ===== Application Router Plug-in =====
# ServerInit /opt/ibm/edge/cp/lib/plugins/approuter/libapprouter.so:approuter_init
# PostAuth /opt/ibm/edge/cp/lib/plugins/approuter/libapprouter.so:approuter_offload
# ServerTerm /opt/ibm/edge/cp/lib/plugins/approuter/libapprouter.so:approuter_term
# The following directive allows automatic disabling of the edge application
# offload on exception conditions.
# Log * /opt/ibm/edge/cp/lib/plugins/approuter/libapprouter.so:approuter_failover

# ===== Junction URL Rewrite Plug-in =====
# ServerInit /opt/ibm/edge/cp/lib/plugins/mod_rewrite/libmod_rw.so:modrw_init
# Transmogrifier /opt/ibm/edge/cp/lib/plugins/mod_rewrite/libmod_rw.so:modrw_open:modrw_write:modrw_close:modrw_error

# ===== Placeholder - add new user-defined plug-ins here =====
# ===================================================================== #
#
#   User authentication and document protection
#
# ===================================================================== #
#
#   Within the configuration file, three directives define file
#   access protection: Protect, DefProt, and Protection.
#
#   A Protection setup contains subdirectives that define how a set
#   of resources is to be protected. The protection setup is used on
#   a DefProt or Protect directive. The subdirectives can be coded
#     * on a preceding Protection directive
#     * in-line on the DefProt or Protect directive
#     * in a separate protection file
#
#   The Protect and DefProt directives define the association of a
#   Protection setup with a set of resources to be protected.
#     * The DefProt statement associates a Protection setup with a URL
#       template but does not activate protection.
#     * The Protect statement associates a Protection setup with a URL
#       template and activates the protection.
#
#   If your server has multiple network connections, you can optionally
#   specify an address template on either the DefProt or Protect directive
#   to restrict the server to using the directive only for requests that
#   come to the server on a connection with an address matching the
#   template.
#
#   NOTE: The address of the server's network connection is compared to
#   the template, NOT the address of the requesting client.
#
#   You can specify a complete IP address (for example, FOR 9.67.106.79),
#   or you can use a wildcard (*) character and specify a template
#   (for example, FOR 9.83.*).
#
#   Protection directive:
#     Default: <none>
#     Syntax:  Protection setup-name { directives }
#
#     Within the braces, any combination of twelve (12) possible
#     protection subdirectives can be defined:
#       UserID, GroupID, ServerID, AuthType,
#       GetMask, PutMask, PostMask, DeleteMask, Mask,
#       PasswdFile, GroupFile
#
#   Protect directive:
#     Default: <none>
#     Syntax for a Protect directive pointing to a Protection directive:
#       Protect request-template setup-name [FOR IP-address-label]
#     Syntax for a Protect directive with protection settings defined
#     inline:
#       Protect request-template [IP-address-label] {
#         protection setting
#         protection setting
#         .
#         .
#         .
#       }
#
#   Example:
#   Protect /secret/* CustomerA-PROT
#   Protect /secret/* {
#     ServerID   ServerName
#     AuthType   Basic
#     PasswdFile /docs/www/restricted.pwd
#     GroupFile  /docs/www/restricted.grp
#     GetMask    authors
#   }
#   Protect /secret/* CustomerA-PROT FOR 9.67.106.79
#   Protect /secret/* 9.83.* {
#     ServerID   ServerName
#     AuthType   Basic
#     PasswdFile /docs/www/restricted.pwd
#     GroupFile  /docs/www/restricted.grp
#     GetMask    authors
#   }
#
#   DefProt directive:
#     Syntax for a DefProt directive pointing to a Protection directive:
#       DefProt request-template setup-name [FOR IP-address-label]
#     Syntax for a DefProt directive with protection settings defined
#     inline:
#       DefProt request-template [IP-address-label] {
#         protection setting
#         protection setting
#         .
#         .
#         .
#       }
#
#   Example:
#   DefProt /secret/* CustomerA-PROT
#   DefProt /secret/* {
#     ServerID   ServerName
#     AuthType   Basic
#     PasswdFile /docs/www/restricted.pwd
#     GroupFile  /docs/www/restricted.grp
#     GetMask    authors
#   }
#   DefProt /secret/* CustomerA-PROT FOR 9.67.106.79
#   DefProt /secret/* 9.83.* {
#     ServerID   ServerName
#     AuthType   Basic
#     PasswdFile /docs/www/restricted.pwd
#     GroupFile  /docs/www/restricted.grp
#     GetMask    authors
#   }
#
#   Example DefProt and Protect and Protection directives:
#
#   Protection setup by usernames; specify groups in the group
#   file, if you need groups; create and maintain password files
#   with the htadm program.
#
# Protection PROT-SETUP-USERS {
#   ServerId   YourServersFancyName
#   AuthType   Basic
#   PasswdFile /where/ever/passwd
#   GroupFile  /where/ever/group
#   GET-Mask   user, user, group, group, user
# }
#
#   Protection setup by hosts; you can use both domain name
#   templates and IP number templates
#
# Protection PROT-SETUP-HOSTS {
#   ServerId   YourServersFancyName
#   AuthType   Basic
#   PasswdFile /where/ever/passwd
#   GroupFile  /where/ever/group
#   GET-Mask   @(*.cern.ch, 128.141.*.*, *.ncsa.uiuc.edu)
# }
#
# DefProt /very/secret/URL/*    PROT-SETUP-USERS
# Protect /very/secret/URL/*    PROT-SETUP-USERS
# Protect /another/secret/URL/* PROT-SETUP-HOSTS
#
Protection PROT-ADMIN {
    ServerId   Private_Authorization
    AuthType   Basic
    GetMask    All@(*)
    PutMask    All@(*)
    PostMask   All@(*)
    Mask       All@(*)
    PasswdFile /opt/ibm/edge/cp/server_root/protect/webadmin.passwd
}
Protect /admin-bin/*      PROT-ADMIN
Protect /reports/*        PROT-ADMIN
Protect /Usage*           PROT-ADMIN
Service /ndadvisor        INTERNAL:NDAdvisor
Service /Usage*           INTERNAL:UsageFn
Service /admin-bin/trace* INTERNAL:TraceFn
# ===================================================================== #
#
#   Significant URL Terminator directive
#
# ===================================================================== #
#
#   The SignificantUrlTerminator is useful for increasing the cache hit
#   rate when you know that all URLs with the same significant part will
#   return the same response from the server regardless of the value
#   following the significant part.
#
#   This directive sets a terminating pattern for URL requests. Only the
#   part of the URL before the terminator is evaluated to determine whether
#   the response is cached.
#   You can have multiple entries for this directive.
#
#   Default: <none>
#   Syntax:  SignificantUrlTerminator <pattern>
#
#   Example:
#   SignificantUrlTerminator &.
#   SignificantUrlTerminator ^^^^^
#
#   Given the URL http://www.abc.com/cgi.asp?id=0044&.x=045;y=003,
#   if the response to the URL is the same regardless of the values of x
#   and y, you can set SignificantUrlTerminator to &. to tell the proxy to
#   consider only the value up to &. (i.e., http://www.abc.com/cgi.asp?id=0044).
#
# ============================================================================= #
#
#   URL Rewriting rules and directives
#
# ============================================================================= #
# ReversePass rule:
#
#   Usage similar to that of the Proxy rule. This rule is used to match
#   redirected URLs (301) so that they are rewritten to point to the
#   proxy.
#
#   Default: none
#   Syntax:  ReversePass <from URL> <to URL>
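#
#   Editor's illustration (hypothetical hostnames, not part of the shipped
#   file): if a junctioned backend issues redirects to its own name,
#   rewrite them so they point back at the proxy:
#   ReversePass http://backend.internal.example.com/* http://proxy.example.com/*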
# JunctionRewrite directive:
#
#   A Junction is defined as a proxy rule (see next section) that maps a
#   virtual URL directory to the document root of an Origin Server.
#   The JunctionRewrite directive controls the ability of the proxy to
#   rewrite responses from such origin servers so that they preserve the
#   virtual URL space of the web site.
#   Junctions are defined by the proxy rule mapping. (Mapping rules section)
#   This directive works in conjunction with the enabling of the Junction
#   rewriting plugin (which appears in the System Plugins section).
#
#   Valid junctions appear as
#     Proxy /shop/*             http://shopserver.acme.com/*
#     Proxy /auth/*             http://authserver.acme.com/*
#     Proxy /market/partners/*  http://b2bserver.acme.com/*
#   Valid junctions that have no effect on rewriting are
#     Proxy /*                  http://defaultserver.acme.com/*
#   Invalid junctions are
#     Proxy /images/*.gif       http://imageserver.acme.com/images/gifs/*.gif
#     Proxy /cgi-bin/*          http://cgiserver.acme.com/cgi/perl/*
#
#   Default: off
#   Syntax:  JunctionRewrite <on | off>
#
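#   Editor's note (illustrative): to activate rewriting for junctions such
#   as the /shop/* example above, you would both uncomment the Junction URL
#   Rewrite plug-in directives in the API section and set:
#   JunctionRewrite on
#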
# ===================================================================== #
#
#   Mapping rules
#
# ===================================================================== #
#
#   The Pass, Fail, Exec, Map, Redirect, Proxy and ProxyWAS rules are
#   used for mapping a request URL to another URL or a physical file.
#   The rules specify templates and a file directory or new URL to replace
#   the template in the request. If a request comes in and the URL of the
#   request matches one of the mapping rules, the rule is applied. Asterisks
#   are used as wild cards and must appear in both the request template and
#   the replacement template. The Fail rule does not have a replacement
#   template.
#
#   The Pass, Fail, and Exec rules are used for mapping from a request URL
#   to a physical file. The Map rule is used for mapping from one URL to
#   another. The Proxy rule is used for mapping from a request URL to a URL
#   which will be passed on to another server and requires a full URL in the
#   replacement template. Redirect will direct the client to pass the request
#   on to another server and requires a full URL in the replacement template.
#
#   The ProxyWAS rule is almost identical to a Proxy rule and is used as an
#   indication to the Caching Proxy that the origin server being proxied is a
#   WebSphere Application Server 4.0 machine. This allows Proxies that
#   have a security proxy function to relay security information to the origin
#   server for use by J2EE applications. This rule is intended for use in a
#   reverse proxy or security proxy deployment.
#
#   The rules are applied in the order they appear in the configuration file
#   until a request template has been matched or until there are no more
#   rules to apply. The Map rule will modify the request as defined by the
#   replacement template and then will continue processing the remaining
#   rules against the mapped request. All other rules will stop applying
#   rules once one of the rules has been matched.
#
#   If your server has multiple network connections or you want to
#   differentiate traffic coming in on different ports, you can optionally
#   specify an address/port template to restrict the server to using the
#   directive only for requests that come to the server on a connection
#   with an address/port matching the template.
#
#   NOTE: the address/port of the server's network connection is compared
#   to the template, not the address of the requesting client.
#   You can specify a complete IP address (for example, 9.67.106.79), a
#   specific port (for example, :443), or an IP address:port template
#   (for example, 128.22.49.5:80).
#
#   Default: <none>
#   Syntax:  rule request-template result [IP-address-template]
#
#   Example:
#   Exec     /cgi-bin/*      /CGI-BIN/CustomerA/*  9.67.109.79
#   Map      /data/*         /www/data/*
#   Map      /mess/*         /www/junk/*
#   Map      /books/stuff/*  /www/docs/*
#   Fail     /bogus/*        9.67.105.79
#   Fail     /ddd/eee/*
#   Redirect /old/server/*   http://new.server.loc/newpath/*
#   Proxy    /hosta/*        http://hosta/*
#   ProxyWAS /hostwas/*      http://hostwas/*
#   Proxy    /hostb/*        http://hostb/*  128.22.49.5:80
#   Pass     /buck/*         /diskx/bin/*
#
#   Scripts; URLs starting with /cgi-bin/ will be understood as
#   script calls in the directory /opt/ibm/edge/cp/server_root/cgi-bin/
#
#   URLs starting with /admin-bin/ will be understood as
#   script calls in the directory /opt/ibm/edge/cp/server_root/admin-bin/
#
Proxy /* http://console.demo.tivoli.com/* :8080
Pass /admin-bin/webexec/*.style /opt/ibm/edge/cp/server_root/admin-bin/webexec/*.style
#
# If you are using an EUC codepage for display, use the *.euc.NLS and
# *.euc.props files.
#
#Pass /admin-bin/webexec/wteClient.NLS /opt/ibm/edge/cp/server_root/admin-bin/webexec/en_US/wteClient.euc.NLS
#Pass /admin-bin/webexec/wteClient.props /opt/ibm/edge/cp/server_root/admin-bin/webexec/en_US/wteClient.euc.props
Pass /admin-bin/webexec/wteClient.NLS /opt/ibm/edge/cp/server_root/admin-bin/webexec/en_US/wteClient.NLS
Pass /admin-bin/webexec/wteClient.props /opt/ibm/edge/cp/server_root/admin-bin/webexec/en_US/wteClient.props
Pass /admin-bin/webexec/*.NLS /opt/ibm/edge/cp/server_root/admin-bin/webexec/en_US/*.NLS
Pass /admin-bin/webexec/*.props /opt/ibm/edge/cp/server_root/admin-bin/webexec/en_US/*.props
Pass /admin-bin/webexec/*.class /opt/ibm/edge/cp/server_root/admin-bin/webexec/*.class
Pass /admin-bin/webexec/*.gif /opt/ibm/edge/cp/server_root/admin-bin/webexec/*.gif
Pass /admin-bin/webexec/*.jar /opt/ibm/edge/cp/server_root/admin-bin/webexec/*.jar
Pass /admin-bin/webexec/*.html /opt/ibm/edge/cp/server_root/admin-bin/webexec/en_US/*.html
Pass /Docs/arrow*.gif /opt/ibm/edge/cp/server_root/Docs/arrow*.gif
Pass /Docs/top.gif /opt/ibm/edge/cp/server_root/Docs/top.gif
Pass /Docs/up-arrow.gif /opt/ibm/edge/cp/server_root/Docs/up-arrow.gif
Pass /Docs/NavBar.* /opt/ibm/edge/cp/server_root/Docs/NavBar.*
Pass /Docs/JohnNav.class /opt/ibm/edge/cp/server_root/Docs/JohnNav.class
Pass /Docs/JohnNode.class /opt/ibm/edge/cp/server_root/Docs/JohnNode.class
Pass /Docs/htmldocs/* /opt/ibm/edge/doc/en_US/*
#Pass /admin-bin/webexec/images/* /opt/ibm/edge/cp/server_root/admin-bin/webexec/images*
Pass /admin-bin/webexec/* /opt/ibm/edge/cp/server_root/admin-bin/webexec/en_US/*
Exec /cgi-bin/* /opt/ibm/edge/cp/server_root/cgi-bin/*
Exec /Docs/cgi-bin/* /opt/ibm/edge/cp/server_root/cgi-bin/*
Exec /admin-bin/* /opt/ibm/edge/cp/server_root/admin-bin/*
Exec /Docs/admin-bin/* /opt/ibm/edge/cp/server_root/admin-bin/en_US/*
#
# URL translation rules; if your documents are under
# /opt/ibm/edge/cp/server_root/pub/en_US then this single rule does the job:
#
Pass /cpicons/statusSB.gif /opt/ibm/edge/cp/server_root/cpicons/en_US/statusSB.gif
Pass /cpicons/* /opt/ibm/edge/cp/server_root/cpicons/*
Pass /Admin/home.gif /opt/ibm/edge/cp/server_root/Admin/home.gif
Pass /Admin/help.gif /opt/ibm/edge/cp/server_root/Admin/help.gif
Pass /Admin/helpicon.gif /opt/ibm/edge/cp/server_root/Admin/helpicon.gif
Pass /Admin/restarticon.gif /opt/ibm/edge/cp/server_root/Admin/restarticon.gif
Pass /Admin/* /opt/ibm/edge/cp/server_root/Admin/en_US/*
Pass /Docs/* /opt/ibm/edge/cp/server_root/Docs/en_US/*
Pass /wsApplet/* /opt/ibm/edge/cp/server_root/Applets/*
Pass /errorpages/* /opt/ibm/edge/cp/server_root/pub/en_US/errorpages/*
Pass /pac/* /opt/ibm/edge/cp/server_root/pub/pacfiles/*
Pass /pacfiles/* /opt/ibm/edge/cp/server_root/pub/pacfiles/*
# *** START NEW MAPPING RULES SECTION ***
# *** END NEW MAPPING RULES SECTION ***
Pass /* /opt/ibm/edge/cp/server_root/pub/en_US/*
# ===================================================================== #
#
#   Performance directives
#
# ===================================================================== #
# MaxActiveThreads directive:
#
#   Defines the number of threads in the system thread pool.
#
#   Default: 100
#   Syntax:  MaxActiveThreads <num>
MaxActiveThreads 100
# ConnThreads directive:
#
#   Defines the number of connection threads to be used for connection
#   management.
#
#   Default: 5
#   Syntax:  ConnThreads <num>
# MaxPersistRequest directive:
#
#   Maximum number of requests to receive on a persistent connection.
#
#   Default: 5
#   Syntax:  MaxPersistRequest <num>
MaxPersistRequest 5
# DNS-Lookup directive:
#
#   Instruct the server to look up hostnames of clients by
#   setting DNS-Lookup to "on".
#   NOTE: Turning DNS-Lookup "on" decreases server performance.
#
#   Default: off
#   Syntax:  DNS-Lookup <on | off>
DNS-Lookup off
# PureProxy directive:
#
#   Specifies if the server is to be run purely as a proxy server.
#
#   Default: on
#   Syntax:  PureProxy <on | off>
PureProxy on
# MaxContentLengthBuffer directive:
#
#   The server normally gives a content-length header line for every
#   document it returns. For dynamically generated documents, the server
#   must buffer the document to compute the content length. This directive
#   can be used to set the size of this buffer. If it is exceeded, the
#   document will be returned without a content-length header field (and
#   the connection will be forced closed: persistent connections cannot
#   be used unless the response has a content length).
#
#   Default: 100 K
#   Syntax:  MaxContentLengthBuffer <size> <K|M>
#            (Only one keyword/value pair allowed.)
MaxContentLengthBuffer 1 M
# ProxyPersistence directive:
#
#   Allows persistent connections, which will significantly reduce
#   latency for users and reduce CPU load on the proxy server.
#
#   NOTE: Supporting persistent connections requires more threads,
#   and thus more memory, on the proxy server. In addition,
#   if you have a multi-level proxy server setup, and you
#   have any old (HTTP/1.0) proxies in the network, then you
#   MUST NOT use persistent connections at the proxy.
#
#   Default: on
#   Syntax:  ProxyPersistence <on | off>
ProxyPersistence on
# ServerConnPool directive:
#
#   Allows the proxy to pool together its outgoing connections to origin
#   servers. This directive will enhance performance and take
#   better advantage of origin servers that allow persistent connections.
#   Also, you may specify how long to keep an unused connection around
#   via the ServerConnTimeout directive.
#
#   NOTE: This works best in a controlled environment and could cause
#   worse performance in a forward proxy situation or one where
#   the origin servers are not HTTP 1.1 compliant.
#
#   Default: off
#   Syntax:  ServerConnPool <on | off>
ServerConnPool off
# MaxSocketPerServer directive:
#
#   This sets the maximum number of open IDLE sockets to maintain to any
#   one origin server.
#   This is only used if ServerConnPool is set on.
#
#   Default: 5
#   Syntax:  MaxSocketPerServer <num>
MaxSocketPerServer 5
# ServerConnTimeout directive:
#
#   This is set to limit the time to keep an unused connection to a server.
#   This is only used if the ServerConnPool directive is set on.
#
#   Default: 10 seconds
#   Syntax:  ServerConnTimeout <time value>
ServerConnTimeout 10 seconds
# ServerConnGCRun directive:
#
#   This sets the interval at which the garbage collection thread will
#   check for connections to the server that have timed out (set with the
#   ServerConnTimeout directive).
#   This is only used if the ServerConnPool directive is set on.
#
#   Default: 2 minutes
#   Syntax:  ServerConnGCRun <time value>
ServerConnGCRun 2 minutes
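#
#   Editor's illustration (values are hypothetical, not in the shipped
#   file): a reverse proxy in front of HTTP 1.1 origin servers might
#   combine the connection pooling directives above as:
#   ServerConnPool on
#   MaxSocketPerServer 10
#   ServerConnTimeout 30 seconds
#   ServerConnGCRun 2 minutes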
# ===================================================================== #
#
#   Timeout directives
#
# ===================================================================== #
#
#   Use these directives to:
#     * limit the time to wait for the client to send a request
#       after connecting to the server before cancelling the connection.
#     * limit the time allowed without network activity before
#       cancelling the connection.
#     * limit the time to allow for sending output to the client.
#     * limit the time to allow for CGI programs to finish.
#       (If the program does not finish within the allotted time, the
#       server will kill the CGI program.)
#     * limit the time to wait for the client to send a request
#       after establishing a persistent connection to the server
#       before cancelling the connection.
#
#   Default: InputTimeout   2 minutes
#   Default: ReadTimeout    5 minutes
#   Default: OutputTimeout  12 hours
#   Default: ScriptTimeout  5 minutes
#   Default: PersistTimeout 4 seconds
#
#   Syntax:  <directive> <time-spec>
#
InputTimeout   2 minutes
ReadTimeout    5 minutes
OutputTimeout  12 hours
ScriptTimeout  5 minutes
PersistTimeout 4 seconds
# ===================================================================== #
#
#   Proxy directives
#
# ===================================================================== #
#
#   Proxy server protection and caching directives
#
Proxy protections; if you want only certain domains to
#
use your proxy, uncomment these lines and specify the Mask
#
with hostname templates or IP number templates:
#
# Protection PROXY-PROT {
#
ServerId
YourProxyName
#
Mask
@(*.cern.ch, 128.141.*.*, *.ncsa.uiuc.edu)
# }
# Protect * PROXY-PROT
#
# Protect
# Protect
# Protect
#
#
Proxy
http:*
ftp:*
gopher:*
PROXY-PROT
PROXY-PROT
PROXY-PROT
Specify the protocols that this proxy server will forward:
http:*
#
# Forward Proxy-Authorization header directive:
#
# Enabling this directive leads WTE to forward the Proxy-Authorization
# header. It can be used when WTE is part of a proxy hierarchy
# and needs to forward a user’s proxy credentials to an
# upstream proxy.
#
# Default: off
# Syntax: ForwardProxyAuthHeader <on | off>
ForwardProxyAuthHeader off
#
# SSLTunneling directive:
#
# Enable SSL Tunneling to any port. You must also make sure that
# the CONNECT method is enabled; use the directive “Enable” to
# do this.
#
# Default: on
# Syntax: SSLTunneling <on | off>
SSLTunneling off
#
# Proxy-to-Proxy directives:
# Also known as Proxy Chaining
#
# ftp_proxy directive:
#
# Use this directive to specify the name of another proxy web server
# this server should contact for FTP requests rather than contacting
# the FTP Server named in the request URL directly.
#
# Specify the optional domain specification value as a string of domain
# names or domain name templates. Separate each entry in the string with
# a comma.
# Do NOT put any spaces in the string. You CANNOT use the wildcard
# character (*).
# You CAN specify a template by including only the last part of a domain
# name.
#
# Default: <none>
# Syntax: ftp_proxy <outer_proxy_server_URL> <optional domain specification>
#
# Example:
# ftp_proxy http://outer.proxy.name/
#
# gopher_proxy directive:
#
# Use this directive to specify the name of another proxy web server
# this server should contact for Gopher requests rather than contacting
# the Gopher Server named in the request URL directly.
#
# Specify the optional domain specification value as a string of domain
# names or domain name templates. Separate each entry in the string with
# a comma.
# Do NOT put any spaces in the string. You CANNOT use the wildcard
# character (*).
# You CAN specify a template by including only the last part of a domain
# name.
#
# Default: <none>
# Syntax: gopher_proxy <outer_proxy_server_URL> <optional domain specification>
#
# Example:
# gopher_proxy http://outer.proxy.name/
#
# http_proxy directive:
#
# Use this directive to specify the name of another proxy web server
# this server should contact for HTTP requests rather than contacting
# the HTTP Server named in the request URL directly.
#
# Specify the optional domain specification value as a string of domain
# names or domain name templates. Separate each entry in the string with
# a comma.
# Do NOT put any spaces in the string. You CANNOT use the wildcard
# character (*).
# You CAN specify a template by including only the last part of a domain
# name.
#
# Default: <none>
# Syntax: http_proxy <outer_proxy_server_URL> <optional domain specification>
#
# Example:
# http_proxy http://outer.proxy.name/
#
# no_proxy directive:
#
# Specify the domains to which the server should directly connect.
# This is ONLY used when doing proxy chaining (i.e., when an
# http_proxy, ftp_proxy, and/or gopher_proxy is defined). This
# directive does NOT apply when the proxy goes through a SOCKS server;
# use socks.conf for that purpose. Also, if you are using neither
# proxy chaining nor SOCKS, then this directive is not needed.
#
# Specify the value as a string of domain names or domain name
# templates. Separate each entry in the string with a comma.
#
# Do NOT put any spaces in the string.
# You CANNOT use the wildcard character (*).
# You CAN specify a template by including only the last part of a domain
# name.
#
# Default: <none>
# Syntax: no_proxy <non-proxy domain specification>
#
# Example:
# no_proxy www.someco.com,.raleigh.ibm.com,.some.host.org:8080
#
# SendRevProxyName directive:
#
# In a reverse proxy scenario, WTE normally sends the destination
# origin server name in the HOST header of the request to the origin
# server. If this directive is set to yes, WTE will instead send
# the WTE host name in the HOST header of the request to the origin
# server. This allows the origin server to use the WTE host name in
# redirects sent back. Therefore, subsequent requests to redirected
# locations will go through WTE.
#
# Default: no
# Syntax: SendRevProxyName <yes | no>
#
# Example:
# SendRevProxyName yes
# ===================================================================== #
#
# Proxy caching directives
#
# ===================================================================== #
#
# Caching directive:
#
# Turn on proxy caching here. To enable proxy caching, you must:
#   1) set the Caching directive to ON, and
#   2) specify at least one CacheDev (raw partition), or
#      specify CacheMemory (if no CacheDev, then memory cache is used)
#
# There are two different types of caches:
#   - memory cache   - fastest, but limited by amount of RAM
#   - raw disk cache - fast, only limited by disk space
#
# For raw disk caches, the htcformat utility must be used
# to set up the devices. The CacheDev and BlockSize directives
# need to be specified. Multiple CacheDev directives can be
# specified, and the total space from those devices will be used as a
# single cache. For disk (rather than memory) caches, the CacheMemory
# directive can still be used to tell the proxy how much memory it
# can use to efficiently manage the disk cache. CacheMemory should
# be set to at least 1% of the total size of the disk cache.
#
# For a memory cache, simply specify the CacheMemory directive for the
# desired cache size, and do not specify any CacheDev directives.
#
# The minimum cache size allowed is 64 M, which can be either 64 M
# of a memory cache, or multiple cache devices that add up to 64 M.
#
# Default: ON
# Syntax: Caching <ON | on | OFF | off>
Caching ON
#
# CacheDev directive:
#
# Specify the cache device(s) to be used for the cache. The minimum
# size for the sum of all cache devices is 64 M.
#
# Default: none
# Syntax: CacheDev <raw disk partition>
# Examples: CacheDev /dev/raw/raw1 (For Red Hat)
#           CacheDev /dev/raw1 (For SuSE)
#CacheDev /dev/raw/raw1
CacheDev /dev/raw/raw1

# CacheMemory directive:
#
# Specify the amount of RAM associated with the cache. See the
# explanations in the Caching section above for more information
# about the use of this directive. If the proxy needs more memory
# than is specified for this directive, it will use what it needs
# and send a warning message indicating this to the EventLog when
# the proxy starts up and initializes the cache. There is more
# memory overhead per cache device, so memory utilization is more
# efficient for fewer cache devices. As a guideline, memory
# utilization is also better for larger caches (1 G and up) than
# for smaller caches. With a cache device over 1 G in size, the
# amount of memory used will be approximately 1% of the total cache
# size. Each cache device uses a minimum of about 8.5 M of memory.
#
# The unit specified can be B (bytes), K (kilobytes), M (megabytes),
# or G (gigabytes). If you do not specify a unit, it assumes M.
#
# Minimum for a memory cache: 64 M
# Maximum for a memory cache: 1200 M
# Minimum for a non-memory cache: 1 K
#
# Default: there is no default value, this must be specified
# Syntax: CacheMemory <size> B | K | M | G
CacheMemory 64 M

# BlockSize directive:
#
# Specify the size (in bytes) of the blocks within the cache.
# The same BlockSize value will be used for all devices specified
# via the CacheDev directives.
#
# Default: 8192 (the only supported BlockSize is 8192 bytes)
# Syntax: BlockSize <size>
#BlockSize 8192
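#
# Example (an illustrative sketch only, not part of the shipped file: the
# device names below are assumed placeholders for a generic Linux host, and
# the exact htcformat options may vary by release). A raw partition is bound
# to a raw device node and then formatted with the htcformat utility
# mentioned above before the proxy can use it as a cache device:
#
#   raw /dev/raw/raw1 /dev/hda3
#   htcformat /dev/raw/raw1
#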
# CacheDefaultExpiry directive:
#
# Specify the expiry date for files which do not include an explicit
# expiry date and do not have a last-modified date that would allow us
# to compute an expiry based on the CacheLastModifiedFactor. This is
# most useful for protocols that do not have any way to transmit this
# information, such as FTP or Gopher.
#
# NOTE: The default expiration for HTTP is 0. HTTP should be kept at 0
#       because many script programs don’t give an expiration date, yet
#       their output expires immediately. A value other than zero may
#       cause problems.
#
# Defaults: http:*   0 days
#           ftp:*    1 day
#           gopher:* 2 days
# Syntax: CacheDefaultExpiry <URL pattern> <time period>
#
# Example: set default expiration for all FTP files to 14 days
# CacheDefaultExpiry ftp:* 1 fortnight
CacheDefaultExpiry http:* 0 days
CacheDefaultExpiry ftp:* 1 day
CacheDefaultExpiry gopher:* 2 days

# CacheRefreshInterval directive:
#
# This directive specifies when to revalidate - check with the
# origin to see if they’ve changed - documents. The difference
# between this directive and CacheClean (below) is that CacheClean
# will cause documents to be removed from the cache after a
# given period of time, while CacheRefreshInterval will just
# force the proxy to revalidate them before using them.
#
# Defaults: 2 weeks for all documents
# Syntax: CacheRefreshInterval <URL pattern> <time period>
#         - This form specifies the refresh interval for
#           any URLs matching <URL pattern>
#         CacheRefreshInterval <time period>
#         - This specifies the refresh interval for any
#           documents NOT matching a <URL pattern> in
#           another CacheRefreshInterval directive - in
#           other words, the default refresh interval.
#
# Example: refresh all .gif images after 8 hours, and all other documents
#          after a week.
#          CacheRefreshInterval *.gif 8 hours
#          CacheRefreshInterval 1 week
CacheRefreshInterval 2 weeks
#
# CacheUnused directive:
#
# Specify how long the proxy cache should keep files which have not
# been used (requested by a client). Unused files which have been in
# the cache longer than this will be removed during garbage collection.
#
# Default: 2 days for http:*
#          3 days for ftp:*
#          12 hours for gopher:*
# Syntax: CacheUnused <URL pattern> <time period>
CacheUnused http:* 2 days
CacheUnused ftp:* 3 days
CacheUnused gopher:* 12 hours

# CacheExpiryCheck directive:
#
# Normally, a caching proxy will check that the files in its cache
# have not expired. In special circumstances (such as a network outage)
# you may want to disable this check; the proxy will then serve files
# from its cache even if they’re out of date.
#
# Default: on
# Syntax: CacheExpiryCheck <on | off>
CacheExpiryCheck off

# CacheNoConnect directive:
#
# In normal situations, the proxy server will contact the content
# server to fetch pages. However, in special situations (such as
# a demonstration at a trade show), you may not want the proxy server
# to try to contact the origin server. Setting CacheNoConnect “on”
# prevents the proxy from contacting the origin server.
#
# Default: off
# Syntax: CacheNoConnect <on | off>
CacheNoConnect on
# CacheTimeMargin directive:
#
# The proxy server will not cache documents which are due to expire
# ‘soon’. This directive defines what ‘soon’ is. In other words, a
# document’s expiry date must be further in the future than the
# CacheTimeMargin when WTE receives it in order for it to be cached.
#
# Default: 10 minutes
# Syntax: CacheTimeMargin <time period>
CacheTimeMargin 10 minutes
#
# CacheLastModifiedFactor directive:
#
# Use this directive to have the server set expirations for files which
# have a Last-Modified date, but no Expires date. The server uses the
# Last-Modified date to determine how long it has been since the file
# was modified, then multiplies that time by the fraction specified by
# the value on this directive to determine the proportion of a file’s
# age (LastModified) to be used as the expiry time. The assumption
# is that files that have changed recently are probably changing
# frequently, while files that have not been modified in some time have
# stabilized and do not need to be refreshed as frequently.
#
# The higher this value is set, the longer these files will reside in
# the cache without being checked for freshness. Setting this value too
# high may cause stale (out-of-date) files to be served from cache.
#
# For example, if a file was last modified 1 month ago and
# CacheLastModifiedFactor was set to 0.5, the file would expire in
# approximately 15 days (half a month). If the file was changed 4 days
# ago, and CacheLastModifiedFactor was set to 0.25, the file would
# expire in 1 day.
#
# If a CacheLastModifiedFactor of -1 is specified, the file
# last-modified date will not be used to calculate the file expiry time.
# This setting is not suggested for normal operation, as it will result
# in very few files being cached.
#
# Default: Numerous factors, based on file extensions. The defaults
#          reflect the fact that certain types of files - like
#          graphics - tend to have longer lifetimes.
# Syntax: CacheLastModifiedFactor <URL pattern> <fraction>
#
# Example:
# CacheLastModifiedFactor http://* 0.14
# CacheLastModifiedFactor ftp://* 0.25
CacheLastModifiedFactor http://*/ 0.10
CacheLastModifiedFactor http://*.htm* 0.20
CacheLastModifiedFactor http://*.gif 1.00
CacheLastModifiedFactor http://*.jpg 1.00
CacheLastModifiedFactor http://*.jpeg 1.00
CacheLastModifiedFactor http://*.png 1.00
CacheLastModifiedFactor http://*.tar 1.00
CacheLastModifiedFactor http://*.zip 1.00
CacheLastModifiedFactor http:* 0.15
CacheLastModifiedFactor ftp:* 0.50
CacheLastModifiedFactor * 0.10
#
# CacheMaxExpiry directive:
#
# Specify the maximum lifetime allowed for objects matching a given
# request template. An object may still be kept longer than its
# maximum lifetime, but it must be revalidated with the origin
# server when its lifetime has been reached. A maximum lifetime
# of 0 is interpreted as no maximum - the lifetime will not have
# an upper limit.
#
# Note that it doesn’t make much sense to set this higher than
# CacheClean, as CacheClean specifies the maximum time an object can
# remain in the cache before it is deleted.
#
# Default: 1 month for all objects
# Syntax: CacheMaxExpiry <URL request template> <time spec>
#
# Example: .gif files may not have a lifetime longer than 2 weeks.
# CacheMaxExpiry http://*.gif 2 weeks
CacheMaxExpiry 1 month
#
# CacheClean directive:
#
# Specify how long you want the server to keep cached files with URLs
# matching a given request template. The server deletes cached files
# whose URLs match a given request template after they have been cached
# for the specified time, regardless of their expiration date.
#
# Default: 1 month for all objects
# Syntax: CacheClean <URL request template> <time spec>
# Example:
# CacheClean http:* 2 weeks
CacheClean 1 month
# CacheFileSizeLimit directive:
#
# CacheFileSizeLimit specifies the maximum size for any file that will
# be cached. The value can be specified in bytes (B), kilobytes (K),
# megabytes (M), or gigabytes (G).
#
# Note that in previous releases, this directive was called
# CacheLimit_2. The syntax of the directive remains unchanged.
#
# Default: CacheFileSizeLimit 4000 K
# Syntax: CacheFileSizeLimit <bytes> <B|K|M|G>
#
# Example: Don’t cache any files larger than 512 K.
# CacheFileSizeLimit 512 K
CacheFileSizeLimit 4000 K
#
# CacheOnly and NoCaching directives:
#
# The server allows control over the files to be cached in two ways.
#
#   CacheOnly - specifies a set of URLs which will be considered for
#               caching (URLs not in that list will never be cached)
#   NoCaching - specifies a set of URLs which must never be cached
#               (all other URLs are candidates for caching)
#
# Default: <none> (for both CacheOnly and NoCaching)
# Syntax: CacheOnly <URL pattern>
#         NoCaching <URL pattern>
#
# Example:
# CacheOnly http://www.ibm.com/*
# NoCaching http://never.cache.me.net/*
#
# ContinueCaching directive:
#
# Specifies the point at which a file being received from a content
# server will continue to be received from the content server and
# stored in the cache even if the connection to the client which
# requested the file has been terminated. The value specified
# represents a percentage of the size of the file being transferred.
# If less than this percentage of the file has been transferred from
# the content server at the time the client connection is terminated,
# file transfer from the content server will be terminated and the cache
# file containing the partial file will be removed from the cache.
#
# Default: 75
# Syntax: ContinueCaching <percent of file already transferred>
#
# Example:
# ContinueCaching 75
#
# CacheMontool directive:
#
# CacheMontool is an ON/OFF flag for enabling a cache monitoring
# utility which monitors the proxy cache. It gives information
# about how full the cache is (either memory or cache devices) and
# whether it is getting written, garbage collected or compacted.
#
# Default: off
#
# Syntax: CacheMontool <on | off>
#
# Example:
CacheMontool on
#
# IMSToServer directive:
#
# IMSToServer is a directive that turns on the feature of sending
# If-Modified-Since requests to the origin server instead of
# reloading the data. This can enhance bandwidth utilization
# and overall cache performance if data is often expiring without
# actually changing.
#
# Default: on
#
# Syntax: IMSToServer <on | off>
#
# Example:
# IMSToServer off
#
# OutgoingConnMgmt directive:
#
# OutgoingConnMgmt is a directive that turns on the server side
# connection management feature. This specifies whether or not
# a worker thread should wait for the server to respond before
# continuing to another task. Turning this on will enhance
# performance when accessing slow backend servers.
#
# Default: off
# Syntax: OutgoingConnMgmt <on | off>
#
# Example:
# OutgoingConnMgmt on
# ===================================================================== #
#
# Proxy cache garbage collection directives
#
# ===================================================================== #
#
# Gc (Garbage Collection) directive:
#
# In order for a caching proxy server to function efficiently, it
# needs to sweep through the cache and remove out-of-date files on a
# regular basis. This is called ‘garbage collection’. It should only
# be turned “off” in special circumstances, such as during an extended
# network outage.
#
# Default: on
# Syntax: Gc <on | off>
Gc off

# GcperDevice directive:
#
# GcperDevice is an ON/OFF flag which turns on garbage collection on a
# per-device basis rather than a complete-system basis. Hence the
# garbage collection of any cache device starts as soon as the
# cache device is full, irrespective of whether the GcHighWater mark
# has been reached.
#
# Default: off
# Syntax: GcperDevice <on | off>

# CacheAlgorithm directive:
#
# Specifies which cache algorithm the server will use. Choices
# include: bandwidth, responsetime, and blend. Specifying
# bandwidth will rate cache files to minimize network bandwidth.
# Choosing responsetime will rank cached files to minimize response
# time. The blend option will do a combination of the two.
#
# Default: bandwidth
# Syntax: CacheAlgorithm <string>
#         where string is “bandwidth” | “responsetime” | “blend”
CacheAlgorithm bandwidth
#
# GcHighWater directive:
#
# When the cache fills up to the high-water mark, garbage collection
# will begin. The high-water mark is specified as a percentage of the
# total cache capacity. Garbage collection will continue until the
# low-water mark has been reached - see GcLowWater to set this.
# The high-water mark must not be set above 95%, and normally should
# not be set below 50%.
#
# Default: 90 (percent)
# Syntax: GcHighWater <percentage>
#
# Example: Start garbage collection when cache utilization reaches 85%
# GcHighWater 85
GcHighWater 90
#
# GcLowWater directive:
#
# Once cache garbage collection has begun, it will continue until
# cache utilization reaches the low-water mark. The low-water mark is
# specified as a percentage of the total cache capacity. The low-water
# mark must be below the high-water mark; see the GcHighWater directive
# for setting the high-water mark.
#
# Default: 60 (percent)
# Syntax: GcLowWater <percentage>
#
# Example: End garbage collection when the cache reaches 75% full.
# GcLowWater 75
GcLowWater 60
# ===================================================================== #
#
# Advanced proxy and caching directives
#
# ===================================================================== #
#
# ProxyIgnoreNoCache directive:
#
# Allows the proxy server to ignore the “Pragma: no-cache” header
# (usually sent by browsers when the user hits the “Reload” button)
# and serve the content from cache (if available) in defiance of
# the client’s request.
#
# NOTE: This should ONLY be used in unusual circumstances.
#
# Default: off
# Syntax: ProxyIgnoreNoCache <on | off>
ProxyIgnoreNoCache on
#
# AggressiveCaching directive:
#
# Allows the proxy server to cache responses which might not
# ordinarily be cached, for example responses which contain
# the “cache-control: no-cache” header.
#
# Default: <none>
# Syntax: AggressiveCaching <URL pattern>
# Example:
# AggressiveCaching http://www.hosta.com/*
#
# ProxySendClientAddress directive:
#
# Instructs the proxy to forward an HTTP header containing the
# client’s IP address. If ProxySendClientAddress is not defined,
# client IP addresses are NOT forwarded.
#
# Default: <none>
# Syntax: ProxySendClientAddress <HTTP header name>
#
# NOTE: The Remote Configuration forms only support 2 options:
#   (1) client IP addresses NOT forwarded
#       ProxySendClientAddress directive is removed
#   (2) client IP addresses ARE forwarded with Client-IP: header
#       ProxySendClientAddress Client-IP:
#
# Example:
# ProxySendClientAddress Client-IP:
#
# ProxyUserAgent directive:
#
# Substitute a different User-agent string for the one that the client
# sends. This helps make the clients more anonymous when surfing the
# web. User-agent strings may include spaces.
#
# NOTE: Some Web sites automatically generate different pages
#       for certain Web browsers (by looking at the User-Agent header),
#       so if you mask the identity of the browser, you won’t be able
#       to see the pages that are customized for certain browsers.
#
# Default: <none>
# Syntax: ProxyUserAgent <string> (in the form “ProductName/Version”)
#
# Example:
# ProxyUserAgent Mozilla/4.0 (emulation; Javelin/2.0)
#
# ProxyFrom directive:
#
# Specifies the “From:” header to send on all requests that
# go through this proxy.
#
# NOTE: Replaces the “From:” header sent by the client
#
# Default: <none>
# Syntax: ProxyFrom <string>
#
# Example:
# ProxyFrom [email protected]
#
# NoProxyHeader directive:
#
# Allows the proxy to block certain headers that clients send.
#
# NOTE: Any HTTP header (even required headers) can be blocked
#       with this directive, so extreme care should be used.
#
# Default: <none>
# Syntax: NoProxyHeader <header>
#
# Example:
# NoProxyHeader Referer:
#
# ProxyVia directive:
#
# Controls the use of the HTTP Via header.
#
# NOTE: There are 4 possible values for this directive.
#   Full:  WTE will add a Via header into the request or
#          reply. If there is a Via header already in the
#          stream, it will add host information at the end
#          of the line.
#   Set:   The Via header will be set to host information.
#          If there is a Via header already in the stream,
#          it will remove this header from the stream.
#   Pass:  No action involved, pass through.
#   Block: No Via header is forwarded, including WTE’s.
#
# Default: <Full>
# Syntax: ProxyVia <Full|Set|Pass|Block>
#
# CacheMinHold directive:
#
# Overrides the “Expires” tag on documents from certain sites.
# These sites routinely force documents to expire immediately
# even when they have a longer lifetime.
#
# Default: <none>
# Syntax: CacheMinHold <URL pattern> <time spec>
#
# Example:
# CacheMinHold http://www.cachebusters.com/* 1 hour
#
# CacheLocalDomain directive:
#
# Allows local domain sites to be cached, when set “on”.
#
# NOTE: Assumes hostnames without a domain name are local.
#
# Default: on
# Syntax: CacheLocalDomain <on | off>
CacheLocalDomain on
#
# appendCRLFtoPost directive:
#
# Some Origin servers require a Carriage Return and Line Feed to
# be appended at the end of the content body on a POST request.
# Use the appendCRLFtoPost directive to specify sites which should
# have the CRLF appended at the end of the content body when sending
# a POST request to the site.
#
# Default: <none>
# Syntax: appendCRLFtoPost <URL pattern>
#
# Example:
# appendCRLFtoPost http://www.hosta.com/*
#
# SendHTTP10outbound directive:
#
# Specifies the HTTP version to be used on requests sent outbound
# to the origin server or to the next proxy in a chain of proxies.
# SendHTTP10outbound should only be used if the origin server
# or next downstream proxy has known problems handling HTTP/1.1
# requests.
#
# Default: <none>
# Syntax: SendHTTP10outbound <URL pattern>
#
#SendHTTP10outbound http://www.hosta.com/*
SendHTTP10Outbound http://*.mail.yahoo.com/*
#
# CacheQueries directive:
#
# Specifies whether or not responses to queries should be cached.
# If ALWAYS is specified, responses to queries will be cached as
# long as the response is otherwise cacheable.
# If PUBLIC is specified, responses to queries will be cached if
# the response is marked as “public” by containing a
# “cache-control: public” header, or if the response contains
# cache-control headers which will force re-validation on every
# request, and the response is otherwise cacheable.
#
# Default: <none>
# Syntax: CacheQueries <ALWAYS | PUBLIC> <URL Pattern>
#
# Example:
# CacheQueries ALWAYS http://www.hosta.com/*
# CacheQueries PUBLIC http://www.hostb.com/*
#
# CacheByIncomingUrl directive:
#
# Specifies whether to use the incoming URL or the outgoing URL as
# the basis for generating cache file names.
# If ON is specified, the incoming URL will be used to generate the
# cache file name. If OFF is specified, all applicable Name Translation
# plug-ins and Map and Proxy rules will be applied to the incoming URL
# and the resulting URL will be used to generate the cache name.
#
# Default: OFF
# Syntax: CacheByIncomingUrl <on | off>
CacheByIncomingUrl off
#
# ExternalCacheManager directive:
#
# Specifies the ExternalCacheManager id to be used to determine the
# cacheability of dynamic resources. Such resources will receive
# explicit invalidations from the aforementioned cache manager.
# The directive also allows for specification of a default expiry
# time to be used in case an invalidation is not received.
#
# Default: <none>
# Syntax: ExternalCacheManager <External Manager ID> <time spec>
# Example:
# ExternalCacheManager ACME-APPSERV-1 20 minutes
# ExternalCacheManager ACME-APPSERV-2 10 minutes
# ===================================================================== #
#
# RCA (Remote Cache Access) directives
#
# ===================================================================== #
#
# Version directive:
#
# Specify the protocol name and version.
#
# Default: <RCA/1.0>
# Syntax: Version <protocol/major.minor>
Version RCA/1.0
#
# ArrayName directive:
#
# Specify the name of this array.
#
# NOTE: This is an administrative aid.
#       Spaces are not allowed in the array name.
#       This directive is required.
#
# Default: <none>
# Syntax: ArrayName <array name>
# Example:
# ArrayName mastiff
#
# SymmetricConfig directive:
#
# Specifies if the members of this array are in a symmetric configuration.
#
# NOTE: Turning this directive on allows for some optimizations that result
#       in reduced network traffic between RCA nodes. This feature should not
#       be used when the members of the array are dissimilar with regards to
#       configuration.
#
# Default: Off
# Syntax: SymmetricConfig ( On | Off )
#
# RcaThreads directive:
#
# Specify the number of threads working on the RCA port.
#
# NOTE: This number must be less than the MaxActiveThreads. RCA threads will
#       only work on the RCA port. The number of RCA threads is based on the
#       ArraySize and MaxActiveThreads numbers. If not defined, the default
#       value will be used.
#
# Default: MaxActiveThreads * (ArraySize - 1) / (2 * ArraySize - 1)
# Syntax: RcaThreads <num>
# RcaThreads 50
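#
# Worked example (illustrative figures only, not from the shipped file):
# with MaxActiveThreads set to 100 and an ArraySize of 3, the default above
# evaluates to 100 * (3 - 1) / (2 * 3 - 1) = 200 / 5 = 40 RCA threads.
#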
#
# Member directive:
#
# Specify a member of this array.
#
# Default: <none>
# Syntax: Member Name { subdirectives }
#
#   Name          required; the hostname this member is known to
#                 clients by
#
#   subdirectives are one of:
#     RCAAddr      required; IP address or hostname for RCA
#                  communication
#     RCAPort      required; port for RCA communication
#     Timeout      ( milliseconds )
#                  optional; how long to wait for this member
#                  before deciding rigor mortis has occurred;
#                  must be positive; default is 1000 ms
#     BindSpecific ( On | Off )
#                  optional; allows communications to occur on a
#                  private subnet, providing a measure of security;
#                  default is ON
#     ReuseAddr    ( On | Off )
#                  optional; allows faster rejoining of the array;
#                  “on” would allow other processes to steal the
#                  port, causing undefined behaviour;
#                  default is OFF
#
# Example:
# Member bittersweet.chocolate.ibm.com {
#   RCAAddr      127.0.0.1
#   RCAPort      6294
#   Timeout      1000 milliseconds
#   BindSpecific On
#   ReuseAddr    Off
# }
# ===================================================================== #
#
# SNMP directives
#
# ===================================================================== #
#
# SNMP directive:
#
# Set SNMP communication on or off.
#
# Default: off
# Syntax: SNMP <on | off>
SNMP off
#
# SNMPCommunity directive:
#
# The community name which is used by the server to communicate
# with the SNMP agent.
#
# Default: public
# Syntax: SNMPCommunity <public | community name>
SNMPCommunity public
#
# WebMasterEMail directive:
#
# The E-mail address of the person who should get communications
# about this server.
#
# Default: webmaster
# Syntax: WebMasterEMail [email protected]
WebMasterEMail webmaster
#
# ===================================================================== #
#
# Icon directives
#
# ===================================================================== #
#
# AddIcon, AddDirIcon, AddBlankIcon, AddUnknownIcon and AddParentIcon
# directives:
#
#   AddIcon        - Bind icon URL to a MIME content-type or
#                    content-encoding.
#   AddDirIcon     - Specify directory icon URL for directory listing
#   AddBlankIcon   - Specify blank icon URL for directory listing
#   AddParentIcon  - Specify parent directory icon URL for directory
#                    listing
#   AddUnknownIcon - Specify unknown icon URL for directory listing
#
# Default: <default set of icons shown below>
# Syntax: AddIcon        <icon URL> <ALT text> <MIME-type template>
# Syntax: AddDirIcon     <icon URL> <ALT text>
# Syntax: AddBlankIcon   <icon URL> <ALT text>
# Syntax: AddParentIcon  <icon URL> <ALT text>
# Syntax: AddUnknownIcon <icon URL> <ALT text>
#
# If the <icon URL> does not include a path, the server will prepend
# the directory /cpicons/ to the icon filename. Note that the icon URL
# is a virtual path; it will be sent through the mapping rules
# (Map, Pass, etc) to find the real path in the filesystem.
AddIcon app-123.gif      123    application/x-123
AddIcon app-compress.gif Z      x-compress
AddIcon app-compress.gif gz     x-gzip
AddIcon app-fl.gif       FL     application/x-freelance
AddIcon app-pcl.gif      PCL    application/x-pcl
AddIcon app-pdf.gif      PDF    application/pdf
AddIcon app-ps.gif       PS     application/postscript
AddIcon app-shar.gif     shar   application/x-shar
AddIcon app-bsh.gif      sh     application/x-bsh
AddIcon app-csh.gif      csh    application/x-csh
AddIcon app-ksh.gif      ksh    application/x-ksh

AddIcon mp-tar.gif       tar    multipart/x-tar
AddIcon mp-tar.gif       tar    multipart/x-ustar
AddIcon mp-zip.gif       zip    multipart/x-zip

AddIcon audio.gif        au     audio/basic
AddIcon audio.gif        aiff   audio/x-aiff
AddIcon audio.gif        wav    audio/x-wav
AddIcon audio.gif        audio  audio/*

AddIcon image-gif.gif    GIF    image/gif
AddIcon image-jpeg.gif   JPEG   image/jpeg
AddIcon image-pixmap.gif pixmap image/x-xpixmap
AddIcon image-tif.gif    TIFF   image/tiff
AddIcon image.gif        img    image/*

AddIcon text-assem.gif   asm    text/x-asm
AddIcon text-c.gif       c      text/x-c
AddIcon text-html.gif    HTML   text/html
AddIcon text-html.gif    HTML   text/x-ssi-html
AddIcon text-uu.gif      UU     text/x-uuencode
AddIcon text.gif         text   text/*

AddIcon video-avi.gif    avi    video/x-msvideo
AddIcon video-avi.gif    qt     video/quicktime
AddIcon video-jpeg.gif   mjpg   video/x-motion-jpeg
AddIcon video-mpeg.gif   mpeg   video/mpeg
AddIcon video.gif        video  video/*

AddIcon binary.gif       bin    application/octet-stream
AddIcon binary.gif       bin    binary

AddBlankIcon   blank.gif
AddParentIcon  dir-up.gif  UP
AddDirIcon     dir.gif     DIR
AddUnknownIcon unknown.gif ???
# ===================================================================== #
#
# Cache agent directives
#
# ===================================================================== #
#
# UpdateProxy directive:
#
# Specify which proxy server the cache agent should update.
#
# NOTE: This directive is required for the cache agent to function on
#       AIX, Solaris, Linux and Windows systems. This directive is also
#       required when the cache agent needs to update a proxy server
#       other than the local proxy server on which it is running.
#
# Default: <host on which the cache agent is running>
# Syntax: UpdateProxy <fully qualified host name of the proxy server>
# Example:
# UpdateProxy www.ibm.com
#
# LoadURL directive:
#
# Specify URLs to be loaded into the cache.
#
# NOTE: These URLs are placed at the top of the cache agent’s queue.
#
# Default: <none>
# Syntax: LoadURL <URL to load>
# Example:
# LoadURL http://www.ibm.com/
#
# LoadTopCached directive:
#
# Specify that the cache agent should load the specified number of
# most popular URLs to the cache.
#
# NOTE: These URLs are queued below the URLs loaded by the
#       LoadURL directive
#
# Default: 100
# Syntax: LoadTopCached <num to load>
#
# NOTE: In order to use this directive, the server configuration
#       file MUST specify Caching “on” and have valid values
#       for the cache location and size
#
LoadTopCached 100
#
# IgnoreURL directive:
#
# Specify URLs that are NOT to be retrieved by the cache agent.
# This directive applies only to delving performed by the cache
# agent. If you wish for the proxy server not to cache certain
# URLs, then see the NoCache directive.
# This directive may be specified multiple times.
#
# NOTE: Wild cards “*” may be used.
#
# Default: */cgi-bin/* (Ignore URLs containing /cgi-bin/)
# Syntax: IgnoreURL <URL to ignore>
# Example:
# IgnoreURL http://www.yahoo.com/*
# IgnoreURL http://www.yahoo.com/*.html
#
LoadURL http://RS-AppServer.demo.tivoli.com/
IgnoreURL */cgi-bin/*
#
# DelveInto directive:
#
# Specify whether or not the cache agent should load pages
# linked off of cached URLs.
#
# Default: always
# Syntax: DelveInto <always | never | admin | topn>
#
#   always - the cache agent will parse the HTML of cached
#            URLs and retrieve linked pages.
#   never  - the cache agent will not parse the HTML of
#            cached URLs.
#   admin  - the cache agent will only parse HTML documents that
#            originated from one of the LoadURL directives.
#   topn   - the cache agent will only parse HTML documents that
#            were chosen from the server’s cache.
DelveInto always
#
# DelveDepth directive:
#
# Specify the number of link levels to follow when searching
# for pages to load into the cache.
#
# NOTE: Applicable when DelveInto “always” is specified.
#
# Default: 1
# Syntax: DelveDepth <num>
DelveDepth 2
#
# DelveAcrossHosts directive:
#
# Specify whether the cache agent will only retrieve pages
# found on this server, or if the cache agent will retrieve
# pages from other hosts.
#
# NOTE: Applicable when DelveInto “never” is NOT specified.
#
# Default: off
# Syntax: DelveAcrossHosts <on | off>
DelveAcrossHosts off
# DelayPeriod directive:
#
# Specify whether the cache agent should wait between sending
# requests to destination servers.
#
# Default: on
# Syntax: DelayPeriod <on | off>
#
#   on  - reduces the load on the proxy machine and your
#         network link, as well as being kinder to the
#         destination servers.
#   off - allows the cache agent to run at maximum speed.
#
# NOTE: When DelayPeriod is off, certain sites will be
#       accessed very often in rapid succession.
DelayPeriod on
#
# LoadInlineImages directive:
#
# Indicate whether inline images should be retrieved by the
# cache agent.
#
# Default: on
# Syntax: LoadInlineImages <on | off>
LoadInlineImages on
#
# NumClients directive:
#
# Specify the number of worker threads to use to request pages.
#
# NOTE: Increase this number for a fast machine and a fast
#       Internet link. Use a smaller number for a slow
#       machine or a slow Internet link.
#
# Default: 4
# Syntax: NumClients <num> (maximum of 100)
NumClients 4
#
# MaxQueueDepth directive:
#
# Specify the maximum depth of the cache agent’s queue of
# outstanding page retrieval requests. Specifying a larger
# number creates a larger queue.
#
# NOTE: Only applicable when DelveInto “always” is specified;
#       otherwise, MaxQueueDepth will be set equal to the
#       value specified on the MaxUrls directive.
#
# Default: 250
# Syntax: MaxQueueDepth <num of queue entries>
# Example:
# MaxQueueDepth 500
MaxQueueDepth 250
#
# MaxUrls directive:
#
# Specify the maximum number of URLs the cache agent will
# request during a particular run.
#
# Default: 2000
# Syntax: MaxUrls <num>
MaxUrls 2000
#
# MaxRuntime directive:
#
# Specify the maximum amount of time that the cache agent
# will continue requesting URLs.
#
# NOTE: A value of 0 means “no limit” - the request runs
#       until completion.
#
# Default: 2 hours
# Syntax: MaxRuntime 0 | [<num> hours [<num> minutes]]
# Example:
# MaxRuntime 4 hours
# MaxRuntime 4 hours 10 minutes
MaxRuntime 2 hours
#
# AutoCacheRefresh directive:
#
# Specify whether or not cache content will be refreshed
# automatically. If this is set to OFF, the cache agent
# will not be invoked (and all its settings will be ignored).
#
# Default: on
# Syntax: AutoCacheRefresh <on | off>
AutoCacheRefresh on
#
# CacheRefreshTime directive:
#
# Specify when the cache agent should be started.
#
# Default: 3:00 AM
# Syntax: CacheRefreshTime <HH:MM>
#
# Example: Begin cache refresh at 3:50 AM
# CacheRefreshTime 03:50
CacheRefreshTime 3:00
#
# CacheReloadInterval directive:
#
# Specify the time interval for the cache agent to start at.
# This should be used instead of CacheRefreshTime.
#
# Default: off
# Syntax: CacheReloadInterval <time-spec>
#
# Example: Begin reloading the cache every 30 seconds
# CacheReloadInterval 30 seconds
# ===================================================================== #
#
# PICS Filtering directives
#
# ===================================================================== #
#
# PICS Filtering using PICSRules
#
# For a complete specification of PICSRules, see the URL
# http://www.w3.org/PICS/PICS-FAQ-980126.html.
#
# Default: “See Example below”
# Syntax: DefinePicsRule “filterName” {
#           (PicsRule-1.0
#             (
#               ...subdirectives...(not required)
#               serviceinfo (name “serviceURL”
#                            shortname “shortName”
#                            bureau “bureauURL”
#                            ratfile “ratFile”
#                            available-with-content “NO”
#                           )
#               passURL ()
#               failURL ()
#             )
#           )
#         }
#
# NOTE: Be sure to specify only one rule per DefinePicsRule {...}
#       Each rule should begin with a DefinePicsRule “filterName” {
#       and end with a closing “}”.
#
# Example:
#DefinePicsRule “RSAC Example” {
#  (PicsRule-1.0
#    (
#      serviceinfo (
#        name “http://www.rsac.org/ratingsv01.html”
#        shortname “RSAC”
#        available-with-content “YES”
#      )
#      name (
#        rulename “RSAC Example”
#        description “Example rule using the RSAC system to
#                     block naughty pictures.”
#      )
#      passURL (“http://www.ibm.com/*”)
#      optextension (extension-name
#        “http://www1.raleigh.ibm.com/pics/PICSRules_1.0.html”)
#      ibm-javelin-extensions (
#        active “no”
#      )
#      Filter ( Pass ‘((RSAC.v < 3) && (RSAC.s < 3) && (RSAC.n < 3) &&
#                      (RSAC.l < 3))’ )
#    )
#  )
#}
#
# HTTPSCheckRoot directive:
#
# Whether or not to check the unsecure homepage for self-labels and
# apply them to a secure request for the same host.
#
# Default: on
# Syntax: HTTPSCheckRoot <on | off>
HTTPSCheckRoot on
# ===================================================================== #
#
# SSL Directives
#
# ===================================================================== #
#
# SSLEnable directive:
#
# Specifies to listen on Port 443 for secure requests.
#
# Default: OFF
# Syntax: SSLEnable <ON | on | OFF | off>
SSLEnable OFF
#
# SSLOnly directive:
#
# Only have the SSL port listening; no other listening port.
#
# Default: OFF
# Syntax: SSLOnly [ON, OFF]
#
# SSLCaching directive:
#
# In a reverse proxy scenario, attempt to cache content on
# a secure request. Caching must be enabled for this
# directive to be effective.
#
# Default: off
# Syntax: SSLCaching <on | off>
SSLCaching off
#
# SSL Version specification:
#
# Specifies the SSL Version to use - either SSLV3 or SSLV2.
#
# Syntax: SSLVersion <SSLV2> | <SSLV3> | <ALL>
# Default: <ALL>
SSLVersion SSLV3
#
# SSLV3Timeout:
#
# The time in seconds allowed before an SSL V3 session will expire.
#
# Default: 1000
# Valid range: 1 - 86400 seconds (1 day)
# Syntax: SSLV3Timeout <seconds>
SSLV3Timeout 1000
#
# SSLV2Timeout:
#
# The time in seconds allowed before an SSL V2 session will expire.
#
# Default: 50
# Valid range: 1 - 100 seconds
# Syntax: SSLV2Timeout <seconds>
#SSLV2Timeout 50
#
# TLSV1Enable directive:
#
# Set the TLSV1 protocol on and off.
#
# Default: OFF
# Syntax: TLSV1Enable [ON|OFF]
#
# KeyRing directive:
#
# Specifies the file path to the Key Ring Database the server will
# use for SSL requests. Key Ring Files are generated via the
# IKeyman utility.
#
# Default: none
# Syntax: KeyRing <filename>
#
# Example:
# KeyRing /etc/key.kdb
#
# KeyRingStash directive:
#
# Specifies the file path to the Key Ring Database’s password file.
# The password file is generated via the IKeyman utility when building
# a Key Ring Database File.
#
# Default: none
# Syntax: KeyRingStash <filename>
#
# Example:
# KeyRingStash /etc/key.sth
#
# SSL V3 Cipherspecs:
#
# The Cipherspecs allowed for SSL V3
#
# Syntax: V3CipherSpecs <cipherspec string>
# Default: US: “0A09060564620403020100” Export: “0906646203020100”
#
#V3CipherSpecs 0A0605

# SSL V2 Cipherspecs:
#
# The Cipherspecs allowed for SSL V2
#
# Syntax: V2CipherSpecs <cipherspec string>
# Default: US: “137624” Export: “246”
#
#V2CipherSpecs 176
#
# SSLCertificate directive:
#
# Use this directive to specify key labels to allow the proxy to
# determine which certificate to send to the client when WTE is acting
# as a single reverse proxy for multiple domains that are offering their
# own SSL certificates.
#
# Syntax: SSLCertificate Domain/serverIP CertLabel ClientAuth
#         ClientAuth: [NoClientAuth, ClientAuthRequired, NULL]
# Examples:
# SSLCertificate myHostname SSCertLabel1 ClientAuthRequired
# SSLCertificate 0.0.0.0 SSCert1 NoClientAuth
#
# Default:
# No default; the customer must define SSLCertificate before using SSL
# SSLCryptoCard directive:
#
# Use this directive if there is a cryptographic card installed.
#
# Syntax: SSLCryptoCard CardName ON/OFF
#         where CardName: rainbowcs, nciphernfast
#
# Default:
# No need to define this directive if there is no cryptographic hardware
# installed
#
# ForwardSSLPort directive:
#
# Once this directive is specified (non zero), a new thread will
# be created to listen on this port. All the http proxy requests
# sent to this port will be changed to secure requests.
#
# Note: You must enable SSL support to use this directive.
#
# Default: 0
# Syntax: ForwardSSLPort 8888
# SSLCRLLDAPServer directive:
#
# Specifies one or more LDAP servers to use for checking for revoked
# certificates in the LDAP Certificate Revocation List (CRL). For
# each server, specify the LDAP server name, its corresponding port,
# user (distinguished name) and password. You can specify up to 10
# LDAP servers; additional ones will be ignored. The order of the
# directives determines the order in which the LDAP server’s CRL is
# checked.
#
# If this directive is not set, there is no CRL check on certificates.
# If it is set, but not correctly set or the server is down, the CRL
# check is assumed to have failed and the SSL handshake will fail.
#
# Default: <none>
# Syntax: SSLCRLLDAPServer <ldap server> <port> <DNuser> <passwd>
# Example:
# SSLCRLLDAPServer ldap1.acme.com 389 cn=root,o=Acme,c=US password1
# SSLCRLLDAPServer ldap2.acme.com 389 cn=admin password2
# ===================================================================== #
#
# Miscellaneous directives
#
# ===================================================================== #
#
# ConfigFile directive:
#
# Specifies the name of an additional configuration file. Directives
# found in the specified configuration file will be processed after
# processing the current configuration file.
#
# For backwards compatibility, the directive ‘RCAConfigFile’ is
# supported as an alias for ConfigFile.
#
# Default: none
# Syntax: ConfigFile <filename>
#
# Example:
# ConfigFile /etc/rca.conf
#
# FTPUrlPath directive:
#
# Specifies whether FTP URL paths should be treated as absolute
# paths (specified in relation to the root directory) or as
# relative paths (specified in relation to the home directory)
#
# Default: absolute
# Syntax: FTPUrlPath <absolute | relative>
#
# Example:
# FTPUrlPath absolute
#
# flexibleSocks directive:
#
# Whether or not to use a socks configuration file to specify
# hosts for direct connections or connections to a socks server.
#
# Default: on
# Syntax: flexibleSocks <on | off>
flexibleSocks on
# TimeStamp directive:
#
# Enabling this directive will cause WTE to send timestamp information via a
# response header. With this directive set to request, WTE will generate the
# timestamp header only if a timestamp (TS-) header is present in the request.
#
# The timestamp header records the request receipt time in microseconds since
# the midnight 1/1/70 CUT epoch, and the time elapsed in microseconds when
# WTE is about to send the response back to the client.
#
# Default: request
# Syntax: TimeStamp <enable | request | disable>
TimeStamp request
#
# TimeStampId directive:
#
# Provides an identity for the timestamp header. By default, the machine’s
# hostname defined in DNS or as the Hostname directive will be used. For
# example, if TimeStampId is set as “test_proxy_host”, WTE will generate
# a “TS-WTE-test_proxy_host:” response header.
#
# Default: <none>
# Syntax: TimeStampId <EntityId>
# TransparentProxy directive:
#
# Specifies if the server is to be run as a transparent proxy server.
#
# Default: off
# Syntax: TransparentProxy <on | off>
TransparentProxy off
#
# ListenBacklog directive:
#
# Specifies the size of the listen backlog to use for the socket
# the proxy server listens with.
#
# Default: 128
# Syntax: ListenBacklog <value>
ListenBacklog 128
#
# PacFilePath directive:
#
# Specifies the directory containing the PAC files
# generated using the remote config PAC file form.
# These can be accessed at http://proxy/pac/*.
#
# Default: /opt/ibm/edge/cp/server_root/pub/pacfiles
# Syntax: PacFilePath <filepath>
PacFilePath /opt/ibm/edge/cp/server_root/pub/pacfiles
#==================================================================
#
# System Plug-ins
# This section of the configuration file contains directives
# for plug-ins provided as part of the WTE distribution.
#
#==================================================================
#
# ICP Plug-in directives:
#
<ModuleBegin> ICP
# ICP_Address directive
#
# The IP address to use for sending and receiving ICP queries.
# If this address is not specified, the default is to accept and send
# ICP queries on any interface.
#
# Syntax: ICP_Address <IP address>
# ICP_Port directive
#
# The port number on which the ICP Server listens for ICP queries.
#
# Syntax: ICP_Port <port number>
#
ICP_Port 3128
# ICP_Timeout directive
#
# The maximum time in milliseconds to wait for responses to ICP queries.
# If this is not specified, a default timeout of 2000 milliseconds is used.
#
# Syntax: ICP_Timeout <timeout in milliseconds>
#
ICP_Timeout 2000
# ICP_MaxThreads directive
#
# This specifies the number of threads to spawn that listen for ICP
# queries. The default is 5. (Note: On RedHat Linux 6.2 and lower,
# this number must be low because the maximum number of threads that
# can be created per process is small.) Specifying a large number of
# threads for ICP use might limit the number of threads available for
# use in servicing requests.
#
# Syntax: ICP_MaxThreads <number of threads>
# Example:
ICP_MaxThreads 5

# ICP_Peer directive
#
# This directive is used to specify an ICP peer. There should be one line
# for each peer. When a new peer is added to the ICP cluster, a new
# ICP_Peer line must be added to the configuration file of all peers.
# The current host can be included in the peer list and will be ignored
# by ICP at initialization. This feature makes it possible to use one
# configuration file repeatedly in all peers, without having to edit the
# file for each host to remove its definition.
#
# Syntax: ICP_Peer <hostname> <http_port> <icp_port>
#   <hostname>  the name of the peer
#   <http_port> its proxy port
#   <icp_port>  its ICP server port
#
# Example:
# The following line adds the host example.transarc.ibm.com, whose proxy
# port is 80 and ICP port is 3128, as a peer:
ICP_Peer example.transarc.ibm.com 80 3128
<ModuleEnd>
##
## Application Service at the Edge: Application router directives
## Use these directives to configure the behavior of an Edge Application.
##
<ModuleBegin> ApplicationProxy
# AppRouterAdminPort directive
#
# This is the port at which the Application router listens for
# Admin requests from the Edge Admin Server.
#
# Default: 6060
# Syntax: AppRouterAdminPort <Admin Port>
#
AppRouterAdminPort 6060
# AppRouterLocalWASPort directive
#
# This is the port at which the WebSphere Application server
# listens for application requests.
#
# Default: 9080
# Syntax: AppRouterLocalWASPort <WAS Port>
AppRouterLocalWASPort 9080
# AppRouterRetryInterval directive
#
# This directive specifies the length of time after the application
# router disables local application execution due to an exception
# condition, following which the application router will try to
# reroute requests for local execution.
#
# Default: 3600 seconds.
# Syntax: AppRouterRetryInterval <seconds>
AppRouterRetryInterval 3600
#
# <Application> { directive
#
# This directive is used to indicate the beginning of an Edge
# application configuration section. There should be a corresponding
# ‘}’ to terminate the section.
#
# Example:
# <Application> {
#   EarName /installedApps/app1.ear
#   DistributionType Preloaded
#   Enable yes
#   Context-root /servlet/
#   Context-root /otherservlets/
#   NonEdgableURI /otherservlets/notedgable
#   HTTPFailOverStatusCodes 500
# }
##
## The following describe the directives that are valid within an
## Application section.
##
# EarName directive
#
# This directive is used to indicate the name of an application archive.
# For applications that are preloaded or manually installed on the
# machine this must be a full path name to the archive.
#
# Default: none
# Syntax: EarName <Name of Ear>
# Examples:
# EarName /usr/WebSphere/AppServer/installedApps/sampleApp.ear

# DistributionType directive
#
# This directive is used to indicate the type of distribution in place
# of the application. The two supported distribution types are through
# Content Distribution or manual installation.
#
# Default: none
# Syntax: DistributionType CDSManaged | PreLoaded
# Example:
# DistributionType PreLoaded
# Enable directive
#
# This directive allows the admin to temporarily enable or disable routing
# requests that are meant for this particular application.
#
# Default: no
# Syntax: Enable yes|no
# Example:
# Enable no

# Context-root directive
#
# This directive allows the admin to specify the path prefixes of request
# URIs that need to match in order for the request to be routed to this
# application.
#
# Default: none
# Syntax: Context-root <URI - prefix>
# Type: Multiple
# Example:
# Context-root /servlet
# Context-root /jsp

# NonEdgableURI directive
#
# This directive allows the admin to route requests that match one of the
# context root prefixes but are also matched by this directive to the origin
# server. This can be used to define rules to defer execution of the
# application to the origin server.
#
# Default: none
# Syntax: NonEdgableURI <URI>
# Type: Multiple
# Example:
# NonEdgableURI /servlet/notedgableServlet1
# NonEdgableURI /jsp/notedgable.jsp
# HTTPFailoverStatusCodes directive
#
# This directive allows the application execution at this node to be
# temporarily disabled for subsequent requests if a response failed
# with a specific status code.
#
# Default: none
# Syntax: HTTPFailoverStatusCodes <HTTP status codes separated by ‘,’>
# Example:
# HTTPFailoverStatusCodes 400,500
##
## It is possible to define multiple Application sections as follows.
##
# <Application> {
# EarName /installedApps/app1.ear
# DistributionType Preloaded
# Enable yes
# Context-root /servlet/
# Context-root /otherservlets/
# NonEdgableURI /otherservlets/notedgable
# HTTPFailOverStatusCodes 500
# }
#
# <Application> {
# EarName app2.ear
# DistributionType CDSManaged
# Enable no
# Context-root /jsp/
# Context-root /otherjsps/
# NonEdgableURI /otherjsps/notedgable.jsp
# HTTPFailOverStatusCodes 500
# }
#
#
<ModuleEnd>
wses_51_cachingproxy_install.sh
Example A-60 is the WebSphere Caching Proxy installation script.
Example: A-60 Installation script for WebSphere Caching Proxy
#!/bin/sh
# Arguments (positional)
# 1: new user name ( root )
# 2: new user password ( smartway )
# 3: fully qualified file name to be used as proxy configuration
#    ( /swdist/wses/new_ibmproxy.conf )
# 4: path of configuration file target location ( /opt/ibm/edge/cp/etc/en_US )
cp_home=/opt/ibm/edge/cp
# add userid to the admin password file; $1 is passed again as the real-name field
htadm -adduser $cp_home/server_root/protect/webadmin.passwd $1 $2 $1
# stop the server
/etc/init.d/ibmproxy stop
# copy new config file
cp $3 $4/ibmproxy.conf
# start the server
/etc/init.d/ibmproxy start
exit 0
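As a usage sketch, the script is invoked with the four positional arguments
described in its header comments; the values shown here are the sample values
from those comments, not fixed requirements:

./wses_51_cachingproxy_install.sh root smartway \
  /swdist/wses/new_ibmproxy.conf /opt/ibm/edge/cp/etc/en_US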
wses_51_cachingproxy_uninstall.sh
Example A-61 is helpful when uninstalling the WebSphere Caching Proxy.
Example: A-61 Script to stop the WebSphere Caching Proxy
#!/bin/sh
/etc/init.d/ibmproxy stop
exit 0
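The uninstall helper takes no arguments, so a usage sketch is simply:

./wses_51_cachingproxy_uninstall.sh

Note that this script only stops the ibmproxy service; removal of the product
files themselves is performed by the software package that references it.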
A.7.10 WebSphere Caching Proxy v5.1 Fixpack 1
The following files in this section are used to define and build a software package
capable of installing and uninstalling WebSphere Caching Proxy v5.1 fixpack 1.
In the online material, all these files are located in the following path relative to
the parent directory of the type of file:
/websphere/edge/510_fp1/cachingproxy/Linux-IX86.
was_cachingproxy.5.1.0_fp1.spd
Example A-62 is the software package definition file.
Example: A-62 Caching proxy fixpack 1 spd file
“TIVOLI Software Package v4.2.1 - SPDF”
package
name = t_wses_cachingproxy
title = “WAS Caching Proxy V5.1.1”
version = 5.1.0_fp1
web_view_mode = hidden
undoable = n
committable = n
history_reset = n
save_default_variables = n
creation_time = “2004-11-29 19:11:22”
last_modification_time = “2004-11-29 19:11:22”
default_variables
prod_name = wses_cachingproxy_510_fp1
inst_dir = /opt/ibm/edge
work_dir = $(root_dir)/$(prod_name)
log_dir = $(root_dir)/log/$(prod_name)
img_dir = $(work_dir)/image
bin_dir = $(root_dir)/bin/$(os_family)
root_dir = /swdist
tar_file = EdgeCachingProxy511-Linux.tar
home_path = /mnt
prod_path = websphere/edge/510_fp1/cachingproxy/$(os_name)-$(os_architecture)
src_base = $(home_path)/code
src_dir = $(src_base)/$(prod_path)
rsp_base = $(home_path)/rsp
tool_base = $(home_path)/tools
tools_dir = $(tool_base)/$(prod_path)
unpack = yes
cleanup = yes
end
log_object_list
location = $(log_dir)
unix_user_id = 0
unix_group_id = 0
unix_attributes = rwx,rx,
end
source_host_name = srchost
move_removing_host = y
no_check_source_host = y
lenient_distribution = n
default_operation = install
server_mode = all
operation_mode = not_transactional
log_path = /mnt/logs/wses_cachingproxy_510_fp1.log
post_notice = y
before_as_uid = 0
skip_non_zero = n
after_as_uid = 0
no_chk_on_rm = y
log_host_name = srchost
versioning_type = swd
package_type = patch
sharing_control = none
stop_on_failure = y
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = common
translate = n
destination = $(root_dir)/bin
descend_dirs = y
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_directory
stop_on_failure = n
condition = “$(unpack) == yes”
add = y
replace_if_existing = n
replace_if_newer = n
remove_if_modified = n
location = $(src_base)
name = $(prod_path)
translate = n
destination = $(root_dir)/$(prod_name)
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
generic_container
caption = “on install”
stop_on_failure = y
condition = "$(operation_name) == install"
check_disk_space
volume = /,250M
end
add_directory
stop_on_failure = n
condition = “$(unpack) == yes”
add = y
replace_if_existing = n
replace_if_newer = n
remove_if_modified = n
location = $(src_base)
name = $(prod_path)
translate = n
destination = $(work_dir)/image
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
execute_user_program
caption = unpack
condition = “$(unpack) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = "/bin/tar -C $(img_dir) -xvf $(work_dir)/$(prod_name).tar"
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_unpack.out
error_file = $(log_dir)/$(prod_name)_unpack.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
corequisite_files
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = $(src_dir)/$(tar_file)
translate = n
destination = $(work_dir)/$(prod_name).tar
compression_method = stored
rename_if_locked = n
end
end
end
end
end
execute_user_program
caption = “stop server”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “/etc/init.d/ibmproxy stop”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_install_stopServer.out
error_file = $(log_dir)/$(prod_name)_install_stopServer.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
success = 1,65535
end
end
during_remove
path = $(bin_dir)/do_it
arguments = “/bin/rm -r $(work_dir)”
inhibit_parsing = n
working_dir = $(root_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove_cleanup.out
error_file = $(log_dir)/$(prod_name)_remove_cleanup.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,255
success = 256,65535
end
end
end
install_rpm_package
caption = “WAS Caching Proxy”
rpm_options = -vv
rpm_install_type = install
rpm_install_force = y
rpm_install_nodeps = n
rpm_remove_nodeps = n
rpm_report_log = y
rpm_file
image_dir = $(img_dir)/admin
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = WSES_Admin_Runtime-5.1.1-0
rpm_package_file = WSES_Admin_Runtime-5.1.1-0.i686.rpm
end
rpm_file
image_dir = $(img_dir)/cp
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = WSES_CachingProxy_msg_en_US-5.1.1-0
rpm_package_file = WSES_CachingProxy_msg_en_US-5.1.1-0.i686.rpm
end
rpm_file
image_dir = $(img_dir)/icu
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = WSES_ICU_Runtime-5.1.1-0
rpm_package_file = WSES_ICU_Runtime-5.1.1-0.i686.rpm
end
rpm_file
image_dir = $(img_dir)/cp
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = WSES_CachingProxy-5.1.1-0
rpm_package_file = WSES_CachingProxy-5.1.1-0.i686.rpm
end
rpm_file
image_dir = $(img_dir)/doc
is_image_remote = y
keep_images = n
compression_method = stored
rpm_package_name = WSES_Doc_en_US-5.1.1-0
rpm_package_file = WSES_Doc_en_US-5.1.1-0.i686.rpm
end
end
execute_user_program
caption = “start server”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “/etc/init.d/ibmproxy start”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_install_startServer.out
error_file = $(log_dir)/$(prod_name)_install_startServer.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
success = 1,65535
end
end
end
execute_user_program
caption = cleanup
condition = “$(cleanup) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “/bin/rm -r $(work_dir)”
inhibit_parsing = n
working_dir = $(root_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_install_cleanup.out
error_file = $(log_dir)/$(prod_name)_install_cleanup.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,255
failure = 256,65535
end
end
during_remove
path = $(bin_dir)/do_it
arguments = “/bin/rm -r $(inst_dir)”
inhibit_parsing = n
working_dir = $(root_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove_instdir.out
error_file = $(log_dir)/$(prod_name)_remove_instdir.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
execute_user_program
caption = cleanup
condition = “$(cleanup) == yes”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = “/etc/init.d/ibmproxy stop”
inhibit_parsing = n
working_dir = $(root_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove_stopServer.out
error_file = $(log_dir)/$(prod_name)_remove_stopServer.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
end
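After the definition file is built, a package of this kind is normally imported into a profile manager with the Software Distribution wimpspo command and installed on targets with winstsp, or driven through an activity plan as in A.8.2. The following is only a sketch: the profile manager and endpoint labels are hypothetical, and the option syntax should be verified against the Software Distribution command reference for your version.

# hypothetical labels; verify options in the command reference
wimpspo -c pm-software -f was_cachingproxy.5.1.0_fp1.spd -p t_wses_cachingproxy.5.1.0_fp1
winstsp @SoftwarePackage:t_wses_cachingproxy.5.1.0_fp1 @Endpoint:outlet-ep-01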
A.7.11 TimeCard v5.1
The following files in this section are used to define and build a software package
capable of installing and uninstalling the TimeCard application used in the Outlet
Solution.
In the online material, all these files are located in the following path relative to
the parent directory of the type of file: /outlet/TimeCard/510/all/generic.
timecard_510.spd
Example A-63 shows the software package used to install the TimeCard
WebSphere application.
Example: A-63 Software package for TimeCard installation and removal
“TIVOLI Software Package v4.2 - SPDF”
package
name = t_TimeCard
title = “TimeCard Store Application v5.1”
version = 5.1.0
web_view_mode = hidden
undoable = n
committable = o
history_reset = y
save_default_variables = n
creation_time = “2005-02-28 21:14:04”
last_modification_time = “2005-03-01 13:15:29”
default_variables
was_home = /opt/IBM/WebSphere/AppServer
db2_home = /opt/IBM/db2/V8.1
wasuser = root
waspassword = smartway
wasserver = server1
TimeCardDB = STORE
TimeCardInstance = db2inst1
img_base = $(work_dir)/image
root_dir = /swdist
home_path = /mnt
work_dir = $(root_dir)/$(prod_name)
src_dir = $(src_base)/$(prod_path)
prod_name = timecard_510
inst_dir = /opt/IBM/WebSphere/AppServer
tools_dir = $(tool_base)/$(prod_path)
bin_dir = $(root_dir)/bin/$(os_family)
log_dir = $(root_dir)/log/$(prod_name)
tool_base = $(home_path)/tools
prod_path = outlet/TimeCard/510/all/generic
src_base = $(home_path)/code
cleanup = yes
end
log_object_list
location = $(log_dir)
unix_user_id = 0
unix_group_id = 0
unix_attributes = rwx,rx,
end
source_host_name = srchost
move_removing_host = y
no_check_source_host = y
lenient_distribution = y
default_operation = install
server_mode = all
operation_mode = not_transactional,force
log_path = /mnt/logs/timecard_510.log
post_notice = y
before_as_uid = 0
skip_non_zero = n
after_as_uid = 0
no_chk_on_rm = y
log_host_name = srchost
versioning_type = none
package_type = refresh
stop_on_failure = y
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = common
translate = n
destination = $(root_dir)/bin
descend_dirs = y
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,rx
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(src_base)
name = $(prod_path)
translate = n
destination = $(root_dir)/$(prod_name)
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_attributes = rwx,rx,rx
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_directory
stop_on_failure = y
add = y
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
location = $(tool_base)
name = $(prod_path)
translate = n
destination = $(root_dir)/$(prod_name)
descend_dirs = n
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = timecard_install.sh
translate = n
destination = $(prod_name)_install.sh
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = db2_store_cfg.sh
translate = n
destination = db2_store_cfg.sh
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,rx
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = Table.ddl
translate = n
destination = Table.ddl
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,rx
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_file
replace_if_existing = y
replace_if_newer = n
remove_if_modified = n
name = timecard_install.jacl
translate = n
destination = timecard_install.jacl
remove_empty_dirs = y
is_shared = n
remove_extraneous = n
substitute_variables = y
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
end
generic_container
caption = “on install”
stop_on_failure = y
condition = "$(operation_name) == install"
check_disk_space
volume = /,500M
end
add_directory
stop_on_failure = n
add = y
replace_if_existing = n
replace_if_newer = n
remove_if_modified = n
location = $(src_base)
name = $(prod_path)/installableApps
translate = n
destination = $(inst_dir)/installableApps
descend_dirs = y
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
add_directory
stop_on_failure = n
add = y
replace_if_existing = n
replace_if_newer = n
remove_if_modified = n
location = $(src_base)
name = $(prod_path)/properties
translate = n
destination = $(inst_dir)/properties
descend_dirs = y
remove_empty_dirs = n
is_shared = n
remove_extraneous = n
substitute_variables = n
unix_attributes = rwx,rx,
unix_owner = root
unix_user_id = 0
unix_group_id = 0
create_dirs = y
remote = n
compute_crc = n
verify_crc = n
delta_compressible = d
temporary = n
is_signature = n
compression_method = stored
rename_if_locked = n
end
execute_user_program
caption = “Install TimeCard”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “$(work_dir)/$(prod_name)_install.sh install”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_install.out
error_file = $(log_dir)/$(prod_name)_install.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = y
retry = 1
exit_codes
success = 0,0
warning = 1,9
failure = 10,65535
end
end
end
execute_user_program
caption = cleanup
condition = “$(cleanup) == yes”
transactional = n
during_install
path = $(bin_dir)/do_it
arguments = “rm -r $(work_dir)”
inhibit_parsing = n
working_dir = $(root_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_cleanup.out
error_file = $(log_dir)/$(prod_name)_cleanup.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
failure = 1,65535
end
end
end
end
generic_container
caption = “on remove”
stop_on_failure = y
condition = "$(operation_name) == remove"
execute_user_program
caption = “remove”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = “$(work_dir)/$(prod_name)_install.sh uninstall”
inhibit_parsing = n
working_dir = $(work_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_uninstall.out
error_file = $(log_dir)/$(prod_name)_uninstall.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,1
failure = 2,65535
end
end
end
execute_user_program
caption = “remove work_dir”
transactional = n
during_remove
path = $(bin_dir)/do_it
arguments = “/bin/rm -r $(work_dir)”
inhibit_parsing = n
working_dir = $(root_dir)
timeout = -1
unix_user_id = 0
unix_group_id = 0
user_input_required = n
output_file = $(log_dir)/$(prod_name)_remove_workdir.out
error_file = $(log_dir)/$(prod_name)_remove_workdir.err
output_file_append = n
error_file_append = n
reporting_stdout_on_server = y
reporting_stderr_on_server = y
max_stdout_size = 10000
max_stderr_size = 10000
bootable = n
retry = 1
exit_codes
success = 0,0
success = 1,65535
end
end
end
end
end
timecard_install.sh
Example A-64 is the script that controls the TimeCard installation.
Example: A-64 Timecard installation and setup script
#!/bin/sh
function=$1
was_home=$(was_home)
hostname=`hostname`
servername=$(wasserver)
wasuser=$(wasuser)
waspassword=$(waspassword)
curdir=`pwd`
db2_home=$(db2_home)
jaclfile=./timecard_install.jacl
wsadmin="${was_home}/bin/wsadmin.sh -conntype SOAP -host localhost -port 8880 -username ${wasuser} -password ${waspassword} -f ${jaclfile}"
curdir=`pwd`
doit () {
echo "Executing command: " $1
$1
rc=$?
echo "-- result was: " $rc
return $rc;
}
restart () {
doit "${was_home}/bin/stopServer.sh ${servername} -user ${wasuser} -password ${waspassword}"
doit "${was_home}/bin/startServer.sh ${servername} -user ${wasuser} -password ${waspassword}"
}
setup () {
doit "${was_home}/bin/setupCmdLine.sh "
doit "${was_home}/bin/startServer.sh ${servername} -user ${wasuser} -password ${waspassword}"
doit "${wsadmin} create"
doit "${wsadmin} install"
doit "${wsadmin} start"
doit "${was_home}/bin/GenPluginCfg.sh -node.name ${hostname}"
doit "export LIBPATH=${db2_home}/lib:${db2_home}/java"
restart
}
case $function in
install)
# create database and more
doit "./db2_store_cfg.sh install"
#install websphere application
setup
;;
setup)
#install websphere application
setup
#doit "${was_home}/bin/setupCmdLine.sh "
#doit "${was_home}/bin/startServer.sh ${servername} -user ${wasuser} -password ${waspassword}"
#doit "${wsadmin} create"
#doit "${wsadmin} install"
#doit "${wsadmin} start"
#doit "${was_home}/bin/GenPluginCfg.sh -node.name ${hostname}"
;;
uninstall)
#remove websphere application
doit "${was_home}/bin/setupCmdLine.sh "
doit "${wsadmin} stop"
doit "${wsadmin} uninstall"
doit "${wsadmin} remove"
doit "${was_home}/bin/GenPluginCfg.sh -node.name ${hostname}"
restart
# remove database and more
doit "./db2_store_cfg.sh uninstall"
;;
remove)
#remove websphere application
doit "${was_home}/bin/setupCmdLine.sh "
doit "${wsadmin} stop"
doit "${wsadmin} uninstall"
doit "${wsadmin} remove"
doit "${was_home}/bin/GenPluginCfg.sh -node.name ${hostname}"
# remove database and more
#doit "./db2_store_cfg.sh uninstall"
;;
*)
echo "Illegal function ${function} "
;;
esac
exit 0
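The software package calls this script through the do_it wrapper with a single argument that selects the function to run, and the $(...) placeholders (was_home, wasuser, and so on) are resolved by Software Distribution when the file is added with substitute_variables = y, so the copy on the endpoint contains literal values. Run by hand on an endpoint, the two entry points the package uses would look roughly like this, assuming the package's default variables:

cd /swdist/timecard_510
./timecard_510_install.sh install      # create the STORE database, then deploy the EAR
./timecard_510_install.sh uninstall    # undeploy the EAR, then drop the database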
db2_store_cfg.sh
Example A-65 is the script used to create the STORE database for the TimeCard
application.
Example: A-65 db2_store_cfg.sh script to create store databases
#!/bin/sh
function=$1
doit () {
echo "Executing command: " $1
$1
rc=$?
echo "-- result was: " $rc
return $rc;
}
db2_home="$(db2_home)"
db2grp="db2grp1"
db2instance="$(TimeCardInstance)"
db2database="$(TimeCardDB)"
password="$(waspassword)"
curdir=`pwd`
myself=${curdir}/db2_store_cfg.sh
file="${curdir}/Table.ddl"
case $function in
create_database)
file=$2
doit "db2start"
doit "db2 drop db ${db2database}"
doit "db2 create db ${db2database}"
doit "db2 connect to ${db2database}"
doit "db2 grant connect on database to user appuser"
doit "db2 -tvf $file"
doit "db2 drop alias appuser.timecardentity"
doit "db2 create alias appuser.timecardentity for timecardentity"
doit "db2 grant select,insert,update,delete on timecardentity to user appuser"
;;
stop_database)
doit "db2stop -force"
;;
install)
## define group
echo " ensuring existence of db2 user group"
doit "groupadd $db2grp"
## define user db2store
echo " creating instance user"
doit "useradd -g $db2grp -G root -m -p $password $db2instance"
## create db2store instance
instance_list=`${db2_home}/instance/db2iset -l`
create_instance=true
for instance in $instance_list;
do
if [ $instance = $db2instance ]; then
create_instance=false
break
fi
done
if [ $create_instance = "true" ]; then
doit "${db2_home}/instance/db2icrt -a SERVER -p 50001 -s ese -u db2fenc1 $db2instance"
su - $db2instance -c "$myself create_database $file"
else
echo "Instance $db2instance already exists"
fi
;;
uninstall)
## drop db2store instance
instance_list=`${db2_home}/instance/db2iset -l`
drop_instance=false
for instance in $instance_list;
do
if [ $instance = $db2instance ]; then
drop_instance=true
break
fi
done
if [ $drop_instance = "true" ]; then
## stop the database
su - $db2instance -c "$myself stop_database $file"
doit "${db2_home}/instance/db2idrop $db2instance"
else
echo "Instance $db2instance does not exist"
fi
doit "userdel -r $db2instance"
;;
*)
echo "Illegal function: $function"
esac
exit 0
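Note that the script re-invokes itself under the instance owner (the su - $db2instance -c "$myself ..." calls), because the db2 commands must run inside the instance environment. A manual test of just the database step, assuming the default instance name and paths from the package definition, might be:

su - db2inst1 -c "/swdist/timecard_510/db2_store_cfg.sh create_database /swdist/timecard_510/Table.ddl"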
Table.ddl
Example A-66 shows the DB2 data definitions used to create the database
objects for the TimeCard application.
Example: A-66 Store database table definitions
-- Generated by Relational Schema Center on Fri Jan 30 17:03:38 EST 2004
DROP TABLE TIMECARDENTITY;
CREATE TABLE TIMECARDENTITY
(USERID VARCHAR(250) NOT NULL,
JOBDATE DATE NOT NULL,
INTIME TIME,
OUTTIME TIME);
ALTER TABLE TIMECARDENTITY
ADD CONSTRAINT PK_TIMECARDENTITY PRIMARY KEY (USERID, JOBDATE);
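A quick way to sanity-check the resulting table from the instance owner's session, assuming the STORE database created by db2_store_cfg.sh (the sample user ID is hypothetical):

db2 connect to STORE
db2 "describe table TIMECARDENTITY"
db2 "insert into timecardentity (userid, jobdate, intime) values ('clerk01', CURRENT DATE, CURRENT TIME)"
db2 connect reset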
timecard_install.jacl
Example A-67 is the control file for installing the TimeCard J2EE application.
Example: A-67 TimeCard application installation script
# Usage: wsadmin -connType SOAP -host <HOSTNAME> -port 8880 -user wasuser -password clemson -f installApp.jacl
# Configure Authentication Alias
# Configure JDBC Data Source and J2C Authentication Alias
# Install TimeCardEARProject application
# Authentication Alias Configuration
puts “received command: $argv”
set function $argv
#set function “create”
#set function “install”
#set function “start”
#set function “stop”
#set function “uninstall”
#set function “remove”
# <---------------
set dbuser "$(wasuser)"
set dbpassword “$(waspassword)”
set wasuser “$(wasuser)”
set wasuserpw “$(waspassword)”
set db2_home “$(db2_home)”
set was_home “$(was_home)”
set driverClassPath “$(db2_home)/java/db2java.zip”
set serverName “$(wasserver)”
# <----------------
set aliasName "TimeCardAlias"
set dsName "TimeCardDataSource"
set applName "TimeCardEARProject"
set databaseName1 "$(TimeCardDB)"
puts “ Getting ready to $function $applName”
set desc_attr [list description “Time Card Application Alias”]
set cellName [$AdminControl getCell]
set nodeName [$AdminControl getNode]
set secId [$AdminConfig getid /Cell:$cellName/Security:/]
set alias_attr [list alias $aliasName]
set userid_attr [list userId $dbuser]
set password_attr [list password $dbpassword]
set cellId [$AdminConfig getid /Cell:$cellName/]
set nodeId [$AdminConfig getid /Cell:$cellName/Node:$nodeName/]
set serverId [$AdminConfig getid /Node:$nodeName/Server:$serverName]
set cfName ${dsName}_CF
set mappingConfigAlias “DefaultPrincipalMapping”
set dsDescription “DataSource for Time Card application”
set applearname “${applName}.ear”
set jndiName “jdbc/TimeCardDataSource”
set statementCacheSize 10
set datasourceHelperClassname com.ibm.websphere.rsadapter.DB2DataStoreHelper
set rsadapterID [$AdminConfig list J2CResourceAdapter $serverId]
##set providerName “DB2 JDBC Provider (XA)”
##set implementationClassName “COM.ibm.db2.jdbc.DB2XADataSource”
set providerName “DB2 JDBC Provider”
##set implementationClassName “COM.ibm.db2.jdbc.DB2DataSource”
set implementationClassName “COM.ibm.db2.jdbc.DB2ConnectionPoolDataSource”
set providerDescription $providerName
set datasourceHelperClassname “com.ibm.websphere.rsadapter.DB2DataStoreHelper”
set authAliasName “StoreDataSourceAuthData”
set scope $serverId
########################################################
if { ${function} == “create” } {
# ----------------#
# Create the TradeDataSource Authentication Alias
#
puts “ “
puts “ “
puts “...Creating JAAS AuthData $authAliasName”
set attrs [subst {(cells/${cellName}:security.xml)} ]
set attrs0 [subst {{alias $authAliasName} {$userid_attr} {$password_attr}}]
foreach authEntry [$AdminConfig list JAASAuthData] {
foreach aliasEntry [$AdminConfig showall $authEntry] {
if { [lsearch -regexp $aliasEntry $authAliasName] >= 0 } {
puts “$authAliasName already exists”
set authDataAlias $authEntry
break
}
}
}
if { ![info exists authDataAlias] } {
# Always create AuthData at cell level
set authDataAlias [$AdminConfig create JAASAuthData $attrs $attrs0 ]
}
puts “authDataAlias: $authDataAlias”
# ----------------
puts " "
puts “ “
puts “Creating JDBCProvider in scope $scope”
set attrs1 [subst {{classpath “$driverClassPath”} {implementationClassName
$implementationClassName} {name \”$providerName\”} {description
“$providerDescription”}}]
foreach providerEntry [$AdminConfig list JDBCProvider $scope] {
if { [string first $providerName $providerEntry] >= 0 } {
puts “$providerName already exists for $scope”
set provider $providerEntry
break
}
}
# If JDBC provider does not yet exist, create a new one
if { ![info exists provider] } {
set provider [$AdminConfig create JDBCProvider $scope $attrs1]
}
##puts "$AdminConfig getid /Cell:$cellName/Node:$nodeName/Server:$serverName/JDBCProvider:$providerName/"
##set provider [$AdminConfig getid /Cell:$cellName/Node:$nodeName/Server:$serverName/JDBCProvider:$providerName/]
puts “provider: $provider”
# ----------------
puts " "
puts “ “
puts “...Creating Datasource $provider”
set attrs2 [subst {{name “$dsName”} {description “$dsDescription”} {jndiName
$jndiName} {statementCacheSize $statementCacheSize} {datasourceHelperClassname
$datasourceHelperClassname} {relationalResourceAdapter $rsadapterID}
{authMechanismPreference “BASIC_PASSWORD”} {authDataAlias “TimeCardAlias”}}]
foreach dsEntry [$AdminConfig list DataSource $scope] {
if { [string first $dsName $dsEntry] >= 0 } {
puts “DataSource $dsName already exists”
set DoNoCreateDatasource “true”
set datasource $dsEntry
break
}
}
#
# If DataSource does not yet exist, create a new one
#
if { ![info exists datasource] } {
set datasource [$AdminConfig create DataSource $provider $attrs2]
}
puts “datasource: $datasource”
# ----------------
if { ![info exists DoNoCreateDatasource] } {
puts “ “
puts “ “
puts “...Creating J2EEResourcePropertySet for $datasource”
set propSet1 [$AdminConfig create J2EEResourcePropertySet $datasource {}]
puts “propSet1: $propSet1”
#-----------------
puts " "
puts “ “
puts “...Creating J2EEResourceProperty for $propSet1”
set attrs3 [subst {{name databaseName} {type java.lang.String} {value
“$databaseName1”}}]
set j2eeprop1 [$AdminConfig create J2EEResourceProperty $propSet1 $attrs3]
puts “j2eeprop1: $j2eeprop1”
#------------------
puts " "
puts “ “
puts “...Creating ConnectionPool for $datasource”
set connectionPool1 [$AdminConfig create ConnectionPool $datasource
{{connectionTimeout 1000} {maxConnections 30} {minConnections 1} {agedTimeout
1000} {reapTime 2000} {unusedTimeout 3000}}]
puts “connectionPool1 = $connectionPool1”
#------------------
puts " "
puts “ “
puts “...Creating $mappingConfigAlias on $datasource”
set map1 [$AdminConfig create MappingModule $datasource {{mappingConfigAlias
$mappingConfigAlias} {authDataAlias {}}}]
puts “map1: $map1”
}
#------------------
puts " "
puts “ “
puts “...Creating ConnectionFactory”
foreach cfEntry [$AdminConfig list CMPConnectorFactory $rsadapterID] {
if { [string first $cfName $cfEntry] >= 0 } {
puts “Connection factory $cfName already exists for $rsadapterID”
set cf1 $cfEntry
break
}
}
# If ConnectionFactory does not yet exist, create a new one
if { ![info exists cf1] } {
puts “Creating ConnectionFactory $cfName on $rsadapterID”
set attrs4 [subst {{name $cfName} {authMechanismPreference BASIC_PASSWORD}
{cmpDatasource $datasource} {authDataAlias “$aliasName”}}]
set cf1 [$AdminConfig create CMPConnectorFactory $rsadapterID $attrs4]
}
puts “connectionFactory1: $cf1”
puts “ “
puts “Saving configuration..”
$AdminConfig save
#>>>>>>>>>>>>>>>>>>>>>>>>>>>>> SECURITY AND MORE
puts “ “
puts “ “
puts "Setting JVM Arguments"
# Set JVM Arguments (Server1->ProcessDefinition->JVM->Generic JVM Arguments).
# Create java properties: qMgr & queue.
set jvm [$AdminConfig list JavaVirtualMachine]
set qMgr “QMGR.STORE.1”
set queue “TIMECARD.QUEUE”
set jvmarg “-Dstore.queue.manager=$qMgr -Dstore.queue=$queue”
set attrs5 [subst {{genericJvmArguments [list $jvmarg]}}]
$AdminConfig modify $jvm $attrs5
######$AdminConfig save
puts “ “
puts “ “
puts “CUSTOM USER REGISTRY Configuration”
# CUSTOM USER REGISTRY Configuration
# Set active user registry to Custom User registry
set user_regName “Custom”
set security_item [$AdminConfig list Security]
# List all the user registries defined
set user_regs [$AdminConfig list UserRegistry]
# Find the one that starts with the name we set at the beginning of the script
foreach user_reg $user_regs { if {[regexp $user_regName $user_reg]} { set new_user_reg $user_reg; break }}
set attrs6 [subst {{activeUserRegistry $new_user_reg} {enabled true}
{enforceJava2Security false}}]
# Modify the user registry attribute for the security object
$AdminConfig modify $security_item $attrs6
#################$AdminConfig save
puts “ “
puts “ “
puts “Set the properties for Custom User Registry”
# Set the properties for Custom User Registry
set curClassName “com.ibm.websphere.security.FileRegistrySample”
set serverId $wasuser
set serverPassword $wasuserpw
set properties [list [list [list name "usersFile"] [list value "${was_home}/properties/users.prop"]] [list [list name "groupsFile"] [list value "${was_home}/properties/groups.prop"]]]
set attrs7 [subst {{customRegistryClassName $curClassName} {serverId $serverId} {serverPassword $serverPassword} {properties [list $properties]}}]
$AdminConfig modify $new_user_reg $attrs7
###############$AdminConfig save
$AdminConfig save
}
########################################################
if { ${function} == “install” } {
#-------------
puts " "
puts “ “
puts “Installing TimeCardEARProject application”
foreach applEntry [$AdminApp list] {
if { [string first $applName $applEntry] >= 0 } {
puts “Application $applName has already been installed”
set appl1 $applEntry
break
}
}
# If Application has not been installed, install it
if { ![info exists appl1] } {
$AdminApp install ${was_home}/installableApps/${applearname}
}
}
########################################################
if { ${function} == “start” } {
#---------------
#puts " "
#puts “ “
#puts “Starting TimeCardEARProject application”
#
set appMgr [$AdminControl queryNames cell=$cellName,node=$nodeName,type=ApplicationManager,process=server1,*]
$AdminControl invoke $appMgr startApplication ${applName}
}
########################################################
if { ${function} == “stop” } {
#puts “ “
#puts “ “
#puts “Stopping $applName application”
#
set appMgr [$AdminControl queryNames cell=$cellName,node=$nodeName,type=ApplicationManager,process=server1,*]
$AdminControl invoke $appMgr stopApplication ${applName}
}
########################################################
if { ${function} == “uninstall” } {
#-----------------
puts " "
puts “ “
puts “Uninstalling TimeCardEARProject application”
foreach applEntry [$AdminApp list] {
if { [string first $applName $applEntry] >= 0 } {
set appl1 [$AdminApp uninstall ${applName}]
break
}
}
# If Application has not been installed, don’t remove it
if { ![info exists appl1] } {
puts “Application $applName was not installed”
}
}
########################################################
if { ${function} == “remove” } {
#---------------
puts " "
puts “ “
puts "replacing CustomUserRegistry REGISTRY Configuration with LocalOSUserRegistry"
### Set active user registry to Local OS registry
set regName “LocalOS”
set security_item [$AdminConfig list Security]
### List all the user registries defined
set user_regs [$AdminConfig list UserRegistry]
### Find the one that starts with the name we set at the beginning of the script
foreach user_reg $user_regs { if {[regexp $regName $user_reg]} { set new_user_reg $user_reg; break }}
set attrs6 [subst {{activeUserRegistry $new_user_reg} {enabled true}
{enforceJava2Security true}}]
## Modify the user registry attribute for the security object
$AdminConfig modify $security_item $attrs6
puts “ “
puts “ “
puts "Setting properties for LocalOSUserRegistry"
## Set the properties for Local User Registry
#set curClassName “com.ibm.websphere.security.FileRegistrySample”
set serverId $wasuser
set serverPassword $waspassword
##set properties [list [list [list name "usersFile"] [list value "${was_home}/properties/users.prop"]] [list [list name "groupsFile"] [list value "${was_home}/properties/groups.prop"]]]
##set attrs7 [subst {{customRegistryClassName $curClassName} {serverId $serverId} {serverPassword $serverPassword} {properties [list $properties]}}]
set attrs7 [subst {{serverId $wasuser} {serverPassword $waspassword}}]
$AdminConfig modify $new_user_reg $attrs7
puts “ “
puts “ “
puts “...removing connection factory”
foreach cfEntry [$AdminConfig list CMPConnectorFactory $rsadapterID] {
if { [string first $cfName $cfEntry] >= 0 } {
puts “removing Connection factory $cfName for $rsadapterID”
$AdminConfig remove $cfEntry
break
}
}
# ----------------
puts " "
puts “ “
puts “...removing Datasource $dsName”
foreach dsEntry [$AdminConfig list DataSource $scope] {
if { [string first $dsName $dsEntry] >= 0 } {
puts “removing DataSource $dsName from $scope”
$AdminConfig remove $dsEntry
break
}
}
# ----------------
puts " "
puts “ “
puts “...removing JDBCProvider in scope $scope”
foreach providerEntry [$AdminConfig list JDBCProvider $scope] {
if { [string first $providerName $providerEntry] >= 0 } {
puts “$providerEntry is being removed from $scope”
$AdminConfig remove $providerEntry
break
}
}
# ----------------
foreach authEntry [$AdminConfig list JAASAuthData] {
foreach aliasEntry [$AdminConfig showall $authEntry] {
if { [lsearch -regexp $aliasEntry $authAliasName] >= 0 } {
puts “$authAliasName is being removed”
$AdminConfig remove $authEntry
break
}
}
}
}
$AdminConfig save
exit
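The installation script in Example A-64 wraps every call to this file in the same wsadmin command line. Invoked by hand with the lab defaults from the software package, a single function looks like this:

/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -conntype SOAP -host localhost -port 8880 \
   -username root -password smartway -f timecard_install.jacl create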
A.8 APM-related scripts and files
This section contains the APM-related scripts and files.
A.8.1 ep_customization.sh
Example A-68 on page 628 shows the script for submitting the APM plan to
customize endpoints.
Example: A-68 ep_customization.sh
#!/bin/sh
set -x
ep_label=$1
TMR_NUMBER=$(wep $ep_label get object|cut -d "." -f1)
TMR_NAME=$(wlookup -ar ManagedNode|grep $TMR_NUMBER.1. | awk '{print $1}')
STORE=`echo $ep_label | cut -d "_" -f3 | cut -d "-" -f1`
SCRIPTS_LOC="/mnt/code/tivoli/scripts"
#plan_name="$SCRIPTS_LOC/apps/APM/new.xml"
plan_name="$SCRIPTS_LOC/apps/APM/Production_Outlet_Plan_v1.0.xml"
#plan_name="$SCRIPTS_LOC/apps/APM/test/test1.xml"
LOG_FILE="$SCRIPTS_LOC/apps/APM/logs/apm_plan.log"
# Submitting plan for the EP
wsubpln -t $ep_label \
-f $plan_name \
-o \
-VEP=$ep_label \
-VMN=$TMR_NAME
RC=$?
if [ $RC -eq 0 ]; then
echo "Activity Plan named $plan_name successfully submitted" >> $LOG_FILE
else
echo "Failed to submit Activity Plan named $plan_name" >> $LOG_FILE
fi
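The script expects the endpoint label as its only argument and derives both the owning TMR and the store number from the label itself. A run for one outlet endpoint might look like this; the label is hypothetical, but its fields must match the cut commands above:

./ep_customization.sh dist_outlet_1234-ep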
A.8.2 Production_Outlet_Plan_v1.0.xml
Example A-69 shows the exported version of the APM plan used to control the
endpoint customization.
Example: A-69 Production_Outlet_Plan_v1.0.xml
<?xml version=”1.0”?>
<!DOCTYPE activity_plan SYSTEM “apm_plan.dtd”>
<activity_plan>
<name>Production_Outlet_Activity_Plan</name>
<priority>medium</priority>
<submit_paused>n</submit_paused>
<cancel_at_cutoff>n</cancel_at_cutoff>
<post_notice>y</post_notice>
<targets_computation>p</targets_computation>
<activity>
<application>SoftwareDistribution</application>
<name>db2udb_server^8.2#hubtmr-region[0]</name>
<target_resource_type>endpoint</target_resource_type>
<targets type=”list”>$(TARGET_LIST)</targets>
<targets_computation>p</targets_computation>
<operation>Install</operation>
<parameter>
<name>EnableMulticast</name>
<value>F</value>
</parameter>
<parameter>
<name>DistributionMode</name>
<value>CHNG_ALL</value>
</parameter>
<parameter>
<name>Transactional</name>
<value>TRAN_NO</value>
</parameter>
<parameter>
<name>Undo</name>
<value>UNDO_NO</value>
</parameter>
<parameter>
<name>RetryUnicast</name>
<value>F</value>
</parameter>
<parameter>
<name>RoamEp</name>
<value>T</value>
</parameter>
<parameter>
<name>Force</name>
<value>F</value>
</parameter>
<parameter>
<name>MobileForceMandatory</name>
<value>F</value>
</parameter>
<parameter>
<name>MobileHidden</name>
<value>F</value>
</parameter>
<parameter>
<name>FromCD</name>
<value>F</value>
</parameter>
<parameter>
<name>SoftwarePackage</name>
<value>db2udb_server^8.2#hubtmr-region</value>
</parameter>
<parameter>
<name>DependencyCheck</name>
<value>T</value>
</parameter>
<parameter>
<name>WakeOnLan</name>
<value>F</value>
</parameter>
<parameter>
<name>Reboot</name>
<value>REBOOT_NO</value>
</parameter>
<parameter>
<name>FromDepot</name>
<value>F</value>
</parameter>
<parameter>
<name>EnableNotification</name>
<value>F</value>
</parameter>
<parameter>
<name>AutoAccept</name>
<value>F</value>
</parameter>
<parameter>
<name>Ignore</name>
<value>F</value>
</parameter>
<parameter>
<name>LenientDistribution</name>
<value>T</value>
</parameter>
<parameter>
<name>DisconnectedOperation</name>
<value>F</value>
</parameter>
<parameter>
<name>Disposable</name>
<value>F</value>
</parameter>
<parameter>
<name>Priority</name>
<value>PRTY_MEDIUM</value>
</parameter>
<parameter>
<name>FromFileServer</name>
<value>F</value>
</parameter>
<parameter>
<name>AutoCommit</name>
<value>F</value>
</parameter>
</activity>
<activity>
<application>SoftwareDistribution</application>
<name>ihs_server^2.0.47#hubtmr-region[0]</name>
<target_resource_type>endpoint</target_resource_type>
<targets type=”list”>$(TARGET_LIST)</targets>
<targets_computation>p</targets_computation>
<operation>Install</operation>
<condition>ST(db2udb_server^8.2#hubtmr-region[0])</condition>
<parameter>
<name>EnableMulticast</name>
<value>F</value>
</parameter>
<parameter>
<name>DistributionMode</name>
<value>CHNG_ALL</value>
</parameter>
<parameter>
<name>Transactional</name>
<value>TRAN_NO</value>
</parameter>
<parameter>
<name>Undo</name>
<value>UNDO_NO</value>
</parameter>
<parameter>
<name>RetryUnicast</name>
<value>F</value>
</parameter>
<parameter>
<name>RoamEp</name>
<value>T</value>
</parameter>
<parameter>
<name>Force</name>
<value>F</value>
</parameter>
<parameter>
<name>MobileForceMandatory</name>
<value>F</value>
</parameter>
<parameter>
<name>MobileHidden</name>
<value>F</value>
</parameter>
<parameter>
<name>FromCD</name>
<value>F</value>
</parameter>
<parameter>
<name>SoftwarePackage</name>
<value>ihs_server^2.0.47#hubtmr-region</value>
</parameter>
<parameter>
<name>DependencyCheck</name>
<value>T</value>
</parameter>
<parameter>
<name>WakeOnLan</name>
<value>F</value>
</parameter>
<parameter>
<name>Reboot</name>
<value>REBOOT_NO</value>
</parameter>
<parameter>
<name>FromDepot</name>
<value>F</value>
</parameter>
<parameter>
<name>EnableNotification</name>
<value>F</value>
</parameter>
<parameter>
<name>AutoAccept</name>
<value>F</value>
</parameter>
<parameter>
<name>Ignore</name>
<value>F</value>
</parameter>
<parameter>
<name>LenientDistribution</name>
<value>T</value>
</parameter>
<parameter>
<name>DisconnectedOperation</name>
<value>F</value>
</parameter>
<parameter>
<name>Disposable</name>
<value>F</value>
</parameter>
<parameter>
<name>Priority</name>
<value>PRTY_MEDIUM</value>
</parameter>
<parameter>
<name>FromFileServer</name>
<value>F</value>
</parameter>
<parameter>
<name>AutoCommit</name>
<value>F</value>
</parameter>
</activity>
<activity>
<application>SoftwareDistribution</application>
<name>mq_series^5.1#hubtmr-region[0]</name>
<target_resource_type>endpoint</target_resource_type>
<targets type=”list”>$(TARGET_LIST)</targets>
<targets_computation>p</targets_computation>
<operation>Install</operation>
<condition>ST(ihs_server^2.0.47#hubtmr-region[0])</condition>
<parameter>
<name>SW_CCM_CB_OBJECT_0</name>
<value>1393424439.1.946#CCMEngine::CCM#</value>
</parameter>
<parameter>
<name>SW_CCM_CB_METHOD_0</name>
<value>ccm_receive_report</value>
</parameter>
<parameter>
<name>SW_CCM_REFMODEL_NAME</name>
<value>Outlet</value>
</parameter>
<parameter>
<name>SoftwarePackage</name>
<value>mq_series^5.1#hubtmr-region</value>
</parameter>
<parameter>
<name>EnableMulticast</name>
<value>F</value>
</parameter>
<parameter>
<name>RetryUnicast</name>
<value>F</value>
</parameter>
<parameter>
<name>SW_CCM_REFMODEL_VERSION</name>
<value>1.1.0</value>
</parameter>
</activity>
<activity>
<application>SoftwareDistribution</application>
<name>mq_series^5.1.FP01#hubtmr-region[0]</name>
<target_resource_type>endpoint</target_resource_type>
<targets type=”list”>$(TARGET_LIST)</targets>
<targets_computation>p</targets_computation>
<operation>Install</operation>
<condition>ST(mq_series^5.1#hubtmr-region[0])</condition>
<parameter>
<name>SW_CCM_CB_OBJECT_0</name>
<value>1393424439.1.946#CCMEngine::CCM#</value>
</parameter>
<parameter>
<name>SW_CCM_CB_METHOD_0</name>
<value>ccm_receive_report</value>
</parameter>
<parameter>
<name>SW_CCM_REFMODEL_NAME</name>
<value>Outlet</value>
</parameter>
<parameter>
<name>SoftwarePackage</name>
<value>mq_series^5.1.FP01#hubtmr-region</value>
</parameter>
<parameter>
<name>EnableMulticast</name>
<value>F</value>
</parameter>
<parameter>
<name>RetryUnicast</name>
<value>F</value>
</parameter>
<parameter>
<name>SW_CCM_REFMODEL_VERSION</name>
<value>1.1.0</value>
</parameter>
</activity>
<activity>
<application>SoftwareDistribution</application>
<name>was_applserver^5.1#hubtmr-region[0]</name>
<target_resource_type>endpoint</target_resource_type>
<targets type=”list”>$(TARGET_LIST)</targets>
<targets_computation>p</targets_computation>
<operation>Install</operation>
<condition>ST(mq_series^5.1.FP01#hubtmr-region[0])</condition>
<parameter>
<name>SW_CCM_CB_OBJECT_0</name>
<value>1393424439.1.946#CCMEngine::CCM#</value>
</parameter>
<parameter>
<name>SW_CCM_CB_METHOD_0</name>
<value>ccm_receive_report</value>
</parameter>
<parameter>
<name>SW_CCM_REFMODEL_NAME</name>
<value>Outlet</value>
</parameter>
<parameter>
<name>SoftwarePackage</name>
<value>was_applserver^5.1#hubtmr-region</value>
</parameter>
<parameter>
<name>EnableMulticast</name>
<value>F</value>
</parameter>
<parameter>
<name>RetryUnicast</name>
<value>F</value>
</parameter>
<parameter>
<name>SW_CCM_REFMODEL_VERSION</name>
<value>1.1.0</value>
</parameter>
</activity>
<activity>
<application>SoftwareDistribution</application>
<name>tmtp_ma^5.3#hubtmr-region[0]</name>
<target_resource_type>endpoint</target_resource_type>
<targets type=”list”>$(TARGET_LIST)</targets>
<targets_computation>p</targets_computation>
<operation>Install</operation>
<condition>ST(was_applserver^5.1#hubtmr-region[0])</condition>
<parameter>
<name>SW_CCM_CB_OBJECT_0</name>
<value>1393424439.1.946#CCMEngine::CCM#</value>
</parameter>
<parameter>
<name>SW_CCM_CB_METHOD_0</name>
<value>ccm_receive_report</value>
</parameter>
<parameter>
<name>SW_CCM_REFMODEL_NAME</name>
<value>Outlet</value>
</parameter>
<parameter>
<name>SoftwarePackage</name>
<value>tmtp_ma^5.3#hubtmr-region</value>
</parameter>
<parameter>
<name>EnableMulticast</name>
<value>F</value>
</parameter>
<parameter>
<name>RetryUnicast</name>
<value>F</value>
</parameter>
<parameter>
<name>SW_CCM_REFMODEL_VERSION</name>
<value>1.1.0</value>
</parameter>
</activity>
<activity>
<application>SoftwareDistribution</application>
<name>TimeCard_Application</name>
<description>Deploy TimeCard Application</description>
<target_resource_type>endpoint</target_resource_type>
<targets type=”list”>$(TARGET_LIST)</targets>
<targets_computation>p</targets_computation>
<operation>Install</operation>
<condition>ST(tmtp_ma^5.3#hubtmr-region[0])</condition>
<parameter>
<name>EnableMulticast</name>
<value>F</value>
</parameter>
<parameter>
<name>DistributionMode</name>
<value>CHNG_ALL</value>
</parameter>
<parameter>
<name>Transactional</name>
<value>TRAN_NO</value>
</parameter>
<parameter>
<name>Undo</name>
<value>UNDO_NO</value>
</parameter>
<parameter>
<name>RetryUnicast</name>
<value>T</value>
</parameter>
<parameter>
<name>RoamEp</name>
<value>T</value>
</parameter>
<parameter>
<name>Force</name>
<value>F</value>
</parameter>
<parameter>
<name>MobileForceMandatory</name>
<value>T</value>
</parameter>
<parameter>
<name>MobileHidden</name>
<value>F</value>
</parameter>
<parameter>
<name>FromCD</name>
<value>F</value>
</parameter>
<parameter>
<name>SoftwarePackage</name>
<value>timecard^2.0#hubtmr-region</value>
</parameter>
<parameter>
<name>WakeOnLan</name>
<value>F</value>
</parameter>
<parameter>
<name>Reboot</name>
<value>REBOOT_NO</value>
</parameter>
<parameter>
<name>DependencyCheck</name>
<value>T</value>
</parameter>
<parameter>
<name>FromDepot</name>
<value>F</value>
</parameter>
<parameter>
<name>EnableNotification</name>
<value>F</value>
</parameter>
<parameter>
<name>Ignore</name>
<value>F</value>
</parameter>
<parameter>
<name>AutoAccept</name>
<value>F</value>
</parameter>
<parameter>
<name>LenientDistribution</name>
<value>T</value>
</parameter>
<parameter>
<name>DisconnectedOperation</name>
<value>F</value>
</parameter>
<parameter>
<name>Disposable</name>
<value>F</value>
</parameter>
<parameter>
<name>Priority</name>
<value>PRTY_MEDIUM</value>
</parameter>
<parameter>
<name>AutoCommit</name>
<value>F</value>
</parameter>
<parameter>
<name>FromFileServer</name>
<value>F</value>
</parameter>
</activity>
<activity>
<application>TaskLibrary</application>
<name>Discover_WebSphere_Resources</name>
<description>Discover WebSphere Application Servers</description>
<target_resource_type>managed_node</target_resource_type>
<targets type=”list”>$(MN)</targets>
<targets_computation>p</targets_computation>
<operation>ExecuteTask</operation>
<condition>SA(TimeCard_Application)</condition>
<parameter>
<name>ExecutionMode</name>
<value>EXEC_MODE_PARALLEL</value>
</parameter>
<parameter>
<name>OutputHost</name>
<value>hubtmr</value>
</parameter>
<parameter>
<name>OutputFormat</name>
<value>HROE</value>
</parameter>
<parameter>
<name>Library</name>
<value>WebSphere Application Server Utility Tasks#hubtmr-region</value>
</parameter>
<parameter>
<name>Timeout</name>
<value>600</value>
</parameter>
<parameter>
<name>OutputFile</name>
<value>/tmp/task_was_discovery.out</value>
</parameter>
<parameter>
<name>Task</name>
<value>Discover_WebSphere_Resources</value>
</parameter>
<parameter>
<name>Argument2</name>
<value>$(EP)</value>
</parameter>
<parameter>
<name>Argument1</name>
<value>/opt/IBM/WebSphere/AppServer</value>
</parameter>
<parameter>
<name>Argument0</name>
<value>WebSphere Application Servers</value>
</parameter>
</activity>
<activity>
<application>TaskLibrary</application>
<name>ITM_WAS_AppSvr_RM_Distribution</name>
<description>Distributes ITM WAS RM for ApplicationServer resource</description>
<target_resource_type>managed_node</target_resource_type>
<targets type=”list”>$(MN)</targets>
<targets_computation>p</targets_computation>
<operation>ExecuteTask</operation>
<condition>SA(Discover_WebSphere_Resources)</condition>
<parameter>
<name>OutputFile</name>
<value>/tmp/ITM_WAS_AppSvr_RM_Distribution.out</value>
</parameter>
<parameter>
<name>ExecutionMode</name>
<value>EXEC_MODE_PARALLEL</value>
</parameter>
<parameter>
<name>Timeout</name>
<value>600</value>
</parameter>
<parameter>
<name>Argument0</name>
<value>$(EP)</value>
</parameter>
<parameter>
<name>OutputHost</name>
<value>hubtmr</value>
</parameter>
<parameter>
<name>Library</name>
<value>Production_TL#hubtmr-region</value>
</parameter>
<parameter>
<name>OutputFormat</name>
<value>HROE</value>
</parameter>
<parameter>
<name>Task</name>
<value>itm_WAS_rm_distrib</value>
</parameter>
</activity>
<activity>
<application>TaskLibrary</application>
<name>Create_DB2_instance_objects</name>
<description>Creates a DB2 Instance object on an Endpoint with a DB2 server installation</description>
<target_resource_type>managed_node</target_resource_type>
<targets type=”list”>$(MN)</targets>
<targets_computation>p</targets_computation>
<operation>ExecuteTask</operation>
<condition>SA(ITM_WAS_AppSvr_RM_Distribution)</condition>
<parameter>
<name>OutputFile</name>
<value>/tmp/create_DB2_instance_objects.out</value>
</parameter>
<parameter>
<name>ExecutionMode</name>
<value>EXEC_MODE_PARALLEL</value>
</parameter>
<parameter>
<name>Timeout</name>
<value>600</value>
</parameter>
<parameter>
<name>Argument0</name>
<value>$(EP)</value>
</parameter>
<parameter>
<name>OutputHost</name>
<value>hubtmr</value>
</parameter>
<parameter>
<name>Library</name>
<value>Production_TL#hubtmr-region</value>
</parameter>
<parameter>
<name>OutputFormat</name>
<value>HROE</value>
</parameter>
<parameter>
<name>Task</name>
<value>Create_DB2_instance_objects</value>
</parameter>
</activity>
<activity>
<application>TaskLibrary</application>
<name>Create_DB2_database_objects</name>
<description>Creates a DB2 Database object on an Endpoint with a DB2 server installation</description>
<target_resource_type>managed_node</target_resource_type>
<targets type=”list”>$(MN)</targets>
<targets_computation>p</targets_computation>
<operation>ExecuteTask</operation>
<condition>SA(Create_DB2_instance_objects)</condition>
<parameter>
<name>OutputFile</name>
<value>/tmp/create_DB2_database_objects.out</value>
</parameter>
<parameter>
<name>ExecutionMode</name>
<value>EXEC_MODE_PARALLEL</value>
</parameter>
<parameter>
<name>Timeout</name>
<value>600</value>
</parameter>
<parameter>
<name>Argument0</name>
<value>$(EP)</value>
</parameter>
<parameter>
<name>OutputHost</name>
<value>hubtmr</value>
</parameter>
<parameter>
<name>Library</name>
<value>Production_TL#hubtmr-region</value>
</parameter>
<parameter>
<name>OutputFormat</name>
<value>HROE</value>
</parameter>
<parameter>
<name>Task</name>
<value>Create_DB2_database_objects</value>
</parameter>
</activity>
<activity>
<application>TaskLibrary</application>
<name>ITM_DB2_Database_RM_Distribution</name>
<description>Distributes ITM DB2 Resource Models for Database Object(s)</description>
<target_resource_type>managed_node</target_resource_type>
<targets type=”list”>$(MN)</targets>
<targets_computation>p</targets_computation>
<operation>ExecuteTask</operation>
<condition>SA(Create_DB2_database_objects)</condition>
<parameter>
<name>OutputFile</name>
<value>/tmp/ITM_DB2_Database_RM_Distribution.out</value>
</parameter>
<parameter>
<name>ExecutionMode</name>
<value>EXEC_MODE_PARALLEL</value>
</parameter>
<parameter>
<name>Timeout</name>
<value>600</value>
</parameter>
<parameter>
<name>Argument0</name>
<value>$(EP)</value>
</parameter>
<parameter>
<name>OutputHost</name>
<value>hubtmr</value>
</parameter>
<parameter>
<name>Library</name>
<value>Production_TL#hubtmr-region</value>
</parameter>
<parameter>
<name>OutputFormat</name>
<value>HROE</value>
</parameter>
<parameter>
<name>Task</name>
<value>itm_DB2_database_rm_distrib</value>
</parameter>
</activity>
<activity>
<application>TaskLibrary</application>
<name>ITM_DB2_Instance_RM_Distribution</name>
<description>Distributes ITM DB2 Resource Models for Instance Object(s)</description>
<target_resource_type>managed_node</target_resource_type>
<targets type=”list”>$(MN)</targets>
<targets_computation>p</targets_computation>
<operation>ExecuteTask</operation>
<condition>SA(ITM_DB2_Database_RM_Distribution)</condition>
<parameter>
<name>OutputFile</name>
<value>/tmp/ITM_DB2_Instance_RM_Distribution.out</value>
</parameter>
<parameter>
<name>ExecutionMode</name>
<value>EXEC_MODE_PARALLEL</value>
</parameter>
<parameter>
<name>Timeout</name>
<value>600</value>
</parameter>
<parameter>
<name>Argument0</name>
<value>$(EP)</value>
</parameter>
<parameter>
<name>OutputHost</name>
<value>hubtmr</value>
</parameter>
<parameter>
<name>Library</name>
<value>Production_TL#hubtmr-region</value>
</parameter>
<parameter>
<name>OutputFormat</name>
<value>HROE</value>
</parameter>
<parameter>
<name>Task</name>
<value>itm_DB2_instance_rm_distrib</value>
</parameter>
</activity>
</activity_plan>
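The plan above chains its activities serially: each <condition>SA(...)</condition> element holds an activity until the named predecessor has completed successfully. Before importing an exported plan such as this one, it can be worth checking that the chain is intact. The following is a minimal sketch, not part of the solution itself; it assumes the plan XML has been saved locally as db2_plan.xml (a hypothetical file name) and that xmllint and awk are available on the system:

#!/bin/sh
# Sketch: sanity-check an exported activity plan before importing it.
# db2_plan.xml is a hypothetical local copy of the plan XML listed above.
PLAN=db2_plan.xml

# Confirm the file is well-formed XML (xmllint is part of libxml2).
xmllint --noout "$PLAN" && echo "$PLAN is well-formed"

# Print each activity name together with the SA() condition it waits on,
# so the serial dependency chain can be reviewed in one screenful.
awk '
  /<activity>/            { in_activity = 1 }
  in_activity && /<name>/ { gsub(/<[^>]*>/, ""); sub(/^[ \t]+/, ""); act = $0; in_activity = 0 }
  /<condition>/           { gsub(/<[^>]*>/, ""); sub(/^[ \t]+/, ""); printf "%-40s waits on %s\n", act, $0 }
' "$PLAN"

For the activities listed above, this would report, for example, that Create_DB2_database_objects waits on SA(Create_DB2_instance_objects).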
Appendix B. Obtaining the installation images

This appendix shows you how to obtain the installation materials discussed in this IBM Redbook.
IBM Business Partners

If you are a registered IBM Business Partner who has purchased the IBM Software Access Option and signed the IBM PartnerWorld® Agreement PartnerWorld Software Usage Attachment, you can download trial copies of all the IBM software used in this book from the IBM Software Access Catalog Web site:

http://www.developer.ibm.com/OrderingWeb/ordersw.do
1. Accept the License Agreement and select the Software Access Catalog Electronic Software Download site.

Figure B-1 Searching for Software Downloads
2. To find the needed packages for selected products at the Software Access Catalog site, enter the product name as the search criteria and enable searching for All of the words entered. Then press Search to view the list of downloads that fulfill your search criteria.
3. In addition to the downloadable images obtained from the PartnerWorld site, a number of Fix Packs and patches were used in the development of this book. All of these can be downloaded from the IBM Support site at:

http://www.tivoli.com/support/patches

Table B-1 on page 647 outlines the downloadable installation images used in the creation of this book.
Table B-1 Installation image download reference

name | title | size (bytes)

Tivoli Management Framework 4.1.1 | | 1,662,524,545
C531TML.tar | Tivoli Management Framework v4.1.1 1 of 2, Multiplatform, International English, Brazilian Portuguese, French, German, Italian, Spanish | 690,575,360
C531UML.tar | Tivoli Management Framework v4.1.1 2 of 2, Multiplatform, International English, Brazilian Portuguese, French, German, Italian, Spanish | 290,498,560
C53UXML.tar | Tivoli Management Framework Compatibility v4.1.1, Multiplatform, International English, Brazilian Portuguese, French, German, Italian, Spanish | 419,543,040
C48E9IE.tar | Tivoli Management Framework Tier 2, Multiplatform, International English | 9,297,920
C531RIE.zip | Tivoli Management Framework Documentation v4.1.1, Multiplatform, International English | 9,809,665
4.1.1-LCF-0008.tar (a) | Tivoli Framework Patch 4.1.1-LCF-0008 | 55,700,000
4.1.1-TMF-0011.tar (a) | Tivoli Framework Patch 4.1.1-TMF-0011 | 114,300,000
4.1.1-TMF-0021.tar (a) | Tivoli Framework Patch 4.1.1-TMF-0021 | 72,800,000

Tivoli Enterprise Console v3.9 | | 1,920,552,303
C52J3IE.zip | IBM Tivoli Enterprise Console Documentation V3.9 Multi Int Engl | 312,525,800
C52IZIE.zip | IBM Tivoli Enterprise Console (TME New Installations) V3.9 Multi Int English | 300,861,020
3.9.0-TEC-0003LA.tar (a) | IBM Tivoli Enterprise Console Version 3.9.0 LA Interim Fix 03 | 250,224,640
3.9.0-TEC-FP02.tar (a) | IBM Tivoli Enterprise Console Version 3.9.0 Fix Pack 2 | 1,056,940,843

Tivoli Configuration Manager v4.2.1 | | 2,440,060,622
C52NNML.tar | IBM Tivoli Configuration Manager Installation v4.2.1, Multiplatform, Brazilian Portuguese, French, German, International English, Italian, Japanese, Korean, Simplified Chinese, Spanish | 167,066,112
C52NKML.tar | IBM Tivoli Configuration Manager Upgrade v4.2.1, Multiplatform, Brazilian Portuguese, French, German, International English, Italian, Japanese, Korean, Simplified Chinese, Spanish | 678,571,008
C52NJML.tar | IBM Tivoli Configuration Manager v4.2.1, Multiplatform, Brazilian Portuguese, French, German, International English, Italian, Japanese, Korean, Simplified Chinese, Spanish | 482,354,176
C561XML.tar | IBM Tivoli Configuration Manager Web Gateway v4.2.1, Multiplatform, Brazilian Portuguese, French, German, International English, Italian, Japanese, Korean, Simplified Chinese, Spanish | 539,246,080
C52NLML.zip | IBM Tivoli Configuration Manager Windows v4.2.1, Brazilian Portuguese, French, German, International English, Italian, Japanese, Korean, Simplified Chinese, Spanish | 305,555,324
C52NQIE.zip | IBM Tivoli Configuration Manager Documentation v4.2.1, International English | 18,754,242
4.1.1-TCM-FP01_images.tar (a) | IBM Tivoli Configuration Manager, Version 4.2.1, Fix Pack 4.2.1-TCM-FP01 (U498122) | 241,735,680
4.2.1-INV-0006.tar (a) | 4.2.1-INV-0006, Tivoli Inventory 4.2.1 Interim Fix | 6,778,000

ITM v5.1.2 | | 1,817,351,168
C560LML.tar | Tivoli Monitoring Install V5.1.2 Multi Int Engl Braz Port French Italian German Spanish Japanese Korean Simp Chin Trad Chin | 401,261,568
C560GIE.TAR | Tivoli Monitoring V5.1.2 Multi Int English | 155,426,304
C560KML.tar | Tivoli Monitoring Tools V5.1.2 Multi Int English Braz Port French Italian German Spanish Japanese Korean Simp Chin Trad Chin | 357,331,456
C57HEML.tar | Tivoli Monitoring Publications for the UNIX User V5.1.2 Multiplatform ML | 162,826,240
C46XCML.exe | IBM Tivoli Monitoring Web Health Console V5.1.1 for Windows, Linux Braz Port, French, Italian, German, Spanish, Japanese, Korean, Simp Chin, Trad Chin, Int English | 330,798,592
C560IML.tar | Tivoli Monitoring Web Health Console V5.1.2 Fixpack 6 Multi Int Engl Braz Port French Italian German Spanish Japanese Korean Simp Chin Trad Chin | 336,208,896
Note: Obtain the Web Health Console for WebSphere 5.1 from your local Tivoli Support organization.
5.1.2-ITM-FP02.tar (a) | Fix Pack (5.1.2-ITM-FP02) for IBM Tivoli Monitoring 5.1.2 | 66,801,152
5.1.1-ICS-FP02.tar (a) | IBM Tivoli Monitoring Component Services 5.1.1 Fix Pack 2 | 6,696,960

IBM Monitoring for Databases | | 977,066,466
C508KML.tar | Tivoli Monitoring for Databases: Install V5.1.1 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl Braz Port French Italian German Spanish Japan Korea Simp Chin Trad Chin | 301,711,360
C476PML.exe | Tivoli Monitoring for Databases: Install 2 V5.1.1 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl Braz Port French Italian German Spanish Japan Korea Simp Chin Trad Chin | 305,062,400
C508NIE.zip | Tivoli Monitoring for Databases: Documentation V5.1.1 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl | 148,361,698
C476VIE.exe | Tivoli Monitoring for Databases: DB2 Component Software V5.1 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl | 110,550,528
5.1.0-ITMD-DB2-FP05.image.tar (a) | IBM Tivoli Monitoring for Databases 5.1.0 - DB2 Component Software 5.1.0 Fix Pack 5 | 111,380,480

IBM Tivoli Monitoring for Web Infrastructure | | 1,493,504,052
C54TSML.tar | Tivoli Monitoring for Web Infrastructure: Install V5.1.2 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl Braz Port French Italian German Spanish Japan Korea Simp Chin Trad Chin | 401,285,120
C54TTML.tar | Tivoli Monitoring for Web Infrastructure: Install 2 V5.1.2 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl Braz Port French Italian German Spanish Japan Korea Simp Chin Trad Chin | 249,835,520
C54TUML.tar | Tivoli Monitoring for Web Infrastructure: Install 3 V5.1.2 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl Braz Port French Italian German Spanish Japan Korea Simp Chin Trad Chin | 344,043,520
C54TVIE.tar | Tivoli Monitoring for Web Infrastructure: WebSphere Application Server Component Software V5.1.2 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl | 267,028,480
C54TXIE.zip | Tivoli Monitoring for Web Infrastructure: Documentation V5.1.2 AIX HP-UX Linux Solaris Win2000 WinNT WinXP Int Engl | 187,760,692
5.1.2-WAS-FP02.tar | 5.1.2-WAS-FP02 Tivoli Monitoring for Web Infrastructure: WebSphere Application | 43,550,720

Tivoli Monitoring for Transaction Performance v5.3 | | 3,139,604,480
C8061IE.tar | IBM Tivoli Monitoring for Transaction Performance, Version 5.3.0: Web Transaction Performance Component Software Management Server (1 of 2) | 579,205,120
C8062IE.tar | IBM Tivoli Monitoring for Transaction Performance, Version 5.3.0: Web Transaction Performance Component Software Management Server (2 of 2) | 363,960,320
C8064IE.tar | IBM Tivoli Monitoring for Transaction Performance, Version 5.3.0: Web Transaction Performance Component Software Management Agent, Store and Forward | 692,951,040
C806AIE.ta | IBM Tivoli Monitoring for Transaction Performance, Version 5.3.0: Documentation CD (English only) UNIX | 399,667,200
C806CIE.tar | IBM Tivoli Monitoring for Transaction Performance, Version 5.3.0: Rational Robot, Warehouse Enablement Pack, Tivoli Intelligent Orchestrator, TMTP 5.1 to TMTP 5.3 Upgrade | 428,328,960
C8065IE.tar | IBM Tivoli Monitoring for Transaction Performance, Version 5.3.0: Web Transaction Performance Component Software TMTP 5.2 to TMTP 5.3 Upgrade | 675,491,840

DB2 UDB Enterprise Server v8.2 | | 495,104,000
C58S8ML.tar | DB2 UDB Enterprise Server Edition V8.2 for Linux for Intel 32-bit (English, French, German, Spanish, Italian, Brazilian Portuguese, Russian, Polish, Czech, Japanese, Korean, Simplified and Traditional Chinese) | 495,104,000

IBM HTTP Server V2.0 | | 20,400,000
HTTPServer.linux.2047.tar (b) | The installation image for IBM HTTP Server v2 for Linux can be downloaded from http://www-306.ibm.com/software/webservers/httpservers/IHS20.html | 20,400,000

WebSphere MQ v5.3 | | 153,434,302
C48UBML.tar.gz | WebSphere MQ v5.3.0.2, Linux for Intel, Brazilian Portuguese, French, German, International English, Italian, Japanese, Korean, Simplified Chinese, Spanish, Traditional Chinese | 136,994,668
U489967.gskit.tar.gz (a) | WebSphere MQ for Linux Intel, v5.3 PTF U489967 (CSD06) - with SSL support | 16,439,634

WebSphere Application Server v5.1 | | 1,070,672,037
c53ipml.tar | WebSphere Application Server v5.1 for Linux (Brazilian Portuguese, French, German, Italian, Japanese, Korean, Spanish, US English) | 318,648,320
C81E6ML.taz | WebSphere Application Server V5.1.1 for Linux, Fixpack 1, English, Brazilian Portuguese, French, German, Italian, Spanish, Japanese, Korean, Simplified Chinese, Traditional Chinese | 258,967,717
c53tdml.tar | WebSphere Application Server Network Deployment V5.1 Edge Components for LINUX, Brazilian Portuguese, French, German, Italian, Japanese, Korean, Simplified Chinese, Spanish, Traditional Chinese, US English | 381,286,400
EdgeCachingProxy511-Linux.tar (c) | WebSphere Application Server Network Deployment V5.1 Edge Components for LINUX | 111,769,600

Total: 47 downloads | | 15,190,273,975

a. Obtain this fix from the Tivoli support site at: http://www.tivoli.com/support/patches/
b. Obtain this file from: http://www-306.ibm.com/software/webservers/httpservers/IHS20.html
c. Obtain this fix from the Tivoli support site at: http://www-1.ibm.com/support/docview.wss?rs=180&context=SSEQTP&uid=swg24007420
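Because several of these archives are very large, it is worth confirming that each download completed intact before unpacking. The following is a minimal sketch, assuming the images sit in the current directory; the expected byte counts come from Table B-1, and the three files listed are illustrative rather than the full set of 47:

#!/bin/sh
# Sketch: compare downloaded image sizes against the byte counts in Table B-1.
# Extend the list below with the remaining images as required.
while read file expected; do
  if [ -f "$file" ]; then
    actual=$(wc -c < "$file")
  else
    actual="missing"
  fi
  if [ "$actual" = "$expected" ]; then
    echo "OK   $file"
  else
    echo "BAD  $file (expected $expected bytes, got $actual)"
  fi
done <<'EOF'
C531TML.tar 690575360
C48UBML.tar.gz 136994668
c53ipml.tar 318648320
EOF

A size match does not guarantee integrity the way a checksum would, but it catches the most common failure, a truncated transfer, with tools available on any Linux system.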
Appendix C. Additional material

This redbook refers to additional material that can be downloaded from the Internet as described below.
Locating the Web material
The Web material associated with this redbook is available in softcopy on the
Internet from the IBM Redbooks Web server. Point your Web browser to:
ftp://www.redbooks.ibm.com/redbooks/SG246468
Alternatively, you can go to the IBM Redbooks Web site at:
ibm.com/redbooks
Select the Additional materials and open the directory that corresponds with
the redbook form number, SG246468.
Using the Web material
The additional Web material that accompanies this redbook includes the following file:

File name: SG246468.zip
Description: Zipped code samples and scripts
System requirements for downloading the Web material

The following system configuration is recommended:

Hard disk space: 10 MB
Operating System: Windows/UNIX
Processor: 700 MHz or higher
Memory: 256 MB or more
How to use the Web material
Create a subdirectory (folder) on your workstation, and unzip the contents of the
Web material zip file into this folder.
Please refer to Appendix A, “Configuration files and scripts” on page 349 for a detailed description of the content of, and instructions on how to unpack, the additional material.
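As an illustration, the fetch-and-unpack sequence might look like the following minimal sketch. The working directory name is arbitrary, the full FTP path to SG246468.zip is an assumption based on the server and directory given above, and wget and unzip must be installed:

#!/bin/sh
# Sketch: fetch and unpack the additional material.
# The exact path below ftp://www.redbooks.ibm.com/redbooks/SG246468 is
# assumed; adjust it to match what the server actually presents.
mkdir -p "$HOME/sg246468"
cd "$HOME/sg246468" || exit 1
wget ftp://www.redbooks.ibm.com/redbooks/SG246468/SG246468.zip
unzip SG246468.zip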
Glossary
authorization role: A role assigned to Tivoli administrators to enable them to perform their assigned systems management tasks. A role can be granted over the entire Tivoli Management Region (TMR) or over a specific set of resources, such as those contained in a policy region. Examples of authorization roles include super, senior, admin, and user.

bulletin board: The primary mechanism by which the Tivoli Framework and Tivoli applications communicate with Tivoli administrators. The bulletin board is represented as an icon on the Tivoli desktop through which the administrators can access notices. Tivoli applications use the bulletin board as an audit trail for important operations that the administrators perform.

collection: A container that groups objects on a Tivoli desktop, thus providing the Tivoli administrator with a single view of related resources. Either the Tivoli Framework or a Tivoli administrator can create a collection. The contents of a collection are referred to as its members. Examples of collections include the administrator collection, the generic collection, and the monitoring collection; the administrator collection is an example of a collection generated by the Tivoli Framework.

configuration repository: In the Tivoli environment, the relational database that contains information collected or generated by Tivoli applications. Examples of the information stored in the configuration repository: Tivoli Inventory stores information concerning hardware, software, system configuration, and physical inventory; Tivoli Software Distribution stores information concerning file package operations; Tivoli Enterprise Console stores information concerning events.

data: A piece of information that needs to be communicated between the enterprise and the outlet.

data packet: A combination of the actual data plus additional information, which can include priority, size, and encryption level.

default policy: A set of resource property values that are assigned to a resource when the resource is created.

Deployment Request: A request created by the solution deployer in the enterprise to arrange a deployment in the outlet.

Downcall: A method invocation from the Tivoli Management Region (TMR) server or gateway “down” to an endpoint.

Employee Handhelds: Any handheld wireless device.

EMR: Enterprise Management Region, the key enterprise environment within Outlet Inc.
endpoint: (1) A Tivoli client that is running an endpoint service (or daemon). Typically, an endpoint is a machine that is not used to perform daily management operations. An endpoint communicates only with its assigned gateway. (2) Any managed resource that represents the final destination for a profile distribution. Tivoli Distributed Monitoring also uses the concept of a proxy endpoint, which is a representation for a non-Tivoli entity (such as a network device or a host) that functions as a subscriber for Tivoli Distributed Monitoring profiles. A Tivoli administrator associates each proxy endpoint with a managed node; several proxy endpoints can be associated with a single managed node. (3) In the Tivoli environment, a managed node, PC managed node, or SQL server on which a task is run.

endpoint client: See endpoint.

endpoint gateway: See gateway.

endpoint list: A list of all endpoints in the Tivoli Management Region (TMR) and their assigned gateways. This list is maintained by the endpoint manager.

endpoint manager: A service on the Tivoli server that controls and configures gateways and endpoints, assigns endpoints to gateways, and maintains the endpoint list.

endpoint method: A method that runs on an endpoint as the result of a request from other managed resources in the Tivoli Management Region (TMR). Results of the method are forwarded to the gateway and then to the calling managed resource.

Enterprise: An enterprise (or company) comprises all the establishments that operate under the ownership or control of a single organization. For the purposes of this book, Enterprise refers to a corporation, such as a retailer or bank, with large numbers of stores or branches throughout a specific region or the world. Each retail store or bank branch is referred to as an outlet.

Enterprise Application: An enterprise application is an application that is installed in the enterprise. Its design and operation are out of scope for this solution.

event repository: In the Tivoli environment, the relational database in which TEC stores TEC events.

fanout: In communications, the process of creating copies of a distribution to be delivered locally or to be sent through the network.

gateway method: A method that runs on the gateway’s proxy managed node on behalf of the endpoint. Results of the method are forwarded to the calling m