Teradata Database
Utilities - Volume 1
A-K
Release 13.0
B035-1102-098A
October 2010
The product or products described in this book are licensed products of Teradata Corporation or its affiliates.
Teradata, BYNET, DBC/1012, DecisionCast, DecisionFlow, DecisionPoint, Eye logo design, InfoWise, Meta Warehouse, MyCommerce,
SeeChain, SeeCommerce, SeeRisk, Teradata Decision Experts, Teradata Source Experts, WebAnalyst, and You’ve Never Seen Your Business Like
This Before are trademarks or registered trademarks of Teradata Corporation or its affiliates.
Adaptec and SCSISelect are trademarks or registered trademarks of Adaptec, Inc.
AMD Opteron and Opteron are trademarks of Advanced Micro Devices, Inc.
BakBone and NetVault are trademarks or registered trademarks of BakBone Software, Inc.
EMC, PowerPath, SRDF, and Symmetrix are registered trademarks of EMC Corporation.
GoldenGate is a trademark of GoldenGate Software, Inc.
Hewlett-Packard and HP are registered trademarks of Hewlett-Packard Company.
Intel, Pentium, and XEON are registered trademarks of Intel Corporation.
IBM, CICS, RACF, Tivoli, and z/OS are registered trademarks of International Business Machines Corporation.
Linux is a registered trademark of Linus Torvalds.
LSI and Engenio are registered trademarks of LSI Corporation.
Microsoft, Active Directory, Windows, Windows NT, and Windows Server are registered trademarks of Microsoft Corporation in the United
States and other countries.
Novell and SUSE are registered trademarks of Novell, Inc., in the United States and other countries.
QLogic and SANbox are trademarks or registered trademarks of QLogic Corporation.
SAS and SAS/C are trademarks or registered trademarks of SAS Institute Inc.
SPARC is a registered trademark of SPARC International, Inc.
Sun Microsystems, Solaris, Sun, and Sun Java are trademarks or registered trademarks of Sun Microsystems, Inc., in the United States and other
countries.
Symantec, NetBackup, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the United States
and other countries.
Unicode is a collective membership mark and a service mark of Unicode, Inc.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other product and company names mentioned herein may be the trademarks of their respective owners.
THE INFORMATION CONTAINED IN THIS DOCUMENT IS PROVIDED ON AN “AS-IS” BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR
NON-INFRINGEMENT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSION
MAY NOT APPLY TO YOU. IN NO EVENT WILL TERADATA CORPORATION BE LIABLE FOR ANY INDIRECT, DIRECT, SPECIAL, INCIDENTAL,
OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS OR LOST SAVINGS, EVEN IF EXPRESSLY ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
The information contained in this document may contain references or cross-references to features, functions, products, or services that are
not announced or available in your country. Such references do not imply that Teradata Corporation intends to announce such features,
functions, products, or services in your country. Please consult your local Teradata Corporation representative for those features, functions,
products, or services available in your country.
Information contained in this document may contain technical inaccuracies or typographical errors. Information may be changed or updated
without notice. Teradata Corporation may also make improvements or changes in the products or services described in this information at any
time without notice.
To maintain the quality of our products and services, we would like your comments on the accuracy, clarity, organization, and value of this
document. Please e-mail: [email protected]
Any comments or materials (collectively referred to as “Feedback”) sent to Teradata Corporation will be deemed non-confidential. Teradata
Corporation will have no obligation of any kind with respect to Feedback and will be free to use, reproduce, disclose, exhibit, display, transform,
create derivative works of, and distribute the Feedback and derivative works thereof without limitation on a royalty-free basis. Further, Teradata
Corporation will be free to use any ideas, concepts, know-how, or techniques contained in such Feedback for any purpose whatsoever, including
developing, manufacturing, or marketing products or services incorporating Feedback.
Copyright © 2000 – 2010 by Teradata Corporation. All Rights Reserved.
Preface
Purpose
This book, Utilities, describes the utility programs that support the Teradata Database and
consists of two manuals:
• Utilities Volume 1 includes utilities A-K.
• Utilities Volume 2 includes utilities L-Z.
Topics covered in this volume include utilities whose names begin with letters A through K.
Use this book in conjunction with the other volume.
Chapter names reflect the utility common name followed by the name of the executable utility
program enclosed in parentheses, for example, Control GDO Editor (ctl). Use the executable
program name to start the utility from the command line or Database Window.
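For illustration, a utility can be started in either of these two ways. The following lines are a sketch only; they assume a Linux TPA node, and the actual prompts and window behavior depend on your platform and configuration.

From a system command prompt on a Teradata Database node, enter the executable program name:

   ctl

From the Supervisor screen of Database Window, use the START command followed by the executable program name:

   start checktable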
Some utilities are restricted to running on specific platforms. Refer to the User Interfaces
section of each chapter to determine the platforms that support each utility.
A third manual, Support Utilities, describes utilities most often used to support Teradata
Database. These utilities are used primarily by Teradata Support and field engineers, and
should be used only under the direction of Teradata Support personnel.
Audience
The utilities described in this book are used primarily by Teradata Support Center field
engineers, Teradata Database developers, System Test and Verification personnel, and system
administrators. For example, these utilities are used to display control parameters, display
DBS control record fields, find and correct problems within the file system, initialize the
Teradata Database, rebuild tables in the database, and manage the virtual processors (vprocs).
These utilities are also used to abort transactions and processes; monitor system performance,
resources, and status; perform internal system checking; and perform system configuration,
initialization, recovery, and tuning.
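As a brief sketch of one such task, displaying the DBS Control Record fields, the DBS Control utility (Chapter 12) could be run as shown below. The commands are illustrative only; see the DISPLAY command description for the exact field groups and output on your system.

   dbscontrol         (start the DBS Control utility from a command prompt)
   display general    (show the General group of DBS Control Record fields)
   quit               (exit the utility; unwritten changes are discarded)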
Users should be familiar with the Teradata Database console running the Database Window
(DBW) and with their client (host) system.
Experienced utilities users can refer to the simplified command descriptions in Utilities Quick
Reference, which provides the syntax diagrams for each Teradata Database utility.
Supported Software Release
This book supports Teradata® Database 13.0.
Prerequisites
You should be familiar with using Database Window (DBW) to run the console utilities.
Additionally, you might want to review the following related books:
• Performance Management
• Resource Usage Macros and Tables
• Platform-dependent Teradata Database Installation/Upgrade/Migration documents
For detailed information about the Archive and Recovery, FastExport, FastLoad, MultiLoad,
and Teradata Parallel Data Pump utilities, see the following client utilities books:
• Teradata Archive/Recovery Utility Reference
• Teradata FastExport Reference
• Teradata FastLoad Reference
• Teradata MultiLoad Reference
• Teradata Parallel Data Pump Reference
Changes to This Book
Teradata Database 13.0, October 2010

CheckTable
Noted that on non-quiescent systems, CheckTable defaults to CONCURRENT MODE with RETRY LIMIT set to one.

Teradata Database 13.0, March 2010

DBS Control
New DBS Control fields:
• DefaultCaseSpec
• DBQL Log Last Resp

Teradata Database 13.0, November 2009

DBS Control
• Added SmallDepotCylsPerPdisk and LargeDepotCylsPerPdisk.
• Added MPS_IncludePEOnlyNodes to the list of general DBS Control fields.
• Added AccessLockForUncomRead.
• Added DBQL Log Last Resp.

Teradata Database 13.0, April 2009

General
Removed references to MultiTool. MultiTool is no longer supported.

CheckTable
• Added documentation for the new DOWN ONLY option.
Cnsrun
Added documentation for the new -multi option, which allows cnsrun to start a utility if other instances of the utility are running.

Cufconfig
• Added documentation for new JavaHybridThreads and MallocLimit settings.
• Removed the -j option, modified the description of the JREPath setting, and updated other JRE information for Java UDFs.
• Added documentation for six new fields related to the global and persistent (GLOP) data feature for external routines.
• Reformatted the chapter for easier reading and better navigation.

Ctl (Control GDO Editor)
• Removed the “Maximum FSG Area Dumped” and “Maximum Dump Size” controls from the Debug screen. They have no effect on Windows and Linux.
• Removed the “screen dump n” and “PRINT DUMP” commands.

Database Window
Added a new chapter to describe this console utility.

DBS Control
• Rearranged field descriptions alphabetically to facilitate navigation.
• Updated information for the Performance field MaxParseTreeSegs.
• Noted that minicylpack operations will now try to honor the FreeSpacePercent setting.
• Updated the description of the Temporary Storage Page Size field.
• Updated the description of PermDBAllocUnit to reflect that 255 is the largest data block size.
• Updated the description of the RedistBufSize field. It now provides finer control over the size of redistribution buffers.
• Increased the maximum possible value for Cylinders Saved for PERM to 52487.
• Noted interactions between Teradata Dynamic Workload Manager settings, MaxLoadAWT, and MaxLoadTasks.
• Removed information for JournalDBSize, which is currently ineffective. The field is now reserved for future use.
• Added information for the new File System group field, Free Cylinder Cache Size.
• Added information for the following new General fields:
  • Bkgrnd Age Cycle Interval
  • MaxDownRegions
  • MaxJoinTables
  • MaxRowHashBlocksPercent
  • MPS_IncludePEOnlyNodes
  • ObjectUseCountCollectRate
  • PrimaryIndexDefault
  • TempLargePageSize
  • RepCacheSegSize
  • RevertJoinPlanning
DIP
• Changed the DIPPWD name to DIPPWRSTNS.
• Added Period data type information to the DIPUDT description.
• New DIPRCO script. Reserved for future use.
• New DIPGLOP script enables the global and persistent (GLOP) data feature for external routines.

Ferret
• Added a description of the new SHOWWHERE command, and updated the SCOPE description accordingly.
• Replaced the drive cylinder parameters with a single cylid parameter. Data locations are now specified using this single parameter.

Filer
The Filer chapter has been moved to the Support Utilities book.

Gateway Control
• Added a description of the new -n option.
• Removed documentation for the discontinued -b option. Deprecated logons are no longer allowed, even optionally.

Gateway Global
• Updated the description and examples of the DISPLAY SESSION command.

Teradata Database 12.0, September 2007

CheckTable
• Added new information for the new COMPRESSCHECK option of the CHECK command.

ctl
• Added information on new RSS options and added a new screen shot.
• Added documentation for the new LogOnly option of the Cylinder Read setting.

cufconfig
• Added the -j option and the following new fields to the UDF GDO: JavaLibraryPath, JREPath, JavaLogPath, JavaEnvFile, JavaServerTasks, JavaVersion, and JavaBaseDebugPort. These changes support Java external stored procedures.
• Added the following new fields to the UDF GDO: UDFLibPath, UDFIncPath, UDFEnvFile, CLILibPath, CLIIncPath, and CLIEnvFile. These changes support SQL invocation via external stored procedures.
• Updated default values for JREPath.
• Changed JAVA_HOME to TD_JAVA_HOME.
DBS Control
• Removed references to the obsolete SHAPasswordEncryption field.
• Added documentation for the DisablePeekUsing field.
• Added documentation for the NewHashBucketSize and CurHashBucketSize fields.
• Enhanced the explanation of PPICacheThrP.
• Changed documentation for the DateForm, System TimeZone Hour, and System TimeZone Minute fields to indicate that changes to these settings affect new sessions begun after the DBS Control Record has been written. Existing sessions are not affected. A database restart is not required.
• Added four new RSS tables to the system tables section.
• Added documentation for the IVMaxWorkloadCache field, and updated documentation for the IAMaxWorkloadCache field.
• Updated documentation for the DBQL Options field. This field is obsolete in Teradata Database 12.0, but is reserved for future DBQL use.
• Added information on new support for Windows-compatible session character sets to the tables under the Export Width Table ID section.
• Added information for the new MaxLoadAWT field. Updated documentation for the MaxLoadTasks field describing how it relates to MaxLoadAWT.
• Removed references to DBC.HW_Event_Log.
• Updated the ReadLockOnly description.
• Updated the RollbackPriority field to indicate the recommended setting is now TRUE.
• Updated information for the RedistBufSize field. It is now used mainly for load utilities.
• Updated information on the FreeSpacePercent field.
• Added documentation for the MonSesCPUNormalization field.

DIP
• Added information on changes to DIPDEM for Query Banding.
• New scripts DIPSQLJ and DIPPWD.
• Updated information on cost profiles provided by the DIPOCES script.

DUL/DULTAPE
• Removed documentation of the “dbs64” option. This option is no longer necessary because DUL now detects the type of database automatically.
• Updated the chapter to remove old references to V1, except where they were literals.
• Reorganized the chapter and headings for clarity.
Ferret
• Updated the description of the INQUIRE option for SCANDISK.
• Updated PACKDISK information to indicate that occurrences of mini cylinder packs can be monitored by checking the software event log on all platforms.
• Added information about the new CR option for SCANDISK, which allows cylinder reads when running SCANDISK. The new default is not to use cylinder reads.
• Added information about the display options /S, /M, and /L for SCANDISK.
• Valid values for the PRIORITY command are 0, 1, 2, and 3, corresponding to priorities low, medium, high, and rush.
• Added information about flags available with the ENABLE and DISABLE commands.
• Enhanced the description of the DEFRAGMENT command.

Filer
Updated the syntax diagram and description for the WFLUSH command.

Gateway Control (gtwcontrol)
• The -F and -b options are deprecated, and should not be used.
• Added information on new -o and -z options that allow users to define and apply custom default values for gtwcontrol settings.
Additional Information
http://www.info.teradata.com/
Use the Teradata Information Products Publishing Library site to:
• View or download a manual:
  1 Under Online Publications, select General Search.
  2 Enter your search criteria and click Search.
• Download a documentation CD-ROM:
  1 Under Online Publications, select General Search.
  2 In the Title or Keyword field, enter CD-ROM, and click Search.
• Order printed manuals:
  Under Print & CD Publications, select How to Order.

http://www.teradata.com
The Teradata home page provides links to numerous sources of information about Teradata. Links include:
• Executive reports, case studies of customer experiences with Teradata, and thought leadership
• Technical information, solutions, and expert advice
• Press releases, mentions and media resources

http://www.teradata.com/t/TEN/
Teradata Customer Education designs, develops and delivers education that builds skills and capabilities for our customers, enabling them to maximize their Teradata investment.
To maintain the quality of our products and services, we would like your comments on the
accuracy, clarity, organization, and value of this document. Please e-mail: [email protected]
References to Microsoft Windows and Linux
This book refers to “Microsoft Windows” and “Linux.” For Teradata Database 13.0, these
references mean:
• “Windows” is Microsoft Windows Server 2003 64-bit.
• “Linux” is SUSE Linux Enterprise Server 9 and SUSE Linux Enterprise Server 10.
Table of Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Supported Software Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
Changes to This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
Additional Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8
References to Microsoft Windows and Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
Chapter 1: Teradata Database Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Alphabetical Listing of Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
User Logon for Administrator Utilities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Chapter 2: Abort Host (aborthost) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Aborting Teradata Database Transactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Chapter 3: AMP Load (ampload) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Chapter 4: AWT Monitor (awtmon) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Chapter 5: CheckTable (checktable) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Data Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Referential Integrity Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Recommendations for Running CheckTable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Using Function Keys and Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Viewing CheckTable Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45
Determining the Status of a Table Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .47
Stopping Table Checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48
CHECK Command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .50
CHECKTABLEB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .55
CheckTable Check Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .56
Level-Pendingop Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .58
Level-One Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .58
Level-Two Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .62
Level-Three Checking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .75
CheckTable General Checks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .81
CHECK Command Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .83
CheckTable and Deadlocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .91
CheckTable Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .92
Error Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .94
Using Valid Characters in Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96
Using Wildcard Characters in Names. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .97
Chapter 6: CNS Run (cnsrun) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103
Chapter 7: Configuration Utility (config) . . . . . . . . . . . . . . . . . . . . . . . .107
About the Configuration Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .108
Configuration Utility Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .111
ADD AMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .114
ADD HOST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .116
ADD PE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .117
BEGIN CONFIG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .119
DEFAULT CLUSTER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .120
DEL AMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .123
DEL HOST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .125
DEL PE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .126
END CONFIG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .128
LIST. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .129
LIST AMP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .131
LIST CLUSTER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
LIST HOST. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
LIST PE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
MOD AMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
MOD HOST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
MOD PE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
MOVE AMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
MOVE PE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
SHOW CLUSTER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
SHOW HOST. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
SHOW VPROC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
STOP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Configuration Utility Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Chapter 8: Control GDO Editor (ctl) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Ctl Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
EXIT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
HARDWARE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
HELP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
PRINT group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
PRINT variable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
QUIT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
READ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
SCREEN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
SCREEN DBS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
SCREEN DEBUG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
SCREEN RSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
SCREEN VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
variable = setting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
WRITE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Chapter 9: Cufconfig Utility (cufconfig) . . . . . . . . . . . . . . . . . . . . . . . . . 181
Cufconfig Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
CLIEnvFile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
CLIIncPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .186
CLILibPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .187
CompilerPath. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .188
CompilerTempDirectory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .189
GLOPLockTimeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .190
GLOPLockWait . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .191
GLOPMemMapPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .192
JavaBaseDebugPort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .193
JavaEnvFile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .195
JavaHybridThreads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .196
JavaLibraryPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .197
JavaLogPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .198
JavaServerTasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .199
JavaVersion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .200
JREPath. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .201
JSVServerMemPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .202
LinkerPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .203
MallocLimit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .204
MaximumCompilations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .205
MaximumGLOPMem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .206
MaximumGLOPPages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .207
MaximumGLOPSize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .208
ModTime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .209
ParallelUserServerAMPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .210
ParallelUserServerPEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
SecureGroupMembership. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .212
SecureServerAMPs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .213
SecureServerPEs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .214
SourceDirectoryPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .215
SWDistNodeID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .216
TDSPLibBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .217
UDFEnvFile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .218
UDFIncPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .219
UDFLibPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .220
UDFLibraryPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .221
UDFServerMemPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .222
UDFServerTasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .223
Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .224
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Chapter 10: Database Initialization Program (DIP) . . . . . . . . . 231
DIP Scripts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Chapter 11: Database Window (xdbw) . . . . . . . . . . . . . . . . . . . . . . . . . 241
Running DBW. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
DBW Main Window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Granting CNS Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Repeating Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Saving Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
DBW Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
ABORT SESSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
CNSGET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
CNSSET LINES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
CNSSET STATEPOLL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
CNSSET TIMEOUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
DISABLE LOGONS/ DISABLE ALL LOGONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
ENABLE DBC LOGONS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
ENABLE LOGONS/ ENABLE ALL LOGONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
GET ACTIVELOGTABLE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
GET CONFIG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
GET EXTAUTH. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
GET LOGTABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
GET PERMISSIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
GET RESOURCE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
GET SUMLOGTABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
GET TIME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
GET VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
GRANT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
LOG. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
QUERY STATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
RESTART TPA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .277
REVOKE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .278
SET ACTIVELOGTABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .280
SET EXTAUTH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .282
SET LOGTABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .283
SET RESOURCE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .285
SET SESSION COLLECTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .288
SET SUMLOGTABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .289
START . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .291
STOP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .293
Chapter 12: DBS Control (dbscontrol) . . . . . . . . . . . . . . . . . . . . . . . . . . .295
DBS Control Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .296
DISPLAY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .297
HELP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .300
MODIFY. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .302
QUIT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .304
WRITE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .305
DBS Control Fields (Settings) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .306
AccessLockForUncomRead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .312
Bkgrnd Age Cycle Interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .313
Century Break . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .314
CheckTable Table Lock Retry Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .316
Client Reset Timeout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .317
CostProfileId . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .318
CurHashBucketSize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .321
Cylinders Saved for PERM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .322
DateForm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .323
DBQLFlushRate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .324
DBQL Log Last Resp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .326
DBQL Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .328
DBSCacheCtrl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .329
DBSCacheThr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .330
DeadlockTimeOut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .334
DefaultCaseSpec . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .336
DefragLowCylProd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .337
DictionaryCacheSize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
DisableUDTImplCastForSysFuncOp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
DisablePeekUsing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
DisableSyncScan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
DisableWAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
DisableWALforDBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
EnableCostProfileTLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
EnableSetCostProfile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Export Width Table ID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
ExternalAuthentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Free Cylinder Cache Size. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
FreeSpacePercent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
HashFuncDBC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
HTMemAlloc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
IAMaxWorkloadCache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
IdCol Batch Size. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
IVMaxWorkloadCache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
LargeDepotCylsPerPdisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
LockLogger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
LockLogger Delay Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
LockLogger Delay Filter Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
LockLogSegmentSize. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
MaxDecimal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
MaxDownRegions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
MaxJoinTables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
MaxLoadAWT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
MaxLoadTasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
MaxParseTreeSegs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
MaxRequestsSaved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
MaxRowHashBlocksPercent. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
MaxSyncWALWrites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
MDS Is Enabled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
Memory Limit Per Transaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
MiniCylPackLowCylProd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
MonSesCPUNormalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
MPS_IncludePEOnlyNodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
NewHashBucketSize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
ObjectUseCountCollectRate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
PermDBAllocUnit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .403
PermDBSize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .405
PPICacheThrP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .407
PrimaryIndexDefault. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .411
ReadAhead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .413
Read Ahead Count. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .415
ReadLockOnly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .416
RedistBufSize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .417
RepCacheSegSize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .419
RevertJoinPlanning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .420
RollbackPriority. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .421
RollbackRSTransaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .423
RollForwardLock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .424
RoundHalfwayMagUp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .425
RSDeadLockInterval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .426
SessionMode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .427
SkewAllowance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .428
SmallDepotCylsPerPdisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .429
Spill File Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .431
StandAloneReadAheadCount. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .432
StepsSegmentSize. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .433
SyncScanCacheThr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .434
SysInit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .436
System TimeZone Hour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .437
System TimeZone Minute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .438
Target Level Emulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .439
TempLargePageSize. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .440
Temporary Storage Page Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .441
UseVirtualSysDefault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .442
UtilityReadAheadCount . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .443
Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .444
WAL Buffers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .445
WAL Checkpoint Interval. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .446
Checksum Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .446
System Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .449
System Journal Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .450
System Logging Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .451
User Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .452
Permanent Journal Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
Temporary Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
Chapter 13: Dump Unload/Load Utility
(dul, dultape) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
What DUL Does . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
What DULTAPE Does . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
Starting and Running DUL/DULTAPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Saving Dumps to Removable Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Mailing Crash Dumps to the Teradata Support Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Transferring Windows Dump Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Restarting DUL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Return Codes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
DUL Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
ABORT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
DATABASE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
DROP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
END . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
HELP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
LOAD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
LOGOFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
LOGON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
.OS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
QUIT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
SEE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
SELECT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
SHOW TAPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
SHOW VERSIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
UNLOAD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
Chapter 14: Ferret Utility (ferret) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
Redirecting Input and Output (INPUT and OUTPUT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
The Teradata Database File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
About Write Ahead Logging (WAL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
Ferret Command Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
Using Ferret Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .498
Multitoken Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .498
Numeric Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .499
Classes of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .503
Vproc Numbers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .504
Ferret Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .505
Defining Command Parameters Using SCOPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .505
Summary of Ferret Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .506
Ferret Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .507
DATE/TIME. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .508
DEFRAGMENT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .509
DISABLE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .512
ENABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .513
ERRORS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .514
HELP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .516
INPUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .517
OUTPUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .518
PACKDISK. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .520
PRIORITY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .525
QUIT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .526
RADIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .527
SCANDISK. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .528
SCOPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .539
SHOWBLOCKS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .550
SHOWDEFAULTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .553
SHOWFSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .554
SHOWSPACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .561
SHOWWHERE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .563
TABLEID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .567
UPDATE DATA INTEGRITY FOR. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .569
Chapter 15: Gateway Control (gtwcontrol) . . . . . . . . . . . . . . . . . . . . .571
Gateway Host Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .572
Gateway Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .574
Gateway Control Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .575
Changing Maximum Sessions Per Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .581
Chapter 16: Gateway Global (gtwglobal, xgtwglobal). . . . . . . 583
Gateway Global Commands Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
User Names and Hexadecimal Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
Specifying a Host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
Displaying Network and Session Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
Administering Users and Sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
Performing Special Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
Logging Sessions Off Using KILL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
Getting Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
DISABLE LOGONS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
DISABLE TRACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
DISCONNECT SESSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
DISCONNECT USER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
DISPLAY DISCONNECT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
DISPLAY FORCE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
DISPLAY GTW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
DISPLAY NETWORK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
DISPLAY SESSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
DISPLAY STATS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
DISPLAY TIMEOUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
DISPLAY USER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
ENABLE LOGONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
ENABLE TRACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
FLUSH TRACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
HELP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
KILL SESSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
KILL USER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
SELECT HOST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
SET TIMEOUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
Gateway Global Graphical User Interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
Appendix A: How to Read Syntax Diagrams . . . . . . . . . . . . . . . . . . . 623
Syntax Diagram Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .623
Notation Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .623
Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .623
Required Entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .624
Optional Entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .624
Strings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .625
Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .625
Loops. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .625
Excerpts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .626
Multiple Legitimate Phrases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .627
Sample Syntax Diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .627
Diagram Identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .628
Appendix B: Starting the Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .629
Interfaces for Starting the Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .629
MP-RAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .629
Starting Utilities from Database Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .630
Starting Utilities from the Command Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .632
Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .632
Starting Utilities from Database Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .632
Starting Utilities from the Teradata Command Prompt . . . . . . . . . . . . . . . . . . . . . . . . . .634
Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .634
Starting Utilities from Database Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .634
Starting Utilities from the Command Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .637
z/OS and z/VM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .637
Starting Utilities from HUTCNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .637
Appendix C: Session States, Events, and Actions . . . . . . . . . . . . .639
Session State Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .639
MP-RAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .639
Windows and Linux. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .641
Session Event Definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .644
MP-RAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .644
Windows and Linux. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .646
Session Action Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .651
MP-RAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .651
Windows and Linux. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .653
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
CHAPTER 1
Teradata Database Utilities
This chapter lists the Teradata Database utilities that are documented in Utilities Volume 1,
Utilities Volume 2, and Support Utilities.
Alphabetical Listing of Utilities
• Abort Host (aborthost): Aborts all outstanding transactions running on a failed host, until the system restarts the host.
• AMP Load (ampload): Displays the load on all AMP vprocs in a system, including the number of AMP worker tasks (AWTs) available to each AMP vproc, and the number of messages waiting (message queue length) on each AMP vproc.
• AWT Monitor (awtmon): Collects and displays a user-friendly summary of the AMP Worker Task (AWT) in-use count snapshots for the local node or all nodes in Teradata Database.
• CheckTable (checktable): Checks for inconsistencies between internal data structures such as table headers, row identifiers, and secondary indexes.
• CNS Run (cnsrun): Allows running of database utilities from scripts.
• Configuration Utility (config): Defines AMPs, PEs, and hosts, and describes their interrelationships for Teradata Database.
• Control GDO Editor (ctl): Displays the fields of the PDE Control Parameters GDO, and allows modification of the settings. Note: This utility runs only under Microsoft Windows and Linux. It is analogous to the Xctl Utility (xctl) on MP-RAS systems.
• Cufconfig Utility (cufconfig): Displays configuration settings for the user-defined function and external stored procedure subsystem, and allows these settings to be modified.
• Database Initialization Program (DIP): Executes one or more of the standard DIP scripts packaged with Teradata Database. These scripts create a variety of database objects that can extend the functionality of Teradata Database with additional, optional features.
• DBS Control (dbscontrol): Displays the DBS Control Record fields, and allows these settings to be modified.
• Dump Unload/Load (DUL, DULTAPE): Saves system dump tables to tape, and restores system dump tables from tape.
• Ferret Utility (ferret): Defines the scope of an action, such as a range of tables or selected vprocs, displays the parameters and scope of the action, and performs the action, either moving data to reconfigure data blocks and cylinders, or displaying disk space and cylinder free space percent in use of the defined scope.
• Filer Utility (filer): Finds and corrects problems within the file system. Note: Filer is documented in Support Utilities.
• Gateway Control (gtwcontrol): Modifies default values in the fields of the Gateway Control Globally Distributed Object (GDO).
• Gateway Global (gtwglobal): Monitors and controls the Teradata Database LAN-connected users and their sessions.
• Lock Display (lokdisp): Displays a snapshot capture of all real-time database locks and their associated currently-running sessions.
• Locking Logger (dumplocklog): Logs transaction identifiers, session identifiers, lock object identifiers, and the lock levels associated with currently-executing SQL statements.
• Modify MPP List (modmpplist): Allows modification of the node list file (mpplist).
• Priority Scheduler (schmon, xschmon): Creates, modifies, and monitors Teradata Database process prioritization parameters. All processes have an assigned priority based on their Teradata Database session. This priority is used by Priority Scheduler to allocate CPU and I/O resources.
• Query Configuration (qryconfig): Reports the current Teradata Database configuration, including the Node, AMP, and PE identification and status.
• Query Session (qrysessn): Monitors the state of selected Teradata Database sessions on selected logical host IDs.
• Reconfiguration Utility (reconfig): Uses the component definitions created by the Configuration Utility to establish an operational Teradata Database.
• Reconfiguration Estimator (reconfig_estimator): Estimates the elapsed time for reconfiguration based upon the number and size of tables on the current system, and provides time estimates for the redistribution, deletion, and NUSI building phases.
• Recovery Manager (rcvmanager): Displays information used to monitor progress of a Teradata Database recovery.
• Resource Check Tools (dbschk, nodecheck, syscheck): Identifies slowdowns and potential hangs of Teradata Database, and displays system statistics that help identify the cause of the problem.
• RSS Monitor (rssmon): Displays real-time resource usage of Parallel Database Extensions (PDE) on a per-node basis. Selects relevant data fields from specific Resource Sampling Subsystem (RSS) tables to be examined for PDE resource usage monitoring purposes. Note: This utility runs only under MP-RAS.
• Show Locks (showlocks): Displays locks placed by Archive and Recovery and Table Rebuild operations on databases and tables. For details on Archive and Recovery, see Teradata Archive/Recovery Utility Reference. For details on Table Rebuild, see Utilities Volume 2.
• System Initializer (sysinit): Initializes Teradata Database. Creates or updates the DBS Control Record and other Globally Distributed Objects (GDOs), initializes or updates configuration maps, and sets hash function values in the DBS Control Record.
• Table Rebuild (rebuild): Rebuilds tables that Teradata Database cannot automatically recover, including the primary or fallback portions of tables, entire tables, all tables in a database, or all tables in an Access Module Processor (AMP). Table Rebuild can be run interactively or as a background task.
• Tdlocaledef Utility (tdlocaledef): Converts a Specification for Data Formatting (SDF) file into an internal, binary format (a GDO) for use by Teradata Database. The SDF file is a text file that defines how Teradata Database formats numeric, date, time, and currency output.
• TDN Statistics (tdnstat): Performs GetStat/ResetStat operations and displays or clears Teradata Database Network Services statistics.
• TDN Tuner (tdntune): Displays and allows modification of Teradata Network Services tunable parameters. The utility provides a user interface to view, get, or update these tunable parameters.
• Tpareset (tpareset): Resets the PDE and database components of Teradata Database.
• Two-Phase Commit Console (tpccons): Performs the following two-phase commit (2PC) related functions: displays a list of coordinators that have in-doubt transactions, displays a list of sessions that have in-doubt transactions, and resolves in-doubt transactions.
• Task List (tsklist): Displays information about PDE processes and their tasks. Note: This utility runs only under Microsoft Windows and Linux.
• TVAM (tvam): Used to administer Teradata Virtual Storage. Note: TVAM is documented in Support Utilities.
• Update DBC (updatedbc): Recalculates the PermSpace and SpoolSpace values in the DBASE table for the user DBC, and the MaxPermSpace and MaxSpoolSpace values of the DATABASESPACE table for all databases based on the values in the DBASE table.
• Update Space (updatespace): Recalculates the permanent, temporary, or spool space used by a single database or by all databases in a system.
• Verify _pdisks (verify_pdisks): Verifies that the pdisks on Teradata Database are accessible and are mapped correctly.
• Vproc Manager (vprocmanager): Manages the virtual processors (vprocs). For example, obtains status of specified vprocs, initializes vprocs, forces a vproc to restart, and forces a Teradata Database restart.
• Xctl Utility (xctl): Displays the fields of the PDE Control Parameters GDO, and allows modification of the settings. Note: This utility runs only under MP-RAS. It is analogous to the Control GDO Editor (ctl) utility on Windows and Linux systems.
• Xperfstate Utility (xperfstate): Displays real-time performance data for the PDE system, including system-wide CPU utilization, system-wide disk utilization, and more. Note: This utility runs only under MP-RAS.
• Xpsh Utility (xpsh): A GUI front-end that allows performance of various MPP system-level tasks, such as debugging, analysis, and monitoring. Note: This utility runs only under MP-RAS.
User Logon for Administrator Utilities
Access to administrator utilities usually is limited to rights specifically granted in the Teradata
Database. Users of these utilities should log on using their Teradata Database user names.
Externally authenticated users (such as those with directory or Kerberos user names) might
not have access to administrator utilities.
By convention, the Utilities books use the standard TD 2 logon format (Teradata
authentication) when discussing logons. For more information about logon formats, external
authentication, and the rights of externally authenticated users, see Security Administration.
For More Information
For information on starting the utilities, see the appendix titled “Starting the Utilities.”
For information on utilities related to Teradata Database security, see Security Administration.
For information on Archive and Recovery, FastExport, FastLoad, MultiLoad, and TPump, see the following client utility books:
• Teradata Archive/Recovery Utility Reference
• Teradata FastExport Reference
• Teradata FastLoad Reference
• Teradata MultiLoad Reference
• Teradata Parallel Data Pump Reference
CHAPTER 2
Abort Host (aborthost)
The Abort Host utility, aborthost, allows you to cancel all outstanding transactions running
on a host that is no longer operating.
Audience
Users of Abort Host include the following:
• Teradata Database operators
• Teradata Database system programmers
• Teradata Database system administrators
Prerequisites
You should be familiar with Teradata Database client (host software), particularly the Teradata
Director Program (TDP).
User Interfaces
aborthost runs on the following platforms and interfaces:
Platform     Interfaces
MP-RAS       Database Window
Windows      Database Window
Linux        Database Window
For general information on starting the utilities from different platforms and interfaces, see
Appendix B: “Starting the Utilities.”
Aborting Teradata Database Transactions
To abort all Teradata Database transactions for a particular host, do the following:
✔ Type the following and press Enter:
ABORT HOST nnn
where nnn is the host number.
If you attempt a logon or a start request to a host that has been aborted in this manner, the
system displays this error message:
Host quiesced by operator
After running Abort Host, the only way to re-establish communication with the Teradata
Database is to restart the appropriate TDP.
CHAPTER 3
AMP Load (ampload)
The Amp Load utility, ampload, displays the AMP vproc usage of AMP Worker Tasks (AWTs)
on Teradata Database, representing the load on each AMP vproc. The utility also displays the
number of messages queued for each AMP vproc, which can demonstrate congestion on the
system.
Audience
Users of ampload include the following:
• Field engineers
• Teradata Database system administrators
• Teradata Database developers
• Teradata Support Center
User Interfaces
ampload runs on the following platforms and interfaces:
Platform     Interfaces
MP-RAS       Command line, Database Window
Windows      Command line (“Teradata Command Prompt”), Database Window
Linux        Command line, Database Window
For general information on starting the utilities from different platforms and interfaces, see
Appendix B: “Starting the Utilities.”
Usage Notes
The following table describes the components of the ampload display.
Component                 Description
Vproc Number              Uniquely identifies the vproc across the entire system.
Rel. Vproc#               The number of the vproc relative to the node upon which the vproc resides.
Node ID                   Identifies the node cabinet number and module number upon which the vproc resides.
Msg Count                 The number of messages waiting (message queue length) on the AMP vproc.
AWT Availability Count    The number of AMP worker tasks (AWTs) available to the AMP vproc.
Examples
From the Database Window
After you start ampload from the Database Window, the following appears:
Vproc   Rel.     Node     Msg       AWT Availability
No.     Vproc#   ID       Count     Count
-----   ------   ------   -------   ----------------
4       1        1-05     0         80
0       1        1-04     0         79
1       2        1-04     1         80
5       2        1-05     0         80
6       3        1-05     0         80
2       3        1-04     1         80
3       4        1-04     1         80
7       4        1-05     0         80
From the Command Line or Teradata Command Prompt
After you start ampload from the Command Line or Teradata Command Prompt, the
following appears:
Vproc   Rel.     Node     Msg       AWT Availability
No.     Vproc#   ID       Count     Count
-----   ------   ------   -------   ----------------
4       1        1-05     0         80
5       2        1-05     0         80
6       3        1-05     0         80
7       4        1-05     0         80
CHAPTER 4
AWT Monitor (awtmon)
The AWT Monitor utility, awtmon, collects and displays a user-friendly summary of the AMP
Worker Task (AWT) in-use count snapshots for the local node or all nodes in the Teradata
Database system. This information is useful for debugging purposes, including hot-AMP
conditions.
Audience
Users of awtmon include the following:
• Teradata Support Center
• Teradata Database administrators
• Teradata Database programmers
User Interfaces
awtmon runs on the following platforms and interfaces:
Platform     Interfaces
MP-RAS       Command line
Windows      Command line (“Teradata Command Prompt”)
Linux        Command line
For general information on starting the utilities from different platforms and interfaces, see
Appendix B: “Starting the Utilities.”
Syntax
awtmon [-h] [-d] [-s] [-S amp_cnt] [-t threshold] [t [n]]
where:
Syntax element …   Specifies …
-h                 the awtmon online help.
-d                 debug mode.
-s                 to display system-wide AWT in-use count snapshots.
-S amp_cnt         to display AWT in-use count snapshots in summary format if the total number of AMPs to be printed is greater than or equal to amp_cnt. By default, the -S option is enabled with an amp_cnt of 24. For a system-wide display (the -s option), amp_cnt is the total AMP count from all TPA nodes.
-t threshold       to display the AMP number and AWT in-use count snapshots only if the AWT in-use count is greater than or equal to threshold. The default is 1.
t n                to display AWT in-use count snapshots in a loop. awtmon samples the AWT in-use counters for n iterations at t-second intervals. The default value for n is 2.
Usage Notes
By default, awtmon displays the AWT in-use count snapshots on the local node. When you use
the -s option, awtmon gathers and displays system-wide AWT in-use count snapshots in
summary output mode on a large system to reduce overall output size. Teradata recommends
using the -s and -t options together on a large system to reduce the amount of output; this
allows you to locate any hot AMPs or nodes in the system. Once the hot AMPs or nodes are
identified, you can then run awtmon on them.
Example 1
To display all AWT in-use count snapshots on a local node, type the following:
awtmon
The following displays:
Amp 1  : Inuse: 62: NEW: 50 ONE: 12
Amp 4  : Inuse: 62: NEW: 50 ONE: 12
Amp 7  : Inuse: 62: NEW: 50 ONE: 12
Amp 10 : Inuse: 62: NEW: 50 ONE: 12
Amp 13 : Inuse: 62: NEW: 50 ONE: 12
Amp 16 : Inuse: 62: NEW: 50 ONE: 12
Amp 19 : Inuse: 62: NEW: 50 ONE: 12
Amp 22 : Inuse: 62: NEW: 50 ONE: 12
Amp 25 : Inuse: 62: NEW: 50 ONE: 12
Amp 28 : Inuse: 62: NEW: 50 ONE: 12
Amp 31 : Inuse: 62: NEW: 50 ONE: 12
Amp 34 : Inuse: 62: NEW: 50 ONE: 12
Amp 37 : Inuse: 62: NEW: 50 ONE: 12
Amp 40 : Inuse: 62: NEW: 50 ONE: 12
Amp 43 : Inuse: 62: NEW: 50 ONE: 12
Amp 46 : Inuse: 62: NEW: 50 ONE: 12
Example 2
To display all AWT in-use count snapshots on a local node with a three-loop count in a two-second sleep interval, type the following:
awtmon 2 3
The following displays:
====> Mon Jan 24 11:36:45 2005 <=====
LOOP_0: Amp 2000: Inuse: 31: NEW: 17 ONE: 14
LOOP_0: Amp 2007: Inuse: 32: NEW: 17 ONE: 15
LOOP_0: Amp 2008: Inuse: 31: NEW: 17 ONE: 14
LOOP_0: Amp 2015: Inuse: 34: NEW: 19 ONE: 15
LOOP_0: Amp 2016: Inuse: 32: NEW: 18 ONE: 14
LOOP_0: Amp 2023: Inuse: 33: NEW: 19 ONE: 14
LOOP_0: Amp 2024: Inuse: 31: NEW: 17 ONE: 14
LOOP_0: Amp 2031: Inuse: 31: NEW: 17 ONE: 14
LOOP_0: Amp 2032: Inuse: 31: NEW: 17 ONE: 14
LOOP_0: Amp 2039: Inuse: 32: NEW: 17 ONE: 15
====> Mon Jan 24 11:36:47 2005 <=====
LOOP_1: Amp 2000: Inuse: 36: NEW: 17 ONE: 19
LOOP_1: Amp 2007: Inuse: 36: NEW: 16 ONE: 20
LOOP_1: Amp 2008: Inuse: 36: NEW: 17 ONE: 19
LOOP_1: Amp 2015: Inuse: 36: NEW: 16 ONE: 20
LOOP_1: Amp 2016: Inuse: 35: NEW: 16 ONE: 19
LOOP_1: Amp 2023: Inuse: 36: NEW: 17 ONE: 19
LOOP_1: Amp 2024: Inuse: 35: NEW: 16 ONE: 19
LOOP_1: Amp 2031: Inuse: 35: NEW: 16 ONE: 19
LOOP_1: Amp 2032: Inuse: 35: NEW: 16 ONE: 19
LOOP_1: Amp 2039: Inuse: 36: NEW: 16 ONE: 20
====> Mon Jan 24 11:36:49 2005 <====
LOOP_2: Amp 2000: Inuse: 35: NEW: 16 ONE: 19
LOOP_2: Amp 2007: Inuse: 37: NEW: 17 ONE: 20
LOOP_2: Amp 2008: Inuse: 36: NEW: 17 ONE: 19
LOOP_2: Amp 2015: Inuse: 38: NEW: 17 ONE: 21
LOOP_2: Amp 2016: Inuse: 37: NEW: 17 ONE: 20
LOOP_2: Amp 2023: Inuse: 37: NEW: 17 ONE: 20
LOOP_2: Amp 2024: Inuse: 37: NEW: 17 ONE: 20
LOOP_2: Amp 2031: Inuse: 38: NEW: 18 ONE: 20
LOOP_2: Amp 2032: Inuse: 37: NEW: 17 ONE: 20
LOOP_2: Amp 2039: Inuse: 38: NEW: 17 ONE: 21
Example 3
To display AWT in-use count snapshots greater than or equal to 60 on a local node with a
three-loop count in a two-second interval, type the following:
awtmon -t 60 2 3
The following displays:
====> Tue Dec 16 08:55:49 2003 <====
LOOP_0: Amp 17 : Inuse: 62: NEW: 49 ONE: 13
LOOP_0: Amp 24 : Inuse: 62: NEW: 47 ONE: 15
LOOP_0: Amp 29 : Inuse: 60: NEW: 50 ONE: 10
LOOP_0: Amp 30 : Inuse: 60: NEW: 50 ONE: 10
LOOP_0: Amp 42 : Inuse: 62: NEW: 48 ONE: 14
====> Tue Dec 16 08:55:52 2003 <====
LOOP_1: Amp 0  : Inuse: 60: FOUR: 1 NEW: 50 ONE: 9
LOOP_1: Amp 6  : Inuse: 60: NEW: 50 ONE: 10
LOOP_1: Amp 17 : Inuse: 62: NEW: 48 ONE: 14
LOOP_1: Amp 24 : Inuse: 62: NEW: 46 ONE: 16
LOOP_1: Amp 29 : Inuse: 62: NEW: 50 ONE: 12
LOOP_1: Amp 30 : Inuse: 62: NEW: 50 ONE: 12
LOOP_1: Amp 35 : Inuse: 60: NEW: 50 ONE: 10
LOOP_1: Amp 36 : Inuse: 60: NEW: 50 ONE: 10
LOOP_1: Amp 42 : Inuse: 62: NEW: 48 ONE: 14
====> Tue Dec 16 08:55:54 2003 <====
LOOP_2: Amp 0  : Inuse: 60: FOUR: 1 NEW: 50 ONE: 9
LOOP_2: Amp 6  : Inuse: 60: NEW: 50 ONE: 10
LOOP_2: Amp 17 : Inuse: 62: NEW: 48 ONE: 14
LOOP_2: Amp 24 : Inuse: 62: NEW: 46 ONE: 16
LOOP_2: Amp 29 : Inuse: 62: NEW: 50 ONE: 12
LOOP_2: Amp 30 : Inuse: 62: NEW: 50 ONE: 12
LOOP_2: Amp 35 : Inuse: 60: NEW: 50 ONE: 10
LOOP_2: Amp 36 : Inuse: 60: NEW: 50 ONE: 10
LOOP_2: Amp 42 : Inuse: 62: NEW: 47 ONE: 15
Note: In-use count snapshots less than 60 are skipped.
Example 4
To display AWT in-use count snapshots greater than or equal to 50 in a summary format with
a three-loop count in a two-second sleep interval, type the following:
awtmon -S 8 -t 50 2 3
The following displays:
====> Tue Dec 16 08:53:44 2003 <====
LOOP_0: Inuse: 55 : Amps: 5,6,11,12,17,18,23,29,30,35,36,41,42
LOOP_0: Inuse: 56 : Amps: 0,47
LOOP_0: Inuse: 57 : Amps: 24
====> Tue Dec 16 08:53:47 2003 <====
LOOP_1: Inuse: 54 : Amps: 5,6,11,12,17,18,23,29,30,35,36,41,42
LOOP_1: Inuse: 55 : Amps: 0,47
LOOP_1: Inuse: 56 : Amps: 24
====> Tue Dec 16 08:53:49 2003 <====
LOOP_2: Inuse: 54 : Amps: 5,11,12,17,18,23,29,30,35,36,41,42
LOOP_2: Inuse: 55 : Amps: 0,6,47
LOOP_2: Inuse: 56 : Amps: 24
Example 5
To display system-wide AWT in-use count snapshots greater than or equal to 50 in a summary
format, type the following:
awtmon -s -t 50
The following displays:
====> Tue Dec 16 08:58:07 2003 <====
byn001-4: LOOP_0: Inuse: 57 : Amps: 17,42
byn001-4: LOOP_0: Inuse: 58 : Amps: 5,11,12,35,41
byn001-4: LOOP_0: Inuse: 59 : Amps: 30
byn001-4: LOOP_0: Inuse: 60 : Amps: 23,36
byn001-4: LOOP_0: Inuse: 62 : Amps: 0
byn001-4: LOOP_0: Inuse: 63 : Amps: 6,18,24,29,47
byn001-5: LOOP_0: Inuse: 52 : Amps: 16,22
byn001-5: LOOP_0: Inuse: 53 : Amps: 28
byn001-5: LOOP_0: Inuse: 55 : Amps: 7,34,46
byn001-5: LOOP_0: Inuse: 56 : Amps: 10,13,19,43
byn001-5: LOOP_0: Inuse: 57 : Amps: 1,25,31,37,40
byn001-5: LOOP_0: Inuse: 62 : Amps: 4
byn002-5: LOOP_0: Inuse: 52 : Amps: 2,3,14,15,21,27,32,33,38,45
byn002-5: LOOP_0: Inuse: 53 : Amps: 9,20,39,44
CHAPTER 5
CheckTable (checktable)
The CheckTable utility is a diagnostic tool that finds inconsistencies and data corruption in
internal data structures, such as table headers, row identifiers, and secondary indexes.
Teradata recommends running CheckTable on a regular schedule to detect problems that may
occur. To repair problems identified by CheckTable, contact the Teradata Support Center, as
required.
Audience
Users of CheckTable include the following:
• Teradata Database system administrators
• Teradata Database system programmers
• Teradata Database operators
User Interfaces
CheckTable runs on the following platforms and interfaces.
Platform     Interfaces
MP-RAS       Database Window
Windows      Database Window
Linux        Database Window
For general information on starting the utilities from different platforms and interfaces, see
Appendix B: “Starting the Utilities.”
Usage Notes
To start CheckTable, the following conditions are required:
• The system must be online.
• No more than one AMP per cluster can be down.
• Logons can be enabled or disabled.
Start CheckTable from the Supervisor window of Database Window (DBW). Use the DBW
start command. For more information on the options available with the start command, see
Chapter 11: “Database Window (xdbw).”
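For example, on a typical configuration you would type the following in the Supervisor window and press Enter (this is a minimal sketch; see Appendix B for platform-specific variations):

start checktable

CheckTable then runs in an application window, where you can enter the CHECK command described later in this chapter.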
Data Checking
CheckTable provides various levels of data checking. Higher-level checks are generally more
thorough than lower-level checks, but require more system resources and time. Certain
general table checks are performed at all levels. After CheckTable runs, it reports the total
number of tables checked and bypassed.
CheckTable only checks the table headers of base global temporary tables and their instances;
data rows of instances are not checked during level-one checking. For detailed information on
level-one checking, see “Level-One Checking” on page 58.
Stored procedures, UDFs, and UDMs store their text and object code internally as special
tables that are not normally accessible to users. Join index rows and hash index rows are also
stored internally as tables. CheckTable checks all of these tables, in addition to checking
ordinary data tables.
Note: Some checks are not done for unhashed tables, and certain join indexes and hash
indexes. For additional information on check completion, see “CheckTable Messages” on
page 92.
CheckTable does not include the DBC.ALL table when accounting for tables checked because
it is not considered a conventional table.
Note: CheckTable does not check tables whose rollback has been cancelled.
CheckTable ensures that file system data structures are consistent. If CheckTable experiences
any file corruption problem while checking the tables and databases, the table check in
progress is aborted. A message similar to the following appears:
7495: <DbName>.<Table Name> was aborted at <date>/<time> due to error
<error code>. Run SCANDISK.
To find the error, run the SCANDISK command after CheckTable finishes checking the
remaining tables.
For information on messages, see “CheckTable Messages” on page 92. For information on
SCANDISK, see Chapter 14: “Ferret Utility (ferret)” and Filer in Support Utilities.
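For example, once CheckTable has finished, you might scan just the affected table from within the Ferret utility. The following is a sketch only, using placeholder database and table names; see the SCOPE and SCANDISK command descriptions in Chapter 14 for the exact syntax:

scope table DB0.T1
scandisk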
Referential Integrity Terminology
The following table defines the Referential Integrity terminology that CheckTable uses.
Term
Definition
Child Table
Where referential constraints are defined, a Child Table refers to a referencing
table that references a parent table.
Parent Table
A Parent Table refers to a referenced table and is referenced by a Child table.
Primary Key
A Primary Key uniquely identifies a row of a table.
Foreign Key
A Foreign Key references columns in the Child Table and can consist of up to 16
different columns.
Referential
Constraint
Referential Constraint is defined on a column or on a table to ensure
Referential Integrity.
For example:
CREATE TABLE A
(A1 char(10) REFERENCES B (B1),
A2 integer,
FOREIGN KEY (A1,A2) REFERENCES C (C1,C2))
PRIMARY INDEX (A1);
The CREATE TABLE statement with two referential constraints declares the
following integrity constraints.
Constraint 1 is a column-level constraint: Foreign Key A1 references the Parent Key B1 in table B.
Constraint 2 is a table-level constraint: the composite Foreign Key (A1, A2) references the unique primary index in Parent Table C.
For more information on referential integrity, see “The Normalization Process” in Database
Design.
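For example, using the CHECK command documented later in this chapter, you could limit a check of a referencing (Child) table to its reference indexes. The database and table names below are placeholders:

CHECK Orders.OrderDetail BUT ONLY REFERENCE INDEXES AT LEVEL TWO;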
Recommendations for Running CheckTable
Running on a Quiescent System
You can run CheckTable on a quiescent system with all AMPs up for the following reasons:
• The validation of table integrity by CheckTable depends on the table not changing while CheckTable performs its checks. Therefore, CheckTable places a READ lock on the table during the check. You cannot perform an INSERT while CheckTable verifies consistency between the primary and fallback rows of a table.
• CheckTable places a READ lock on DBC.TVM momentarily to check for the existence of the table being checked. This lock might cause a problem when you create or modify tables, views, or macros, which require a WRITE lock on DBC.TVM.
• In some cases, CheckTable cannot perform a complete validation of table integrity if one or more AMPs are down.
Running on a Non-Quiescent System
You can also run CheckTable on a non-quiescent system by using the PRIORITY and
CONCURRENT MODE RETRY LIMIT option of the CHECK command.
To run CheckTable when users either are logged on or are expected to log on to the system, do
the following:
1 Sometimes CheckTable places a substantial resource demand (CPU cycles, disk access,
spool space, and so forth) on the system, degrading performance significantly for users
accessing the system. By default, CheckTable performs table checking at MEDIUM
priority.
To control the job scheduling based on the expected system workload and to improve
performance, use the PRIORITY option.
2 CONCURRENT MODE reduces lock contention by optimizing the locking protocol and
checking tables serially.
CONCURRENT MODE allows CheckTable to run to completion, and helps prevent
deadlocks on non-quiescent systems. The locking protocol used by concurrent mode is
optimized to minimize the number of locks required, and to use pseudo-table locks and
less restrictive lock types as much as possible. However, some locks are still required to
avoid reporting false errors due to in-progress transactions.
While lock contention is minimized, some blocking is still expected on an active system.
For example, a read lock is placed on a table while it is being checked. Therefore, update
operations to the table will be blocked until the table check is complete.
The CONCURRENT MODE RETRY LIMIT option skips all locked tables and retries these
locked tables after CheckTable finishes checking all specified tables not locked by other
sessions.
• If RETRY LIMIT is not specified, CheckTable keeps retrying the locked tables forever or until all tables are checked successfully. When trying to access a locked table, CheckTable waits a maximum of five minutes for the lock to be released. If the lock is not released in that time, CheckTable moves on to the next table. CheckTable will return to the locked table to try again.
• If RETRY LIMIT is greater than 0, CheckTable will continue to retry until the RETRY LIMIT is reached.
• If RETRY LIMIT is equal to 0, CheckTable will not retry skipped tables.
• If RETRY LIMIT is negative, CheckTable displays an error message.
Note: On non-quiescent systems and on quiescent systems with logons enabled,
CheckTable defaults to CONCURRENT MODE with RETRY LIMIT set to one. Under
these conditions, data dictionary checks are skipped.
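For example, a command of the following form (the priority and retry limit values are illustrative) checks all tables on an active system at low priority in concurrent mode, skipping locked tables and retrying them subject to a retry limit of 3:

CHECK ALL TABLES AT LEVEL ONE PRIORITY=L CONCURRENT MODE RETRY LIMIT 3;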
Using Function Keys and Commands
You can use function keys and commands to do the following:
• View CheckTable help
• Determine the status of a table check
• Abort a table check
• Abort the CHECK command
Viewing CheckTable Help
The following table shows how to view help, depending on your operating system.
On …         Use …
Linux        only commands.
MP-RAS       function keys or commands.
Windows      only commands.
Accessing Help Using F7 (MP-RAS Only)
To access the CheckTable help main menu, press F7. The following figure shows the main menu.
From the CheckTable help main menu, you can access other help information. The following
table describes each function key and its topics.
Press …      To view …
F2           general information about CheckTable.
F3           information about naming objects in CheckTable.
F4           the CHECK command syntax.
Accessing Help Using the Command Line
(Linux, MP-RAS, and Windows)
To access the entire CheckTable help system at once, type the following in the Enter a
command line:
help;
The following figure shows the beginning of the entire CheckTable help system.
To view the rest of CheckTable help, use the scroll bar on the right-hand side.
Determining the Status of a Table Check
Assume that the table check in progress specifies database DB0 with tables t1, t10, t100, t1000,
and t11. To determine the status of the table check, do the following.
Platform     To Check Status of Table Check
MP-RAS       Press the F2 function key or type status at the command prompt.
Windows      Type status at the command prompt.
Linux        Type status at the command prompt.
The following shows the results for PARALLEL and SERIAL modes.

PARALLEL mode:

>>> STATUS: CheckTable running in PARALLEL mode.
 5 CheckTable tasks started.
 4 CheckTable tasks ACTIVE.
 1 CheckTable tasks IDLE.
 1000005 bytes spool space in use.
 120 bytes spool space available.
 Task   Status
 ****   **********************************
 1      Checking data subtable ("DB0"."T1").
 2      Checking data subtable ("DB0"."T10").
 3      Checking data subtable ("DB0"."T100").
 4      Waiting for a read lock on table ("DB0"."T1000"). 0 lock retry(s),
        total wait time of 0.0e+00 minute(s).

These results indicate that CheckTable started five parallel checks:
• Three CheckTable tasks are checking the data subtable of tables DB0.T1, DB0.T10, and DB0.T100.
• One CheckTable task is waiting for a read lock on table DB0.T1000.
• One CheckTable task is paused because of insufficient resources.

SERIAL mode:

>>>> STATUS: Checking data subtable ("DB0"."T10").

This indicates that CheckTable is checking the data subtable of the table DB0.T10.
Stopping Table Checks
Stopping the Check of a Single Table
To stop the check of the current table or tables and continue the check of the next table, do the
following.
On …         Use …
MP-RAS       function key F3 or type Abort Table in the Enter a command line.
Windows      type Abort Table in the Enter a command line.
Linux        type Abort Table in the Enter a command line.
The following shows the results of the Abort Table command for serial and parallel modes.
• If you run CheckTable in SERIAL mode, CheckTable aborts the check in progress on the current table. The following message appears:
  Abort table request will be honored.
  Table check aborted by user.
• If you run CheckTable in PARALLEL mode, CheckTable aborts the check in progress on all current tables. The following message appears:
  Abort table request will be honored.
Stopping All Table Checks
To stop the CHECK command, do the following.
On …         Use …
MP-RAS       function key F4 or type Abort in the Enter a command line.
Windows      type Abort in the Enter a command line.
Linux        type Abort in the Enter a command line.
The result and output are identical in both PARALLEL and SERIAL modes. The following
message appears:
Abort check request will be honored.
Command aborted by user at 11:37:59 04/12/23.
158 table(s) checked.
155 fallback table(s) checked.
3 non-fallback table(s) checked.
13 table(s) bypassed due to being unhashed.
  3 table(s) bypassed due to pending MultiLoad.
204 table(s) bypassed due to pending Table Rebuild.
  0 table(s) failed the check.
  0 Dictionary error(s) were found.
Check completed at 11:37:59 04/12/23.
CHECK Command
The CHECK command allows you to specify the data structures you want to check in the
system by including or excluding certain tables, subtables, or databases.
Syntax
The CHECK command syntax is summarized below in linear form (brackets indicate optional items; braces indicate a choice):

CHECK {ALL TABLES [EXCLUDE name [, name ...]] | name [, name ...] [EXCLUDE name [, name ...]]}
  [BUT {ONLY | NOT} {INDEX ID=nnn | UNIQUE INDEXES | NONUNIQUE INDEXES | REFERENCE ID=nnn |
                     REFERENCE INDEXES | DATA | LARGE OBJECT ID = nnn | LOB ID = nnn |
                     LARGE OBJECTS | LOBS} [, ...]]
  [AT LEVEL {PENDINGOP | ONE | TWO | THREE}]
  [WITH ERROR LIMIT = nnn | WITH NO ERROR LIMIT]
  [SKIPLOCKS]
  [IN SERIAL | IN PARALLEL [TABLES = n]]
  [PRIORITY = {L | M | H | R | Performance Group Name}]
  [CONCURRENT MODE [RETRY LIMIT n]]
  [ERROR ONLY | DOWN ONLY | COMPRESSCHECK] ;

A name is dbname, tablename, or dbname.tablename; up to 30 names can be listed.
Syntax element
Description
ALL TABLES
CheckTable checks all tables in all databases in the system.
dbname
CheckTable checks all tables in the specified database.
You can use wildcard characters or wildcard syntax in specifying database names.
For more information, see “Using Wildcard Characters in Names” on page 97 and
“Rules for Using Wildcard Syntax” on page 99.
EXCLUDE
CheckTable excludes from the check the specified tables or databases.
• If you use CHECK ALL TABLES EXCLUDE, you can exclude one or more databases or tables.
• If you use CHECK dbname EXCLUDE, you can exclude one or more tables in a particular database.
If a specified object does not exist in the system, then the summary report lists the
object in a message.
If you do not specify this option, then CheckTable checks all data objects in the
system.
tablename
CheckTable checks a specific table, including global temporary tables.
dbname.tablename
You can use wildcard characters or wildcard syntax in specifying database names.
For more information, see “Using Wildcard Characters in Names” on page 97 and
“Rules for Using Wildcard Syntax” on page 99.
BUT ONLY
Places constraints on what CheckTable checks:
BUT NOT
• BUT ONLY causes CheckTable to check only the subsequently specified objects.
• BUT NOT causes CheckTable to skip checking of the subsequently specified
objects.
If you do not specify any selection constraints, CheckTable checks all data objects in
the system.
These options are ignored for level-pendingop checks.
Example: The following command checks all tables limiting the check to only
INDEX ID=nnn at level three:
CHECK ALL TABLES BUT ONLY INDEX ID=nnn AT LEVEL THREE;
INDEX ID = nnn
Specifies a specific secondary index (specified by its index ID) when using a
constraint with CheckTable. In general, specify this option only when you want to
check a single table.
UNIQUE INDEXES
Specifies all unique secondary indexes when using a constraint with CheckTable.
NONUNIQUE INDEXES
Specifies all Nonunique Secondary indexes when using a constraint with
CheckTable.
REFERENCE ID = nnn
Specifies a specific index (as specified by its index ID) when using a constraint with
CheckTable.
REFERENCE INDEXES
Specifies all reference indexes when using a constraint with CheckTable.
DATA
Specifies the data subtable when using a constraint with CheckTable.
LARGE OBJECT ID = nnn
Specifies a specific large object (as specified by its ID) when using a constraint with
CheckTable.
LOB ID = nnn
Specifies a specific large object (as specified by its ID) when using a constraint with
CheckTable.
LARGE OBJECTS
Specifies all large objects when using a constraint with CheckTable.
LOBS
Specifies all large objects when using a constraint with CheckTable.
AT LEVEL
Specifies the level checking CheckTable performs. Higher levels of checking yield
more thorough information, but generally require more system resources and more
time. You should choose the minimum level that will provide the information you
require.
• PENDINGOP - Provides a list of tables for which pending operations exist.
• ONE - Isolates specific tables with errors.
• TWO - Provides a detailed check of the following: consistency of row IDs, checksum of primary and fallback rows, and hash codes.
• THREE - Provides the most diagnostic information, but uses more system resources and requires more time. Use this level of check only when necessary.
For more information, see “CheckTable Check Levels” on page 56.
WITH ERROR LIMIT= nnn
CheckTable stops checking a table if it finds nnn or more errors. If CheckTable was
checking more than one table, it continues on to the next table.
The default is 20 errors for each table checked.
WITH NO ERROR LIMIT
CheckTable reports all errors for each table.
SKIPLOCKS
CheckTable skips all locked tables automatically.
If a table is locked, and if CheckTable cannot obtain a lock, then CheckTable
indicates the table check is skipped. The summary at the end of CheckTable
processing includes the total number of tables skipped.
Without this option, CheckTable waits indefinitely on the table to be checked until
it is unlocked. The CheckTable Table Lock Retry Limit field in DBS Control
specifies the duration, in minutes, that CheckTable, in non-concurrent mode, will
retry a table check when the table is locked by another application.
The default is 0, which indicates that in non-concurrent mode, CheckTable will
retry a table check until CheckTable can access the table.
If the CheckTable Table Lock Retry Limit field is greater than 0, then CheckTable
will retry a table check within the specified limit.
For more information, see “CheckTable Table Lock Retry Limit” on page 316.
IN SERIAL
Specifies the mode of checking CheckTable uses:
IN PARALLEL
• IN SERIAL means CheckTable checks a single table at a time.
This is the default.
• IN PARALLEL means CheckTable checks multiple tables simultaneously. Using
PARALLEL mode saves time but is resource intensive. The number of tables that
CheckTable can check simultaneously in parallel depends on resource
availability.
You can check the status of the number of parallel checks CheckTable performs
at any time. For information on how to check the status, see “Determining the
Status of a Table Check” on page 47.
TABLES=n
Optionally used with IN PARALLEL to specify the upper limit on the number of
tables that will be checked simultaneously. The value of n can be any integer from
two through eight.
At level-one checking, the actual maximum number of tables that can be checked
simultaneously is based on the maximum number of AMP work tasks (AWT)
defined for the system. At all other levels, the maximum is based on the maximum
number of AWT and on the available spool space. The TABLES=n option is used to
decrease the number of tables checked in parallel to something less than the
maximum.
If there is insufficient spool space to check n tables in parallel, the number checked
will be less than the number specified.
PRIORITY =
Specifies a priority level at which CheckTable should run. This option can be used
to control resource usage and improve performance. The available priority levels
are:
• L - Performance Group Low
• M - Performance Group Medium
This is the default, if the PRIORITY= option is not specified.
• H - Performance Group High
• R - Performance Group Rush
• Performance Group Name - any other Performance Group that has been defined.
If the Performance Group name contains any special characters, enclose the
name in double quotes. CheckTable reports an error if the specified
Performance Group has not been defined.
For more information on Performance Groups and Priority Scheduler, see Utilities
Volume 2.
CONCURRENT MODE
Use when running on a non-quiescent system. CONCURRENT MODE reduces
lock contention by optimizing the locking protocol and by automatically skipping
the locked tables to retry them later. After CheckTable checks all tables,
CheckTable automatically retries all tables that were skipped due to lock
contention.
When you specify the CONCURRENT MODE option, you cannot specify the IN
PARALLEL option. CheckTable always checks tables serially in concurrent mode to
reduce lock contention.
CHECK ALL TABLES … CONCURRENT MODE does not check data dictionaries
and the DBC database in concurrent mode. To check the DBC database and data
dictionaries in concurrent mode, use CHECK DBC … CONCURRENT MODE.
For more information on CONCURRENT MODE, see “Running on a Non-Quiescent System” on page 44.
RETRY LIMIT n
The duration in minutes that CheckTable waits before attempting to re-check a
table that was skipped during CONCURRENT MODE operation.
If n is zero, CheckTable will not attempt to check tables that were skipped.
If RETRY LIMIT is not specified, CheckTable retries the locked tables indefinitely,
or until all tables have been checked successfully.
When trying to access a locked table, CheckTable waits a maximum of five minutes
for the lock to be released. If the lock is not released in that time, CheckTable moves
on to the next table. If the RETRY LIMIT has not been met, CheckTable will return
to the locked table to try again. (A conceptual sketch of this retry behavior follows
the table.)
ERROR ONLY
CheckTable displays only bypassed tables and tables that have errors or warnings.
This option allows you to quickly identify and address problems that CheckTable
finds.
This option is ignored during level-pendingop checks.
DOWN ONLY
Teradata Database can isolate some file system errors to a specific data or
index subtable, or to a contiguous range of rows (“region”) in a data or
index subtable. In these cases, Teradata Database marks only the affected
subtable or region down. This improves system performance and
availability by allowing transactions that do not require access to the down
subtable or rows to proceed, without causing a database crash that would
require a system restart.
However, if several regions in a subtable are marked down, it could indicate
a fundamental problem with the subtable itself. Therefore, when a
threshold number of down regions is exceeded, the entire subtable is
marked down on all AMPs, making it unavailable to most SQL queries. This
threshold can be adjusted by means of the DBS Control utility.
The DOWN ONLY option causes CheckTable to display status information for only
those data and index subtables that have been marked down. Down status
information for subtables is displayed at check levels one, two, and three. The
specific rows included in down regions are listed only for check levels two and
three. Down status information is not displayed for level-pendingop checks.
COMPRESSCHECK
CheckTable compares the compress value list from Table Header field 5 against
dbc.tvfield.compressvaluelist during the check.
This option is ignored during level-pendingop checks.
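The interaction of CONCURRENT MODE, the five-minute lock wait, and RETRY LIMIT described
above can be summarized with the following sketch. This is a conceptual illustration only, not
CheckTable's implementation; the function names, the try_check callback, and the table names
are invented for the example.

    import time

    def concurrent_mode_check(tables, try_check, retry_limit_minutes=None):
        # Conceptual model of CONCURRENT MODE: check every table once, skipping any
        # table whose lock cannot be obtained within about five minutes, then keep
        # retrying the skipped tables. RETRY LIMIT 0 disables retries; no RETRY LIMIT
        # means retry until every table has been checked.
        skipped = [t for t in tables if not try_check(t, max_wait_minutes=5)]
        if retry_limit_minutes == 0:
            return skipped                      # skipped tables are not retried
        deadline = (time.monotonic() + retry_limit_minutes * 60
                    if retry_limit_minutes is not None else None)
        while skipped and (deadline is None or time.monotonic() < deadline):
            skipped = [t for t in skipped if not try_check(t, max_wait_minutes=5)]
        return skipped                          # tables still locked when the limit expired

    # Example with a stand-in lock test that always succeeds:
    print(concurrent_mode_check(["db1.t1", "db1.t2"],
                                try_check=lambda table, max_wait_minutes: True))   # []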
Usage Notes
The CHECK command, including database or table names, cannot exceed 255 characters.
Wildcard characters are counted as single characters. For more information see “Using Valid
Characters in Names” on page 95 and “Using Wildcard Characters in Names” on page 97.
The CHECK command is not case sensitive. You can specify the syntax elements in uppercase
or lowercase or a mixture of both. This also applies to names specified in wildcard syntax.
CheckTable can check an unlimited number of databases and tables; however, the CHECK
command accepts a maximum of 30 names on the command line. Names of the form dbname,
tablename, and dbname.tablename are each considered to be a single name. However, wildcard
expressions are also considered to be single names, and can resolve to any number of databases
and tables. For more information on using wildcard expressions, see “Using Wildcard
Characters in Names” on page 97.
For information on CHECK command syntax errors, see “Syntax Error Messages” on page 92.
CHECKTABLEB
CHECKTABLEB is the non-interactive batch mode of the CheckTable utility.
The syntax following the CHECKTABLEB command is the same as the syntax following the
CHECK command.
You can run CHECKTABLEB in batch mode by using the cnsrun utility.
Once you start CHECKTABLEB, you cannot abort it. It runs until completed.
For more information, see “CHECK Command” on page 50. For information on cnsrun, see
Chapter 6: “CNS Run (cnsrun).”
CheckTable Check Levels
CheckTable provides four levels of internal data structure checking:
• Level pendingop is the lowest level and checks for tables with pending operations.
• Level one checks for specified system data structures, data subtables, large object (LOB)
  subtables, and unique and nonunique secondary indexes (USIs and NUSIs).
• Level-two checking does the following:
  • Determines whether row IDs on any given subtable are consistent with row IDs on
    other subtables by comparing lists of these IDs in those objects
  • Compares the checksum of primary and fallback rows.
  • Verifies that hash codes reflect correct row distribution in each subtable.
• Level-three checking provides the most detailed check and requires more system resources
  than the other checking levels.
Each level performs most or all of the checks from the lower levels, and performs additional
checks that are more thorough. For example, level-two checking performs checks similar to
those from level-pendingop and level-one, and adds additional checks.
Note: Level-three checking does not include any checking of large objects. LOB checks are
performed exclusively at levels one and two.
At each level of checking, CheckTable inspects specific internal data structures. If CheckTable
detects errors during these checks, it displays error messages that describe the nature of the
errors. The message may be followed by additional information to show the location of the
problem: AMP, subtable (primary data, fallback data, or index), and row or range of rows, if
applicable.
In the lists of errors described in this chapter, this additional information is surrounded by
square brackets ([ ]) to indicate that the information is displayed only when applicable to the
error detected. The square brackets themselves are not displayed in the error messages
returned by CheckTable.
Note: For more information on specific CheckTable error messages by number, see Messages.
The following table indicates the specific internal data structures that are checked by each type
of level check. Subsequent tables describe the operations performed at each level.
Check Level: Pendingop
Internal Data Structures Checked:
• Data dictionary (if database DBC is checked)
• Table dictionary
• Table header
When to use this level:
Use pendingop checking to check for tables with the following pending operations:
• FastLoad
• Restore
• Reconfig
• Rebuild
• Replicate copy
• MultiLoad

Check Level: One
Internal Data Structures Checked:
• Data dictionary (if database DBC is checked)
• Table dictionary
• Table header
• Obsolete subtables
• Unique secondary indexes
• Nonunique secondary indexes
• ParentCount
• ChildCount
• Subtable of a given table
• Base global temporary tables
• Data subtables
• Large object subtables
• Reference indexes
When to use this level:
Use level-one checking only to isolate specific tables with errors. Then perform level-two or
level-three checks on those specific tables.
Note: When level-one checks use the DOWN ONLY option, the CheckTable results show only
the subtables that have been marked down.

Check Level: Two
Internal Data Structures Checked:
• Data subtables
• Large object subtables
• Unique secondary indexes
• Nonunique secondary indexes
• Reference indexes
When to use this level:
Use level-two checking when checks by level one fail, and you require a detailed checking of
consistency of row IDs, the checksum of primary and fallback rows, and hash codes.
Note: When level-two checks use the DOWN ONLY option, the CheckTable results show only
the subtables and regions (ranges of rows in subtables) that have been marked down.

Check Level: Three
Internal Data Structures Checked:
• Data subtables
• Unique secondary indexes
• Nonunique secondary indexes
• Reference indexes
When to use this level:
Use level-three checking rarely and only for specific diagnostic purposes, such as when an
AMP is down.
Note: When level-three checks use the DOWN ONLY option, the CheckTable results show only
the subtables and regions (ranges of rows in subtables) that have been marked down.
Note: Databases and tables within databases are checked in alphabetical order.
Level-Pendingop Checking
Level-pendingop checking checks field 4 of a table header and warns you if the table is in any
of the following states:
• Pending FastLoad
• Pending Restore
• Pending Reconfig
• Pending Rebuild
• Pending Replicate copy
• Pending MultiLoad
Level-One Checking
In addition to performing level-pendingop checks, level-one checking checks the following:
• Specified system data structures
• Data subtables
• Large object subtables
• Unique and nonunique secondary indexes
The following table summarizes level-one checks. Further information about each type of
check is provided in subsequent sections.
In a check type of …
CheckTable …
table dictionary
table header
• checks the partitioning definition in the table header for a
partitioned primary index for validity for this operating
environment.
• compares the compress value list from Table Header field 5
against dbc.tvfield.compressvaluelist, if the COMPRESSCHECK
option is specified.
obsolete subtables
checks for extraneous subtables of a given table.
reference indexes
• checks whether any AMP has any reference index flagged as
invalid.
• if the index is hashed and the table is fallback, compares the
primary row count with the fallback row count
• compares the row count in the index subtable with the row
count in the data subtable
data subtables
compares row count in the primary data subtable with row count
in fallback data subtables per AMP in each cluster. This is done
only if the table is fallback.
large object subtables
• compares row counts in the LOB subtable
• verifies that the primary copy of the LOB subtable matches the
row count in the fallback subtables
• verifies that the row count of the base data table matches that of
each LOB subtable associated with the base table. Note that in
cases where the associated LOB value is NULL, there will be no
LOB row in the LOB subtable.
unique secondary indexes
checks if any AMP has any unique secondary indexes flagged as
invalid. CheckTable compares the following:
• row counts between data subtables across all primary AMPs.
• row counts in corresponding unique secondary index subtables
across all primary AMPs.
If the primary AMP is unavailable, and the table is fallback, then
the appropriate fallback counts are used instead. If the AMP is
unavailable and the table is not fallback, then this check is
skipped.
• compares row counts in the unique secondary index subtable,
on each primary AMP, with the row count in the corresponding
unique secondary index subtables of the other AMPs in the
same cluster. However, this is done only if the table is fallback.
For unique and nonunique secondary indexes, an invalid index is
not an error. Unless the index is excluded explicitly from the check,
CheckTable issues a warning.
nonunique secondary indexes
checks if any AMP has any nonunique secondary indexes flagged as
invalid.
CheckTable compares row counts of nonunique secondary
subtables to row counts for the corresponding data subtable. You
can run the NUSI check during level-one checking without all
AMPs online. However, relationships between data and index
subtables on unavailable AMPs are not checked.
For unique and nonunique secondary indexes, an invalid index is
not an error. Unless the index is excluded explicitly from the check,
CheckTable issues a warning.
down status only
(DOWN ONLY option)
displays only those subtables that have been marked down.
The following sections describe specific types of checks in more detail.
Checking Table Headers
For a partitioned primary index, the partitioning definition in the table header is checked for
validity for the operating environment, and checked for consistency and correctness.
If the table headers for a table with a partitioned primary index are not valid, the following
error is reported, and the table is skipped:
7535: Operation not allowed: database.table table header has invalid
partitioning.
To correct the table header and/or the partitioning of rows for a table, use the ALTER TABLE
statement with the REVALIDATE PRIMARY INDEX option.
For more information on the ALTER TABLE statement, see SQL Data Definition Language.
Checking Data Subtables
The data subtable check is performed only for fallback tables. This check compares the row count in
the primary data subtable on each AMP to the sum of the row counts in the corresponding
fallback data subtables on other AMPs in the same cluster.
If the row counts do not match, then the following message appears.
Message number    Message
2747
Primary and fallback data counts do not match.
Primary AMP 00000
nnnn [too many|too few] fallback rows
To complete this check, CheckTable must access all AMPs in a cluster. Therefore, any AMP in
a cluster with an unavailable AMP is not checked.
Checking Large Object Subtables
In the case of LOB subtables, there is not a one-to-one correspondence in the row counts of
the base data table and the LOB subtable, since LOBs may span multiple physical rows in the
LOB subtable. For example, if a base data table row points to a LOB that is 198,000 bytes in
length, there will be four rows in the LOB subtable for this object (three rows containing
64,000 bytes and one row containing 6,000 bytes). When the counts are compared against the
base data table, these four rows are counted by CheckTable as a single row.
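The row arithmetic in this example is a ceiling division, sketched below. The 64,000-byte row
size is taken from the example above; the function name is illustrative only.

    import math

    def lob_subtable_rows(lob_length_bytes, row_bytes=64000):
        # Number of physical rows one LOB occupies in the LOB subtable,
        # per the chunking described above.
        return math.ceil(lob_length_bytes / row_bytes)

    # The 198,000-byte LOB above: three full 64,000-byte rows plus one 6,000-byte row.
    print(lob_subtable_rows(198000))   # 4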
If the row counts do not match, one of the following messages appears.
Message number    Message
7517
Data count does not match the LOB count.
AMP nnnnn, [Primary|Fallback] subtable
LOB id nnn
nnnn [too many | too few] LOB rows
7518
Primary and fallback LOB counts do not match
Primary AMP nnnnn
LOB id nnn
nnnn [too many | too few] fallback rows
Checking Unique Secondary Indexes
The first check compares the sum of rows in the primary (and if needed, fallback) data
subtables across all AMPs to the sum of the rows in the corresponding primary (and if needed,
fallback) unique secondary index subtables across all AMPs.
If the sums of rows do not match, then the following message appears.
Message number    Message
2748
Data count does not match the USI count.
USI id nnn
[nnnn [too many|too few] USI rows]
The row count of the data subtable should be greater than or equal to the row count of the
reference index subtable. If data row count is less than reference index row count, then the
system displays the following message:
Data count is less then RI count.
Reference Index ID nnn
[nnnn [too many|too few] reference index rows]
For no-fallback tables, if an AMP is unavailable, this entire check is bypassed. For fallback
tables, if an AMP is down, the fallback subtables (both data and index) correspond to primary
subtables.
The second check is performed only for fallback tables. The sum of the row count for the
primary unique secondary subtable on each AMP is compared to the sum of the row counts in
the corresponding fallback unique secondary index subtables on other AMPs in the same
cluster. If the sums of the row counts do not match, then the following message appears.
Message number    Message
2749
Primary and fallback USI counts do not match
Primary AMP nnnnn
USI id nnn
[nnnn [too many|too few] fallback rows]
To complete this check, CheckTable must access all AMPs in a cluster. Therefore, this check is
bypassed on any AMP in a cluster that contains an unavailable AMP.
Checking Nonunique Secondary Indexes
The check of a nonunique secondary index compares the counts of the rows in each data
subtable, in both primary and fallback where appropriate, to the count of rows indexed by the
corresponding index subtable. If the counts of rows do not match, then the following message
appears.
Message number    Message
2750
Data subtable count does not match NUSI count.
[AMP nnnnn, Primary subtable nnnn]
[AMP nnnnn, Fallback subtable nnnn]
NUSI id nnn
[nnnn [too many|too few] rows referenced by index]
Because NUSI rows reside on the same AMP as the data rows, the index allows this check to
complete without having all the AMPs available. Relationships between data and index
subtables on the unavailable AMPs are not checked.
Checking for Down Subtables and Regions
The check for down subtables and regions reads table header information where down status
is recorded. If a subtable or a range of rows in a subtable is marked down, one of the following
messages appears:
Message number    Message
9129
AMP nnnnn, [Primary Data|Fallback|Index|Fallback Index] Subtable
nnnn is marked down [with down region].
9130
AMP nnnnn, [Primary Data|Fallback|Index|Fallback Index] Subtable
nnnn has down region.
Level-Two Checking
In addition to performing level-pendingop and level-one checks, level-two checking does the
following:
• Determines whether row IDs on any given subtable are consistent with row IDs on other
  subtables by comparing lists of these IDs in those objects.
• Compares the checksum of primary and fallback rows.
• Verifies that hash codes reflect correct row distribution in each subtable.
Critical to level-two and level-three checking is the amount of spool space available on the
system. The following formulas determine the amount of spool space required to perform
these two levels of checking:
• RID = 32,020 * (# of rows / 3,200)
• SIS = (52 * # of rows) + (3,000,000 * # of AMPs)
To determine the number of rows, count the rows in the primary data table, which is the same
as the primary index table. The following table shows the values and spool space required in
the above formulas.
The value …
Is the spool space required …
RID
for the data subtable.
SIS
by the largest secondary index subtable, whether it is unique or nonunique.
Therefore, a level-two check for a nonfallback table that involves the secondary index
requires total spool space equal to the following:
RID + (SIS * 2)
If the table is fallback, then the check requires total spool space equal to the following:
2 * (RID + (SIS * 2))
If you specify PARALLEL mode, then the total required spool space equals the sum of the
spool space required for each of the tables being checked.
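As a rough worked example, the RID and SIS formulas and the two totals above can be combined
as follows. This is only an illustrative sketch; the function name and the row and AMP counts are
made up, and the result is in the same (unstated) units as the formulas above.

    def level2_spool_estimate(num_rows, num_amps, fallback):
        # RID and SIS formulas from above, combined into the per-table
        # spool estimate for a level-two check.
        rid = 32020 * (num_rows / 3200)
        sis = (52 * num_rows) + (3000000 * num_amps)
        total = rid + (sis * 2)
        return 2 * total if fallback else total

    # Illustrative values only: a 10,000,000-row fallback table on 20 AMPs.
    print(int(level2_spool_estimate(10_000_000, 20, fallback=True)))

In PARALLEL mode, as noted above, the requirement is the sum of this estimate over all tables
being checked at the same time.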
The following table summarizes level-two checks.
In a check type of …
CheckTable …
data subtable
checks primary and fallback subtables for duplicate row IDs,
out-of-order row IDs, and incorrectly distributed rows. For fallback
tables, it checks primary against fallback.
large object subtable
checks for:
• duplicate row id, out-of-order row id, and incorrectly distributed
rows in the LOB subtables
• missing primary or fallback row, where fallback exists for LOBs
• checksum of the primary and fallback copies of the LOB row
• stale locators/OIDs where the update tag of the OID in the base
row does not match the update tag in the large LOB row
• gaps in the LOB, for example, if the first and third portions of the
LOB are present, but the second piece is missing from the
subtable.
unique secondary indexes
• checks the list of row IDs indexed in the primary or fallback index
subtable copy with row IDs in primary or fallback data subtable
copy. For fallback tables, compares row IDs in fallback copy to
row IDs in primary copy.
• checks for duplicate, out-of-order, and incorrectly distributed
row IDs.
nonunique secondary
indexes
• checks index subtables for duplicate and out-of-order index rows.
• checks index row for out-of-order data row IDs.
• checks any indexed data row IDs that belong to another AMP or
belong in another subtable.
• checks lists of row IDs (indexed by index rows in the primary
index subtable and each fallback index subtable) with the actual
list of row IDs of data rows in the corresponding data subtables.
reference index check
checks specified reference index subtables for duplicate, out-of-order
row IDs, and incorrectly distributed rows.
Note: If the reference index subtable is fallback, Teradata Database
uses the row IDs of the reference index rows from the primary copy
of the subtables.
down status only
(DOWN ONLY option)
displays only those subtables and regions that have been marked
down.
Further information about each type of check is provided in subsequent sections.
Note: If you run SCANDISK and correct any problems before using CheckTable, then it
should not detect any duplicate or out-of-order rows during level-two checking.
Checking Data Subtables
As CheckTable scans the data subtables, it performs checksum calculations for primary and
corresponding fallback rows, and detects any duplicate, out-of-order, or incorrectly
distributed rows in the primary and fallback data subtables. If CheckTable finds an error, it
displays one of the following messages.
Message number    Message
2751
Data row out of order.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
Previous row id nnnnH nnnnH nnnnH nnnnH nnnnH
2752
Data row has same row id as previous row.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
2753
Primary data row is on wrong AMP.
AMP nnnnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected primary AMP nnnnn
[Expected fallback AMP nnnnn,
Expected fallback subtable nnnn]
[Expected fallback AMP nnnnn,
Expected fallback subtable unknown]
2754
Fallback data row is on wrong AMP.
AMP nnnnn, Fallback subtable nnnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
[Expected fallback AMP nnnnn,
Expected fallback subtable nnnn]
[Expected fallback AMP nnnnn,
Expected fallback subtable unknown]
Expected primary AMP nnnnn
2755
Fallback data row is in wrong subtable.
AMP nnnnn, Fallback subtable nnnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected fallback subtable nnnn
Expected primary AMP nnnnn
2791
Primary and fallback data row checksums do not match.
Primary AMP nnnnn
Fallback AMP nnnnn Fallback subtable nnnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
Note: CheckTable treats index rows that are out-of-order as if they do not exist, and continues
looking for the next row in the sequence. This can cause CheckTable to report out-of-order
errors for subsequent rows, even if only a single row was out of order.
For example, given the sequence: 1, 2, 4, 6, 8, 20, 9, 10, 11, 12, 17, 18, 21, 25, CheckTable
would see 9 as being out of ascending sequence. CheckTable would therefore ignore the 9,
then continue looking for a row to follow 20. In this case, although 20 is likely the only row
that is out of order, CheckTable would report errors for rows 9 through 18.
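The reporting behavior in this note can be illustrated with a small simulation. This is a
simplified sketch, not CheckTable's actual logic: it only models "accept a row ID when it is
greater than the last accepted ID; otherwise report it and move on."

    def report_out_of_order(row_ids):
        # A row whose ID is not greater than the last accepted ID is
        # reported as out of order and then ignored.
        errors, last_accepted = [], None
        for rid in row_ids:
            if last_accepted is not None and rid <= last_accepted:
                errors.append(rid)          # reported, then skipped
            else:
                last_accepted = rid         # accepted; checking continues from here
        return errors

    sequence = [1, 2, 4, 6, 8, 20, 9, 10, 11, 12, 17, 18, 21, 25]
    print(report_out_of_order(sequence))    # [9, 10, 11, 12, 17, 18]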
The level-two checks are performed on all primary and fallback data rows on all online AMPs.
CheckTable does not check data on down AMPs. CheckTable performs the remaining checks,
which detect primary or fallback rows without corresponding fallback rows or vice versa, only
for fallback tables.
If CheckTable detects extra or missing fallback data rows, then one of the following messages
appears.
Message number    Message
2756
Fallback data row is missing.
Primary AMP nnnnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
[Expected fallback AMP nnnnn,
Expected fallback subtable nnnn]
[Expected fallback AMP nnnnn,
Expected fallback subtable unknown]
2757
Primary data row is missing.
Fallback AMP nnnnn, Fallback subtable nnnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected primary AMP nnnnn
If an AMP is unavailable, then the primary-to-fallback data check bypasses rows whose
alternate (primary or fallback) copy would be expected to be on the unavailable AMP.
However, CheckTable still scans all subtables to check for exceptions other than
primary-to-fallback inconsistencies.
Checking Large Object Subtables
CheckTable determines whether row IDs on any given subtable are consistent with row IDs on
other subtables. It also compares the checksum of primary and fallback rows and verifies that
hash codes reflect correct row distribution in each subtable.
Message number    Message
7519
LOB row out of order.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
LOB id nnn
LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
Previous row id nnnnH nnnnH nnnnH nnnnH nnnnH
7520
LOB row has same row ID as previous row.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
LOB id nnn
LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
7521
Primary LOB row is on wrong AMP.
AMP nnnnn
LOB ID nnn
LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected primary AMP nnnnn
Expected fallback AMP nnnnn
Expected fallback subtable nnnn
7522
Fallback LOB row is on wrong AMP.
AMP nnnnn
LOB id nnn
LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected primary AMP nnnnn
Expected fallback AMP nnnnn
Expected fallback subtable nnnn
7523
Fallback LOB row is in wrong subtable.
[AMP nnnnn, Fallback subtable nnnn]
LOB id nnn
LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected fallback subtable nnnn
Expected primary AMP nnnnn
7524
Data row contains invalid LOB oid.
AMP nnnnn
LOB id nnn
Data row id nnnnH nnnnH nnnnH nnnnH nnnnH
LOB oid nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
7525
Fallback LOB row is missing
Primary AMP nnnnn
LOB id nnn
LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected fallback AMP nnnnn
Expected fallback subtable nnnn
7526
Primary LOB row is missing
Fallback AMP nnnnn, Fallback subtable nnnn
LOB id nnn
LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected primary AMP nnnnn
7527
Base row OID update tag does not match LOB row OID update tag.
[AMP nnnnn, Fallback subtable nnnn]
LOB id nnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
7528
Primary and fallback LOB row checksums do not match
Primary AMP nnnnn
Fallback AMP nnnnn, Fallback subtable nnnn
LOB id nnn
LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
7529
Missing a piece of the large object
AMP nnnnn
LOB id nnn
LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
Prev LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
7530
LOB row references a non-existent data row
AMP nnnnn
LOB id nnn
LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
Prev LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
7531
Data row references a non-existent LOB row
AMP nnnnn
LOB id nnn
Data row id nnnnH nnnnH nnnnH nnnnH nnnnH
LOB oid
7532
Data row references a LOB already referenced by another row
AMP nnnnn
LOB id nnn
Data row id nnnnH nnnnH nnnnH nnnnH nnnnH
LOB row id nnnnH nnnnH nnnnH nnnnH nnnnH
Checking Unique Secondary Indexes
If CheckTable performs a unique secondary index check without a preceding data or
secondary index check on the same table, a scan of the primary (and fallback, if an AMP is
unavailable) data subtables checks for duplicate, out-of-order, and incorrectly distributed data
rows.
IF an AMP is unavailable and the
table is …
THEN …
no-fallback
all unique secondary index checks are bypassed.
fallback
the fallback copies of index and data subtables are used in
place of the primary copies, which are on the unavailable
AMP.
As CheckTable checks the index subtables, it performs checksum calculations for primary and
corresponding fallback index rows, and it also detects duplicate, out-of-order, or incorrectly
distributed index rows. If CheckTable detects one of these index rows, then CheckTable
discards the offending row from the remainder of the checking process, and displays one of
the following messages.
Message number    Message
2758
USI row out of order.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
USI id nnn
USI row id nnnnH nnnnH nnnnH nnnnH nnnnH
Previous row id nnnnH nnnnH nnnnH nnnnH nnnnH
Indexed data row id nnnnH nnnnH nnnnH nnnnH nnnnH
2759
USI row has same row id as previous row
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
USI id nnn
USI row id nnnnH nnnnH nnnnH nnnnH nnnnH
Indexed data row id nnnnH nnnnH nnnnH nnnnH nnnnH
2760
Primary USI row is on wrong AMP
AMP nnnnn
USI id nnn
USI row id nnnnH nnnnH nnnnH nnnnH nnnnH
Indexed data row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected primary AMP nnnnn
[Expected fallback AMP nnnnn,
Expected fallback subtable nnnn]
[Expected fallback AMP nnnnn,
Expected fallback subtable unknown]
2761
Fallback USI row is on wrong AMP.
AMP nnnnn, Fallback subtable nnnn
USI id nnn
USI row id nnnnH nnnnH nnnnH nnnnH nnnnH
Indexed data row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected primary AMP nnnnn
[Expected fallback AMP nnnnn,
Expected fallback subtable nnnn]
[Expected fallback AMP nnnnn,
Expected fallback subtable unknown]
2762
Fallback USI row is in wrong subtable.
AMP nnnnn, Fallback subtable nnnn
USI id nnn
USI row id nnnnH nnnnH nnnnH nnnnH nnnnH
Indexed data row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected fallback subtable nnnn
Expected primary AMP nnnnn
2792
Primary and fallback USI row checksums do not match.
Primary AMP nnnnn
Fallback AMP nnnnn Fallback subtable nnnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
Note: CheckTable treats index rows that are out-of-order as if they do not exist, and continues
looking for the next row in the sequence. This can cause CheckTable to report out-of-order
errors for subsequent rows, even if only a single row was out of order.
For example, given the sequence: 1, 2, 4, 6, 8, 20, 9, 10, 11, 12, 17, 18, 21, 25, CheckTable
would see 9 as being out of ascending sequence. CheckTable would therefore ignore the 9,
then continue looking for a row to follow 20. In this case, although 20 is likely the only row
that is out of order, CheckTable would report errors for rows 9 through 18.
The first check made for unique secondary indexes compares the list of data row IDs, indexed
by index rows in the primary (or if needed, fallback) copy of the index subtables to the data
row IDs in the primary (or if needed, fallback) copy of the data subtable.
During these checks, CheckTable detects non-indexed data rows, nonexistent indexed data
rows, and multiple indexed data rows. In these cases, one of the following messages appears.
Message number    Message
2763
USI row indexes non existent data row.
[USI AMP nnnnn, Primary USI subtable]
[USI AMP nnnnn, Fallback USI subtable nnnn]
USI id nnn
USI row id nnnnH nnnnH nnnnH nnnnH nnnnH
Indexed data row id nnnnH nnnnH nnnnH nnnnH nnnnH
[Expected primary data AMP nnnnn]
[Expected fallback data AMP nnnnn]
[Expected fallback data subtable nnnn]
[Expected fallback data subtable unknown]
2764
Data row not indexed by USI.
[Data AMP nnnnn, Primary data subtable]
[Data AMP nnnnn, Fallback data subtable nnnn]
USI id nnn
Data row id nnnnH nnnnH nnnnH nnnnH nnnnH
2765
Data row indexed multiple times by USI.
[Data AMP nnnnn, Primary data subtable]
[Data AMP nnnnn, Fallback data subtable nnnn]
USI id nnn
Data row id nnnnH nnnnH nnnnH nnnnH nnnnH
[USI AMP #1 nnnnn, Primary USI subtable]
[USI AMP #1 nnnnn, Fallback USI subtable nnnn]
USI row id #1 nnnnH nnnnH nnnnH nnnnH nnnnH
[USI AMP #2 nnnnn, Primary USI subtable]
[USI AMP #2 nnnnn, Fallback USI subtable nnnn]
USI row id #2 nnnnH nnnnH nnnnH nnnnH nnnnH
CheckTable performs the remaining checks, which detect index rows in the primary copy of
the index subtable without corresponding index rows in the fallback subtables or vice versa,
only for fallback tables. CheckTable does not check rows whose primary or fallback copies are
unavailable due to an unavailable AMP.
If CheckTable detects extra or missing fallback unique secondary index rows, one of the
following messages appears.
Message number    Message
2766
Fallback USI row is missing.
Primary AMP nnnnn
USI id nnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
[Expected fallback AMP nnnnn,
Expected fallback subtable nnnn]
[Expected fallback AMP nnnnn,
Expected fallback subtable unknown]
2767
Primary USI row is missing.
Fallback AMP nnnnn, Fallback subtable nnnn
USI id nnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected primary AMP nnnnn
Checking Nonunique Secondary Indexes
If CheckTable performs a NUSI check without a preceding data or secondary index check on
the same table, it performs a scan of the primary and fallback data subtables to check for
duplicate, out-of-order, or incorrectly distributed data rows.
As CheckTable traverses the index subtables, it detects duplicate and out-of-order index rows.
If CheckTable detects either of these problems, it discards the offending row from the checking
process and displays one of the following messages.
Message number    Message
2768
NUSI row out of order.
[AMP nnnnn, Primary subtable nnnn]
[AMP nnnnn, Fallback subtable nnnn]
NUSI id nnn
NUSI row id nnnnH nnnnH nnnnH nnnnH nnnnH
Previous row id nnnnH nnnnH nnnnH nnnnH nnnnH
2769
NUSI row has same row id as previous row.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
NUSI id nnn
NUSI row id nnnnH nnnnH nnnnH nnnnH nnnnH
Note: CheckTable treats index rows that are out-of-order as if they do not exist, and continues
looking for the next row in the sequence. This can cause CheckTable to report out-of-order
errors for subsequent rows, even if only a single row was out of order.
For example, given the sequence: 1, 2, 4, 6, 8, 20, 9, 10, 11, 12, 17, 18, 21, 25, CheckTable
would see 9 as being out of ascending sequence. CheckTable would therefore ignore the 9,
then continue looking for a row to follow 20. In this case, although 20 is likely the only row
that is out of order, CheckTable would report errors for rows 9 through 18.
While traversing index subtables, CheckTable checks the list of data row IDs in each index row
to verify that the list is in order. Although CheckTable does not discard out-of-order data row
IDs, the following error message appears for each such data row ID.
Message number    Message
2786
NUSI data row ID out of order.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
NUSI id nnn
NUSI row id nnnnH nnnnH nnnnH nnnnH nnnnH
While CheckTable traverses the index subtables, if it finds any indexed data row ID that
belongs (according to its hash code) on another AMP or in another fallback subtable,
CheckTable ignores that data row ID for the remainder of the check, and displays an error
message. One of the following messages appears.
Message number    Message
2770
Data row id referenced by NUSI is on wrong AMP.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
NUSI id nnn
NUSI row id nnnnH nnnnH nnnnH nnnnH nnnnH
Indexed data row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected primary AMP nnnnn
[Expected fallback AMP nnnnn,
Expected fallback subtable nnnn]
[Expected fallback AMP nnnnn,
Expected fallback subtable unknown]
2771
Data row id referenced by fallback NUSI in
wrong subtable.
[AMP nnnnn, Fallback subtable nnnn]
NUSI id nnn
NUSI row id nnnnH nnnnH nnnnH nnnnH nnnnH
Indexed data row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected fallback subtable nnnn
Expected primary AMP nnnnn
The check for nonunique secondary indexes compares the list of row IDs indexed by index
rows in the primary and each fallback index subtable with the actual list of row IDs of data
rows in the corresponding data subtables. During this check, CheckTable detects non-indexed
data rows, nonexistent indexed data rows, and multiple indexed data rows. In these cases, one
of the following messages appears.
Message number    Message
2772
NUSI row indexes non existent data row.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
NUSI id nnn
NUSI row id nnnnH nnnnH nnnnH nnnnH nnnnH
Indexed data row id nnnnH nnnnH nnnnH nnnnH nnnnH
2773
Data row not indexed by NUSI.
[Data AMP nnnnn, Primary subtable]
[Data AMP nnnnn, Fallback subtable nnnn]
NUSI id nnn
Data row id nnnnH nnnnH nnnnH nnnnH nnnnH
2774
Data row indexed multiple times by NUSI.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
NUSI id nnn
Data row id nnnnH nnnnH nnnnH nnnnH nnnnH
NUSI row id #1 nnnnH nnnnH nnnnH nnnnH nnnnH
NUSI row id #2 nnnnH nnnnH nnnnH nnnnH nnnnH
Checking Reference Indexes
The system compares each reference index row from the subtables with the rows in the spool
file created during the data subtable check. If an inconsistency occurs between the reference
index subtables and the corresponding data subtable, then one of the following messages
appears.
Message number    Message
2736
RI row out of order.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, FallBack subtable nnnn]
Reference index ID nnn
Reference index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
Previous reference index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
2737
RI row has same row ID as previous row.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, FallBack subtable nnnn]
Reference index ID nnn
Reference index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
2820
Fallback reference index row is on the wrong AMP.
AMP nnnnn, FallBack subtable nnnn
Reference index ID nnn
Reference index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
Expected primary AMP nnnnn
[Expected fallback AMP nnnnn, Expected fallback subtable
nnnn]
[Expected fallback AMP nnnnn, Expected fallback subtable
unknown]
2880
Reference index row indexes non existent data row.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
Reference Index ID nnn
Reference index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
Reference index row count exceeds data row by n,nnn,nnn,nnn
2881
Foreign Key value is not found in any reference index row.
[Data AMP nnnnn, Primary subtable]
[Data AMP nnnnn, Fallback subtable nnnn]
Reference Index ID nnn
Data row ID nnnnH nnnnH nnnnH nnnnH nnnnH
2882
Data row is not indexed by reference index row.
[Data AMP nnnnn, Primary subtable]
[Data AMP nnnnn, Fallback subtable nnnn]
Reference Index ID nnn
Reference index row id nnnnH nnnnH nnnnH nnnnH nnnnH
Data row count exceeds reference index row by n,nnn,nnn,nnn
2898
Fallback reference index row is in the wrong subtable.
[AMP nnnnn, Fallback subtable nnnn]
Reference index ID nnn
Reference index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
Expected fallback subtable nnnn
Expected primary AMP nnnnn
2899
Reference index row contains non existent Foreign Key value.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
Reference Index ID nnn
Reference index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
2993
Primary reference index row is on the wrong AMP.
AMP nnnnn
Reference index ID nnn
Reference index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
Expected primary AMP nnnnn
[Expected fallback AMP nnnnn, Expected fallback subtable
nnnn]
[Expected fallback AMP nnnnn, Expected fallback subtable
unknown]
The remaining checks detect reference rows in the primary copy of the reference index
subtable that do not have corresponding reference index rows in the fallback subtables or vice
versa.
If the system detects an error, then one of the following messages appears.
Message number    Message
2733
Primary and fallback RI row checksums do not match
Primary AMP nnnnn
Fallback AMP nnnnn, Fallback subtable nnnn
Reference index ID nnn
Reference index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
2883
Fallback reference index row is missing.
Primary AMP nnnnn
Reference index ID nnn
Reference index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
[Expected fallback AMP nnnnn, Expected fallback subtable
nnnn]
[Expected fallback AMP nnnnn, Expected fallback subtable
unknown]
2884
Primary reference index row is missing
Fallback AMP nnnnn, Fallback subtable nnnn
Reference Index ID nnn
Reference Index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
Expected primary AMP nnnnn
Checking for Down Subtables and Regions
The check for down subtables and regions reads table header information where down status
is recorded. If a subtable or a range of rows in a subtable is marked down, one of the following
messages appears:
Message number    Message
9129
AMP nnnnn, [Primary Data|Fallback|Index|Fallback Index] Subtable
nnnn is marked down [with down region].
[Start Row id nnnnH nnnnH nnnnH nnnnH nnnnH]
[End Row id nnnnH nnnnH nnnnH nnnnH nnnnH]
9130
AMP nnnnn, [Primary Data|Fallback|Index|Fallback Index] Subtable
nnnn has down region.
Start Row id nnnnH nnnnH nnnnH nnnnH nnnnH
End Row id nnnnH nnnnH nnnnH nnnnH nnnnH
Level-Three Checking
Level-three checking provides the most detailed check and requires more system resources
than any of the other checking levels. Because of the cost in system resources, use this checking
level rarely and only for specific diagnostic purposes.
In addition to checks that are unique to this level, level-three checking includes most of the
same checks that are performed at lower check levels. However, level-three checking does not
perform the large object checking that occurs at levels one and two.
If an AMP is unavailable and the table is no-fallback, then CheckTable bypasses all unique
secondary index checks.
If an AMP is unavailable and the table is fallback, then CheckTable uses the fallback copies of
index and data subtables in place of the primary copies on the unavailable AMP.
The following table summarizes level-three checking.
In a check type of …
CheckTable …
data subtable
• checks the primary data subtable, comparing each row with
corresponding fallback data row. Makes a byte-by-byte
comparison at all levels. If the table is not fallback, then no
fallback checks are done.
• checks for duplicate rows and duplicate unique primary keys.
• verifies the row hash.
• verifies the internal partition numbers of a partitioned
primary index, if it exists.
unique secondary indexes
• checks primary or fallback index subtables for out-of-order
data rows, multiple data rows with same row ID, data rows on
the wrong AMP, or data rows in the wrong fallback subtable.
Makes a byte-by-byte comparison of primary and fallback
index subtables if table is fallback, and verifies the row hash.
• compares the key of each unique secondary index row to the
expected key extracted from the data row.
• verifies that every row is indexed only once.
nonunique secondary indexes
• checks that each data row on index subtables is indexed with
the correct index key.
• checks that the hash code for each nonunique secondary
index row corresponds to the key value of the row.
reference index subtables
• checks for duplicate index values.
• checks for incorrect hash code.
second set of checks for
reference index subtables
checks reference index subtables against parent tables.
down status only
(DOWN ONLY option)
displays only those subtables and regions that have been marked
down.
Checking Data Subtables
As CheckTable scans the data subtables, it verifies row hashes, performs a byte-by-byte
comparison of primary and fallback tables, detects any duplicate, out-of-order, or incorrectly
distributed rows, and verifies the internal partition number of each row if the table has a
partitioned primary index. In addition to errors from the lower-level data subtable checks,
level-three checking can return the following error messages.
Message number    Message
2520
A variable field offset exceeds the next variable offset or the
row length.
AMP nnnnn
Table id nnnnH nnnnH
[Index id nnn
Index Row id nnnnH nnnnH nnnnH nnnnH nnnnH]
[Row id nnnnH nnnnH nnnnH nnnnH nnnnH]
Var Field nnnn (0xnnnn) contains 0xnnnn. Row length 0xnnnn.
2775
Data row hash code is incorrect.
[AMP nnnnn, Primary subtable nnnn]
[AMP nnnnn, Fallback subtable nnnn]
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected hash code nnnnH nnnnH
Expected primary AMP nnnnn
[Expected fallback AMP nnnnn, Expected fallback subtable nnnn]
[Expected fallback AMP nnnnn, Expected fallback subtable unknown]
2778
Primary and fallback data rows do not match.
Primary AMP nnnnn
Fallback AMP nnnnn Fallback subtable nnnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
Note: CheckTable treats index rows that are out-of-order as if they do not exist, and continues
looking for the next row in the sequence. This can cause CheckTable to report out-of-order
errors for subsequent rows, even if only a single row was out of order.
For example, given the sequence: 1, 2, 4, 6, 8, 20, 9, 10, 11, 12, 17, 18, 21, 25, CheckTable
would see 9 as being out of ascending sequence. CheckTable would therefore ignore the 9,
then continue looking for a row to follow 20. In this case, although 20 is likely the only row
that is out of order, CheckTable would report errors for rows 9 through 18.
If an AMP is unavailable, then the primary-to-fallback data check bypasses rows whose
alternate (primary or fallback) copy would be expected to be on the unavailable AMP.
However, CheckTable still scans all subtables to check for exceptions other than
primary-to-fallback inconsistencies.
If an internal partition number does not match the one calculated for the row from the
partitioning expressions, the following error message appears.
Message number    Message
7534
Data row partitioning is incorrect.
AMP nnnnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
External partition nnnnn (nnnnH) but expected nnnnn (nnnnH)
Internal partition nnnnn (nnnnH) but expected nnnnn (nnnnH)
For multilevel partitioned primary indexes, the external partition number in the error
message is the number resulting from the combined partition expression. For more
information on partitioned primary indexes and on partitioning expressions, see Database
Design.
You might have to validate a partitioned primary index again if the following occurs:
• CheckTable or queries indicate rows with incorrect Row IDs.
• The table is restored or copied to another system.
• A partitioning expression includes decimal rounding operations, floating point operations,
  or HASHROW or HASHBUCKET functions.
• The system is upgraded.
• You change the Round Halfway Mag Up field in the DBS Control utility, causing changes in
  the calculation of a partitioning expression.
To correct the partitioning of rows for a table, use the ALTER TABLE statement with the
REVALIDATE PRIMARY INDEX WITH DELETE or INSERT option.
For more information on the ALTER TABLE statement, see SQL Data Definition Language.
Checking Unique Secondary Indexes
If CheckTable performs a unique secondary index check without a preceding data or
secondary index check on the same table, a scan of the primary (and fallback, if an AMP is
unavailable) data subtables checks for duplicate, out-of-order, and incorrectly distributed data
rows.
IF an AMP is unavailable and the
table is …
THEN …
no-fallback
all unique secondary index checks are bypassed.
fallback
the fallback copies of index and data subtables are used in
place of the primary copies, which are on the unavailable
AMP.
As CheckTable checks the index subtables, it detects duplicate, out-of-order, or incorrectly
distributed index rows. If CheckTable detects one of these index rows, then CheckTable
discards the offending row from the remainder of the checking process, and displays an error
message.
Note: CheckTable treats index rows that are out-of-order as if they do not exist, and continues
looking for the next row in the sequence. This can cause CheckTable to report out-of-order
errors for subsequent rows, even if only a single row was out of order.
For example, given the sequence: 1, 2, 4, 6, 8, 20, 9, 10, 11, 12, 17, 18, 21, 25, CheckTable
would see 9 as being out of ascending sequence. CheckTable would therefore ignore the 9,
then continue looking for a row to follow 20. In this case, although 20 is likely the only row
that is out of order, CheckTable would report errors for rows 9 through 18.
In addition to the error messages returned by lower-level checking, level-three USI checks can
display the following error messages.
Message number    Message
2779
USI row hash code is incorrect.
[AMP nnnnn, Primary subtable nnnn]
[AMP nnnnn, Fallback subtable nnnn]
USI id nnn
USI row id nnnnH nnnnH nnnnH nnnnH nnnnH
Indexed data row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected hash code nnnnH nnnnH
Expected primary AMP nnnnn
[Expected fallback AMP nnnnn, Expected fallback subtable nnnn]
2780
USI row has duplicate key.
AMP nnnnn [Primary subtable nnnn] [Fallback subtable nnnn]
USI id nnn
USI row id nnnnH nnnnH nnnnH nnnnH nnnnH
Indexed data row id nnnnH nnnnH nnnnH nnnnH nnnnH
Duplicated index value row id nnnnH nnnnH nnnnH nnnnH nnnnH
2781
Data row indexed by USI row with incorrect key.
[Data AMP nnnnn, Primary subtable nnnn]
[Data AMP nnnnn, Fallback subtable nnnn]
Data row id nnnnH nnnnH nnnnH nnnnH nnnnH
USI id nnn
[USI AMP nnnnn, Primary subtable nnnn]
[USI AMP nnnnn, Fallback subtable nnnn]
USI row id nnnnH nnnnH nnnnH nnnnH nnnnH
2782
Primary and fallback USI rows do not match.
Primary AMP nnnnn
Fallback AMP nnnnn, Fallback subtable nnnn
USI id nnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
Checking Nonunique Secondary Indexes
If CheckTable performs a NUSI check without a preceding data or secondary index check on
the same table, then CheckTable scans the primary and fallback data subtables to check for
duplicate, out-of-order, or incorrectly distributed data rows.
As CheckTable traverses the index subtables, it detects duplicate and out-of-order index rows.
If CheckTable detects either of these problems, then it discards the offending row from the
checking process, and displays an error message.
Note: CheckTable treats index rows that are out-of-order as if they do not exist, and continues
looking for the next row in the sequence. This can cause CheckTable to report out-of-order
errors for subsequent rows, even if only a single row was out of order.
For example, given the sequence: 1, 2, 4, 6, 8, 20, 9, 10, 11, 12, 17, 18, 21, 25, CheckTable
would see 9 as being out of ascending sequence. CheckTable would therefore ignore the 9,
then continue looking for a row to follow 20. In this case, although 20 is likely the only row
that is out of order, CheckTable would report errors for rows 9 through 18.
In addition to the error messages returned by lower-level checking, level-three NUSI checks
can display the following error messages.
Message number    Message
2784
NUSI row incorrectly has continuation flag clear.
[AMP nnnnn, Primary subtable nnnn]
[AMP nnnnn, Fallback subtable nnnn]
NUSI id nnn
Row id nnnnH nnnnH nnnnH nnnnH nnnnH
2785
Data row indexed by NUSI row with incorrect key.
[AMP nnnnn, Primary subtable nnnn]
NUSI id nnn
Data Row id nnnnH nnnnH nnnnH nnnnH nnnnH
NUSI Row id nnnnH nnnnH nnnnH nnnnH nnnnH
2790
NUSI row hash code is incorrect.
[AMP nnnnn, Primary subtable nnnn]
[AMP nnnnn, Fallback subtable nnnn]
NUSI id nnn
NUSI Row id nnnnH nnnnH nnnnH nnnnH nnnnH
Expected hash code nnnnH nnnnH
[It contains nn data row id(s)
nnnnH nnnnH nnnnH nnnnH nnnnH
(...) nnnnH nnnnH nnnnH nnnnH nnnnH]
Checking Reference Index Subtables
During level-three checking, CheckTable performs two sets of checks on reference index
subtables. If CheckTable detects an error during the first set of checks, one of the following
error messages appears.
Message number    Message
2885
Reference index row hash code is incorrect.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
Reference Index ID nnn
Reference index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
Expected hash code nnnnH nnnnH
Expected primary AMP nnnnn
[Expected fallback AMP nnnnn,
Expected fallback subtable nnnn]
[Expected fallback AMP nnnnn,
Expected fallback subtable unknown]
2886
Reference index row has duplicate Foreign Key values.
[AMP nnnnn, Primary subtable]
[AMP nnnnn, Fallback subtable nnnn]
Reference Index ID nnn
Reference index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
Duplicated foreign key value row ID
nnnnH nnnnH nnnnH nnnnH nnnnH
2887
Primary and fallback reference index rows do not match.
Primary AMP nnnnn
Fallback AMP nnnnn, Fallback subtable nnnn
Reference index ID nnn
Reference index row ID nnnnH nnnnH nnnnH nnnnH nnnnH
To perform the second set of checks, CheckTable places a read lock on the Parent table
specified in the reference index.
If the AMP cannot lock the Parent table, then the following message appears:
Warning: Skipping parent key validation in level 3 checking.
Reference Index ID nnn
Checking for Down Subtables and Regions
The check for down subtables and regions reads table header information where down status
is recorded. If a subtable or a range of rows in a subtable is marked down, one of the following
messages appears:
Message number    Message
9129
AMP nnnnn, [Primary Data|Fallback|Index|Fallback Index] Subtable
nnnn is marked down [with down region].
[Start Row id nnnnH nnnnH nnnnH nnnnH nnnnH]
[End Row id nnnnH nnnnH nnnnH nnnnH nnnnH]
9130
AMP nnnnn, [Primary Data|Fallback|Index|Fallback Index] Subtable
nnnn has down region.
Start Row id nnnnH nnnnH nnnnH nnnnH nnnnH
End Row id nnnnH nnnnH nnnnH nnnnH nnnnH
CheckTable General Checks
CheckTable checks certain aspects of tables regardless of the level of checking that is specified.
These general checks include checking the table headers, checking for obsolete subtables, and
checking the data dictionary.
Check Type
Description
Data Dictionary
This check detects tables that are not in the Teradata Database
Data Dictionary.
If CheckTable checks database DBC, it compares tables in
DBC.TVM and DBC.temp with tables on AMPs.
Table headers
compares table header components on all AMPs with the table
header from the AMP with the lowest virtual processor (vproc)
number of all online AMPs.
Table structure
compares the table structure version stored in DBC.TVM to the
table structure version stored in the header.
Parent/Child count
verifies if the ParentCount and ChildCount in DBC.TVM match
the ParentCount and ChildCount in the table header.
Pending operations
checks the table header to determine if any table has any pending
operations. This check works for global temporary tables as well as
base tables.
CheckTable can display the following error messages during general checking phases of all
check levels.
Message number    Message
2729
Dictionary parent count does not match header parent count.
Dictionary parent count nnnn
Header parent count nnnn
2740
One or more rows found for table not in DBC.TVM.
Table id nnnnH nnnnH
[Rows for table found on all AMPs]
[Rows for table found on following AMPs: nnnnn]
2741
Table header not found.
Table id nnnnH nnnnH
[Header missing on all AMPs]
[Header missing on following AMPs: nnnnn]
2744
Dictionary version does not match header version.
Dictionary version nnnn
Header version nnnnn
2787
Table check bypassed due to pending operation.
2788
Table header not found for table.
Table id nnnnH nnnnH
[Header missing on all AMPs]
[Header missing on following AMPs: nnnnn]
2890
Dictionary child count does not match header child count.
Dictionary child count nnnn
Header child count nnnn
2935
Table header format is invalid.
Table id nnnnH on AMP nnnnn
6719
The dictionary object name or text string
can not be translated.
Table id nnnnH on AMP nnnnn Checking skipped due to bad TVMNAME
7436
One or more rows found for a temporary table not in
DBC.TempTables.
Temporary table ID nnnnH nnnnH
[Rows for temporary table found on all AMPs]
[Rows for temporary table found on following AMPs: nnnnn]
7437
Table header not found for a materialized temporary table.
Temporary table ID nnnnH nnnnH
[Header missing on all AMPs]
[Header missing on following AMPs: nnnnn]
[HostID nnnnH SessNo nnnnH nnnnH]
[Row id nnnnH nnnnH nnnnH nnnnH nnnnH]
[Table header not found on AMP]
7565
One or more rows found for table not in DBC.TVM. Deleted.
Table id nnnnH nnnnH
[Rows for table found on all AMPs. Deleted.]
[Rows for table found on following AMPs and deleted: nnnnn]
CheckTable can display the following error messages during general checking phases of levels
one, two, and three checking.
Message number    Message
2742
Extraneous subtable nnnn on AMP nnnnn.
2743
Mismatch on the following table header components:
component(s)
yielded a mismatch between the header on AMP nnnnn and the
header on the following AMPs:
nnnnn [nnnnn ...]
2745
Warning: USI flagged as invalid on one or more AMPs.
USI id nnn
Remaining checks for index bypassed.
2889
Table header has incorrect parent count or child count.
[Parent Count]
[Child Count]
did not agree with the counts from the reference index
descriptors in the table header of the following AMPs:
nnnnn [nnnnn ...]
5954
Internal error: Inconsistency in the PPI descriptor of the
table header.
7590
If the COMPRESSCHECK option is specified and the compress value from
dbc.tvfields.compressvaluelist does not match the value in Table Header field 5, or if
other problems are encountered during the check, CheckTable issues this warning:
Warning: Skip Compress Value checking.
Note: Tables with online archive active logs are checked by CheckTable; however, their online
archive log subtables are not checked. In these cases, CheckTable displays the following
message:
Table is online archive logging active, its log subtable will not be
checked.
CHECK Command Examples
This section shows some examples using the options of the CHECK command.
CHECK ALL TABLES EXCLUDE …
To exclude one or more databases or tables from the check, use CHECK ALL TABLES
EXCLUDE. If a specified object does not exist in the system, then the object appears in a
message at the end of the summary report.
CheckTable does the following:
• Checks the dictionary and database DBC. (If database DBC is in the EXCLUDE list, it is not checked.)
• Checks other non-excluded databases in database-name order.
The following examples show different forms of CHECK ALL TABLES EXCLUDE ….

To check the Data Dictionary and all databases except DBC, SalesDB1, and PurchaseDB1, type:
CHECK ALL TABLES EXCLUDE DBC, SalesDB1, PurchaseDB1 AT LEVEL ONE;

To check only the Data Dictionary, type one of the following:
CHECK ALL TABLES EXCLUDE % AT LEVEL ONE;
CHECK ALL TABLES EXCLUDE %.% AT LEVEL ONE;

To check all tables except those in db1 and tables t2 and t4 of db2, type:
CHECK ALL TABLES EXCLUDE db1, db2.t2, db2.t4 AT LEVEL THREE;

To check all tables except those in database1 with table names beginning with Sales, type:
CHECK ALL TABLES EXCLUDE database1.Sales% AT LEVEL ONE;
CHECK ALL TABLES AT LEVEL PENDINGOP
To check all tables to determine if they have any pending operations, type the following:
CHECK ALL TABLES AT LEVEL PENDINGOP;
The following output appears:
Check beginning at 11:13:31 02/06/26.
Data dictionary check started at 11:13:31 02/06/26.
...
"PROD"."CUSTOMERS" starting at 11:14:22 02/06/26.
Table id 0000H 624BH, No fallback.
Table check bypassed due to pending MultiLoad.
"PROD"."SHIPMENT" starting at 11:14:24 02/06/26.
Table id 0000H 04E1H, No fallback.
Table check bypassed due to pending Table Rebuild.
...
1435 table(s) checked.
420 fallback table(s) checked.
1015 non-fallback table(s) checked.
1 table(s) bypassed due to pending MultiLoad.
1 table(s) bypassed due to pending Table Rebuild.
0 table(s) failed the check.
0 Dictionary error(s) were found.
Check completed at 11:14:58 02/06/26.
CHECK dbname EXCLUDE …
To check all tables in a specified database, except for those listed after EXCLUDE, use
CHECK dbname EXCLUDE ….
You can use wildcards in the EXCLUDE list. For example:
check all tables exclude mydb% at level two
excludes all databases whose names start with mydb.
The following examples show CHECK dbname EXCLUDE at level-one checking.

To check database dbname1 except for tables t1, t2, and t3, type:
CHECK dbname1 EXCLUDE t1, t2, t3 AT LEVEL ONE;

To check database dbname2 except for tables whose names begin with the word table followed by any single character, as well as tables containing the string weekly, type:
CHECK dbname2 EXCLUDE table?, %weekly% AT LEVEL ONE;

To check the SalesDB database but exclude any tables whose names either begin with week1 and end with any single character, or begin with the word month, type:
CHECK SalesDB EXCLUDE week1?, month% AT LEVEL ONE;
For more information, see “Wildcard Syntax” on page 97 and “Rules for Using Wildcard
Syntax” on page 99. If a specified table includes a dbname, then CheckTable only checks the
table in that referenced database.
NO ERROR LIMIT
To perform level-two checking with no error limits on the MfgDb database, type the
following:
CHECK MfgDb AT LEVEL TWO WITH NO ERROR LIMIT;
Output similar to the following appears.
Check beginning at 13:08:03 00/04/26.
"MFGDB"."INVENTORY" starting at 13:08:06 00/04/26.
Table id 0000H 0C31H, Fallback.
No errors reported.
"MFGDB"."PARTS" starting at 13:08:07 00/04/26.
Table id 0000H 0C33H, No fallback.
2753: Primary data row is on wrong AMP.
AMP 00000
Row id 0000H 79B6H 9E37H 0000H 0001H
Expected primary AMP 00001
1 error(s) reported.
"MFGDB"."RETURNS_TEMP" starting at 13:08:08 00/04/26.
Table id 0000H 0C32H, Fallback.
No errors reported.
3 table(s) checked.
2 fallback table(s) checked.
1 non-fallback table(s) checked.
1 table(s) failed the check.
0 Dictionary error(s) were found.
Check completed at 13:08:09 00/04/26.
In the above output, the first line is the header, which is displayed before the first
table is checked. The header shows the exact time and date that you started the check in the
following format:
Check beginning at HH:MM:SS YY/MM/DD
If one or more AMPs are unavailable, then the following message appears:
The following AMPs are not operational. As a result, certain checks will
not be complete:
nnnnn nnnnn... ...
Unavailable AMPs are indicated by the format nnnnn.
The next set of lines displayed indicates the database name, table name, internal ID, fallback
status, and the date and time:
"dbname"."tablename" starting at HH:MM:SS YY/MM/DD
[Table ID nnnnH nnnnH, Fallback] [Table ID nnnnH nnnnH, No fallback]
Note: When the checking process begins and CheckTable detects inconsistencies, CheckTable
displays specific messages that pertain to those inconsistencies. CheckTable checks tables in
alphabetical order by database name and table name. If you specify to check DBC, then
CheckTable always checks DBC before the named database and table.
SKIPLOCKS
With the SKIPLOCKS option, CheckTable automatically skips any table that is already locked
when CheckTable tries to lock it. For example, suppose you type the following:
CHECK fb3.t1 at level two SKIPLOCKS IN PARALLEL;
If fb3.t1 is locked when you submit the command, then the following output appears:
Check Beginning at 07:57:25 99/12/30
"FB3"."T1" skipped at 07:57:26 99/12/30 due to pending lock.
Table id 0000H 3E9H.
0 table(s) checked.
0 fallback table(s) checked.
0 non-fallback table(s) checked.
1 table(s) skipped due to pending lock.
0 table(s) failed the check.
0 Dictionary error(s) were found.
SERIAL/PARALLEL
Serial mode allows CheckTable to check a single table at a time. Parallel mode allows
CheckTable to check multiple tables simultaneously. The following examples show both modes.
To check database DB0 with tables t1, t10, t100, t1000, and t11 at level TWO in SERIAL mode, type:
CHECK db0.t1% at level two IN SERIAL;
To check the same tables at level TWO in PARALLEL mode, type:
CHECK db0.t1% at level two IN PARALLEL;
Serial Mode Output
When processing completes, the following termination message or summary of the check
results appears:
5 table(s) checked.
3 fallback table(s) checked.
2 non-fallback table(s) checked.
3 tables failed check.
0 Dictionary error(s) were found.
Check completed at 7:57:24 99/12/30.
Note: The entire output is not shown.
Parallel Mode Output
The database name and the table name precede the report for each message to help you
identify the table to which the output message belongs:
Check beginning at 07:57:14 99/12/30.
"DB0"."T1" starting at 07:57:15 99/12/30.
Table id 0000H 3F92H, Fallback.
"DB0"."T10" starting at 07:57:15 99/12/30.
Table id 0000H 0D14H, Fallback.
"DB0"."T100" starting at 07:57:15 99/12/30.
Table id 0000H 0D6EH, Fallback.
"DB0"."T1000" starting at 07:57:15 99/12/30.
Table id 0000H 12E6H, Fallback.
"DB0"."T10" checking at 07:57:15 99/12/30.
Table id 0000H 0D6EH, Fallback.
2741: Table header not found.
Table id 0000H 0D14H
Header missing on following AMPs:
00000 Further checking skipped because of missing
header(s).
"DB0"."T10" ending at 07:57:15 99/12/30.
Table id 0000H 0D6EH, Fallback.
"DB0"."T1" checking at 07:57:15 99/12/30.
Table id 0000H 3F92H, Fallback.
2757: Primary data row is missing.
Fallback AMP 00001, Fallback subtable 2048
Row id 0000H 8C49H CDABH 0000H 0001H
Expected primary AMP 00000
"DB0"."T111" starting at 07:57:16 99/12/30.
Table id 0000H 12E8H, Fallback.
"DB0"."T1" checking at 07:57:17 99/12/30.
Table id 0000H 3F92H, Fallback.
2757: Primary data row is missing.
Fallback AMP 00001, Fallback subtable 2048
Row id 0000H BD81H 0459H 0000H 0001H
Expected primary AMP 00000
"DB0"."T100" checking at 07:57:17 99/12/30.
Table id 0000H 0D6EH, Fallback.
2757: Primary data row is missing.
Fallback AMP 00003, Fallback subtable 2048
Row id 0000H 1897H 9B57H 0000H 0001H
Expected primary AMP 00002
"DB0"."T1000" ending at 07:57:18 99/12/30.
Table id 0000H 12E6H, Fallback.
No errors reported.
"DB0"."T1" checking at 07:57:18 99/12/30.
Table id 0000H 3F92H, Fallback.
2880: Reference index row indexes non existent data row.
AMP 00000, Primary subtable
Reference index id 0
Reference index row id 0000H 8C49H CDABH 0000H 0001H
Reference index row count exceeds data row count by 1
"DB0"."T1" checking at 07:57:18 99/12/30.
Table id 0000H 3F92H, Fallback.
2888: Invalid reference index row.
AMP 00002, Primary subtable
Reference index id 0
Reference index row id 0000H 1897H 9B57H 0000H 0001H
"DB0"."T100" checking at 07:57:18 99/12/30.
Table id 0000H 0D6EH, Fallback.
2757: Primary data row is missing.
Fallback AMP 00003, Fallback subtable 2048
Row id 0000H 3133H 36AEH 0000H 0001H
Expected primary AMP 00002
"DB0"."T100" ending at 07:57:19 99/12/30.
Table id 0000H 0D6EH, Fallback.
2 error(s) reported.
"DB0"."T1" checking at 07:57:19 99/12/30.
Table id 0000H 3F92H, Fallback.
AMP 00002, Primary subtable
Reference index id 0
Reference index row id 0000H 3133H 36AEH 0000H 0001H
TABLES=n Clause
The following example uses the TABLES=n clause option to specify the number of tables to
check simultaneously in parallel mode.
check all tables at level one in parallel tables=3;
Check beginning at 10:55:23 05/01/06.
Data dictionary check started at 10:55:23 05/01/06.
F2
>>>> Status: CheckTable running in PARALLEL mode.
3 CheckTable tasks started.
3 CheckTable tasks ACTIVE.
0 CheckTable tasks IDLE.
Task STATUS
PRIORITY
A priority level controls CheckTable's resource usage and its effect on system performance.
To run CheckTable in PARALLEL mode at the priority level corresponding to the performance
group name $HMKTG, type the following:
CHECK eb3, db3, fb3.t1 at level two IN PARALLEL
PRIORITY='$HMKTG';
Assume the priority in the following example is invalid:
CHECK eb3, db3, fb3.t1 at level two IN PARALLEL
PRIORITY='$HMKG';
CheckTable displays an error message indicating that the priority is invalid and waits for the
next input.
CONCURRENT MODE
Example 1
To run CheckTable in CONCURRENT mode at level three on rfc66706.table_1, type the
following:
check rfc66706.table_1 at level three concurrent mode;
The following appears:
Check beginning at 15:38:01 05/03/17.
"RFC66706"."TABLE_1" skipped at 15:38:16 05/03/17 due to pending lock.
Table id 0000H 04D3H .
"RFC66706"."TABLE_1" skipped again at 15:43:16 05/03/17 due to pending
lock.
Table id 0000H 04D3H .
Remaining time to be retried is forever.
"RFC66706"."TABLE_1" skipped again at 15:48:16 05/03/17 due to pending
lock.
Table id 0000H 04D3H .
Remaining time to be retried is forever.
Example 2
To check all table headers and all tables in DBC, type the following:
check dbc at level one skiplocks concurrent mode retry limit 1;
Checking table headers requires a table read lock on DBC.TVM. For each table in DBC,
CheckTable will obtain a table read lock, check the table, and release the table lock. These
locks will probably block DDL operations. However, the duration of the DBC check should be
short.
Example 3
To check all tables in all databases excluding DBC, type the following:
check all tables at level one skiplocks concurrent mode retry limit 1;
A table access lock on DBC.DBASE is obtained for a short time to get a list of databases. This
access lock should cause minimal contention.
For each database, a table access lock on DBC.TVM is obtained for a short time to get a list of
tables in the current database. This access lock should also cause minimal contention.
For each table in the current database, CheckTable will obtain a table read lock, check the
table, and release the table lock. Any operation that requires either a write lock or exclusive
lock on the table being checked will be blocked. The locking duration may be longer for a large
table. An EXCLUDE clause can be used to skip large tables that are actively modified to avoid
blocking.
DOWN ONLY
Example 1
Level-one checks report down subtables and down regions, but do not define the region that is
down. Level-two and level-three checks specify the starting and ending rows that define the
down region.
check fiu.onedr at level one down only;
Check beginning at 20:14:37 08/07/01.
"FIU"."ONEDR" checking at 20:14:37 08/07/01.
Table id 0000H 06A1H
>
9131:Check was skipped due to detection of error <9130>.
AMP 00000, Primary Data Subtable 1024 has down region.
1 table(s) checked.
1 fallback table(s) checked.
0 non-fallback table(s) checked.
check fiu.onedr at level three down only;
Check beginning at 20:14:54 08/07/01.
"FIU"."ONEDR" checking at 20:14:54 08/07/01.
Table id 0000H 06A1H
>
9131:Check was skipped due to detection of error <9130>.
AMP 00000, Primary Data Subtable 1024 has down region.
Start Row Id 0000H 0837H D253H 0000H 0001H
End   Row Id 0000H 08B9H BE1EH 0000H 0000H
1 table(s) checked.
1 fallback table(s) checked.
0 non-fallback table(s) checked.
0 table(s) failed the check.
0 Dictionary error(s) were found.
1 table(s) skipped due to down regions and/or down subtables in them.
Check completed at 20:14:43 08/07/01.
Example 2
If the number of down regions in a subtable exceeds the threshold defined by the DBS Control
setting MaxDownRegions, the subtable is marked down on all AMPs, and CheckTable reports
the subtable down.
check fiu.onedt at level two down only;
Check beginning at 20:19:12 08/07/01.
"FIU"."ONEDT" checking at 20:19:12 08/07/01.
Table id 0000H 06A2H
>
9131:Check was skipped due to detection of error <9129>.
AMP 00001, Primary Data Subtable 1024 is marked down.
AMP 00000, Primary Data Subtable 1024 is marked down.
1 table(s) checked.
1 fallback table(s) checked.
0 non-fallback table(s) checked.
0 table(s) failed the check.
0 Dictionary error(s) were found.
1 table(s) skipped due to down regions and/or down subtables in them.
CheckTable and Deadlocks
Because the system does not detect deadlocks that exist between utility functions, CheckTable
has a built-in timeout feature to prevent long-standing deadlocks. However, this feature makes
it difficult for CheckTable to apply a lock on a table that has frequent lock requests.
For non-concurrent mode, the first time a lock is requested, the time-out interval is one
minute. Each subsequent retry adds one minute to the interval until the interval is five
minutes. Then, all subsequent retries employ a five-minute time-out. The number of times
CheckTable can attempt a lock request is not limited.
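This escalation can be summarized as a simple schedule. The following Python sketch is for illustration only (it is not part of CheckTable) and shows the time-out interval, in minutes, that applies to each non-concurrent lock attempt:

def lock_timeout_minutes(attempt):
    # Attempt 1 waits 1 minute, attempt 2 waits 2 minutes, and so on,
    # capped at 5 minutes for the fifth and all later attempts.
    return min(attempt, 5)

For example, lock_timeout_minutes(3) returns 3, and lock_timeout_minutes(12) returns 5.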
If you specify the SKIPLOCKS option, then CheckTable requests a lock on a table only once. If
CheckTable does not obtain the lock the first time, then CheckTable does not request any
further locks and does not check the table.
To determine the current status of CheckTable under MP-RAS, press F2 or type status at the
command prompt. On other operating system platforms, type status at the CheckTable
command prompt.
To abort the current table check under MP-RAS, press F3 while CheckTable processes a
command. On other operating system platforms, enter “abort table” to abort the current table
check or “abort” to abort the current CHECK command.
For more information, see “Using Function Keys and Commands” on page 45.
For non-concurrent mode, when the CheckTable Table Lock Retry entry in the DBS Control
General fields is set to a positive number, CheckTable will retry the lock request until this
specified time in minutes is exhausted. CheckTable will display that the table is skipped due to
pending lock and will continue to check the next table. For more information, see
“CheckTable Table Lock Retry Limit” in Chapter 12: “DBS Control (dbscontrol).”
For concurrent mode, the first time a table lock is requested, the timeout interval is 15
seconds. CheckTable will skip the locked table and continue to check the remaining tables. All
subsequent retries employ a five-minute timeout interval per table. All skipped tables will be
retried forever if RETRY LIMIT is not specified. If RETRY LIMIT is positive, all skipped tables
will be retried until CheckTable reaches the RETRY LIMIT. For more information on
concurrent mode, see “Running on a Non-Quiescent System” on page 44.
CheckTable Messages
CheckTable issues the following types of messages:
• Syntax error
• Check completion
• Error
Syntax Error Messages
When a syntax error occurs, CheckTable does the following:
• Displays the part of the input line where the error occurs
• Prints a dollar sign ($) in the command input line beneath the character that caused the error
• Displays an error message
For example, assume the following command:
CHECK db[2-] AT LEVEL ONE;
The following error message appears:
*** Syntax error ***
CHECK db[2-] AT LEVEL ONE;
$
Invalid range specified in dbname or dbname.tablename.
Check Completion Messages
After CheckTable completes a table check, you might see one of the following messages:
• nnnnn error(s) reported.
• No error(s) reported.
• Table check aborted due to excessive error(s).
• Table check aborted by fatal error.
• Table check bypassed due to pending Fast Load.
• Further checking skipped because of missing header(s).
• Further checking skipped because of header inconsistencies.
• Further checking skipped because table is unhashed.
• Further checking skipped because of a value-ordered primary index.
Note: A value-ordered primary index can occur with only a join index or a hash index. If you are concerned about the join index or hash index, then you can drop it and create it again.
• Further checking skipped because of a Join or Hash Index with logical rows.
Note: The above message displays if a join index or hash index is compressed, meaning that it contains multiple logical rows within a physical row.
After CheckTable completes, you see a summary report similar to the following:
nnnnn table(s) checked.
nnnnn fallback table(s) checked.
nnnnn no fallback table(s) checked.
nnnnn table(s) bypassed due to pending Restore.
nnnnn table(s) bypassed due to pending Fast Load.
nnnnn table(s) bypassed due to Recovery Abort.
nnnnn table(s) bypassed due to being unhashed.
nnnnn table(s) skipped due to pending lock.
nnnnn table(s) had checking aborted before completion.
nnnnn table(s) failed the check.
Check completed at HH:MM:SS YY/MM/DD.
Error Messages
CheckTable aborts when it encounters problems, depending on the severity and context of the
problem. In rare cases, such as when CheckTable is unable to clean up resources allocated by
the Teradata Database, it crashes the Teradata Database.
When CheckTable aborts, it does the following:
• Displays an error message
• Saves a snapshot dump for further analysis
• Retains backtrace information
The following table shows some possible error messages that might appear.
Error Message Type: Abort due to initialization
7492: CheckTable utility aborted at <date>/<time> in initialization phase due to error <error code>.

Error Message Type: Command abort
7493: CHECK command aborted at <date>/<time> due to error <error code>.

Error Message Type: Table abort
7494: <DbName>.<Table Name> was aborted at <date>/<time> due to error <error code>.
7495: <DbName>.<Table Name> was skipped at <date>/<time> due to error <error code>. Run SCANDISK.

Error Message Type: Unexpected error
7496: CheckTable utility aborted at <date>/<time> due to unexpected error <error code>.

Error Message Type: Down subtable or region
9131: Check was skipped due to detection of error <error code>.
The backtrace information is located as follows:
• Linux: event log file /var/log/messages.
• MP-RAS: stream log file /var/adm/streams/*date.
• Windows: Event Viewer.
Note: For more information on specific CheckTable error messages by number, see Messages.
Auxiliary Information
In auxiliary information included in messages, the RowID output format includes the following:
• The internal partition number consists of the first two bytes for both partitioned and nonpartitioned tables. For a nonpartitioned table, the internal partition number is zero. (This number is not actually stored in a row in this case.)
• The hash value of the primary index consists of the next four bytes.
• The uniqueness value consists of the last four bytes.
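For illustration only (this helper is not part of any Teradata utility), the following Python sketch splits a row ID as CheckTable displays it, for example 0000H 79B6H 9E37H 0000H 0001H, into the three fields described above. It assumes the five H-suffixed groups are, in order, the two-byte internal partition number, the two halves of the four-byte hash value, and the two halves of the four-byte uniqueness value:

def parse_rowid(display):
    # display example: "0000H 79B6H 9E37H 0000H 0001H"
    words = [int(w.rstrip("Hh"), 16) for w in display.split()]
    partition = words[0]                       # first two bytes
    hash_value = (words[1] << 16) | words[2]   # next four bytes
    uniqueness = (words[3] << 16) | words[4]   # last four bytes
    return partition, hash_value, uniqueness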
Error Handling
While CheckTable is processing, a system failure could occur for a variety of reasons, such as
the following:
• Programming errors
• Hardware problems
After a system failure, one of the following might occur:
• The system resets.
• The partition in which CheckTable is running resets.
• CheckTable hangs.
If CheckTable crashes the system, it displays the following message, which is logged in the
streams log:
10196: CheckTable utility is crashing the DBS because it has faced
problems while aborting due to error code <ErrorNumber>
If CheckTable hangs, verify the status using the status command. If you do not detect any
problems, then try to stop and restart CheckTable.
Using Valid Characters in Names
The names of databases, tables, other objects, or performance groups specified in the CHECK
command can consist of the following characters:
• Lowercase alphabet (a … z)
• Uppercase alphabet (A … Z)
• Digits (0 … 9)
• The following special characters.
Parentheses, braces, and brackets:
( ) (parentheses)
{ } (curly braces)
[ ] (square brackets)
< > (angle brackets)

Punctuation marks:
` (single quote)
! (exclamation point)
; (semicolon)
: (colon)
' (apostrophe)
? (question mark)
. (period)
, (comma)

Other:
| (vertical line)
~ (tilde)
@ (at sign)
$ (dollar sign)
= (equals sign)
% (percent sign)
+ (plus)
# (number sign)
^ (circumflex accent or caret)
& (ampersand)
* (asterisk)
- (hyphen-minus)
_ (low line or underscore)
/ (forward slash)
\ (backward slash)
You must specify any name containing one or more special characters or blank spaces within
single or double quotes, except for the following:
• ? (question mark)
• % (percent sign)
• $ (dollar sign)
• _ (low line or underscore)
• [ ] (square brackets)
• # (number sign)
Note: A name cannot begin with a digit (0 … 9).
For more information on creating names, see “Basic SQL Syntax and Lexicon” in SQL
Fundamentals.
Examples
The following examples show valid database or table names:
• Table1
• MYTABLE10
• $$MyAccount
• #Your_Account_$100
• %mydatabase?
• %
• ???
The following examples show irregular but acceptable names:
• '123'
• "First&Second table"
• 'my db1'
The following examples show unacceptable and non-valid names:
• 123
• First&Second table
• my db1
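The quoting rule above can be approximated in a few lines of code. The following Python sketch is illustrative only; it applies only the rules stated in this section (it does not cover the full rules in SQL Fundamentals, such as those for Kanji characters):

# Special characters that may appear in a name without quoting it
UNQUOTED_SPECIALS = set("?%$_[]#")

def needs_quotes(name):
    # A name that begins with a digit is not acceptable unquoted.
    if name[:1].isdigit():
        return True
    # Blank spaces or special characters outside the exception list
    # require the name to be enclosed in single or double quotes.
    return any(not (ch.isalnum() or ch in UNQUOTED_SPECIALS) for ch in name)

# Checking the examples above: needs_quotes("#Your_Account_$100") is False,
# needs_quotes("First&Second table") is True, and needs_quotes("123") is True.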
Using Wildcard Characters in Names
Use wildcard characters % and ? to specify a pattern for database names or table names.
CheckTable interprets the wildcard characters as follows:
• % (percent sign) matches any string of characters of any length, including the null string.
• ? (question mark) matches any single character.
You can use wildcard characters in any combination. However, you cannot use wildcard
characters in hexadecimal form.
The following examples show the use of wildcard characters in names:
• % matches any database name.
• %.% matches any table name.
• %database% matches any database name containing the string database.
• SalesDB% matches any database name beginning with SalesDB.
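To preview which names a % or ? pattern covers before running a long check, the pattern can be translated into a regular expression. The following Python sketch is illustrative only and is not part of CheckTable; the sample name list is hypothetical:

import re

def wildcard_to_regex(pattern):
    # % matches any string (including the null string); ? matches one character.
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "?":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    # Teradata object names are not case sensitive, so compare case-insensitively.
    return re.compile("^" + "".join(parts) + "$", re.IGNORECASE)

names = ["SalesDB1", "SalesDB_Archive", "PurchaseDB1"]   # hypothetical names
rx = wildcard_to_regex("SalesDB%")
print([n for n in names if rx.match(n)])                 # ['SalesDB1', 'SalesDB_Archive']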
Wildcard Syntax
CheckTable supports the use of wildcard syntax to represent a list of possible characters at a
particular position in the names of databases or tables. Use the wildcard syntax to specify lists
of tables and databases you want CheckTable to check or not check. The wildcard syntax
begins with a left square bracket ([) and ends with a right square bracket (]).
The following figure shows the syntax for a database name or a table name.
[Syntax diagram showing the elements Starting char, Start range char, Alphabet range, Remaining chars, Digit, Hyphen range, and names enclosed in single or double quotes; these elements are defined below.]
where:
Starting char specifies one of the following:
• alphabet (lowercase or uppercase)
• ? (question mark)
• % (percent sign)

Start range char specifies one of the following:
• alphabet (lowercase or uppercase)
• $ (dollar sign)
• _ (low line or underscore)
• # (number sign)

Alphabet range specifies two alphabets separated by a hyphen. The range can be in ascending or descending order. Both alphabets should be the same type, either uppercase or lowercase.

Remaining chars specifies one of the following:
• alphabet (uppercase or lowercase)
• digit between 0 and 9
• ? (question mark)
• % (percent sign)
• $ (dollar sign)
• _ (low line or underscore)
• # (number sign)

Digit specifies any digit between 0 and 9.

Hyphen range specifies two alphabets or two digits separated by a hyphen. The range can be in ascending or descending order. Both characters should be the same type: uppercase or lowercase alphabet, or digit.

Char specifies one of the following:
• alphabet (uppercase or lowercase)
• digit between 0 and 9
• special character
• Kanji character
Other special characters can appear in table or database names but not in wildcard syntax. If
any syntax error occurs in the wildcard syntax, then CheckTable aborts, and an error message
appears.
For rules regarding use of Kanji and other Japanese characters in names, see “Basic SQL
Syntax and Lexicon” in SQL Fundamentals. For information on syntax error messages, see
“Syntax Error Messages” on page 92.
Example 1
CHECK DB[15] AT LEVEL ONE;
The wildcard syntax defines two possible values (1 and 5) for the third character in the
database name. CheckTable checks all the tables in databases DB1 and DB5 at level one.
Example 2
CHECK D[BD]1.T[123] AT LEVEL ONE;
You can use the wildcard syntax in any place in the database name or table name. CheckTable
checks tables T1, T2, T3 in databases DB1 and DD1.
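The bracketed syntax can be read as a set of candidate characters for a single position in the name. The following Python sketch is illustrative only (it is not part of CheckTable, and it does not validate mixed ranges such as [a-5], which CheckTable rejects); it expands a bracketed list, including hyphen ranges and repeated characters, into the set of characters it represents:

def expand_bracket(spec):
    # spec is the text between [ and ], for example "15", "bd", "1-35", or "abb"
    chars = set()
    i = 0
    while i < len(spec):
        if i + 2 < len(spec) and spec[i + 1] == "-":
            # A hyphen range contributes every character between its endpoints;
            # ascending and descending ranges are equivalent.
            lo, hi = sorted((spec[i], spec[i + 2]))
            chars.update(chr(c) for c in range(ord(lo), ord(hi) + 1))
            i += 3
        else:
            chars.add(spec[i])          # repeated characters collapse into the set
            i += 1
    return chars

# expand_bracket("15")   returns {'1', '5'}            (CHECK DB[15] covers DB1 and DB5)
# expand_bracket("1-35") returns {'1', '2', '3', '5'}  (range 1-3 plus the value 5)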
Rules for Using Wildcard Syntax
The following rules apply to the use of wildcard syntax in the CHECK command. Assume that
the databases and tables in the examples exist in the system, unless stated otherwise.
Rule: You can specify the following valid ASCII characters in the wildcard syntax:
• A…Z
• a…z
• 0…9
• _ (low line or underscore)
• $ (dollar sign)
• # (number sign)
You cannot use digits 0 … 9 as wildcards to describe the first character in the name.
Example 1: The following is a valid command:
CHECK db1.t[#af_2r]1 AT LEVEL ONE;
Example 2: The following is not a valid command:
CHECK db[#,kA-d159]xy AT LEVEL ONE;
The above command results in a syntax error because the wildcards specified for the database name include the non-valid comma (,). For information on syntax error messages, see "Syntax Error Messages" on page 92.

Rule: You must specify the wildcard characters within square brackets. The wildcard syntax begins with a left square bracket ([) and ends with a right square bracket (]).
Example 1: Databases db1, db2, db3, db4, and db5 exist, and you want only the tables in db1 and db5 checked. Type the following:
CHECK db[15] AT LEVEL ONE;
CheckTable checks all the tables in databases db1 and db5 at level one. The wildcard syntax defines two possible values (1 and 5) for the third character in the database name.
Example 2: Databases db1, dc1, dd1, and so on exist, and each database contains tables t1, t2, t3, and so on. Using the wildcard syntax in any place in the name, type the following:
CHECK d[bd]1.t[123] AT LEVEL ONE;
CheckTable checks tables t1, t2, t3 in databases db1 and dd1.
Example 3: To specify wildcard syntax in multiple places in a name, type the following:
CHECK db[12][pq] AT LEVEL TWO;
CheckTable checks databases db1p, db2p, db1q, and db2q at level two. The wildcard syntax defines the possible values for the third and fourth characters of the database name.
Rule: You cannot specify the special characters % and ? within wildcard syntax. However, you can use the special characters % and ? with any valid wildcard syntax.
Example 1: Databases dba1, dba2, db11 and db12 exist, and you want to check databases dba1, dba2, db11, and db12. Type the following:
CHECK db[a1]? at level one;
This command is valid, because the '?' is outside the wildcard syntax.
Example 2: The following is not a valid command, because the '?' is not allowed in wildcard syntax.
CHECK db[a1?] at level one;

Rule: You can use wildcard syntax to specify the names or lists of the databases and tables to check and the list of databases or tables not to check.
Example 1: Databases db1, db2, db3 and db4 exist, and you type the following:
CHECK db% exclude db[34] at level one;
Databases db1 and db2 are checked.
Example 2: Databases db1, db2, db3 and db4 exist, and all these databases have tables t1, t2, t3 and t4. You type the following:
CHECK db[23] exclude t[14] at level one;
CheckTable checks tables t2 and t3 in databases db2 and db3.

Rule: You can use wildcard syntax to specify a range of characters by separating two characters with a hyphen (-). For example, C and J separated by the hyphen (C-J) represent any characters lexically between C and J inclusive.
• The two characters should be of the same type: uppercase, lowercase, or digit.
• The two characters can be in ascending or descending lexical order. For example, [A-D] and [D-A] both specify the same range of characters: A through D inclusive.
Example 1:
CHECK db1.t[1-35] AT LEVEL ONE;
CheckTable checks the tables t1, t2, t3, and t5 in database db1 at level one. 1-3 is considered a range, and 5 is an additional value.
Example 2:
CHECK db[a-5] AT LEVEL ONE;
The check does not take place. CheckTable reports a syntax error because the range specified in dbname is invalid. For information on syntax error messages, see "Syntax Error Messages" on page 92.
Rule: Wildcard syntax can include characters that might not have any matching object names in the system. However, at least one character in the wildcard syntax must have at least one matching object. Consider the following:
• IF the syntax contains some characters that do not have a match at the position specified in any object names in the system, THEN CheckTable checks (or excludes from checking) all the objects whose names match the specified wildcards. CheckTable also ignores the characters that do not have any matching objects. This is true of any number of wildcards.
• IF the system has no object name that matches any of the specified wildcard characters, THEN CheckTable issues a warning message.
Example 1: Assume a system contains only databases db1 and db5 but not db2, db3, and so on. Type the following:
CHECK db[125] AT LEVEL ONE;
CheckTable checks all the tables in databases db1 and db5 at level one. Since database db2 does not exist, CheckTable ignores character 2 in the wildcard syntax.
Example 2: Assume a system contains the database db1 but not db2, db3, or db4. Type the following:
CHECK db[1-4] AT LEVEL ONE;
The command is valid because one of the characters specified in the wildcard syntax has a matching character at the position where the wildcard syntax is specified in one object name. CheckTable checks all the tables in the database db1 and ignores the remaining wildcard characters.
Example 3: Assume a system that does not have databases db1, db2, and db3. Type the following:
CHECK db[1-3] AT LEVEL ONE;
The following warning appears:
>>>> WARNING: Database 'DB[1-3]' does not exist.
Rule: Multiple occurrences of the same character in the wildcard syntax are valid. If you repeat the same character in the syntax for the same position, then CheckTable recognizes the first occurrence and ignores the repeated instances.
Example 1: In the following command, character b is repeated in the same position.
CHECK d[abb]1 AT LEVEL ONE;
CheckTable checks all tables in the databases da1 and db1 at level one and ignores the second instance of character b. No warning appears.
Example 2: In the following command, character 3 is specified as part of the hyphen range 1-5 and is repeated separately in the same position.
CHECK db[1-53] AT LEVEL ONE;
CheckTable checks all tables in the databases db1, db2, db3, db4, and db5 at level one. CheckTable ignores the repeated character 3.
Rule: The wildcard syntax does not apply when enclosed between single quotes or double quotes.
Example: In the following command, character p is a wildcard enclosed in double quotes.
CHECK db1."[p]" AT LEVEL ONE;
CheckTable ignores the square brackets and checks only table "[p]", if it exists in database db1. If table "[p]" does not exist in db1, then a warning appears.
CHAPTER 6
CNS Run (cnsrun)
The CNS Run utility, cnsrun, provides the ability to start and run a database utility from a
script.
Audience
Users of cnsrun include the following:
• Teradata Database administrators
• Teradata Database programmers
Syntax
CNS Run runs from the command line.
To start CNS Run, use the following syntax:
cnsrun -utility uname [-file file_name] [-commands clist] [-multi] [-help]
       [-debug n] [-ok] [-force] [-machine host_name] [-tool cns_tool_path]
where:
-utility uname specifies the utility program to start.
-file file_name specifies a file that contains input commands to send to the utility program.
-commands clist specifies the commands to send to the utility. Each command must be enclosed within braces { }, and there must be one or more spaces between commands.
-multi specifies that more than one instance of the specified utility program be allowed to run simultaneously. Without the -multi option, cnsrun fails to start a utility if an instance of the utility is already running.
-help displays information on how to use this program.
-debug n specifies the output level required. A value of 0 (the default) shows no output if the utility runs successfully. A value of 1 shows the utility program's output and its state changes. Larger values of n produce additional debugging output. All output, including error messages, appears in STDOUT.
-ok specifies that the utility is to be started even if logons are disabled. In the absence of this option or the -force option, the utility is not started if logons are not enabled.
-force specifies that the utility is to be started even if the database software is not running.
-machine host_name specifies the host name associated with the control node for the TPA instance on which to run the utility program. The default is localhost, which only works on SMP systems or when the program is actually running on the control node (which can move between TPA restarts) of an MPP system. To determine the name of the control node, type cnscim at the command prompt.
-tool cns_tool_path specifies the full path to the cnstool program.
Usage Notes
This program runs the utility uname with the input provided by either the -file or -commands
option. cnsrun does this by connecting to CNS on the node specified by the -machine option
and starting the utility from the Supervisor subwindow of Database Window (DBW). When the
utility issues a read, the specified commands are sent to it until all of the commands are
exhausted or until the program exits.
To use this command, the user running cnsrun must have access permission to run a DBW
session on the machine. To set up a user to have CNS access, use the GET PERMISSIONS,
GRANT, and REVOKE commands in DBW. For more information about the DBW Supervisor
Commands, see Chapter 11: “Database Window (xdbw).”
An error occurs if all of the interactive partitions are in use.
By default, this command produces no output unless an error occurs.
Example
To run the utility updatespace with the input:
update space for all databases;
quit
connecting to CNS on an SMP system, use the following command:
cnsrun -utility updatespace -commands "{update space for all databases;} {quit}"
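If you prefer to keep the input in a file, the -file option can be used instead of -commands. The following sketch assumes you have created a file named /tmp/updspace.cmds (a hypothetical path) containing the two input lines shown above:

cnsrun -utility updatespace -file /tmp/updspace.cmds

On an MPP system, add the -machine option followed by the host name of the control node.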
CHAPTER 7
Configuration Utility (config)
The Configuration and Reconfiguration utilities, config and reconfig, are used to define the
AMPs and PEs that operate together as a Teradata Database. (The Reconfiguration utility is
described in Utilities Volume 2.)
Use Configuration to define the system: the interrelationships among the AMPs, AMP clusters
(logical groupings of AMPs), PEs, and hosts that are connected as a Teradata Database system.
(The Reconfiguration utility uses information from Configuration to configure the Teradata
Database components into an operational system.) You must access a system console to use
either utility.
Use the Configuration commands described in this chapter to add, delete, list, modify, move,
or show AMPs and PEs. Other steps in configuring a system involve the Parallel Database
Extensions (PDE) portions of Teradata Database, and are configured with the PUT utility. For
more information on these configuration steps, see the Parallel Upgrade Tool (PUT) User
Guide, available from http://www.info.teradata.com/.
Audience
Users of Configuration include the following:
• Teradata system engineers
• Field engineers
• System developers
User Interfaces
config runs on MP-RAS, Windows, and Linux, in each case through the Database Window interface.
For general information on starting the utilities from different platforms and interfaces, see
Appendix B: “Starting the Utilities.”
About the Configuration Utility
Vprocs
Virtual processors (vprocs) are instances of AMP or PE software within the node. Vprocs
allow the node to execute multiple instances of AMP or PE software, each instance behaving as
though it is executing in a dedicated processor.
Physical Processors
The Configuration and Reconfiguration utilities are not responsible for the maintenance of
the physical environment in which the Teradata Database configuration is defined.
The AMPs and PEs exist within a previously defined physical configuration. Use the Parallel
Upgrade Tool (PUT) to configure parts of the physical configuration, such as creating Logical
Units (LUNs) on disk arrays.
For more information, see Parallel Upgrade Tool (PUT) User Guide.
Configuration Maps
Although the Configuration and Reconfiguration utilities are functionally independent,
normally they are used together to change the contents of working areas, called configuration
maps, in the node.
A configuration map does the following:
• Stores the identification and status of each vproc in the Teradata Database
• Identifies the AMPs that constitute each AMP cluster
• Identifies each PE and its associated host
The node contains these configuration maps:
• The current configuration map, which describes the current arrangement and status of vprocs in the system
• The new configuration map, which includes changes and additions to the configuration
If you want to list or show information about or to delete a vproc, first you must have added it
to the applicable (new or current) configuration map.
If you want to add a vproc to the new configuration map, you must have defined the AMP or
PE through the use of the PUT utility.
These component types constitute a Teradata Database configuration:
• Hosts (or clients)
• PEs
• AMPs
The following sections describe these components.
For more information, see Parallel Upgrade Tool (PUT) User Guide.
Hosts
You can attach more than one host simultaneously to the Teradata Database. The hosts are
identified in the configuration by the following:
• A host number
• A logical host ID
• An associated host type designator
You assign the host a number between 1 and 1023.
The host type designator specifies the host data format.
The system generates the 16-bit logical host ID based on the assigned host number and the
host type. The value is equal to the host number + the host type value, as shown below:
• IBM 370 (Mainframe): host type designator IBM, host type value 0, logical host ID range 1-1023
• Communications Processor: host type designator COP, host type value 1024, logical host ID range 1025-2047
• Motorola 68000, RISC (Solaris, HP): host type designator ATT3B, host type value 2048, logical host ID range 2049-3071
• Honeywell 6000 (Mainframe): host type designator BULLHN, host type value 3072, logical host ID range 3073-4095
• Unisys OS1100 (Mainframe): host type designator OS1100, host type value 4096, logical host ID range 4097-5119
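For example, a communications processor (COP) connection assigned host number 4 has logical host ID 1024 + 4 = 1028, while an IBM mainframe connection assigned the same host number has logical host ID 0 + 4 = 4.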
PEs
You can associate PEs defined in a Teradata Database configuration with one or more host
computers or local area networks that also are defined in the configuration.
AMPs
Typically, AMPs in a configuration are related to other AMPs through cluster assignments.
The placement of AMPs and PEs in a configuration is critical to the overall performance of the
Teradata Database.
Caution: Consult your Teradata systems engineer before running the Configuration or the
Reconfiguration utility to resolve any questions or issues.
Configuration Activities
When the Teradata Database is initialized, System Initializer (sysinit) procedures build a
default configuration map that describes the one target AMP involved in sysinit. This
configuration is stored in both the current and new configuration maps.
When the Teradata Database is operational, Configuration describes the complete system in
the new configuration map area.
As the system grows and changes, use Configuration to revise the new configuration map to
reflect these types of changes to the system:
• Addition and deletion of vprocs and hosts
• Changes to cluster assignments
Warning: When changing cluster assignments without adding AMPs, make sure that ample disk space is available on all AMPs. If the system determines that ample space is not available, the system stops. To recover, perform a SYSINIT on the Teradata Database, which results in loss of all data. Teradata recommends that currentperm space be less than 53% of the total maxperm space before starting a change of clusters without adding AMPs.
• Changes to host assignments
Reconfiguration Activities
After Configuration builds a new configuration map, the Reconfiguration utility redefines the
system configuration according to the new map. Reconfiguration copies the new
configuration map to the current configuration map.
For more information on the Reconfiguration Utility, see Utilities Volume 2.
Message Types
The application window running Configuration can display the following types of messages in
the output subwindow:
• Information: Indicates the status of a command or the result of an operation. OK indicates that a command has been accepted or an operation has completed successfully.
• Prompt: Prompts for a response to a request or for confirmation of an action.
• Error: Composed of a message code and text. To view all error messages issued by Configuration, see Messages.
Configuration Utility Commands
Command Types
This section summarizes the functions performed by Configuration commands.
Session Control Commands
Session control functions begin and end Configuration sessions. The BEGIN CONFIG and
END CONFIG commands perform these functions. You cannot type many Configuration
commands unless you have entered the BEGIN CONFIG command first.
System Attribute Commands
The LIST commands can display the attributes of the following:
• AMPs
• AMP clusters
• PEs
• Hosts
The SHOW commands can display the attributes of the following:
• An individual vproc
• A cluster
• A host
These commands do not require that you previously issued the BEGIN CONFIG command.
The individual LIST and SHOW commands are described later in this chapter.
AMP Commands
Use the AMP commands for the following:
• To add an AMP to the new configuration map
• To delete an AMP from the new configuration map
• To move an AMP from one cluster to another
• To move all data rows from one AMP or group of AMPs to another AMP or group of AMPs
The functions are accomplished with these commands:
• ADD AMP
• DEL AMP
• MOD AMP
• MOVE AMP
Type ADD AMP, DEL AMP, MOD AMP, or MOVE AMP after the BEGIN CONFIG
command.
When an AMP operation conflicts with another AMP operation, Configuration reports the
conflict when you type the END CONFIG command.
PE Commands
Use the PE commands to perform the following:
• Add a PE to the new configuration map
• Delete a PE from the new configuration map
• Move a PE from one host to another
• Move a PE or group of PEs to another PE or group of PEs
The functions are accomplished with these commands:
• ADD PE
• DEL PE
• MOD PE
• MOVE PE
Type ADD PE, DEL PE, MOD PE, or MOVE PE after the BEGIN CONFIG command.
Host Commands
Use host commands to add a host to the new configuration map, to delete a host from the new
configuration map, or to change the host type in the new configuration map.
The functions are accomplished through these commands:
• ADD HOST
• DEL HOST
• MOD HOST
Type ADD HOST, DEL HOST, or MOD HOST after the BEGIN CONFIG command.
Command List
The following table lists Configuration commands and functions based on activity. The
following sections discuss the commands in detail.
AMP operations
• ADD AMP: Adds an AMP to the configuration map.
• DEFAULT CLUSTER: Allows Configuration to assign AMPs to clusters when a new configuration is defined.
• DEL AMP: Deletes an AMP from the configuration map.
• MOD AMP: Modifies the attributes of an AMP.
• MOVE AMP: Moves an AMP or group of AMPs to another AMP or group of AMPs.

Displaying system attributes
• LIST: Lists the vprocs and hosts described in the specified configuration.
• LIST AMP: Lists the attributes of every AMP in the specified configuration.
• LIST CLUSTER: Lists the attributes of every AMP in the specified configuration, ordered by cluster number.
• LIST HOST: Lists all hosts in the specified configuration.
• LIST PE: Lists the attributes of every PE in the specified configuration.
• SHOW CLUSTER: Shows all AMPs in a specified cluster.
• SHOW VPROC: Shows the attributes of a specified vproc.
• SHOW HOST: Shows information about a specified host.

Host operations
• ADD HOST: Adds a host to the configuration map.
• DEL HOST: Deletes a host from the configuration map.
• MOD HOST: Modifies a host in the configuration map.

PE operations
• ADD PE: Adds a PE to the configuration map.
• DEL PE: Deletes a PE from the configuration map.
• MOD PE: Modifies the attributes of a PE in the configuration map.
• MOVE PE: Moves a PE or group of PEs to another PE or group of PEs.

Session control
• BEGIN CONFIG: Begins a configuration session.
• END CONFIG: Ends a configuration session and stores the new configuration map.
• STOP: Stops Configuration.
Note: You can access Help information on Configuration by pressing the F7 key.
ADD AMP
Purpose
The ADD AMP command adds an AMP or a range of AMPs to the new configuration map.
Syntax
ADD AMP mmmlist [, CN = nnnn]
AA mmmlist [, CN = nnnn]
where:
mmmlist specifies the range of vproc numbers that is being added to the configuration. The AMP vproc number ranges from 0 to 16199.
Note: The AMP vproc numbers start at the low end of the range and increase, and the PE vproc numbers start at the high end of the range and decrease.
CN = nnnn specifies the cluster number with which the AMP is to be associated. The cluster number ranges from 0 to 8099. If you do not specify a cluster number, the system assigns the number automatically after you type the END CONFIG command.
Usage Notes
You can add an AMP or a range of AMPs to the Teradata Database in a well-defined order.
The AMP being added must have a vproc number equal to the number of AMPs currently
entered in the new configuration map. That is, given a system with a new configuration map
that describes n AMPs (whose vproc numbers are 0 through n-1), the only AMP that you can
add to the system now is the AMP with vproc number n.
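For example, if the new configuration map currently describes eight AMPs (vproc numbers 0 through 7), the only AMP you can add next is vproc number 8; adding AMPs 4 through 7 first and then AMP 8, as in the examples below, follows this order.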
Configuration verifies that the vproc number specified for the new AMP does not exist in the
new configuration map. When the ADD AMP command is accepted, the specified AMP is
added to the vproc list, and the system displays the following message:
The AMP is added.
If you type the ADD AMP command before the BEGIN CONFIG command, the command is
ignored, and the system displays the following message:
BEGIN CONFIG is not entered, and ADD AMP is ignored.
Example 1
To add AMPs from vproc four to vproc seven in cluster three to the new configuration map,
type the following:
add amps 4 - 7, cn = 3
Note: The maximum number of AMPs that you can add to a cluster is 16. To have
Configuration assign AMPs to clusters automatically, use “DEFAULT CLUSTER” on page 120.
Example 2
To add an AMP as vproc eight in cluster three to the new configuration map, type the
following:
add amp 8, cn = 3
ADD HOST
Purpose
The ADD HOST command adds a host to the host list in the new configuration map.
Syntax
ADD HOST HN = nnnn, HT = { IBM | COP | ATT3B | BULLHN | OS1100 }
AH HN = nnnn, HT = { IBM | COP | ATT3B | BULLHN | OS1100 }
where:
HN = nnnn specifies the host number to be added. Assign the host a number from 1 to 1023.
HT specifies the host type to be added.
Usage Notes
This command creates the host group. When PEs are configured, they are assigned to a host
number. This grouping provides a host number used to refer to all the PEs.
After the command is accepted and the host is added to the new configuration map, the
system displays the following message:
The host is added.
Configuration checks for a valid host number and host type. You must type this command
before ADD PE commands that specify PEs to be associated with this host.
If you type the ADD HOST command before the BEGIN CONFIG command, the command is
ignored, and the system displays the following message:
BEGIN CONFIG is not entered, and ADD HOST is ignored.
Example
To add host four to the configuration, type the following:
add host hn = 4, ht = ibm
ADD PE
Purpose
The ADD PE command adds a PE or a range of PEs to the new configuration map.
Syntax
ADD PE mmmlist, HN = nnnn
AP mmmlist, HN = nnnn
where:
mmmlist specifies the PE (or range of PEs) that is being added to the new configuration map. The PE vproc number ranges from 15360 to 16383.
Note: The AMP vproc numbers start at the low end of the range and increase, and the PE vproc numbers start at the high end of the range and decrease.
HN = nnnn specifies the host number with which the PE (or PEs) is to be associated. The host number ranges from 1 to 1023.
Usage Notes
Configuration validates the vproc number and host number. You must add the host number
to the new configuration map (through the ADD HOST command) before the PE associated
with the host can be described in the new configuration map.
After the ADD PE command is accepted, the specified PE is added to the vproc list, and the
system displays the following message:
The PE is added.
If you type the ADD PE command before the BEGIN CONFIG command, the command is
ignored, and the system displays the following message:
BEGIN CONFIG is not entered, and ADD PE is ignored.
Example 1
To add PE 15363 to host 10, type the following:
add PE 15363, hn = 10
Example 2
To add PEs from vproc 16344 to vproc 16351 in host 282, type the following:
add PE 16344 - 16351, hn = 282
BEGIN CONFIG
Purpose
The BEGIN CONFIG command begins the configuration session. Changes caused by
subsequent commands are recorded in the new configuration map.
Syntax
BEGIN CONFIG
BC
Usage Notes
When the BEGIN CONFIG command is accepted, you can type commands to describe the
new configuration map. The new configuration map is copied from main memory to disk
when a subsequent END CONFIG command is executed.
If you type the BEGIN CONFIG command when a configuration session is in progress, the
utility prompts whether to end the configuration session that is in progress:
The previous BEGIN CONFIG has not been ended. Do you want to abort it? - Answer Y(es) or N(o):
To abort the previous BEGIN CONFIG command, type Y.
To continue the configuration session in progress, type N.
The system displays the following message:
BEGIN CONFIG is ignored.
When a configuration session ends prematurely, the new configuration map is not updated,
and all changes are lost. A configuration session ends prematurely if one of these events
occurs:
• The configuration session is aborted as described above.
• A STOP command is executed before an END CONFIG command.
• A Teradata Database restart occurs.
DEFAULT CLUSTER
Purpose
The DEFAULT CLUSTER command indicates that Configuration is to assign AMPs to clusters
automatically for the specified configuration.
An AMP cluster is a collection of AMPs that are grouped together logically to provide fallback
capability for each other for tables that are created with the FALLBACK option. Each AMP in
a cluster contains a backup copy of a portion of the primary data for every other AMP in the
cluster.
Syntax

DEFAULT CLUSTER [c]

DEFAULT CLUSTER can be abbreviated as DC.

where:

Syntax element …   Specifies …
c                  the default cluster size.
                   The maximum value of c is 16. If you do not specify c, the default cluster
                   size depends on the number of cliques, the number of AMPs, and the number of
                   AMPs per clique.
Usage Notes
If you do not specify a default cluster size, the following rules determine the size.

IF the number of cliques      THEN the number of AMP vprocs per cluster is …
in the system is …
1                             2.
2                             2.
3-4                           based on the number of AMPs per clique:
                              • the same in all cliques: the number of cliques (3 or 4) in the
                                configuration.
                              • not the same: one less than the number of cliques (2 or 3) in the
                                configuration.
5 or more                     based on the number of AMPs per clique:
                              • the same in all cliques: 5 if the number of AMPs in the configuration
                                is a multiple of 5. Otherwise, the number is 4.
                              • not the same: 4.
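For example, under these rules a two-clique system defaults to 2-AMP clusters; a four-clique
system with the same number of AMPs in every clique defaults to 4-AMP clusters; and a
six-clique system with five AMPs in every clique (30 AMPs, a multiple of 5) defaults to
5-AMP clusters.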
Typically, you type the DEFAULT CLUSTER command at the end of a session during which
AMPs were added to the system.
When the DEFAULT CLUSTER command is executed, AMPs are assigned to clusters
automatically.
In a system with n AMPs, each cluster contains c AMPs, and there are n/c clusters.
If n is not evenly divisible by c, the last cluster will be no more than 50% smaller and no more
than 50% larger than any other cluster. The number of clusters is adjusted accordingly.
After the DEFAULT CLUSTER command is accepted, AMPs are assigned to clusters, and the
system displays the following message:
The cluster assignment is complete.
Execution of the DEFAULT CLUSTER command overrides all previous cluster assignments.
AMP cluster assignments should be defined with respect to the hardware configuration.
Usually, AMP failures result from hardware-related problems. AMP clustering assignments
should be defined as closely as possible to the following fault-tolerant strategy:
• No two or more AMPs of a cluster reside in the same node cabinet.
• No two or more AMPs of a cluster are serviced by the same disk array cabinet.
• No two or more AMPs of a cluster are serviced by the same disk array controller.
Example 1
Typing DEFAULT CLUSTER 4 for a system of 15 AMPs results in a cluster size of four. Two
clusters are connected to four AMPs each, and one cluster is connected to seven AMPs.
Example 2
Typing DEFAULT CLUSTER 3 for a system of 16 AMPs results in a cluster size of three. Four
clusters are connected to three AMPs each, and one cluster is connected to four AMPs.
Example 3
Typing DEFAULT CLUSTER 16 for a system of 16 AMPs results in a cluster size of 16. One
cluster is connected to 16 AMPs.
DEL AMP
Purpose
The DEL AMP command specifies that an AMP or a range of AMPs is to be deleted from the
new configuration map.
Syntax

DEL AMP mmmlist

DEL AMP can be abbreviated as DA.

where:

Syntax element …   Specifies …
mmmlist            one or more vproc numbers to be removed from the configuration.
                   The vproc number ranges from 0 to 16199.
Usage Notes
If you are deleting an AMP, the AMP being deleted must have a vproc number equal to the
highest number for any AMP currently entered in the new configuration map.
That is, given a system with a new configuration map that describes n AMPs (whose vproc
numbers are 0 through n-1), the only AMP that you can remove from that configuration map
is the AMP with vproc number n-1.
Configuration verifies that the vproc number to be deleted exists in the new configuration
map.
After the DEL AMP command is accepted, the specified AMP is deleted from the vproc list in
the new configuration map, and the system displays the following message:
The AMP is deleted.
If you type the DEL AMP command before the BEGIN CONFIG command, the command is
ignored, and the system displays the following message:
BEGIN CONFIG is not entered, and DEL AMP is ignored.
Example 1
To delete an AMP that is vproc seven of the new configuration, type the following:
da 7
Example 2
To delete AMPs that are vproc four to vproc six of the new configuration, type the following:
da 4 - 6
DEL HOST
Purpose
The DEL HOST command specifies that a host is to be deleted from the new configuration
map.
Syntax

DEL HOST HN = nnnn

DEL HOST can be abbreviated as DH; DEL can be abbreviated as D.

where:

Syntax element …   Specifies …
HN = nnnn          the host number to be deleted.
                   The host number ranges from 1 to 1023.
Usage Notes
Configuration verifies that a host with the specified number exists. After the command is
accepted, the specified host is deleted from the new configuration map, and the system
displays the following message:
The host is deleted from new configuration map.
If you type the DEL HOST command before the BEGIN CONFIG command, the command is
ignored, and the system displays the following message:
BEGIN CONFIG is not entered, and DEL HOST is ignored.
You cannot delete a host from a configuration unless the host is no longer associated with any
PE that is defined in the configuration. For information, see “DEL PE” on page 126.
Example
To delete host 52, type the following:
dh hn = 52
DEL PE
Purpose
The DEL PE command specifies that a PE or a range of PEs is to be deleted from the new
configuration map.
Syntax

DEL PE mmmlist

DEL PE can be abbreviated as DP.

where:

Syntax element …   Specifies …
mmmlist            the PE (or PEs) to be deleted from the new configuration map.
                   The PE vproc number ranges from 15360 to 16383.
Usage Notes
Configuration verifies that the vproc number (or numbers) to be deleted exists in the new
configuration map.
When all PEs that are associated with a host have been deleted from the new configuration
map, you also must delete the host from the map.
After the DEL PE command is accepted, the specified PE (or PEs) is deleted from the vproc list
in the new configuration map, and the system displays the following message:
The PE is deleted.
If you type the DEL PE command before the BEGIN CONFIG command, the command is
ignored, and the system displays the following message:
BEGIN CONFIG is not entered, and DEL PE is ignored.
Example 1
To delete PE 16380, type the following:
del pe 16380
Example 2
To delete PEs from vproc 16344 to 16351, type the following:
del pe 16344 - 16351
END CONFIG
Purpose
The END CONFIG command validates and updates the new configuration map and
terminates a configuration session.
Syntax

END CONFIG

END CONFIG can be abbreviated as EC.
Usage Notes
When the END CONFIG command is executed, Configuration validates the new
configuration map. As a result, one of these events can occur:
• The session is terminated, the new configuration is accepted when the configuration
  changes are validated, and the new configuration map is updated on disk.
  The system displays the following message:
  The session is terminated and the new configuration is stored.
  The Teradata Database is ready for reconfiguration.
• The END CONFIG command is ignored because one of the following occurred:
  • A BEGIN CONFIG command was not entered, and, therefore, no configuration session
    is in progress.
  • The Teradata Database has not completed startup.
  • The configuration changes were not valid; an error message that describes the problem
    is displayed.
When the END CONFIG command is not executed successfully, the system displays the
following message:
The new configuration was not stored due to error(s) detected. Please
update the configuration and try END CONFIG again.
LIST
Purpose
The LIST command lists the vprocs and hosts in the current or new configuration.
Syntax

LIST [SHORT | LONG] [FROM {CURRENT | NEW}]

LIST can be abbreviated as L, FROM as F, CURRENT as C, and NEW as N.

where:

Syntax element …   Specifies …
SHORT              the compact configuration map. This is the default. It displays the contents
                   of the following commands:
                   • LIST CLUSTER
                   • LIST PE
                   • LIST HOST
LONG               the detailed configuration map.
FROM               that you will choose the type of configuration map to be displayed.
CURRENT            the configuration map that the Teradata Database is using currently. This is
                   the default. If you do not specify the FROM keyword, the default map type is
                   CURRENT, and the current configuration map is displayed.
NEW                the proposed configuration map.
Usage Notes
When you type the LIST command without any options, the system displays the short form of
the default (current) configuration map.
When you type the FROM option, you must specify the map type. No default type is supplied
for this form.
Example
An example output generated by the LIST command is shown below:
-------------
 AMP Array
-------------
Vproc-Id Range    Status   Cluster    Vproc-Id Range    Status   Cluster
---------------   ------   -------    ---------------   ------   -------
0, 16, 32, 48     Online   0          1, 17, 33, 49     Online   1
2, 18, 34, 50     Online   2          3, 19, 35, 51     Online   3
4, 20, 36, 52     Online   4          5, 21, 37, 53     Online   5
.
.
.
14, 30, 46, 62    Online   14         15, 31, 49, 63    Online   15
-------------------------------------------------------------------------
-------------
 PEs ARRAY
-------------
Vproc-Id Range    Status
---------------   ------
16344-16354       Online
16355             Down
16356-16363       Online
16364             Down
16365-16377       Online
16378             Down
16379-16383       Online
-------------------------------------
-------------
 HOSTs ARRAY
-------------
HostNo   Logical HostID   Type   Connected PE Range
------   --------------   ----   ------------------
52       1076             COP    16368-16383
282      282              IBM    16344-16351, 16353, 16355, 16357, 16359,
                                 16361, 16363, 16365, 16367
286      286              IBM    16360, 16362, 16364, 16366
In response to the LIST command, Configuration lists all AMPs, PEs, and hosts that are
included in the current configuration map. The following table lists the possible status of the
vproc in the previous example.
Status      Description
Online      The vproc is participating in the current configuration.
Down        The vproc is not participating in the current configuration.
Hold        The vproc was offline during the preceding system execution and is in the
            process of being recovered to the online state.
Newready    The vproc has been initialized but has not been added to the configuration.
LIST AMP
Purpose
The LIST AMP command displays the attributes of every AMP in the current or new
configuration map.
Syntax

LIST AMP [SHORT | LONG] [FROM {CURRENT | NEW}]

LIST AMP can be abbreviated as LA, LIST as L, FROM as F, CURRENT as C, and NEW as N.

where:

Syntax element …   Specifies …
SHORT              the compact configuration map of the AMP vprocs. This is the default.
                   This option contains only two fields:
                   • vproc-id range
                   • status
LONG               the detailed configuration map of the AMP vprocs.
                   This option contains three fields:
                   • vproc
                   • status
                   • cluster
FROM               that you will choose the type of configuration map to be displayed.
CURRENT            the configuration map that the Teradata Database is using currently. This is
                   the default. If you do not specify the map type, the AMP configuration is
                   displayed from the current map.
NEW                the proposed configuration map.
Usage Notes
The LIST AMP command displays the AMP configuration map.
Example 1
An example output generated by the LIST AMP command is shown below:
AMPs ARRAY
____________________________
Vproc-Id Range    Status     Vproc-Id Range    Status
______________    _______    ______________    _______
0-63              Online

Note: As indicated in the previous example, all the AMP vprocs are online, so the vproc-id is
blocked from the beginning vproc 0 to the ending vproc 63. If the system displays AMP vprocs
that have status other than online, the system lists their vproc-ids and status, as shown in the
following example.

AMPs ARRAY
____________________________
Vproc-Id Range    Status     Vproc-Id Range    Status
______________    _______    ______________    ________
0                 Online     1-2               Fatal
3-14              Online     15                NewReady
16-63             Online
Example 2
An example output generated by the LIST AMP LONG command is shown below:
AMPs ARRAY
____________________________
Vproc   Status    Cluster    Vproc   Status    Cluster
_____   _______   _______    _____   _______   ________
0       Online    0          1       Fatal     1
2       Fatal     2          3       Online    3
4       Online    4          5       Online    5
.
.
.
62      Online    14         63      Online    15
LIST CLUSTER
Purpose
The LIST CLUSTER command displays the attributes of every AMP in the current or new
configuration map, ordered by cluster number.
Syntax

LIST CLUSTER [SHORT | LONG] [FROM {CURRENT | NEW}]

LIST CLUSTER can be abbreviated as LC, LIST as L, FROM as F, CURRENT as C, and NEW as N.

where:

Syntax element …   Specifies …
SHORT              the compact display of the attributes of every AMP in the current or new
                   configuration map, ordered by cluster number. This is the default.
                   This option contains three fields:
                   • vproc-id range
                   • status
                   • CN
LONG               the detailed display of the attributes of every AMP in the current or new
                   configuration map, ordered by cluster number.
                   This option contains three fields:
                   • vproc
                   • status
                   • cluster
FROM               that you will choose the type of configuration map to be displayed.
CURRENT            the configuration map that the Teradata Database is using currently. This is
                   the default. If you do not specify a map type, the current configuration map
                   is displayed.
NEW                the proposed configuration map.
Usage Notes
The LIST CLUSTER command displays the cluster configuration map. This map shows a list
of AMPs ordered by cluster number.
Example
Example output generated by the LIST CLUSTER command is shown below:
AMPs ARRAY
_____________________
Vproc-Id Range    Status   CN    Vproc-Id Range    Status   Cluster
______________    ______   __    ______________    ______   _______
0, 16, 32, 48     Online   0     1, 17, 33, 49     Online   1
2, 18, 34, 50     Online   2     3, 19, 35, 51     Online   3
4, 20, 36, 54     Online   4     5, 21, 37, 53     Online   5
.
.
.
14, 30, 46, 62    Online   14    15, 31, 49, 63    Online   15

AMPs ARRAY
_____________________
Vproc    Status   Cluster    Vproc    Status   Cluster
______   ______   ________   _____    ______   _______
0        Online   0          1        Online   1
2        Online   2          3        Online   3
4        Online   4          5        Online   5
.
.
.
62       Online   14         63       Online   15
LIST HOST
Purpose
The LIST HOST command displays all hosts in a current or new configuration.
Syntax

LIST HOST [SHORT | LONG] [FROM {CURRENT | NEW}]

LIST HOST can be abbreviated as LH, LIST as L, FROM as F, CURRENT as C, and NEW as N.

where:

Syntax element …   Specifies …
SHORT              the compact display of all hosts in a current or new configuration map.
                   This is the default.
                   This option contains these fields:
                   • HostNo
                   • Logical HostID
                   • Type
                   • Connected PE range
LONG               the detailed display of all hosts in a current or new configuration map.
                   This option contains these fields:
                   • HostNo
                   • Logical HostID
                   • Type
                   • Connected PE
FROM               that you will choose the type of configuration map to be displayed.
CURRENT            the configuration map that the Teradata Database is using currently. This is
                   the default. If you do not specify the map type, the current configuration
                   map is displayed.
NEW                the proposed configuration map.
Usage Notes
The LIST HOST command displays the host configuration map.
Example
An example output generated by the LIST HOST command is shown below:
HOSTs ARRAY
_______________________
HostNo   LogicalHostID   Type   Connected PE Range
______   _____________   ____   __________________
52       1076            COP    16379-16383
821      821             IBM    16377
829      829             IBM    16378

HOSTs ARRAY
_______________________
HostNo   LogicalHostID   Type   Connected PE
______   _____________   ____   ____________
52       1076            COP    16379
                                16380
                                16381
                                16382
                                16383
821      821             IBM    16377
829      829             IBM    16378
For information about the ranges of logical host IDs for all supported hosts, see “Hosts” on
page 109.
LIST PE
Purpose
The LIST PE command displays the attributes of every PE in the current or new configuration
map.
Syntax

LIST PE [SHORT | LONG] [FROM {CURRENT | NEW}]

LIST PE can be abbreviated as LP, LIST as L, FROM as F, CURRENT as C, and NEW as N.

where:

Syntax element …   Specifies …
SHORT              the compact configuration map of the PE vprocs. This is the default.
                   This option contains only two fields:
                   • vproc-id range
                   • status
LONG               the detailed configuration map of the PE vprocs.
                   This option contains three fields:
                   • vproc
                   • status
                   • hostno
FROM               that you will choose the type of configuration map to be displayed.
CURRENT            the configuration map that the Teradata Database is using currently. This is
                   the default. If you do not specify the map type, the current PE configuration
                   map is displayed.
NEW                the proposed configuration map.
Usage Notes
The LIST PE command displays the PE configuration map.
Example 1
An example output generated by the LIST PE command is shown below.
PEs ARRAY
_________________________
Vproc-Id Range    Status     Vproc-Id Range    Status
______________    _______    ______________    _______
16343-16383       Online

Note: As indicated in the previous example, all the PE vprocs are online, so the vproc-id is
blocked from the beginning vproc 16343 to the ending vproc 16383. If the system displays PE
vprocs that have status other than online, the system lists their vproc-id and status, as shown
in the following example.

PEs ARRAY
_________________________
Vproc-Id Range    Status     Vproc-ID Range    Status
______________    _______    ______________    _______
16343-16354       Online     16355             Down
16356-16363       Online     16364             Down
16365-16377       Online     16378             Down
16379-16383       Online
Example 2
An example output generated by the LIST PE LONG command is shown below.
PEs ARRAY
_________________________
Vproc    Status   HostNo    Vproc    Status   HostNo
______   ______   ______    ______   ______   ______
16344    Online   282       16345    Online   282
16346    Online   282       16347    Online   282
16348    Online   282       16349    Online   282
.
.
.
16382    Online   52        16383    Online   282
MOD AMP
Purpose
The MOD AMP command specifies that an AMP or a range of AMPs is to be moved from one
cluster to another.
Syntax

MOD AMP mmmlist TO CN = nnnn

MOD AMP can be abbreviated as MA.

where:

Syntax element …   Specifies …
mmmlist            one or more vproc numbers to be moved from one cluster to another.
                   The vproc number ranges from 0 to 16199.
TO                 the new cluster number entry.
CN = nnnn          the new cluster number with which the AMP or AMPs are to be associated.
                   The cluster number ranges from 0 to 8099. If you do not specify a cluster
                   number, it remains unchanged.
Usage Notes
Configuration verifies the vproc number(s) and cluster number.
After the MOD AMP command is accepted, the attributes of the specified AMP are modified,
and the system displays the following message:
The AMPs attribute is modified in the new configuration.
If you type the MOD AMP command before the BEGIN CONFIG command, the command is
ignored, and the system displays the following message:
BEGIN CONFIG is not entered, and MOD AMP is ignored.
Example
To modify the cluster assignment of AMPs four through eight so that they are part of cluster
one, type the following:
ma 4 - 8 to cn = 1
MOD HOST
Purpose
The MOD HOST command changes the host type in the new configuration map.
Syntax

MOD HOST HN = nnnn, HT = {IBM | COP | ATT3B | BULLHN | OSII00}

MOD HOST can be abbreviated as MH; MOD can be abbreviated as M.

where:

Syntax element …   Specifies …
HN = nnnn          the host number whose type is to be changed.
                   The host number ranges from 1 to 1023.
HT =               the host type.
Usage Notes
After the MOD HOST command is accepted, the type of the specified host is modified in the
new configuration map, and the system displays the following message:
The host is changed in the new configuration map.
If you type the MOD HOST command before the BEGIN CONFIG command, the command
is ignored, and the system displays the following message:
BEGIN CONFIG is not entered, and MOD HOST is ignored.
Example
To change the host type for host 52 to cop, type the following:
mh hn = 52, ht = cop
MOD PE
Purpose
The MOD PE command specifies that a PE or a range of PEs is to be moved from one host to
another.
Syntax

MOD PE mmmlist TO HN = nnnn

MOD PE can be abbreviated as MP.

where:

Syntax element …   Specifies …
mmmlist            one or more vproc numbers to be assigned to a different host.
                   The PE vproc number ranges from 15360 to 16383.
HN = nnnn          the new host number with which the PE (or PEs) is to be associated.
                   The host number ranges from 1 to 1023.
Usage Notes
Configuration validates the vproc number (or numbers) assigned to the PE (or PEs) and the
host number with which the PE (or PEs) is to be associated.
After the command is accepted, the specified PE (or PEs) is modified in the vproc list in the
new configuration map, and the system displays the following message:
The PE attributes are modified in the new configuration map.
If you type the MOD PE command before the BEGIN CONFIG command, the command is
ignored, and the system displays the following message:
BEGIN CONFIG is not entered, and MOD PE is ignored.
Example
To move PEs from vproc 16344 through vproc 16351 to host 286, type the following:
mod PE 16344 - 16351 to hn = 286
MOVE AMP
Purpose
The MOVE AMP command specifies that all data rows from an AMP or a group of AMPs are
to be moved to another AMP or group of AMPs.
Syntax

MOVE AMP mmmlist TO nnnlist

MOVE AMP can be abbreviated as MVA.

where:

Syntax element …   Specifies …
mmmlist            the range of AMP vproc numbers being moved in the configuration.
nnnlist            the range of AMP vproc numbers that mmmlist is being moved to in the
                   configuration.
Usage Notes
AMP numbers range from 0 to 16199 and must be in contiguous order. The number of AMPs
specified in mmmlist and in nnnlist must be the same. The following applies.
AMPs in the …   Must be …
mmmlist         AMPs defined in the current configuration.
nnnlist         new AMPs only.
After the MOVE AMP command is accepted, Configuration displays the following message:
The AMP range is moved.
If you type the MOVE AMP command before the BEGIN CONFIG command, the command
is ignored. Configuration displays the following message:
BEGIN CONFIG is not entered, and MOVE AMP is ignored.
If you type the MOVE AMP command and the ADD AMP command, and if the MOVE AMP
range of vproc numbers is before the ADD AMP range of vproc numbers, Configuration
displays the following message:
Moved AMPS must follow added AMPs.
If you type the MOVE AMP command and either the MODIFY AMP command or the DEL
AMP command, the commands are ignored. Configuration displays the following message:
Addition, Deletion, and Moving of AMPS are mutually exclusive.
Note: Run the Reconfig utility to do the following:
• Redistribute all rows from the moved AMPs
• Change the configuration so that the moved AMPs are properly associated with the correct
  nodes and/or disk arrays
For more information on MOVE AMP and the Reconfiguration utility, see Utilities Volume 2.
Example 1
To move AMP 0 in an existing eight-AMP system (AMPs 0-7) to a new AMP (AMP 8), type
the following:
move amp 0 to 8
Example 2
To move AMPs 4-5 in an existing eight-AMP system (AMPs 0-7) to new AMPs (AMPs 8-9),
type the following:
move amp 4-5 to 8-9
MOVE PE
Purpose
The MOVE PE command specifies that a PE or group of PEs is to be moved to another PE or
group of PEs.
Syntax

MOVE PE mmmlist TO nnnlist

MOVE PE can be abbreviated as MVP.

where:

Syntax element …   Specifies …
mmmlist            the range of PE vproc numbers being moved in the configuration.
nnnlist            the range of PE vproc numbers that mmmlist is being moved to in the
                   configuration.
Usage Notes
PE numbers range from 16383 downward in decreasing, contiguous order. The number of PEs
specified in mmmlist and in nnnlist must be the same. The following applies.

PEs in the …    Must be …
mmmlist         PEs defined in the current configuration.
nnnlist         new PEs only.
Note: You also can move PEs that are restricted to specific nodes, such as channel-connected
PEs, to other nodes.
After the MOVE PE command is accepted, Configuration displays the following message:
The PE range is moved.
If you type the MOVE PE command before the BEGIN CONFIG command, the command is
ignored. Configuration displays the following message:
BEGIN CONFIG is not entered, and MOVE PE is ignored.
Note: Run the Reconfig utility to change the configuration so that the moved PEs are properly
associated with the correct nodes and/or disk arrays.
Example 1
To move PE 16383 in an existing eight-PE system (PEs 16376 - 16383) to a new PE (PE 16375),
type the following:
move pe 16383 to 16375
Example 2
To move PEs 16380 - 16381 in an existing eight-PE system (PEs 16376 - 16383) to new PEs
(PEs 16374 - 16375), type the following:
move pe 16380-16381 to 16374-16375
SHOW CLUSTER
Purpose
The SHOW CLUSTER command displays all AMPs in a specified cluster for a current or new
configuration map.
Syntax

SHOW CLUSTER CN = nnnn [FROM {CURRENT | NEW}]

SHOW CLUSTER can be abbreviated as SC, FROM as F, CURRENT as C, and NEW as N.

where:

Syntax element …   Specifies …
CN = nnnn          the cluster number for which the appropriate AMPs are to be displayed.
                   The cluster number ranges from 0 to 8099.
FROM               that you will choose the type of configuration map to be displayed.
CURRENT            the configuration map that the Teradata Database is using currently. This is
                   the default. If you do not specify a map type, the current cluster
                   configuration map is displayed.
NEW                the proposed configuration map.
Usage Notes
The SHOW CLUSTER command displays the cluster configuration map.
Example
To display the current configuration map for cluster three and its associated AMPs, type:
show cluster cn = 3
An example output generated by the SHOW CLUSTER command is shown below.
OK, AMPs of cluster 3 are listed.
Vproc    Status    Cluster    Vproc    Status    Cluster
______   _______   ________   ______   _______   _________
3        Online    3          7        Online    3
11       Online    3          15       Online    3
SHOW HOST
Purpose
The SHOW HOST command shows information for a specified host.
Syntax

SHOW HOST HN = nnnn [FROM {CURRENT | NEW}]

FROM can be abbreviated as F, CURRENT as C, and NEW as N.

where:

Syntax element …   Specifies …
HN = nnnn          the host number.
                   The host number ranges from 1 to 1023.
FROM               that you will choose the type of configuration map to be displayed.
CURRENT            the configuration map that the Teradata Database is using currently. This is
                   the default. If you do not specify a map type, the current configuration map
                   is displayed.
NEW                the proposed configuration map.
                   If you use FROM, you must specify a map type.
Usage Notes
The SHOW HOST command displays the host number, logical host ID, host type, and vproc
numbers of connected PEs.
Example
To display information from the current configuration map about host 10, type the following:
show host hn = 10
An example output generated by the SHOW HOST command is shown below:
HOSTs ARRAY
___________________
HostNo   LogicalHostID   Type   Connected PE
___________________________________________________
10       17              IBM    16383
10       17              IBM    16382
10       17              IBM    16381
SHOW VPROC
Purpose
The SHOW VPROC command displays the attributes of a specified AMP or PE vproc in a
current or new configuration.
Syntax

SHOW VPROC mmmm [FROM {CURRENT | NEW}]

SHOW VPROC can be abbreviated as SV, FROM as F, CURRENT as C, and NEW as N.

where:

Syntax element …   Specifies …
mmmm               the vproc number of an AMP or PE.
                   Note: The AMP vproc number ranges from 0 to 16199, and the PE vproc number
                   ranges from 15360 to 16383.
FROM               that you will choose the type of configuration map to be displayed.
CURRENT            the configuration map that the Teradata Database is using currently. This is
                   the default. The default map type is CURRENT. If you do not specify a map
                   type, the current configuration map is displayed.
NEW                the proposed configuration map.
Usage Notes
The following table lists what the SHOW VPROC command displays.

Type of vproc    Information displayed
PE               • Vproc number
                 • Status
                 • Host number with which the PE is associated
AMP              • Vproc number
                 • Status
                 • Cluster number
Example 1
To display the attributes of PE vproc 16380 for the current configuration map, type the
following:
show vproc 16380
An example output generated by the SHOW VPROC command is shown below:
Vproc    Status    HostNo
-----    ------    ------
16380    Online    821
Example 2
To display the attributes of AMP vproc six for the current configuration map, type the
following:
show vproc 6
An example output generated by the SHOW VPROC command is shown below:
Vproc    Status    Cluster
-----    ------    -------
6        Online    2
STOP
Purpose
The STOP command stops Configuration.
Syntax

STOP

STOP can be abbreviated as S.
Usage Notes
When you type the STOP command, all AMP tasks created for Configuration are aborted, and
all storage allocated for Configuration is released.
Typing this command does not automatically update the new configuration map for the
current session.
The updating of the new configuration map is accomplished by the END CONFIG command.
The STOP command displays the following message:
Config is about to be stopped.
Caution: The STOP command does not warn you if the END CONFIG command has not been
executed. Be sure to type END CONFIG before using the STOP command.
Configuration Utility Examples
This section shows the process of adding vprocs to a Teradata Database.
Current configuration      New configuration
Four AMPs: 0, 1, 2, 3      Eight AMPs: 0, 1, 2, 3, 4, 5, 6, 7
Two PEs: 16383, 16382      Four PEs: 16383, 16382, 16381, 16380
Cluster 0: (0, 1, 2, 3)    Cluster 0: 0, 1, 4, 5
                           Cluster 1: 2, 3, 6, 7
Activities Performed
The configuration procedure described below performs three activities:
• Adds AMPs
• Adds PEs
• Changes cluster assignments
For this demonstration, do the following in the Database Window:

1  In the Database Window, select the Supervisor (Supvr) icon. The Supervisor window appears.
   Note: The PDE must be up and the Supvr window must display the status as “Reading” in
   order to enter commands in the command input line.

2  In the Enter a command subwindow of the Supervisor window, type the following and
   press Enter:
   start config
   The Supervisor window displays the following message:
   Started ‘config’ in window 1.
   The window number represents the application window in which Configuration is
   running. The Configuration window appears.

3  To begin the session, type the following in the Configuration window and press Enter:
   begin config

4  To add AMPs numbered four through seven, type the following:
   add amp 4
   add amp 5
   add amp 6
   add amp 7

5  To add PEs numbered 16381 and 16380 to host number one, type the following:
   add pe 16381, hn = 1
   add pe 16380, hn = 1
6  To assign AMPs to clusters automatically, type the following:
   default cluster 4

7  To end the session, type the following:
   end config

8  To verify the new configuration, type the following:
   list

9  To stop Configuration, type the following:
   stop
You used Configuration to define the new system. Run the Reconfiguration utility next to
configure the defined Teradata Database components into an operational system.
For an example of how to run the Reconfiguration utility following the use of Configuration,
see Utilities Volume 2.
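For quick reference, the Configuration commands entered during this session can be
summarized as the following sequence (a condensed sketch of steps 3 through 9 above;
list from new is optional and simply displays the proposed configuration map before
END CONFIG stores it):

begin config
add amp 4
add amp 5
add amp 6
add amp 7
add pe 16381, hn = 1
add pe 16380, hn = 1
default cluster 4
list from new
end config
list
stop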
Error Messages
For Configuration error messages, see Messages.
CHAPTER 8
Control GDO Editor (ctl)
The Control GDO Editor, ctl, also called the PDE Control program, lets you display and
modify PDE configuration settings. These settings affect how the PDE handles startup,
responds to crashes, and how it functions during normal operation of Teradata.
Note: Ctl is available only on Windows and Linux. The corresponding utility on MP-RAS is
called Xctl. For information on the Xctl utility, see Utilities Volume 2.
Audience
Users of Ctl include the following:
• Database administrators
• System and application programmers
• System operators
User Interfaces
Ctl runs on the following platforms and interfaces:

Platform    Interfaces
Windows     Command line (“Teradata Command Prompt”)
Linux       Command line
Syntax
To start Ctl from the command line, use the following command syntax.
ctl [-first "command [; command ...]"] [-last "command [; command ...]"] [-help]
Setting           Description
-first command    First command to execute before other processing. No default.
                  The -first command and the -last command accept command lists, which are one
                  or more commands separated by semicolons.
                  The following is an example using the -first command:
                  ctl -first "RSS Collection Rate = 600;Vproc Logging Rate = 600;Node Logging Rate = 600;screen rss; write; quit"
                  Note: The quotation marks used in the example above are necessary because of
                  the spaces in the commands.
                  To set values, you should use the interactive Ctl screens.
-last command     Last command to execute just before exiting. No default.
                  The following is an example using the -last command:
                  ctl -last screen
                  The above command displays the current screen after you quit the interactive
                  Ctl screen.
-help             Provides information on Ctl command-line options.
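For example, assuming you only want to view a screen without changing anything, the entire
session can be supplied through -first, including the quit command, so that Ctl exits without
prompting for further input (a minimal sketch based on the example above):

ctl -first "screen rss; quit"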
Note: Some strictly internal and rarely-needed options have been omitted from this
discussion. Those options are documented in the Ctl command-line help.
Ctl Commands
Starting Ctl from the command line invokes a command shell, recognized by the Ctl
command prompt, a right angle bracket (>).
The Ctl commands are described in the sections that follow:
• “EXIT” on page 158
• “HARDWARE” on page 159
• “HELP” on page 161
• “PRINT group” on page 162
• “PRINT variable” on page 163
• “QUIT” on page 164
• “READ” on page 165
• “SCREEN” on page 166
• “variable = setting” on page 177
• “WRITE” on page 179
Note: The first two letters of each command can be used as a synonym for the command at
the Ctl command prompt.
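For example, sc rss is equivalent to screen rss, and pr all is equivalent to print all.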
EXIT
Purpose
The EXIT command exits Ctl.
Syntax

EXIT [WRITE]

EXIT can be abbreviated as EX.
Usage Notes
If the -last option was used to specify a command when Ctl was started, that command will
be executed before Ctl exits.
If you modified values in Ctl and specify the write option, Ctl will write the changes to the
GDO before exiting.
If you modified values in Ctl and do not specify the write option, Ctl will ask if you want the
changes written before Ctl exits.
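For example, exit write saves any modified settings to the control GDO and exits in a single step.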
HARDWARE
Purpose
The HARDWARE command displays a screen containing the PDE hardware configuration
information for this system. This information is recomputed every time Teradata Database is
restarted, so it reflects the actual running configuration. The information is read only and
cannot be directly modified by the user.
Syntax

HARDWARE

HARDWARE can be abbreviated as HA.
Example
The following example shows sample output from the hardware command.
[Sample output condensed. On this four-node system, the HARDWARE screen displays the
following sections:
• Node: minimum, maximum, and average per-node values for cpus/node (4), mips/cpu (248),
  segmem/vproc (MB), kmem/vproc, nodes/clique, and arrays/clique.
• VPROCS: for each vproc type (AMP, PE, GTW, and unused entries), the total number of
  vprocs (32 AMPs, 12 PEs, 4 GTWs) and the minimum, maximum, and average vprocs/node.
• ARRAYS and DISKS: random reads/sec and random writes/sec for I/O sizes from 4 KB
  through 128 KB.
• NETWORK: monocast and multicast rates (operations/node/second) for message sizes from
  0 through 25600 bytes.]
HELP
Purpose
The HELP command displays the online help screen.
Syntax

HELP [cmd | TOPICS]

HELP can be abbreviated as HE.
Usage Notes
For help, type one of the following:

IF you want …                    THEN type …
help on a specific command       help cmd
                                 where cmd is the name of the command you want help on.
a list of all the help topics    help topics
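For example, help screen displays help for the SCREEN command.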
PRINT group
Purpose
The PRINT group command displays groups of variables by variable types.
Syntax

PRINT [ALL | BUTTONS | CHECKBOX | LABELS | SCALES | TEXT | TRACE]

PRINT can be abbreviated as PR.
Usage Notes

Group       Description
all         All variables in the PDE Control GDO. This is the default.
buttons     Variables that take one value from a set of two or more possible values.
checkbox    Variables that take binary values.
labels      Variables that are labels (not normally modified by users).
scales      Variables that are integer values.
text        Variables that are text strings.
trace       All trace entries.
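For example, print checkbox lists the binary (on/off) variables, and print scales lists the
integer-valued variables.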
PRINT variable
Purpose
The PRINT variable command displays the value of a specified variable.
Syntax

PRINT variable [; variable ...]

PRINT can be abbreviated as PR.
Usage Notes
The value is displayed in the form of an assignment command for that variable.
Some variables that can be changed with Ctl do not appear on any of the Ctl output screens.
To see a listing of all variable names, use the print all command.
Example
For example, if you type the following command, which requests two variables,
> print minimum node action;fsg cache percent
then the following might be displayed, depending on your system:
Minimum Node Action=Clique-Down
FSG cache Percent=80
QUIT
Purpose
The QUIT command exits Ctl.
Syntax

QUIT [WRITE]

QUIT can be abbreviated as QU.
Usage Notes
If the -last option was used to specify a command when Ctl was started, that command will
be executed before Ctl quits.
If you modified values in Ctl and specify the write option, Ctl will write the changes to the
GDO before quitting.
If you modified values in Ctl and do not specify the write option, Ctl will ask if you want the
changes written before Ctl exits.
READ
Purpose
The READ command re-reads the control GDO, resetting any changes.
Syntax

READ

READ can be abbreviated as RE.
Usage Notes
Use READ to update the Ctl view from the GDO manually. After you write to the GDO, the
Ctl view is updated to the actual GDO values automatically.
SCREEN
Purpose
The SCREEN command displays a screen of PDE configuration information from the PDE
Control GDO. Some of this information can be modified using the variable = setting and
WRITE commands.
Syntax

SCREEN [DBS | DEBUG | RSS | VERSION]

SCREEN can be abbreviated as SC.
Usage Notes
• Each screen displays groups of related control fields. To change the values of modifiable
  fields, use the variable = setting command, identifying the field either by the full field
  name or by the alphanumeric identifier that appears next to the field in the screen output.
  Fields lacking alphanumeric identifiers should not be modified.
• Scripts that change field values should use the full variable names, rather than the
  alphanumeric identifiers.
• Entering the screen command alone, without a screen name, redisplays the current screen
  (the one that was most recently displayed).
The Ctl screens are described in the sections that follow:
• “SCREEN DBS” on page 166
• “SCREEN DEBUG” on page 169
• “SCREEN RSS” on page 172
• “SCREEN VERSION” on page 174
SCREEN DBS
Function
The DBS screen displays parameters that control how the database software responds to
unusual conditions.
Example
>screen dbs
(0) Minimum Node Action:       Clique-Down
(1) Minimum Nodes Per Clique:  1
(2) Unused option
(3) Clique Failure:            Clique-Down
(4) FSG cache Percent:         0
(5) Unused option
(6) Cylinder Read:             User
(7) Cylinder Slots/AMP:        8
(8) Restart After DAPowerFail: Off
Control Fields
The DBS screen contains the following control fields.
Setting                    Description

Minimum Nodes Per Clique   Specifies the number of nodes required for a clique to operate. (Inactive hot standby
                           nodes are not considered.) If a clique contains fewer than this number of nodes when
                           Teradata Database is started, the Minimum Node Action setting determines what action
                           to take.
                           Note: Minimum Nodes Per Clique does not affect cliques in which all nodes are running
                           properly, including “single-node cliques” such as AMP-less Channel nodes. These cliques
                           are exempt from the Minimum Node Action, regardless of the Minimum Nodes Per
                           Clique setting.
                           Changes to this value do not take effect until Teradata Database is restarted.
                           When Teradata Database starts, the system determines the minimum number of nodes
                           required for each clique, based on the number of vprocs, nodes, and available memory in
                           the clique. For systems containing cliques of different sizes, Teradata determines a
                           minimum node requirement for each clique, then considers the largest value as the
                           minimum node requirement for all cliques in the system.
                           • If Minimum Nodes Per Clique is manually set to a value less than the
                             system-determined minimum, the manually set value is replaced with the system
                             value, and the action is noted in the system log.
                           • If Minimum Nodes Per Clique is manually set to a value greater than the
                             system-determined minimum, the manual setting takes precedence.
                           • If Minimum Nodes Per Clique is manually set to a number greater than the total
                             number of nodes in any clique, all nodes in that clique must be fully functional for the
                             clique to be used.
                           Choosing a value for Minimum Nodes Per Clique involves a trade-off between
                           performance and system availability. When one or more nodes in a clique fail, the AMPs
                           assigned to the failed nodes migrate to the remaining nodes in the clique. System
                           performance can degrade when some nodes handle more vprocs than other nodes.
                           Setting a Minimum Nodes Per Clique value allows you to define at what point it is more
                           efficient for the system to consider a partially disabled clique to be entirely unavailable,
                           allowing the Teradata fallback logic to compensate for the problem. Note that running a
                           system with fallback limits some functions, which should be a consideration when
                           choosing an appropriate value for this setting.

Minimum Node Action        Determines what action to perform when a clique contains fewer nodes than the
                           Minimum Nodes Per Clique field specifies. The following values apply:
                             Clique Down   That clique is not started, and the vprocs associated with the clique
                                           are marked OFFLINE. This is the default.
                             DBS Down      Do not start Teradata Database. All the vprocs are marked OFFLINE.

Clique Failure             Determines what to do when a clique is down. The following values apply:
                             Clique Down   Attempt to start the Teradata Database without the down clique.
                                           This is the default.
                             DBS Down      The Teradata Database is not started if a clique is down.

FSG Cache Percent          Specifies the percentage of available memory (after making allowance for memory
                           needed to run utilities and the Teradata Database programs) to be used for the database
                           file segment cache. The default is 100%.
                           Changes to this value do not take effect until Teradata Database is restarted.
                           Note: You cannot disable FSG Cache Percent. A setting of 0 sets the FSG Cache Percent
                           to the default value.

Cylinder Read              Allows full-file scan operations to run efficiently by reading the cylinder-resident data
                           blocks with a single I/O operation.
                           The data block is a disk-resident structure that contains one or more rows from the same
                           table and is the physical I/O unit for the Teradata Database file system. Data blocks are
                           stored in physical disk space called sectors, which are grouped together in cylinders.
                           When Cylinder Read is enabled, the system incurs I/O overhead once per cylinder, as
                           opposed to being charged once per data block when blocks are read individually. The
                           system benefits from the reduction in I/O time for operations such as table scans and
                           joins, which process most or all of the data blocks of a table. Cylinder Read is supported
                           only if the size of the FSG cache is at least 36 MB/AMP. At this size, only two Cylinder
                           Slots/AMP are configured. The number of Cylinder Slots/AMP that is configured might
                           differ from what you request if the FSG cache size is not large enough.
                           You must restart Teradata Database before the new setting is effective.
                           To optimize performance using this feature, see Performance Management.
                           Note: The setting for Cylinder Read is related to the setting for Cylinder Slots/AMP.
                           The following values apply:
                             Default       The Cylinder Read feature is enabled, using the default value for
                                           Cylinder Slots/AMP.
                             Off           The Cylinder Read feature is disabled.
                             User          The Cylinder Read feature is enabled, using a user-specified value for
                                           Cylinder Slots/AMP.
                             LogOnly       Cylinder Read is used only for scanning the WAL log. The Cylinder
                                           Slots/AMP field is ignored. Two cylinder slots are allocated per AMP.

Cylinder Slots/AMP         This setting is related to the Cylinder Read feature. This control displays and sets the
                           number of cylinder memory slots per AMP that are reserved inside the FSG cache to host
                           the loaded cylinders. The Cylinder Read feature is not effective if Cylinder Slots/AMP is
                           set to 1.
                           You must restart Teradata Database before a new setting takes effect.
                           Note: This option is effective only if Cylinder Read is set to User. For more information,
                           see the description of Cylinder Read.

Restart After DAPowerFail  Allows you to select whether or not to restart the Teradata Database after a disk array AC
                           power failure. The following values apply:
                             On            The Teradata Database is restarted after a disk array AC power
                                           failure. This is the default.
                             Off           The Teradata Database is not restarted after a disk array AC power
                                           failure.
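The DBS settings above can be changed from the Ctl command prompt with the
variable = setting command and saved with WRITE (both described later in this chapter).
The following is a minimal sketch; the values shown are illustrative only, and both changes
take effect only after Teradata Database is restarted:

> screen dbs
> FSG Cache Percent = 80
> Cylinder Read = Default
> write
> quit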
SCREEN DEBUG
Function
The Debug screen is used for internal debugging of PDE and the Teradata Database.
Caution:
Do not modify the debug values unless you are explicitly directed to do so by Teradata
Support Center Personnel.
Examples
>screen debug
(0) Start DBS:          On
(1) Break Stop:         Off
(2) Start With Logons:  All
(3) Start with Debug:   On
(4) Save Dumps:         Off
(5) Snapshot Crash:     Off
(6) Maximum Dumps:      5
(7) Start PrgTraces:    Off
(8) Mini Dump Type:     No-Dump

The following is an example of Screen Debug command output for Linux:
>screen debug
(0) Start DBS:          On
(1) Break Stop:         Off
(2) Start With Logons:  All
(3) Start with Debug:   On
(4) Save Dumps:         Off
(5) Snapshot Crash:     Off
(6) Maximum Dumps:      5
(7) Start PrgTraces:    Off
Control Fields
The Debug screen contains the following control fields.
Setting             Description

Start DBS           Determines whether Teradata Database is started automatically when PDE starts.
                    The following values apply:
                      Off   Teradata Database is not started automatically when PDE is started.
                      On    Teradata Database is started automatically when PDE is started. This is the
                            default.

Break Stop          Controls whether the Teradata Database restarts automatically or stops for the system
                    debugger when a fatal error occurs. The following values apply:
                      Off   Teradata Database restarts automatically after performing a dump. This is
                            the default.
                      On    Teradata Database stops and waits for the system debugger to be attached so
                            that the problem can be diagnosed interactively.

Start With Logons   Controls whether logons are enabled or not. The following values apply:
                      None  None of the users are allowed to log on to the system.
                      All   All of the users are allowed to log on to the system.
                      DBC   Only new DBC users are allowed to log on to the system.
                    Note: Changes to this setting take effect after the next Teradata Database restart. Start
                    With Logons commands issued from the Supervisor window of Database Window have
                    the same effect, but do not require a database restart. For more information, see
                    Chapter 11: “Database Window (xdbw).”

Start with Debug    Halts the database software startup until after the system debugger has been attached.
                    The following values apply:
                      Off   The Teradata Database is started normally. This is the default.
                      On    The Teradata Database will not run until the system debugger is attached and
                            used to continue operations.

Save Dumps          Specifies whether database dumps are to be loaded into the database. The following
                    values apply:
                      Off   Database dumps are not loaded into the database. This is the default.
                      On    Database dumps are loaded into the database.

Snapshot Crash      Specifies whether Teradata Database continues to run after a snapshot dump. The
                    following values apply:
                      Off   Teradata Database continues to run after a snapshot dump. This is the
                            default.
                      On    A snapshot dump is accompanied by a full database restart. If Break Stop is
                            also On, the system stops for debugging.

Maximum Dumps       Applies on a per-node basis and is meaningful only for database dumps. Controls the
                    maximum number of crash dumps that will be saved. A value of -1 means the system will
                    save as many dumps as can fit on the disk containing the dump directory.
                    The default is 5. Setting this field to 0 disables database dumps.

Start PrgTraces     Allows you to save or not save prgtraces to files. The following values apply:
                      Off   Prgtraces are not saved to files. This is the default.
                      On    Prgtraces are saved to files.

Mini Dump Type      Allows the Teradata Database system to perform a mini dump of one of the following
                    types:
                      No Dump           No dumps are performed. This is the default.
                      Stack Only Dump   Stack-only dumps are performed.
                      Full Dump         Full dumps are performed.
                    Note: This option is meaningful only on the Windows platform.
SCREEN RSS
Function
Teradata Database resource usage (ResUsages) statistics are collected by the Resource
Sampling Subsystem (RSS). These statistics are logged to special tables in the database.
The ResUsage tables fall into two categories, based on the kind of data they store:
• Node logging tables (SPMA, IPMA, and SCPU) store statistical data that applies to nodes.
• Vproc logging tables store statistical data that applies to individual vprocs on each node.
The RSS screen allows you to specify the rates and types of ResUsage data collection.
Example
>screen rss
(0) RSS Collection Rate:  600 sec
(1) Node Logging Rate:    600 sec
(2) Vproc Logging Rate:   600 sec

RSS Table Logging Enable
(3) SPMA : On     (4) IPMA : Off    (5) SCPU : Off    (9) SHST : Off
(6) SVPR : Off    (7) IVPR : Off    (8) SLDV : Off    (D) SPS  : Off
(A) SPDSK: Off    (B) SVDSK: Off    (C) SAWT : Off

RSS Summary Mode Enable
    Summarize SPMA : Off        Summarize IPMA : Off    (E) Summarize SCPU : Off
(F) Summarize SVPR : Off    (G) Summarize IVPR : Off    (H) Summarize SLDV : Off
(I) Summarize SHST : Off    (J) Summarize SPDSK: Off    (K) Summarize SVDSK: Off
(L) Summarize SAWT : Off    (M) Summarize SPS  : Off

RSS Active Row Filter Mode Enable
    Active SPMA : Off           Active IPMA : Off           Active SCPU : Off
    Active SVPR : Off           Active IVPR : Off           Active SLDV : Off
    Active SHST : Off       (N) Active SPDSK: On        (O) Active SVDSK: On
(R) Active SAWT : On        (S) Active SPS  : On
Control Fields
The RSS screen contains the following control fields.
Setting                 Description

RSS Collection Rate     The interval (in seconds) between collecting per-node and per-vproc statistics.
                        The default is 600.

Node Logging Rate       The interval (in seconds) between writing per-node statistics to the database. It must
                        be an integer multiple of the RSS Collection Rate. The default is 600.

Vproc Logging Rate      The interval (in seconds) between writing per-vproc statistics to the database. It must
                        be an integer multiple of the RSS Collection Rate. The default is 600.

RSS Table Logging       Controls whether logging is enabled to the various ResUsage tables.
Enable                  Note: For logging to occur, the RSS Collection Rate must be set to a non-zero value,
                        and the Node Logging and Vproc Logging rates must be set to integer multiples of the
                        RSS Collection Rate.
                        Only the SPMA table has logging enabled by default. Because writing rows to the
                        ResUsage tables uses system resources, Teradata recommends that you leave logging to
                        the other tables disabled until a specific need requires these statistics.

RSS Summary Mode        When logging is enabled for certain ResUsage tables, multiple rows of resource usage
Enable                  data are written during each logging period. Summary mode reduces the amount of
                        data collected per logging period by causing the RSS to store a single summary row per
                        type per node, instead of one row per logging entity.
                        For example, if regular logging is enabled for the SCPU table, separate rows storing
                        statistics for every CPU are written during each logging period. If summary mode is
                        enabled, only a single row is written for each node, regardless of the number of CPUs
                        in that node. The single row includes summary data for all node CPUs.
                        Similarly, if regular logging is enabled for the SVPR table, separate rows are written for
                        every individual vproc. If summary mode is enabled for this table, one row is written
                        for each vproc type (AMP, PE, and others).
                        Note: RSS Summary Mode is effective for a table only if RSS Table Logging is also
                        enabled.

RSS Active Row Filter   Active Row Filter Mode limits the data rows that are logged to the database. For the
Mode Enable             tables for which this option is on, only those rows whose data has been modified
                        during the current logging period will be logged.
                        For the SPS table, there are a large number of possible rows, and most of them are not
                        used at any one time. Logging the inactive rows would waste a large amount of
                        resources, so it is highly recommended that Active Row Filter Mode remain enabled
                        for this table.
                        For tables that have both active row filter mode and summary mode enabled, the
                        active row filtering is applied after the summarization. For data fields that have
                        persistent data across logging periods, the summarized rows may combine the data
                        from both active and inactive rows. In these cases, the active row filtering reports
                        summarized rows that have at least one active row contribution.
                        Note: RSS Active Row Filter Mode is effective for a table only if RSS Table Logging is
                        also enabled.
Usage Notes
• You can retrieve data that is collected but not logged using Teradata Manager or Teradata
  Performance Monitor. However, if you want to run ResUsage reports, you must select the
  logging rate and enable the appropriate tables for logging. (A minimal example of doing
  this from the Ctl prompt follows this list.)
• Fields without an alphanumeric identifier preface should not be modified.
• RSS aligns the logging periods to the clock on the top of every hour. Therefore, log values
  must divide evenly into 3600 seconds (one hour). The following set of values are legal for
  RSS collection and logging rates: 0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 30, 36,
  40, 45, 48, 50, 60, 72, 75, 80, 90, 100, 120, 144, 150, 180, 200, 225, 240, 300, 360, 400, 450,
  600, 720, 900, 1200, 1800, 3600.
• For more information on the ResUsage tables and the types of information each table
  stores, see Resource Usage Macros and Tables and Performance Management.
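The following is a minimal sketch of enabling additional ResUsage logging from the Ctl
command prompt; the rates and the table chosen (SVPR, identifier 6 on the RSS screen
shown earlier) are illustrative only:

> screen rss
> RSS Collection Rate = 300
> Vproc Logging Rate = 600
> 6 = on
> write
> quit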
SCREEN VERSION
Function
The Version screen displays the version numbers of the running PDE and Database System
software. Input fields on this screen let you specify different versions of the software to be
installed and run the next time the system is restarted.
Examples
The following is an example of Screen Version command output for Windows:
>screen version
(0)
Running PDE: 13.00.00.00
Desired PDE:
(1)
Running DBS: 13.00.00.00
Desired DBS:
(2)
Running RSG:
Desired RSG:
(3)
Running TGTW: 13.00.00.00
Desired TGTW:
(4)
Running TCHN: 13.00.00.00
Desired TCHN:
(5)
Running TDGSS: 13.00.00.00
Desired TDGSS:
The following is an example of Screen Version command output for Linux:
(0)
Running PDE: 13.00.00.00
Desired PDE:
(1)
Running DBS: 13.00.00.00
Desired DBS:
(2)
Running RSG:
Desired RSG:
(3)
Running TGTW: 13.00.00.00
Desired TGTW:
(4)
Running TCHN:
Desired TCHN:
(5)
Running TDGSS: 13.00.00.00
Desired TDGSS:
Running PDEGPL: 13.00.00.00
Desired PDEGPL:
Control Fields
The Versions screen contains the following control fields.
Setting
Description
Running PDE
The currently running version of Teradata Parallel
Database Extensions.
Running DBS
The currently running version of Teradata Database.
Running TGTW
The currently running version of Teradata Gateway
software.
Running TCHN
The currently running version of Teradata Channel
software.
Running RSG
The currently running version of Teradata Relay Services
Gateway software.
Running TDGSS
The currently running version of Teradata Database
Generic Security Services software.
Running PDEGPL (For Linux only)
The currently running GNU general public license version
of Teradata Parallel Database Extensions.
Desired PDE
Changing this field to an installed PDE version causes that
version to be run the next time Teradata Database restarts.
Desired DBS
Changing this field to an installed Teradata Database
version causes that version to be run the next time
Teradata Database restarts.
Desired TGTW
Changing this field to an installed TGTW version causes
that version to be run the next time Teradata Database
restarts.
Desired TCHN
Changing this field to an installed TCHN version causes
that version to be run the next time Teradata Database
restarts.
Desired RSG
Changing this field to an installed RSG version causes that
version to be run the next time Teradata Database restarts.
Desired TDGSS
Changing this field to an installed TDGSS version causes
that version to be run the next time Teradata Database
restarts.
Desired PDEGPL (For Linux only)
Indicates an installed PDEGPL version to be run the next
time Teradata Database restarts.
Note: Whenever a user sets the Desired PDE to a different
version, the CTL tool automatically sets the Desired
PDEGPL to the same version.
variable = setting
Purpose
The variable=setting command changes the value of the specified variable to the value of
setting. The setting must be an appropriate type for the variable being assigned, or an error is
reported.
Syntax
variable = setting
variable = ?
Usage Notes
• If you have used the SCREEN command to display a group of related control fields, modifiable fields are prefaced by alphanumeric identifiers. You can use either those identifiers or the exact field names with the variable = setting command to change the field values.
• Entering the variable name or identifier followed by =? displays information on the valid values for that variable.
• 1, yes, and true can be used as synonyms for on. 0, no, and false can be used as synonyms for off.
• Entering an empty string for string variables can cause the current value to be replaced by an empty string.
Examples
> screen dbs
(0) Minimum Node Action:       Clique-Down    (2) Unused option
(1) Minimum Nodes Per Clique:  1              (4) FSG cache Percent:          0
(3) Clique Failure:            Clique-Down    (6) Cylinder Read:              User
(5) Unused option                             (8) Restart After DAPowerFail:  Off
(7) Cylinder Slots/AMP:        8
> minimum nodes per clique=?
CTL: Minimum Nodes Per Clique
Specifies the number of nodes required for a clique to operate normally.
The legal value is `1' or more.
>
> minimum nodes per clique=3
> screen
(0) Minimum Node Action:       Clique-Down    (2) Unused option
(1) Minimum Nodes Per Clique:  3              (4) FSG cache Percent:          0
(3) Clique Failure:            Clique-Down    (6) Cylinder Read:              User
(5) Unused option                             (8) Restart After DAPowerFail:  Off
(7) Cylinder Slots/AMP:        8
> 1=2
> screen
(0) Minimum Node Action:       Clique-Down    (2) Unused option
(1) Minimum Nodes Per Clique:  2              (4) FSG cache Percent:          0
(3) Clique Failure:            Clique-Down    (6) Cylinder Read:              User
(5) Unused option                             (8) Restart After DAPowerFail:  Off
(7) Cylinder Slots/AMP:        8
WRITE
Purpose
The WRITE command saves any configuration changes made during the Ctl session back to
the source from which they were read. This is usually the PDE Control GDO, if Ctl was started
without the -file option.
Syntax
WRITE
WR
Usage Notes
The write command does not cause Ctl to exit.
Because different users may be modifying the PDE control settings at the same time, Ctl
merges only the changed settings from the current Ctl session to the PDE Control GDO. This
minimizes the chances that concurrent users will overwrite each other’s changes.
If no changes have been made during the current Ctl session, issuing a write command does
nothing.
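For example, a minimal session sketch that changes one of the DBS control fields shown earlier in this chapter and then saves it:
> minimum nodes per clique=2
> write
Only the settings changed during the session are merged into the PDE Control GDO.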
CHAPTER 9
Cufconfig Utility (cufconfig)
The Cufconfig utility, cufconfig, allows you to view and change configuration settings for the
user-defined function and external stored procedure subsystem. These configuration settings
are specified in the user-defined function globally distributed object (UDF GDO). Globally
distributed objects store global configuration settings available to all nodes of a Teradata
system. The Cufconfig utility is also known as the UDF GDO Configuration utility.
For more information about UDFs, external stored procedures, or user-defined methods
(UDMs), see SQL External Routine Programming.
Audience
Users of cufconfig include Teradata Database system administrators.
User Interfaces
Cufconfig runs on the following platforms and interfaces:
Platform
Interfaces
MP-RAS
Command line
Database Window
Windows
Command line (“Teradata Command Prompt”)
Database Window
Linux
Command line
Database Window
For general information on starting the utilities from different platforms and interfaces, see
Appendix B: “Starting the Utilities.”
Syntax
cufconfig [-h] [-i] [-f filename] [-o]
Syntax element
Description
-f filename
Modifies the field values of the UDF GDO.
The filename specifies the path to the file that is used to modify the GDO. The
file should contain a line for each field whose value is to be changed. The
available field names and current values are shown in the output of the
cufconfig -o command.
-h
Displays the cufconfig help.
-i
Initializes the UDF GDO field values to the default values of the shipped
software version.
-o
Displays the contents of the UDF GDO.
If multiple options are specified on the command line, they are processed in the following
order:
• -i
• -f
• -o
Modifications to cufconfig settings take effect after the next Teradata Database restart.
For examples of the use of cufconfig, see “Examples” on page 225.
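For example, to reset the UDF GDO to its defaults, apply a few site-specific overrides from a file, and then display the result for verification, the options can be combined in a single invocation (the file name here is hypothetical):
cufconfig -i -f /tmp/udfsettings.txt -o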
Cufconfig Fields
The following table summarizes the cufconfig utility fields.
Field Name
Description
“CLIEnvFile”
The path and name of the file containing the environmental settings
used by CLI-based external stored procedures.
“CLIIncPath”
The path of the header files required by CLI-based external stored
procedures.
“CLILibPath”
The path of the libraries required by CLI-based external stored
procedures.
“CompilerPath”
The path to the C/C++ compiler.
“CompilerTempDirectory”
The path for the intermediate files used to compile or link the UDFs,
external stored procedures, or UDMs.
“GLOPLockTimeout”
The maximum time that an external routine will wait for a lock
request to be granted on GLOP data.
“GLOPLockWait”
The maximum time that an external routine can hold a lock on
GLOP data if another external routine is waiting for the lock to be
freed.
“GLOPMemMapPath”
The location of the shared memory and cache files used by the
GLOP data feature.
“JavaBaseDebugPort”
The base port number for debugging a Java external routine via a
debugger.
“JavaEnvFile”
The environment file for Java Virtual Machine (JVM) startup.
“JavaHybridThreads”
The maximum number of threads that can be run simultaneously by
the Java hybrid server.
“JavaLibraryPath”
The root directory path where the JAR files are stored. These files are
used by Java external routines.
“JavaLogPath”
The location of the log files for Java external routines.
“JavaServerTasks”
The number of Java secure servers that can be run simultaneously
per vproc, subject to the limitations imposed by the
ParallelUserServerAMPs and ParallelUserServerPEs settings.
“JavaVersion”
The Java Native Interface (JNI) version used by Java external
routines. This value is a hexadecimal number.
“JREPath”
The Java Runtime Environment (JRE) installation path. To run Java
external routines, the Java server looks at the JREPath to find the
required executable files and JVM library files.
“JSVServerMemPath”
The location of the hybrid server control files.
“LinkerPath”
The path to the linker.
“MallocLimit”
The upper limit (in bytes) on the amount of memory that an
external routine (UDF, UDM, or external stored procedure) can
allocate using the Teradata C library function FNC_malloc.
“MaximumCompilations”
The maximum number of UDFs, external stored procedures, or
UDMs compiled simultaneously, on different sessions, on any one
node.
“MaximumGLOPMem”
The maximum amount of memory an AMP or PE can use to store
GLOP data mappings.
“MaximumGLOPPages”
The maximum number of GLOP data pages that can be allocated per
item of read-only GLOP data.
“MaximumGLOPSize”
The maximum size of an item of GLOP data.
“ModTime”
Displays the timestamp of the last UDF GDO modification.
“ParallelUserServerAMPs”
The maximum number of secure servers allowed to run
simultaneously using the same operating system user ID on an AMP
vproc.
“ParallelUserServerPEs”
The maximum number of secure servers allowed to run
simultaneously using the same operating system user ID on a PE
vproc.
“SecureGroupMembership”
The default group membership required to run the secure server
UDFs, external stored procedures, or UDMs.
“SecureServerAMPs”
The maximum number of secure servers allowed to run
simultaneously on an AMP vproc.
“SecureServerPEs”
The maximum number of secure servers allowed to run
simultaneously on a PE vproc.
“SourceDirectoryPath”
The default path of the UDF source directory when copying the UDF
source code onto the server.
“SWDistNodeID”
Displays the node ID of the software distribution node.
“TDSPLibBase”
The base library directory for stored procedures.
“UDFEnvFile”
The path and name of the file containing the environmental settings
used by UDFs, external stored procedures, or UDMs that specify a
data access clause of NO SQL in the CREATE or REPLACE
statement for the UDF, external stored procedure, or UDM.
“UDFIncPath”
The header file path in addition to the standard path for use by
UDFs, external stored procedures, or UDMs that specify a data
access clause of NO SQL in the CREATE PROCEDURE or REPLACE
PROCEDURE statement.
“UDFLibPath”
The library path in addition to the standard library path for use by
UDFs, external stored procedures, or UDMs that specify a data
access clause of NO SQL in the CREATE PROCEDURE or REPLACE
PROCEDURE statement.
“UDFLibraryPath”
The root directory path of the UDF, external stored procedure, or
UDM linked libraries.
“UDFServerMemPath”
The path to the UDF, external stored procedure, or UDM shared
memory files.
“UDFServerTasks”
The number of protected mode UDFs, external stored procedures, or
UDMs that can be run simultaneously per vproc.
“Version”
Displays the version of the UDF GDO.
CLIEnvFile
Purpose
Specifies the path and name of the file containing environment settings used by CLI-based
external stored procedures.
Default
This field has no default value.
Usage Notes
An environment file with the same path and file name must exist on all nodes of the system.
Example
The environment file contains a list of the required environment variables formatted using the
standard shell script semantics. There is no support for shell-like parameter substitution. The
following shows sample content for an environment file.
COPLIB=/usr/lib;
COPERR=/usr/lib;
CLIIncPath
Purpose
Specifies the path of the directory that contains the header files required by CLI-based external
stored procedures.
Default
Operating System
Default Path
Linux
/opt/teradata/client/include
Windows
\Program Files\Teradata\Teradata Client\cli\inc
MP-RAS
/usr/include
CLILibPath
Purpose
Specifies the path of the libraries required by CLI-based external stored procedures.
Default
Operating System
Default Path
Linux
/opt/teradata/client/lib
Windows
\Program Files\Teradata\Teradata Client\cli\lib
MP-RAS
/usr/lib
CompilerPath
Purpose
Specifies the path to the C/C++ compiler.
Note: This field should be changed only under the direction of Teradata Support Center
personnel.
Default
Directory containing the C/C++ compiler used for SQL stored procedures and external
routines that are written in C, C++, or Java.
Usage Notes
For Linux and Windows operating systems, this field is set to the path of the C/C++ compiler
used for creating external routines. MP-RAS comes with its own compiler, which the default
value for MP-RAS reflects.
CompilerTempDirectory
Purpose
Specifies the path for the intermediate files used to compile or link external routines.
Note: This field should be changed only by Teradata Support Center personnel.
Default
Operating System
Default Path
Linux systems
PDE_temp_path/UDFTemp
where PDE_temp_path is the PDE Temp directory.
Typically, this is /var/opt/teradata/tdtemp/UDFTemp/
Windows
PDE_temp_path\UDFTemp\
where PDE_temp_path is the PDE Temp directory.
Typically, this is \Program Files\Teradata\Tdat\TdTemp\UDFTemp\
MP-RAS systems
/tmp/UDFTemp/
To determine the path of the PDE Temp directory, enter the following on the command line:
pdepath -S
GLOPLockTimeout
Purpose
Specifies the maximum time, in seconds, that an external routine (UDF, UDM, or external
stored procedure) will wait for a lock request to be granted on global and persistent (GLOP)
data.
GLOP data is a type of data available to external routines. The persistence in memory of GLOP
data is based on specific boundaries, such as a user role or user account.
Valid Range
0 to 2147483647 (the maximum size of an SQL INTEGER data type)
Default
130 seconds
Usage Notes
This should be a value slightly greater than the GLOPLockWait value.
When an external routine requests a lock on GLOP data, and the timeout value specified by
GLOPLockTimeout is exceeded, the system returns a -2 error code to the external routine.
Related Topics
For more information on global and persistent data, see SQL External Routine Programming.
GLOPLockWait
Purpose
Specifies the maximum time, in seconds, that an external routine can hold a lock on global
and persistent (GLOP) data if another external routine is waiting for the lock to be freed.
When the lock exceeds the GLOPLockWait value, the transaction that was holding the lock is
aborted, and the lock is freed.
GLOP data is a type of data available to external routines. The persistence in memory of GLOP
data is based on specific boundaries, such as a user role or user account.
Valid Range
0 to 2147483647 (the maximum size of an SQL INTEGER data type)
Default
120 seconds
Usage Notes
This should be a value slightly less than the GLOPLockTimeout value.
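For example, to lengthen both lock settings while preserving the recommended relationship (GLOPLockWait slightly less than GLOPLockTimeout), a cufconfig input file applied with the -f option could contain just these two lines; the values shown are illustrative only:
GLOPLockTimeout: 190
GLOPLockWait: 180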
Related Topics
For more information on global and persistent data, see SQL External Routine Programming.
GLOPMemMapPath
Purpose
Specifies the location of the shared memory and cache files used by the global and persistent
(GLOP) data feature.
GLOP data is a type of data available to external routines. The persistence in memory of GLOP
data is based on specific boundaries, such as a user role or user account.
Caution:
This field is for informational purposes only, and should not be changed.
Related Topics
For more information on global and persistent data, see SQL External Routine Programming.
JavaBaseDebugPort
Purpose
Specifies the base number used to determine the debug port for Java external routines (Java
external stored procedures and Java user defined functions). This field is only used when
debugging Java external routines.
This field is available only on 64-bit Windows and Linux systems.
Default
0 (The JVM debug port is disabled.)
Usage Notes
The actual port number to use for attaching a debugger to a Java external routine is the value
of JavaBaseDebugPort plus an offset.
For Java external routines running on a hybrid server, the offset is always 2000, so the debug
port for these routines is always the value of JavaBaseDebugPort plus 2000.
For Java external routines running on secure servers, the offset depends on which instance of
the server is executing the routine (there can be up to 10 secure servers per vproc), and on the
type and ID of the vproc that is executing the routine.
The following table demonstrates how representative offsets are determined.
Vproc Type    Vproc ID    Secure Server Instance    Offset
PE            16383       1 through 10              0 through 9
PE            16382       1 through 10              10 through 19
PE            16381       1 through 10              20 through 29
  and so forth through
PE            16284       1 through 10              990 through 999
AMP           0           1 through 10              1000 through 1009
AMP           1           1 through 10              1010 through 1019
AMP           2           1 through 10              1020 through 1029
  and so forth through
AMP           99          1 through 10              1990 through 1999
Note: PEs with IDs less than 16284, and AMPs with IDs greater than 99 do not have debug
ports available for Java external routines.
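Stated as a formula inferred from the table above: for a PE, offset = (16383 - vproc ID) x 10 + (secure server instance - 1); for an AMP, offset = 1000 + (vproc ID x 10) + (secure server instance - 1); for the hybrid server, the offset is always 2000.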
Examples
Assume the value of JavaBaseDebugPort is 8000.
• To debug any external routine running on a hybrid server, connect the debugger to port 10000 (8000 plus the offset of 2000).
• To debug an external routine running on the second secure server on PE 16383, connect the debugger to port 8002 (8000 plus the offset of 2).
• To debug an external routine running on the fifth secure server on PE 16381, connect the debugger to port 8024 (8000 plus the offset of 24).
• To debug an external routine running on the first secure server on AMP 0, connect the debugger to port 9000 (8000 plus the offset of 1000).
Related Topics
For more information on Java external stored procedures and Java user defined functions, see
SQL External Routine Programming.
JavaEnvFile
Purpose
Specifies the environment file for Java Virtual Machine (JVM) startup.
This field is available only on 64-bit Windows and Linux systems.
Note: This field should be changed only by Teradata Support Center personnel.
Default
This field has no initial value.
Usage Notes
In the environment file, each line specifies a JavaVMInitArgs option. This permits the
database administrator to configure the JVM as needed for Java external routines.
An environment file with the same path and file name must exist on all nodes of the system.
Example
On a Windows system, the contents of the environment file would be similar to the following:
-XX:NewSize=128m
-XX:MaxNewSize=128m
-XX:SurvivorRatio=8
-Xms512m
-Xmx512m
JavaHybridThreads
Purpose
Specifies the maximum number of threads that can be run simultaneously by the Java hybrid
server.
This field is available only on 64-bit systems (Windows and Linux).
Valid Range
0 through the setting for SecureServerAMPs or SecureServerPEs, whichever is smaller
Default
20
Usage Notes
Each node can have one hybrid server that provides multiple threaded execution of protected
mode Java UDFs to all AMPs and PEs on the node. (Protected mode Java UDFs are those that
do not include the EXTERNAL SECURITY clause in their UDF SQL definitions.)
When this field is set to zero, Java UDFs cannot be run by the hybrid server, and must instead
be run by Java secure servers, which are single threaded.
If JavaHybridThreads and JavaServerTasks are both set to zero, neither Java UDFs nor Java
external stored procedures can be run.
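For example, to force all protected mode Java UDFs to run on the single-threaded Java secure servers, a cufconfig input file applied with the -f option could contain the following line; this is a sketch of the setting described above, not a recommendation:
JavaHybridThreads: 0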
Related Topics
For more information on Java external stored procedures and Java user defined functions, see
SQL External Routine Programming.
JavaLibraryPath
Purpose
Specifies the root directory path where the JAR files are stored. These files are used by Java
external routines.
This field is available only on 64-bit Windows and Linux systems.
Note: This field should be changed only by Teradata Support Center personnel.
Default
Operating System
Default Path
Linux
config_path/jarlib/
where config_path is the PDE configuration directory.
Typically, this is /etc/opt/teradata/tdconfig/jarlib/
Windows
config_path\jarlib\
where config_path is the PDE configuration directory.
Typically, this is \Program Files\Teradata\Tdat\tdconfig\jarlib\
To determine the path of the PDE configuration directory, enter the following on the
command line:
pdepath -c
JavaLogPath
Purpose
Specifies the location of the log files for Java external routines.
This field is available only on 64-bit Windows and Linux systems.
Default
Operating System
Default Path
Linux
/tmp/
Windows
\Temp\
Usage Notes
If a JavaLogPath is specified and Java logging is enabled, the standard error and standard out
of the Java server task are redirected to the files in the JavaLogPath directory for debugging
purposes.
JavaServerTasks
Purpose
Specifies the number of Java secure servers that can be run simultaneously per vproc, subject
to the limitations imposed by the ParallelUserServerAMPs and ParallelUserServerPEs settings.
This field is available only on 64-bit Windows and Linux systems.
Valid Range
0 to 20
Default
2
Usage Notes
Java secure servers can be used by Java external stored procedures and secure-mode Java
UDFs. (Secure-mode Java UDFs are those that include the EXTERNAL SECURITY clause in
their UDF SQL definitions.)
Java secure servers are single threaded. This is in contrast to Java hybrid servers, which can
execute multiple Java UDFs, each one in its own thread. The number of threads that can be
run by a Java hybrid server is controlled by the JavaHybridThreads field.
If the JavaServerTasks field is zero, you cannot run any Java external stored procedures, and
Java UDFs must be run by a Java hybrid server.
JavaServerTasks is analogous to the UDFServerTasks field used for native-based UDFs, external
stored procedures, or UDMs.
Related Topics
For more information on Java external stored procedures and Java user defined functions, see
SQL External Routine Programming.
JavaVersion
Purpose
Specifies the JNI version used by Java external routines. This value is a hexadecimal number.
This field is available only on 64-bit Windows and Linux systems.
Note: This field should be changed only by Teradata Support Center personnel.
JREPath
Purpose
Specifies the path where the Java Runtime Environment (JRE) was installed.
This field is available only on 64-bit Windows and Linux systems.
Note: This field should be changed only by Teradata Support Center personnel.
Usage Notes
To run Java external routines, the Java server looks at the JREPath to find the required
executable files and JVM library files.
The Teradata JRE is the Java runtime engine used by Teradata Database. To install the Teradata
JRE, the Teradata JRE package must be installed prior to installing the TDBMS package. The
Teradata JRE package is named differently on different operating systems.
Operating System
Teradata JRE Package Name
Linux
teradata-jre5
Windows
JRE5_64
JSVServerMemPath
Purpose
Specifies the location of the hybrid server control files.
This field is available only on 64-bit Windows and Linux systems.
Default
Operating System
Default Path
Linux
/var/opt/teradata/tdtemp/jsvsrv/
Windows
\Program Files\Teradata\Tdat\TdTemp\jsvsrv\
Usage Notes
The hybrid server uses one control shared memory file for each AMP and PE.
LinkerPath
Purpose
Specifies the path to the C/C++ linker.
Note: This field should be changed only by Teradata Support Center personnel.
MallocLimit
Purpose
Specifies the upper limit (in bytes) on the amount of memory that an external routine (UDF,
UDM, or external stored procedure) can allocate using the Teradata C library function
FNC_malloc.
Valid Range
0 to the upper limit of an unsigned long data type, for example:
Operating System
Upper Limit of Unsigned Long Data Type
Linux
18446744073709551615 bytes
(hex FFFF FFFF FFFF FFFF)
Windows and MP-RAS
4294967295 bytes
(hex FFFF FFFF)
Default
33554432 bytes (32MB)
Usage Notes
For performance reasons, a function should allocate only as much memory as absolutely
required. UDFs execute in parallel on the database. An all-AMP query that includes a UDF
which allocates 10MB of memory can use up to 1GB for all instances on a 100 AMP system for
just the one transaction.
Related Topics
For more information on the Teradata C-library functions, see SQL External Routine
Programming.
MaximumCompilations
Purpose
Specifies the maximum number of UDFs, external stored procedures, or UDMs that can be
compiled simultaneously, in different sessions, on any one node.
Default
10
Usage Notes
Teradata recommends leaving this setting at the default.
MaximumGLOPMem
Purpose
Specifies the maximum amount of memory, in pages, an AMP or PE can use to store global
and persistent (GLOP) data mappings. A page of memory is 4096 bytes.
GLOP data is a type of data available to external routines. The persistence in memory of GLOP
data is based on specific boundaries, such as a user role or user account.
Caution:
This value should be changed only under the direction of Teradata Support Center personnel.
Valid Range
Operating System
Valid Range
Linux or Windows
0 through 16777216 pages (64 GB)
MP-RAS
0 through 131072 pages (0.5 GB)
Note: System memory availability, system disk availability, and system performance
considerations can substantially reduce the upper limit.
Default
2560 pages
Usage Notes
This value must be larger than the value of MaximumGLOPSize.
Related Topics
For more information on global and persistent data, see SQL External Routine Programming.
MaximumGLOPPages
Purpose
Specifies the maximum number of global and persistent (GLOP) data pages that can be
allocated per item of read-only GLOP data. (Read/write GLOP data can consist of only a single
page.)
GLOP data is a type of data available to external routines. The persistence in memory of GLOP
data is based on specific boundaries, such as a user role or user account.
Valid Range
0 to 2147483647 (the maximum size of an SQL INTEGER data type)
Default
8 pages
Related Topics
For more information on global and persistent data, see SQL External Routine Programming.
MaximumGLOPSize
Purpose
Specifies the maximum size, in memory pages, of an item of global and persistent (GLOP)
data. A page of memory is 4096 bytes.
GLOP data is a type of data available to external routines. The persistence in memory of GLOP
data is based on specific boundaries, such as a user role or user account.
Caution:
This value should be changed only under the direction of Teradata Support Center personnel.
Valid Range
Operating System
Valid Range
Linux or Windows
0 through 511985 pages (about 2 GB)
MP-RAS
0 through 8192 pages (32 MB)
Note: System memory availability, system disk availability, and system performance
considerations can substantially reduce the upper limit.
Default
256 pages (1 MB)
Related Topics
For more information on global and persistent data, see SQL External Routine Programming.
ModTime
Purpose
Displays the timestamp of the last UDF GDO modification.
Note: This field should not be modified.
ParallelUserServerAMPs
Purpose
Specifies the maximum number of secure servers allowed to run simultaneously using the
same operating system user ID on an AMP vproc.
Valid Range
1 through the value of SecureServerAMPs
Default
2
Usage Notes
For example, a value of 2 allows two UDFs using identical operating system user IDs to run
simultaneously. A third UDF would have to wait until one of the other UDFs finishes.
Related Topics
For information on the maximum number of secure servers allowed to run simultaneously
using the same operating system user ID on a PE vproc, see the UDF GDO
field, “ParallelUserServerPEs” on page 211.
ParallelUserServerPEs
Purpose
Specifies the maximum number of secure servers allowed to run simultaneously using the
same operating system user ID on a PE vproc.
Valid Range
1 through the value of SecureServerPEs
Default
2
Usage Notes
For example, a value of 2 allows two external stored procedures using identical operating
system user IDs to run simultaneously. A third external stored procedure would have to wait
until one of the other external stored procedures finishes.
Related Topics
For information on the maximum number of secure servers allowed to run simultaneously
using the same operating system user ID on an AMP vproc, see “ParallelUserServerAMPs” on
page 210.
SecureGroupMembership
Purpose
Sets the default group membership required to run the secure server UDFs, external stored
procedures, or UDMs.
Default
tdatudf
Note: The tdatudf group is always present on the server and is created when the database is
installed. It is used for protected mode UDF servers and is independent of the
SecureGroupMembership field setting.
Usage Notes
The authorized operating system user must be a member of the group defined by the
SecureGroupMembership field. However, this group does not have to be the primary group of
the user.
The group does not have to be defined on the server. It can be defined using the operating
system user authentication mechanism used by the site. On Windows, the fully qualified
group name, including the domain name, must be specified if the group is not a local group
on the server.
Example
SecureGroupMembership: CorpSales/UDFApplications
SecureServerAMPs
Purpose
Specifies the maximum number of secure servers allowed to run simultaneously on an AMP
vproc.
Valid Range
0 through 120
Default
20
Usage Notes
Teradata recommends this be set to a value between 15 and 30.
SecureServerPEs
Purpose
Specifies the maximum number of secure servers allowed to run simultaneously on a PE
vproc.
Valid Range
0 through 120
Default
20
Usage Notes
The required number of secure servers is limited to the value of the SecureServerPEs field.
SourceDirectoryPath
Purpose
Specifies the default path of the UDF source directory when copying the UDF source code
onto the server.
Default
Operating System
Default Path
Linux
config_path/Teradata/tdbs_udf/usr/
where config_path is the PDE configuration directory.
Typically, this is /etc/opt/teradata/tdconfig/Teradata/tdbs_udf/usr/
Windows
config_path\Teradata\tdbs_udf\usr\
where config_path is the PDE configuration directory.
Typically, this is \Program Files\Teradata\Tdat\tdconfig\Teradata\tdbs_udf\usr\
MP-RAS
/Teradata/tdbs_udf/usr/
To determine the path of the PDE configuration directory, enter the following on the
command line:
pdepath -c
Note: Teradata recommends leaving this set to the default value.
Usage Notes
The UDF source code is located on the software distribution node.
SWDistNodeID
Purpose
Displays the node ID of the software distribution node.
Note: This field is for informational purposes only, and should not be changed.
Usage Notes
The node ID is the location of the source or object code when using the server option of the
EXTERNAL clause in a CREATE statement for a UDF, external stored procedure, or UDM.
For example:
SS!myudf!myudf.c
TDSPLibBase
Purpose
Specifies the base library directory for stored procedures.
Note: This field should be changed only by Teradata Support Center personnel.
Default
Operating System
Default Path
Linux
config_path/tdsplib/
where config_path is the PDE configuration directory.
Typically, this is /etc/opt/teradata/tdconfig/tdsplib/
Windows
config_path\tdsplib\
where config_path is the PDE configuration directory.
Typically, this is \Program Files\Teradata\Tdat\tdconfig\tdsplib\
MP-RAS
config_path/tdsplib/
where config_path is the PDE configuration directory.
Typically, this is /ntos/tdsplib
To determine the path of the PDE configuration directory, enter the following on the
command line:
pdepath -c
UDFEnvFile
Purpose
Specifies the path and name of the file containing the environment settings used by UDFs,
external stored procedures, or UDMs that specify a data access clause of NO SQL in the
CREATE or REPLACE statement for the UDF, external stored procedure, or UDM.
Default
This field has no default value.
Usage Notes
An environment file with the same path and file name must exist on all nodes of the system.
Example
The environment file contains a list of the required environment variables formatted using the
standard shell script semantics. There is no support for shell-like parameter substitution. The
following shows sample content for an environment file.
COPLIB=/usr/lib;
COPERR=/usr/lib;
UDFIncPath
Purpose
Specifies a header file path in addition to the standard path for use by UDFs, external stored
procedures, or UDMs that specify a data access clause of NO SQL in the CREATE
PROCEDURE or REPLACE PROCEDURE statement.
Default
This field has no default value.
UDFLibPath
Purpose
Specifies a library path in addition to the standard library path for use by UDFs, external
stored procedures, or UDMs that specify a data access clause of NO SQL in the CREATE
PROCEDURE or REPLACE PROCEDURE statement.
Default
This field has no default value.
UDFLibraryPath
Purpose
Specifies the root directory path of the UDF, external stored procedure, or UDM linked
libraries.
Note: This field should be changed only by Teradata Support Center personnel.
Default
Operating System
Default Path
Linux
config_path/udflib/
where config_path is the PDE configuration directory.
Typically, this is /etc/opt/teradata/tdconfig/udflib/
Windows
config_path\udflib\
where config_path is the PDE configuration directory.
Typically, this is \Program Files\Teradata\Tdat\tdconfig\udflib\
MP-RAS
config_path/udflib/
where config_path is the PDE configuration directory.
Typically, this is /ntos/udflib
To determine the path of the PDE configuration directory, enter the following on the
command line:
pdepath -c
UDFServerMemPath
Purpose
Specifies the path to the UDF, external stored procedure, or UDM shared memory files.
Note: This field should be changed only by Teradata Support Center personnel.
Default
Operating System
Default Path
Linux
PDE_temp_path/udfsrv/
where PDE_temp_path is the PDE Temp directory.
Typically, this is /var/opt/teradata/tdtemp/udfsrv/
Windows
PDE_temp_path\udfsrv\
where PDE_temp_path is the PDE Temp directory.
Typically, this is \Program Files\Teradata\Tdat\TdTemp\udfsrv\
MP-RAS
PDE_temp_path/udfsrv/
where PDE_temp_path is the PDE Temp directory.
Typically, this is /tmp/udfsrv/
To determine the path of the PDE Temp directory, enter the following on the command line:
pdepath -S
UDFServerTasks
Purpose
Determines the number of protected mode UDFs, external stored procedures, or UDMs that
can be run simultaneously per vproc.
Valid Range
0 through 20
Default
2
Usage Notes
Each routine is a separate process that runs as the built-in operating system user named
tdatuser.
If the UDFServerTasks field is zero, no C or C++ protected mode UDFs, external stored
procedures, or UDMs can run.
Version
Purpose
Specifies the version of the UDF GDO. The version only changes when new fields are added or
modified.
Note: This field should not be modified.
Examples
Example 1
To display the cufconfig help, type the following:
cufconfig -h
The following appears:
[Teradata banner]
Release 13.00.00.00 Version 13.00.00.00
UDF GDO Configuration Utility (Mar 2006)
cufconfig may be used from the dbw
or from the command line
valid options:
-o
outputs the gdo to the screen.
-i
initializes the gdo to the defaults.
-f filename
modifies the gdo fields identified
in filename. Fields not specified in filename
will not be set or reset by this tool.
Note that if several options are specified
they are analyzed in the following order:
-i -f -o
If only a few fields need to be set differently than
the defaults, specify the -i option with the -f option.
Use the -o option to verify the settings.
Exiting cufconfig...
Example 2
To display the contents of the UDF GDO, type the following:
cufconfig -o
On a 64-bit Windows system, the output is similar to the following:
Version: 6
ModTime: 1206484172
SWDistNodeID: 33
SourceDirectoryPath: E:\Program Files\Teradata\Tdat\tdconfig\Teradata\tdbs_udf\usr\
CompilerTempDirectory: E:\Program Files\Teradata\Tdat\TdTemp\UDFTemp\
UDFLibraryPath: E:\Program Files\Teradata\Tdat\tdconfig\udflib\
CompilerPath: E:\Program Files (x86)\Microsoft Visual Studio 8\VC\bin\amd64\CL.EXE
LinkerPath: E:\Program Files (x86)\Microsoft Visual Studio 8\VC\bin\amd64\LINK.EXE
UDFServerMemPath: E:\Program Files\Teradata\Tdat\TdTemp\udfsrv\
MaximumCompilations: 10
UDFServerTasks: 2
SecureServerAMPs: 20
ParallelUserServerAMPs: 2
SecureServerPEs: 20
ParallelUserServerPEs: 2
TDSPLibBase: E:\Program Files\Teradata\Tdat\tdconfig\tdsplib\
SecureGroupMembership: tdatudf
UDFLibPath:
UDFIncPath:
UDFEnvFile:
CLILibPath: C:\Program Files\Teradata\Teradata Client\cliv2\lib
CLIIncPath: C:\Program Files\Teradata\Teradata Client\cliv2\inc
CLIEnvFile:
JavaLibraryPath: E:\Program Files\Teradata\Tdat\tdconfig\jarlib\
JREPath: C:\Program Files\Teradata\jvm\jre5_64\
JavaLogPath: c:\Temp\
JavaEnvFile:
JavaServerTasks: 2
JavaHybridThreads: 20
JavaVersion: 0x10004
JavaBaseDebugPort: 0
JSVServerMemPath: E:\Program Files\Teradata\Tdat\TdTemp\jsvsrv\
MallocLimit: 33554432
GLOPLockTimeout: 130
GLOPLockWait: 120
MaximumGLOPSize: 256
MaximumGLOPMem: 2560
MaximumGLOPPages: 8
GLOPMapMemPath: E:\Program Files\Teradata\Tdat\TdTemp\udfglopdata
On a Linux system, the output is similar to the following:
Version: 6
ModTime: 1205003529
SWDistNodeID: 33
SourceDirectoryPath: /etc/opt/teradata/tdconfig/Teradata/tdbs_udf/usr/
CompilerTempDirectory: /var/opt/teradata/tdtemp/UDFTemp/
UDFLibraryPath: /etc/opt/teradata/tdconfig/udflib/
CompilerPath: /usr/bin/gcc
LinkerPath: /usr/bin/ld
UDFServerMemPath: /var/opt/teradata/tdtemp/udfsrv/
MaximumCompilations: 10
UDFServerTasks: 2
SecureServerAMPs: 20
ParallelUserServerAMPs: 2
SecureServerPEs: 20
ParallelUserServerPEs: 2
TDSPLibBase: /etc/opt/teradata/tdconfig/tdsplib/
SecureGroupMembership: tdatudf
UDFLibPath:
UDFIncPath:
UDFEnvFile:
CLILibPath: /opt/teradata/client/lib64
CLIIncPath: /opt/teradata/client/include
CLIEnvFile:
JavaLibraryPath: /etc/opt/teradata/tdconfig/jarlib/
JREPath: /opt/teradata/jvm64/jre5/jre
JavaLogPath: /tmp/
JavaEnvFile:
JavaServerTasks: 2
JavaHybridThreads: 20
JavaVersion: 0x10004
JavaBaseDebugPort: 0
JSVServerMemPath: /var/opt/teradata/tdtemp/jsvsrv/
MallocLimit: 33554432
GLOPLockTimeout: 130
GLOPLockWait: 120
MaximumGLOPSize: 256
MaximumGLOPMem: 2560
MaximumGLOPPages: 8
GLOPMapMemPath: /var/opt/teradata/tdtemp/udfglopdata/
On an MP-RAS system, the output is similar to the following:
Version: 6
ModTime: 1210967816
SWDistNodeID: 2564
SourceDirectoryPath: /Teradata/tdbs_udf/usr/
CompilerTempDirectory: /tmp/UDFTemp/
UDFLibraryPath: /ntos/udflib/
CompilerPath: /usr/bin/cc
LinkerPath: /usr/bin/ld
UDFServerMemPath: /tmp/udfsrv/
MaximumCompilations: 10
UDFServerTasks: 2
SecureServerAMPs: 20
ParallelUserServerAMPs: 2
SecureServerPEs: 20
ParallelUserServerPEs: 2
TDSPLibBase: /ntos/tdsplib/
SecureGroupMembership: tdatudf
UDFLibPath:
UDFIncPath:
UDFEnvFile:
CLILibPath: /usr/lib
CLIIncPath: /usr/include
CLIEnvFile:
MallocLimit: 33554432
GLOPLockTimeout: 130
GLOPLockWait: 120
MaximumGLOPSize: 256
MaximumGLOPMem: 2560
MaximumGLOPPages: 8
GLOPMapMemPath: /tmp/udfglopdata
Example 3
Note: Before you begin, make sure your computer has a minimum of 256 KB of free disk
space to run the protected or secure mode server process. Each server process requires 256 KB
of space.
Do the following steps to modify the fields of the UDF GDO:
1. Open a new text file, or open an existing UDF configuration file.
2. In a text editor, type the fields and field values you want to modify. For example:
UDFServerTasks:10
ParallelUserServerAMPs:4
ParallelUserServerPEs:4
SecureGroupMembership:abc.corp\udf.users
UDFEnvFile:
C:\Documents and Settings\sally\My Documents\standard\UDFEnvFile.txt
CLILibPath:
CLIIncPath:
CLIEnvFile:
C:\Documents and Settings\sally\My Documents\CLI\CLIEnvFile.txt
JavaBaseDebugPort: 8000
JavaLogPath: C:\Temp\
JavaEnvFile: C:\Temp\jvmenv.txt
JavaServerTasks: 10
Note: The field names are case sensitive and must follow the same format as the output
from the -o option. The UDF configuration file should contain only the fields you want to
modify, with each field on a separate line as shown above.
3. Save the UDF configuration file as a text file on the server you are logged on to. For example, save the file as:
C:\temp\udfsettings.txt
Note: The name of the file must follow the same format as shown in the
-f filename option and must be appropriate to the current platform.
4. Use the -f option to modify the UDF GDO with the field values specified in the UDF configuration file, and use the -o option to display the updated contents of the UDF GDO. For example:
cufconfig -f C:\temp\udfsettings.txt -o
Note: The path specified is case sensitive.
Using the UDF GDO contents in “Example 2,” if you apply the field changes from this
example, the new output of the UDF GDO would be the following:
Version: 6
ModTime: 1206484172
SWDistNodeID: 33
SourceDirectoryPath: E:\Program Files\Teradata\Tdat\tdconfig\Teradata\tdbs_udf\usr\
CompilerTempDirectory: E:\Program Files\Teradata\Tdat\TdTemp\UDFTemp\
UDFLibraryPath: E:\Program Files\Teradata\Tdat\tdconfig\udflib\
CompilerPath: E:\Program Files (x86)\Microsoft Visual Studio 8\VC\bin\amd64\CL.EXE
LinkerPath: E:\Program Files (x86)\Microsoft Visual Studio 8\VC\bin\amd64\LINK.EXE
UDFServerMemPath: E:\Program Files\Teradata\Tdat\TdTemp\udfsrv\
MaximumCompilations: 10
UDFServerTasks: 10
SecureServerAMPs: 20
ParallelUserServerAMPs: 4
SecureServerPEs: 20
ParallelUserServerPEs: 4
TDSPLibBase: E:\Program Files\Teradata\Tdat\tdconfig\tdsplib\
SecureGroupMembership: abc.corp\udf.users
UDFLibPath:
UDFIncPath:
UDFEnvFile: C:\Documents and Settings\sally\My Documents\standard\UDFEnvFile.txt
CLILibPath:
CLIIncPath:
CLIEnvFile: C:\Documents and Settings\sally\My Documents\CLI\CLIEnvFile.txt
JavaLibraryPath: E:\Program Files\Teradata\Tdat\tdconfig\jarlib\
JREPath: C:\Program Files\Teradata\jvm\jre5_64\
JavaLogPath: C:\Temp\
JavaEnvFile: C:\Temp\jvmenv.txt
JavaServerTasks: 10
JavaHybridThreads: 20
JavaVersion: 0x10004
JavaBaseDebugPort: 8000
JSVServerMemPath: E:\Program Files\Teradata\Tdat\TdTemp\jsvsrv\
MallocLimit: 33554432
GLOPLockTimeout: 130
GLOPLockWait: 120
MaximumGLOPSize: 256
MaximumGLOPMem: 2560
MaximumGLOPPages: 8
GLOPMapMemPath: E:\Program Files\Teradata\Tdat\TdTemp\udfglopdata
Example 4
This example illustrates a use of the JavaEnvFile environment file. By default, a DBS Java
external stored procedure uses the tdgssconfig.jar file located in the TDBMS\bin directory
when connecting to the DBS session. To have the DBS use nondefault values for the
tdgssconfig.jar file, copy, modify, and place the file in the tdconfig directory on each node.
Then create a JavaEnvFile to configure the JVM to use the copy of the tdgssconfig.jar file.
Note: When modifying the tdgssconfig.jar file, make a copy of it. The file may be overwritten
when the DBS package is re-installed.
In this case, the contents of the JavaEnvFile on a Windows system would be as follows:
-Djava.class.path=C:\\Program Files\\Teradata\\Tdat\\LTDBMS\\bin\\javFnc.jar;C:\\Program Files\\Teradata\\Tdat\\LTDBMS\\bin\\terajdbc4.jar;C:\\Program Files\\Teradata\\Tdat\\LTDBMS\\bin\\tdgssjava.jar;C:\\Program Files\\Teradata\\Tdat\\tdconfig\\tdgssconfig.jar;
On a Linux system, the contents of the JavaEnvFile would be as follows:
-Djava.class.path=/usr/tdbms/bin/javFnc.jar;/usr/tdbms/bin/terajdbc4.jar;/usr/tdbms/bin/tdgssjava.jar;/etc/opt/teradata/tdconfig/tdgssconfig.jar;
This configures the JVM to use the standard javFnc.jar, terajdbc4.jar, and tdgssjava.jar files,
but use the alternate tdgssconfig.jar file.
For more information about the Teradata JDBC driver, see Teradata JDBC Driver User Guide.
For more information about the javFnc.jar file, see SQL External Routine Programming.
CHAPTER 10
Database Initialization Program
(DIP)
The Database Initialization Program, DIP, is a series of executable script files packaged with
Teradata Database. Each file creates one or more system users, databases, macros, tables, and
views, for use by the following:
• Teradata Database:
  • To implement a feature, such as security access logging, client character sets, calendar arithmetic, and so forth
  • To store system information, such as PDE crashdumps, resource usage, query text, error message text, and so forth
• Users:
  • To view the contents of the Data Dictionary (system catalog) tables
  • To generate system reports, such as resource usage statistics
DIP allows you to execute one, several, or all script files, which create a variety of database
objects. All the DIP scripts you need are executed during Teradata Database installation, but
you can run the DIP utility at any time to add optional features.
Audience
Users of DIP include the following:
• Teradata Database developers
• Teradata Database system test and verification personnel
• Field engineers
• Teradata Database system administrators
User Interfaces
DIP runs on the following platforms and interfaces:
Platform
Interfaces
MP-RAS
Database Window
Windows
Database Window
Linux
Database Window
For general information on starting the utilities from different platforms and interfaces, see
Appendix B: “Starting the Utilities.”
Running DIP from DBW
DIP runs from Database Window on MP-RAS, Windows, and Linux. After starting DIP and
entering the password for user DBC on your system, the DIP utility presents a numbered list
of available DIP scripts. Type the number of the script to be executed, and press Enter.
Depending on what script you selected, a message similar to the following appears:
Executing DIPERR at Tue Feb 20 13:04:29 2001
Please wait...
DIPERR is complete at Tue Feb 20 13:04:46 2001
Please review the results in /tmp/dip1.txt on node 1- 4
Note: The results of the script are saved into a text file, which is named using DIP and the
number of the script.
To run another script, type Y and press Enter, then continue as above. To exit DIP, type N and
press Enter.
DIP Scripts
The following table provides a summary of the script files, the objects they create, and their
purpose.
SQL Script File
Items Created
Purpose
DIPACC
DBC.AccLogRule macro.
• Runs the access logging script.
• Enables a user who has the EXECUTE privilege on the
DBC.AccLogRule macro to submit BEGIN LOGGING and END
LOGGING statements, which are used to enable and disable logging
of privilege checking.
DIPACR
Access rights.
Loads and initializes data access rights.
DIPALL
Runs all scripts listed in the DIP screen above its own entry.
Runs all DIP scripts except the following:
• DIPACC
• DIPPATCH
DIPCAL
• Sys_Calendar database
• CalDates table
• Calendar view
Loads the calendar tables and views. Used for date arithmetic in OLAP
and other functions.
DIPCCS
Client character sets
• Loads client character set definitions.
• Creates pre-defined translation tables for non-English languages,
including Japanese, Unicode, and various European languages.
• For more information on character sets, see International Character
Set Support.
DIPCRASH
Crashdumps database and table in
database DBC
• Loads the crashdumps database.
• Default location for storing PDE crashdumps.
DIPDEM
• The system database, SYSLIB, if SYSLIB does not already exist
• The following items in the SYSLIB database:
  Tables
  • dem
  • demddl
  • dempart
  Stored Procedures
  • installsp
  • savepkg
  • loadpkg
  External Stored Procedures
  • Procedures for use with Query Banding
  User-Defined Functions (UDFs)
  • installpkg
  • UDFs for use with Query Banding and Workload Management
• A release-dependent number of Teradata DEM objects
Loads tables, stored procedures, and a UDF that enable propagation, backup, and recovery of database extensibility mechanisms (DEMs). DEMs include stored procedures, UDFs, and UDTs that are distributed as packages, which can be used to extend the capabilities of Teradata Database.
Also loads UDFs and external stored procedures for use with Query Banding and Workload Management. For more information on these UDFs and external stored procedures, see Workload Management API: PM/API and Open API.
Note: GLOP support is available only on Linux.
Successful creation of the stored procedures, external stored procedures, and UDFs requires a C compiler to be available on the Teradata system.
DIPERR
Error message logs
Loads error message files for storing the text of messages generated by
Teradata Database components, software, and client connections.
DIPGLOP
• DBCExtension database
• The following items in the DBCExtension database:
  Tables
  • GLOP_Map
  • GLOP_Set
  • GLOP_Data
  Stored Procedures
  • GLOP_Add
  • GLOP_Remove
  • GLOP_Change
  • GLOP_Report
Enables global and persistent (GLOP) data support.
GLOP data is a type of data available to external routines. The persistence in memory of GLOP data is based on specific boundaries, such as a user role or user account.
For more information on GLOP data, see SQL External Routine Programming.
Note: GLOP support is available only on Linux.
Successful creation of the stored procedures requires that there be a C compiler on the Teradata system.
DIPOLH
Online help
Loads online help text messages.
DIPOCES
• Dictionary table with standard cost profile data
• View definitions
• Macro definitions
Initializes OCES metadata with standard cost profile definitions, which include the following:
• V2R4 - Legacy Type 1 costing with original (V2R1) CPU coefficients
• V2R5/V2R6 - Type 1 costing with updated (V2R5) CPU coefficients
• Teradata12/T2_32Bit - Type 2 costing with new cost formulas and coefficients for 32 bit platforms
• T2_Linux64 - Type 2 costing with cost formulas and coefficients for 64 bit Linux platforms
• T2_Win64 - Type 2 costing with cost formulas and coefficients for 64 bit Windows platforms
Note: Reverting to earlier versions of cost profiles should only be done under direction of Teradata Support Center personnel.
DIPPATCH
Stand-alone patches
Loads stand-alone patches.
DIPPWRSTNS
Allows system administrators to create a list of restricted words that are prohibited from use in new or modified passwords.
Initializes table DBC.PasswordRestrictions (which is created automatically during system initialization).
DIPRCO
• views
• secondary indexes
• check constraints (restrictions
on the range of values specific
table columns will accept)
These items are related to the
Reconfig tables in the Data
Dictionary
Reserved for future development.
DIPRSS
ResUsage tables
Loads stored sample Resource Usage table data.
DIPRUM
ResUsage macros and views
Loads Resource Usage views and macros and enables you to look at and
generate reports from the data stored in the ResUsage system tables.
Note: If your Teradata platform has co-existence nodes, during tpa
installation or upgrade the identifiers for each node group should be
defined in the CASE clauses of the appropriate view definitions in the
diprum.bteq file.
DIPSQLJ
• SQLJ system database
• The following external stored procedures in the SQLJ database:
  • INSTALL_JAR
  • REPLACE_JAR
  • REMOVE_JAR
  • ALTER_JAVA_PATH
  • REDISTRIBUTE_JAR
• The following views in the SQLJ database:
  • JAR_JAR_USAGE
  • JARS
  • ROUTINE_JAR_USAGE
Enables Java-based external stored procedures.
For more information on programming and administration of Java classes and JAR files for Java external stored procedures, see SQL External Routine Programming.
For more information on defining Java external stored procedures using CREATE and REPLACE PROCEDURE, see SQL Data Definition Language.
The new views are based on tables in the Data Dictionary. For more details on these views, see Data Dictionary.
Note: Successful creation of the external stored procedures requires a C compiler to be available on the Teradata system.
DIPSYSFE
• SystemFE macros
• Macros and tables used with Target Level Emulation
• Loads SystemFE macros and Target Level Emulation-related tables.
• Generates diagnostic reports used by Teradata support personnel.
• Enables implementation of Target Level Emulation for analyzing the performance of your SQL applications.
DIPUDT
• The SYSUDTLIB database
• The following macros in the SYSUDTLIB database:
  • HelpDependencies
  • HelpUdtCastMacro
  • HelpUdtCastMacro_Src
  • HelpUdtCastMacro_Dst
  • ShowUdtCastMacro
• Five types of Period data types and their associated methods
• The SYSSPATIAL database (Windows and Linux only)
• ST_Geometry and MBR geospatial data types and their associated methods (Windows and Linux only)
Creates SYSUDTLIB, the system database for Period data types and all User Defined Types (UDTs), User Defined Methods (UDMs), and the User Defined Functions (UDFs) that are used by UDTs as Cast, Ordering, or Transform routines.
On Windows and Linux only, creates SYSSPATIAL, the system database for geospatial tables, stored procedures, and UDFs.
235
Chapter 10: Database Initialization Program (DIP)
DIP Scripts
SQL Script File
Items Created
Purpose
DIPVIEW
• The SYSLIB database with 0
PERM space
• User-accessible views defined
on the Teradata Database
system tables
• Special DBQL objects,
including the control macro,
the query log tables, and views
into the logs
• DBC.LogonRule macro
• Creates the system database SYSLIB for User Defined Functions.
• Loads system views which let you investigate the contents of the
system tables in the Teradata Database data dictionary. (System views
are created in database DBC; thus, a fully-qualified reference,
required by users other than DBC, is DBC.viewname.)
• Creates DBQL objects in database DBC.
• Enables a user who has the EXECUTE privilege on the
DBC.LogonRule macro to submit the following SQL:
• GRANT LOGON/REVOKE LOGON statements
• WITH NULL PASSWORD clause,which is required to implement
external authentication. On channel-connected mainframes, null
passwords also require TDP exits, as described in Teradata
Director Program Reference
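For example, once DIPVIEW has been run, the following sketch shows the kind of SQL it enables (the user name appuser and the particular view query are illustrative only): querying a system view with a fully-qualified DBC name, and a user who holds EXECUTE on DBC.LogonRule granting a null-password logon for external authentication:

SELECT DatabaseName, TableName
FROM DBC.Tables
WHERE DatabaseName = 'DBC';

GRANT LOGON ON ALL TO appuser WITH NULL PASSWORD;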
Examples
Example 1
The following is an example of running the DIPERR (Error Messages) script.
(Teradata banner displayed by the DIP utility)
Release 13.00.00.00 Version 13.00.00.00
DIP Utility (Jul 95)
Type the password for user DBC or press the Enter key to quit:
***
*** Logon successfully completed.
Select one of the following DIP SQL scripts to execute
or press the Enter key to quit:
 1. DIPERR     - Error Messages
 2. DIPDEM     - UDF/UDT/XSP/SPL Macros
 3. DIPRSS     - ResUsage Tables
 4. DIPVIEW    - System Views
 5. DIPOLH     - Online Help
 6. DIPSYSFE   - System FE Macros
 7. DIPACR     - Access Rights
 8. DIPCRASH   - CrashDumps Database
 9. DIPRUM     - ResUsage Views/Macros
10. DIPCAL     - Calendar Tables/Views
11. DIPCCS     - Client Character Sets
12. DIPOCES    - Cost Profiles
13. DIPUDT     - UDT Macros
14. DIPSQLJ    - SQLJ Views/Procedures
15. DIPPWRSTNS - Password Restrictions
16. DIPRCO     - Reconfig
17. DIPALL     - All of the above
18. DIPACC     - Access Logging
19. DIPGLOP    - GLOP Tables/Procedures
20. DIPPATCH   - Stand-alone patches
1
Executing DIPERR at Wed Apr 19 13:40:47 2008
Please wait...
DIPERR is complete at Wed Apr 19 13:42:47 2008
Please review the results in /tmp/dip1.txt on node 1- 4
Would you like to execute another DIP script (Y/N)?
n
Exiting DIP...
Example 2
The following is the output of the DIPACC (Access Logging) script from the Database
Window on Windows.
Executing DIPACC at Tue Aug 01 18:06:52 2006
Please wait...
DIPACC is complete at Tue Aug 01 18:06:55 2006
Please review the results in
D:\Program Files\Teradata\Tdat\TdTemp\dip15.txt on node 1- 1
Would you like to execute another DIP script (Y/N)?
n
Exiting DIP...
The following is the script session output stored in the text file dip15.txt.
TSTSQL here. Enter your logon or TSTSQL command:
logon dbc,***;
*** Logon successfully completed.
*** Time was 1 second.
TSTSQL -- Enter DBC/SQL request or TSTSQL command:
run file = "D:\Program Files\Teradata\TDAT\TDBMS\13.00.00.00\ETC\dipacc.bteq";
TSTSQL -- Enter DBC/SQL request or TSTSQL command:
database dbc;
database dbc;
*** New default database accepted.
*** Time was 0 seconds.
TSTSQL -- Enter DBC/SQL request or TSTSQL command:
/* Added 11-09-92 to test Deadlock problem : DDT 1863 */
***********************************************************
*    D I P A C C      G0_00
*
*    D I P   I N P U T   S C R I P T   Supplement
*
*    History:
*       G0_00  88Oct10  TAW  DCR4661  Created
*
*    Notes: This DIP script contains the macro which,
*    when installed, enables the access logging
*    function of the DBC (BEGIN/END LOGGING).
*    If this macro is not installed, the feature
*    will not operate.
*
*    To install the feature, two steps are required.
*       1) Execute this script to define the macro.
*       2) RESTART the DBC so that it can figure out
*          that the feature is installed.
*
*    For those sites who used to use the SecurityLog
*    facility and wish to continue to do so, this
*    feature must be installed. Following
*    installation, the following command will cause
*    the new feature to log the same statements as
*    the SecurityLog feature used to:
*
*       BEGIN LOGGING WITH TEXT
*          ON EACH USER, DATABASE, GRANT;
*
*    Of course if more or less logging is desired,
*    the full function of BEGIN and END LOGGING
*    statements may be used.
*
*    It is strongly recommended that the site NOT
*    install the macro unless they intend to use
*    the access logging function. There is a
*    performance penalty for having the feature
*    installed even if little or no logging is
*    actually being performed.
***********************************************************
*****************************************************
*
* The AccLogRule macro exists to allow the system
* administrator to control the use of the
* BEGIN/END LOGGING statements. The statements
* will only be allowed if the user has EXEC
* privilege on this macro.
*
* It also serves as a switch for the access logging
* facility. If this macro is not installed, the
* feature will not operate.
*
*****************************************************
REPLACE MACRO DBC.AccLogRule AS (;);
REPLACE MACRO DBC.AccLogRule AS (;);
*** Macro has been replaced.
*** Time was 0 seconds.
TSTSQL -- Enter DBC/SQL request or TSTSQL command:
COMMENT ON MACRO DBC.AccLogRule AS
'The AccLogRule macro exists to allow the DBA to grant execute
Warning - Request not complete due to unterminated quote string.
privilege for the BEGIN/END LOGGING statements.';
COMMENT ON MACRO DBC.AccLogRule AS
'The AccLogRule macro exists to allow the DBA to grant execute privilege for
the BEGIN/END LOGGING statements.';
*** Comment has been set.
*** Time was 0 seconds.
TSTSQL -- Enter DBC/SQL request or TSTSQL command:
TSTSQL -- Enter DBC/SQL request or TSTSQL command:
quit;
*** You are now logged off from the DBC.
TSTSQL exiting
CHAPTER 11
Database Window (xdbw)
Teradata Database Window (xdbw), abbreviated throughout the documentation as DBW,
allows system administrators to control the operation of Teradata Database.
DBW provides a graphical user interface to the Teradata Console Subsystem (CNS). Use DBW
to issue database commands and run many of the database utilities. CNS is a part of the
Parallel Database Extensions (PDE) software upon which the database runs.
The following figure illustrates the logical relationship among DBW, CNS, and Teradata
Database.
[Figure: DBW communicates with Teradata Database through CNS]
Audience
Users of DBW include Teradata Database administrators, system operators, and support
personnel.
User Interfaces
DBW runs on the following platforms and interfaces.
Platform     Interfaces
MP-RAS       Command line
Windows      Command line (“Teradata Command Prompt”)
Linux        Command line
Type xdbw to start DBW.
For general information on starting the utilities from different platforms and interfaces, see
Appendix B: “Starting the Utilities.”
Running DBW
You can run DBW from the following locations:
• System Console
• Administration Workstation (AWS)
• Remote workstation or computer
If you are not at the system console (SMP systems) or at an administrative workstation
(MPP systems), use a remote console application like Telnet (MP-RAS or Linux) or remote
desktop terminal services (Windows) to log on to a node on the system.
Related Topics
For instructions on how to run DBW on MP-RAS, Linux, and Windows, see Appendix B: “Starting the Utilities.”
For instructions on how to use database commands, see “Commands” on page 246.
For instructions on how to start Teradata Database utilities from DBW, see “START” on page 291.
DBW Main Window
The following table describes the three main areas of the DBW main window.
Menu Bar
Contains two pull-down menus:
• File
• Help

Status line
Contains:
• the connection identifier. This consists of the connection number by which this DBW console is known, followed by a slash and the total number of DBW consoles connected to the system.
• the name of the node to which this DBW is connected.
• the current status of the system.
Note: When the PDE is running, the Supervisor Status line shows the status as Reading to allow you to enter commands into the Supervisor window.

Buttons
Contains the following subwindows:
• Four DBW application windows. In each DBW application window, you can run one database utility or program at a time.
• DBS I/O window. This window contains messages from database programs that are not running in DBW application windows.
• Supervisor window. In the Supervisor window, you can run commands and execute utilities.
Note: Database commands run in the Supervisor window. Database utilities run in one of the DBW application windows.
These subwindows can communicate simultaneously with the Teradata Database.
Granting CNS Permissions
You can run DBW as a root or a nonroot user. Access to CNS is controlled by the GRANT,
REVOKE, and GET PERMISSIONS commands.
If you log on as root, or if the root user has granted all permissions to nonroot users, you can
run DBW from any TPA node. Nonroot users must have all permissions granted to them on
the control node.
To grant all permissions to a nonroot user in DBW, type:
grant xxx@localhost all
where xxx is the user ID.
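For example, to grant all CNS permissions to a hypothetical nonroot user jsmith on the control node, type:
grant jsmith@localhost all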
Warning:
The permissions should be granted or revoked only when all TPA nodes are running.
Related Topics
For information on the three commands that manage CNS permissions, see:
• “GET PERMISSIONS” on page 266
• “GRANT” on page 272
• “REVOKE” on page 278
For information on the DBW commands that are always accepted if CNS accepts the connection, see:
• “Accessing Help on DBW” on page 245
• “CNSGET” on page 250
• “GET ACTIVELOGTABLE” on page 257
• “GET CONFIG” on page 259
• “GET LOGTABLE” on page 264
• “GET PERMISSIONS” on page 266
• “GET RESOURCE” on page 267
• “GET SUMLOGTABLE” on page 268
• “GET TIME” on page 269
• “GET VERSION” on page 270
Repeating Commands
The following table describes how to repeat database commands.
• If you double-click on the command in the command history area of the Supervisor window, the command re-executes immediately.
• If you single-click on the command, the command is placed on the command input line. You can edit the command before pressing Enter.
Saving Output
There are two methods for saving output in DBW:
• “Logging Information to Files”
• “Saving the Window Buffer”
Logging Information to Files
This method maintains an ongoing log of information from the output display area.
When logging is enabled, the session from the current window is saved to a default or specified
text-format log file. These logs can be useful for reviewing or printing out the session.
When logging all windows, standard log files are opened. If a log file already exists, DBW
appends the current session text to the existing file.
To start a log file for all windows, click File>Log All Windows from the DBW main window.
DBW immediately begins logging output from all DBW subwindows into separate files. If log
files already exist for each of the subwindows, new text is appended to these files.
To start a log file for a single application window:
1 Click File>Logging from one of the active application windows.
2 In the Select Log File For Window dialog box, click the log file or specify a new file name, and click Okay or Save.
DBW immediately begins logging the text in the output display area to the specified log file. If the log file already exists, the new text is appended to it.
Saving the Window Buffer
This method saves only the current contents of the window buffer to a file. Any output created
after the window is saved is lost.
To save current contents of an application or Supervisor window:
1 Click File>Save Buffer from the active subwindow.
2 In the Save Buffer dialog box, click the log file or specify a new file name, and click Okay or Save.
The text in the output display area of the application window is saved in the specified file. If the file already exists, it is overwritten.
Viewing Log and Buffer Files
1 Click File>View from the DBW main window.
2 In the Select File to View dialog box, click the log file or specify a new file name, and click Okay or Open.
3 In the Viewing File window, click Okay to exit.
DBW Scripts
Creating Scripts
To save time, you can create scripts of frequently used commands in any DBW application
window. When you run the scripts, DBW treats these scripts as if you typed the commands on
the console.
You can create a script file with any text editor, such as vi or Emacs on MP-RAS and Linux,
and WordPad or Notepad on Windows. The file should contain a list of commands with one
command on each line.
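For example, a script file intended for the ctl utility running in an application window might contain the following lines (an illustrative sketch only; SCREEN RSS is the command referenced under “GET LOGTABLE” later in this chapter, and QUIT exits the utility):
SCREEN RSS
QUIT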
Running Scripts in DBW
1 Click File>Send Script File from an active application window.
2 In the Send Script dialog box, click the script file or specify a file name, and click Okay or Open.
Accessing Help on DBW
• Click the Help menu from any window.
• Press F1 from any window (MP-RAS and Linux only).
• Click the Help button in dialog boxes.
Commands
The following table summarizes the database commands.
“ABORT SESSION”: Aborts any outstanding request or transaction of one or more sessions, and optionally logs those sessions off the Teradata Database system.
“CNSGET”: Returns parameters used by CNS to control the connections to console windows.
“CNSSET LINES”: Sets the number of lines that are saved from the output display area and sent to DBW after a reconnect with CNS.
“CNSSET STATEPOLL”: Sets how often CNS checks the state of the DBS.
“CNSSET TIMEOUT”: Sets the interval between the time you type a request and the time CNS rejects the request because a program did not respond to the input.
“DISABLE LOGONS/DISABLE ALL LOGONS”: Prevents users from logging on to the system until the ENABLE LOGONS or ENABLE ALL LOGONS command is sent.
“ENABLE DBC LOGONS”: Allows only new DBC users to log on to the system.
“ENABLE LOGONS/ENABLE ALL LOGONS”: Allows any new users to log on to the system.
“GET ACTIVELOGTABLE”: Shows the active row filtering status on any ResUsage table.
“GET CONFIG”: Returns the current system configuration.
“GET EXTAUTH”: Returns the current value of external authentication.
“GET LOGTABLE”: Returns the logging status on any ResUsage table.
“GET PERMISSIONS”: Returns CNS permissions for the specified connection-ID.
“GET RESOURCE”: Returns the Resource Sampling Subsystem (RSS) collection and logging rates for the ResUsage tables.
“GET SUMLOGTABLE”: Returns the Summary Mode status of ResUsage tables.
“GET TIME”: Returns the current date and time on the system.
“GET VERSION”: Returns the current running PDE and DBS version numbers on the system.
“GRANT”: Grants CNS permissions for the specified connection-ID.
“LOG”: Logs text to the database event log table (DBC.EventLog) and the message event log file for the current day.
“QUERY STATE”: Returns the operational status of Teradata Database.
“RESTART TPA”: Restarts or stops Teradata Database after optionally performing a dump.
“REVOKE”: Revokes CNS permissions for the specified connection-ID.
“SET ACTIVELOGTABLE”: Enables or disables active row filtering on any ResUsage table.
“SET EXTAUTH”: Controls whether Teradata Database users can be authenticated outside (external) of the Teradata Database software authentication system.
“SET LOGTABLE”: Enables or disables logging to any ResUsage table.
“SET RESOURCE”: Changes the rates at which the RSS data is collected and logged.
“SET SESSION COLLECTION”: Sets the session collection rate period.
“SET SUMLOGTABLE”: Enables or disables logging in Summary Mode on most ResUsage tables.
“START”: Starts a utility in one of the DBW application windows.
“STOP”: Stops the utility running in the specified DBW application window.
ABORT SESSION
Purpose
Aborts any outstanding request or transaction of one or more sessions, and optionally logs
those sessions off the Teradata Database system.
Syntax
ABORT SESSION { hostid:sessionno | hostid.username | *.username | hostid.* | *.* } [ LOGOFF ] [ LIST ] [ OVERRIDE ]
where:
hostid: the logical ID of a host (or client) with sessions logged on. A hostid of 0 identifies internal sessions or system console sessions. The range of values is 0 to 32767.
sessionno: the session number. sessionno combined with hostid represents a unique session ID. The range of values is 0 to 4,294,967,295.
username: the user who is running the session. username can have a maximum length of 30 characters.
LOGOFF: indicator of whether or not to log the requested sessions off Teradata Database in addition to aborting them.
LIST: indicator of whether or not to display a list of sessions that meet the criteria.
OVERRIDE: indicator of whether or not to override an ABORT SESSION failure.
Usage Notes
Aborting a session is useful when a session causes a production job to block other sessions
waiting for locks, or when a session takes up so many resources that a critical application runs
too slowly.
If an identified session cannot be aborted, the following occurs:
• A 3268 error message returns.
• Additional information is logged to the error log.
• Processing of the request is terminated.
Retry the ABORT SESSION command at a later time after queued up abort requests complete.
You can issue the ABORT SESSION command on both channel and network-attached clients.
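For example, assuming a session with host ID 1 and session number 1234 (values are illustrative), the first command below aborts that session and logs it off, and the second aborts every session belonging to user jsmith:
abort session 1:1234 logoff
abort session *.jsmith logoff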
Related Topics
For more information about ABORT SESSION, see Workload Management API: PM/API and
Open API.
For a complete description of error 3268, see Messages.
CNSGET
Purpose
Returns parameters used by CNS to control the connections to console windows.
Syntax
CNSGET
Usage Notes
To set the mailbox timeout, which is the interval of time in seconds between the time you type a request and the time CNS rejects the request because a program did not solicit the input, use “CNSSET TIMEOUT” on page 253.
To set the system state polling interval, which is the interval of time in seconds between checking the database state and substate, use “CNSSET STATEPOLL” on page 252.
To set the number of lines to be queued, which is the number of historical output lines that are sent to newly connected CNS connections, use “CNSSET LINES” on page 251.
Example
The following example shows the current CNS parameters.
cnsget
mailbox timeout is 5.
system state polling interval is 2.
number of lines to be queued is 9.
CNSSET LINES
Purpose
Sets the number of lines that are saved from the output display area and sent to DBW after a
reconnect with CNS.
Syntax
CNSSET LINES n
where n is the number of lines to be saved. Valid values range from 2 to 72 lines.
The default is 10 lines.
Usage Notes
Teradata recommends setting large values (for example, 10 or more lines), unless the display
of old input and output after a reconnect is a security concern.
To check the current number of lines set, issue the CNSGET command. For more
information, see “CNSGET” on page 250.
Example
In this example, the number of lines set to be queued on the system is 12.
cnsset lines 12
CNSSUPV: cns will save 12 lines per screen.
CNSSET STATEPOLL
Purpose
Sets how often CNS checks the state of the DBS.
Syntax
CNSSET STATEPOLL n
where n is the time interval, in seconds, between state checks. Valid intervals range from 1 to
10 seconds. The default is 2 seconds.
Usage Notes
The states are displayed in the status line of the main DBW window.
Values greater than 2 seconds reduce system overhead, but increase response time to state
changes.
To check the current setting, use the CNSGET command. For more information, see
“CNSGET” on page 250.
Example
In this example, the rate at which CNS checks the DBS state is set to 4 seconds.
cnsset statepoll 4
CNSSUPV: system state polling interval is 4 seconds.
CNSSET TIMEOUT
Purpose
Sets the interval between the time you type a request and the time CNS rejects the request
because a program did not respond to the input.
Syntax
CNSSET TIMEOUT n
where n is the time interval in seconds between receipt of input and receipt of error message, if
a utility is not reading input. Valid intervals range from 1 to 60 seconds. The default is 5
seconds.
Usage Notes
The key-in timeout signals that a utility is not reading any input.
Values less than 5 seconds can cause input to be lost too quickly. Values greater than 5 seconds
could cause a delay in delivery of error messages.
To check the current setting, issue the CNSGET command. For more information, see
“CNSGET” on page 250.
Example
In this example, the current timeout value is set to 5 seconds.
cnsset timeout 5
CNSSUPV: timeout for input request is 5 seconds.
DISABLE LOGONS/ DISABLE ALL LOGONS
Purpose
Prevents users from logging on to the system until the ENABLE LOGONS or ENABLE ALL
LOGONS command is sent.
Syntax
DISABLE [ ALL ] LOGONS
Usage Notes
DISABLE LOGONS and DISABLE ALL LOGONS are synonymous.
The status line of the DBW main window must return one of the following messages:
• Logons are enabled - The system is quiescent.
• Only user DBC logons are enabled - The system is quiescent.
Users logged on to the system before the DISABLE LOGONS or DISABLE ALL LOGONS
command is executed remain logged on, and their jobs continue to run. All sessions are
reconnected if the system is restarted.
Both commands (DISABLE LOGONS or DISABLE ALL LOGONS) update the logon state in
the PDE Control GDO, viewable using ctl and xctl utilities.
For more information on these utilities, see “Chapter 8 Control GDO Editor (ctl)” on
page 155 and the “Xctl (xctl)” chapter in Utilities, Volume 2.
A database restart is not required when using DISABLE LOGONS or DISABLE ALL
LOGONS. However, Teradata Database must be running for the update to be applied.
Example
The following example shows the current logon status is disabled for all new users.
disable logons
06/07/28 17:30:03 Logons disabled.
Enable Logons of screen debug in xctl/ctl have been updated to None
ENABLE DBC LOGONS
Purpose
Allows only new DBC users to log on to the system.
Syntax
ENABLE DBC LOGONS
Usage Notes
The status line of the DBW main window must return one of the following messages:
• Logons are enabled - The system is quiescent.
• Logons are disabled - The system is quiescent.
Any new non-DBC users cannot log on to the system when the ENABLE DBC LOGONS
command is executed.
Users, other than the DBC, logged on to the system before the ENABLE DBC LOGONS
command is executed remain logged on, and their jobs continue to run. All sessions are
reconnected if the system is restarted.
The ENABLE DBC LOGONS command updates the logon state in the PDE Control GDO,
viewable using ctl and xctl utilities.
For more information on these utilities, see “Chapter 8 Control GDO Editor (ctl)” on
page 155 and the “Xctl (xctl)” chapter in Utilities, Volume 2.
A database restart is not required when using ENABLE DBC LOGONS. However, Teradata
Database must be running for the update to be applied.
Example
The following example shows the current logon status is enabled for DBC users only. Non-DBC users cannot log on to the current system.
enable dbc logons
06/07/28 17:32:19 DBC Logons enabled.
Enable Logons of screen debug in xctl/ctl have been updated to DBC.
ENABLE LOGONS/ ENABLE ALL LOGONS
Purpose
Allows any new users to log on to the system.
Syntax
ENABLE [ ALL ] LOGONS
Usage Notes
ENABLE LOGONS and ENABLE ALL LOGONS are synonymous.
The status line of the DBW main window must return one of the following messages:
• Logons are disabled - The system is quiescent.
• Only user DBC logons are enabled - The system is quiescent.
Users logged on to the system before ENABLE LOGONS or ENABLE ALL LOGONS is
executed remain logged on, and their jobs continue to run. All sessions are reconnected if the
system is restarted.
Both commands (ENABLE LOGONS or ENABLE ALL LOGONS) update the logon state in
the PDE Control GDO, viewable using ctl and xctl utilities.
For more information on these utilities, see “Chapter 8 Control GDO Editor (ctl)” on
page 155 and the “Xctl (xctl)” chapter in Utilities, Volume 2.
A database restart is not required when using ENABLE LOGONS or ENABLE ALL LOGONS.
However, Teradata Database must be running for the update to be applied.
Example
The following example shows the current logon status is enabled for new users.
enable logons
06/07/28 17:31:03 Logons enabled.
Enable Logons of screen debug in xctl/ctl have been updated to All.
GET ACTIVELOGTABLE
Purpose
Shows the active row filtering status on any ResUsage table.
Syntax
GET ACTIVELOGTABLE { tablename | ALL }
where:
tablename: the name of the ResUsage table for which to show the active row filtering status.
ALL: shows the active row filtering status of all ResUsage tables on the system.
Usage Notes
You can also view the active row filtering status from the RSS screen of the Ctl and Xctl
utilities. For more information, see Chapter 8: “Control GDO Editor (ctl)” in Utilities Volume
1 or the Xctl Utility chapter in Utilities Volume 2.
To enable or disable active row filtering, see “SET ACTIVELOGTABLE” on page 280.
For information about active row filtering and its affect on ResUsage tables, see Resource Usage
Macros and Tables.
Example
The following example shows the active row filtering status of all ResUsage tables on the
current system. The system displays a screen similar to this:
get activelogtable all
spma table's active row filtering mode is disabled
ipma table's active row filtering mode is disabled
svpr table's active row filtering mode is disabled
ivpr table's active row filtering mode is disabled
scpu table's active row filtering mode is disabled
sldv table's active row filtering mode is disabled
spdsk table's active row filtering mode is disabled
svdsk table's active row filtering mode is disabled
sawt table's active row filtering mode is disabled
sps table's active row filtering mode is enabled
shst table's active row filtering mode is disabled
GET CONFIG
Purpose
Returns the current system configuration.
Syntax
GET CONFIG
Usage Notes
The following table defines the system configuration.
Component
Description
Vproc Number
Uniquely identifies the vproc across the entire system.
Rel. Vproc#
Represents the number of the vproc relative to the node upon which the vproc resides.
Node ID
Identifies the node upon which the vproc resides. Node ID is formatted as CCC-MM, where CCC
denotes the cabinet number and MM denotes the module number. For example, the node ID 003-07
identifies the node in module number 7 of cabinet 3.
Movable
Indicates whether the vproc can be migrated to another node in its defined clique if its primary node
fails.
Crash Count
Represents the number of times the vproc has crashed. The count increments with every attempted
system restart. When the system restart succeeds, Crash Count is reset to 0 for all vprocs.
Vproc State
Represents the current PDE system state of a vproc.
• FATAL: Indicates a serious problem with a vproc and/or its associated storage. For example, if a vproc
crashes repeatedly, the number of crashes might have exceeded the allowable crash count, or if there
are corrupt tables on the vdisk that require a Table Rebuild of some (or all) of the tables, a vproc is set
to the FATAL state.
Note: When a VSS vproc is in FATAL state, all AMPs associated with it will be put into FATAL state at
the next database restart.
• FATAL**: Indicates an AMP cannot access its storage.
Note: The AMP partition is not started for a vproc in this state.
• NEWPROC: Applies only to AMP and PE vprocs. It indicates that either a new vproc is added into
the Teradata database configuration or an existing vproc is deleted.
A vproc with a status of NEWPROC is a member of the PDE message groups but is not a member of
the operational Teradata database message group. For more information, contact the Teradata
Support Center.
• NONODE: Indicates that the physical hardware required to run this vproc is not available. This state
is not accepted as an argument to the SET command, although this state might appear in the Vproc
Status Table produced by the STATUS command.
Note: The AMP or PE partitions are not started for a vproc in this state.
• NULL: Undefined. It is not accepted as an argument to the SET command, although this state might
appear in the Vproc Status Table produced by the STATUS command.
• OFFLINE: Generally, indicates vprocs that are not fully operational and have been forced down by
the Teradata database or system administrator.
A vproc with status OFFLINE is a member of the PDE message group but is not a member of the
operational Teradata Database message group. For more information, contact the Teradata Support
Center.
When a VSS vproc is in OFFLINE state, all AMPs associated with it will be put into FATAL state.
• ONLINE: Indicates that the vproc is fully operational and actively participating with the Teradata
database.
A vproc with status ONLINE is a member of both the PDE and Teradata Database message group.
For more information, contact the Teradata Support Center.
• UTILITY: Is transitional and is used by database recovery and the RECONFIG and Table Rebuild utilities to indicate that a previously OFFLINE/FATAL/NEWPROC vproc is interacting with the online Teradata Database.
This vproc is a member of the PDE message groups but not a member of the operational Teradata
Database message groups. A local crash of a UTILITY vproc causes a database restart.
Config Status
Displays the Teradata Database Logical Configuration Map Status of a vproc.
• Online: The vproc is fully operational.
This status usually corresponds to ONLINE Vproc State.
• Down: The vproc has been forced down.
This status usually corresponds to the OFFLINE, UTILITY, FATAL, and NONODE Vproc states.
• Catchup: The Config Status for the AMP was Down and is being recovered in the background. If
System RestartKind is COLDWAIT, the Config Status of the vproc becomes Online when recovery is
complete.
This status usually corresponds to the UTILITY Vproc State.
• Hold: The Config Status for the AMP was Catchup or Down and its data is being recovered. The
Config Status of this vproc becomes Online when recovery is complete.
This status usually corresponds to the ONLINE Vproc State.
• NewReady: This is either a newly added vproc or one that has been removed from the Teradata
Database Logical Configuration.
This status usually corresponds to the UTILITY or NEWPROC Vproc State.
• NewDown: A newly added vproc that is down. It has not yet been added into the DBS configuration
with config or reconfig.
This status usually corresponds to the OFFLINE Vproc State.
• Null: This vproc is not yet in the Teradata Database Logical Configuration. The vproc might have
been added to the PDE Physical Configuration or deleted from the Teradata Database Logical
Configuration, and Teradata Database Startup has not run. Startup notes that this vproc does not
appear in the Startup configuration map and changes the Config Status of the vproc to NewReady.
This status usually corresponds to the NEWPROC Vproc State.
Config Type
Represents the Teradata Database Logical Configuration Map Type of a vproc.
Cluster/Host No.
Displays the Cluster Number if the Config Type is AMP or displays the Host No. if the Config Type is
PE.
Cluster is the cluster number for the AMP as defined using the Configuration Utility. The valid range
of cluster numbers is 0 to 8099.
Host No. is the host number that was assigned to the PE using the Configuration Utility. The valid
range of host numbers is 1 to 1023.
RcvJrnl/Host Type
Displays the RcvJrnl (that is, Recovery Journaling) flag if the Config Type is AMP or the Host Type if
the Config Type is PE.
The RcvJrnl flag is Off if an AMP is down and the other AMPs in its cluster are not to create a recovery
journal for the down AMP.
Note: If you anticipate that an AMP will be down for a long period of time, Teradata recommends an
offline rebuild of all tables on the AMP (after the RcvJrnl flag has been set to Off).
The Host Type is the host type for the PE as defined using the Configuration Utility; it is one of IBM, COP, ATT3B, BULLHN, or OS1100.
VSS Vproc
For AMP vprocs, displays the associated VSS vproc.
The following table defines the node configuration.
Component
Description
Node ID
Defined using PUT. See Parallel Upgrade (PUT) Tool User Guide for more information. The node ID is
formatted as CCC-MM, where CCC denotes the cabinet number and MM denotes the module
number. For example, the node ID 003-07 identifies the node in module number 7 of cabinet 3.
Node State
The current state of the node, which is either ONLINE, DOWN, or STANDBY.
Clique Number
The clique number for the node as defined using PUT. See Parallel Upgrade (PUT) Tool User Guide for
more information.
CPUs
The number of CPUs on the node.
Memory (MB)
The total memory size in megabytes for the node (rounded up to the nearest integer).
CHANs
The number of channels attached to the node.
LANs
The number of LANs attached to the node.
AMPs
The number of AMPs running on the node.
Node Name
The network name for the node.
When applicable, a footnote follows the PDE Status Table, indicating which node is defined as
the PDE Distribution Node (on MP-RAS only) or a Non-TPA Node (for nodes in STANDBY).
For more information on TPA and Non-TPA nodes, see Performance Management.
Example
The following example shows the current system configuration on a Windows system.
SYSTEM NAME: comma                                          08/01/08 09:53:56
DBS LOGICAL CONFIGURATION
-------------------------
                                                                Rcv
                                                                Jrnl/
Vproc  Rel.   Node   Can   Crash Vproc   Config   Config Cluster/ Host  VSS
Number Vproc# ID     Move  Count State   Status   Type   Host No. Type  Vproc
------ ------ ------ ----- ----- ------- -------- ------ -------- ----- -----
    0*      1   1-01 Yes       0 ONLINE  Online   AMP           0 On    10238
     1      2   1-01 Yes       0 ONLINE  Online   AMP           0 On    10237
  8192      4   1-01 No        0 ONLINE  N/A      GTW           1 COP   N/A
 10237      5   1-01 Yes       0 ONLINE  N/A      VSS           0 N/A   N/A
 10238      6   1-01 Yes       0 ONLINE  N/A      VSS           0 N/A   N/A
 16383      3   1-01 Yes       0 ONLINE  Online   PE            1 COP   N/A
--------------------------------------------------------------------------------
* DBS Control AMP
DBS State:  Logons are enabled - The system is quiescent
DBS RestartKind:  COLD

PDE PHYSICAL CONFIGURATION
--------------------------
Node    Node     Clique       Memory
ID      State    Number  CPUs (MB)   CHANs LANs AMPs Node Name
------- -------  ------  ---- ------ ----- ---- ---- --------------------------
   1-01 ONLINE        0     2   2047     0    1    2 localhost
------------------------------------------------------------------------------
PDE State: RUN/STARTED
GET EXTAUTH
Purpose
Returns the current value of external authentication.
Syntax
GET EXTAUTH
Usage Notes
To set External Authentication, see “SET EXTAUTH” on page 282. For additional information
on external authentication, see Security Administration.
Example
The following example shows external authentication logons are rejected.
get extauth
The External Authentication mode is set to OFF.
GET LOGTABLE
Purpose
Returns the logging status on any ResUsage table.
Syntax
GET LOGTABLE { tablename | ALL }
where:
Syntax element …
Is the …
tablename
name of the ResUsage table for which to display logging status.
• spma: Contains the system-wide node information, which provides a
summary of overall system utilization incorporating the essential
information from most of the other tables.
• ipma: Contains system-wide node information, intended primarily for
Teradata engineers. This table is generally not used at customer sites.
• svpr: Contains data specific to each virtual processor and its file
system.
• ivpr: Contains system-wide virtual processor information, intended
primarily for Teradata engineers. This table is generally not used at
customer sites.
• scpu: Contains statistics on the CPUs within the nodes.
• sldv: Contains system-wide, storage device statistics.
• spdsk: Contains pdisk I/O, cylinder allocation, and migration statistics.
This table is not currently used.
• svdsk: Contains statistics collected from the associated storage of the
AMP.
• sawt: Contains data specific to the AMP worker tasks (AWTs).
• sps: Contains data by Performance Group (PG) ID.
• shst: Contains statistics on the host channels and LANs that
communicate with Teradata Database.
For more information on these tables, see Resource Usage Macros and
Tables.
ALL
displays the logging status of all ResUsage tables.
Usage Notes
You can also view the logging status of the ResUsage tables by using any of these methods:
• From the RSS Settings window in xctl windowing mode (MP-RAS only)
• By issuing the SCREEN RSS command in xctl non-windowing mode (MP-RAS) or in ctl from the Windows and Linux command line
For more information, see “Chapter 8 Control GDO Editor (ctl)” on page 155 and the “Xctl
(xctl)” chapter in Utilities, Volume 2.
To enable or disable logging on any ResUsage table, see “SET LOGTABLE” on page 283.
Example
The following example displays the logging status of all ResUsage tables on an MP-RAS
system. The system displays a screen similar to this:
get logtable all
spma table's logging is disabled
ipma table's logging is disabled
svpr table's logging is disabled
ivpr table's logging is disabled
scpu table's logging is disabled
sldv table's logging is disabled
spdsk table's logging is disabled
svdsk table's logging is disabled
sawt table's logging is disabled
sps table's logging is disabled
shst table's logging is disabled
GET PERMISSIONS
Purpose
Returns CNS permissions for the specified connection-ID.
Syntax
GET PERMISSIONS username@host
where:
Syntax element …
Is the …
username
Logon user name accessing CNS for whom you want to get or view
permissions.
host
network host name of the computer to which the user is logged on.
Usage Notes
DBW shows the name of the connection-ID followed by its list of permissions and any granted
or revoked utilities.
If the connection-ID does not exist, DBW returns the following message:
user@host: no permissions
Example
The following example shows all CNS permissions granted to connection-ID
jsmith@isdn1954.
get permissions jsmith@isdn1954
jsmith@isdn1954: set logons log restart grant abort interactive
Revoked Utilities: vprocmanager
GET RESOURCE
Purpose
Returns the RSS collection and logging rates for the ResUsage tables.
Syntax
GET RESOURCE
Usage Notes
The collection period is used to extract the data for real-time tools to display and the logging
period is used for writing the data to the database.
The default for RSS Collection Rate, Node Logging Rate, and Vproc Logging Rate is 600. A
logging or collection rate of 0 disables that operation.
RSS Collection Rate applies to both nodes and vprocs. The logging rates display separately for
nodes and vprocs.
For details about logging rate of resource utilization and memory clearing rate of vprocs and
node, see Resource Usage Macros and Tables.
Note: You can also view the current logging and collection rates by using any one of these
methods:
• From the RSS Settings window in xctl windowing mode (MP-RAS only)
• By issuing the SCREEN RSS command in xctl non-windowing mode (MP-RAS) or in ctl from the Windows and Linux command line
For more information, see “Chapter 8 Control GDO Editor (ctl)” on page 155 and the “Xctl
(xctl)” chapter in Utilities, Volume 2.
Example
The following example shows the RSS collection and logging rates for the ResUsage tables on
the current system.
get resource
RSS Rate Information:
RSS Collection Rate = 600
Node Logging Rate   = 600   Vproc Logging Rate = 600
GET SUMLOGTABLE
Purpose
Returns the Summary Mode status of ResUsage tables.
Syntax
GET SUMLOGTABLE tablename
where tablename is the name of the ResUsage table for which to display the Summary Mode
status.
Except for spma and ipma, you can display the Summary Mode status for the svpr, ivpr, scpu,
sldv, spdsk, svdsk, sawt, sps, and shst tables.
For details on how summary mode affects data reporting for these specific tables, see the table
descriptions in Resource Usage Macros and Tables.
Usage Notes
Summary Mode is not applicable to the spma or ipma table because these ResUsage tables
only report a single row of data for each node in normal mode.
For more information on Summary Mode, see Resource Usage Macros and Tables.
You can also view the status of ResUsage tables logged in Summary Mode from the RSS Screen
of the Ctl and Xctl utilities. For more information, see Chapter 8: “Control GDO Editor (ctl)”
in Utilities Volume 1 or the Xctl Utility chapter in Utilities Volume 2.
To enable or disable logging in Summary Mode on any ResUsage table, see
“SET SUMLOGTABLE” on page 289.
Example
The following example shows the Summary Mode status of the Svpr table.
get sumlogtable svpr
svpr table's summary mode is disabled
GET TIME
Purpose
Returns the current date and time on the system.
Syntax
GET TIME
Example
The following example shows the current date and time.
get time
The system time is Tue Jul 18 15:30:01 2006
GET VERSION
Purpose
Returns the current running PDE and DBS version numbers on the system.
Syntax
GET VERSION
Usage Notes
The following table describes the different software versions returned when you execute
GET VERSION.
On...
The current running versions are...
Linux
Teradata Parallel Database Extensions (PDE)
Teradata Database (DBS)
Teradata Relay Services Gateway (RSG)
Teradata Gateway (TGTW)
Teradata Channel (TCHN)
Teradata Database Generic Security Services (TDGSS)
GNU general public license version of Teradata Parallel Database Extensions
(PDEGPL)
MP-RAS
Teradata Parallel Database Extensions (PDE)
Teradata Database (DBS)
Teradata Relay Services Gateway (RSG)
Teradata Gateway (TGTW)
Teradata Host Software (HOST)
Teradata Database Generic Security Services (TDGSS)
Windows
Teradata Parallel Database Extensions (PDE)
Teradata Database (DBS)
Teradata Relay Services Gateway (RSG)
Teradata Gateway (TGTW)
Teradata Channel (TCHN)
Teradata Database Generic Security Services (TDGSS)
Note: TCHN is also known as Teradata Host Software (HOST) on MP-RAS.
For a complete description of the different software versions, see Chapter 8: “Control GDO
Editor (ctl)” in Utilities Volume 1 or the Xctl Utility chapter in Utilities Volume 2.
You can also view the versions of the current running PDE and DBS software from the Version
screen of the Ctl and Xctl utilities. For more information, see Chapter 8: “Control GDO Editor
(ctl)” in Utilities Volume 1 or the Xctl Utility chapter in Utilities Volume 2.
Example 1
The following example shows the current running PDE and DBS version numbers on an
MP-RAS system.
get version
The currently running PDE version: 13.00.00.00
The currently running DBS version: 13.00.00.00
The currently running TGTW version: 13.00.00.00
The currently running TCHN version: 13.00.00.00
The currently running TDGSS version: 13.00.00.00
Example 2
The following example shows the current running PDE and DBS version numbers on a Linux
system.
get version
The currently running PDE version: 13.00.00.00
The currently running DBS version: 13.00.00.00
The currently running TGTW version: 13.00.00.00
The currently running TCHN version: 13.00.00.00
The currently running TDGSS version: 13.00.00.00
The currently running PDEGPL version: 13.00.00.00
GRANT
Purpose
Grants CNS permissions for the specified connection ID.
Syntax
GRANT username@host permission_list
where:
Syntax element …
Is the …
username
logon name of the user accessing CNS for whom you want to grant
permissions.
host
network host name of the computer to which you are logged on.
permission_list
list of supervisor privileges to be granted, separated by spaces.
• abort: Grants the user authority to issue the ABORT SESSION
command.
• all: Grants the user authority to issue all CNS commands.
• grant: Grants the user authority to issue the GRANT and REVOKE
commands.
• interactive: Grants the user authority to issue input to interactive
programs.
Note: Enclose the program or program list, separated by spaces, in
parentheses. For example:
(ctl dbscontrol vprocmanager)
• logons: Grants the user authority to issue:
• ENABLE LOGONS/ENABLE ALL LOGONS
• ENABLE DBC LOGONS
• DISABLE LOGONS/DISABLE ALL LOGONS
• log: Grants the user authority to issue the LOG command.
• restart: Grants the user authority to issue the RESTART command.
• set: Grants the user authority to issue CNSSET and SET commands.
• start: Grants the user authority to issue the START and STOP
commands.
Usage Notes
The GRANT command only affects future DBW sessions. You must stop and restart the DBW
session for the updated permissions to take effect.
If interactive permission has been granted, it is added to the GRANT list. If interactive
permission has been revoked, then it is removed from the GRANT list if it exists there. To
revoke interactive permissions, see “REVOKE” on page 278.
Example
In this example, connection-ID jsmith@isdn1954 is granted interactive privileges to the
Vproc Manager utility.
grant jsmith@isdn1954 interactive(vprocmanager)
CNSSUPV: permissions for jsmith@isdn1954 updated
jsmith@isdn1954: set logons log restart grant abort interactive
Granted Utilities: vprocmanager
LOG
Purpose
Logs text to the database event log table (DBC.EventLog) and the message event log file for the
current day.
Syntax
LOG errorlogtext
where errorlogtext is the text to be sent to the DBC.EventLog table.
Example
In this example, the text “Vproc 1022 added as a parsing engine” is saved to the DBC.EventLog
table and log file.
log Vproc 1022 added as a parsing engine.
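Because the text is written to DBC.EventLog, it can later be reviewed with a simple query; for example (the column list is left as * here because the columns vary by release):
SELECT * FROM DBC.EventLog;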
QUERY STATE
Purpose
Returns the operational status of Teradata Database.
Syntax
QUERY STATE
Usage Notes
The following table lists the valid system states.
System State
Description
Database is not running
Teradata Database has not been started; it cannot be accessed
from a client or used for processing.
Database Startup
Teradata Database is undergoing startup processing and is
not yet ready to accept requests.
Logons are disabled - Users are
logged on
No new sessions can log on, but existing sessions are still
logged on.
Logons are disabled - The system is
quiescent
Logons are disabled and no sessions are logged on.
Logons are enabled - Users are
logged on
New sessions can log on and work is in process.
Logons are enabled - The system is
quiescent
Logons are enabled, but no sessions are logged on.
Only user DBC Logons are enabled
Only new DBC sessions can log on and work is in process.
RECONFIG is running
Reconfiguration is being run.
System is operational without PEs Sessions are not allowed
Either there are no PEs configured into the system or all PEs
are offline/down.
TABLEINIT is running
Database startup has detected that there are no tables on the
system and is running TABLEINIT to create the system
tables.
This usually occurs during the next system restart after a
SYSINIT.
Following are valid system substates of Teradata Database Startup:
• Initializing Database VProcs
• Initializing Database Configuration
• Starting AMP Partitions
• Starting PE Partitions
• Voting for Transaction Recovery
• Starting Transaction Recovery
• Recovering Down AMPs
• Recovering Database Partitions
Example
This example shows the current operational status of Teradata Database.
query state
TPA is in state: Logons are enabled - The system is quiescent
RESTART TPA
Purpose
Restarts or stops Teradata Database after optionally performing a dump.
Syntax
RESTART TPA [ NODUMP | DUMP = { YES | NO } ] [ COLD | COLDWAIT ] comment
where:
Syntax element …
Specifies …
NODUMP
restart the database without a dump. This is the default.
DUMP
restart the database with a dump.
If DUMP is set to YES, then Teradata Database restarts with a dump.
If DUMP is set to NO, then Teradata Database restarts without a dump.
COLD
restart Teradata database without waiting for transaction recovery. This
is the default.
COLDWAIT
finish all database transaction recovery before allowing users to log on.
comment
the reason for the restart.
Usage Notes
You must type a comment explaining the reason for the restart.
When you restart Teradata Database, all vprocs are reset.
Caution:
RESTART TPA does not prompt you to confirm the restart.
Example
In this example, a cold restart is issued with a message stating the reason why.
restart tpa cold Screen dump for book.
The reason for the restart is logged as:
Screen dump for book.
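As a further sketch based on the syntax above (the comment text is arbitrary), a restart that takes a dump and completes transaction recovery before allowing logons could be issued as:
restart tpa dump=yes coldwait Restart after installing access logging macro.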
REVOKE
Purpose
Revokes CNS permissions for the specified connection ID.
Syntax
REVOKE username@host permission_list
where:
Syntax element …
Is the …
username
logon name of the user accessing CNS for whom you want to revoke
permissions.
host
network host name of the computer on which that user is logged on.
permission_list
list of supervisor privileges to be revoked, separated by spaces.
• abort: Revokes the user authority to issue the ABORT SESSION
command.
• all: Revokes the user authority to issue all CNS commands.
• grant: Revokes the user authority to issue the GRANT and REVOKE
commands.
• interactive: Revokes the user authority to issue input to interactive
programs.
Note: Enclose the program or program list, separated by spaces, in
parentheses. For example:
(ctl dbscontrol vprocmanager)
• logons: Revokes the user authority to issue:
• ENABLE LOGONS/ENABLE ALL LOGONS
• ENABLE DBC LOGONS
• DISABLE LOGONS/DISABLE ALL LOGONS
• log: Revokes the user authority to issue the LOG command.
• restart: Revokes the user authority to issue the RESTART command.
• set: Revokes the user authority to issue CNSSET and SET
commands.
• start: Revokes the user authority to issue the START and STOP
commands.
Usage Notes
The REVOKE command only affects future DBW sessions. You must stop the DBW session
and then restart it for the updated permissions to take effect.
If interactive permission is revoked, then it is removed from the GRANT list if it exists there. If
interactive permission has been granted, it is added to the GRANT list. To grant interactive
permissions, see “GRANT” on page 272.
Example
In this example, the following CNS permissions: START, STOP, and LOGONS, and interactive
utilities: System Initializer (SYSINIT) and Filer, are revoked from connection-ID
jsmith@isdn1954.
revoke jsmith@isdn1954 start logons interactive (sysinit filer)
CNSSUPV: permissions for jsmith@isdn1954 updated
jsmith@isdn1954: set log restart abort interactive
Revoked Utilities: sysinit filer
SET ACTIVELOGTABLE
Purpose
Enables or disables active row filtering on any ResUsage table.
Syntax
SET ACTIVELOGTABLE { tablename | ALL } { ON | OFF }
where:
Syntax element …
Specifies …
tablename
name of the ResUsage table for which to enable or disable active row
filtering.
ALL
enables or disables active row filtering on all ResUsage tables depending
if the ON or OFF option is specified.
ON
enables active row filtering on any one of the ResUsage tables specified.
OFF
disables active row filtering on any one of the ResUsage tables specified.
Usage Notes
Active row filtering mode reduces the number of data rows that are logged to the database. For
the tables for which this option is ON (enabled), only those rows whose data has been
modified during the current logging period will be logged.
For some ResUsage tables, like sps, there are a large number of possible rows, and most of
them are not used at any one time. Logging the inactive rows would waste a large amount of
resources. Therefore, Teradata recommends that Active Row Filter Mode remain enabled for
these tables.
To log rows, the tables that are enabled for Active Row Filter Mode must also have the table
selected for logging and a corresponding Logging Rate (node or vproc) of nonzero.
You can also enable or disable active row filtering from the RSS screen of the Ctl and Xctl
utilities. For more information, see “Chapter 8 Control GDO Editor (ctl)” on page 155 and
the “Xctl (xctl)” chapter in Utilities, Volume 2.
To learn more about active row filtering, see Resource Usage Macros and Tables.
For information on Summary Mode, see “SET SUMLOGTABLE” on page 289.
To display the active row filtering status, see “GET ACTIVELOGTABLE” on page 257.
Example
The following example enables active row filtering on all ResUsage tables.
set activelogtable all on
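A single table can be set the same way; for example, to enable active row filtering on the sps table only and confirm the setting (the confirming output shown is illustrative):
set activelogtable sps on
get activelogtable sps
sps table's active row filtering mode is enabled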
SET EXTAUTH
Purpose
Controls whether Teradata Database users can be authenticated outside (external) of the
Teradata Database software authentication system.
Syntax
SET EXTAUTH { OFF | ON | ONLY }
where:
Value
Description
OFF
Rejects external authentication logons; traditional logons are accepted.
ON
Accepts both external authentication and traditional logons. This is the default.
ONLY
Accepts external authentication logons only; traditional logons are rejected.
Usage Notes
External authentication may eliminate the need for your application to declare or store a
password on your client system.
To configure how the network allows or disallows traditional and new external authentication
logons, see “Chapter 15 Gateway Control (gtwcontrol)” on page 571.
Note: If you have scripts that depend on the deprecated SET SSO command, you should
change them to use the SET EXTAUTH command.
The new setting is effective immediately.
To check the current setting, use “GET EXTAUTH” on page 263.
For additional information on external authentication, see Security Administration.
Example
In this example, the external authentication logon value is set to accept external
authentications logons only.
set extauth only
The External Authentication mode has been set to ONLY.
SET LOGTABLE
Purpose
Enables or disables logging to any ResUsage table.
Syntax
SET LOGTABLE { tablename | ALL } { ON | OFF }
where:
Syntax element …
Is the …
tablename
name of the ResUsage table for which to enable or disable logging.
• spma: Contains the system-wide node information, which provides a
summary of overall system utilization incorporating the essential
information from most of the other tables. The default setting is ON
(enabled).
• ipma: Contains system-wide node information, intended primarily for
Teradata engineers. This table is generally not used at customer sites.
The default setting is OFF (disabled).
• svpr: Contains data specific to each virtual processor and its file
system. The default setting is OFF (disabled).
• ivpr: Contains system-wide virtual processor information, intended
primarily for Teradata engineers. This table is generally not used at
customer sites. The default setting is OFF (disabled).
• scpu: Contains statistics on the CPUs within the nodes. The default
setting is OFF (disabled).
• sldv: Contains system-wide, storage device statistics. The default
setting is OFF (disabled).
• spdsk: Contains pdisk I/O, cylinder allocation, and migration statistics.
The default setting is OFF (disabled). This table is not currently used.
• svdsk: Contains statistics collected from the associated storage of the
AMP. The default setting is OFF (disabled).
• sawt: Contains data specific to the AMP worker tasks (AWTs). The
default setting is OFF (disabled).
• sps: Contains data by Performance Group (PG) ID. The default setting
is OFF (disabled).
• shst: Contains statistics on the host channels and LANs that
communicate with Teradata Database. The default setting is OFF
(disabled).
For more information on these tables, see Resource Usage Macros and
Tables.
ALL
enables or disables logging on all ResUsage tables, depending on whether the ON or OFF
option is specified.
ON
enables logging on the ResUsage table specified.
OFF
disables logging on the ResUsage table specified.
Usage Notes
Before a table can be logged, the table must be enabled for logging and the corresponding
Logging Rate (node or vproc) must be set to a nonzero value.
You can also set logging from the RSS screen of the Ctl and Xctl utilities. For more
information, see “Chapter 8 Control GDO Editor (ctl)” on page 155 and the “Xctl (xctl)”
chapter in Utilities, Volume 2.
To display whether logging to a ResUsage table is enabled or disabled, see “GET LOGTABLE”
on page 264.
Example
In this example, logging is enabled on all the ResUsage tables on the current system.
set logtable all on
You can use the GET LOGTABLE command to display whether logging has been enabled on
all ResUsage tables. For more information, see “GET LOGTABLE” on page 264.
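To enable logging for a single table rather than all tables, name the table instead of ALL. For
example, the following illustrative command enables logging on the svpr table only:
set logtable svpr on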
SET RESOURCE
Purpose
Changes the rates at which the RSS data is collected and logged.
Syntax
SET RESOURCE { COLLECTION | COLL } n1 [ [ NODE | VPROC ] LOGGING n2 ]

SET RESOURCE [ NODE | VPROC ] LOGGING n2 [ { COLLECTION | COLL } n1 ]
where:
Syntax element …
Specifies …
COLLECTION n1
the time (0 - 3600 seconds) between occurrences of memory buffer
clearing.
COLLECTION n1 sets both the node and vproc collection rates.
A value of 0 means the relevant timer is not used.
LOGGING n2
the time (0 - 3600 seconds) between occurrences of resource
utilization logging.
A value of 0 means the relevant timer is not used.
If you type VPROC LOGGING n2, then vproc-related rates are to be
set.
If you type NODE LOGGING n2 or LOGGING n2 (the default), then
node-related rates are to be set.
Usage Notes
Two types of memory buffer exist.
Buffer Type …
Contains …
Collection
a snapshot of resource sampling statistics from the previous collection
period.
Logging
resource sampling statistics logged to permanent disk storage at specified
logging intervals.
You can specify any one of the following:
• Collection
• Logging
• Collection and logging, in either order
However, you cannot specify the node and vproc logging in the same command. To set both
node and vproc logging rates, you must type a separate command for each.
The RSS logs data only to resource usage tables that are enabled. If you only want to collect
data for interactive use, but not log it, set the RSS Collect Rate to a nonzero value and the log
rate to zero.
The SET RESOURCE command is effective immediately.
For details about logging and memory clearing rates for vproc and node, see Resource Usage
Macros and Tables.
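Because node and vproc logging cannot be set in the same command, setting both logging
rates requires two commands. For example, the following illustrative sequence sets both
logging rates to 1200:
set resource node logging 1200
set resource vproc logging 1200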
Example 1
The current RSS Rate Information is as follows:

RSS Rate Information:
   RSS Collection Rate =   600
   Node Logging Rate   =  1200    Vproc Logging Rate =   600

In this example, the Collection Rate is set to 600, and the Node and Vproc Logging Rates are
set to 1200.

set resource collection 600 vproc logging 1200

RSS rates set.
   RSS Collection Rate =   600
   Node Logging Rate   =  1200    Vproc Logging Rate =  1200
Example 2
The current RSS Rate Information is as follows:

RSS Rate Information:
   RSS Collection Rate =   600
   Node Logging Rate   =   600    Vproc Logging Rate =   600

In this example, the node logging rate is set to 1200.

set resource logging 1200

RSS rates set.
   RSS Collection Rate =   600
   Node Logging Rate   =  1200    Vproc Logging Rate =   600
Note: Since the node logging rate is the default, you do not have to specify that you are
changing the node logging rate.
If you type an illegal combination, an error message returns. For example, if you typed:

set resource collection 2400 node logging 2400

then the following error message returns:

CNSSUPV:  Illegal combination of RSS Rates would result.
          The illegal combination is:
          RSS Collection Rate =  2400
          Node Logging Rate   =  2400    Vproc Logging Rate =   600
CNSSUPV:  RSS Suggests correcting to the following rates:
          RSS Collection Rate =  1800
          Node Logging Rate   =  1800    Vproc Logging Rate =     0
CNSSUPV:  RSS Rates not changed.
SET SESSION COLLECTION
Purpose
Sets the session collection rate period.
Syntax
SET SESSION { COLLECTION | COLL } n
where n is the period in seconds. Valid rates range from 0 to 3600, inclusive.
Usage Notes
If the SET SESSION COLLECTION command is set to OFF, monitoring of the session rate is
disabled. However, if you type a Set Session Collection rate between 0 and 3600, the
monitoring is enabled automatically.
For more information, see “SET SESSION RATE” in Workload Management API: PM/API and
Open API.
Example
In this example, the session collection rate is set to 10 seconds on the current system.
set session collection 10
Monitor Session Rate set to 10 seconds, prior rate was disabled.
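As noted in the usage notes, monitoring can also be turned off. For example (shown for
illustration):
set session collection off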
SET SUMLOGTABLE
Purpose
Enables or disables logging in Summary Mode on most ResUsage tables.
Syntax
SET SUMLOGTABLE tablename { ON | OFF }
where:
Syntax element …
Is the …
tablename
name of the ResUsage table for which you want to enable or disable
Summary Mode.
Except for spma and ipma, you can enable or disable Summary Mode on
the svpr, ivpr, scpu, sldv, spdsk, svdsk, sawt, sps, and shst tables.
Summary Mode for the tables is OFF by default.
For details on how Summary Mode affects data reporting for these
specific tables, see the table descriptions in Resource Usage Macros and
Tables.
ON
enables Summary Mode on the ResUsage table specified.
OFF
disables Summary Mode on the ResUsage table specified.
Usage Notes
Summary Mode is not applicable to the spma or ipma table because these ResUsage tables
only report a single row of data for each node in normal mode.
To log summary rows for a table, that table must be enabled in both the RSS Table Logging
Enable group and in the RSS Summary Mode Enable group. The corresponding Logging Rate
(node or vproc) must be set to a nonzero value.
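As an illustration of the full sequence for a single table (sps in this sketch), enable logging for
the table, enable Summary Mode for the table, and set a nonzero logging rate:
set logtable sps on
set sumlogtable sps on
set resource vproc logging 600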
If enabled, Summary Mode reduces database I/O by consolidating and summarizing data on
each node on the system.
Using the SET SUMLOGTABLE command, you can enable Summary Mode individually for
the tables in which great detail is not needed. For more information on Summary Mode, see
Resource Usage Macros and Tables.
You can also enable or disable logging in Summary Mode from the RSS screen of the Ctl and
Xctl utilities. For more information, see “Chapter 8 Control GDO Editor (ctl)” on page 155
and the “Xctl (xctl)” chapter in Utilities, Volume 2.
To display the Summary Mode status of any ResUsage table, see “GET SUMLOGTABLE” on
page 268.
Example
The following example enables the sawt table to be logged in Summary Mode.
set sumlogtable sawt on
START
Purpose
Starts a utility in one of the DBW application windows.
Syntax
START[ 1 | 2 | 3 | 4 ] utilityname [ ,VPROC=n | -V=n | -V n ] [ utilityargs ]
where:

Syntax element …
Is the …

1, 2, 3, 4
number of the DBW application window where you want to start the utility.
The default is the lowest numbered application window available.
Caution: Do not insert spaces between the word START and the window number. For
example, START 1 produces an error, but START1 does not.

VPROC=n
-V=n
-V n
the vproc where the program is to be started.
The default is the database control AMP, which is normally AMP 0.

utilityname
name of the utility to be started.
Note: Must be a utility or program in one of the PDE or TPA directories.

utilityargs
remaining part of the command line to pass to utilityname.
Usage Notes
When a DBW application window is not specified, or the specified window is not available,
the utility starts in the first available DBW application window.
Select a vproc number based on the particular utility being run. For example, if the utility you
are starting provides information on a specific AMP or vproc, be sure to start the utility on
that AMP or vproc.
Example
The following example starts the Vproc Manager utility in window 3.
start3 vprocmanager
Started 'vprocmanager' in window 3
at Mon Jul 16 14:16:22 2007
STOP
Purpose
Stops the utility running in the specified DBW application window.
Syntax
STOP { 1 | 2 | 3 | 4 }
where 1, 2, 3, or 4 is the number of the DBW application window where the utility is
running.
Usage Notes
You must specify a DBW application window number.
The STOP command is issued from the Supervisor window.
You can also stop a utility from the DBW application window where the utility is running by
typing the appropriate command, such as EXIT, STOP, or QUIT.
Example
The following example shows DBW application window 2 has stopped running the Show
Locks (showlocks) utility.
stop 2
Screen 2 (showlocks) is stopped.
at Fri Aug 30 16:59:09 1996
CHAPTER 12
DBS Control (dbscontrol)
The DBS Control utility, dbscontrol, displays and modifies various Teradata Database
configuration settings. DBS Control can also be used to set diagnostic checksum options for
different classes of database tables.
DBS Control settings, also called DBS Control fields, are stored in the DBS Control Globally
Distributed Object (GDO). GDOs are binary files that store Teradata Database configuration
settings. They are distributed to and used by every node in the system. The PDE layer of
Teradata Database ensures that the GDO is consistent across all virtual processors.
Audience
Users of DBS Control include the following:
• Teradata Database developers
• Teradata Database system test and verification personnel
• Teradata Support Center personnel
• Field engineers
• Teradata Database system architects
Using DBS Control requires a familiarity with the basic operations of Teradata Database.
User Interfaces
dbscontrol runs on the following platforms and interfaces:
Platform   Interfaces
MP-RAS     Database Window, Command line
Windows    Database Window, Command line (“Teradata Command Prompt”)
Linux      Database Window, Command line
The parallel database extensions (PDE) must be running for DBS Control to run. The
database components of Teradata Database do not need to be running.
For general information on starting the utilities from different platforms and interfaces, see
Appendix B: “Starting the Utilities.”
DBS Control Field Groups
The DBS Control fields are grouped logically based on their use by Teradata Database, as
shown in the following table.
Group
Description
File System
Settings that affect the Teradata Database file system.
Performance
Settings that affect Teradata Database performance features.
General
Miscellaneous settings that affect how Teradata Database operates.
Checksum
Special diagnostic settings that are used to ensure the integrity of disk I/O
operations on database tables.
For information on the checksum group of settings, see “Checksum Fields”
on page 446.
DBS Control Commands
The following table summarizes DBS Control commands.
Command
Description
DISPLAY
Displays the current values of DBS Control settings.
HELP
Provides help information for the DBS Control utility and individual DBS
Control settings.
MODIFY
Modifies DBS Control settings.
QUIT
Exits DBS Control.
WRITE
Writes the current DBS Control settings to the DBS Control GDO.
These commands are described in more detail in the sections that follow.
DISPLAY
Purpose
Displays the current values for the fields of the GENERAL, FILESYS, PERFORMANCE, and
CHECKSUM DBS Control Record groups.
Syntax
{ DISPLAY | D } [ GroupName ]
where:
Syntax element…
Specifies…
GroupName
which of the following groups the DISPLAY command is directed
towards:
• GENERAL (for General fields)
• FILESYS (for File System fields)
• PERFORMANCE (for Performance fields)
• CHECKSUM (for Checksum fields)
Also, you can type the first character of each GroupName rather than
the entire name.
Usage Notes
If GroupName is provided, only the fields associated with the specified group are displayed. If
GroupName is omitted, the values of all fields are displayed.
Example
The following example shows default values for the listed fields.
Note: Actual field numbers may be different from those shown in the example.
Enter a command, HELP, or QUIT:
DISPLAY
DBS Control Record - General Fields:

     1. Version                        = 7
     2. SysInit                        = TRUE (2007-01-20 10:09)
     3. DeadLockTimeout                = 240
     4. (Reserved for future use)
     5. HashFuncDBC                    = 5 (Universal)
     6. (Reserved for future use)
     7. (Reserved for future use)
     8. SessionMode                    = 0 (Teradata)
     9. LockLogger                     = FALSE
    10. RollbackPriority               = FALSE
    11. MaxLoadTasks                   = 5
    12. RollForwardLock                = FALSE
    13. MaxDecimal                     = 15
    14. Century Break                  = 0
    15. DateForm                       = 0 (IntegerDate)
    16. System TimeZone Hour           = 0
    17. System TimeZone Minute         = 0
    18. RollbackRSTransaction          = FALSE
    19. RSDeadLockInterval             = 0 (240)
    20. RoundHalfwayMagUp              = FALSE
    21. (Reserved for future use)
    22. Target Level Emulation         = FALSE
    23. Export Width Table ID          = 0 (Expected Defaults)
    24. DBQL Log Last Resp             = FALSE
    25. DBQL Options                   = 0 (No special options)
    26. ExternalAuthentication         = 0 (On)
    27. IdCol Batch Size               = 100000 (Expected Defaults)
    28. LockLogger Delay Filter        = FALSE
    29. LockLogger Delay Filter Time   = 0
    30. ObjectUseCountCollectRate      = 0 minutes (Disabled)
    31. LockLogSegmentSize             = 64 KB (Expected Defaults)
    32. CostProfileId                  = 0
    33. DBQLFlushRate                  = 600 (seconds)
    34. Memory Limit Per Transaction   = 2 pages
    35. Client Reset Timeout           = 300 seconds
    36. Temporary Storage Page Size    = 4K bytes
    37. Spill File Path                = /var/tdrsg
    38. MDS Is Enabled                 = FALSE
    39. Checktable Table Lock Retry Limit = 0 (Retry Forever)
    40. EnableCostProfileTLE           = FALSE
    41. EnableSetCostProfile           = 0 (==0 => disabled, >0 => enabled)
    42. UseVirtualSysDefault           = 0 (==0 => use static, >0 => use virtual)
    43. DisableUDTImplCastForSysFuncOp = FALSE
    44. CurHashBucketSize              = 20 bits.
    45. NewHashBucketSize              = 20 bits.
    46. MaxLoadAWT                     = 0
    47. MonSesCPUNormalization         = FALSE
    48. MaxRowHashBlocksPercent        = 50%
    49. TempLargePageSize              = 64K Bytes
    50. RepCacheSegSize                = 512K Bytes
    51. MaxDownRegions                 = 6 *defaulted*
    52. MPS_IncludePEOnlyNodes         = FALSE
    53. PrimaryIndexDefault            = D *Teradata default*
    54. AccessLockForUncomRead         = FALSE
    55. DefaultCaseSpec                = FALSE

DBS Control Record - File System Fields:

     1. FreeSpacePercent               = 0%
     2. MiniCylPackLowCylProd          = 10 (free cylinders)
     3. PermDBSize                     = 127 (sectors)
     4. JournalDBSize                  = (Reserved for future use)
     5. DefragLowCylProd               = 100 (free cylinders)
     6. PermDBAllocUnit                = 1 (sectors)
     7. Cylinders Saved for PERM       = 10 (cylinders)
     8. DisableWALforDBs               = FALSE
     9. DisableWAL                     = FALSE
    10. WAL Buffers                    = 20 (WAL log buffers)
    11. MaxSyncWALWrites               = 40 (MaxSyncWalWrites)
    12. SmallDepotCylsPerPdisk         = 2 (cylinders)
    13. LargeDepotCylsPerPdisk         = 1 (cylinders)
    14. WAL Checkpoint Interval        = 60 (seconds)
    15. Free Cylinder Cache Size       = 100 (number of cylinders)
    16. Bkgrnd Age Cycle Interval      = 60 (seconds)

DBS Control Record - Performance Fields:

     1. DictionaryCacheSize            = 1024 (kilobytes)
     2. DBSCacheCtrl                   = TRUE
     3. DBSCacheThr                    = 10%
     4. MaxParseTreeSegs               = 1000
     5. ReadAhead                      = TRUE
     6. StepsSegmentSize               = 1024 (kilobytes)
     7. RedistBufSize                  = 4 (kilobytes) *defaulted* (amp-level buffering only)
     8. DisableSyncScan                = FALSE
     9. SyncScanCacheThr               = 10%
    10. HTMemAlloc                     = 2%
    11. SkewAllowance                  = 75%
    12. Read Ahead Count               = 1
    13. PPICacheThrP                   = 10
    14. ReadLockOnly                   = FALSE
    15. IAMaxWorkloadCache             = 32 (megabytes)
    16. MaxRequestsSaved               = 600 * default *
    17. UtilityReadAheadCount          = 10
    18. StandAloneReadAheadCount       = 20
    19. DisablePeekUsing               = FALSE
    20. IVMaxWorkloadCache             = 1 (megabytes)
    21. RevertJoinPlanning             = FALSE
    22. MaxJoinTables                  = 0 =>(64 * default *)

DBS Control Record - Disk I/O Integrity Fields:

CHECKSUM LEVELS
     1. System Tables                  = NONE
     2. System Journal Tables          = NONE
     3. System Logging Tables          = NONE
     4. User Tables                    = NONE
     5. Permanent Journal Tables       = NONE
     6. Temporary Tables               = NONE

CHECKSUM LEVEL DEFINITIONS
        NONE                           =   0% Sampling
     7. LOW                            =   2% Sampling
     8. MEDIUM                         =  33% Sampling
     9. HIGH                           =  67% Sampling
        ALL                            = 100% Sampling
HELP
Purpose
Provides general help for DBS Control or detailed help if an option is specified.
Syntax
{ HELP | H } [ { ALL | A } | keyword ]
where:
Syntax element…
Specifies…
ALL
a composite of the other options and is used to display the entire DBS
Control help text.
keyword
either a command name or a group name. A command name entry
indicates which user-level command the HELP command is directed
towards. A group name entry indicates which group (GENERAL,
FILESYS, PERFORMANCE, or CHECKSUM) the HELP command is
directed towards.
Also, you can type the first character of each keyword rather than the
entire word.
Usage Notes
If the options are omitted (that is, only HELP is specified), a brief introduction to DBS
Control is displayed. If a keyword is specified, a detailed help display is provided.
Example 1
The following example shows basic help:
Enter a command, HELP, or QUIT:
HELP
The DBS Control utility program provides a means to display and modify
the fields of the DBS Control Record which is stored on the system as a
Globally Distributed Object (GDO). The general command syntax is:
<Command> [ <GroupName> ]
That is, a command followed by a group name (if any).
Valid commands are:
DISPLAY, MODIFY, WRITE, HELP, and QUIT
Valid group names are:
GENERAL, FILESYS, PERFORMANCE, and CHECKSUM
Note that all commands and groups may be abbreviated to their first
unique character.
Enter "HELP <Command>" or "HELP <GroupName>" for detailed information on
each command or group, respectively. Enter "HELP ALL" for the help text
in its entirety.
Example 2
The following example shows detailed help using the DISPLAY command:
Enter a command, HELP, or QUIT:
HELP DISPLAY
DISPLAY [ <GroupName> ]

o This command will display the values of the DBS Control Record fields.
  If a GroupName is provided, only the fields associated with the
  specified group will be displayed. If GroupName is omitted, the values
  of all fields will be displayed.

o Valid GroupNames are GENERAL, FILESYS, PERFORMANCE, and CHECKSUM.
  Enter "HELP <GroupName>" for additional information.
MODIFY
Purpose
Modifies the value of the writable DBS Control Record fields.
Syntax
{ MODIFY | M } [ GroupName field# = value ]
where:
Syntax Element
Description
GroupName
Indicates which of the following groups the MODIFY command is
directed towards:
• GENERAL (for General fields)
• FILESYS (for File System fields)
• PERFORMANCE (for Performance fields)
• CHECKSUM (for Checksum fields)
Also, you can type the first character of each GroupName rather than
the entire name.
field#=value
Specifies the new value of the specified field.
Note: Use the DISPLAY GroupName command to return the field
numbers for a particular group.
In most cases, this is a decimal value. For Boolean fields, use TRUE or T
and FALSE or F. All values are checked against the valid range of values
of the respective field.
Usage Notes
If you omit the options, DBS Control explicitly prompts for the group name, field number,
and the new value of the field.
Note: Use of the MODIFY command alone does not affect the current system GDO. The
MODIFY command indicates when any changes made will become effective, either after the
next Teradata Database restart or after the DBS Control Record has been written.
Example
The following example shows how to modify the Deadlock Timeout field:
Enter a command, HELP, or QUIT:
MODIFY GENERAL 3 = 600
The DeadlockTimeout field has been modified from 240 to 600.
NOTE: This change will become effective after the next DBS restart.
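Because MODIFY alone does not update the system GDO, follow the modification with a
WRITE command (or enter 'W' when prompted at QUIT) to save the change. A minimal
sketch of the follow-up step:
Enter a command, HELP, or QUIT:
WRITE
Locking the DBS Control GDO ...
Updating the DBS Control GDO ...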
QUIT
Purpose
Causes DBS Control to exit.
Syntax
{ QUIT | Q }
Usage Notes
If the DBS Control Record has been modified, but you did not issue a WRITE command, you
are asked to write the changes in the system GDO or terminate without saving them.
Example 1
The following example shows how to exit DBS Control and issue a WRITE command after
modifying the DBS Control Record:
Enter a command, HELP, or QUIT:
QUIT
The DBS Control Record has been modified.
Enter 'W' to write to the DBS Control GDO or
'Q' to terminate with no update:
W
Locking the DBS Control GDO...
Updating the DBS Control GDO...
Exiting DBSControl...
Example 2
The following example shows how to exit DBS Control normally:
Enter a command, HELP, or QUIT:
q
Exiting DBS Control Utility...
WRITE
Purpose
Forces the DBS Control Record to be written out to its system GDO counterpart.
Syntax
{ WRITE | W }
Usage Notes
A system-wide lock is placed on the DBS Control GDO to write the changes.
DBS Control notifies you if the DBS Control GDO has been modified by someone else during
your DBS Control session, and offers to overwrite their changes with yours or terminate
without saving them.
Example 1
The following example writes the current settings to the GDO without exiting DBS Control.
Enter a command, HELP, or QUIT:
write
Locking the DBS Control GDO ...
Updating the DBS Control GDO ...
Enter a command, HELP, or QUIT:
Example 2
The following example writes the current settings to the GDO, overwriting changes that have
been made by someone else during the current DBS Control session.
Enter a command, HELP, or QUIT:
WRITE
The DBS Control Record has been modified by someone else.
Do you want to override those changes?
Enter 'W' to overwrite the DBS Control GDO or
'Q' to terminate with no update:
W
Locking the DBS Control GDO...
Updating the DBS Control GDO...
Exiting DBSControl...
DBS Control Fields (Settings)
The following table summarizes the DBS Control fields and indicates their groups. The fields
are described in more detail in the sections that follow.
Field
Description
Group
“AccessLockForUncomRead”
Determines whether SELECT requests embedded in INSERT...SELECT,
UPDATE...SELECT, and DELETE...SELECT requests place READ or
ACCESS locks on the source table.
General
“Bkgrnd Age Cycle Interval”
Determines the amount of time that elapses between background cycles
to write a subset of modified segments in the cache to disk.
File System
“Century Break”
Defines how to interpret character-format Teradata Database date input
when the input format only has two digits representing the year.
General
“CheckTable Table Lock Retry
Limit”
Specifies that in nonconcurrent mode, CheckTable will retry a table
check for a specified limit when the table is locked by another
application.
General
“Client Reset Timeout”
Specifies how long the Relay Services Gateway (RSG) should wait for an
intermediary to reconnect before taking action.
General
“CostProfileId”
Contains the system standard cost profile ID number, which defines the
cost profile that the system will use.
General
“CurHashBucketSize”
Indicates the number of bits used to identify a hash bucket in the current
system configuration.
General
“Cylinders Saved for PERM”
Saves some number of cylinders for permanent data only.
File System
“DateForm”
Defines whether Integer Date or ANSI Date formatting is used for a
session.
General
Field
Description
Group
“DBQLFlushRate”
Controls the length of time between flushing the Database Query Log
(DBQL) cache.
General
“DBQL Log Last Resp”
Determines whether DBQL logs a pseudo step labeled “RESP” after the
last response of a DBQL logged request completes.
General
“DBQL Options”
This field is unused, but reserved for future DBQL use. It should not be
changed.
General
“DBSCacheCtrl”
Enables or disables the performance enhancements associated with the
Cache Control Page-Release Interface associated with the DBSCacheThr
field.
Performance
“DBSCacheThr”
Specifies the percentage value to use for calculating the cache threshold
when the DBSCacheCtrl field is enabled.
Performance
“DeadlockTimeOut”
Used by the Dispatcher to determine the interval (in seconds) between
deadlock timeout detection cycles.
General
“DefaultCaseSpec”
Determines whether character string comparisons consider character
case and whether character columns are considered case specific by
default in Teradata session mode.
General
“DefragLowCylProd”
Determines the number of free cylinders below which cylinder
defragmentation can begin.
File System
“DictionaryCacheSize”
Defines the size of the dictionary cache for each PE on the system.
Performance
“DisableUDTImplCastForSysFuncOp”
Disables/enables implicit cast/conversion of UDT expressions passed to
built-in system operators/functions.
General
“DisablePeekUsing”
Enables or disables the performance enhancements associated with
exposed USING values in parameterized queries.
Performance
“DisableSyncScan”
Enables or disables the performance enhancements associated with
synchronized full table scans.
Performance
“DisableWAL”
Forces the writing of data blocks and cylinder indexes directly to disk
rather than writing the changes to the WAL log.
File System
“DisableWALforDBs”
Forces data blocks to be written directly to disk rather than having the
changes written to the WAL log.
File System
“EnableCostProfileTLE”
Determines whether Optimizer Cost Estimation Subsystem (OCES)
diagnostics are enabled in combination with Target Level Emulation
(TLE).
General
“EnableSetCostProfile”
Controls use of DIAGNOSTIC SET PROFILE statements, to dynamically
change cost profiles, which are used for query planning.
General
“Export Width Table ID”
Controls the export width of a character in bytes.
General
“ExternalAuthentication”
Controls whether Teradata Database users can be authenticated outside
(external) of the Teradata Database software authentication system.
General
“Free Cylinder Cache Size”
Determines how many cylinders are to be managed in File System cache
for use as spool cylinders.
File System
Field
Description
Group
“FreeSpacePercent”
Determines the percentage of free space reserved on cylinders during
bulk load operations.
File System
“HashFuncDBC”
Defines the hashing function that Teradata Database uses.
General
“HTMemAlloc”
Specifies the percentage of memory to be allocated to a hash table for a
hash join.
Performance
“IAMaxWorkloadCache”
Defines the maximum size of the Index Wizard workload cache when
performing analysis operations.
Performance
“IdCol Batch Size”
Indicates the size of a pool of numbers reserved for generating numbers
for rows to be inserted into a table with an identity column.
General
“IVMaxWorkloadCache”
Defines the maximum size of the Index Wizard workload cache when
performing validation operations.
Performance
“LargeDepotCylsPerPdisk”
Determines the number of depot cylinders the file system allocates per
pdisk (storage device) to contain large slots (512 KB).
File System
“LockLogger”
Defines the system default for the locking logger.
General
“LockLogger Delay Filter”
Controls whether or not to filter out blocked lock requests based on delay
time.
General
“LockLogger Delay Filter Time”
Indicates the value at which blocked lock requests with a delay time
greater than this value are logged.
General
“LockLogSegmentSize”
Specifies the size of the Locking Logger segment.
General
“MaxDecimal”
Defines the maximum number of Decimal Digits in the default
maximum value used in expression typing.
General
“MaxDownRegions”
Determines the number of regions (ranges of rows) in a data or index
subtable that can be marked as down before the entire subtable is marked
down on all AMPs.
General
“MaxJoinTables”
Influences the maximum number of tables that can be joined per query
block.
Performance
“MaxLoadAWT”
Works together with MaxLoadTasks to limit the number of load utility
jobs that can run concurrently on the system.
General
“MaxLoadTasks”
Works together with MaxLoadAWT to limit the number of load utility
jobs that can run concurrently on the system.
General
“MaxParseTreeSegs”
Defines the maximum number of 64 KB tree segments that the parser
allocates while parsing a request.
Performance
“MaxRequestsSaved”
Specifies the number of request-to-step cache entries allowed on each PE
on a Teradata Database system.
Performance
“MaxRowHashBlocksPercent”
Specifies the proportion of available locks that can be used for rowhash
locks by a transaction before the transaction is automatically aborted.
General
“MaxSyncWALWrites”
Determines the maximum number of outstanding WAL log writes to
allow before tasks requiring synchronous writes are delayed to achieve
better buffering.
File System
Field
Description
Group
“MDS Is Enabled”
Controls whether the rsgmain program is started in the RSG vproc when
Teradata Database starts.
General
“Memory Limit Per
Transaction”
Specifies the maximum amount of in-memory, temporary storage that
the Relay Services Gateway (RSG) can use to store the records for one
transaction.
General
“MiniCylPackLowCylProd”
Determines the number of free cylinders below which the File System will
start to perform the MiniCylPacks operation in anticipation of the need
for additional free cylinders.
File System
“MonSesCPUNormalization”
Determines whether CPU data in workload management (PM/API and
Open API) calls is normalized. This affects workload rules defined in
Teradata Dynamic Workload Manager.
General
“MPS_IncludePEOnlyNodes”
Excludes PE-only nodes from MONITOR PHYSICAL SUMMARY
Workload Management API statistics calculations.
General
“NewHashBucketSize”
Specifies the number of bits that will be used to identify hash buckets on
the system after the next sysinit or reconfig.
General
“ObjectUseCountCollectRate”
Specifies the amount of time between collections of object use-count
data.
General
“PermDBAllocUnit”
Determines the allocation unit for multirow data blocks in units of 512-byte sectors for permanent tables.
File System
“PermDBSize”
Determines the maximum size, in consecutive 512-byte sectors, of the
multirow data blocks of a permanent table.
File System
“PPICacheThrP”
The PPICacheThrP field is used to specify the percentage of cache
memory that is available (per query) for multiple-context operations
(such as joins and aggregations) on PPI tables and join indexes.
Performance
“PrimaryIndexDefault”
Determines whether Teradata Database automatically creates primary
indexes for tables created by CREATE TABLE statements that do not
include PRIMARY INDEX, NO PRIMARY INDEX, PRIMARY KEY, or
UNIQUE clauses.
General
“ReadAhead”
Enables or disables the performance enhancements associated with the
Read-Ahead Sequential File Access Workload operation.
Performance
“Read Ahead Count”
Specifies the number of data blocks that will be preloaded in advance of
the current file position while performing sequential scans.
Performance
“ReadLockOnly”
Enables or disables the special read-or-access lock protocol on the
DBC.AccessRights table during access rights validation and on other
dictionary tables accessed by read-only queries during request parsing.
Performance
“RedistBufSize”
Determines the size (in KB) of units of hashed row redistribution buffers
for use by load utilities.
Performance
“RepCacheSegSize”
Specifies the size (in KB) of the cache segment that is allocated in each
AMP to store EVL objects. These objects are used by the replication
system to convert row images into the appropriate external format.
General
Field
Description
Group
“RevertJoinPlanning”
Determines whether the Teradata Database query Optimizer uses newer
or older join planning techniques.
Performance
“RollbackPriority”
Defines the system default for the rollback performance group.
General
“RollbackRSTransaction”
Used when a subscriber-replicated transaction and a user transaction are
involved in a deadlock.
General
“RollForwardLock”
Defines the system default for the RollForward using Row Hash Locks
option.
General
“RoundHalfwayMagUp”
Indicates how rounding should be performed when computing values of
DECIMAL types.
General
“RSDeadLockInterval”
Checks for deadlocks between subscriber-replicated transactions and
user transactions.
General
“SessionMode”
Defines the system default transaction mode, case sensitivity, and
character truncation rule for a session.
General
“SkewAllowance”
Specifies a percentage factor used by the Optimizer in deciding the size of
each hash join partition.
Performance
“SmallDepotCylsPerPdisk”
Determines the number of depot cylinders the file system allocates per
pdisk (storage device) to contain small slots (128 KB).
File System
“Spill File Path”
Specifies a directory that the Relay Services Gateway (RSG) can use for
spill files.
General
“StandAloneReadAheadCount”
Specifies the number of data blocks the Teradata utilities will preload
when the utilities or File System startup run as standalone tasks.
Performance
“StepsSegmentSize”
Defines the maximum size (in KB) of the plastic steps segment (also
known as OptSeg).
Performance
“SyncScanCacheThr”
Specifies the percentage of File Segment (FSG) cache that is expected to
be available for all synchronized full-file scans occurring simultaneously.
Performance
“SysInit”
Ensures the system has been initialized properly using the System
Initializer utility.
General
“System TimeZone Hour”
Defines the System Time Zone Hour offset from Universal Coordinated
Time (UTC).
General
“System TimeZone Minute”
Defines the System Time Zone Minute offset from Universal Coordinated
Time (UTC).
General
“Target Level Emulation”
Allows a test engineer to set the costing parameters considered by the
Optimizer.
General
“TempLargePageSize”
Specifies the size (in KB) of the large memory allocation storage page
used for Relay Services Gateway (RSG) temporary storage.
General
“Temporary Storage Page Size”
Specifies the size (in KB) of the standard memory allocation storage page
size used for Relay Services Gateway (RSG) temporary storage.
General
Field
Description
Group
“UseVirtualSysDefault”
This field is no longer used. For cost profiling information, see
“CostProfileId” on page 318.
General
“UtilityReadAheadCount”
Specifies the number of data blocks the Teradata utilities will preload
when performing sequential scans.
Performance
“Version”
Indicates the version number of the DBS Control Record.
General
“WAL Buffers”
Determines the number of WAL append buffers to allocate.
File System
“WAL Checkpoint Interval”
Determines the amount of time that elapses between WAL checkpoints.
File System
AccessLockForUncomRead
Purpose
Determines whether SELECT requests embedded in INSERT...SELECT, UPDATE...SELECT,
and DELETE...SELECT requests place READ or ACCESS locks on the source table.
Field Group
General
Valid Settings
Setting   Default Locking for Outer SELECT      Default Locking for SELECT Embedded In
          and Ordinary SELECT Subqueries        DELETE, INSERT, MERGE, or UPDATE Request

TRUE      READ                                  ACCESS
FALSE     READ                                  READ
Default
FALSE
Changes Take Effect
After the DBS Control Record has been written or applied
Related Topics

For more information on…        See…
Locks and concurrency control   SQL Request and Transaction Processing
Session isolation level         SQL Request and Transaction Processing
Bkgrnd Age Cycle Interval
Purpose
Used by the File System to determine the amount of time that elapses between background
cycles to write a subset of modified segments in the cache to disk. This background activity
serves to reduce the size of the WAL log, and allows for improved disk space utilization in
WAL modes.
Field Group
File System
Valid Range
1 through 240 seconds
Default
60 seconds
Changes Take Effect
After the DBS Control Record has been written or applied
Century Break
Purpose
Defines how to interpret character data when converting to a date, when both the data and
the applicable format have only two digits representing the year.
Century Break specifies which two-digit years, if any, are to be interpreted as 20th-century
years and which two-digit years, if any, are to be interpreted as 21st-century years.
Field Group
General
Valid Range
0 through 100
Default
0
Changes Take Effect
When the DBS Control Record is written. For sessions logged on at the time of a change, the
new setting becomes effective at the next logon, or after the next Teradata Database restart.
Rules
The following two-year digit rules apply:
IF a two-digit year yy is…
THEN the year is…
less than Century Break
20yy and is considered to be in the 21st century.
greater than or equal to Century Break
19yy and is considered to be in the 20th century.
The following Century Break value rules apply:
IF Century Break is…
THEN all years yy are…
0
19yy.
100
20yy.
Century Break does not affect four-digit years.
The Century Break setting has no effect on Teradata Database dates input in numeric (as
opposed to character) format.
If the character data specifies a four-digit year, and the format specifies a two-digit year,
Century Break does not affect the conversion. The four digits in the character data are used as
the year.
Teradata recommends you convert to four-digit years and corresponding four-digit-year
formats. However, Century Break provides a transitional facility while you use two-digit years.
Example 1
If Century Break = 25, strings such as '00/01/01' and '24/01/01' are interpreted as years 2000
and 2024, respectively. A string inserted as '25/01/01' is interpreted as year 1925.
Note: The choice of 25 for Century Break indicates that the installation wants a cushion of up
to 25 years to handle input dates into the 21st century and does not have historic input data
prior to the year 1925.
Example 2
If Century Break = 100, two-digit years are inserted as years in the 21st century (that is, 2000,
2001, and so forth).
CheckTable Table Lock Retry Limit
Purpose
Specifies the duration, in minutes, that CheckTable, in nonconcurrent mode, will retry a table
check when the table is locked by another application.
Field Group
General
Valid Range
0 through 32767 minutes
If the CheckTable Table Lock Retry Limit field is greater than 0, then CheckTable will retry a
table check within the specified limit.
Default
0, which indicates that in nonconcurrent mode, CheckTable will retry a table check until
CheckTable can access the table.
Changes Take Effect
Immediately
Client Reset Timeout
Purpose
Specifies how long the Relay Services Gateway (RSG) should wait for an intermediary to
reconnect after one of the following before taking action:
• Communication failure
• Intermediary reset
• Server reset
Field Group
General
Valid Range
0 through 65535 seconds
Default
300 seconds
Changes Take Effect
After the next Teradata Database restart
CostProfileId
Purpose
Contains the system standard cost profile ID number and defines the cost profile the Teradata
Database system will use as its default.
Note: This setting should be changed only under the direction of Teradata Support Center
personnel.
Field Group
General
Valid Range
0 through 32760
Default
IF CostProfileID is…
THEN…
0
the system automatically chooses the appropriate default cost profile, based
on the operating system:
• Teradata12 (id = 36) - All 32 bit platforms
• T2_Linux64 (id = 21) - 64 bit Linux platforms
• T2_Win64 (id = 37) - 64 bit Windows platforms
This selection is made during the first session logon that occurs after
CostProfileId is set to 0. When the system selects its default cost profile, it
changes CostProfileId to that profile's id value. For example, on a 64 bit
Linux platform, once the selection is made, you will see:
CostProfileId = 21
This will not cause any running session to change the cost profile it is
currently using.
greater than 0
the standard default cost profile is defined by
DBC.CostProfiles.CostProfileId = N, where N is the CostProfileId value.
The non-zero value N for CostProfileId implies that the system will use the
cost profile values in DBC.ConstantValues where
DBC.ConstantValues.ProfileId = N.
Note: The set of constant values in DBC.ConstantValues for a particular
cost profile consists of the values to be used instead of what the system uses
by default. In nearly all cases, this set represents only a part of the full set of
cost profile constant definitions.
Changes Take Effect
For new sessions after the DBS Control Record has been written or applied
Example 1
To return the list of cost profiles defined for the system, type the following:
SELECT * FROM DBC.CostProfiles_v;
The following appears:
*** Query completed. 38 rows found. 5 columns returned.
*** Total elapsed time was 3 seconds.
Type Name       Profile Name                   ProfileId Cat Description
--------------- ------------------------------ --------- --- --------------
Type2           T2_emc                                26  F   EMC cost value
Legacy          lsi6285_40                            17  F   Disk array cos
Legacy          symbios_half                           7  F   Disk array cos
Legacy          SysDefault                             0  F   SysDefault DBS
Legacy          V2R5_Bynet_V1                          5  F   Bynet V1 cost
Type2           T2_lsi6285_40                         34  F   LSI 6285 array
Type2           T2_Bynet_V1                           22  F   Bynet V1 cost
Legacy          V2R5_Array                            19  F   Disk array cos
Type2           T2_symbios_half                       24  F   Half Populated
Legacy          lsi6840_28                            13  F   Disk array cos
Type2           T2_Win64                              37  F   Default Type 2
Type2           Teradata12                            36  F   Standard cost
Legacy          V2R5_Solaris                           3  F   V2R5 DBS cost
Type2           T2_lsi6840_56                         32  F   LSI 6840 array
Legacy          V2R6                                  35  F   Generic standa
Legacy          lsi6840_56                            15  F   Disk array cos
Legacy          V2R4                                   1  F   V2R4 DBS cost
Legacy          lsi6288_40                            11  F   Disk array cos
Legacy          lsi6288_52                            12  F   Disk array cos
Type2           T2_lsi6840_28                         30  F   LSI 6840 array
Type2           T2_32Bit                              20  F   Default Type 2
Legacy          emc                                    9  F   Disk array cos
Type2           T2_lsi6283                            27  F   LSI 6283 array
Legacy          lsi6840_40                            14  F   Disk array cos
Legacy          V2R4_Array                            18  F   Disk array cos
Type2           T2_lsi6288_40                         28  F   LSI 6288 array
Legacy          V2R4_Bynet                             4  F   Bynet cost val
Type2           T2_lsi6840_40                         31  F   LSI 6840 array
Legacy          symbios_full                           8  F   Disk array cos
Legacy          lsi6285_20                            16  F   Disk array cos
Type2           T2_lsi6288_52                         29  F   LSI 6288 array
Type2           T2_symbios_full                       25  F   Fully Populate
Type2           T2_lsi6285_20                         33  F   LSI 6285 array
Legacy          V2R5_Bynet_V2                          6  F   Bynet V2 cost
Legacy          V2R5                                   2  F   V2R5 DBS cost
Legacy          lsi6283                               10  F   Disk array cos
Type2           T2_Bynet_V2                           23  F   Bynet V2 cost
Type2           T2_Linux64                            21  F   Default Type 2
Example 2
To return the list of constant values defined for a specific profile, for example ProfileId = 36,
type the following:
SELECT * FROM DBC.CostProfileValues_v WHERE ProfileId = 36;
The following is a portion of the output:
*** Query completed. 177 rows found. 7 columns returned.
*** Total elapsed time was 2 seconds.
Profile Name      P-Id Constant Name                    C-Id Cat
--------------- ------ ------------------------------ ------ ---
Teradata12          36 OptIndexBlockSize               10002 I
Teradata12          36 OptMaxBldKeySize                10003 I
Teradata12          36 OptMaxRowIdSIndex               10004 I
Teradata12          36 OptRowidSize                    10005 I
Teradata12          36 OptSpoolBlockSize               10006 I
Teradata12          36 OptTableBlockSize               10007 I
Teradata12          36 OptBitInst                      10060 I
Teradata12          36 OptBMAndRowInst                 10061 I
Teradata12          36 OptCharFieldInst                10062 I
Teradata12          36 OptNumFieldInst                 10063 I
Teradata12          36 OptOutputRowInst                10064 I
Teradata12          36 OptOvhdOfRowCompInst            10065 I
Teradata12          36 OptRedistributeInst             10066 I
Teradata12          36 OptRowAccessInst                10067 I
Teradata12          36 OptRowIdInst                    10068 I
Teradata12          36 OptSynonymInst                  10069 I
Teradata12          36 InitBlockProcessOv              10100 I
Teradata12          36 InitBlockProcessCf              10101 I
Teradata12          36 InitAccessRowOv1                10102 I
Teradata12          36 InitAccessRowCf1                10103 I
CurHashBucketSize
Purpose
Indicates the number of bits used to identify hash buckets in the current system configuration.
Field Group
General
Valid Settings
16 or 20
This field is informational only. It cannot be changed.
See also “NewHashBucketSize” on page 399.
Cylinders Saved for PERM
Purpose
Reserves a specified number of cylinders to be used only for permanent data storage.
Field Group
File System
Valid Range
1 through 524,287 cylinders
Default
10 cylinders
Changes Take Effect
After the DBS Control Record has been written or applied
Usage Notes
Free cylinders are used for permanent, spool, and temporary space in Teradata Database.
• Permanent (“perm”) space is used for storing data rows of tables.
• Spool space is used to hold intermediate query results or formatted answer sets for queries.
  Once the query is complete, the spool space is released as free cylinders.
• Temporary (“temp”) space is used for global temporary tables.
All requests for new cylinders are satisfied from the pool of free cylinders, so requests for perm
cylinders compete with requests for spool and temp cylinders. If not enough free cylinders are
available to meet a request, the request fails with a disk full error.
When the failed request involves perm cylinders, a lengthy rollback can be required, which
adversely affects system performance. However, if the failed request involves spool or temp
cylinders, very little rollback is generally required. Therefore, it is preferable to run out of
spool and temp cylinders before running out of perm cylinders.
Cylinders Saved for PERM reserves a number of free cylinders to be used only to fulfill
requests for perm cylinders. If this number or fewer free cylinders is available, requests for
spool and temp cylinders will fail, but requests for new perm cylinders will not fail unless
there are no free cylinders available.
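As an illustration only, the setting could be raised from within DBS Control as follows. The
field number (7 in the DISPLAY example earlier in this chapter) can differ between releases,
so confirm it with DISPLAY FILESYS first, and follow the modification with WRITE to save
the change:
MODIFY FILESYS 7 = 20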
DateForm
Purpose
Defines whether IntegerDate or ANSIDate formatting is used for a session.
Field Group
General
Valid Settings
Setting
Description
0 (IntegerDate)
Columns with DATE and PERIOD(DATE) type values use the format
specified in the SDF file. The default format is ‘YY/MM/DD’; however, the
SDF file can be customized by a system administrator to change this date
format.
1 (ANSIDate)
Date columns are formatted for output as 'YYYY-MM-DD'.
Default
0 (IntegerDate)
Changes Take Effect
For new sessions begun after the DBS Control Record has been written. Existing sessions are
not affected.
Usage Notes
DateForm can be overridden at the user level and at the session level (at logon or during the
session).
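For example, an individual session can switch to ANSIDate formatting with the following
SQL statement (shown for illustration):
SET SESSION DATEFORM = ANSIDATE;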
Related Topics

For more information on…   See…
The SDF file               Tdlocaledef in Utilities Volume 2.
DateForm                   SQL Fundamentals
DBQLFlushRate
Purpose
Defines the interval in seconds between flushings of the DBQL caches to the DBQL dictionary
tables.
Field Group
General
Valid Range
1 through 3600 seconds
Note: Teradata does not recommend values less than 600 seconds, and DBS Control issues a
warning if you set the value below 600 seconds.
Default
600 seconds
Changes Take Effect
After the DBS Control Record has been written or applied. However, DBQL will not become
aware of the new setting until the current timer expires (or 10 minutes passes). Therefore, the
change could take up to 10 minutes to become effective.
Usage Notes
If an END QUERY LOGGING statement is issued, all the caches (except the DBQLSummary
cache, which is flushed at the selected Flush Rate) are flushed as part of the END QUERY
LOGGING statement.
Example
Assume that the DBQLFlushRate is 300 seconds. This means that the cache entries are written
to the DBQL dictionary tables at least every 5 minutes. If a cache is filled up after 3 minutes,
entries are written at 3 minutes and then again at the 5-minute interval.
Related Topics

For more information on…                               See…
BEGIN QUERY LOGGING and END QUERY LOGGING statements   SQL Data Definition Language
Tracking processing behavior with the DBQL             Database Administration
DBQL Log Last Resp
Purpose
Determines whether DBQL logs a pseudo step labeled “RESP” in the DBQLStepTbl table
when the last response to a DBQL logged request is completed. This can be used together with
the FirstRespTime log entry in the DBQLogTbl to calculate the approximate response time
experienced by the client.
Field Group
General
Valid Settings
Setting
Description
TRUE
DBQL logs a pseudo step labeled RESP to the DBQLStepTbl when the last response
of a DBQL logged request is complete.
FALSE
DBQL does not log a RESP pseudo step for completed requests.
Default
FALSE
Changes Take Effect
After the DBS Control Record has been written or applied.
Usage Notes
This setting is effective only when DBQL logging is enabled. The RESP pseudo step entry is
logged for every query that is logged to the DBQL default table, regardless of whether
STEPINFO or other DBQL options are requested.
When enabled, the RESP pseudo step is logged even if the request aborts or experiences an
error. It indicates when the logged request ends for any reason.
Related Topics

For more information on…                               See…
BEGIN QUERY LOGGING and END QUERY LOGGING statements   SQL Data Definition Language
Tracking processing behavior with the DBQL             Database Administration
DBQL Options
Purpose
This field is unused, but reserved for future DBQL use. It should not be changed.
DBSCacheCtrl
Purpose
Enables or disables preferential caching for data blocks from smaller tables. The threshold size
for tables whose data blocks are to be preferentially cached is set using the DBSCacheThr field.
Field Group
Performance
Valid Settings
Setting
Description
TRUE
Enables preferential caching.
FALSE
Disables preferential caching.
Default
TRUE
Changes Take Effect
After the DBS Control Record has been written or applied. Operations in progress at the time
of the change are not affected.
Related Topics
See “DBSCacheThr” on page 330.
DBSCacheThr
Purpose
Specifies the threshold table size that demarcates “small” from “large” tables for purposes of
system caching decisions. DBSCacheThr is expressed as a percentage of FSG Cache.
• Tables that would occupy the DBSCacheThr percentage or less of the cache are considered
  small. Data blocks from these tables are preferentially cached.
• Tables that would occupy more than the DBSCacheThr percentage of the cache are
  considered large. Data blocks from these tables are preferentially excluded from the cache.
DBSCacheThr is effective only when DBSCacheCtrl is set to TRUE.
Field Group
Performance
Valid Range
0 through 100%
Note: A value of 1% is recommended for most moderate to large systems.
Default
10%
Changes Take Effect
After the DBS Control Record has been written or applied. Any operations in progress at the
time of the change are not affected.
Usage Notes
Caching frequently accessed data blocks can improve system performance because reading
from memory (cache) is much faster than reading from disk. Because the FSG Cache size is
limited, older data blocks are removed from the cache (“aged out”) as space is required to
cache more recently accessed data.
Reference tables are typically accessed frequently. Caching data from these tables can markedly
improve system performance. Because reference tables are often relatively small, data from
several reference tables will fit into the cache.
Full table scans of large, non-reference tables, however, if cached, can quickly overwhelm the
cache, displacing all existing cached data. If full table scans of large tables are infrequent
occurrences, as is typical, there is little benefit to caching this data.
DBSCacheThr provides a way to exclude data from larger tables from the FSG Cache, helping
to ensure that data from smaller tables is retained in the cache as long as possible.
DBSCacheThr specifies the proportion of the FSG Cache that may be occupied by a table in
order that the table data be cached. Because it is a threshold value, tables equal to or smaller
than DBSCacheThr will be preferentially cached. Larger tables will not be cached, under most
circumstances.
Note: DBSCacheThr is one of a number of factors that influence whether a data block is
cached. It does not solely determine which data blocks are cached.
Considerations
•
DBSCacheThr affects the caching of spool tables in addition to permanent data tables. If
typical system work produces large spool tables, setting DBSCacheThr to a small value
might prevent spool tables from being cached. This would slow query performance.
•
Large tables that would normally be excluded from the cache by DBSCacheThr may
qualify for synchronized table scans if two or more queries perform a full table scan on the
large table simultaneously. In these cases, data from large tables may be cached, regardless
of the DBSCacheThr setting. For more information see “SyncScanCacheThr” on page 434.
Recommendations
•
Use DBSCacheThr to prevent large, sequentially read or written tables from pushing other
data out of the cache. Set DBSCacheThr to a value that corresponds to the demarcation
between smaller, more frequently access tables, and larger tables that are infrequently
accessed. Ideally, there will be a jump in size between these types of tables which makes
distinguishing them easy.
•
If moderately sized tables are accessed frequently, setting DBSCacheThr to cache these
tables might cause smaller, less frequently accessed tables to be cached, which could
impact system performance. Carefully evaluate the performance impacts of any changes to
DBSCacheThr before committing those changes on a production system.
•
Because DBSCacheThr also affects caching of spool tables, set DBSCacheThr to the
smallest possible value that will not adversely affect spool tables generated by the typical
system workload. For most moderate to large systems today, this would be a DBSCacheThr
setting of 1%. If the average spool table size per node or AMP is greater than 1% of FSG
Cache per node or AMP, DBSCacheThr can be set to a higher value.
•
To calculate a DBSCacheThr threshold value, determine the size of frequently accessed
tables that should be preferentially cached. Assume the table is evenly distributed across all
nodes and AMPs of the system, and determine the percentage of cache on each node or
AMP that table would occupy. Set DBSCacheThr to this percentage. For example:
DBSCacheThr setting = Per-node table size/FSG Cache per node
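The following minimal sketch (Python; an illustration only, not part of Teradata Database) shows the same arithmetic. The function name and arguments are hypothetical.

def dbscachethr_setting(table_size_mb, node_count, fsg_cache_per_node_mb):
    # Per the formula above: assume the table is spread evenly across nodes,
    # then express its per-node share as a percentage of FSG Cache per node.
    per_node_table_mb = table_size_mb / node_count
    return 100.0 * per_node_table_mb / fsg_cache_per_node_mb

# A 100 MB reference table on a 10-node system with 500 MB of FSG Cache per node:
print(dbscachethr_setting(100, 10, 500))   # 2.0 percent (compare Example 1 below)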
Example 1
Assume a system with many small reference tables that are frequently accessed. The goal of the
DBSCacheThr setting is to preferentially retain these tables in the cache for as long as possible.
Base the DBSCacheThr setting on the size of a typical reference table, and the amount of space
such a table occupies on each node of the system:
•
Typical small, frequently accessed reference table = 100 MB
(one million rows with 100 bytes per row)
•
System has 10 nodes
•
Teradata Database distributes the table rows evenly across all AMPs and nodes of the
system: 100 MB / 10 nodes = 10 MB per node occupied by the table
RAM per Node    FSG Cache per Node    DBSCacheThr Setting
1 GB            500 MB                2%
2 GB            1.5 GB                1%
4 GB            3.5 GB                1%
These DBSCacheThr settings influence system caching decisions in favor of caching these
tables.
Example 2
Assume a system with a workload that requires full-table scans of large tables. The goal of the
DBSCacheThr setting is to preferentially exclude these tables from the cache, so that smaller,
more frequently accessed tables will stay in the cache longer. Base the DBSCacheThr setting on
the size of a typical large table, and the amount of space such a table occupies on each node of
the system:
•
Typical large, infrequently accessed table = 1000 MB
(10 million rows with 100 bytes per row)
•
System has 10 nodes
•
Teradata Database distributes the table rows evenly across all AMPs and nodes of the
system: 1000 MB / 10 nodes = 100 MB per node
RAM per Node    FSG Cache per Node    DBSCacheThr Setting
1 GB            500 MB                less than 20%
2 GB            1.5 GB                less than 6%
4 GB            3.5 GB                less than 2%
These DBSCacheThr settings influence system caching decisions in favor of excluding these
tables from the cache.
Related Topics
•
“DBSCacheCtrl” on page 329
•
“SyncScanCacheThr” on page 434.
DeadlockTimeOut
Purpose
Used by the Dispatcher to determine the interval (in seconds) between deadlock timeout
detection cycles.
The value in DeadlockTimeOut specifies the time-out value for requests that are locking each
other out on different AMPs. When the system detects a deadlock, it aborts one of the jobs.
Pseudo table locks reduce deadlock situations for all-AMP requests that require write or
exclusive locks. However, deadlocks still may be an issue on large systems with heavy
concurrent usage. In batch operations, concurrent requests may contend for locks on Data
Dictionary tables.
Field Group
General
Valid Range
0 through 3600 seconds
Default
240 seconds
Changes Take Effect
After the next Teradata Database restart
Recommendation
Reduce the value in this field to cause more frequent retries with less time in a deadlock state.
Faster CPUs significantly reduce the system overhead for performing deadlock checks, so you
can set the value much lower than the current default of 240 seconds. The following general
recommendations apply.
IF your applications…                              THEN you should…
incur some dictionary deadlocks                    set the value to between 30 and 45 seconds.
incur few dictionary deadlocks                     retain the default value of 240 seconds.
incur many true deadlocks                          set the value as low as 10 seconds.
are predominantly Online Transaction Processing    set the value as low as 10 seconds.
(tactical) applications
DefaultCaseSpec
Purpose
Determines whether the default behavior for character string comparisons is to consider the
case of characters when the transaction semantics are set to Teradata session mode. Also
determines whether character column data is treated as case specific by default in Teradata
session mode.
Note: In ANSI session mode this setting has no effect. Character comparisons consider
character case, and character columns are treated as case specific by default in ANSI session
mode.
Field Group
General
Valid Settings
Setting    Description
TRUE       In Teradata session mode, character data is treated as case specific by default.
           Character string comparisons take into account character case differences.
FALSE      In Teradata session mode, character data is treated as not case specific by default.
           Character string comparisons do not take into account character case differences.
Default
FALSE
Changes Take Effect
After the next Teradata Database restart
Related Topics
For more information on the CASESPECIFIC character data type attribute, see SQL Data
Types and Literals.
DefragLowCylProd
Purpose
Determines the number of free cylinders below which cylinder defragmentation can begin.
The system dynamically keeps cylinders defragmented.
If the system has less than the specified number of free cylinders, defragmentation occurs on
cylinders with at least 25% free space, but not enough contiguous sectors to allocate a data
block.
Field Group
File System
Valid Range
0 through 100 free cylinders
Default
100 free cylinders
Changes Take Effect
After the DBS Control Record has been written or applied
Usage Notes
A cylinder becomes fragmented when its free sector list contains many groups of small
numbers of consecutive sectors. In this case, a large number of free sectors can exist, but not
enough consecutive sectors exist to allocate a new data block.
Defragmentation groups all the free sectors on an individual cylinder into a single block of
contiguous free sectors. This requires moving data blocks from the fragmented cylinder to
a new cylinder, where each of the blocks can be packed together, leaving a single, large block of
consecutive free sectors.
To keep costs to a minimum, defragmentation is enabled only when the free cylinder count
falls below the current value in the DefragLowCylProd field.
Setting this field to a value of 0 disables defragmentation. In this case, the lack of enough
adjacent sectors to allocate a data block causes a new cylinder to be allocated.
The disadvantage of this approach is that free cylinders are a critical resource, and when no
free cylinders remain, minicylpacks begin.
The operation is performed in the background.
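The conditions that trigger defragmentation can be summarized as follows (Python; a rough sketch only, with hypothetical function and argument names, not actual file system code).

def cylinder_eligible_for_defrag(free_cylinders, defrag_low_cyl_prod,
                                 cylinder_free_space_pct, has_contiguous_block):
    # A DefragLowCylProd value of 0 disables defragmentation entirely.
    if defrag_low_cyl_prod == 0:
        return False
    # Defragmentation is considered only when the free cylinder count falls
    # below the DefragLowCylProd value.
    if free_cylinders >= defrag_low_cyl_prod:
        return False
    # Candidate cylinders have at least 25% free space but no contiguous run
    # of sectors large enough to allocate a new data block.
    return cylinder_free_space_pct >= 25 and not has_contiguous_block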
Performance Implications
A low value in this field reduces the performance impact of defragmentation. However, setting
the value extremely low might cause cylinders to be consumed more quickly, which could
cause more minicylpacks to run.
Set DefragLowCylProd higher than MiniCylPackLowCylProd because defragmentation has a
smaller performance impact than a minicylpack.
DictionaryCacheSize
Purpose
Defines the size of the dictionary cache for each PE on the system.
Field Group
Performance
Valid Range
Operating System    Range (kilobytes)
Linux or Windows    64 through 16,384
MP-RAS              64 through 4,096
Default
3072 KB
Changes Take Effect
After the next Teradata Database restart
Performance Implications
The default value allows more caching of table header information and reduces the number of
I/Os required. It is especially effective for workloads that access many tables (more than 200)
and for those that generate many dictionary seeks.
Increase the size of the dictionary cache to allow the parser to cache additional data dictionary
and table header information.
For tactical and Online Complex Processing (OLCP) type workloads, maintaining a
consistently short, few-second response time is important. These workloads may benefit from
a larger dictionary cache, particularly when their query plans have not been cached in the
request cache. A larger dictionary cache will allow more dictionary detail, needed for parsing
and optimizing, to remain in memory for a longer period of time. For query workloads with a
response time of more than one minute, there may be no measurable difference when this
field is set to a higher value.
DisableUDTImplCastForSysFuncOp
Purpose
Disables/enables implicit cast/conversion of UDT expressions passed to built-in system
operators/functions.
Field Group
General
Valid Settings
Setting    Description
TRUE       Disable implicit conversions.
FALSE      Enable implicit conversions.
Default
FALSE
Usage Notes
The system implicitly converts a UDT expression to a compatible pre-defined type when there
is a CAST...AS ASSIGNMENT definition whose target is a compatible data type.
Note: This field only affects built-in system operators and functions. The system always
invokes implicit casts for INSERT, UPDATE, and parameter passing (UDF, UDM, Stored
Procedure) operations.
DisablePeekUsing
Purpose
Enables or disables the performance enhancements associated with exposed USING values in
parameterized queries.
Note: This setting should be changed only under the direction of Teradata Support Center
personnel.
Field Group
Performance
Valid Settings
Setting    Description
TRUE       Optimizer performance enhancements are disabled.
FALSE      Optimizer performance enhancements are enabled.
Default
FALSE
Changes Take Effect
As soon as the request cache is purged
Performance Implications
The Teradata Database Query Optimizer determines the most efficient way to execute an SQL
request in the Teradata parallel environment. It generates several possible plans of action,
which involve alternate methods of accessing and joining database tables to satisfy the request.
The Optimizer evaluates the relative costs of each plan in terms of resource usage and speed of
execution, then chooses the plan with the lowest cost. Plans that are sufficiently generic are
cached for fast reuse by the Optimizer in similar situations.
For some parameterized queries, the Optimizer can generate better plans by “peeking” at the
specific USING values (data parcels) in the queries. Because the plans are specific for the
USING values, they are not cached, which in rare cases may have an adverse effect on
performance. The DisablePeekUsing field allows you to disable this feature of the Optimizer if
you suspect it is a problem.
Related Topics
For more information on…                    See…
the request cache and query optimization    SQL Request and Transaction Processing
parameterized queries                       Performance Management
DisableSyncScan
Purpose
Disables or enables caching portions of large tables for the purpose of synchronized full table
scans. The amount of cache that can be used for synchronized scans is set using the
SyncScanCacheThr field.
Field Group
Performance
Valid Settings
Setting    Description
TRUE       Disables synchronized full table scanning of large tables.
FALSE      Enables synchronized full table scans.
Default
FALSE
Changes Take Effect
After the next Teradata Database restart
Usage Notes
The synchronized full table scan feature of Teradata Database allows several table scanning
tasks to simultaneously access the portion of a large table that is currently in the cache.
Synchronized table scans happen only when full table scans are accessing similar areas of a
large table. Synchronized table scans can improve database performance by reducing the
required disk I/O. Synchronized table scans are available only to large tables undergoing full
table scans.
Related Topics
See “SyncScanCacheThr” on page 434.
DisableWAL
Purpose
Forces data blocks and cylinder indexes to be written directly to disk rather than to the Write
Ahead Logging (WAL) log.
Field Group
File System
Valid Settings
Setting    Description
TRUE       Changed file system blocks are always written directly to disk. This includes
           writing changed cylinder index blocks to disk.
FALSE      Changes to the file system data are written to the WAL log and cached in memory,
           or are written to disk, depending on the specific operation.
Default
FALSE
Changes Take Effect
After the DBS Control Record has been written or applied
Usage Notes
Data blocks inserted by the MultiLoad utility are always written to disk regardless of the
DisableWAL field setting.
The DisableWAL field writes both data blocks and cylinder index changes to disk, whereas the
DisableWALforDBs field writes only the data blocks to disk.
Related Topics
For more information, see “DisableWALforDBs” on page 345.
DisableWALforDBs
Purpose
The DisableWALforDBs field is used to force data blocks to be written directly to disk rather
than to the Write Ahead Logging (WAL) log.
Field Group
File System
Valid Settings
Setting    Description
TRUE       Changed user table data blocks are always written directly to disk.
FALSE      Changes to user table data blocks are written to the WAL log and cached in
           memory, or are written to disk, depending on the specific operation.
Default
FALSE
Changes Take Effect
After the DBS Control Record has been written or applied
Usage Notes
The DisableWALforDBs field writes only the data blocks to disk, whereas the DisableWAL
field writes both data blocks and cylinder index changes to disk.
Related Topics
See “DisableWAL” on page 344.
EnableCostProfileTLE
Purpose
Enables/disables Optimizer Cost Estimation Subsystem (OCES) diagnostics for use in
combination with Target Level Emulation (TLE).
Note: This setting should be changed only under the direction of Teradata Support Center
personnel.
Field Group
General
Valid Settings
Setting    Description
TRUE       Enables OCES diagnostics to be used in combination with TLE. When set to TRUE,
           all TLE diagnostics are also processed by the OCES logic.
FALSE      Disables combined OCES/TLE diagnostics.
Note: If the value of the EnableSetCostProfile field is equal to 0, the EnableCostProfileTLE
field is set to FALSE. For additional information, see “EnableSetCostProfile” on page 348.
Default
FALSE
Changes Take Effect
For new sessions after the DBS Control Record has been written or applied
Example
When EnableCostProfileTLE is set to FALSE, the following syntax for the DIAGNOSTIC SET
COSTS statement is valid.
DIAGNOSTIC SET COSTS { target_system_name | TPA } [ NOT ] ON FOR { REQUEST | SESSION | IFP | SYSTEM } ;
When EnableCostProfileTLE is set to TRUE, the following syntax for the DIAGNOSTIC SET
COSTS statement is valid.
DIAGNOSTIC SET COSTS { target_system_name | TPA } [ ,PROFILE profile_name ] [ NOT ] ON FOR { REQUEST | SESSION | IFP | SYSTEM } ;
Related Topics
For more information on…            See…
DIAGNOSTIC SET COSTS statement      SQL Data Manipulation Language
using TLE and OCES diagnostics      SQL Request and Transaction Processing
EnableSetCostProfile
Purpose
Controls usage of DIAGNOSTIC SET PROFILE statements to dynamically change cost
profiles, which are used for query planning.
Note: This setting should be changed only under the direction of Teradata Support Center
personnel.
Field Group
General
Valid Settings
Setting    Description
0          Disable DIAGNOSTIC SET PROFILE statements.
1          Enable DIAGNOSTIC SET PROFILE statements for SESSION and REQUEST levels.
2          Enable DIAGNOSTIC SET PROFILE statements at all levels.
Default
0
Changes Take Effect
For new sessions after the DBS Control Record has been written or applied
Related Topics
For more information on…                        See…
using the DIAGNOSTIC SET PROFILE statement      SQL Data Manipulation Language
cost profiles                                   SQL Request and Transaction Processing
Export Width Table ID
Purpose
Allows you to set the system default for the export width of a character in bytes.
Field Group
General
Valid Settings
Setting                       Description
0 - Expected defaults         Provides reasonable default widths for the character data type
                              and client form of use.
1 - Compatibility defaults    Allows Unicode columns to work in a way compatible with
                              applications that were written to use Latin or Kanji1 columns.
2 - Maximum defaults          Provides maximum default width of the character data type and
                              client form of use.
Default
0 (Expected defaults)
Changes Take Effect
After the next Teradata Database restart
Expected Default Export Widths
The following table illustrates the number of bytes exported from the various server character
sets to the various client character sets for the Expected Default export width table (Export
Width Table ID = 0).
IF the client character set is…
any single-byte character set
AND the server
character set is…
THEN the number of bytes exported for
a CHARACTER(n) column is…
•
•
•
•
n
LATIN
UNICODE
KANJISJIS
KANJI1
Note: GRAPHIC is not supported.
IF the client character set is…
KANJIEUC_0U
KANJIEBCDIC5026_0I
KANJIEBCDIC5035_0I
KATAKANAEBCDIC
KANJISJIS_0S
KANJI932_1S0
UTF8
AND the server
character set is…
THEN the number of bytes exported for
a CHARACTER(n) column is…
• LATIN
• KANJISJIS
• KANJI1
n
• UNICODE
• GRAPHIC
2n
• LATIN
• KANJISJIS
• KANJI
n
UNICODE
2n+2
GRAPHIC
• 2n (record and indicator modes)
• 2n+2 (field mode)
• LATIN
• KANJISJIS
• KANJI
n
• UNICODE
• GRAPHIC
2n
LATIN
2n
• UNICODE
3n
• KANJISJIS
• KANJI
n
Note: GRAPHIC is not supported.
UTF16
•
•
•
•
•
LATIN
UNICODE
KANJISJIS
KANJI1
GRAPHIC
2n
IF the client character set is…
AND the server
character set is…
THEN the number of bytes exported for
a CHARACTER(n) column is…
•
•
•
•
•
•
•
•
•
•
HANGUL949_7R0
HANGULKSC5601_2R4
SCHGB2312_1T0
SCHINESE936_6R0
SDHANGULKSC5601_4R4
SDTCHBIG5_3R0
SDSCHGB2312_2T0
TCHBIG5_1R0
TCHINESE950_8R0
Extended site-defined
multibyte client character
sets that use one of the
following encoding forms:
• Single-byte characters in
the range 0x00-0x81,
first byte of double-byte
characters in the range
0x82-0xFF
•
• LATIN
• KANJISJIS
• KANJI1
n
• UNICODE
• GRAPHIC
2n
•
•
•
•
•
•
HANGULEBCDIC933_1II
SCHEBCDIC935_21J
SDHANGULEBCDIC933_5II
SDSCHEBCDIC935_6IJ
SDTCHEBCDIC937_7IB
TCHEBCDIC937_3IB
• LATIN
• KANJISJIS
• KANJI1
n
UNICODE
2n+2
GRAPHIC
• 2n (record and indicator modes)
• 2n+2 (field mode)
Compatibility Default Export Widths
The following table illustrates the number of bytes exported from the various server character
sets to the various client character sets for the Compatibility Default export width table
(Export Width Table ID = 1).
IF the client character set is…
any single-byte character set
AND the server
character set is…
THEN the number of bytes exported for
a CHARACTER(n) column is…
•
•
•
•
n
LATIN
UNICODE
KANJISJIS
KANJI1
Note: GRAPHIC is not supported.
IF the client character set is…
• KANJIEUC_0U
• KANJISJIS_0S
• KANJI932_1S0
KANJIEBCDIC5026_0I
KANJIEBCDIC5035_0I
KATAKANAEBCDIC
UTF8
AND the server
character set is…
THEN the number of bytes exported for
a CHARACTER(n) column is…
•
•
•
•
n
LATIN
UNICODE
KANJISJIS
KANJI
GRAPHIC
2n
•
•
•
•
n
LATIN
UNICODE
KANJISJIS
KANJI
GRAPHIC
• 2n (record and indicator modes)
• 2n+2 (field mode)
LATIN
2n
• UNICODE
3n
• KANJISJIS
• KANJI
n
Note: GRAPHIC is not supported.
UTF16
•
•
•
•
•
LATIN
UNICODE
KANJISJIS
KANJI1
GRAPHIC
2n
IF the client character set is…
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
HANGUL949_7R0
HANGULKSC5601_2R4
SCHGB2312_1T0
SCHINESE936_6R0
SDHANGULKSC5601_4R4
SDTCHBIG5_3R0
SDSCHGB2312_2T0
TCHBIG5_1R0
TCHINESE950_8R0
Extended site-defined
multibyte client character
sets that use one of the
following encoding forms:
• Single-byte characters in
the range 0x00-0x81,
first byte of double-byte
characters in the range
0x82-0xFF
HANGULEBCDIC933_1II
SCHEBCDIC935_21J
SDHANGULEBCDIC933_5II
SDSCHEBCDIC935_6IJ
SDTCHEBCDIC937_7IB
TCHEBCDIC937_3IB
AND the server
character set is…
THEN the number of bytes exported for
a CHARACTER(n) column is…
•
•
•
•
n
LATIN
UNICODE
KANJISJIS
KANJI1
• GRAPHIC
2n
•
•
•
•
n
LATIN
UNICODE
KANJISJIS
KANJI1
GRAPHIC
• 2n (record and indicator modes)
• 2n+2 (field mode)
Maximum Default Export Widths
The following table illustrates the number of bytes exported from the various server character
sets to the various client character sets for the Maximum Default export width table (Export
Width Table ID = 2).
IF the client character set is…
any single-byte character set
AND the server
character set is…
THEN the number of bytes exported for
a CHARACTER(n) column is…
•
•
•
•
n
LATIN
UNICODE
KANJISJIS
KANJI1
Note: GRAPHIC is not supported.
AND the server
character set is…
THEN the number of bytes exported for
a CHARACTER(n) column is…
• LATIN
• UNICODE
• GRAPHIC
3n
• KANJISJIS
• KANJI
n
n
KANJIEBCDIC5035_0I
• LATIN
• KANJI
KATAKANAEBCDIC
UNICODE
3n+1
KANJISJIS
2n
GRAPHIC
• 2n (record and indicator modes)
• 2n+2 (field mode)
• LATIN
• UNICODE
• GRAPHIC
2n
• KANJISJIS
• KANJI
n
• LATIN
• UNICODE
• KANJISJIS
3n
KANJI
n
IF the client character set is…
KANJIEUC_0U
KANJIEBCDIC5026_0I
KANJISJIS_0S
KANJI932_1S0
UTF8
Note: GRAPHIC is not supported.
UTF16
•
•
•
•
•
LATIN
UNICODE
KANJISJIS
KANJI1
GRAPHIC
2n
IF the client character set is…
AND the server
character set is…
THEN the number of bytes exported for
a CHARACTER(n) column is…
•
•
•
•
•
•
•
•
•
•
HANGUL949_7R0
HANGULKSC5601_2R4
SCHGB2312_1T0
SCHINESE936_6R0
SDHANGULKSC5601_4R4
SDTCHBIG5_3R0
SDSCHGB2312_2T0
TCHBIG5_1R0
TCHINESE950_8R0
Extended site-defined
multibyte client character
sets that use one of the
following encoding forms:
• Single-byte characters in
the range 0x00-0x81,
first byte of double-byte
characters in the range
0x82-0xFF
•
• LATIN
• UNICODE
• GRAPHIC
2n
• KANJISJIS
• KANJI1
n
•
•
•
•
•
•
HANGULEBCDIC933_1II
SCHEBCDIC935_21J
SDHANGULEBCDIC933_5II
SDSCHEBCDIC935_6IJ
SDTCHEBCDIC937_7IB
TCHEBCDIC937_3IB
• KANJISJIS
• KANJI1
n
UNICODE
3n+1
LATIN
2n
GRAPHIC
• 2n (record and indicator modes)
• 2n+2 (field mode)
ExternalAuthentication
Purpose
Controls whether Teradata Database users can be authenticated outside (external) of the
Teradata Database software authentication system. External authentication may eliminate the
need for your application to declare or store a password on your client system.
Field Group
General
Valid Settings
The following settings apply to the Microsoft Windows gateway only and to channel connects
to Microsoft Windows or MP-RAS-based servers.
Setting      Description
0 (ON)       External authentication sessions and traditional logons are accepted.
1 (OFF)      External authentication sessions are rejected; traditional logons are accepted.
2 (ONLY)     External authentication sessions are accepted; traditional logons are rejected.
Default
0 (On)
Changes Take Effect
After the DBS Control Record has been written or applied
Usage Notes
You should modify ExternalAuthentication before the Teradata Database or PDE starts.
To define whether your application can use external authentication mode, which allows a more
automatic logon and eliminates the need for your application to declare or store a password on
your client system, use the SET EXTAUTH command.
Related Topics
For more information on…                               See…
the SET EXTAUTH command                                Chapter 11: “Database Window (xdbw).”
configuring how the network allows or disallows        Chapter 15: “Gateway Control (gtwcontrol).”
traditional and new external authentication logons
Free Cylinder Cache Size
Purpose
Determines the maximum number of cylinders that can be in the File System cache of free
spool cylinders. Using such a cache reduces the number of times storage services must be
called to allocate new spool cylinders.
Field Group
File System
Valid Range
1 through 1,000
Default
100
FreeSpacePercent
Purpose
Specifies the amount of space on each cylinder that is to be left unused during the following
operations:
Type       Operations
Utility    • FastLoad and MultiLoad
           • Archive/Recovery RESTORE
           • Table Rebuild
           • Reconfiguration
           • Ferret PACKDISK
           • Minicylpack
             Minicylpack operations attempt to honor the FreeSpacePercent (FSP) setting, or
             the FSP value specified in CREATE TABLE and ALTER TABLE statements.
             However, if few cylinders are available, and storage space is limiting,
             minicylpack may not be able to honor that FSP.
SQL        • An ALTER TABLE that adds fallback protection.
           • A CREATE INDEX that defines or redefines any type of secondary index on a
             populated table.
           • Fallback creation during an INSERT...SELECT into an empty table that is defined
             with fallback protection.
           • Index creation during an INSERT...SELECT into an empty table that is defined
             with any type of secondary index.
The reserved free space allows table data to expand on current table cylinders, preventing or
delaying the need for additional table cylinders to be allocated, therefore preventing or
delaying data migration operations associated with new cylinder allocations. Keeping new
table data physically close to existing table data, and avoiding data migrations, can improve
overall system performance.
Field Group
File System
Valid Range
0 through 75%
Default
0%
Changes Take Effect
After the DBS Control Record has been written or applied, and during the next data load
operation. Any operations in progress when the setting is changed are not affected.
Note: After setting a non-zero value for the free space percentage, all subsequent operations
listed above will respect that setting, and will continue to reserve free space beyond what table
data requires. In order to have Teradata Database utilize the reserved free space for data
storage and avoid data migrations, the free space percentage must be reduced after the initial
data is loaded.
Usage Notes
Free space percent also can be specified for individual tables by using the FREESPACE option
to the CREATE TABLE and ALTER TABLE SQL statements. The free space percent specified
for individual tables takes precedence over the setting in the FreeSpacePercent field. Changes
to the FreeSpacePercent field will not affect these tables. To change the free space percent of
these tables, use ALTER TABLE.
Performance Implications
The reserved free space allows table data to expand on current table cylinders, preventing or
delaying the need for additional table cylinders to be allocated and associated data migration
operations. Keeping new table data physically close to existing table data and avoiding data
migrations can improve overall system performance.
After setting a non-zero value for the free space percentage, all subsequent operations listed
above will respect that setting, and will continue to reserve free space beyond what table data
requires. In order for Teradata Database to utilize the reserved free space for data storage and
avoid data migrations, the free space percentage must be reduced after the initial data is
loaded.
•
Evaluating a Higher Global FSP
Free space is allocated dynamically when a table expands as a result of UPDATE operations
or during INSERT operations. If some space is left free during the initial load, the need for
subsequent migrations is reduced. If most of your tables will grow extensively, use a higher
FSP value to achieve the best performance. However, if the percentage is too high,
additional cylinders might be required to store the data.
For example, if you specify an FSP of 25%, a table that would otherwise fit on 100
cylinders would require 134 cylinders. If you specify an FSP of 75%, the same table would
require 400 cylinders. (A short calculation sketch follows this list.)
Make sure that the requisite cylinders are available. If they are not, minicylinder packs
might run, which can result in some performance degradation.
•
Evaluating a Lower Global FSP
If little or no table expansion is expected, the lowest value (0%) should remain as the
global default.
Any non-zero value for FreeSpacePercent causes tables to occupy more cylinders than are
absolutely necessary for table data. The extra space cannot be reclaimed until one of the
following occurs:
•
The Ferret utility is used to initiate PACKDISK on a table.
•
Minicylinder packs are performed due to a lack of available cylinders.
•
Operations that do not honor the FreeSpacePercent setting are used to modify the
table.
Evaluating Global Versus Table-Level FSP
If many tables benefit from one FSP value and a few tables benefit from a different value,
consider setting the FSP at the table level for the exceptions.
Carefully consider the performance impact of long-term growth against short-term needs
before changing the global default value.
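The cylinder counts in the 25% and 75% examples above follow from reserving the FSP portion of every cylinder. A minimal sketch of that arithmetic (Python; an illustration only, with a hypothetical helper name):

import math

def cylinders_required(data_cylinders, fsp_percent):
    # Reserving fsp_percent of each cylinder leaves (100 - fsp_percent)% for data,
    # so roughly data_cylinders / (1 - FSP) cylinders are needed in total.
    return math.ceil(data_cylinders / (1 - fsp_percent / 100.0))

print(cylinders_required(100, 25))   # 134 cylinders, as in the example above
print(cylinders_required(100, 75))   # 400 cylinders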
Related Topics
For more information on the CREATE TABLE and ALTER TABLE statements, see SQL Data
Definition Language.
HashFuncDBC
Caution:
You must use the System Initializer utility to modify this field. Using the System Initializer
utility program destroys all user and dictionary data.
Purpose
Defines the hashing function that the Teradata Database uses.
Field Group
General
Valid Settings
Setting    Description
3          Kanji
4          International
5          Universal
Default
5 (Universal).
The Universal hash is the preferred hash. System Initializer automatically chooses the hash.
HTMemAlloc
Purpose
Specifies the percentage of cache memory to be allocated to a hash table that is used for a hash
join. Hash joins are used as optimizations to merge joins under specific conditions.
Field Group
Performance
Valid Range
0 through 10
A setting of zero prevents hash joins from being used as optimizations.
Default
System     Default
32-bit     2
64-bit     10
Note: Teradata recommends using the default values for HTMemAlloc.
Changes Take Effect
After the DBS Control Record has been written or applied.
Usage Notes
Hash joins can benefit performance when skew is low.
Larger values for HTMemAlloc allow hash join optimizations to be applied to larger tables.
The Optimizer uses the value of HTMemAlloc to determine the size of the hash table as
follows:
Hash Table Size = (HTMemAlloc/100) * memory set aside for hash joins
Where the memory set aside for hash joins depends on the operating system:
System     Maximum Hash Table Size    Memory Set Aside for Hash Joins
32-bit     1 MB                       10 MB
64-bit     2 MB                       20 MB
A request for more than the maximum hash table size will default to the maximum.
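A minimal sketch of this sizing calculation (Python; an illustration only, with a hypothetical function name; the constants come from the table above):

def hash_table_size_mb(htmemalloc_pct, system="64-bit"):
    # Memory set aside for hash joins and the maximum hash table size, per the table above.
    set_aside_mb, max_mb = {"32-bit": (10, 1), "64-bit": (20, 2)}[system]
    size_mb = (htmemalloc_pct / 100.0) * set_aside_mb
    # A request for more than the maximum hash table size defaults to the maximum.
    return min(size_mb, max_mb)

print(hash_table_size_mb(2, "32-bit"))    # 0.2 (MB), the default 32-bit setting
print(hash_table_size_mb(10, "64-bit"))   # 2.0 (MB), the default 64-bit setting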
HTMemAlloc works together with the SkewAllowance setting.
If your system is using large spool files, and the Optimizer is not using the hash join because of
the HTMemAlloc limit, increase HTMemAlloc and see if performance improves.
For 32-bit systems:
•
The only time that higher settings should be considered is when a system is always lightly
loaded. This means that very few, if any, concurrent operations are performed using the
system. In this case, you might want to increase HTMemAlloc to a value in the range of 3
through 5.
•
Values in the 6 through 10 range can improve hash join performance in some single-user
situations but should not be specified on production systems. Do not use these values
when more than one user is logged on. Most end users never have a need to use settings in
these ranges.
Related Topics
For more information on…                              See…
SkewAllowance field                                   “SkewAllowance” on page 428
hash table size calculations and possible values      Performance Management
IAMaxWorkloadCache
Purpose
Defines the maximum size of the Index Wizard workload cache when performing analysis
operations. This parameter is applicable to both the INITIATE INDEX ANALYSIS and
INITIATE PARTITION ANALYSIS statements.
Field Group
Performance
Valid Range
32 through 187 MB
Default
32 MB
Changes Take Effect
After the DBS Control Record has been written or applied
Related Topics
For more information on Index Wizard, see Teradata Index Wizard User Guide.
IdCol Batch Size
Purpose
Indicates the size of a pool of numbers reserved by a vproc for assigning identity values to
rows inserted into an identity column table.
Identity columns are used mainly to ensure row uniqueness by taking a system-generated
unique value. They are valuable for generating simple unique indexes and primary and
surrogate keys when composite indexes or keys are not desired. Identity columns are also
useful for ensuring column uniqueness when merging several tables or to avoid significant
preprocessing when loading and unloading tables. For more information on identity columns,
see SQL Data Definition Language.
Field Group
General
Valid Range
1 through 1,000,000
Default
100,000
Changes Take Effect
After the DBS Control Record has been written or applied.
Note: The IdCol Batch Size field settings survive system restarts.
Usage Notes
When the initial batch of rows for a bulk insert arrives on a PE/AMP vproc, the following
occurs:
1. A range of numbers is reserved before processing the rows.
2. Each PE/AMP retrieves the next available value for the identity column from the IdCol table.
3. Each PE/AMP immediately updates this value with an increment equal to the IdCol Batch Size setting.
Bulk loads using MultiLoad, FastLoad, and INSERT-SELECT have identity values assigned by
the AMPs. For these types of loads, base the IdCol Batch Size setting on the number of AMPs
in the system. Bulk loads using TPump and iterated inserts have identity values assigned by
the PEs. For these types of loads, base the setting on the number of PEs in the system.
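The reservation scheme described above can be pictured with the following toy model (Python; an illustration only; the class and the shared counter are hypothetical stand-ins for DBC.IdCol, not the actual implementation):

class IdColAllocator:
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.next_shared = 1     # stand-in for the next available value in DBC.IdCol
        self.pools = {}          # vproc -> (next value to hand out, end of reserved range)

    def next_identity(self, vproc):
        start, end = self.pools.get(vproc, (0, 0))
        if start >= end:                      # pool exhausted: reserve a new batch
            start = self.next_shared
            end = start + self.batch_size
            self.next_shared = end            # one shared update per batch, not per row
        self.pools[vproc] = (start + 1, end)
        return start

alloc = IdColAllocator(batch_size=100000)
print(alloc.next_identity("amp-1"))   # 1
print(alloc.next_identity("amp-2"))   # 100001
print(alloc.next_identity("amp-1"))   # 2

Numbers remaining in a vproc's pool are lost if the database restarts, which is why larger batch sizes can leave larger gaps in identity column values.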
Performance Implications
The IdCol Batch Size setting involves a trade-off between insert performance and potential
gaps in the numbering of rows inserted into tables that have identity columns.
A larger setting results in fewer updates to DBC.IdCol in reserving batches of numbers for a
load. This can improve the performance of bulk inserts into an identity column table.
However, because the reserved numbers are kept in memory, unused numbers will be lost if a
database restart occurs, resulting in a gap in the numbering of identity columns.
IVMaxWorkloadCache
Purpose
Defines the maximum size of the Index Wizard workload cache when performing validation
operations. This parameter is applicable to all SQL statements issued within a session when
DIAGNOSTIC “VALIDATE INDEX” has been enabled.
Field Group
Performance
Valid Range
1 through 32 MB
Default
1
Changes Take Effect
After the DBS Control Record has been written or applied
Related Topics
For more information on Index Wizard, see Teradata Index Wizard User Guide.
LargeDepotCylsPerPdisk
Purpose
Determines the number of depot cylinders the file system allocates per pdisk (storage device)
to contain large slots (512 KB). A large slot can hold several data blocks (DBs) during depot
operations.
The actual number of large-depot cylinders used per AMP is this value multiplied by the
number of pdisks per AMP.
Field Group
File System
Valid Range
0 through 10
Default
1 cylinder
Changes Take Effect
After the next Teradata Database restart
Usage Notes
The Depot is a set of transitional storage locations (a number of cylinders) used by the file
system for performing in-place writes of DBs or WAL DBs (WDBs). An in-place write means
that the changed DB is written back to exactly the same place on disk from which it was
originally read. In-place writes are only performed for modifications to DBs that do not
change the size of table rows, and therefore do not require any reallocation of space.
Writing the changed DB directly back to its original disk location could leave the data
vulnerable to various hardware and system problems that can occur during system resets, such
as disk controller malfunctions or power failures. If such a problem occurred during the
write operation, the data could be irretrievably lost.
The Depot protects against such data loss by allowing the file system to perform disk writes in
two stages. First the changed DB (or WDB) is written to the Depot. After the data has been
completely written to the Depot, it is written to its permanent location on the disk. If there is a
problem while the data is being written to the Depot, the original data is still safe in its
permanent disk location. If there is a problem while the data is being written to its permanent
location, the changed data is still safe in the Depot. During database startup, the Depot is
examined to determine if any of the DBs or WDBs should be rewritten from the Depot to their
permanent disk locations.
LockLogger
Purpose
Determines whether the Locking Logger feature of Teradata Database is enabled or disabled.
The Locking Logger logs delays caused by blocked database locks, and can help identify lock
conflicts.
Field Group
General
Valid Settings
Setting    Description
TRUE       Enable the Locking Logger.
FALSE      Disable the Locking Logger.
           Note: If LockLogger is set to FALSE, the LockLogger Delay Filter, LockLogger Delay
           Filter Time, and LockLogSegmentSize DBS Control fields are ignored.
Default
FALSE
Changes Take Effect
After the next Teradata Database restart
Usage Notes
Locking Logger runs as a background task, recording lock information in a table. Use the
Locking Logger (dumplocklog) utility to create or designate a table to be used for storing lock
log entries.
LockLogger is useful for troubleshooting problems such as determining whether locking
conflicts are causing high overhead.
Some values in the lock log table represent internal IDs for the object on which the lock was
requested. The lock log table defines the lock holder and the lock requester as transaction
session numbers. The lock log table can be joined with the DBC.DBase, DBC.TVM, and
DBC.EventLog tables to gain additional information about the object IDs and transaction
session numbers.
Related Topics
See also “LockLogger Delay Filter” on page 373, “LockLogger Delay Filter Time” on page 374,
and “LockLogSegmentSize” on page 376.
For more information on the Locking Logger utility, see Utilities Volume 2.
LockLogger Delay Filter
Purpose
Enables or disables log filtering of blocked lock requests based on delay time.
Field Group
General
Valid Settings
Setting    Description
TRUE       Enable filtering
FALSE      Disable filtering
Default
FALSE.
Note: If the LockLogger field is set to FALSE, then the settings for the LockLogger Delay Filter
field and the LockLogger Delay Filter Time field are ineffective.
Changes Take Effect
After the DBS Control Record has been written or applied.
Related Topics
See also “LockLogger” on page 371, “LockLogger Delay Filter Time” on page 374, and
“LockLogSegmentSize” on page 376.
LockLogger Delay Filter Time
Purpose
Specifies a threshold time delay value (in seconds). Lock requests that are blocked for less than
this amount of time are not logged. This prevents the log table from being filled with data for
insignificantly blocked locks.
Field Group
General
Valid Range
0 through 1,000,000 seconds
IF you set LockLogger Delay Filter Time to…    THEN blocked lock requests with a delay of…
0 seconds                                      0 seconds would not be logged.
10 seconds                                     10 seconds or less would not be logged.
1,000,000 seconds                              1,000,000 seconds or less would not be logged.
Default
0 seconds
Changes Take Effect
After the DBS Control Record has been written or applied.
Usage Notes
The LockLogger Delay Filter Time field has a dependency on the Lock Logger Delay field and
the LockLogger field.
IF the LockLogger field is…                              THEN the LockLogger Delay Filter Time field is…
FALSE, and the LockLogger Delay Filter field is TRUE     ineffective.
TRUE, and the LockLogger Delay Filter field is FALSE     ineffective.
TRUE, and the LockLogger Delay Filter field is TRUE      effective.
Related Topics
See also “LockLogger” on page 371,“LockLogger Delay Filter” on page 373, and
“LockLogSegmentSize” on page 376.
For more information on the Locking Logger utility, see Utilities Volume 2.
LockLogSegmentSize
Purpose
Specifies the size of the Locking Logger segment. This field allows you to control the size of the
buffer that is used to store lock information.
Field Group
General
Valid Range
64 through 1024 KB
Default
64 KB
Changes Take Effect
After the next Teradata Database restart
Usage Notes
If the LockLogger field is set to FALSE, then the setting for the LockLogSegmentSize field is
ineffective.
Related Topics
See also “LockLogger” on page 371, “LockLogger Delay Filter” on page 373, and “LockLogger
Delay Filter Time” on page 374.
For more information on the Locking Logger utility, see Utilities Volume 2.
MaxDecimal
Purpose
Defines the maximum number of decimal digits in the default maximum value used in
expression typing.
Field Group
General
Default
15 decimal digits
Changes Take Effect
After the next Teradata Database restart.
Note: After a SYSINIT that re-initializes DBS Control, MaxDecimal is set to zero.
Usage Notes
This MaxDecimal value…    Sets a default DECIMAL maximum size for expression evaluation to…
0                         15
15                        15
18                        18
38                        38
For more information on DECIMAL, see “Decimal/Numeric Data Types” in SQL Data Types
and Literals.
MaxDownRegions
Purpose
Teradata Database can isolate some file system errors to a specific data or index subtable, or to
a contiguous range of rows (“region”) in a data or index subtable. In these cases, Teradata
Database marks only the affected subtable or region down. This improves system performance
and availability by allowing transactions that do not require access to the down subtable or
rows to proceed, without causing a database crash that would require a system restart.
However, if several regions in a subtable are marked down, it could indicate a fundamental
problem with the subtable itself. Therefore, when a threshold number of down regions is
exceeded, the entire subtable is marked down on all AMPs, making it unavailable to most SQL
queries. This threshold can be adjusted by means of the DBS Control utility.
The MaxDownRegions field determines this threshold. If the number of down regions in a
subtable exceeds MaxDownRegions, the entire subtable will be marked down and become
inaccessible.
Field Group
General
Valid Range
0 through 12
When MaxDownRegions is set to 0, any fatal file system error in a data subtable causes the
subtable to be marked down on all AMPs.
Default
6
Changes Take Effect
After the next Teradata Database restart
Usage Notes
If a data or index subtable is marked down on all AMPs, it is highly recommended that the
associated table be rebuilt using the Table Rebuild utility after the source of the problem has
been corrected. Alternatively, the table can be dropped and restored from a backup copy. If the
problem is with an index subtable, the index can be dropped and recreated.
Tables that are marked down at the time they are rebuilt will remain down after the rebuild. To
clear the down status, use the ALTER TABLE ... RESET DOWN statement after the table has
been rebuilt. For more information on the Table Rebuild utility, see Utilities Volume 2.
MaxJoinTables
Purpose
Influences the maximum number of tables that can be joined per query block.
Field Group
Performance
Valid Values
0, and 64 through 128
Zero means MaxJoinTables will use the system default upper limit of 128.
Default
0
Usage Notes
MaxJoinTables sets a system-wide upper bound on the MaxJoinTables cost parameter used
during optimization. The value 0 is mapped internally to 128.
The actual maximum number of join tables in a query block is determined by both this field,
and by the MaxJoinTables configuration setting in the Type 2 Cost Profile.
If the RevertJoinPlanning DBS Control field is set to TRUE, or if the RevertJoinPlanning
configuration setting for the Type 2 Cost Profile is set to TRUE, the maximum join tables is 64,
regardless of the MaxJoinTables setting.
Related Topics
For more information on…                          See…
RevertJoinPlanning                                “RevertJoinPlanning” on page 420
join planning, optimization, and cost profiles    SQL Request and Transaction Processing
database limits                                   SQL Fundamentals
MaxLoadAWT
Purpose
Specifies the combined number of AMP Worker Tasks (AWTs) that can be used by FastLoad
and MultiLoad at any time. It allows more FastLoad, MultiLoad, and FastExport tasks (jobs)
to run concurrently, and sets a limit on AWT usage to prevent excessive consumption or
exhaustion of AWT resources.
This field also acts as a “switch” on the function of the MaxLoadTasks field:
•
When MaxLoadAWT is set to zero, the number of load utilities that can run concurrently is
controlled entirely by the MaxLoadTasks field. In this case, MaxLoadTasks specifies the
maximum number of combined FastLoad, MultiLoad, and FastExport jobs that can run
concurrently.
•
When MaxLoadAWT is set to an integer greater than zero, MaxLoadTasks applies only to
the combined number of FastLoad and MultiLoad jobs, which are also limited by the
MaxLoadAWT setting. In this case, the number of FastExport jobs that can run is always
60 minus the number of combined FastLoad and MultiLoad jobs currently running.
Note: Throttle rules for load utility concurrency set by Teradata Dynamic Workload Manager
override the MaxLoadAWT setting, unless MaxLoadAWT is set to the maximum. For more
information, see Teradata Dynamic Workload Manager User Guide.
Field Group
General
Valid Settings
If MaxLoadAWT is set to a non-zero value, it should be a value greater than or equal to five,
which allows at least one FastLoad and one MultiLoad job to run concurrently.
The maximum allowable value is 60% of the total AWTs per AMP. By default, the maximum
number of AWTs started for each AMP vproc is 80, so the default maximum value for
MaxLoadAWT is 48.
Default
0
Changes Take Effect
After the DBS Control Record has been written. A system restart is not required.
Usage Notes
Consider using MultiLoad, rather than FastLoad, especially in cases of many small load jobs.
MultiLoad generally consumes fewer AWTs per job than FastLoad.
The MaxLoadAWT field works together with the MaxLoadTasks field to limit the number of
concurrent load utilities allowed to run:
•
•
If MaxLoadAWT is zero (the default):
•
MaxLoadTasks can be an integer from zero through 15.
•
The MaxLoadTasks field specifies the maximum number of combined FastLoad,
MultiLoad, and FastExport jobs that can run concurrently.
•
The system does not consider the number of available AWTs when limiting the number
of load utilities that can run concurrently.
If MaxLoadAWT is greater than zero:
•
MaxLoadTasks can be an integer from zero through 30.
•
The MaxLoadTasks field sets the maximum number of combined FastLoad and
MultiLoad jobs that can run concurrently. MaxLoadTasks does not directly limit the
number of FastExport jobs that can run.
•
The number of combined FastLoad and MultiLoad jobs that can run concurrently is
limited by the values of both the MaxLoadTasks field and the MaxLoadAWT field.
When either limit is met, no further FastLoad or MultiLoad jobs are allowed to start
until the limiting factor is reduced. See “About AWTs” on page 382.
•
The maximum number of load utility jobs of any type—FastLoad, MultiLoad, or
FastExport—that can run concurrently is 60. Consequently, the number of FastExport
jobs allowed to run at any time is 60 minus the number of combined FastLoad and
MultiLoad jobs that are running.
For example, if the sum of currently running FastLoad and MultiLoad jobs is 29, the
number of FastExport jobs that can be started is 31 (60 minus 29), regardless of the
MaxLoadAWT and MaxLoadTasks settings.
•
If MaxLoadAWT is set to anything greater than zero, it can only be reset to zero if
MaxLoadTasks is 15 or less.
Because load utilities share system resources with other system work, such as tactical and DSS
queries, limiting the number of load utility jobs can help ensure sufficient system resources
are available for other work.
About AWTs
AWTs are processes (threads on some platforms) dedicated to servicing the Teradata Database
work requests. A fixed number of AWTs are pre-allocated during Teradata Database system
initialization for each AMP vproc. Each AWT looks for a work request to arrive in the Teradata
Database system, services the request, and then looks for another. An AWT can process
requests of any work type.
The number of AWTs required by FastLoad and MultiLoad changes as their jobs run. More
AWTs are required in the early phases of the jobs than in the later phases. Teradata Database
dynamically calculates the total AWTs required by active jobs, and allows more jobs to start as
AWTs become available. If MaxLoadAWT is greater than zero, new FastLoad and MultiLoad
jobs are rejected when the MaxLoadAWT limit is reached, regardless of the MaxLoadTasks
setting. Therefore, FastLoad and MultiLoad jobs may be rejected before the MaxLoadTasks limit is
reached.
Example
FastLoad and MultiLoad require different numbers of AWTs at different phases of execution.
The following table shows how many AWTs are used at each phase.
Load Utility and Phase      Number of AWTs Required
FastLoad: Loading           3
FastLoad: End Loading       1
MultiLoad: Acquisition      2
MultiLoad: Application      1 per target table
Assume that MaxLoadAWT = 48 and MaxLoadTasks = 30. The list below shows some
permitted combinations of load utility jobs. The limiting condition(s) for each combination is
shown in bold:
•
16 FastLoads in Loading phase
16 concurrent load tasks
48 AWTs in use: (16 x 3)
•
9 FastLoads in Loading phase and 21 FastLoads in End Loading phase
30 concurrent load tasks
48 AWTs in use: (9 x 3) + (21 x 1)
•
24 MultiLoads in Acquisition phase
24 concurrent load tasks
48 AWTs in use: 24 x 2
•
5 MultiLoads in Acquisition phase and 25 MultiLoads in Application phase
30 concurrent load tasks
35 AWTs in use: (5 X 2) + (25 x 1)
•
6 FastLoads in Loading phase and 15 MultiLoads in Acquisition phase
21 concurrent load tasks
48 AWTs in use: (6 x 3) + (15 x 2)
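A minimal sketch of this accounting (Python; an illustration only; the per-phase AWT counts come from the table above and the helper function is hypothetical):

AWT_PER_JOB = {
    "FastLoad: Loading": 3,
    "FastLoad: End Loading": 1,
    "MultiLoad: Acquisition": 2,
    "MultiLoad: Application": 1,   # per target table
}

def load_jobs_allowed(job_counts, max_load_awt=48, max_load_tasks=30):
    # job_counts maps a phase name from AWT_PER_JOB to a number of jobs.
    tasks = sum(job_counts.values())
    awts = sum(AWT_PER_JOB[phase] * n for phase, n in job_counts.items())
    return tasks <= max_load_tasks and awts <= max_load_awt, tasks, awts

# First combination above: 16 FastLoads in the Loading phase.
print(load_jobs_allowed({"FastLoad: Loading": 16}))   # (True, 16, 48)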
MaxLoadTasks
Purpose
Specifies the combined number of FastLoad, MultiLoad, and FastExport tasks (jobs) that are
allowed to run concurrently on Teradata Database.
Note: This field is ignored if the throttles category of Teradata Dynamic Workload Manager is
enabled. For more information, see Teradata Dynamic Workload Manager User Guide.
Field Group
General
Default
5 tasks
Changes Take Effect
After the DBS Control Record has been written. A system restart is not required.
Usage Notes
If MaxLoadTasks is set to 0, no load utilities can be started.
The MaxLoadTasks field works together with the MaxLoadAWT field to limit the number of
concurrent load utilities allowed to run:
•
•
If MaxLoadAWT is zero (the default):
•
MaxLoadTasks can be an integer from zero through 15.
•
The MaxLoadTasks field specifies the maximum number of combined FastLoad,
MultiLoad, and FastExport jobs that can run concurrently.
•
The system does not consider the number of available AWTs when limiting the number
of load utilities that can run concurrently.
If MaxLoadAWT is greater than zero:
•
MaxLoadTasks is an integer from zero through 30.
•
The MaxLoadTasks field sets the maximum number of combined FastLoad and
MultiLoad jobs that can run concurrently. MaxLoadTasks does not directly limit the
number of FastExport jobs that can run.
•
The number of combined FastLoad and MultiLoad jobs that can run concurrently is
limited by the values of both the MaxLoadTasks field and the MaxLoadAWT field.
When either limit is met, no further FastLoad or MultiLoad jobs are allowed to start
until the limiting factor is reduced. See “About AWTs” on page 382.
•
The maximum number of load utility jobs of any type—FastLoad, MultiLoad, or
FastExport—that can run concurrently is 60. Consequently, the number of FastExport
jobs allowed to run at any time is 60 minus the number of combined FastLoad and
MultiLoad jobs that are running.
For example, if the sum of currently running FastLoad and MultiLoad jobs is 29, the
number of FastExport jobs that can be started is 31 (60 minus 29), regardless of the
MaxLoadAWT and MaxLoadTasks settings.
•
If MaxLoadAWT is set to anything greater than zero, it can only be reset to zero if
MaxLoadTasks is 15 or less.
About AWTs
AMP Worker Tasks (AWTs) are processes (threads on some platforms) dedicated to servicing
the Teradata Database work requests. A fixed number of AWTs are pre-allocated during
Teradata Database system initialization for each AMP vproc. Each AWT looks for a work
request to arrive in the Teradata Database system, services the request, and then looks for
another. An AWT can process requests of any work type.
The number of AWTs required by FastLoad and MultiLoad changes as their jobs run. More
AWTs are required in the early phases of the jobs than in the later phases. Teradata Database
dynamically calculates the total AWTs required by active jobs, and allows more jobs to start as
AWTs become available. If MaxLoadAWT is greater than zero, new FastLoad and MultiLoad
jobs are rejected when the MaxLoadAWT limit is reached, regardless of the MaxLoadTasks
setting. Therefore, FastLoad and MultiLoad jobs may be rejected before MaxLoadTasks limit is
reached.
Example
FastLoad and MultiLoad require different numbers of AWTs at different phases of execution.
The following table shows how many AWTs are used at each phase.
Load Utility and Phase      Number of AWTs Required
FastLoad: Loading           3
FastLoad: End Loading       1
MultiLoad: Acquisition      2
MultiLoad: Application      1 per target table
Assume that MaxLoadAWT = 48 and MaxLoadTasks = 30. The list below shows some
permitted combinations of load utility jobs. The limiting condition for each combination is
shown in bold:
•
16 FastLoads in Loading phase
16 concurrent load tasks
48 AWTs in use: (16 x 3)
•
9 FastLoads in Loading phase and 21 FastLoads in End Loading phase
30 concurrent load tasks
48 AWTs in use: (9 x 3) + (21 x 1)
•
24 MultiLoads in Acquisition phase
24 concurrent load tasks
48 AWTs in use: 24 x 2
•
5 MultiLoads in Acquisition phase and 25 MultiLoads in Application phase
30 concurrent load tasks
35 AWTs in use: (5 X 2) + (25 x 1)
•
6 FastLoads in Loading phase and 15 MultiLoads in Acquisition phase
21 concurrent load tasks
48 AWTs in use: (6 x 3) + (15 x 2)
MaxParseTreeSegs
Purpose
Defines the maximum number of 64 KB tree segments that the parser allocates while parsing a
request.
Field Group
Performance
Valid Range

System    Valid Range
32-bit    12 through 6,000
64-bit    12 through 12,000

Default

System    Default
32-bit    1,000
64-bit    2,000
Changes Take Effect
For new sessions after the DBS Control Record has been written or applied. Any sessions in
progress are not affected.
Usage Notes
This field value should be increased if large, complex queries generate insufficient memory
errors (error numbers 3710 and 3711). The value can also be increased to provide additional
memory, if needed, for parser activities.
MaxRequestsSaved
Purpose
Specifies the maximum number of request-to-step cache entries that can be saved per PE.
Field Group
Performance
Valid Range
300 through 2,000.
The value must be a multiple of 10.
Default
600
Changes Take Effect
After the next Teradata Database restart
MaxRowHashBlocksPercent
Purpose
Specifies the proportion of available locks a transaction can use for rowhash locks before the
transaction is automatically aborted. This setting protects Teradata Database against
misbehaving database transactions.
In order to run, every query must acquire a lock on the objects upon which it will act. There
are different types of locks: rowhash, rowrange, table, and database. A special table is used to
keep track of the locks that have been applied. This lock table is maintained in memory for
each AMP.
If a transaction performs a large number of single-row updates without closing the
transaction, the lock table can fill up with rowhash locks on one or more AMPs. This prevents
other transactions from running, and no further work can be accomplished until the
problematic transaction is aborted to make space in the lock tables. This generally requires a
database restart, unless the system returns an error, in which case the locks held by the query
will be freed.
The Lock Manager Fault Isolation feature of Teradata Database monitors the number of locks
each transaction uses. If the proportion of rowhash locks used reaches the threshold defined
by MaxRowHashBlocksPercent, the transaction is automatically aborted. Other transactions
are not affected, and the system need not be restarted.
Field Group
General
Default
50%, which means that a transaction will be stopped if its rowhash locks require more than
50% of the maximum space allowed in the lock table on any AMP.
To allow transactions to acquire more locks, set MaxRowHashBlocksPercent to a greater value.
MaxSyncWALWrites
Purpose
Determines the maximum number of outstanding WAL log writes to allow before tasks
requiring synchronous writes are delayed to achieve better buffering.
Field Group
File System
Valid Range
1 through 40
Default
5
Changes Take Effect
After the next Teradata Database restart
Usage Notes
If the number of outstanding WAL log writes is less than or equal to the MaxSyncWALWrites
value, requests for synchronous operations will append the record to the current buffer, force
a new buffer, and issue a synchronous write on the current buffer.
If the number of outstanding writes is greater than the MaxSyncWALWrites value, the record
will be appended, but the write will be delayed until one of the outstanding write requests is
completed.
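The decision described in the Usage Notes can be summarized as a sketch (illustrative only; these names do not correspond to internal Teradata routines):

    # Append a record that requires a synchronous WAL write.
    def handle_sync_append(record, current_buffer, outstanding_writes, pending,
                           max_sync_wal_writes=5):
        current_buffer.append(record)
        if outstanding_writes <= max_sync_wal_writes:
            # Force a new buffer and issue a synchronous write on the current one.
            pending.append(current_buffer)   # stands in for issuing the write
            return []                        # new, empty current buffer
        # Too many outstanding writes: the record is appended, but the write is
        # delayed until one of the outstanding writes completes.
        return current_buffer

    buf, pending = [], []
    buf = handle_sync_append("rec1", buf, outstanding_writes=3, pending=pending)
    buf = handle_sync_append("rec2", buf, outstanding_writes=9, pending=pending)
    print(len(pending), buf)   # 1 ['rec2'] -- first write issued, second delayed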
MDS Is Enabled
Purpose
Controls whether the rsgmain program is started in the RSG vproc when Teradata Database
starts.
Teradata Meta Data Services (MDS) tracks changes to the Teradata Data Dictionary, and can
be used to store business metadata. Automatic Database Information Metamodel (DIM)
Update is an optional feature that tracks Data Dictionary changes for multiple Teradata
systems, and automatically updates the MDS repository database. The MDS MetaManager
utility can be used to enable DIM Update for a Teradata system.
In order for DIM Update to work, the MDS DDL Gateway and Action Processor services must
be automatically started at system boot time, or manually started later, and the rsgmain
program must be started in the Relay Services Gateway (RSG) vproc. The MDS Is Enabled
field controls whether the rsgmain program is started automatically in the RSG vproc when
Teradata starts.
When the MDS Automatic DIM Update feature is enabled, Data Definition Language
statements and other transactional messages are sent to the RSG vproc, which relays them to
MDS. If the MDS services are stopped, or if the MetaManager utility is used to disable DIM
Update, Teradata Database logs the names of databases affected by these transactions to the
MDS recovery table until the MDS services are started, or until the MetaManager utility is
used to enable DIM Update. The recovery table tells MDS what databases were affected while
MDS was down.
Field Group
General
Valid Settings
Setting
Description
TRUE
Use the MDS Automatic DIM Update feature after MDS has been installed. The
rsgmain program will be started in the RSG vproc the next time Teradata Database
is restarted.
FALSE
Use this setting if MDS is not being used, or will not be used for an extended period
of time, to prevent the MDS recovery table from growing very large.
Default
FALSE
Changes Take Effect
After the next Teradata Database restart
Related Topics
See the Meta Data Services documentation, which is available from http://www.info.teradata.com.
Memory Limit Per Transaction
Purpose
Specifies the maximum amount of in-memory, temporary storage that the Relay Services
Gateway (RSG) can use to store the records for one transaction. If the transaction exhausts
this amount, the transaction data moves to a disk (or spill) file. This limit prevents a few large transactions from swamping the memory pool.
Field Group
General
Valid Range
0 through 127 pages
Default
2 pages
Changes Take Effect
After the next Teradata Database restart
MiniCylPackLowCylProd
Purpose
Determines the number of free cylinders below which the File System will perform a
minicylpack operation. Minicylpack attempts to free additional cylinders by packing cylinders
that are currently in use.
Field Group
File System
Valid Range
0 through 65,535 free cylinders
Default
10 free cylinders
Changes Take Effect
After the DBS Control Record has been written or applied
Usage Notes
The minicylinder pack (minicylpack) operates in the background as follows:
1
Minicylpack scans the Master Index, a memory-resident structure with one entry per
cylinder, looking for a number of logically adjacent cylinders with a lot of free space.
2
When minicylpack finds the best candidate cylinder, it packs these logically adjacent
cylinders to use one less cylinder than is currently being used. For example, minicylpack
packs four cylinders that are each 75% full into three cylinders that are 100% full.
3
The process repeats on pairs of cylinders until minicylpack successfully moves all the data
blocks on a cylinder, resulting in a free cylinder. The process continues until either:
•
No additional cylinders can be freed.
•
The number of free cylinders reaches the value in MiniCylPackLowCylProd.
By running in the background and starting at a threshold value, a minicylpack minimizes the
impact on response time for a transaction requiring a new cylinder. Over time, minicylpack
may not be able to keep up with demand, due to insufficient free CPU and I/O bandwidth, or
to the increasing cost of freeing up cylinders as the demand for free cylinders continues.
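As a rough illustration of this behavior (not the actual file system algorithm), the following sketch packs partially full cylinders into fewer cylinders until the free-cylinder target is met or nothing more can be freed:

    # Rough illustration only: pack data from partially full cylinders into fewer
    # cylinders until the number of free cylinders reaches MiniCylPackLowCylProd.
    def minicylpack(cylinder_fill, low_cyl_prod=10, free_cylinders=0):
        """cylinder_fill: fill fraction (0.0 - 1.0) of each in-use cylinder."""
        cyls = sorted(cylinder_fill)
        while free_cylinders < low_cyl_prod and len(cyls) > 1:
            total = sum(cyls)
            if total > len(cyls) - 1:      # data no longer fits in one fewer cylinder
                break
            # Repack the same data into one fewer cylinder, freeing one cylinder.
            cyls = [total / (len(cyls) - 1)] * (len(cyls) - 1)
            free_cylinders += 1
        return free_cylinders

    # Four cylinders that are each 75% full pack into three full cylinders, freeing one:
    print(minicylpack([0.75, 0.75, 0.75, 0.75], low_cyl_prod=1))   # 1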
IF you set this field to…
THEN…
a nonzero value
minicylpacks run in the background. When running in this mode, each
minicylpack scans and packs a maximum of 20 cylinders.
If minicylpack cannot free a cylinder, further minicylpacks do not run until
another cylinder allocation request notices that the number of free cylinders
has fallen below MiniCylPackLowCylProd.
Setting this field to a low value reduces the impact of minicylpacks on
performance. However, there is a risk that free cylinders will not be available
for tasks that require them. This would cause minicylpacks to run while
tasks are waiting, seriously impacting the response time of such tasks.
0
minicylpacks do not run until a task needs a cylinder and none are available.
In these cases, the requesting task is forced to wait until the minicylpack is
complete. When a minicylpack runs while a task is waiting, the number of
cylinders that minicylpack can scan is unlimited. If necessary, minicylpack
scans the entire disk in an attempt to free a cylinder.
Recommendation
Set this value to no more than 20 free cylinders.
MonSesCPUNormalization
Purpose
Controls whether normalized or non-normalized statistical CPU data is reported in the
responses to workload management API calls. API calls that return CPU data include
MONITOR SESSION (PM/API), MonitorSession (open API), and MonitorMySessions (open
API).
“Coexistent” Teradata Database systems combine different types of node hardware that might
use different types of CPUs running at different speeds. CPU normalization adjusts for these
differences when calculating statistics across the system. Although normalized CPU data is
used in other areas of the system, such as DBQL and AMPusage, the
MonSesCPUNormalization field affects only the CPU data reported by PM/API and open API
calls.
Field Group
General
Valid Settings
Setting
Effect
TRUE
CPU statistical data is normalized.
If Teradata DWM workloads are enabled, asynchronous and synchronous exception
detection on CPU thresholds are performed using normalized CPU values.
FALSE
CPU statistical data is not normalized.
If Teradata DWM workloads are enabled, asynchronous and synchronous exception
detection on CPU thresholds are performed using non-normalized CPU values.
Default
FALSE
Changes Take Effect
When DBS Control changes are written to the DBS Control GDO
Changing the Setting
1
Disable Teradata Dynamic Workload Manager (DWM) workloads (Category 3 rules), if
they are enabled. Teradata DWM workloads are affected by MonSesCPUNormalization
because they use the MONITOR SESSION PM/API call to obtain CPU information.
2
Adjust any CPU-related exception thresholds associated with the workloads to account for
the change in CPU data normalization.
3
Disable session monitoring either from Database Window, using the SET SESSION
COLLECTION command, or from Teradata Manager.
4
Change the MonSesCPUNormalization field in DBS Control.
5
Re-enable session monitoring.
6
Re-enable workload rules in Teradata DWM.
The AMPCPUSec field in the response to workload management API calls contains
accumulated CPU seconds for all requests in the session. If the value of
MonSesCPUNormalization changes in the middle of a session, AMPCPUSec will no longer be
valid during the current session, and will return -1 in record mode, or NULL in indicator
mode. All other CPU fields are valid, even if MonSesCPUNormalization changes in the
middle of a session.
Related Topics
For more information about Teradata DWM and Teradata Database APIs, see
•
Teradata Dynamic Workload Manager User Guide
•
Workload Management API: PM/API and Open API
•
Performance Management
MPS_IncludePEOnlyNodes
Purpose
Excludes PE-only (AMP-less) nodes from MONITOR PHYSICAL SUMMARY Workload
Management API statistics calculations.
Field Group
General
Valid Settings
Setting
Effect
FALSE
PE-only nodes are excluded from calculations.
TRUE
PE-only nodes are included in calculations.
Default
FALSE
Changes Take Effect
When DBS Control changes are written to the DBS Control GDO
Related Topics
MONITOR PHYSICAL SUMMARY collects statistical system information which includes
average, high, and low CPU and disk usage per node. It can be used to determine node level
system skew.
Some sites use PE-only nodes to aid in balancing the AMP workload. Because PE-only nodes
are likely to experience substantially lower CPU and disk usage than nodes running AMPs,
they can cause the node statistics to appear as if there is a data skew condition, when no such
condition exists. In these cases, excluding PE-only nodes provides statistics that more
accurately represent the true system workload conditions.
For more information about Teradata Database APIs, see Workload Management API: PM/API
and Open API.
NewHashBucketSize
Purpose
Specifies the number of bits that will be used by the system to identify hash buckets after the
next system initialization or reconfiguration. This setting determines how many hash buckets
the system can create. A setting of 16 bits gives Teradata Database 65,536 hash buckets; a
setting of 20 bits gives Teradata Database 1,048,576 hash buckets.
One goal of the Teradata Database parallel system is to distribute work evenly among the
system resources (nodes, virtual processes, and storage). The number of hash buckets an AMP
vproc uses is directly related to the amount of work an AMP must do. AMPs with more hash
buckets manage more data, and therefore do more work than those with fewer hash buckets.
On many systems, the number of AMPs is not evenly divisible into the number of available
hash buckets. Consequently, some AMPs have one more hash bucket than other AMPs. If the
number of hash buckets per AMP is relatively high, the imbalance is proportionately low, and
the difference in the amount of work the AMPs must do is relatively small.
However, as the number of AMPs on the system increases, the number of hash buckets available to each AMP decreases. With fewer hash buckets per AMP, the effect of any imbalance in the
number of hash buckets per AMP becomes proportionately greater. This results in the system
operating less efficiently.
For example, an AMP using 656 hash buckets must do 1/656 or 0.15% more work than an
AMP using 655 hash buckets, but an AMP using only 66 hash buckets must do 1/66 or 1.52%
more work than an AMP using 65 hash buckets.
Making more hash buckets available to the system and to each AMP reduces the effects of the
imbalance when some AMPs have one more hash bucket than others.
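The effect of the imbalance can be computed directly. The sketch below is illustrative only; the AMP count is a hypothetical value, not a recommendation:

    # Extra work for an AMP that holds one more hash bucket than the others,
    # for the 16-bit (65,536 buckets) and 20-bit (1,048,576 buckets) settings,
    # following the 1/656 convention used in the example above.
    def imbalance_pct(amps, bucket_bits):
        per_amp = (2 ** bucket_bits) // amps      # every AMP holds at least this many
        return 100.0 / (per_amp + 1) if (2 ** bucket_bits) % amps else 0.0

    print(round(imbalance_pct(100, 16), 2))   # 655 buckets per AMP -> 1/656, about 0.15%
    print(round(imbalance_pct(100, 20), 2))   # 10,485 buckets per AMP -> about 0.01%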
Field Group
General
Valid Settings
16 and 20 bits
Default
Default     Description
20 bits     New installations of Teradata Database.
16 bits     Systems upgraded from earlier versions of Teradata Database.
Changes Take Effect
After the next sysinit or reconfig
Related Topics
See “CurHashBucketSize” on page 321.
ObjectUseCountCollectRate
Purpose
Specifies the amount of time (in minutes) between collections of object use-count data. The
system updates the AccessCount and LastAccessTimeStamp columns of the Data Dictionary
with this data.
This field allows you to find the use count and last access timestamps of any of the following
database objects:
•
Columns
•
Databases
•
Indexes
•
Macros
•
Stored Procedures
•
Tables
•
Triggers
•
UDFs
•
Users
•
Views
Note: Object use-count data is not recorded for EXPLAIN, INSERT EXPLAIN, or DUMP
EXPLAIN.
Field Group
General
Valid Settings
IF you set this field to…        THEN…
a negative value                 an error message is displayed.
0                                object use-count data collection is disabled.
an integer between 1 and 32767   object use-count data collection is enabled, and the Data
                                 Dictionary fields AccessCount and LastAccessTimeStamp are
                                 updated based on that set value.
                                 Note: If you specify a decimal value, DBS Control ignores the
                                 fractional part and uses only the integer part. For example, if
                                 you specify 12.34, DBS Control uses 12 for the field value and
                                 ignores .34.
                                 Warning: The recommended minimum value is 10 minutes. Any rate
                                 below 10 minutes severely impacts performance.
a value higher than 32767        DBS Control displays a warning message.
Default
0 minutes, (disabled)
Changes Take Effect
After the DBS Control Record has been written or applied
Usage Notes
If this feature is enabled and you are performing an Archive/Restore operation for Data
Dictionary tables, the system may disable ObjectUseCountCollectRate if it experiences any
contention or locking. If this occurs, re-enable the feature after the Archive/Restore operation
is complete.
Related Topics
For more information on…
See…
the AccessCount and LastAccessTimeStamp
Data Dictionary columns
Data Dictionary.
resetting the AccessCount and
LastAccessTimeStamp Data Dictionary columns
Database Administration.
PermDBAllocUnit
Purpose
Specifies the size of the storage allocation unit used for data blocks in permanent tables.
Field Group
File System
Valid Range
1 to 255 sectors
A sector is 512 bytes.
Default
1 sector
Changes Take Effect
After the DBS Control Record has been written or applied
Recommendation
As tables are modified, rows are added, deleted, and changed. Data blocks grow and shrink
dynamically to accommodate their current contents. However, data block sizes can change
only in units of PermDBAllocUnit. This means there will nearly always be some unused space
left at the end of the data block. If table modifications are relatively even, such incremental
changes in data block size result in an average of approximately half an allocation unit of space
wasted for every data block. (This is a rough approximation, and will depend on many factors
that differ from database to database.)
In environments where new rows are added frequently to tables, or where tables with variable
length rows are frequently growing, system performance might be improved slightly by
increasing the allocation unit size. With a larger allocation unit, data blocks will not need to be
enlarged as frequently, because there will already be room for additional changes. However, in
environments where new rows are not added frequently, the additional space in each block
can degrade performance by increasing the average I/O size.
Make only small changes to this setting at a time, and carefully evaluate the results before
committing the change on a production system. Set the allocation unit to a multiple of the
average row size of tables that change frequently, rounded up to the nearest sector.
Because the benefit of larger allocation units is often offset by the consequent increase in
average wasted space, Teradata recommends that PermDBAllocUnit be left at the default
setting.
Maximum Multirow Data Block Size
The PermDBAllocUnit and PermDBSize fields together determine the maximum size of
multirow data blocks. Because data blocks can grow only in steps (allocation units) defined by
PermDBAllocUnit, the size of a data block at any time will always be an integer multiple of
PermDBAllocUnit, regardless of the PermDBSize setting.
Consequently, if the PermDBAllocUnit setting is not an integer factor of 255 sectors
(127.5 KB, the largest possible data block size), then the largest multirow data blocks will be
smaller than 255 sectors. For example, if PermDBAllocUnit is set to 4 sectors, even if
PermDBSize is set to 255, the largest multirow data block can be only 252 sectors (the greatest
multiple of 4 that is less than or equal to 255). Similarly, if PermDBAllocUnit is set to 16, the
largest multirow data block can be only 240 sectors.
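The relationship described above can be computed directly (illustrative only):

    # Largest multirow data block: the greatest multiple of PermDBAllocUnit that
    # does not exceed PermDBSize (and never more than 255 sectors).
    def max_multirow_block_sectors(perm_db_alloc_unit, perm_db_size=255):
        limit = min(perm_db_size, 255)
        return (limit // perm_db_alloc_unit) * perm_db_alloc_unit

    print(max_multirow_block_sectors(4))    # 252 sectors, as in the example above
    print(max_multirow_block_sectors(16))   # 240 sectors
    print(max_multirow_block_sectors(1))    # 255 sectors (the default allocation unit)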
PermDBSize
Purpose
Specifies the maximum size for multirow data blocks in permanent tables. Rows that are larger
than PermDBSize are stored in single-row data blocks, which are not limited by PermDBSize.
Field Group
File System
Valid Range
14 through 255 sectors
A sector is 512 bytes.
Default
127 sectors
Changes Take Effect
After the DBS Control Record has been written or applied
Usage Notes
When tables are initially populated, Teradata Database stores as many rows as possible into
each data block, until the block reaches the size specified by PermDBSize. As tables are
subsequently modified, rows can grow such that the existing data blocks would exceed the
maximum PermDBSize. When this happens, the data block is split, and roughly half the rows
are moved to a new data block, with the result that the original and new data blocks are each
one half of the original size. The result of this type of growth and splitting is that data blocks
for heavily modified tables tend to be about 75% of the maximum size defined by
PermDBSize.
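A small simulation (illustrative only, with simplified assumptions about uniform row size and random insertion) shows why heavily modified tables tend toward blocks at about 75% of the maximum:

    import random
    # Rows are inserted into random data blocks; any block that would exceed the
    # maximum splits in half. Fill levels spread between ~50% and 100% of the
    # maximum, so the average settles near 75%.
    random.seed(0)
    max_bytes, row_bytes = 127 * 512, 200       # 127-sector blocks, 200-byte rows (assumed)
    blocks = [0]
    for _ in range(200000):
        i = random.randrange(len(blocks))
        if blocks[i] + row_bytes > max_bytes:
            half = blocks[i] // 2               # roughly half the rows move to a new block
            blocks[i] -= half
            blocks.append(half)
        blocks[i] += row_bytes
    print(round(100 * sum(blocks) / (len(blocks) * max_bytes)))   # approximately 75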
Maximum Multirow Data Block Size
The PermDBAllocUnit and PermDBSize fields together determine the maximum size of
multirow data blocks. Because data blocks can grow only in steps (allocation units) defined by
PermDBAllocUnit, the size of a data block at any time will always be an integer multiple of
PermDBAllocUnit, regardless of the PermDBSize setting.
Consequently, if the PermDBAllocUnit setting is not an integer factor of 255 sectors
(127.5 KB, the largest possible data block size), then the largest multirow data blocks will be
smaller than 255 sectors. For example, if PermDBAllocUnit is set to 4 sectors, even if
PermDBSize is set to 255, the largest multirow data block can be only 252 sectors (the greatest
multiple of 4 that is less than or equal to 255). Similarly, if PermDBAllocUnit is set to 16, the
largest multirow data block can be only 240 sectors.
PermDBSize and System Performance
Database performance can be affected by the relationship of data block size to the type of
work typically performed by the database:
•
When database queries are tactical in nature, involving one or a few table rows, it is
advantageous to have fewer rows stored per data block to speed data access. Online
transaction processing (OLTP) is an example of this type of work.
•
Alternatively, when database queries are strategic in nature, involving complex queries that
involve many table rows per table, it is advantageous to have many rows stored in each data
block, to minimize costly data I/O operations. Decision support software (DSS) and
complex report generation are examples of this type of work.
PermDBSize sets the default maximum size used by the system for multirow data blocks in
permanent tables. Use a larger value if the database is used primarily for strategic work, and a
smaller value if the database is used primarily for tactical work.
In a mixed-work environment, determine a value for PermDBSize based on the kind of work
typically performed by the database. For tables involved in other types of work, PermDBSize
can be overridden on a table-by-table basis using the DATABLOCKSIZE option of the
CREATE TABLE and ALTER TABLE SQL statements.
For example, if only a few large tables are involved in decision support, defining a data block
size of 63 sectors for those tables, while setting PermDBSize to 14, to serve as the default block
size for the remaining tables, which are involved in tactical work, can improve overall
throughput by as much as 50%.
PPICacheThrP
Purpose
Specifies the percentage of FSG Cache memory which is available (per query) for multiple-context operations (such as joins and aggregations) on partitioned primary index (PPI) tables
and join indexes.
Field Group
Performance
Valid Range
0 through 500.
The value is in units of 1/10th of a percent.
Default
The default is 10 (1%).
Changes Take Effect
After the DBS Control Record has been written or applied
Usage Notes
For some PPI operations, Teradata Database processes a subset of the non-empty, non-eliminated partitions together, rather than processing one partition at a time. A context is kept
for each partition to be processed. The context defines the current position within the
corresponding partition.
The PPICacheThrP value can be used to reduce swapping and avoid running out of memory
by limiting the amount of memory used for these data blocks during PPI operations.
However, larger values for PPICacheThrP may improve the performance of these PPI
operations by allowing them to use more memory.
For a multilevel PPI, a context is associated with a combined partition. In the following
discussions, partition means combined partition.
PPI Cache Threshold (PCT)
PCT is the amount of memory to be made available for PPI operations.
On 64-bit platforms where the file system cache per AMP is less than 100 MB, and on 32-bit
platforms:
PCT = (Total size of file system cache per AMP x PPICacheThrP) / 1000

On 64-bit platforms where the file system cache per AMP is greater than 100 MB:

PCT = (100 MB x PPICacheThrP) / 1000
Note: On 64-bit systems, the output of the DISPLAY command includes some additional
information for PPICacheThrP (under Performance fields). This information can help DBAs
determine the actual amount of memory available for multiple-context operations on PPI
tables.
The value PCT is used in the following operations for a partitioned table, as the amount of
memory to use for data blocks associated with the multiple contexts:
•
Join
•
Aggregation
The following equations define the maximum number of partitions and contexts to process at
a time, based on the amount of memory (PCT) determined from PPICacheThrP. If there are
fewer non-empty, non-eliminated partitions than this maximum, only the actual number is processed at one time; in that case, all the non-empty, non-eliminated partitions of the table are processed simultaneously.
Join for a Partitioned Table
In a primary index join, if one table or spool is partitioned and the other is not, then the
maximum number (P) of partitions processed at one time from the partitioned table or spool
is equal to the following:
P = PCT / (maximum(minimum(estimated average data block size + 12K, 256), 8))
In a primary index join, if both tables or spools are partitioned, then the maximum number of
partitions processed at one time from the tables or spools is equal to the following:
P = maximum(minimum(f1, 256) + minimum(f2, 256), 16)

where:

((f1 x db1) + (f2 x db2)) / 2 ≤ PCT
Formula element…    Is the…
f1                  number of partitions to be processed at one time from the left table/spool, as determined by the Optimizer.
f2                  number of partitions to be processed at one time from the right table/spool, as determined by the Optimizer.
db1                 estimated average data block size of the left table/spool.
db2                 estimated average data block size of the right table/spool.
Note: For each partition being processed, one data block remains in memory, if possible.
Aggregation for a Partitioned Table
If an aggregation is performed on the primary index of a partitioned table, the maximum
number of partitions processed at one time from the table is equal to the following:
P = PCT / (maximum(minimum(estimated average data block size + 12K, 256), 8))
Note: For each partition being processed, one data block remains in memory, if possible.
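The sketch below (illustrative only) applies the formulas above to a hypothetical configuration; the cache size, block size, and the assumption that the 12K, 256, and 8 terms are in kilobytes are all assumptions, not recommendations:

    # PCT and the maximum number of partition contexts (P) for a join or
    # aggregation on one partitioned table, per the formulas above.
    def pct_kb(fsg_cache_per_amp_kb, ppi_cache_thr_p=10, is_64bit=True):
        base = min(fsg_cache_per_amp_kb, 100 * 1024) if is_64bit else fsg_cache_per_amp_kb
        return base * ppi_cache_thr_p / 1000

    def max_partition_contexts(pct, avg_block_kb):
        return int(pct / max(min(avg_block_kb + 12, 256), 8))

    pct = pct_kb(fsg_cache_per_amp_kb=200 * 1024)   # 64-bit AMP with >100 MB of FSG cache
    print(pct)                                      # 1024.0 KB (1% of 100 MB) per query
    print(max_partition_contexts(pct, 64))          # 1024 / (64 + 12) -> 13 partitions at a time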
Performance Implications
The current data block for the corresponding partition is associated with each context. The
current set of data blocks (one for each context) are kept in memory, if possible, to improve
the performance of processing the set of partitions at the same time. If there is a shortage of
memory, these data blocks may need to be swapped to disk. Excessive swapping, however, can
degrade system performance.
Larger values may improve the performance of PPI operations, as long as the following occur:
•
Data blocks for each context can be kept in memory. When they can no longer be kept in
memory and must be swapped to disk, performance may degrade.
•
The number of contexts does not exceed the number of non-empty, non-eliminated
partitions for PPI operations. (If they do, performance will not improve because each
partition can have a context, and additional contexts would be unused.)
In some cases, increasing the value of PPICacheThrP above the default value can provide a
performance improvement for individual queries that do these PPI operations. However, be
aware of the potential for memory contention and running out of memory if too many of
these queries are running at the same time.
The default setting of 10 is conservative, and intended to avoid such memory problems. With
80 AMP Worker Tasks (AWTs) on a system with the default setting of 10, the maximum
amount of FSG cache memory per AMP that could be used for these PPI operations is 80% of
FSG cache memory, if all AMPs are simultaneously executing PPI operations, such as sliding-window joins for 80 separate requests. For nonstandard configurations that have more than 80
AWTs defined as the maximum, the setting is scaled to the number of AWTs. For example, at the
default setting of 10, a cap of 80% of FSG cache memory per AMP would still be in effect on
such systems.
For many sites, the default may be too conservative. All 80 AWTs might not be running PPI
operations at the same time. If, at most, 60 PPI operations are expected to occur at the same
time, the value of PPICacheThrP could possibly be raised to 15. If at most 40 are expected, the
value could possibly be raised to 20, and so on. The best value for this parameter is
dependent on the maximum concurrent users expected to be on the system and their
workload. No one value is appropriate for all systems.
Also, consider that the number of concurrent PPI operations, such as sliding-window joins,
may increase as PPI usage is increased. Increasing the value may increase performance now
without memory contention or running out of memory but, in the future, as more PPI
operations run concurrently, performance may decrease, or out of memory situations may
occur.
If less than 80 concurrent PPI operations are expected for your site, and you think that better
performance may be possible with an increased value for PPICacheThrP, you can experiment
with PPICacheThrP settings to determine an increased PPICacheThrP setting that is optimal
for your site and safe for your workloads. Measure pre- and post-change performance and
degree of memory contention under expected current and future workloads to evaluate the
effects of the change. If increasing the value to one that is reasonably safe for your site does not
yield adequate performance for PPI operations such as sliding-window joins, consider
defining partitions with larger granularity, so fewer partitions are involved in the sliding-window join.
Related Topics
For more information on partitioned primary indexes, see Database Design and SQL Data
Definition Language.
PrimaryIndexDefault
Purpose
The CREATE TABLE statement optionally can include several modifying clauses that
determine how a table is created. The PRIMARY INDEX, NO PRIMARY INDEX, PRIMARY
KEY, and UNIQUE clauses affect whether the table includes a primary index. The
PrimaryIndexDefault field determines whether a table that is created without any of these
modifiers will have a primary index created automatically by Teradata Database, or will be
created as a NoPI table, lacking a primary index.
Loading data into NoPI tables is faster than loading data into tables with primary indexes.
When there is no primary index, data is distributed among AMPs randomly, which is faster
than distributing the data according to a hash of the primary index. Because rows can be
appended quickly to the end of a NoPI table, these kinds of tables can provide a performance
advantage when loading data. Consequently, NoPI tables can be used as staging tables where
data can be loaded quickly, and from which the data is applied later to indexed target tables
using INSERT... SELECT, UPDATE ... FROM, or MERGE INTO statements.
Field Group
General
Valid Settings
Setting
Description
D
Sets or resets this field to the factory default.
Teradata Database automatically creates primary indexes for tables created
with CREATE TABLE statements that lack PRIMARY INDEX, NO PRIMARY
INDEX, PRIMARY KEY, and UNIQUE modifiers. The first column of the
table serves as a non-unique primary index (NUPI).
P
Teradata Database automatically creates primary indexes for tables created
with CREATE TABLE statements that lack PRIMARY INDEX, NO PRIMARY
INDEX, PRIMARY KEY, and UNIQUE modifiers. The first column of the
table serves as a NUPI.
N
Teradata Database does not create primary indexes for tables created with
CREATE TABLE statements that lack PRIMARY INDEX, NO PRIMARY
INDEX, PRIMARY KEY, and UNIQUE modifiers.
Default
D
Usage Notes
Regardless of the PrimaryIndexDefault setting, tables created using PRIMARY KEY and
UNIQUE constraints will have unique indexes:
•
If PRIMARY INDEX is explicitly specified, the table is created with a primary index, and
the columns specified as PRIMARY KEY or UNIQUE become a unique secondary index.
•
If NO PRIMARY INDEX is explicitly specified, the table is created as a NoPI table, and the
columns designated as PRIMARY KEY or UNIQUE become a unique secondary index.
•
If neither PRIMARY INDEX nor NO PRIMARY INDEX are explicitly specified, the
columns specified as PRIMARY KEY or UNIQUE become a unique primary index.
The best practice is to explicitly specify PRIMARY INDEX or NO PRIMARY INDEX in
CREATE TABLE statements, rather than relying on the PrimaryIndexDefault setting.
Related Topics
For the rules governing how system-defined primary indexes are created, see SQL Data
Definition Language.
ReadAhead
Purpose
Enables or disables read-ahead operations for sequential table access.
Field Group
Performance
Valid Settings
Setting
Description
TRUE
Read-ahead is enabled for sequential file access.
FALSE
No read-ahead is issued.
Default
TRUE
Changes Take Effect
After the DBS Control Record has been written or applied. Operations in progress at the time
of the change are not affected.
Usage Notes
The File System can issue a read-ahead I/O to bring the next data block, or group of data
blocks, into memory whenever a data block is read. Preloading data blocks in advance allows
processing to occur concurrently with I/Os, and can improve processing time significantly
when tables are being scanned sequentially, for example, when running commands such as
SCANDISK.
When ReadAhead is enabled, the number of data blocks that are preloaded is determined by
the Read Ahead Count, StandAloneReadAheadCount, and UtilityReadAheadCount fields.
Note: If the Cylinder Read feature is enabled, and there are cylinder slots available, all data
blocks on the cylinder are read into memory in one read operation. In this case, individual
data blocks are not preloaded. For more information on Cylinder Read, see the Control GDO
Editor (ctl) and Xctl chapters of Utilities.
Related Topics
•
“Read Ahead Count” on page 415
•
“StandAloneReadAheadCount” on page 432
•
“UseVirtualSysDefault” on page 442
Read Ahead Count
Purpose
Specifies the number of data blocks that will be preloaded into memory in advance of the
current data block when performing sequential table scans if the ReadAhead field is set to
TRUE.
Field Group
Performance
Valid Range
1 through 100 data blocks
Default
1 data block
Changes Take Effect
After the DBS Control Record has been written or applied
Usage Notes
Increasing the Read Ahead Count setting can reduce the amount of CPU time spent waiting
for read operations to finish. Read Ahead Count should be set high enough to make the
typical sequential scan limited only by the CPU, rather than by the read I/O.
The CPU must work harder when data blocks are large or row selection criteria are complex.
Consequently, if either of these conditions exist, read ahead counts can be lower.
For example, if the default data block size is used, most of the tables will consist of large data
blocks, and the default ReadAheadCount will suffice. If the default data block size is made very
small, most of the tables will consist of small data blocks and system performance might
benefit by increasing the ReadAheadCount to 25 or higher.
Related Topics
•
“ReadAhead” on page 413
•
“StandAloneReadAheadCount” on page 432
•
“UseVirtualSysDefault” on page 442
ReadLockOnly
Purpose
Enables or disables the special read-or-access lock protocol on the DBC.AccessRights table
during access rights validation and on other dictionary tables accessed by read-only queries
during request parsing.
Field Group
Performance
Valid Settings
Setting
Description
TRUE
Disable the read-or-access lock protocol.
FALSE
Enable the read-or-access lock protocol.
Default
FALSE
RedistBufSize
Purpose
Determines the buffer size for AMP-level hashed row redistributions, as used by load utilities
(MultiLoad and FastLoad), and Archive/Restore operations. This field also determines the size
of the buffers used to redistribute USI rows when creating an index on a populated table with
the CREATE UNIQUE INDEX SQL statement.
Note: This setting should be changed only under the direction of Teradata Support Center
personnel.
Field Group
Performance
Valid Values
•
-1 specifies the optimal buffer size to avoid extra memory overhead for sending row
redistribution messages on the current system.
•
0 specifies the default buffer size for the current system.
•
1 through 63 specifies the buffer size in units of kilobytes. For settings within this value
range, the actual buffer size may be adjusted internally by Teradata Database for better
memory utilization.
•
512 through 65024 specifies the buffer size in units of bytes. This is equivalent to 0.5
through 63.5 kilobytes. For settings within this value range, the buffer size is fixed, and
used exactly as is, without any adjustment by Teradata Database.
Default
Approximately 4 KB:
•
On 32-bit platforms, the default is somewhat greater than 4 KB to optimize buffer
memory utilization.
•
On 64-bit platforms, the default is somewhat less than 4 KB to optimize message
efficiency.
Changes Take Effect
After the DBS Control Record has been written or applied
Usage Notes
For the redistribution of data from AMP to AMP, the system reduces message overhead by
grouping individual rows before sending them on to their destination AMPs. The rows are
grouped into buffers on the originating AMP, one for each destination AMP. When all the
rows to be sent to a particular AMP have been collected in the corresponding buffer, they are
sent to their destination AMP with a single message.
If there are N AMPs in the system, then each AMP has N buffers for managing redistribution
data, making a total of N² buffers in the system used per redistribution. Multiplying the
number of redistribution buffers per node by the value of RedistBufSize gives the total amount
of system memory that will be used on each node for each redistribution. For example, each
FastLoad or MultiLoad job that is importing data at a given time requires a separate
redistribution.
Example
Assume a system has 12 nodes, with 8 AMPs per node.
The system would have a total of 12 x 8 = 96 AMPs.
Therefore, each AMP would need to use 96 buffers for each redistribution.
To calculate the amount of memory per node used for each AMP-level redistribution, first
multiply the number of buffers per AMP by the number of AMPs per node:
96 x 8 = 768 redistribution buffers per node per redistribution
Then multiply the number of redistribution buffers by the RedistBufSize value:
768 x 4 KB = 3,072 KB or 3 MB per node used for each AMP-level redistribution
The memory used for redistributions scales with system size. Adding more nodes or more
AMPs to a system necessitates more memory for redistributions.
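The example arithmetic can be written out directly (illustrative only):

    # Per-node memory used by one AMP-level redistribution, reproducing the
    # example above (12 nodes x 8 AMPs per node, RedistBufSize of about 4 KB).
    def redist_memory_per_node_kb(nodes, amps_per_node, redist_buf_size_kb=4):
        total_amps = nodes * amps_per_node             # each AMP keeps one buffer per AMP
        buffers_per_node = total_amps * amps_per_node  # buffers for the AMPs on one node
        return buffers_per_node * redist_buf_size_kb

    print(redist_memory_per_node_kb(12, 8))   # 3072 KB (3 MB) per node per redistribution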
Performance Implications
If a system has relatively few AMPs, a larger redistribution buffer size usually has a positive
effect on load performance. However, on larger systems with many AMPs, a large buffer size
can consume excessive memory, especially if many load jobs are run concurrently.
For more information on row redistribution and performance, see Performance Management.
RepCacheSegSize
Purpose
Specifies the size (in KB) of the cache segment that is allocated in each AMP to store data
objects used specifically for data replication.
These data objects consist of executable code and related metadata used by the replication
system to convert row images into an external format. There is a unique replication data
object for each replicated table.
Field Group
General
Valid Range
64 through 4096 KB
Default
512 KB
Changes Take Effect
After the next Teradata Database restart
Usage Notes
Because replication data objects are referenced each time a row image is processed for
replication, the cache segment size should be large enough to accommodate these objects for
all tables that are actively being replicated. A typical 20-column table uses about 5500 bytes of
cache storage.
RevertJoinPlanning
Purpose
Determines whether the Teradata Database query Optimizer uses newer or older join
planning techniques.
Field Group
Performance
Valid Settings
Setting
Description
TRUE
Causes the Teradata Database query Optimizer to use older join planning logic, and
limits the maximum number of tables that can be used for join planning to 64.
To restrict all users to no more than 64 tables joined per query block, set
RevertJoinPlanning in DBS Control to TRUE.
FALSE
Causes the Teradata Database query Optimizer to use new join planning logic,
unless the RevertJoinPlanning option for the Type 2 Cost Profile is set to TRUE, in
which case the Cost Profile option takes precedence. In that case, the Teradata
Database query Optimizer uses the older join planning logic and table limit.
The normal case is RevertJoinPlanning set to FALSE in both the DBS Control record and the Cost Profile.
To restrict only certain users to no more than 64 tables joined per query block, leave
RevertJoinPlanning in DBS Control set to FALSE, and assign the restricted users to a special
cost profile that has RevertJoinPlanning set in the profile to TRUE.
Default
FALSE
Related Topics
For more information on…
See…
the maximum number of tables that can be
joined
“MaxJoinTables” on page 380.
join planning, optimization, and cost profiles
SQL Request and Transaction Processing.
RollbackPriority
Caution:
This setting affects all users on the system.
Purpose
Determines the priority given to rollback operations.
Field Group
General
Valid Settings
Setting
Description
FALSE
Rollback is at system priority, a super-high priority, greater than any user-assigned
priority, which is reserved for critical internal work.
TRUE
Rollbacks are executed within the aborted job's Performance Group or workload.
Default
FALSE
This value should be changed only after careful consideration of the consequences to system
performance, based on the information here and in Performance Management.
Changes Take Effect
After the next Teradata Database restart.
Usage Notes
RollbackPriority affects only individual rollbacks resulting from aborted sessions. It does not
affect rollbacks resulting from a Teradata Database restart. The priority of a rollback resulting
from a restart is determined by a setting in the Recovery Manager (rcvmanager) utility. This
setting can be changed at any time, even if there is no system rollback underway. If a system
rollback is underway when the setting is changed, the tasks working on the rollback will have
their priorities changed immediately.
Performance Implications
Because rollbacks can involve millions or billions of rows, competing for CPU and other
system resources, rollbacks can impact system performance. Rollbacks can keep locks on
affected tables for hours or days until the rollback is complete. During a rollback, a trade-off
occurs between overall system performance and table availability.
How RollbackPriority affects performance is not always straightforward, and is related to the
Priority Scheduler configuration, job mix, and other processing dynamics. The
RollbackPriority setting should only be changed after full consideration of the performance
consequences:
•
When RollbackPriority is set to FALSE, rollbacks are performed at system priority, a
special priority higher than any user-assigned priority, that is reserved for critical internal
work. As a result, faster rollbacks occur at the expense of other online performance.
The default setting of FALSE is especially appropriate when rollbacks are large, occurring
to critical tables that are accessed by many users. It is better to complete these rollbacks as
quickly as possible to maximize table availability.
•
When RollbackPriority is set to TRUE, rollbacks are executed within the aborted job's
Performance Group or workload. This isolates the rollback processing to the aborted job's
priority, and minimizes the effect on the performance of the rest of the system. However, if
the rollback places locks on tables that other users are waiting for, this causes a greater
performance impact for those users, especially if the rollback is running at a low priority.
A setting of TRUE is appropriate when rollbacks are typically smaller, occurring to smaller
tables that are less critical, and less extensively used.
Regardless of the RollbackPriority setting, rollbacks are never subject to CPU limits:
•
When RollbackPriority is FALSE, rollbacks run in the system performance group where
jobs use as much CPU as they require.
•
When RollbackPriority is TRUE, rollbacks are not constrained by CPU limits at the
system, Resource Partition, or Allocation Group level.
Related Topics
For more information on…
See…
rollbacks, rollback priority, and their effect on performance
Introduction to Teradata and
Performance Management
Performance Groups, Allocation Groups, and the
Priority Scheduler utility
Utilities Volume 2.
the Recovery Manager utility
Utilities Volume 2.
RollbackRSTransaction
Purpose
Determines which transaction is rolled back when a subscriber-replicated transaction and a user transaction are involved in a deadlock.
Field Group
General
Valid Settings
Setting
Description
TRUE
Roll back the subscriber-replicated transaction (feature enabled).
FALSE
Roll back the user transaction (feature disabled).
Default
FALSE
Changes Take Effect
After the next Teradata Database restart
RollForwardLock
Caution:
This feature affects all users on the system.
Purpose
Defines the system default for the RollForward using Row Hash Locks option. This allows the
database administrator to specify that row hash locks should be used to lock the target table
rows during a RollForward. Row hash locks reduce lock conflicts, so that users are more likely
to be able to access data during the RollForward operation.
Field Group
General
Valid Settings
Setting
Description
TRUE
Enables this feature.
FALSE
Disables this feature.
Default
FALSE
Changes Take Effect
After the next Teradata Database restart
RoundHalfwayMagUp
Purpose
Indicates how rounding should be performed when computing values of DECIMAL type. A
halfway value is exactly halfway between representable decimal values.
Field Group
General
Valid Settings
The rounding behavior is different depending upon the setting of the RoundHalfwayMagUp
field.
IF you set the field to…
THEN the Teradata Database system uses the rounding semantics…
TRUE
appropriate for many business applications:
The magnitudes of halfway values are rounded up. Halfway values are
rounded away from zero so that positive halfway values are rounded up
and negative halfway values are rounded down (toward negative infinity).
For example, a value of 2.5 is rounded to 3.
FALSE
traditional for Teradata Database:
A halfway value is rounded up or down so that the least significant digit is
even. For example, a value of 2.5 is rounded to 2.
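Python's decimal module can illustrate the two semantics. This is only an analogy to the behavior described above, not Teradata code:

    from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN
    # ROUND_HALF_UP rounds halfway values away from zero, like RoundHalfwayMagUp = TRUE;
    # ROUND_HALF_EVEN rounds halfway values to an even digit, like the FALSE default.
    for value in ("2.5", "-2.5", "3.5"):
        d = Decimal(value)
        print(value,
              d.quantize(Decimal("1"), rounding=ROUND_HALF_UP),    # TRUE:  3, -3, 4
              d.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # FALSE: 2, -2, 4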
Default
FALSE
Changes Take Effect
After the next Teradata Database restart
RSDeadLockInterval
Purpose
Sets the interval, in seconds, between checks for deadlocks between subscriber-replicated
transactions and user transactions.
Deadlock checking between subscriber-replicated transactions and user transactions only
occurs if the system is configured with RSG vprocs and if Relay Services Gateway is up.
Field Group
General
Valid Range
0 through 3600 seconds.
If the interval is set to 0, then the DeadlockTimeOut value is used.
Note: The deadlock detection interval is in seconds.
Default
0 seconds (a setting of 0 causes the DeadlockTimeOut value, 240 seconds by default, to be used)
Changes Take Effect
On…        The new setting becomes effective after the…
Linux      DBS Control Record has been applied.
MP-RAS     next Teradata Database restart.
Windows    DBS Control Record has been applied.
SessionMode
Purpose
Defines the Teradata Database system default transaction mode, case sensitivity, and character
truncation rule for a session.
Field Group
General
Valid Settings
The setting…
Defaults SQL sessions to…
0
Teradata Database transaction semantics, case insensitive data, and no error
reporting on truncation of character data.
1
ANSI transaction semantics, case sensitive data, and error reporting on
truncation of character data.
Default
0. The default can be overridden at the user or session level (at logon).
Changes Take Effect
After the next Teradata Database restart
SkewAllowance
Purpose
Makes allowance for data skew in the build relation. SkewAllowance specifies a percentage
factor used by the Optimizer in choosing the size of each hash join partition.
SkewAllowance reduces the memory size for the hash join specified by HTMemAlloc. This
allows the Optimizer to take into account a potential skew of the data that could make the
hash join run slower than a merge join.
Field Group
Performance
Valid Range
20 through 80%
Default
75%
This is the recommended setting.
Changes Take Effect
After the DBS Control Record has been written or applied
Related Topics
See “HTMemAlloc” on page 363.
SmallDepotCylsPerPdisk
Purpose
Determines the number of depot cylinders the file system allocates per pdisk (storage device)
to contain small slots (128 KB). A small slot can hold a single data block during depot
operations.
The actual number of small-depot cylinders used per AMP is this value multiplied by the
number of pdisks per AMP.
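For example (illustrative only; the pdisk count below is a hypothetical value):

    # Small-depot cylinders per AMP = SmallDepotCylsPerPdisk x pdisks per AMP.
    def small_depot_cyls_per_amp(small_depot_cyls_per_pdisk, pdisks_per_amp):
        return small_depot_cyls_per_pdisk * pdisks_per_amp

    print(small_depot_cyls_per_amp(2, 4))   # default of 2 with 4 pdisks per AMP -> 8 cylinders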
Field Group
File System
Valid Range
1 through 10 cylinders
Default
2
Changes Take Effect
After the next Teradata Database restart
Usage Notes
The Depot is a set of transitional storage locations (a number of cylinders) used by the file
system for performing in-place writes of DBs or WAL DBs (WDBs). An in-place write means
that the changed DB is written back to exactly the same place on disk from which it was
originally read. In-place writes are only performed for modifications to DBs that do not
change the size of table rows, and therefore do not require any reallocation of space.
Writing the changed DB directly back to its original disk location could leave the data
vulnerable to various hardware and system problems that can occur during system resets, such as disk controller malfunctions or power failures. If such a problem occurred during the
write operation, the data could be irretrievably lost.
The Depot protects against such data loss by allowing the file system to perform disk writes in
two stages. First the changed DB (or WDB) is written to the Depot. After the data has been
completely written to the Depot, it is written to its permanent location on the disk. If there is a
problem while the data is being written to the Depot, the original data is still safe in its
permanent disk location. If there is a problem while the data is being written to its permanent
location, the changed data is still safe in the Depot. During database startup, the Depot is
examined to determine if any of the DBs or WDBs should be rewritten from the Depot to their
permanent disk locations.
Spill File Path
Purpose
Specifies a directory that the Relay Services Gateway (RSG) can use for spill files.
Field Group
General
Valid Setting
Any existing path
Default
IF your platform is…    THEN your default directory is…
Linux                   /opt/teradata/tdat/temp/tdrsg
MP-RAS                  /var/tdrsg
Windows                 tdrsg, a relative path. Relative paths are interpreted as relative to the tdtemp directory.
Changes Take Effect
After the next Teradata Database restart
StandAloneReadAheadCount
Purpose
Specifies the number of data blocks beyond the current block that will be preloaded into
memory during the following sequential scan operations:
•
File system startup
•
Ferret and Filer utility operations, such as SCANDISK, that occur when Teradata Database
is not running
Note: To enable the read-ahead feature, the ReadAhead field must be set to TRUE.
Field Group
Performance
Valid Range
1 through 100 blocks
Default
20 blocks
Usage Notes
The number of data blocks to be preloaded during Ferret and Filer operations when Teradata
Database is running is determined by the UtilityReadAheadCount setting.
Related Topics
•
“ReadAhead” on page 413
•
“Read Ahead Count” on page 415
•
“UseVirtualSysDefault” on page 442
StepsSegmentSize
Purpose
Defines the maximum size (in KB) of the plastic steps segment (also known as OptSeg).
Field Group
Performance
Valid Range
64 through 1024 KB
Default
1024 KB
Changes Take Effect
After the DBS Control Record has been written or applied. Any plastic step generation
operations in progress at the time of the change are not affected.
Usage Notes
When decomposing a Teradata Database SQL statement, the parser generates plastic steps,
which the AMPs then process.
Large values allow the parser to generate more SQL optimizer steps that the AMPs use to
process more complex queries.
Set this field to a small number to limit the query complexity.
SyncScanCacheThr
Purpose
Determines the amount of FSG cache that can be used for synchronized table scans.
SyncScanCacheThr is effective only when DisableSyncScan is set to FALSE.
Note: This setting should be changed only under the direction of Teradata Support Center
personnel.
Field Group
Performance
Valid Range
0 through 100%
When this field is set to zero, Teradata Database uses the default value for the field.
Default
10%.
Usage Notes
The synchronized full table scan feature of Teradata Database allows several table scanning
tasks to simultaneously access the portion of a large table that is currently in the cache.
Synchronized table scans happen only when full table scans are accessing similar areas of a
large table. Synchronized table scans can improve database performance by reducing the
required disk I/O. Synchronized table scans are available only to large tables undergoing full
table scans.
Full table scans of large tables can fill up the cache quickly, flushing data from smaller
reference tables out of the cache too soon, or preventing such data from being cached at all.
Because it is unlikely that the cached data from these large tables will be accessed again before
the data has been replaced in the cache, the benefits of caching are normally not realized for
full table scans of large tables. Therefore, large tables are normally excluded from the cache.
The DBSCacheThr setting demarcates “small” from “large” tables for purposes of most system
caching decisions.
However, if several tasks are scanning the same large table, efficiencies can be realized by
caching a portion of the table and allowing several scans to access the cached data
simultaneously. When synchronized scanning is enabled, a portion of the cache can be used
for synchronized scanning of large tables that might otherwise be excluded from the cache.
Teradata Database determines which tables qualify for potential synchronized scanning.
The relative benefits of synchronized large table scans versus allowing more of the cache to be
used for small reference tables depend on the specific mix of work types on the system, and
may change with time. Cache in use for synchronized scans is not available for caching
frequently accessed data from small reference tables. Therefore, Teradata recommends
changing SyncScanCacheThr only if you are directed to do so by Teradata Support personnel.
Make only small changes to the setting, and carefully observe the effects on system
performance before committing the change on a production system.
Related Topics
• “DBSCacheThr” on page 330
• “DisableSyncScan” on page 343
SysInit
Purpose
Caution:
You must use the System Initializer utility to modify this field. Using the System Initializer
utility program destroys all user and dictionary data.
Ensures the system has been initialized properly using the System Initializer utility.
Field Group
General
Valid Settings
TRUE or FALSE.
Upon successful completion, the System Initializer utility sets this field to TRUE. This flag
must be TRUE for Teradata Database startup to begin.
When SysInit is TRUE, the SysInit timestamp displays the year, month, day, hour and minute
in a yyyy-mm-dd hh:mi format, if available.
The following describes when the timestamp will appear.
• For a Teradata Database system that is upgraded without SysInit, the SysInit field displays: SysInit = TRUE (time unknown).
• For a system upgraded with an unsuccessful SysInit, the SysInit field displays: SysInit = FALSE.
• For a system upgraded with a successful SysInit, the SysInit field displays: SysInit = TRUE (2003-09-12 10:22).
Default
The default is FALSE.
System TimeZone Hour
Purpose
Defines the System Time Zone Hour offset from Coordinated Universal Time (UTC).
Field Group
General
Valid Range
-12 through 13
Default
0
Changes Take Effect
For new sessions begun after the DBS Control Record has been written. Existing sessions are
not affected.
System TimeZone Minute
Purpose
Defines the System Time Zone Minute offset from Coordinated Universal Time (UTC).
Field Group
General
Valid Range
-59 through 59
Default
0
Changes Take Effect
After the DBS Control Record has been written. Existing sessions are not affected.
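For example, a site in a time zone that is 5 hours and 30 minutes ahead of UTC would typically set System TimeZone Hour to 5 and System TimeZone Minute to 30; the values used at your site may differ.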
Target Level Emulation
Note: Teradata does not recommend enabling Target Level Emulation on a production
system.
Purpose
Enables a set of DIAGNOSTIC SQL statements in the database server that set cost information.
The Target Level Emulation field allows field test engineers to set the costing parameters
considered by the Optimizer for the system.
Field Group
General
Valid Settings
Setting
Description
TRUE
Enable this feature.
FALSE
Disable this feature.
When this field is disabled, the DIAGNOSTIC SET COSTS statement will not be
accepted. (An error is reported.) The DIAGNOSTIC DUMP COSTS statement can
be executed when the feature is disabled.
Default
FALSE
Changes Take Effect
After the DBS Control Record has been written or applied
Related Topics
See “Target Level Emulation” in SQL Request and Transaction Processing.
TempLargePageSize
Purpose
Specifies the size (in KB) of the large memory allocation storage page used for temporary
storage in the Relay Services Gateway (RSG).
Field Group
General
Valid Range
64 through 1024 KB
Default
64 KB
Changes Take Effect
After the next Teradata Database restart
Usage Notes
If a transaction contains any record that exceeds the standard page size defined by the
Temporary Storage Page Size DBS Control field, the RSG will allocate memory using the large
page size. If the record exceeds the large page size, the entire transaction is spilled to disk.
Note: Pages are not shared between transactions, so increasing the page size may cause a
decrease in storage utilization efficiency.
Related Topics
See “Temporary Storage Page Size” on page 441.
Temporary Storage Page Size
Purpose
Specifies the standard memory allocation granule for Relay Services Gateway (RSG)
temporary storage.
Field Group
General
Valid Range
1 through 1024 KB
Default
4 KB
Changes Take Effect
After the next Teradata Database restart
Usage Notes
If a transaction contains any record that exceeds the standard page size, the RSG allocates
memory from the large page size defined by the TempLargePageSize field. If the large page size
is not sufficient to hold the record, the entire transaction is spilled to disk.
Note: Pages are not shared between transactions, so an increase in the page size may cause a
decrease in storage utilization efficiency.
Related Topics
See “TempLargePageSize” on page 440.
UseVirtualSysDefault
Purpose
This field is no longer used. For cost profiling information, see “CostProfileId” on page 318.
Field Group
General
UtilityReadAheadCount
Purpose
Specifies the number of data blocks beyond the current block that will be preloaded into
memory during Ferret and Filer utility operations, such as SCANDISK, that occur when
Teradata Database is running.
Note: To enable the read-ahead feature, the ReadAhead field must be set to TRUE.
Field Group
Performance
Valid Range
1 through 100 blocks
Default
10 blocks
Usage Notes
The number of data blocks to be preloaded during Ferret and Filer operations when Teradata
Database is not running is determined by the StandAloneReadAheadCount setting.
Related Topics
• “ReadAhead” on page 413
• “Read Ahead Count” on page 415
• “StandAloneReadAheadCount” on page 432
Version
Caution:
You must use the System Initializer utility to modify this field. Using the System Initializer
utility program destroys all user and dictionary data.
Purpose
Indicates the version number of the DBS Control Record.
Field Group
General
Valid Range
1 ... MAXLONGINT
Default
4
Usage Notes
The Version field is incremented by one when the DBS Control Record must be migrated to a
new format.
WAL Buffers
Purpose
Determines the number of WAL append buffers allocated by the File System.
Field Group
File System
Valid Range
5 through 40
Default
20
Changes Take Effect
After the next Teradata Database restart
Usage Notes
A larger number of buffers increases the chance that a buffer will be available to hold a
row when a task needs to append a WAL log record.
A smaller number of buffers increases the risk that no buffer will be available because all
buffers are full and their writes are still pending.
WAL Checkpoint Interval
Purpose
Determines the amount of time that elapses between WAL checkpoints.
Field Group
File System
Valid Range
1 through 240 seconds
Default
60 seconds
Usage Notes
A WAL checkpoint is used to indicate the oldest part of the WAL log that must be scanned
when recovering from a system crash. It differentiates the WAL log records that have been
written to disk from the records that must be applied during system recovery. The checkpoint
is used as the starting point for the Redo forward scan of the WAL log during recovery.
Checksum Fields
Checksums can be used to check the integrity of Teradata Database disk I/O operations. A
checksum is a calculated numeric value computed from a given set of data, or specific portions
of the data. For a given set of data, the checksum value will always be the same, provided the
data is unchanged.
Checksums can be used to detect when there are errors in disk I/O operations. When
checksums are enabled, and data is initially read, a checksum is calculated for the data and
stored in the system. When the same data is subsequently read, the checksum is recalculated
and compared to the original checksum value. Differing checksum values for a given set of
data indicate an inconsistency in the data, most often due to errors in disk I/O operations.
Because calculating checksums requires system resources, and may affect system performance,
the checksum feature is usually enabled only when disk corruption is suspected.
Checksum Levels
Checksum levels define different data sampling percentages that are used to calculate the
checksum value. For example, a checksum level of 2% uses 2% of the bytes that are read in a
single I/O to calculate the checksum for the data read. Because higher sampling percentages
use more of the data to calculate the checksum, they are more likely to detect errors that
smaller sampling rates might miss. However, higher sampling percentages are also more
computationally intense, requiring more system resources, and therefore are likely to affect
system performance to a greater degree.
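As a rough illustration (the 128 KB block size here is hypothetical), a checksum level of 2% applied to a single 128 KB (131,072-byte) I/O computes the checksum from roughly 131,072 x 0.02 ≈ 2,621 of the bytes read, whereas a 100% level uses all 131,072 bytes.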
Use the Checksum Level Definitions fields of DBS Control (Checksum fields 7 through 9) to
customize the LOW, MEDIUM, and HIGH checksum levels. The NONE and ALL levels are
fixed at 0% sampling (checksums are disabled) and 100% sampling (checksums calculated
based on every byte of data), respectively. The default definitions for LOW, MEDIUM, and
HIGH are listed below.
Checksum levels are selectable based on table types. You can select from the following
checksum levels:
• NONE — Checksums are disabled for classes of tables for which the checksum level is NONE. The definition of this level cannot be changed.
• LOW — Checksum calculations use a low percentage of the data read to generate a checksum value. The default is 2%. The valid range is 1 - 100%.
• MEDIUM — Checksum calculations use a medium percentage of the data read to generate a checksum value. The default is 33%. The valid range is 1 - 100%.
• HIGH — Checksum calculations use a high percentage of the data read to generate a checksum value. The default is 67%. The valid range is 1 - 100%.
• ALL — 100% of the data read is used to generate a checksum value of tables for which the checksum level is ALL. The definition of this level cannot be changed.
Enabling Checksums
Use the Checksum Levels fields of DBS Control (Checksum fields 1 through 6) to set
checksum levels for these six classes of tables:
• System
• System Journal
• System Logging
• User
• Permanent Journal
• Temporary
These table classes are described in the sections that follow.
Note: Setting a checksum level for Checksum field 0 applies the specified checksum level to all
classes of tables.
Examples
Release 13.00.00.00 Version 13.00.00.00
DBSControl Utility (Dec 99)
The current DBS Control GDO has been read.
Enter a command, HELP, or QUIT:
modify checksum 6 = medium
The Temporary Tables field has been modified to MEDIUM
NOTE: This change will become effective after the DBS Control Record
has been written.
m c 0=none
The System Tables field has been modified to NONE
The System Journal Tables field has been modified to NONE
The System Logging Tables field has been modified to NONE
The User Tables field has been modified to NONE
The Permanent Journal Tables field has been modified to NONE
The Temporary Tables field has been modified to NONE
NOTE: This change will become effective after the DBS Control Record
has been written.
Enabling Checksums on Individual Tables
To enable or display checksums on individual tables, use the CHECKSUM option of the
following Teradata SQL statements:
• CREATE TABLE
• CREATE JOIN INDEX
• CREATE HASH INDEX
• ALTER TABLE
• SHOW TABLE
• SHOW JOIN INDEX
• SHOW HASH INDEX
For more information on these statements, see SQL Data Definition Language.
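For illustration only, statements of the following general form set a table-level checksum when a table is created, change it later, and display the current definition. The database, table, and column names are hypothetical, and the exact CHECKSUM syntax and options should be confirmed in SQL Data Definition Language.

CREATE TABLE Sales.DailyTotals ,CHECKSUM = HIGH
  (SaleDate DATE,
   StoreId  INTEGER,
   Amount   DECIMAL(12,2))
PRIMARY INDEX (StoreId);

ALTER TABLE Sales.DailyTotals ,CHECKSUM = MEDIUM IMMEDIATE;

SHOW TABLE Sales.DailyTotals;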
System Tables
Purpose
Sets the checksum level of the system tables.
Default
NONE
Usage Notes
System tables include all system table types in database DBC (data dictionaries, session
information, and so forth) and table unique IDs 0, 1 through 0, 999.
The System Tables checksum setting does not affect tables included under System Journal
tables and System Logging tables.
System Journal Tables
Purpose
Sets the checksum level of system journal tables.
Default
NONE
Usage Notes
System journal tables include the following transient journals, change tables, and recovery
journals:
• DBC.TransientJournal
• DBC.ChangedRowJournal
• DBC.LocalTransactionStatusTable
• DBC.UtilityLockJournalTable
• DBC.LocalSessionStatusTable
• DBC.SysRcvStatJournal (System Recovery Status Journal)
• DBC.SavedTransactionStatusTable
• DBC.OrdSysChngTable (Ordered System Change Table)
• DBC.RecoveryLockTable
• DBC.RecoveryPJTable (Recovery Permanent Journal Table)
System Logging Tables
Purpose
Sets the checksum level of system logging tables.
Default
NONE
Usage Notes
System logging tables include the following:
• DBC.AccLogTbl — Logging activity controlled by DBC.AccLogRuleTbl
• DBC.Acctg — Log of each account a user owns on each AMP
• DBC.EventLog — Log of session events
• DBC.RCEvent — Log of storage media for events
• DBC.SW_Event_Log — Log of software system errors
All Resource Usage (RSS) tables are also affected by the System Logging tables checksum level
setting.
User Tables
Purpose
The User Tables field sets the checksum level of user tables.
Default
NONE
Usage Notes
User tables include all user tables (table unit IDs 0, 1001 through 16383, 65535), which
include the following:
• Stored procedures
• User-defined functions
• User-defined methods
• Join indexes
• Hash indexes
This also includes fallback copies of these objects, as well as secondary indexes for tables and join indexes.
Permanent Journal Tables
Purpose
Sets the checksum level of permanent journal tables.
Default
NONE
Usage Notes
Permanent journal tables include all permanent journal tables (table uniq[0] IDs 16384
through 32767).
Temporary Tables
Purpose
Sets the checksum level of temporary and spool tables.
Default
NONE
Usage Notes
Temporary tables include all temporary and spool tables (table uniq[0] IDs 32768 through
65535), which include the following:
• Global temporary tables
• Volatile tables
• Intermediate result spool tables
• Response spool tables
CHAPTER 13
Dump Unload/Load Utility
(dul, dultape)
The Dump Unload/Load utilities, dul and dultape, save or restore system dump tables.
If a Teradata Database failure occurs, the system automatically saves the contents of the
affected AMP and its associated PE in a system-generated “dump table.” The information in
these tables can be used to determine the cause of a system failure. Dul and dultape can be
used to unload the dump tables to DVD, disk, or tape media.
In order to use dul and dultape, the associated packages must be installed on a Teradata
Database system.
Audience
Users of the Dump Unload/Load utilities include the following:
• Field service representatives
• Network administrators
• Teradata Database system administrators
User Interfaces
Dul and dultape run on the following platforms and interfaces:
• MP-RAS — Command line
• Windows — Command line (“Teradata Command Prompt”)
• Linux — Command line
What DUL Does
• Dul transfers dump information from a Teradata Database system to files on a host, or to removable storage media (DVD or tape). This unload operation is normally performed at the customer site, where the crash dump information is copied and shipped or transferred to the Teradata Support Center for analysis.
• Dul is used by Teradata Support Center personnel to restore the dump information to a system for analysis.
• Dul can be used to drop tables and obtain summary information about dumps without performing a load or unload operation.
Tables containing crash dump information are named according to the following syntax:
CrashDumps.Crash_yyyymmdd_hhmmss_nn
where:
• yyyymmdd is the year, month, and day.
• hhmmss is the hour, minute, and second.
• nn is the sequence number, which is increased by one for each dump saved.
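For example, a dump captured on May 26, 2000 at 13:45:03 and assigned sequence number 01 would be named CrashDumps.Crash_20000526_134503_01.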
Dul can unload dump tables having any name.
Dul automatically saves unloaded dumps to a file with a compressed, gzip file format, and
adds a .gz extension to the dump file name. When loading dumps, dul looks for the specified
file name including a .gz extension. If that file is not found, dul looks for the specified file
name without a .gz extension.
What DULTAPE Does
Dultape provides the same functionality as dul; however, dultape offloads tables only to tape,
rather than to disk. If a dump was unloaded with dultape 07.00.00, use dultape 06.01.00 or
higher to load the dump.
The following table shows the tape drives that dultape supports.
• On Linux, dultape supports 4mm DAT tape drives.
• On MP-RAS, dultape supports:
  • 4mm DAT tape drives.
  • 8mm Exabyte tape drives.
  • Exabyte Mammoth 8900 tape drives.
  • Quarter-Inch Cartridge (QIC) tape drives.
• On Windows, dultape supports:
  • 4mm DAT tape drives.
  • Exabyte Mammoth 8900 tape drives.
Starting and Running DUL/DULTAPE
Because dul and dultape are used to move large amounts of data, these operations are usually
performed in batch mode.
• On MP-RAS, dul and dultape run in either batch or interactive mode, providing the same functionality in the following environments:
  • UNIX SVR4 version 3.2
  • UNIX SVR4 version 3.3
• The Windows versions of dul or dultape run in either batch or interactive mode using MS-DOS batch commands.
• On Linux, dul and dultape run in either batch or interactive mode in the following environments:
  • SUSE Linux Enterprise 9 (EM64T)
  • SUSE Linux Enterprise 10 (EM64T)
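As a simple illustration of batch use, commands such as the following could be placed in an input script and submitted to dul in batch mode (the exact invocation is platform-specific). The tdpid, username, password, dump table name, and file path shown here are examples only; substitute the values used at your site.

LOGON TDP0/Admin,abc ;
SELECT ERROR ;
UNLOAD crashdumps.Crash_20000526_134503_01 FILE = /tmp/Crash_20000526_134503_01 ;
QUIT ;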
Starting DUL and DULTAPE on MP-RAS
Note: The default password for the crashdumps logon is crashdumps. Verify whether your
System Administrator has changed the password or is still using the default.
You can start and exit dul and dultape interactively from the command line.
To start dul interactively, do the following:
1
At the command prompt, type the following and press Enter:
dul
Dul prompts you for your logon:
Dump Unload/Load - Enter your logon:
2
Type:
.logon crashdumps
and press Enter.
The password for the crashdumps logon is crashdumps.
To start dultape interactively, do the following:
1
At the command prompt, type the following and press Enter:
dultape
Dultape prompts you for the tape drive path:
Dump Unload/Load to Tape - Please insert tape and input tape device
name:
2
Insert the tape, type the tape drive path, and press Enter. For example:
/dev/rmt/c0t3d0s0
Dultape prompts you for your logon:
Dump Unload/Load to Tape - Enter your logon:
3
Type:
.logon crashdumps
and press Enter.
The password for the crashdumps logon is crashdumps.
To log off the Teradata Database and exit dul or dultape, do the following:
✔ At the command prompt, type one of the following and press Enter:
• LOGOFF;
• END;
• QUIT;
Starting DUL and DULTAPE on Windows
Note: The default password for the crashdumps logon is crashdumps. Verify whether your
System Administrator has changed the password or is still using the default.
You can start and exit dul and dultape interactively from the Start menu.
To start dul interactively, do the following:
1
Select Start >Programs>Teradata Database>Teradata Command Prompt.
The Teradata Command Prompt window opens.
2
At the command prompt, type the following and press Enter:
dul
3
Dul prompts you for your logon:
Dump Unload/Load Version 05.02.02 - Enter your logon:
4
Type:
.logon crashdumps
and press Enter.
The password for the crashdumps logon is crashdumps.
The following message appears:
*** Logon successfully completed.
*** Transaction Semantics are BTET.
*** Character Set Name is 'ASCII'.
*** Changing current database to crashdumps.
*** New default database accepted.
To log off the Teradata Database and exit dul, do the following:
✔ At the command prompt, type one of the following and press Enter:
• LOGOFF;
• END;
• QUIT;
To start dultape interactively, do the following:
1
Select Start >Programs>Teradata Database>Teradata Command Prompt.
The Teradata Command Prompt window opens.
2
At the command prompt, type the following and press Enter:
dultape
3
Dultape prompts you:
Dump Unload/Load to Tape Version 07.02.02
Please insert tape and input tape device name:
4
Insert a tape into the drive.
5
In the MS-DOS window, type the tape drive path as follows and press Enter:
\\.\TapeX
where X is the number of the tape.
IF the node has a single drive, THEN name the single drive Tape0.
IF the node has more than one drive, THEN name the first drive Tape0 and increment each succeeding drive by one, such as Tape1, Tape2, and so on.
To determine the drive name, do the following:
1 Use REGEDIT to find the data string called TapePeripheral located in \HKEY_LOCAL_MACHINE.
2 Then right-click your mouse and select Find.
3 Press F3 to find all occurrences of TapePeripheral.
For more information, see your Microsoft user documentation.
For example:
\\.\Tape0
The following appears:
Tape device is \\.\Tape0
Dump Unload/Load to Tape - Enter your logon:
6
Type:
.logon crashdumps
and press Enter.
The password for the crashdumps logon is crashdumps.
The following appears:
*** Logon successfully completed.
*** Transaction Semantics are BTET.
*** Character Set Name is 'ASCII'.
*** Changing current database to crashdumps.
*** New default database accepted.
Dump Unload/Load to Tape - Enter your command:
To log off Teradata Database and exit dultape, do the following:
✔ At the command prompt, type the following and press Enter:
quit;
Starting DUL and DULTAPE on Linux
Note: The default password for the crashdumps logon is crashdumps. Verify whether your
System Administrator has changed the password or is still using the default.
You can start and exit dul and dultape interactively from the command line.
To start dul interactively, do the following:
1
At the command prompt, type the following and press Enter:
dul
Dul prompts you for your logon:
Dump Unload/Load - Enter your logon:
2
Type:
.logon crashdumps
and press Enter.
The password for the crashdumps logon is crashdumps.
To start dultape interactively, do the following:
1
At the command prompt, type the following and press Enter:
dultape
Dultape prompts you for the tape drive path:
Dump Unload/Load to Tape - Please insert tape and input tape device
name:
2
Insert the tape, type the tape drive path, and press Enter. For example:
/dev/rmt/c0t3d0s0
Dultape prompts you for your logon:
Dump Unload/Load to Tape - Enter your logon:
3
Type:
.logon crashdumps
and press Enter.
The password for the crashdumps logon is crashdumps.
To log off the Teradata Database and exit dul or dultape, do the following:
✔ At the command prompt, type one of the following and press Enter:
• LOGOFF;
• END;
• QUIT;
Space Requirements
The following sections discuss the space requirements for the Teradata Database and hosts.
Crashdumps Database
The Crashdumps database is used to store system dumps. Because system dumps take up disk
space, you should periodically clear old or unwanted dump tables from the Crashdumps
database.
If the space available in the Crashdumps database is exceeded, a message is displayed on the
system console every hour until you make space available for the dump to be saved. To make
more room for new dumps, examine the existing dumps and delete the ones that you no
longer need.
If you still need the existing dumps, then copy them to removable media. Once the Teradata
Support Center has received and evaluated a dump, you can delete the tables.
Database Space Allocation
The proper size for the Crashdumps database depends on the following configuration
information:
• Number of nodes on the system
• Number of dumps you want to have available online at any one time (minimum of one dump)
The size of the dump for any given node depends on several unpredictable variables. To
calculate the approximate size of the Crashdumps database that you need for your system, use
the following formula:
number of DBS nodes x number of dumps x 100 MB
At least 100 MB of space should be allowed per dump, although Teradata recommends larger
multiples.
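For example, under this formula a 4-node system that keeps two dumps available online would allocate at least 4 x 2 x 100 MB = 800 MB to the Crashdumps database; the node and dump counts here are illustrative.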
Privileges
For Dump/Unload Operations
Before you can perform a dump/unload operation, your username specified in the logon ID
must have the following:
• CREATE, DROP, and SELECT privileges on the tables in the Crashdumps database
• SELECT privileges on the DBC.SW_EVENT_LOG system table
For Load Operations
Before you can perform a load operation, your username specified in the logon ID must have
CREATE, DROP, and SELECT privileges on the DataBase Computer (DBC) tables of the
Teradata Database.
If the username you specify when you invoke dul does not have the appropriate privileges,
then the Teradata Database returns an error message, and the operation is cancelled.
For detailed information on access privileges, see Database Administration or consult your
system administrator.
Saving Dumps to Removable Media
Saving MP-RAS Dumps
Only the first node in a cabinet has a tape drive.
Use the MP-RAS commands below to save the dumps to tape. See the man pages for
descriptions of the commands. Each site is different, so the exact commands, file names, and
device names will vary. To save the dump from one node to another that has an internal tape
drive, do the following:
1
After the node where the dump occurred has been recovered and is up and running, do a
dumpsave. For example:
dumpsave -o dumpfile1=size, dumpfile2=size, dumpfile3=size, dumpfile4
-O unixfile
where dumpfilen is the name of each dump file you want to save to disk. The amount of
system memory determines how many dump files are required to save the dump. The size
of each dump file is limited to 4K. A dump file without a size will copy the remaining
data into that file. unixfile is the file you want to contain the UNIX kernel. Sufficient disk
space should be available on the internal disk drives to save each section of the dumpfile
and kernel.
2
After the files have been written to disk, FTP each dumpfile and unixfile to the node with
the tape drive. If a system has the OpenSSH package installed for security, use the secure
commands of sftp (instead of ftp) or scp (instead of the remote copy command, rcp). Refer
to the UNIX Man pages for more information.
3
After you have FTPed the dumpfiles to the node with the tape drive, copy the files to tape:
dd if=<dumpfile> of=<devicename> bs=<block_size>
If necessary, perform step 3 again until all of the dump files are on the tape. Then
remember to use the “no rewind on open” and “no rewind on close” tape device options,
that is /dev/rmt/c100t6d0s0nn.
Use standard operating system commands to burn the dump file to removable optical media,
such as DVD.
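As an illustration of step 3, the command below copies a hypothetical dump file to a no-rewind tape device; the file name, device name, and block size are examples only and will differ at each site.

dd if=/var/dumps/dumpfile1 of=/dev/rmt/c100t6d0s0nn bs=64k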
Saving Linux Dumps
Use the Linux commands below to save a dump to removable media. Each site is different, so
the exact commands, file names, and device names will vary.
To save the dump from one node to another that has a tape drive, do the following:
1
After the node where the dump occurred has been recovered and is up and running, save
the dump to a disk file. For example:
csp -mode save -target stream
Note: For a description of the csp command, see the pdehelp for the command.
2
After the file is written to disk, FTP the dumpfile to the node with the tape drive.
3
After you have FTPed the dumpfile to the node with the tape drive, copy the file to tape:
dd if=<dumpfile> of=<devicename> bs=<block_size>
Note: For a description of the dd command, see the Linux man page for the command.
4
If necessary, perform step 3 again until all of the dump files are on the tape. Then
remember to use the “no rewind on open” and “no rewind on close” tape device options,
that is /dev/rmt/c100t6d0s0nn.
Use standard operating system commands to burn the dump file to removable optical media,
such as DVD.
Mailing Crash Dumps to the Teradata Support Center
To mail a crash dump that has been saved to removable media:
1
Label every cartridge or disk with the following:
• The incident number.
• The Teradata Database version number (for example, 13.00.00.00 or 13.01.00.00).
• Whether it is a Teradata Database dump or a UNIX system dump.
If applicable, write a volume number, such as 1 of 4, 2 of 4, and so on, on each cartridge or
disk.
2
Write the Incident number on the outside of the package.
3
Include your name or some other person to contact as part of your return business
address.
4
Address the package to the following address:
Teradata Corporation
Dump Administrator
Teradata Customer Support Engineering
Ref.: Incident #
17095 Via del Campo
San Diego, CA 92127
Note: The version, incident number, and volume number are necessary in order for the
Teradata Support Center tester to know which installation the dump is reporting and how to
load the crash dump properly.
Transferring Windows Dump Files
Removable media is rarely used for transferring dump files from Windows systems to the
Teradata Support Center. The center has a virtual private network (VPN) dump server, which
has a network-type connection to most major Windows sites. This connection allows dump
files to be transferred directly from those systems to Teradata. For multi-node systems, the
center might have a connection to the system AWS and to some of the nodes in that system. To
allow dump files to be transferred directly to Teradata:
1
Copy the dump file to the node to which the Teradata Database has access. This is usually
C:\Inetpub\ftproot directory or a directory below that on the AWS.
2
Map to a network where both nodes reside to copy the files over.
3
FTP the file from that node to the dump server. You can usually log in as Anonymous.
4
Copy the directory where you put the file and use the “get” or “mget” command to transfer
the file. You can enter the “hash” command, before starting the transfer, to get feedback
that the transfer is in progress.
If Teradata has no access to your node, the following procedure could be used for moving the
dump files to removable media:
1
If the removable media drive is not on the same node as the dump files, map drives to both
nodes and copy the dump files over to the node with the removable media drive.
2
Use a backup utility to copy it from its location to the removable media.
Restarting DUL
Restarting During a Load Operation
If your system fails during a dul load operation, dul must be restarted. You must resubmit the
LOAD command.
To restart dul during a load operation, type the following commands:

LOGON ZZ/Admin, abc ;
Logs user Admin onto the Teradata Database.

HELP DATABASE crashdumps ;
Lists any dump tables that might have been created during a load operation.

DROP crashdumps.crash_20000606_142500_01 ;
Ensures that any partially created tables are deleted prior to submitting a new load operation.

SELECT ERROR ;
Sets the selection criteria that determines the dump data that is loaded into tables on your Teradata Database. In this example, only processors that contain error codes are selected.

LOAD
Resubmits the load operation. For example:
LOAD CL200, FALLBACK FILE = FILEPATH;
For a detailed explanation of loading dump files into tables, see “LOAD” on page 476.
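Put together, a restart of this hypothetical load might be submitted as the following command sequence; the logon, dump table name, and file path are examples only.

LOGON ZZ/Admin,abc ;
HELP DATABASE crashdumps ;
DROP crashdumps.crash_20000606_142500_01 ;
SELECT ERROR ;
LOAD crashdumps.crash_20000606_142500_01, FALLBACK FILE = filepath ;
QUIT ;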
Restarting During an Unload Operation
If your system fails during an unload operation, dul must be restarted. You must resubmit the
UNLOAD command.
To restart dul during an unload operation, include the following commands:

LOGON ZZ/Admin, abc ;
Logs user Admin onto the Teradata Database.

SELECT ERROR ;
Sets the selection criteria that determines the dump data that is unloaded onto your host system. In this example, only processors that contain error codes are selected.

UNLOAD
Resubmits the unload operation. For example:
UNLOAD crashdumps.crash_20000606_142500_01 FILE=filepath;
For a detailed explanation, see “UNLOAD” on page 489.
Return Codes
Dul issues return codes to report processing success or failure:
• On Linux, dul and dultape support return codes via shell commands ($?).
• On MP-RAS, dul and dultape support return codes via shell commands ($?).
• On Windows, dul and dultape support return codes via normal batch commands (%errorlevel%).
A return code of “0” indicates that dul processing was successful; a nonzero return code
indicates that processing failed. In order of severity, dul return codes are 02, 04, 08, and 12.
These codes are defined as shown below:
• 02 - Special warning
• 04 - Warning
• 08 - User error
• 12 - Severe internal error
A 02 code is returned if you attempt a Teradata Database operation without logging onto the
system.
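For example, on Linux or MP-RAS a batch job could test the return code as sketched below; this assumes dul reads its commands from standard input in batch mode, and the script and file names are illustrative only.

dul < unload_dumps.dul
rc=$?
if [ $rc -ge 8 ]; then
    echo "dul failed with return code $rc"
fi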
The following table shows messages resulting in dul Return Code 04.
Return Code   Error Code   Description
04            3747         No startup string defined for this user.
04            3803         Table “%VSTR” already exists.
04            3804         View “%VSTR” already exists.
04            3805         Macro “%VSTR” already exists.
The following table shows messages resulting in dul Return Code 08.
Return Code   Error Code   Description
08            2538         A disk read error occurred in the tables area.
08            2541         End of Hash Code range reached.
08            2631         Transaction ABORTED due to %VSTR.
08            2632         All AMPs own sessions for this Fast/MultiLoad.
08            2639         Too many simultaneous transactions.
08            2641         %DBID.%TVMID was restructured. Resubmit.
08            2644         No more room in database %DBID.
08            2654         Operation not allowed: %DBID.%TVMID is being Restored.
08            2805         Maximum row length exceeded in %TVMID.
08            2809         Invalid recovery sequence detected.
08            2815         Apparent invalid restart of a restore.
08            2818         Invalid lock to dump table without after-image journaling.
08            2825         No record of the last request was found after Teradata Database restart.
08            2826         Request completed but all output was lost due to Teradata Database restart.
08            2827         Request was aborted by user or due to command error.
08            2828         Request was rolled back during system recovery.
08            2830         Unique secondary index must be dropped before restoring table.
08            2835         A unique index has been invalidated; resubmit request.
08            2837         Table being FastLoaded; no data dumped.
08            2838         Table is unhashed; no data dumped.
08            2840         Data rows discarded due to inconsistent hash codes.
08            2843         No more room in the database.
08            2866         Table was Recovery Aborted; no data dumped.
08            2868         This permanent journal table is damaged; no data dumped.
08            2920         Delete journal and AMP down without dual.
08            2921         No saved subtable for journal %DBID.%TVMID.
08            2926         No more room in %DBID.%TVMID.
08            3001         Session is already logged on.
08            3111         The dispatcher has timed out the transaction.
08            3116         Response buffer size is insufficient to hold one record.
08            3119         Continue request submitted but no response to return.
08            3120         The request is aborted because of a Teradata Database recovery.
08            3523         %FSTR does not have %VSTR access to %DBID.%TVMID.
08            3524         %FSTR does not have %VSTR access to database %DBID.
08            3566         Teradata Database does not have a PERMANENT journal.
08            3596         RESTORE Teradata Database invalid if table, view, or macro exists outside of Teradata Database.
08            3598         Concurrent change conflict on Teradata Database. Please try again.
08            3603         Concurrent change conflict on table. Please try again.
08            3613         Dump/Restore, no hashed nonfallback tables found.
08            3656         Journal table specified no longer exists.
08            3658         ROLLBACK/ROLLFORWARD table specifications are invalid.
08            3705         Teradata Database/SQL request is longer than the Simulator maximum.
08            3737         Name is longer than 30 characters.
08            3802         Database “%VSTR” does not exist.
08            3807         Table/view “%VSTR” does not exist.
The following table shows messages resulting in dul Return Code 12.
Return Code   Error Code   Description
12            CLI0001      Parameter list invalid or missing.
12            CLI0002      Invalid number of parameters received.
12            CLI0003      Error validating HSIRCB.
12            CLI0004      Error validating HSICB.
12            CLI0005      Error validating HSISPB.
12            CLI0006      Invalid destination HSICB detected.
12            CLI0007      Invalid destination RCB detected.
12            CLI0008      DBCFRC unable to free RCB/HSICB control blocks because they are not contiguous in storage.
12            CLI0009      Invalid DBCAREA pointer or id.
12            CLI0010      ECB already waiting.
12            CLI0530      Character Set Name or Code unknown.
12            2123         A segment could not be read successfully.
12            2971         The AMP Lock table has overflowed.
12            2972         No table header exists for table.
For details on error messages, see Messages.
DUL Commands
DUL Command Syntax
Dul and dultape commands must either begin with a period or be terminated with a
semicolon. They also can use both, as shown below:
.SHOW VERSIONS
SHOW VERSIONS ;
.SHOW VERSIONS ;
DUL Command Categories
Dul commands are divided into the following two categories:
• Session Control
• Data Handling
The following table summarizes the functions of each dul command.
Session Control commands (available on all operating systems):
• ABORT — Aborts a LOAD or UNLOAD command.
• DATABASE — Changes the default database.
• HELP — Displays information about dul commands and dump databases.
• LOGOFF, END, QUIT — Ends a Teradata Database session and exits the dul utility. The END and QUIT commands are synonyms of LOGOFF.
• LOGON — Begins a Teradata Database session.
• .OS — Submits a command to your host operating system.
• SHOW TAPE — Lists the files on the tape.
• SHOW VERSIONS — Displays dul software module release versions.

Data Handling commands (available on all operating systems):
• DROP — Removes a dump table from the Teradata Database.
• LOAD — Moves dump data from removable media to a Teradata Database system.
• SEE — Reports statistics about the contents of a dump.
• SELECT — Sets selection criteria.
• UNLOAD — Moves dump data from a table on the Teradata Database to a file on the host.
ABORT
Purpose
The ABORT command aborts a LOAD or UNLOAD request.
Syntax
ABORT
Usage Notes
• Under TSO, press the PA1 key twice before you can execute either ABORT or HX.
• Under CMS, use the CMS command HX to unconditionally terminate dul.
• Under Linux, press the Ctrl+C keys prior to typing the ABORT command.
• Under MP-RAS, press the Esc key prior to typing the ABORT command.
• Under Windows, press the Ctrl+C keys prior to typing the ABORT command.
DATABASE
Purpose
The DATABASE command changes the default database for the current Teradata Database
session.
Syntax
DATABASE database ;
where:
database — the name of the new default database.
Usage Notes
When you invoke dul and log onto the Teradata Database, the Crashdumps database
automatically becomes your default database. The Teradata Database uses the database
specified in the DATABASE command as the default database until the end of the session, or
until you type a subsequent DATABASE command.
To use the DATABASE command, you must have SELECT privileges on the specified database.
Example
To make the Personnel database the default database for the current session, type the
following:
DATABASE Personnel ;
The following appears:
*** Sending database Personnel to Teradata Database.
*** New default database accepted.
DROP
Purpose
The DROP command removes an existing table (created as a result of a system dump) and all
of its rows from the specified database on the Teradata Database.
Syntax
table
DROP
;
database.
1102B033
where:
database — name of the database in which the table resides. If the database is not specified, the currently set database is assumed. Use a period (.) to separate the database name from the table name.
table — name of the table to be dropped.
Usage Notes
The DROP command removes the specified table and any tables with the same name that end
with a _C, _L, or _M suffix.
In general, enter a DROP command before performing a load operation to remove any
existing tables that might have the same name as the table specified on the next LOAD
command.
To use the DROP command, you must have the DROP privilege on the specified table.
Example
Assume that Crash_20000407_1013_02, Crash_20000407_1013_02_C,
Crash_20000407_1013_02_L, and Crash_20000407_1013_02_M tables were produced as an
result of the previous load operation.
To drop all four tables from the Crashdumps database, type the following:
DROP Crash_20000407_1013_02;
The following appears:
*** Dropping table Crash_20000407_1013_02;
*** Table has been dropped.
*** Dropping table Crash_20000407_1013_02_1;
*** Failure 3807 Table/view 'Crash_20000407_1013_02_1' does not exist.
Statement# 1, Info =0
*** Dropping table Crash_20000407_1013_02_2;
*** Failure 3807 Table/view 'Crash_20000407_1013_02_2' does not exist.
Statement# 1, Info =0
*** Dropping table Crash_20000407_1013_02_C;
*** Table has been dropped.
*** Dropping table Crash_20000407_1013_02_C_1;
*** Failure 3807 Table/view 'Crash_20000407_1013_02_C_1' does not exist.
Statement# 1, Info =0
*** Dropping table Crash_20000407_1013_02_C_2;
*** Failure 3807 Table/view 'Crash_20000407_1013_02_C_2' does not exist.
Statement# 1, Info =0
*** Dropping table Crash_20000407_1013_02_L;
*** Table has been dropped.
*** Dropping table Crash_20000407_1013_02_L_1;
*** Failure 3807 Table/view 'Crash_20000407_1013_02_L_1' does not exist.
Statement# 1, Info =0
*** Dropping table Crash_20000407_1013_02_L_2;
*** Failure 3807 Table/view 'Crash_20000407_1013_02_L_2' does not exist.
Statement# 1, Info =0
*** Dropping table Crash_20000407_1013_02_M;
*** Table has been dropped.
*** Dropping table Crash_20000407_1013_02_M_1;
*** Failure 3807 Table/view 'Crash_20000407_1013_02_M_1' does not exist.
Statement# 1, Info =0
*** Dropping table Crash_20000407_1013_02_M_2;
*** Failure 3807 Table/view 'Crash_20000407_1013_02_M_2' does not exist.
Statement# 1, Info =0
*** Dropping table Crash_20000407_1013_02_S;
*** Failure 3807 Table/view 'Crash_20000407_1013_02_S' does not exist.
Statement# 1, Info =0
Dul displays additional messages if you abort the load operation, because dul tries to drop
tables used by the FastLoad utility even if they do not exist.
END
Purpose
The END command terminates a Teradata Database session and exits the dul utility.
Syntax
END ;
Usage Notes
The LOGOFF and QUIT commands are synonyms for the END command.
Example
To terminate a Teradata Database session and exit dul, type the following:
END ;
The following appears:
*** DUL Terminated
*** Highest return code = <n>
where n is the highest return code returned by dul.
HELP
Purpose
The HELP command returns syntax information about dul commands and lists the tables,
views, and macros stored in a database.
Syntax
HELP { DATABASE database | DUL } ;
where:
DATABASE database — returns a list of tables, views, and macros stored in the specified database.
DUL — returns a syntax summary of all the commands available with the dul utility.
Example
To list all tables, views, and macros on the Crashdumps database, type the following:
HELP DATABASE Crashdumps ;
The following appears:
*** Sending HELP DATABASE crashdumps; to Teradata Database.
*** Help information returned. 10 rows.
Crash_20000526_134503_01
Crash_20000526_154702_01
Crash_20000526_154925_01
Crash_20001120_103233_01
Crash_20001225_163434_01
If the database does not exist, the following appears:
help database non_exist;
*** Sending HELP DATABASE non_exist; to Teradata Database.
*** Failure 3802 Data base 'non_exist' does not exist.
Statement# 1, Info =0
LOAD
Purpose
The LOAD command moves dump data from files (dul) or tape (dultape) created by an
unload operation into tables on a Teradata Database.
Syntax
table
LOAD
database.
FILE = filepath
;
FALLBACK
1102B039
where:
database — name of the database in which the table resides. Use a period (.) to separate the database name from the table name.
table — name of the table to receive dump data. The table cannot already exist. However, the database in which the table will reside must already exist.
FALLBACK — option that creates a fallback copy of the table specified in the LOAD command as well as the tables with the same name ending in _L, _C, and _M.
FILE = filepath — the path of the file created in a previous unload operation. The filepath specification is required. On Windows, a drive specification optionally may be included as part of filepath. The filepath is specified as directory/filename. Note: Dul expects dump files to be in a compressed, gzip format. For compatibility, dul can load compressed files regardless of whether the file name includes the .gz extension.
Usage Notes
Before you can perform a load operation, your username specified in the logon ID must have
CREATE, DROP, and SELECT privileges on the DataBase Computer (DBC) tables of the
Teradata Database.
If the username you specify when you invoke dul does not have the appropriate privileges,
Teradata Database returns an error message, and the operation is cancelled.
The following table shows the tape drives that dultape supports.
• On Linux, dultape supports 4mm DAT tape drives.
• On MP-RAS, dultape supports:
  • 4mm DAT tape drives.
  • 8mm Exabyte tape drives.
  • Exabyte Mammoth 8900 tape drives.
  • Quarter-Inch Cartridge (QIC) tape drives.
• On Windows, dultape supports:
  • 4mm DAT tape drives.
  • Exabyte Mammoth 8900 tape drives.
Before you can perform a load operation, dul displays summary information, such as which
processors are selected, error dates, and so forth, about the selection criteria that is set. After
the load operation, dul displays event codes, if any exist, for the specified processors. For
information on setting selection criteria, see “SELECT” on page 485.
As a general rule, type a DROP command before performing a load operation. This removes
any existing tables that might have the same name as the name specified on the current LOAD
command. For more information, see “DROP” on page 472.
Note: Dul uses the FastLoad utility to improve transfer speed. However, you can still transfer
dump files on your host to a Teradata Database using BTEQ.
In most instances, a load operation is not performed at a customer site.
For detailed information on privileges, see Database Administration or consult your system
administrator.
For more information about the FastLoad utility, see Teradata FastLoad Reference.
For information about BTEQ, see Basic Teradata Query Reference.
Example 1
To check whether tables with the same name exist in the Crashdumps database, type the
following:
HELP DATABASE crashdumps ;
The following list of all the tables in the Crashdumps database appears:
*** Sending HELP DATABASE crashdumps to Teradata Database.
*** Help information returned. 4 rows.
CL100
CL100_C
CL100_L
CL100_M
Since no tables have the same name, you can type the LOAD command next. If tables with the
same name already exist, verify that they are not needed and then use the DROP command to
delete them from the Crashdumps database.
Utilities
477
Chapter 13: Dump Unload/Load Utility (dul, dultape)
LOAD
Example 2
To transfer dump information from tape or disk files to tables on Teradata Database, type the
following:
load crashdumptable1 file=TPFILE;
The following appears:
load crashdumptable1 file=TPFILE;
*** Opening tape file TPFILE
*** Creating table 'crashdumptable1'.
*** Table has been created.
*** Loading data into 'crashdumptable1' .
*** Logging on Amp sessions.
*** Growing Buffer to 4114
*** Starting Row 100 at Wed Jan 16 15:54:46 2002
*** Starting Row 200 at Wed Jan 16 15:54:46 2002
*** Starting Row 300 at Wed Jan 16 15:54:46 2002
.
.
.
*** Starting Row 377600 at Wed Jan 16 16:07:10 2002
*** Starting Row 377700 at Wed Jan 16 16:07:10 2002
*** Starting Row 377800 at Wed Jan 16 16:07:10 2002
*** Starting Row 377900 at Wed Jan 16 16:07:10 2002
*** END LOADING phase...Please stand by...
Loading data into crashdumptable1 completes successfully.
*** Opening tape file TPFILE.log
*** End of Tape
*** Closing Tape file (and rewinding tape)
Dump Unload/Load to Tape - Enter your command:
LOGOFF
Purpose
The LOGOFF command exits the dul utility and terminates the Teradata Database session.
Syntax
LOGOFF ;
Usage Notes
The END and QUIT commands are synonyms for the LOGOFF command.
Example
To terminate a Teradata Database session and exit the dul utility, type the following command:
LOGOFF ;
The following appears:
*** DUL Terminated
*** Highest return code = <n>
where n is the highest return code returned by dul.
LOGON
Purpose
The LOGON command establishes a Teradata Database session.
Syntax
Batch Mode
.LOGON [tdpid/]username,password[,'acctid'] ;
Interactive Mode
.LOGON [tdpid/]username ;
where:
tdpid — identifier which is associated with a particular Teradata Database. The default identifier is TDP0.
username — ID of the user on the corresponding Teradata Database. The maximum length of username is 30 characters.
password — password associated with the username. The maximum length of a password is 30 characters. Note: In interactive mode, the password is accepted on the next line in a protected area.
acctid — account identifier associated with the username. The acctid can contain up to 30 characters. Each doubled apostrophe, if any, counts as one character of the 30 characters.
Note: This describes the standard TD 2 (Teradata authentication) logon format. For more
information about other logon formats and types of authentication see Security
Administration.
Usage Notes
The LOGON command is the first command you type in a dul session. The command
establishes a session on a Teradata Database and identifies you and the account that is charged
for system resources used during a dul operation. To ensure system security, the password is
not displayed when dul is used interactively.
You must have CREATE, DROP, and SELECT privileges on DBC.SW_EVENT_LOG for
Teradata Database, as well as on the dump table on the Teradata Database, to perform a load
or unload operation.
After you log onto the Teradata Database, dul changes the default database to Crashdumps.
The Crashdumps database is where the Teradata Database initially saves a dump. However,
you can change the default database with the DATABASE command.
For additional information on granting privileges, see Database Administration. For more
information on the Crashdumps database, see “DATABASE” on page 471
Example 1 - Batch Mode
To log onto a Teradata Database with a tdpid of TDP0, a username of Admin, and a password
of abc, type the following:
LOGON TDP0/Admin,abc ;
The following appears:
*** Logon successfully completed.
*** Changing current database to crashdumps.
*** New default database accepted.
Dump Unload/Load - Enter your command:
Example 2 - Interactive Mode
To log onto a Teradata Database with a tpid of crashdumps, a username of crashdumps, and a
password of crashdumps, type the following:
1
Type your logon and press Enter. For example:
.logon crashdumps
Your logon is repeated, and you are prompted for your password:
.logon crashdumps
Password:
2
Type your password and press Enter. For example:
crashdumps
Your password will not appear on screen.
The following appears:
*** Logon successfully completed.
*** Transaction Semantics are BTET.
*** Character Set Name is 'ASCII'.
*** Changing current database to crashdumps.
*** New default database accepted.
Dump Unload/Load to Tape - Enter your command:
.OS
Purpose
The .OS command submits an operating system command to your host during a dul session.
Syntax
.OS oscommand ;
where:
oscommand — a command that is legal on your host operating system.
Usage Notes
At the prompt below, type one of the following, depending on your system:
Dump Unload/Load to Tape - Enter your command:
On Linux, type:
.os date
The following appears:
.os date
Mon Aug 5 13:00:24 EDT 2002

On MP-RAS, type:
.os date
The following appears:
.os date
Mon Aug 5 13:00:24 EDT 2002

On Windows, type:
.os date /T
The following appears:
.os date /T
Mon 08/05/2002
QUIT
Purpose
The QUIT command exits the dul utility and terminates the Teradata Database session.
Syntax
QUIT ;
Usage Notes
The LOGOFF and END commands are synonyms for the QUIT command.
Example
To terminate a Teradata Database session and exit the dul utility, type the following command:
QUIT ;
The following appears:
*** DUL Terminated
*** Highest return code = <n>
where n is the highest return code returned by dul.
SEE
Purpose
The SEE command provides a summary of the processors and error codes captured in a
dump. The information is retrieved from the dump table.
Syntax
SEE [database.]table ;
where:
database — the name of the database in which the table resides.
table — the name of the table that contains dump information.
Usage Notes
The SEE command returns information that helps you confirm system failure information
forwarded to the Teradata Support Center. Information about tables with the _L, _C, and _M
suffixes is not returned.
Example
To access a specific dump table, type the following:
see crashdumps.Crash_20000412_194517_02;
The following appears:
*** Looking at crashdumps.Crash_20000412_194517_02.
*** Query completed. One row found. 7 columns returned.
the node number is 1024
the instigating node is 1024
the time the error occurred is Wed April 12 19:45:17 2000
the event is 12140, severity is 40 and category 10
Severity = UserError
Category = User
SELECT
Purpose
The SELECT command sets criteria determining the dump data selected for a load or unload
operation.
Syntax
SELECT { ALL | ERROR | RESET | PROC NOD-NUM | ERRORDATE 'yymmdd' | ERRORDATE 'yyyymmdd' } ;
where:
Syntax element …
Is the …
ERROR
selection of only those processors that have an error code for the load or unload operation.
RESET
same as ERROR.
PROC NOD-NUM
selection of a single processor for the load or unload operation. NOD-NUM specifies the number of the virtual processor to be selected. Dump data will be loaded or unloaded only from this node. Valid values for NOD-NUM are 16384 and greater.
ALL
selection of dump data from all processors for the load or unload operation. This is the default.
ERRORDATE ‘yymmdd’ or ‘yyyymmdd’
error date to be selected from the DBC.SW_EVENT_LOG table. The ‘yymmdd’ or ‘yyyymmdd’ represents the year, month, and day, respectively. Dul retrieves all error records with a date equal to or later than the yymmdd or yyyymmdd specification.
Note: You can type the date in either single or double quotes.
Usage Notes
You must always type the SELECT command before you type an UNLOAD command. If you
do not specify a SELECT command, the entire dump is selected.
Negative processor numbers are used for special record types. When RESET is specified, dul
and dultape retrieve dump data with negative processor numbers.
Example
To select dump data, type the following:
select proc 16384;
The following appears:
*** Processor selection set to list of processors.
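Selection by error date follows the same pattern. The following sketch, using an arbitrary date and omitting the response that dul returns, selects error records dated on or after April 12, 2000:
select errordate '20000412';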
SHOW TAPE
Purpose
The SHOW TAPE command allows you to retrieve a list of files residing on a tape.
Syntax
SHOW TAPE ;
Example
To display the files on a tape, type the following:
SHOW TAPE ;
The following appears:
The files on the tape:
filename is TPFILE volume 1
***End of tape
***Closing Tape file (and rewinding tape)
SHOW VERSIONS
Purpose
The SHOW VERSIONS command displays the current level of all dul utility software
modules.
Syntax
{ SHOW VERSIONS | SHOW VERSION } ;
Usage Notes
The SHOW VERSIONS command is helpful in reporting software problems.
Example
MP-RAS:
SHOW VERSIONS ;
Dump Unload/Load Version 05.00.00 for UNIX 5.4 running Streams TCP/IP
DULMain   : 05.00.00.03
CapAAUtl  : H3_04
CapCLUtl  : H5_04
CapIOUtl  : H5_06
CapERUtl  : H5_01
OSIDEP    : 04.06.01.04
CLIV2     : 04.06.01.13
MTDP      : 04.06.01.08
OSENCRYPT : N/A
OSERR     : 04.06.01.00
MOSIos    : 04.06.01.05
UNLOAD
Purpose
The UNLOAD command moves dump data from a system-generated table on the Teradata
Database to a file on a host or directly to removable media.
Syntax
UNLOAD [database.]table [FILE = filepath] [f] ;
where:
Syntax element …
Is …
database
the name of the database in which the table resides and the separator between
the database name and table name. Use a period (.) to separate the database
name from the table name.
The default database is Crashdumps.
table
the name of the table that contains the dump data.
FILE = filepath
the path of the file into which the dump data is unloaded. filepath is specified as directory/filename.
Note: Dul saves files in a compressed, gzip format, and appends a file
extension of .gz to the file name automatically, if the specified name does not
include .gz.
f
an option for unloading dumps from a foreign Teradata Database.
Usage Notes
Before you can perform a dump/unload operation, your username specified in the logon ID
must have the following:
•
CREATE, DROP, and SELECT privileges on the tables in the Crashdumps database
•
SELECT privileges on the DBC.SW_EVENT_LOG system table
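For example, an administrator could grant these privileges with SQL statements along the following lines; the username dumpadmin is illustrative, and the authoritative procedure is described in Database Administration:
GRANT CREATE TABLE, DROP TABLE, SELECT ON Crashdumps TO dumpadmin;
GRANT SELECT ON DBC.SW_EVENT_LOG TO dumpadmin;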
The UNLOAD command selects dump data according to the selection criteria specified in a
previous SELECT command. If you do not specify a SELECT command, the entire dump
table is searched. You should always type a SELECT command before an UNLOAD command.
Before you perform an unload operation, dul displays summary information, such as which
processors are selected, error dates, and so forth, about the selection criteria that is set. After
the load operation, dul displays event codes, if any exist, for the specified processors. For
information on setting selection criteria, see “SELECT” on page 485.
In moving dump data, the _C, _L, and _M files are always included. If the _C or _L table is not
found, it is generated.
If you load the unloaded data back into some other Teradata Database system, the table
created by the LOAD command is called dump data in a foreign Teradata Database and will be
one of the following tables:
•
tname_C
•
tname_L
•
tname_M
When you enter the UNLOAD command without the F option, dul or dultape unloads the
data specified in tname from the DBC.SW_EVENT_LOG table.
The F option is required to unload the dump data from the foreign Teradata Database. Dul or
dultape will unload data from the table specified in the UNLOAD command and the
corresponding tname_C, tname_L, and tname_M tables.
Dul uses the FastExport utility to improve transfer speed for unloading dumps. However, you
can still unload dumps using BTEQ.
To use the UNLOAD command, you must have CREATE, DROP, and SELECT privileges on
the following:
•
Dump table (Crashdumps.Crash_YYYYMMDD_HHMMSS_NN)
•
System table DBC.SW_EVENT_LOG
For more information about the FastExport utility, see Teradata FastExport Reference.
For information about BTEQ, see Basic Teradata Query Reference.
For additional information on access privileges, see Database Administration.
Example 1
The following example assumes that a system failure occurred on 6/6/00 at 2:36 p.m. To
confirm that a dump table has been created, you would examine the Crashdumps database
using the following command:
HELP DATABASE Crashdumps ;
Since dump tables are named according to the date and time that the system failure occurred,
you should be able to find the correct dump table. In this example, the table named
crash_20000606_143623_01 contains the dump information.
To display the contents of the dump table, use the SEE command:
SEE crash_20000606_143623_01 ;
The SEE command displays summary information about all of the processors and error codes
that were captured in the dump. Some of the processors might not contain any information.
Generally, only processors that contain errors are needed for an unload operation.
By typing a SELECT command next, you can choose the processors for the unload operation.
To select only the processors that contain error codes, type the following command:
SELECT ERROR ;
Dul responds with this message:
*** Processor selection set to list of processors.
Example 2
Now you are ready to unload the dump data from table crash_20000606_143623_01 on the
Crashdumps database onto your host using the following command:
UNLOAD crash_20000606_143623_01 file=filepath;
Dul responds with these messages:
*** Unloading data from crash_20000606_143623_01 for processor(s) 1-6.
*** Query completed. 263 rows found. 3 columns returned.
*** Processor 1-6
*** Number of rows = 263
*** Unloading Procedure Information
*** Query completed. 200 rows found. 4 columns returned.
*** Number of rows = 200
*** Unloading Errorlog Information
*** Query completed. 413 rows found. 6 columns returned.
*** Number of rows = 413
*** Unloading Memo Information
*** Query completed. 123 rows found. 5 columns returned.
*** Number of rows = 123
Event = 2490:
On 6/6/96 at 09:26:27 in processor 1-6, partition 14,
task SEMTSK.
Severity = UserError
Category = User
HostEvent = None
*** Number of rows = 263
Dul creates two files when accessing Teradata Database on the tape. The tape can then be sent
to the Teradata Support Center for analysis.
Note: If the dump data is copied to the local hard disk on your host, you must copy the data to
a removable medium for shipment.
CHAPTER 14
Ferret Utility (ferret)
The Ferret utility, ferret, lets you display and set storage space utilization attributes of the
Teradata Database. Ferret dynamically reconfigures the data in the Teradata file system while
maintaining data integrity during changes. Ferret works on various data levels as necessary:
vproc, table, subtable, WAL log, disk, and cylinder.
Audience
Users of Ferret include the following:
•
Field engineers
•
Teradata Database system engineers
•
Teradata Database developers
Users should have an understanding of the basic storage structures of the Teradata Database
file system.
User Interfaces
Ferret runs on the following platforms and interfaces:
Platform
Interfaces
MP-RAS
Database Window
Windows
Database Window
Linux
Database Window
For general information on starting the utilities from different platforms and interfaces, see
Appendix B: “Starting the Utilities.”
Note: You must use Ferret while the Teradata Database is running and in the Logons Enabled
state and not in Debug-Stop under the System Debugger (GDB).
Redirecting Input and Output (INPUT and OUTPUT)
When you first start Ferret, it accepts input from STDIN, the console input file, and directs
output to STDOUT, the console output file.
By using the Ferret INPUT and OUTPUT commands, you can redirect the Ferret input and
output data to files that you specify. Also, you can do the following:
•
Append Ferret data output to an existing output file
•
Overwrite an existing output file
•
Write only to a new file
•
Display the current output file
When you direct output to a file with the OUTPUT command, Ferret echoes all input and
diagnostic messages to that file. For more information, see “INPUT” on page 517 and
“OUTPUT” on page 518.
The Teradata Database File System
The Teradata Database file system is not a general-purpose file system. It helps isolate the
Teradata Database from hardware platform dependencies.
The file system supports the creation and maintenance of database tables under the direction
of the Teradata Database.
The file system provides a set of low-level operations and is largely unaware of the higher-level
intent of a particular sequence of calls made by the Teradata Database to file system
procedures.
Data Blocks and Cylinders
In the Teradata Database file system, a data block (also known as a DB) is a disk-resident
structure that contains one or more rows from the same table.
Any single row is fully contained within a single data block, and every data block must be fully
contained within a cylinder.
Because new disks use different zones of cylinder sizes to increase the storage capacity of the
disk, the file system uses an average cylinder size. Each cylinder used by the file system is
logically independent from other cylinders.
For more information about data blocks and cylinders, see Database Design.
Performance Considerations
Depending on how the Ferret utility is used, certain Ferret commands can alter the
performance behavior of the Teradata Database.
For example, if the cylinders are packed 100% full, inserts take longer than if the cylinders are
50% full.
About Write Ahead Logging (WAL)
WAL is a log-based file system recovery scheme in which modifications to permanent data are
written to a log file, the WAL log. The log file contains change records (Redo records) which
represent the updates. At key moments, such as transaction commit, the WAL log is forced to
disk. In the case of a reset or crash, Redo records can be used to transform the old copy of a
permanent data block on disk into the version that existed in memory at the time of the reset.
By maintaining the WAL log, the permanent data blocks that were modified no longer have to
be written to disk as each block is modified. Only the Redo records in the WAL log must be
written to disk. This allows a write cache of permanent data blocks to be maintained.
WAL protects all permanent tables and all system tables but is not used to protect the
Transient Journal (TJ), since TJ records are stored in the WAL log. WAL also is not used to
protect spool or global temporary tables.
The WAL log is maintained as a separate logical file system from the normal table area. Whole
cylinders are allocated to the WAL log, and it has its own index structure.
The WAL log data is a sequence of WAL log records and includes the following:
• Redo records, used for updating disk blocks and ensuring file system consistency during restarts.
• TJ records, used for transaction rollback.
Ferret Command Syntax
This section describes the general conventions involved in typing Ferret commands. Then,
each Ferret command is described, with syntax options, parameters, and keywords. Usage
notes and examples are provided if appropriate to the command.
Entering Commands
The following is the general form for a command entry. You can type cmdoption or /dispopt at
the beginning or end of the command. The syntax diagram only shows the option at the
beginning.
cmd [cmdoption] [parameter] [/dispopt] [/Y] ;
where:
Syntax element …
Specifies …
cmd
the command or commands separated by a semicolon (;).
cmdoption
options that are specific to the command that you type.
Different options pertain to specific commands. To determine the allowed
options, see the specific command.
/dispopt
one of the following display options:
• /S - short display of information.
• /M - medium display of information.
• /L - long display of information for the objects within the scope.
This option is available only for the SHOWBLOCKS command. For detailed information, see “SHOWBLOCKS” on page 550.
/Y
the following:
• If the option is set to /Y, Ferret does not prompt for confirmation of the command.
• If the /Y confirmation setting is omitted for a command where confirmation is required, Ferret prompts for confirmation of the command before it executes the final step.
This confirmation option is not available for all commands. Only specific commands allow you to use it. If a command takes /Y, the command does not take any other options.
parameter
Parameters are included only with specific commands. Types of parameters include multitoken parameters and decimal or hexadecimal numeric input.
Command Usage Rules
Although several command options are specific to a particular Ferret command, the following
command usage rules apply to Ferret commands:
•	A space is required between the following:
	•	The cmdoption and the parameter
	•	The cmd and the parameter, if you do not specify cmdoption or if the cmdoption is at the end of the command
•	You can combine multiple Ferret commands on a single command line, separating the commands with semicolons (command ; command ; ...); a sketch appears after this list.
•
Ferret is case insensitive, which means that you can enter any command, keyword, or
option in uppercase, lowercase, or mixed-case letters.
•
If you end a command line with a backward slash (\), the command line continues on the
next line.
•
If an error occurs during processing of a string of commands, processing stops because the
latter commands usually depend on the correct processing of commands entered earlier.
•
If an error occurs while parsing the command line, you can type a single question mark (?)
as the next command. Ferret displays a list of alternatives expected by the parser at that
point of the syntax error. If you type another single question mark (?) after Ferret displays
the alternatives, Ferret returns the HELP text for the command in question. For example:
Ferret ==>radix in dex
radix in de<-Syntax error->x
Ferret ==> ?
Valid input at the point of error is:
;
end of command
Ferret ==> ?
RADIX [ ( IN/PUT | OUT/PUT ) ] [ ( H/EX | D/EC ) ]
Ferret ==> ?
RADIX [ ( IN/PUT | OUT/PUT ) ] [ ( H/EX | D/EC ) ]
Sets the Flags for how to treat Unqualified numbers.
Either Hex
(base 16) or Decimal (Base 10),
respectively. See HELP NUMBER for a
description of
unqualified INPUT. The initial setting of these
Flags
is HEX. If neither INPUT nor OUTPUT is specified the
command applies to both Flags. If neither HEX nor DEC
is
specified, the current setting of the Flag is displayed.
Ferret ==> ?
No more information available. Use HELP /L
•	Comments are allowed anywhere a blank is allowed. Enclose comments between braces { }. If Ferret does not find a closing brace ( } ) on a line, Ferret interprets the rest of the line as a comment. For example:
Ferret ==> SCANDISK db 0 2 1fa { this is the broken data block
•
If you use a single question mark (?) in place of a legal syntactic element, Ferret informs
you as to what you can type at that point in the command. For example:
Ferret==> output ?
Valid input at the ? is :
,
INTO
OVER
end of command
TO
ON
;
Other examples of using a single question mark (?) in place of a legal syntactic element are
shown below:
output ?{comment}
output {comment} ?
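As noted in the usage rules above, multiple commands can be combined on one line. The following is a hypothetical line that chains two of the commands described later in this chapter; the responses Ferret prints are omitted:
Ferret ==> radix input dec; date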
Using Ferret Parameters
The variable, parameter, in the Ferret syntax diagram includes various types of parameters,
including multitoken parameters and decimal and hexadecimal numeric input.
Multitoken Parameters
Multitoken parameters, such as subtable identifiers, which are typed as three values, are typed
on a single line with one or more spaces separating the individual tokens. Ferret also accepts
multitoken parameters separated by hyphens.
For example, Ferret accepts either of the following as subtable identifiers:
0 200 0
0-200-0
You can separate parameters from each other and from the command by spaces or a comma.
To specify a command option (cmdoption), type it on the same line as the command (cmd).
Numeric Input
Numeric values can be entered into Ferret in either decimal or hexadecimal format. The
default numeric base for data input to and output from Ferret depends on the radix settings
for input and output:
•	When the radix for input is decimal, Ferret interprets numeric input as decimal numbers. For example, a number entered as 45 will be interpreted as the value forty-five.
•	When the radix for input is hexadecimal, Ferret interprets numeric input as hexadecimal numbers. For example, a number entered as 45 will be interpreted as the value sixty-nine.
The initial radix setting in Ferret is hexadecimal for input and output. For more information
on setting the radix, see “RADIX” on page 527.
The following special numeric formatting conventions can be used to force Ferret to interpret
entered numeric values as decimal or hexadecimal, regardless of the radix setting:
•
Decimal values can be signified by adding a 0i or i prefix, or by adding a decimal point
suffix:
0i45 45. 0I45
•
Hexadecimal values can be signified by adding a 0x or x prefix, or by adding an h
suffix. Leading zeros are optional:
0x2D X2D 2Dh 002DH
The valid range of numeric values in Ferret are unsigned 16-bit values, 0 through 65535 (0x0
through 0xFFFF), except when patching using /L or /W for 32-bit integers, in which case the
valid range is 0 through 4,294,967,295 (0x0 through 0xFFFFFFFF). For more information, see
“Specifying a Subtable Identifier (tid)” on page 499.
You must separate two numbers on the same line from each other by a space or a comma.
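To illustrate, the following entries, assuming the initial hexadecimal input radix, show how the same digits are interpreted according to the conventions above:
45      unqualified, interpreted as hexadecimal 0x45 (decimal 69)
0i45    forced to decimal 45
2Dh     forced to hexadecimal 0x2D (decimal 45)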
Specifying a Subtable Identifier (tid)
A subtable identifier (tid) uniquely identifies a subtable. A subtable is a collection of rows. The
rows of a particular subtable may be data rows, index rows, table header rows, etc. A table is a
collection of subtables. Ferret commands operate on subtables, not on tables. The tid
argument is used to identify a subtable to a Ferret command.
The tid must identify not only the table of interest, but also the subtable of interest. A table is
identified in the data dictionary by a table number (tvm.tvmid). Each table number is unique
across the whole system, rather than local to a database. To identify a subtable, you must
supply a tid, which is composed of a tablenumber (or TableNameString) and a typeandindex
value. The tablenumber or TableNameString identifies the relevant table, and the typeandindex
value identifies the relevant subtable.
The subtable identifier, tid, is defined as follows:
{ TableNameString | tablenumber | = } typeandindex
where:
Syntax element …
Specifies …
TableNameString
the database name and table name, which uniquely identifies a table in the system.
A period (.) is required to separate databasename from tablename, and the string must be entered
using either single ( ' ) or double ( " ) quotes.
"databasename.tablename"
"databasename"."tablename"
'databasename.tablename'
'databasename'.'tablename'
1102A160
tablenumber
a pair of numbers, separated by a space, used to uniquely identify a table in the system.
=
Specifies that Ferret should use the most recently saved value for tablenumber
This option cannot be used unless an input value has already been defined for tablenumber.
typeandindex
a kind of subtable, such as a table header, data subtable, or a particular index subtable.
Note: typeandindex is a required component of tid. For example, you must use Scope Table ("xyz.empx" *) instead of Scope Table ("xyz.empx").
The syntax for typeandindex is as follows:
{ type [ /index [ /variant ] ] | number }
where:
Syntax Element
Description
type
The type of subtable. type can be one of the following:
• * - All the subtables of this table.
• H - The table header subtable.
• P - The primary data subtable.
• F number - The fallback data subtable specified by number. The default is 1.
• F* - All of the fallback subtables.
index
The index subtable to examine. If the index subtable is not given, it defaults to the data subtable. Index 1 is the first secondary index, Index 2 is the second secondary index, and so on. index can be one of the following:
• number - A secondary index; can be used instead of specifying X number. If you enter number by itself, then number must be a multiple of 4 and is interpreted in the following way:
  number = 0 is the same as entering X0
  number = 4 is the same as entering X1
  number = 8 is the same as entering X2
  number = 12 is the same as entering X3
  etc.
• * - All the indexes of the table.
• D - The primary data index (same as X0 or 0).
• X number - The secondary index specified by number. The default is 1.
• X* - All the secondary indexes, starting at 0.
variant
The possible subtables. variant can be one of the following:
• * - All the possible variant subtables.
• 0 - The default. If you do not specify variant, 0 is assumed.
• 1 - The value during a Sort or Table Modify operation.
• 2 - A value that is not used.
• 3 - A value that is not used.
Note: If you do not specify /variant, it defaults to 0 because this field is always 0. However, during Sort or Table Modify processing, this value becomes 1.
number
A single number that internally represents the type of subtable (header, primary, or fallback), the index to use to order the rows by (primary data index or one of the secondary indexes), and the variant. Any number can be used instead of type[/index[/variant]].
Since the table header subtable has only one row and has no secondary indexes or work subtables such as H/X4/3, Ferret ignores the second part of this format and gives a table header display.
The following table gives examples that describe the type and index fields.
Subtable description: number and type/index/variant specification
• Table header: 0; H
• Primary data subtable: 1024 (0x0400); P (P/D)
• First secondary index: 1028 (0x0404); P/X1
• First fallback table: 2048 (0x0800); F1 (F1/D)
• Second secondary index of the third fallback table: 4104 (0x1008); F3/X2
• All primary subtables: P/*
• All primary secondary indexes: P/X*
• All fallback subtables: F*
• All subtables of this table: *
• Sort table of the first secondary index: 1029 (0x0405); P/X1/1
• Both tables during a sort of the first secondary index: P/X1/*
Example
Assume that table T4 is a table in database XYZ and has a tablenumber of 0 1198. Also assume
that Ferret is currently set so that input is accepted in hexadecimal format.
Some valid specifications of a tid for primary subtables of table T4 are as follows:
•
“XYZ.T4” 400
•
“XYZ.T4” 1024.
•
“XYZ”.”T4” P
•
‘XYZ’.‘T4’ 400 h
•
0 1198 400
•
0 1198 P
Classes of Tables
The Teradata Database differentiates among the following four classes of tables. You can
specify a table classification when defining the SCOPE parameters of an action.
Table Type
Description
Permanent Journal (PJ) Tables
Tables that can survive system restarts, but do not contain user-visible data. Each PJ table contains data generated internally by the Teradata Database. The PJ data is usually used to restore the journaled tables to a given checkpointed state by rolling transactions forward or backwards from an archived copy of the tables.
Permanent Tables
Tables containing the real data, which can survive system restarts.
Temporary Tables
Tables that can exist as either global or volatile temporary tables as defined below:
• Global temporary tables exist only during the duration of the SQL session in which they are used.
• Volatile temporary tables reside in memory and do not survive a system restart.
Spool Tables
Tables that contain non-permanent data and can be divided into classes according to their scope of persistence. Intermediate result spool tables hold temporary results during the processing of a single SQL query and persist only for the duration of that processing. Response spool tables hold the final answer set from a query and a limited number can optionally persist across further queries in the same session. Spool tables can be discarded as follows:
• Normally, when they are no longer needed.
• As part of a specific resource cleanup on a transaction abort or session logoff.
• As part of a general resource cleanup every time the system restarts.
• Rows for volatile tables are placed in spool spaces and are discarded at the end of a transaction or at the end of a session (depending on a table option or by a DROP TABLE statement).
• Volatile table definitions reside in memory and do not survive a system restart.
The attributes associated with each class of tables can affect system performance, since the
attributes are set individually, and each class of tables is used for a different purpose.
For example, you might want to pack only Permanent and PJ tables and leave Spool tables
unmodified. Therefore, you would specify these tables when defining the SCOPE of the
PACKDISK command.
Vproc Numbers
In Ferret, the vproc number (vproc_number) is used in the SCOPE command to specify one
AMP or a range of AMPs for which the utility performs an action, such as reconfiguration or
disk space display.
Valid AMP vprocs have numbers in the range of 0 through 8192 (0x0 through 0x2000).
Ferret Commands
Ferret gives you a wide range of commands that allow you to display specific information
about the Teradata Database system, and optimize the system to improve performance. Use
Ferret commands to:
•
Define the level or scope of a command, such as a range of tables, vprocs, cylinders, or the
WAL log.
•
Display the parameters and scope of a previous action.
•
Perform an action to increase performance, such as moving data to reconfigure data
blocks and cylinders.
•
Display utilization of storage space by percentage and cylinders.
Defining Command Parameters Using SCOPE
Some Ferret commands require that you first define the parameters, or scope, of the action
you want to initiate.
The SCOPE command allows you to limit the command action in the following areas:
SCOPE Parameter
Variables
VPROCS
• A single vproc
• A range of vprocs
• All vprocs in a configuration
CYLINDERS
• A single cylinder
• A set of cylinders
TABLES
• A single table
• A set of tables
• All tables in the system
• A class of tables
WAL
The WAL log
For more information, see “SCOPE” on page 539.
Summary of Ferret Commands
The following table summarizes the Ferret commands, their valid scopes, and gives a brief
description of the function of each command:
Command
Valid SCOPE Options
Function
DATE/TIME
None
Displays the current system day, date, and time.
DEFRAGMENT
Vprocs or tables
Combines free sectors, and moves them to the end of a cylinder.
DISABLE
None
Sets a specific flag in the file system to FALSE, disabling certain
features of Ferret. Most such flags are for internal use only.
ENABLE
None
Sets a specific flag in the file system to TRUE, enabling certain
features of Ferret. Most such flags are for internal use only.
ERRORS
None
Redirects diagnostic messages to a file that you specify or to the
default file STDERR.
Using the ERRORS command, you can append an existing
message file, overwrite an existing message file, write only to a
new file, or display the current diagnostic message file.
HELP
None
Provides general help for Ferret or detailed help if you specify an
option or parameter.
INPUT
None
Informs Ferret to read commands from a specified file rather
than from the default input file STDIN.
OUTPUT
None
Redirects Ferret output to a file you specify or to the default file
STDOUT.
PACKDISK
Vprocs or tables
Reconfigures cylinders within a defined scope.
PRIORITY
None
Sets the priority class of the Ferret process.
QUIT
None
Ends a Ferret session.
RADIX
None
Sets the default radix used as the numeric base for Ferret data
input and output.
SCANDISK
WAL log, vprocs, or tables
Performs a verification of the file system B-Tree structure.
SHOWBLOCKS
WAL log, vprocs, or tables
Displays data block size and the number of rows per data block,
or the data block size only for a defined scope.
SHOWDEFAULTS
None
Displays the current default radix for input and output, the
current input, output, and error file names, and the current scope
settings.
SHOWFSP
Vprocs or tables
Displays table names and space utilization for those tables that
would free or consume some number of cylinders if PACKDISK
is executed at a particular FSP. The scope can include one or more
tables, one or more vprocs, or the entire system.
SHOWSPACE
WAL log, vprocs, or tables
Displays storage space utilization for permanent, spool, WAL log,
temporary, and journal data, including the amount of free space
remaining.
SHOWWHERE
Vprocs, tables, or WAL log
Displays information about cylinder allocation, grade, and
temperature for cylinders in the currently set scope.
TABLEID
None
Displays the table number of the specified table when given the
database name and table name.
UPDATE DATA
INTEGRITY FOR
None
Updates the data integrity checksum level of a table type.
Ferret Error Messages
All Ferret error messages are directed by default to your system console screen.
Ferret and file system error messages can be redirected through use of the ERRORS command.
For more information on file system messages, see Messages.
DATE/TIME
Purpose
The DATE or TIME command shows the current system day, date, and time.
Syntax
DATE
TIME
Usage Notes
The DATE or TIME command shows a timestamp in the following format:
DDD mmm dd, yyyy HH:MM:SS
where:
Format …
Specifies …
DDD
the day of the week.
mmm dd, yyyy
the calendar month, day, and year.
HH:MM:SS
the time in hour, minutes, and seconds.
Example
Example outputs generated by Ferret are shown below:
Ferret ==>
date
Tue Jun 13, 2000 12:04:58
Ferret ==>
time
Tue June 13, 2000 12:06:15
DEFRAGMENT
Purpose
Caution:
Because of the difficulty in determining how severely a cylinder is fragmented, only trained
personnel should use this command.
The DEFRAGMENT command causes free sectors on each qualifying logical cylinder
contained within the current scope to be combined together into a single contiguous block of
free space on that cylinder. The scope used is designated by the SCOPE command or the
default if no SCOPE command was issued.
Syntax
{ DEFRAGMENT | DEFRAG } [ /Y ] [ FORCE [ /Y ] ]
where:
Syntax element …
Specifies …
/Y
the confirmation of the DEFRAGMENT command.
You are then given the option to stop the command.
FORCE
to defragment every cylinder, whether or not the cylinder meets the criteria specified in Usage Notes below.
Usage Notes
Unless the FORCE option is used, the DEFRAGMENT command performs defragmentation
of a logical cylinder within the scope only if all of the following criteria are met.
•
More than one free block is on the cylinder, so there is something that could be combined.
•
25% or more free sectors are on the cylinder.
•
The average free block size is less than the average size of a data block on that cylinder.
If the FORCE option is used, the DEFRAGMENT command performs defragment of a logical
cylinder within the scope if it has more than one block of free space. In that case, it will
defragment a cylinder regardless of the percentage of free sectors or average block sizes.
Note that the DEFRAGMENT operation does not change the size or content of any data
blocks. Existing blocks may be moved, but they are otherwise unchanged.
For each AMP, Ferret indicates how many cylinders it will attempt to defragment. For example:
vproc 2 response
Requested Cylinder Range had 197 cylinders placed on the defrag list
Upon completion, for each AMP, the actual number of cylinders defragmented will be logged in the system's message log. For example:
002507 14:47:07 043100d6 ... 44 8
340516600|appl|1|S|I|U|0|0|M|0|0|PDE|0|0|0|1#PDElogd: Event number 3405166-00
(severity 10, category 10)
5166: Defragment of cylinder[s] occurred.
On Wed Jun 20 14:47:07 2007 on NODE 001-01, VPROC 2, partition 9, task
fsubackgrnd
197 cylinder[s] defragmented
Defragmentation proceeds as a background task, and can continue to run even after the Ferret
prompt returns. To determine when defragmentation has completed, check
DBC.SW_Event_Log for event_tag=34-05166-00.
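For instance, the log can be inspected from a SQL session. The unfiltered query below is only a sketch; the column that carries the event tag is not described here, so consult the Data Dictionary documentation before adding a WHERE clause:
SELECT * FROM DBC.SW_Event_Log;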
Teradata Database can isolate some file system errors to a specific data or index subtable, or to
a range of rows (“region”) in a data or index subtable. In these cases, Teradata Database marks
only the affected subtable or region down. This allows transactions that do not require access
to the down subtable or rows to proceed, without causing a database crash or requiring a
database restart. If DEFRAGMENT encounters down regions, it skips these regions, and
displays the percentage of total space that was skipped.
Example
The following example shows the output that DEFRAGMENT generates:
Ferret ==>
defrag /y
Defrag has been sent to all AMP vprocs in the SCOPE.
Type ‘ABORT’ to stop them before completion
vproc 0 response
Requested Cylinder Range had 5 cylinders placed on the defrag list
vproc 1 response
Requested Cylinder Range had 9 cylinders placed on the defrag list
Aborting DEFRAGMENT
To stop the DEFRAGMENT command at any time, use the following ABORT command
syntax:
ABORT
ABORT can take up to 30 seconds to begin. After it is initiated, ABORT stops the
defragmenting and reports the current status of the vprocs.
When the abort is successful, you get the following prompt:
Ferret ==>
defrag /y
Defrag has been sent to all AMP vprocs in the SCOPE.
Type ’ABORT’ to stop before completion
ABORT
Abort request has been sent
vproc 0 response
Aborted: Requested Cylinder Range had 11 cylinders placed on the defrag
list
vproc 1 response
Aborted: Requested Cylinder Range had 11 cylinders placed on the defrag
list
Ferret ==>
DISABLE
Purpose
The DISABLE command sets a specified flag in the file system to FALSE. This disables certain
features of Ferret.
Syntax
{ DISABLE | DISA } { flag [, flag ...] | ? }
where:
Syntax element …
Specifies …
flag
Specifies the flag that will be set to false. The following flags are available:
• SCRIPT, SCRIPT MODE, and SCRIPTMODE
When script mode is disabled, the Ferret ABORT and INQUIRE
commands are enabled. Script mode is disabled by default. Script mode
should be enabled before running scripts that call Ferret.
The following flags are for internal use only, and should not be disabled or
enabled:
• CHECK and CHECKTS
?
Displays a complete list of available flags.
Example
An example is shown below:
Ferret ==>
disable script
ENABLE
Purpose
The ENABLE command sets a specified flag in the file system to TRUE. This enables certain
features of Ferret.
Syntax
{ ENABLE | ENA } { flag [, flag ...] | ? }
where:
Syntax element …
Specifies …
flag
Specifies the flag that will be set to true. The following flags are available:
• SCRIPT, SCRIPT MODE, and SCRIPTMODE
Script mode is disabled by default. When script mode is enabled, the
Ferret ABORT and INQUIRE commands are disabled to allow a script to
run without interruption. Script mode should be enabled before running
scripts that call Ferret.
The following flags are for internal use only, and should not be disabled or
enabled:
• CHECK and CHECKTS
?
Displays a complete list of available flags.
Example
An example is shown below:
Ferret ==>
enable script
ERRORS
Purpose
The ERRORS command redirects diagnostic messages to a file that you specify or to the
default file STDERR. If you enter the command with no options, Ferret displays the name of the file to which diagnostic messages are currently directed.
Syntax
ERRORS [ { TO | INTO | OVER } { file | STDERR | ME } ]
where:
Syntax element …
Specifies …
TO
that Ferret is to write diagnostic messages to a new file or to STDERR.
If the file specified exists, Ferret returns an error.
INTO
that Ferret is to append diagnostic messages to a specified file or to STDERR.
If the file already exists, Ferret appends the error messages to the end of the file.
If the file does not exist, Ferret creates the file automatically.
OVER
that Ferret is to overwrite an existing file or STDERR with current diagnostic
messages.
If the file already exists, Ferret writes over the file.
If the file does not exist, Ferret creates the file.
file
the name of the destination file for diagnostic messages.
STDERR
the default file to which Ferret writes diagnostic messages.
ME
the synonym for STDERR.
Usage Notes
When you start Ferret, it writes diagnostic messages to STDERR by default.
You can use the ERRORS command to redirect the diagnostic messages in the following ways:
•
To write to a new file only
•
To append an existing file
•
To overwrite an existing file
•
To display on your console
If you include the file parameter in the ERRORS command, the file you specify becomes the
destination for diagnostic messages redirected from STDERR.
If you type the ERRORS command without any options, Ferret shows the name of the current
diagnostic messages file STDERR on your system console.
Example
The following example shows the format for redirecting diagnostic messages into a specific file
and directory:
Ferret ==>
Errors over /home/user1/Ferret.error
HELP
Purpose
The HELP command provides general help for Ferret or detailed help if you specify an option
or parameter.
Syntax
{ HELP | H } [ /L ] [ ALL | keyword | ? ]
where:
Syntax element …
Specifies the …
/L
long form of a help display, showing parameter descriptions in addition
to command syntax.
If you do not type /L, Ferret assumes the short form of the help display,
which shows command syntax only.
ALL
available help for all Ferret commands and parameters.
This is the default.
keyword
Ferret command or parameter name.
If you do not specify either a keyword or ALL, HELP lists all of the
commands that you can type at the Ferret prompt.
?
a list of all keywords for which help is available.
Example
The following is an example of output that HELP generates:
Ferret ==>
Help /L Radix
RADIX [ ( IN/PUT | OUT/PUT ) ] [ ( H/EX | D/EC ) ]
Sets the Flags for how to treat Unqualified numbers. Either Hex (base 16)
or Decimal (Base 10), respectively. See HELP NUMBER for a description of
unqualified INPUT. The initial setting of these Flags is HEX. If
neither INPUT nor OUTPUT is specified,the command applies to both Flags.
If neither HEX nor DEC is specified, the current setting of the Flag is
displayed.
INPUT
Purpose
The INPUT command informs Ferret to read commands from a file you specify rather than
from the default input file STDIN.
Syntax
{ INPUT | IN } FROM file
where:
Syntax element …
Specifies …
file
the name of a file you specify as the source of command input to Ferret.
Usage Notes
When you first start Ferret, it accepts input from the STDIN file by default. Using the INPUT
command, you can redirect input from any file you specify.
When Ferret reaches the end of the file you specify, Ferret again accepts input from you, if the
file does not instruct Ferret to quit.
You can nest the INPUT command inside command files to a maximum of nine files deep.
Example
The following command example shows the format for redirecting input from a file in a
specific directory:
Ferret ==>
Input from /home/user1/commands
OUTPUT
Purpose
The OUTPUT command redirects Ferret output to a file you specify or to the default file
STDOUT.
Syntax
{ OUTPUT | OUT } [ { TO | INTO | OVER } { file | STDOUT | ME } ]
where:
Syntax element …
Specifies …
TO
that Ferret is to redirect output to a new file or to STDOUT.
If the file exists, Ferret returns an error.
INTO
that Ferret is to append output to an existing file specified by file or to
STDOUT.
If the file exists, Ferret appends the output to the end of the file.
If the file does not exist, Ferret creates the file.
OVER
that Ferret is to overwrite an existing file or STDOUT with new Ferret output.
If the file exists, Ferret writes over the file.
If the file does not exist, Ferret creates the file.
file
name of the file you specify as the destination of Ferret output.
STDOUT
default file to which Ferret writes output.
ME
synonym for STDOUT.
Usage Notes
When you start Ferret, output is written to STDOUT by default. You can use the OUTPUT command to redirect Ferret output in any of the following ways:
•
To write to a new file only
•
To append an existing file
•
To overwrite an existing file
•
To display on your console
When Ferret redirects output to a file, all input and diagnostic messages are echoed to the
output file as well as to their usual destinations.
If you include the file parameter in the OUTPUT command, Ferret uses that parameter as the
destination for output redirected from STDOUT.
If you type the OUTPUT command without any options, Ferret displays the name of the
current output file STDOUT to your system console.
Example
The following command example shows the redirecting of Ferret output into a specific file:
Ferret ==>
Output into /home/user1/output.file
PACKDISK
Purpose
The PACKDISK command reconfigures the storage, leaving a percentage of free space for
cylinders within a scope defined by the SCOPE command. For additional information, see
“SCOPE” on page 539.
Syntax
PACKDISK [ /Y ] [ { FREESPACEPERCENT | FREE | FSP } [=] number ]
where:
Syntax element …
Specifies …
/Y
to bypass user confirmation.
If you do not specify the /Y option, then Ferret requests user confirmation before proceeding with PACKDISK.
number
percentage of free space to be left in the cylinder(s).
Usage Notes
Teradata Database can isolate some file system errors to a specific data or index subtable, or to
a range of rows (“region”) in a data or index subtable. In these cases, Teradata Database marks
only the affected subtable or region down. This allows transactions that do not require access
to the down subtable or rows to proceed, without causing a database crash or requiring a
database restart. If PACKDISK encounters down regions, it skips these regions, and displays
the percentage of total space that was skipped.
What PACKDISK Does
PACKDISK packs either the entire storage associated with the scoped object or a single table, leaving a specified percentage of the scoped object empty to free up cylinders for new storage allocations.
Packing applies only to entire logical cylinders, not to the space inside individual data blocks
within those cylinders. Data block sizes are the same before and after the PACKDISK
operation.
For a cylinder, the free space percent is a minimum percentage that indicates how much extra
free space should be left on each cylinder during a data loading operation. This allows you to
reserve space for future updates. For details, see “Ferret Command Syntax” on page 496 and
“Ferret Commands” on page 505. The amount of free space to be allotted is determined from
one of three possible values. In order of precedence, these values are as follows:
1	The FREESPACEPERCENT value specified using PACKDISK.
	Caution: Generally, you should accept the default values for Free Space Percent established in the Table Level Attributes or in the DBS Control record. Assuming that your system-level default is acceptably defined, and the table-level attributes for any tables not conforming to the defined system-level default also are reasonably defined, you should not specify a value for FREESPACEPERCENT unless you are testing for the optimal FSP setting in situations such as the following:
	• Preliminary tests scoped to a small prototype table to determine the most optimal value for the FSP.
	• Later tests of various FSPs on live tables that you think might require adjustment to ensure more optimal cylinder packing in the future.
	This list is not exhaustive. The point is that you should not routinely specify a FREESPACEPERCENT for PACKDISK.
2	The Table Level Attribute value specified by the FREESPACE variable in CREATE TABLE and ALTER TABLE. For more information on CREATE TABLE and ALTER TABLE, see SQL Data Definition Language.
3	The system-level value specified by the FreeSpacePercent field in the DBS Control record. For additional information, see Chapter 12: “DBS Control (dbscontrol).”

In other words, PACKDISK determines the FSP to use for packing cylinders as follows:
1	Check for a FREESPACEPERCENT value in the PACKDISK command stream.
	If a value is specified, use it as the FSP specification for all tables affected by this PACKDISK operation, and begin the PACKDISK operation.
	If no value is specified, go to stage 2.
2	Check for an FSP value in the Table Level Attributes for each affected table as PACKDISK runs.
	If a value is specified for the current table, use it as the FSP specification for packing cylinders that contain the current table, and begin the PACKDISK operation.
	If no value is specified, check the FreeSpacePercent field in the DBS Control record and use the value specified there as the free space percent when packing cylinders containing the current table, and begin the PACKDISK operation.
Guidelines for Selecting an Optimal Value for FSP
It is not possible to provide concrete guidelines for determining the values of FSP you should
use because the optimal percentages are highly dependent on the individual mix of workload,
tables, and applications in use by a site.
Because of the way PACKDISK packs cylinders, some might still be fragmented after the
procedure completes. If this happens, use the DEFRAGMENT command to defragment the
cylinder. For information, see “DEFRAGMENT” on page 509. The scope for PACKDISK can
be either vprocs or TABLES (but not both).
The SHOWFSP command helps estimate the effectiveness of PACKDISK commands by
providing information regarding cylinder utilization. In particular, you can use SHOWFSP to
find loosely packed tables.
The SHOWFSP command displays table names and space utilization for those tables that
would free or consume some number of cylinders if PACKDISK is executed at a particular FSP.
The scope can include one or more tables, one or more vprocs, or the entire system.
For general guidelines for selecting optimal FSP values, see “FreeSpacePercent” on page 359.
For additional guidelines for selecting an appropriate value for FSP, see Performance
Management.
Mini Cylinder Packs
The file system automatically performs a mini cylinder pack (minicylpack) background
operation if the number of free cylinders equals or falls below the threshold value specified for
MiniCylPackLowCylProd in the DBS Control record. For additional information, see
“MiniCylPackLowCylProd” on page 394.
Minicylpacks ignore all definitions for FSP and use a free space target value of 0. This means
that they pack data as tightly as possible within a cylinder. When, as is often the case, a table
affected by a minicylpack requires a percentage of free space to permit growth without
requiring the addition of new cylinders, this operation can result in non-optimal system
performance.
For example, when many minicylpacks occur, they can produce a state known as thrashing.
This happens because storage is packed too tightly to permit growth, so frequent allocation of
new cylinders to the table space is required. Because this action removes cylinders from the
available cylinder pool, more minicylpacks are required and an ongoing cycle of thrashing
results.
Because all minicylpacks have a negative effect on system performance, you should monitor
your cylinder usage activity closely.
To monitor …
Do the following …
cylinder utilization
Use the Ferret SHOWSPACE command.
occurrences of minicylpacks
• For Linux, check the software event log (DBC.SW_Event_Log),
or the Linux event log, /var/log/messages.
• For MP-RAS, check the software event log
(DBC.SW_Event_Log). A system log also exists under MP-RAS
at /var/adm/streams/error.<date>.
• For Windows, check the software event log
(DBC.SW_Event_Log), or the Windows event log.
Schedule PACKDISK operations as often as is necessary to prevent minicylpacks from
occurring.
For more information about minicylpacks, see Performance Management.
Example
The following is an example report that PACKDISK generates:
Ferret ==>
packdisk fsp=50
Wed Jan 15, 2003 16:39:18 : Packdisk will be started
On all AMP Vprocs.
Do you wish to continue based upon this scope?? (Y/N)
Y
Wed Jan 15, 2003 16:39:19 : Packdisk has been started
On all AMP Vprocs.
Type ’ABORT’ to stop the command before completion
Type ’INQUIRE’ to check on progress of command
Wed Jan 15, 2003 16:40:35 : vproc 2 response
Packdisk completed successfully. Allocated 72 (000048) cylinders
Aborting PACKDISK
Depending on the defined SCOPE, PACKDISK might take a long time to run. You can stop it
at any time by typing this command:
ABORT
.
GT06A021
After typing the ABORT request, as many as 30 seconds can pass before the command
initiates.
Once initiated, ABORT stops PACKDISK immediately and reports the status. No more
packing is done to the remaining cylinders.
If the abort is successful, the system redisplays the following prompt:
Ferret ==>
packdisk fsp=20
Are you sure you want to packdisk? (Y/N) y
Tue Feb 28, 1995 15:16:50 : Packdisk has been started on all AMP Vprocs
in the SCOPE.
Type ’ABORT’ to stop them before completion
ABORT
Abort request has been sent
Vproc 0 response
Tue Feb 28, 1995 15:16:50 : Packdisk Freed up 100
(0064)
cylinders
Checking PACKDISK Status
Because PACKDISK can take a long time to run, you might want to do a status check. You can
do this using the INQUIRE command, which reports PACKDISK progress as a percentage of
total time to completion:
{ INQUIRE | INQ }
Example
Type ‘INQUIRE’ to check on progress of command
> INQUIRE
inquire
Inquire request has been sent
> Tue Feb. 9, 1999 17:11:01
Slowest vproc0 is 4% done
Fastest vproc0 is 4% done
The packdisk is about 4% done
Type ‘ABORT’ to abort current Operation.
Type ‘INQUIRE’ to check on progress of command
You can use this information to estimate how long the operation will take to complete. For
example, if the process is 10 percent done after one hour, then the entire operation will take
approximately 10 hours, and there are nine more hours until PACKDISK completes.
PRIORITY
Purpose
The PRIORITY command allows you to set the priority of the Ferret process.
Syntax
[SET] PRIORITY [=] priorityclass
where:
Syntax element …
Specifies …
priorityclass
the priority of the Ferret process.
Valid values for priorityclass are:
• LOW, L, or 0
• MEDIUM, M, or 1 (This is the default.)
• HIGH, H, or 2
• RUSH, R, or 3
Usage Notes
These values are not case-sensitive.
The PRIORITY command is most commonly used with the SCANDISK and PACKDISK
commands.
Example
An example is shown below:
Ferret ==>
SET PRIORITY = 3
QUIT
Purpose
The QUIT command ends a Ferret session.
Syntax
{ QUIT | Q | STOP | ST | END | EXIT }
Usage Notes
STOP, END, and EXIT are synonyms for the QUIT command.
Example
The following command example shows the output that QUIT generates:
Ferret ==>
quit
Waiting for Ferret Slave Tasks to exit
Ferret Exited
RADIX
Purpose
The RADIX command sets the default radix used as the numeric base for data input to and
output from Ferret as either hexadecimal or decimal. If you type just the command RADIX,
the current settings of the input and output are displayed.
Syntax
{ RADIX | RAD } [ INPUT | IN | OUTPUT | OUT ] [ HEX | H | DEC | D ]
where:
Syntax element …
Specifies that numeric …
INPUT
input to Ferret defaults to the radix you select, either hexadecimal or decimal.
OUTPUT
output from Ferret defaults to the radix you select, either hexadecimal or
decimal.
HEX
input to or output from Ferret defaults to a radix of hexadecimal.
DEC
input to or output from Ferret defaults to a radix of decimal.
Usage Notes
When you start Ferret, the default radix for both input and output from Ferret is hexadecimal.
If you omit both the INPUT and OUTPUT options from the RADIX command, the radix
(HEX or DEC) that you select applies to both numeric input and output.
You can select either INPUT or OUTPUT, but not both when changing the settings of Ferret
data.
If you omit both the HEX and DEC options, Ferret displays the current RADIX setting.
Example
The following command example shows how to set the input to decimal:
Ferret ==>
Rad Input Dec
SCANDISK
Purpose
The SCANDISK command validates the file system and reports any errors found, including
discrepancies in the following:
•
Key file system data structures, such as master index, cylinder index, and data blocks,
including those associated with the WAL log.
•
The internal partition number for a row matching the internal index structures in the
cylinder and master indexes.
•
Within a subtable, the internal partition number of a row being greater than or equal to
the internal partition number in the preceding row, if any.
•
Within a partition, a RowID of a row being greater than the RowID in the preceding row, if
any.
•
Within a subtable, either rows are all partitioned (a row includes the internal partition
number) or nonpartitioned (a row does not include the internal partition number).
In addition, SCANDISK calculates, modifies, and verifies checksums and reports any
discrepancies.
Note: The internal partition number is not validated for consistency with the result of the
partitioning expression applied to the partitioning columns in a row.
Syntax
Note: The online help lists the display options (/S, /M, /L) as /dispopt.
SCANDISK [ /S | /M | /L ] [ CI | DB | FREECIS | MI | WCI | WDB | WMI ] [ inquire_opt ] [ CR | NOCR ]
where:
Syntax Element
Description
/S
Scans the MI and WMI.
/M
Scans the MI, CIs, WMI, and WCIs.
/L
Scans the MI, CIs, DBs, WMI, WCIs, and WDBs.
CI
Scans the MI and CIs. If the scope is AMP or all subtables, rather than selected subtables, the free CIs
are also scanned.
DB
Scans the MI, CIs, and DBs. This is the default for the normal file system, which can be overridden by
the CI, MI, or FREECIS options. If the scope is AMP or all subtables, rather than selected subtables,
the free CIs are also scanned.
FREECIS
Scans the free CIs only. This option also detects missing WAL and Depot cylinders.
MI
Scans the MI only.
WCI
Scans the WMI and WCIs.
WDB
Scans the WMI, WCIs, and WDBs. This is the default for the WAL log, which can be overridden by the
WCI or WMI options.
WMI
Scans the WMI only.
inquire_opt
Displays the lowest tid and rowid being scanned among the AMPS involved. This option also reports
SCANDISK progress as a percentage of total time to completion and displays the number of errors
encountered so far.
The online help refers to tid as tableid. For more information on tid formatting, see “Specifying a
Subtable Identifier (tid)” on page 499.
The syntax for the INQUIRE option is as follows:
{ INQUIRE | INQ } [ - ] [ NONE | number [ timeopt ] ]
where:
Syntax Element
Description
NONE
Specifies that only one INQUIRE request is sent for the SCANDISK job.
number
An integer that defines the interval at which an INQUIRE request is sent to display SCANDISK progress.
If you do not specify timeopt, then number defaults to SECONDS.
timeopt
Specifies the time unit which number represents. It should be one of the
following:
• SECONDS, SECOND, SECON, SECO, SECS, SEC, S
• MINUTES, MINUTE, MINUT, MINU, MINS, MIN, M
• HOURS, HOUR, HOU, HO, HRS, HR, H
• DAYS, DAY, DA, D
For example, scandisk inquire 5 m will start a SCANDISK job that reports SCANDISK progress every five minutes; a small sketch of the interval arithmetic follows this table.
Note: The maximum time interval allowed is seven days.
NOCR
Specifies to use regular data block preloads instead of cylinder reads. This is the default.
CR
Specifies to use cylinder reads instead of regular data block preloads.
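As noted for the INQUIRE option above, the interval is a number of seconds unless a time unit is given, and it may not exceed seven days. The hypothetical Python helper below only illustrates that arithmetic; it is not part of Ferret:

# Seconds per unit for the INQUIRE interval; each unit accepts the
# abbreviations listed above (SECONDS/SEC/S, MINUTES/MIN/M, and so on).
UNIT_SECONDS = {"S": 1, "M": 60, "H": 3600, "D": 86400}
MAX_INTERVAL = 7 * 86400          # the maximum interval allowed is seven days

def inquire_interval(number, timeopt="S"):
    # number defaults to SECONDS when no time unit is given; the result is
    # rejected if it exceeds the seven-day maximum.
    seconds = number * UNIT_SECONDS[timeopt.strip().upper()[0]]
    if seconds > MAX_INTERVAL:
        raise ValueError("interval exceeds the seven-day maximum")
    return seconds

print(inquire_interval(5, "m"))   # 300, as in "scandisk inquire 5 m"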
Usage Notes
You can run SCANDISK while the system is online and the Teradata Database is available for
normal operations.
Teradata recommends you run SCANDISK in the following situations:
• To validate data integrity before or after a system upgrade or expansion.
• If you suspect data corruption.
• As a routine data integrity check (perhaps weekly).
Note: A slight performance impact might occur while SCANDISK is running.
You can rearrange the order of the syntax following the SCANDISK command. For example,
the command SCANDISK NOCR MI is the same as the command SCANDISK MI NOCR.
If you do not type any options, SCANDISK defaults to DB and all subtables on the vproc. The
default scope is to scan both the normal file system and the WAL log, each from the lowest
(DB, WDB) level through the highest (MI, WMI). The free CIs are also scanned.
The SCANDISK command can be limited by the SCOPE command to scan, for example, just
one table, just the WAL log, or just certain AMPs. For more information, see “SCOPE” on
page 539.
By default, SCANDISK uses regular data block preloads instead of cylinder reads. The CR
option allows you to run SCANDISK using cylinder reads to preload data into cylinder slots
which may improve SCANDISK performance. However, if other work also requires the use of
cylinder slots, the competition for slots could slow down SCANDISK and the other work. In
addition, the performance gain is dependent on the amount of data loaded, the distribution of
the data, and the average block I/O size.
The NOCR option lets you turn off cylinder slot usage by SCANDISK, which could result in
slower SCANDISK performance, but which will allow other work requiring cylinder slots to
progress unimpeded.
SCANDISK reports only what it finds when scanning is completed.
The RowID output from SCANDISK consists of the following:
• A two-byte internal partition number. For a nonpartitioned table, the internal partition number is zero.
• A four-byte row hash value.
• A four-byte uniqueness value.
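A minimal sketch of how such a 10-byte RowID splits into its three fields is shown below; the big-endian byte order used here is an assumption made purely for illustration and the sketch is not Teradata code:

import struct

def split_rowid(rowid_bytes):
    # Two-byte internal partition number, four-byte row hash, and four-byte
    # uniqueness value, as described in the list above.
    partition, row_hash, uniqueness = struct.unpack(">HII", rowid_bytes)
    return partition, row_hash, uniqueness

# A nonpartitioned table carries an internal partition number of zero.
example = struct.pack(">HII", 0, 0x0000B0A5, 1)
print(split_rowid(example))   # (0, 45221, 1)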
Example 1
The following is an example of output that SCANDISK generates.
Ferret ==>
scandisk
Tue Feb 28, 1995 15:16:50 :Scandisk has been started on all AMP Vprocs in
the SCOPE.
Vproc 0 response
DB @ Cylinder 0 2 (0000 0002) Sector 16 (0010) length 1 (0001)
DB ref count doesn’t match DBD row Count
Tue Feb 28, 1995 15:16:50 : The scandisk found problems
Vproc 1 response
Tue Feb 28, 1995 15:16:50 : The scandisk found nothing wrong
Example 2
The following is an example of output that SCANDISK generates when it finds an LSI
interrupted write pattern in a CI.
Ferret ==>
scandisk ci
Mon May 06, 2002 15:12:20 :Scandisk has been started on all AMP Vprocs in the SCOPE.
vproc 0 (0000) response
Mon May 06, 2002 15:55:21 : CI @ Cylinder 0 7 (0000 000007)
Mon May 06, 2002 15:55:21 : LSI interrupted write pattern found in CI.
0120 MAY 05 05:02:35 LUN 1111, Start Block 00004545, Blocks 0400
SRD num   table id         firstdbd   dbdcount   offset   u0   u1   tai
-------   --------------   --------   --------   ------   --   --   ---
0001      0000 0494 0800   FFFF       0014       001E
Mon May 06, 2002 15:55:21 : Invalid DBD sector length of 14901 (3A35) found
Mon May 06, 2002 15:55:21 : Invalid DBD sector length of 26912 (6920) found
Mon May 06, 2002 15:55:21 : Invalid DBD sector length of 25972 (6574) found
Mon May 06, 2002 15:55:21 : Invalid DBD sector length of 12336 (3030) found
Mon May 06, 2002 15:55:21 : First rowid out of order dbds 18 (0012) and 19 (0013)
Example 3
The following is an example of output that SCANDISK generates when it finds an LSI
interrupted write pattern in a DB.
Ferret ==>
scandisk db
Tue Feb 28, 1995 15:16:50 :Scandisk has been started on all AMP Vprocs in the SCOPE.
vproc 0 (0000) response
Mon May 06, 2002 15:12:20
1 of 1 vprocs responded with no messages or errors.
Type 'ABORT' to stop the command before completion
Type 'INQUIRE' to check on progress of command
Reading
vproc 0 (0000) response
Mon May 06, 2002 15:11:11 : CI @ Cylinder 0 7 (0000 000007)
Mon May 06, 2002 15:11:11 : LSI interrupted write pattern found in DB.
0120 MAY 05 05:02:35 LUN 1111, Start Block 00004545, Blocks 0400
Mon May 06, 2002 15:11:11 : rows -1 (FFFFFFFF) and 0 (0000) are out of order
Example 4
The following example limits the scan to just the WAL log.
scope wal
scandisk
Example 5
The following example scans one user data subtable.
scope table 'employee.emp' p
scandisk
Example 6
In the following example, an INQUIRE request is sent every minute, so SCANDISK progress is displayed once per minute.
Ferret ==>
scandisk inq 1 m
Tue Apr 17, 2007
11:18:51 : Scandisk ALL will be started
On All AMP vprocs
Do you wish to continue based upon this scope?? (Y/N)
y
Command has been sent to Slave tasks.
Type 'ABORT' to stop the command before completion
Type 'INQUIRE' to check on progress of command
Tue Apr 17, 2007 11:19:53
SCANDISK STATUS :
Slowest vproc  1 is 89% done
Fastest vproc  0 is 92% done
The scandisk is about  90% done
Scanning Table: 0 1001 1024
Scanning Row: 0 45284 58386 0 1
Tue Apr 17, 2007 11:19:53
2 of 2 vprocs responded with no messages or errors.
Type 'ABORT' to stop the command before completion
Type 'INQUIRE' to check on progress of command
Tue Apr 17, 2007 11:20:53
Everyone is Scanning FREE CIS
SCANFREE STATUS FOR SCANDISK:
Slowest vproc  1 is 38% done
Fastest vproc  0 is 42% done
The scanfree is about  40% done
Tue Apr 17, 2007 11:20:53
2 of 2 vprocs responded with no messages or errors.
Type 'ABORT' to stop the command before completion
Type 'INQUIRE' to check on progress of command
Tue Apr 17, 2007 11:21:53
Everyone is Scanning FREE CIS
SCANFREE STATUS FOR SCANDISK:
Slowest vproc  1 is 92% done
Fastest vproc  0 is 95% done
The scanfree is about  93% done
Tue Apr 17, 2007 11:21:53
2 of 2 vprocs responded with no messages or errors.
Type 'ABORT' to stop the command before completion
Type 'INQUIRE' to check on progress of command
Tue Apr 17, 2007 11:22:01
2 of 2 vprocs responded with no messages or errors.
Tue Apr 17, 2007 11:22:01 : Scandisk Completed
Aborting SCANDISK
Since SCANDISK DB verifies that every byte in the file system is accounted for, this process
can be very time consuming. Therefore, you have the option of stopping the process by typing
the following command:
ABORT
ABORT can take up to 30 seconds to process.
After it is initiated, ABORT stops the SCANDISK process and reports the current status.
When the abort is successful, the following appears:
Are you sure you want to scandisk? (Y/N) y
Tue Feb 28, 1995 15:16:50 : Scandisk has been started on all AMP Vprocs
in the SCOPE.
Type ‘ABORT‘ to stop them before completion
Type ‘INQUIRE‘ to check on progress of command
ABORT
Abort request has been sent
Vproc 0 response
DB @ Cylinder 0 2 (0000 0002) Sector 16 (0010) length 1 (0001)
DB ref count doesn’t match DBD row Count
Tue Feb 28, 1995 15:16:50 : The scandisk found problems
Vproc 1 response
Tue Feb 18, 1995 15:16:50 : The scandisk found nothing wrong
Ferret ==>
Checking SCANDISK Status
Because SCANDISK can take a long time to run, you might want to do a status check. You can
do this using the INQUIRE command:
INQUIRE | INQ
INQUIRE displays the lowest tid and rowid being scanned among the AMPS involved, and
reports SCANDISK progress as a percentage of total time to completion. It also displays a list
of errors that occurred since the last INQUIRE command. Sample INQUIRE output is shown
below.
Ferret ==>
scandisk
Wed Apr 11, 2007 16:46:24 : Scandisk will be started
On All AMP vprocs
Do you wish to continue based upon this scope?? (Y/N)
y
Wed Apr 11, 2007 16:46:26 : Scandisk has been started
Type 'ABORT' to stop the command before completion
Type 'INQUIRE' to check on progress of command
inq
Inquire request has been sent
Wed Apr 11, 2007 16:46:30
SCANDISK STATUS :
Slowest vproc  0 is  2% done
Fastest vproc  0 is  2% done
The scandisk is about  2% done
Scanning Table: 0 1434 1024
Scanning Row: 0 1972 32488 0 1
Nobody has reached the SCANFREE stage
Wed Apr 11, 2007 16:46:30
2 of 2 vprocs responded with no messages or errors.
Type 'ABORT' to stop the command before completion
Type 'INQUIRE' to check on progress of command
inq
Inquire request has been sent
Utilities
535
Chapter 14: Ferret Utility (ferret)
SCANDISK
Wed Apr 11, 2007 16:46:36
SCANDISK STATUS :
Slowest vproc  1 is  4% done
Fastest vproc  0 is  5% done
The scandisk is about  4% done
Scanning Table: 0 1434 1024
Scanning Row: 0 28847 58771 0 95
Nobody has reached the SCANFREE stage
vproc 1 (0001) response
Wed Apr 11, 2007 16:46:33 : CI @ Cylinder 0 100 (0000 000064)
Wed Apr 11, 2007 16:46:33 : CID's First TID doesn't Match CI's First TID
0 1434 1024 (0000 059A 0400) ( test.trans )
0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:33 : CID's Last TID doesn't Match CI's Last TID
0 1434 1024 (0000 059A 0400) ( test.trans )
0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:33 : Table Header is missing. May be in the process of being dropped
0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 3418 (0D5A) length 126 (007E) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 3629 (0E2D) length 70 (0046) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 3699 (0E73) length 70 (0046) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 48 (0030) length 65 (0041) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 432 (01B0) length 65 (0041) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 113 (0071) length 66 (0042) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 179 (00B3) length 66 (0042) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 2646 (0A56) length 122 (007A) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 1953 (07A1) length 66 (0042) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 2346 (092A) length 66 (0042) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 2932 (0B74) length 72 (0048) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 3004 (0BBC) length 72 (0048) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 684 (02AC) length 72 (0048) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 756 (02F4) length 72 (0048) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 1191 (04A7) length 109 (006D) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 1606 (0646) length 74 (004A) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 2768 (0AD0) length 74 (004A) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 3150 (0C4E) length 119 (0077) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 1056 (0420) length 69 (0045) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : DB @ Cylinder 0 100 (0000 000064) Sector 3544 (0DD8) length 69 (0045) : TID : 0 1435 1024 (0000 059B 0400) ( TEST.data1 )
Wed Apr 11, 2007 16:46:34 : SRD TID.uniq doesn't match DB tid.uniq
0 1435 (0000 059B)
0 1434 (0000 059A)
Wed Apr 11, 2007 16:46:34 : CI @ Cylinder 0 266 (0000 00010A)
Wed Apr 11, 2007 16:46:34 : Table Ids are out of order
0 1435 1024 (0000 059B 0400) ( TEST.data1 )
0 1434 1024 (0000 059A 0400) ( test.trans )
Wed Apr 11, 2007 16:46:36
1 of 2 vprocs responded with no messages or errors.
Wed Apr 11, 2007 16:46:36
1 of 2 vprocs responded with messages or errors as specified above
Type 'ABORT' to stop the command before completion
Type 'INQUIRE' to check on progress of command
SCOPE
Purpose
The SCOPE command defines the scope for subsequent DEFRAGMENT, PACKDISK,
SCANDISK, SHOWBLOCKS, SHOWFSP, SHOWSPACE, and SHOWWHERE commands. It
defines the class of tables, range of tables, vprocs (AMPs), cylinders and vprocs, or the WAL
log to be used as parameters with these Ferret commands.
Each SCOPE command defines a new scope and is not a continuation of the last one.
Syntax Rules
The Ferret SCOPE syntax is unusual. The following rules apply.
IF you specify a …
THEN …
single_syntax_element
commas and spaces are not required. Parentheses are optional.
list_of_syntax_elements
each syntax element must be separated by either a comma or space.
Parentheses are required.
The following notation illustrates these syntax rules:

single_syntax_element
( list_of_syntax_elements )
The following are valid examples of the SCOPE command:
scope vproc 1
scope vproc (1)
scope vproc (1,3)
scope vproc (1, 3)
The following are invalid examples of the SCOPE command:
scope vproc 1,3
scope vproc 1 3
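A front end that builds SCOPE commands could encode the single-element versus list rule directly. The following Python sketch is hypothetical and only restates the rule shown in the examples above; it is not Ferret's own parser:

def vproc_scope(vprocs):
    # A single element needs no parentheses or separators; a list must be
    # parenthesized, with elements separated by commas or spaces.
    if len(vprocs) == 1:
        return "scope vproc %d" % vprocs[0]
    return "scope vproc (%s)" % ", ".join(str(v) for v in vprocs)

print(vproc_scope([1]))      # scope vproc 1
print(vproc_scope([1, 3]))   # scope vproc (1, 3)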
Syntax
Note: Not all combinations of acceptable syntax are shown.
SCOPE { class | cylinder | table | vproc | WAL | all } [, ...]

class:    CLASS { JRNL | PERMANENT | P | TEMPORARY | TEMP | SPOOL | ALL }
          CLASS ( class [, class ...] )

cylinder: CYLINDER { cylid | ALL }
          CYLINDER ( cylid [, cylid ...] )

vproc:    VPROC { vproc_number | vproc_number TO vproc_number | ALL }
          VPROC ( vproc_number [, vproc_number ...] )

table:    TABLE { tid | ALL }
          TABLE ( tid [, tid ...] )
where:
Syntax element …
Specifies …
CLASS
the class of tables for the scope of a subsequent command. To type more than one class of table, use an
opening parenthesis before the first class of table, separate each class of table using a comma or space,
and use a closing parenthesis after the last class of table.
Note: Classes are subtable ranges, so if a command requires tables in the recorded scope, you may
specify a CLASS on the SCOPE command.
CLASS { JRNL | PERMANENT | P | TEMPORARY | TEMP | SPOOL | ALL }
CLASS ( class [, class ...] )

where:
Syntax element …
Specifies …
JRNL
the permanent journal tables containing non-visible user data.
PERMANENT
the permanent tables containing visible user data.
TEMPORARY
the temporary worktables (global temporary table instances) containing
non-permanent data.
SPOOL
the intermediate worktables containing non-permanent data.
ALL
all the table classes in the configuration.
Syntax element …
Specifies …
CYLINDER
the cylinders that are to be acted upon only by a subsequent DEFRAGMENT command. No other
command uses the CYLINDER SCOPE. To type more than one cylid, use an opening parenthesis
before the first cylid, separate each cylid with a comma or space, and use a closing parenthesis after
the last cylid.
Note: A SCOPE command with CYLINDER arguments must also include VPROC arguments to be
valid.
CYLINDER { cylid | ALL }
CYLINDER ( cylid [, cylid ...] )

(CYLINDER can be abbreviated as CYL.)
where:

Syntax element …
Specifies …
cylid
the cylinder ID number, a 16-character hexadecimal value.
ALL
all the cylinders in the vproc.

VPROC
the range of AMP vproc ID numbers, or all AMPs in the current configuration that are to be acted upon by a subsequent command. The keyword VPROC must be entered before the vproc number. To include more than one vproc, use an opening parenthesis before the first vproc_number, separate each vproc_number with a comma or space, and use a closing parenthesis after the last vproc_number.

VPROC { vproc_number | vproc_number TO vproc_number | ALL }
VPROC ( vproc_number [, vproc_number ...] )
where:
Syntax element …
Specifies …
vproc_number
specifies a single AMP vprocID number or a range of numbers.
Valid AMP vproc ID numbers are from 0 to 16383. There is no default. If
you type a vproc identifier that is assigned to a PE, Ferret issues an error.
Note: Ferret also informs you when you select a vproc number assigned
to a down AMP.
ALL
all the AMP vprocs in the configuration.

TABLE
the subtables that are to be acted upon by a subsequent command.
To type more than one subtable identifier, use an opening parenthesis before the first tid, separate each
tid with a comma or space, and use a closing parenthesis after the last tid.
TABLE { tid | ALL }
TABLE ( tid [, tid ...] )
where:
Syntax element …
Specifies …
tid
the subtable to process.
Note: The typeandindex component of tid is required. For more
information on tid formatting, refer to “Specifying a Subtable Identifier
(tid)” on page 499.
ALL
all the subtables in the configuration.

WAL
the WAL log.
Syntax element …
Specifies …
ALL
to reset SCOPE to the default startup settings of Ferret. The initial scope will consist, as appropriate to
each command, of all tables, all cylinders, all vprocs, and the WAL log.
Usage Notes
The following table shows how scopes are interpreted.
Scope Type
Interpretation
Table
Specified subtables, which can be selected subtables, all subtables, or classes.
Table scopes imply cylinder scopes, and all-table scopes imply free CIs.
Cylinder
All the specified cylinders.
Cylinder scopes can be specified by implication as subtable scopes. For the
DEFRAGMENT command, this means all cylinders containing the specified
subtables. An explicit cylinder specification is only meaningful to the
DEFRAGMENT command. If you specify a free cylinder in CYLINDER
cylid, Ferret SCANDISK will respond with an indication that the cylinder did
not need to be defragmented.
Vproc
All subtables, all free CIs, and the WAL log.
WAL
The entire WAL log.
The Ferret commands utilize SCOPE as follows, where classes are collections of tables and
result in Table on AMP scopes.
Command
Description
DEFRAGMENT
The command uses Table on AMP scopes, or Cylinder on AMP scopes.
The scope can contain tables only, both tables and vprocs, or both vprocs
and cylinders. For more information, see “DEFRAGMENT” on page 509.
PACKDISK
The command uses AMP scopes, or Table on AMP scopes.
The scope selected must include either vprocs or tables, but not both. For
more information, see “PACKDISK” on page 520.
SCANDISK
The command uses AMP scopes, or Table on AMP scopes, and WAL scope.
The scope can contain tables only, vprocs only, both tables and vprocs, or the
WAL log. For more information, see “SCANDISK” on page 528.
Command
Description
SHOWBLOCKS
The command uses Table on AMP scopes and WAL scope.
The scope can contain tables only or the WAL log. For more information, see
“SHOWBLOCKS” on page 550.
SHOWFSP
The command uses AMP scopes, or Table on AMP scopes.
The scope can include one or more tables, one or more vprocs, or the entire
system. For more information, see “SHOWFSP” on page 554.
SHOWSPACE
The command uses AMP scopes, or Table on AMP scopes, and WAL scope.
The scope can contain tables only, vprocs only, both tables and vprocs, or the
WAL log. For more information, see “SHOWSPACE” on page 561.
SHOWWHERE
The scope can be vprocs, tables, or the WAL log. For more information, see
“SHOWWHERE” on page 563.
Not all scopes are applicable to all commands. When a command is executed and the scope is
not applicable to the command, either the command is rejected or the inapplicable portions of
the scope are ignored.
The SHOWDEFAULTS command displays the various components of a recorded scope, which
are interpreted individually by each of the commands: DEFRAGMENT, PACKDISK,
SCANDISK, SHOWBLOCKS, SHOWFSP, SHOWSPACE, and SHOWWHERE.
The scopes appear in the SHOWDEFAULTS output as tables on vprocs, cylinders on vprocs,
vprocs, or the WAL log. For more information, see “SHOWDEFAULTS” on page 553.
Note: Although disk space allocated for TJ and WAL records is charged against the table 0 26,
no actual TJ or WAL records are found in the subtables of this table. Instead, these records are
in the WAL log. The only row that exists in any subtable of the table 0, 26 is the table header in
subtable 0.
Example 1
The following command examples are representative of how the SCOPE command is
normally used:
Command
Action

Scope vproc ALL
Select all the AMPs on the system.

Scope vproc 4, vproc 6
or Scope vproc (4,6)
Select vprocs 4 and 6.

Scope Table 400H 0 400H
Select table 400H 0 400H on all vprocs.

Scope Class (P, JRNL)
or Scope Class P, Class JRNL
Select all the Permanent and Journal tables on all vprocs.

Scope Class P, Class JRNL, vproc 4
Select all the Permanent and Journal tables on vproc 4.

Scope Cyl (000100000000003C 0001000000000043)
Select cylinders 000100000000003C through 0001000000000043.
Note: All cylinders specified in a single SCOPE command must belong to the same AMP.
Example 2
Assume that table T4 is a table in database XYZ and has a table number of 0 1198.
• One of the following commands would place all T4 fallback subtables under scope.
  • SCOPE “TABLE XYZ.T4 F”*
  • SCOPE TABLE 0 1198 F*
• One of the following commands would place all T4 subtables under scope.
  • SCOPE TABLE “XYZ.T4” *
  • SCOPE TABLE 0 1198 *
Example 3
The following example resets the scope to the default Ferret startup settings. Some of the
output is omitted to condense the example.
scope all
showd
Scope for the Defrag command is :
All Cylinders
On all AMP vprocs
Scope for the Packdisk command is :
All of the AMP vprocs
Scope for the Scandisk command is :
All of the AMP vprocs
Scope for the Showblocks command is :
All tables
WAL Log
On all AMP vprocs
Scope for the Showspace command is :
All of the AMP vprocs
Scope for the Showwhere command is :
All tables
On all AMP vprocs
Example 4
The following example shows a whole AMP scope, which includes the WAL log.
SCOPE VPROC 1
The SCOPE has been set
Ferret => SHOWD
The current setting of the Input Radix is Decimal
The current setting of the Output Radix is Decimal
Scope for the Defrag command is :
All Cylinders
On AMP vproc(s) 1
Scope for the Packdisk command is :
AMP vproc(s) 1
Scope for the Scandisk command is :
AMP vproc(s) 1
Scope for the Showblocks command is :
All tables
WAL Log
On AMP vproc(s) 1
Scope for the Showspace command is :
AMP vproc(s) 1
Scope for the ShowFSP command is :
AMP vproc(s) 1
Scope for the Showwhere command is :
AMP vproc(s) 1
Example 5
The following example shows the scope set to the WAL log on all AMPs.
Note: Only SCANDISK, SHOWBLOCKS, SHOWSPACE, and SHOWWHERE work with this
scope.
SCOPE WAL
The SCOPE has been set
Ferret => SHOWD
The current setting of the Input Radix is Decimal
The current setting of the Output Radix is Decimal
Scope for the Defrag command is :
On all AMP vprocs
Scope for the Packdisk command is :
On all AMP vprocs
Scope for the Scandisk command is :
WAL Log
On all AMP vprocs
Scope for the Showblocks command is :
WAL Log
On all AMP vprocs
Scope for the Showspace command is :
WAL Log
On all AMP vprocs
Scope for the ShowFSP command is :
Invalid, because it contains WAL
Scope for the Showwhere command is :
WAL Log
On all AMP vprocs
Example 6
The following example shows the scope set to all the tables on all AMPs, but not the WAL log.
scope class all
showd
Scope for the Defrag command is :
Table(s) 0 0 0 TO 65535 65535 65535
On All AMP vprocs
Scope for the Packdisk command is :
Table(s) 0 0 0 TO 65535 65535 65535
On All AMP vprocs
Scope for the Scandisk command is :
Table(s) 0 0 0 TO 65535 65535 65535
On All AMP vprocs
Scope for the Showblocks command is :
Table(s) 0 0 0 TO 65535 65535 65535
On All AMP vprocs
Scope for the Showspace command is :
Table(s) 0 0 0 TO 65535 65535 65535
On All AMP vprocs
Scope for the ShowFSP command is :
Table(s) 0 0 0 TO 65535 65535 65535
On All AMP vprocs
Scope for the Showwhere command is :
Table(s) 0 0 0 TO 65535 65535 65535
On All AMP vprocs
SHOWBLOCKS
Purpose
The SHOWBLOCKS command displays statistics about data block size and/or the number of rows per data block for all the tables defined by the SCOPE command. SHOWBLOCKS can also display WAL log statistics.
Syntax
Note: The online help lists the display options (/S, /M, /L) as /dispopt.
{ SHOWBLOCKS | SHOWB } [ /S | /M | /L ]
where:
Syntax element …
Specifies...
/S
For each primary data subtable, display the following:
• a histogram of block sizes
• the minimum, average, and maximum block size per subtable
This is the default display.
/M
Same as the /S option, but display the statistics for all subtables.
/L
For each subtable, for each block size, display the following:
• number of blocks
• the minimum, average, and maximum number of rows per data block size
Displays the statistics for all subtables.
Usage Notes
The output of the long display is basically one line for every data block size from every subtable of every table in the scope.
The output can be quite lengthy; therefore, consider using the OUTPUT command to redirect
the output to a file.
This command can be used with the CREATE/ALTER table option, DATABLOCKSIZE, to
determine the best data block size for tables based on performance requirements.
Teradata Database can isolate some file system errors to a specific data or index subtable, or to
a range of rows (“region”) in a data or index subtable. In these cases, Teradata Database marks
only the affected subtable or region down. This allows transactions that do not require access
to the down subtable or rows to proceed, without causing a database crash or requiring a
database restart. If SHOWBLOCKS encounters down regions, it skips these regions, and
displays the percentage of total space that was skipped.
Aborting SHOWBLOCKS
You can abort the display at any time using the ABORT command.
ABORT
Example 1
The following example displays output from a SHOWBLOCKS command for two tables.
Since no options were specified, the default /s option was used. The default SHOWBLOCKS
display is 132 characters wide.
Example 2
The following is an example of SHOWBLOCKS output using the /L option to show statistics
for each block size, including row statistics. This example shows statistics for the WAL log and
for two user tables.
SHOWDEFAULTS
Purpose
Shows the default settings and the current context. The command displays the current default
radix for input and output, the current input, output, and error file names, and the SCOPE
defined. SHOWDEFAULTS also displays WAL log context information.
Syntax
{ SHOWDEFAULTS | SHOWD }
Example
Ferret ==> showd
The current setting of the Input Radix is Decimal
The current setting of the Output Radix is Decimal
Scope for the Defrag command is :
All Cylinders
On All AMP vprocs
Scope for the Packdisk command is :
All of the AMP vprocs
Scope for the Scandisk command is :
All of the AMP vprocs
WAL Log
All Tables
Scope for the Showblocks command is :
All Tables
WAL Log
All of the AMP vprocs
Scope for the Showspace command is :
All Tables
WAL Log
All of the AMP vprocs
Scope for the ShowFSP command is :
All Tables
All of the AMP vprocs
Scope for the Showwhere command is :
All Tables
WAL Log
All of the AMP vprocs
SHOWFSP
Purpose
The SHOWFSP command displays information about table storage space utilization.
SHOWFSP provides the following information on a per-vproc (per-AMP) basis:
• Table name
• Free space percentage (FSP), the amount of unused storage space on the cylinders used by the table
• Number of cylinders currently occupied by tables meeting SHOWFSP filtering criteria
• Number of cylinders that would be occupied by each of these tables after a PACKDISK command
SHOWFSP shows the number of cylinders that will be recovered or consumed if PACKDISK is
run to achieve a given percentage of free space on table cylinders.
The scope of SHOWFSP depends on the most recent options set by the SCOPE command,
and can include one or more specific tables, one or more specific vprocs, or all tables in the
system.
Syntax
{ SHOWFSP | SHOWF } [ -a ] [ -c cyls ] [ -d fsp ] [ -m fsp ] [ -r cyls ] [ -v ]
where:
Syntax element …
Is the …
-a
all tables option which displays all tables meeting the criteria of -c, -d, and -m options. If you do not
specify any of those options, they will assume their default values. Whether specified or not, the -r
option automatically is overridden and set to the most negative integer known to Ferret.
Note: Regardless of options, a table must occupy at least one full cylinder or be the first table on a
cylinder to be considered for displaying.
If you specify the -a option, it will consider both of the cases:
• Where PACKDISK would tighten the packing (and thus free up cylinders)
• Where PACKDISK would loosen the packing (and thus consume free cylinders).
If you do not specify the -a option, the default is to consider only tables that would tighten the
packing and ignore the other case.
The number of cylinders affected by the PACKDISK are displayed either as a positive or negative
value, depending on whether cylinders are consumed or freed. If cylinders are freed, the value is
displayed as a positive. If cylinders are consumed, the value is displayed as a negative.
You can qualify tables further by specifying the -a option in conjunction with the other options. See
the examples for useful ways of using the -a option.
-c cyls
number of cylinders per AMP that must be exceeded by a table to be considered for display.
The -c cyls option follows this priority order:
• If -c cyls is specified, use it unless cyls is 0, in which case cyls defaults to 1 automatically.
• If -c cyls is not specified, it automatically defaults to 0.
You can further qualify tables by specifying the -c option in conjunction with the other options.
-d fsp
the desired FSP after packing.
The default follows this priority order:
• If -d fsp is specified, use it.
• If -d fsp is not specified and a table-level FSP exists, use the table-level FSP.
• Otherwise, use the system-wide default FSP as defined in the DBS Control GDO.
Note: This option allows SHOWFSP to display tables that would free up cylinders on each AMP if you
ran PACKDISK with this FSP. If specified in conjunction with the -a option, SHOWFSP also considers
tables that consume free cylinders. You can qualify tables further by specifying the -d fsp option in
conjunction with the other options.
-m fsp
minimum current FSP a table must exceed to qualify for display.
The default is 0.
Note: This option allows SHOWFSP to ignore tables whose current average FSP is under a certain
FSP value.
The average is calculated as a floating point value, and the option is input as an integer. Therefore,
often the FSP appears to act as a minimum. For example, an average FSP of 10.1 is greater than a
minimum FSP of 10, whereas an average FSP of 10 is not greater than the minimum FSP of 10.
Therefore, a table with an average FSP of 10.1 qualifies, whereas a table with an average FSP of 10
would not qualify.
You can qualify tables further by specifying the -m fsp option in conjunction with the other options. A short sketch of this comparison follows this table.
Syntax element …
Is the …
-r cyls
the number of recoverable cylinders that must be exceeded to qualify the table for display.
The default follows this priority order:
• If the -a option is specified, set -r cyls to the most negative integer known to Ferret.
• If -r cyls is specified, use it unless cyls is 0, in which case cyls is defaulted to 1 automatically.
• If -r cyls is not specified, it is defaulted to 0 automatically.
You can qualify tables further by specifying the -r cyls option in conjunction with the other options,
except for the -a option, in which case the -r cyls option is handled in the manner noted above.
-v
verbose mode.
The default is Off.
This is for debugging purposes and produces extremely detailed and voluminous output. Teradata
does not recommend this option for normal use.
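The -m test described above compares a floating-point average FSP against an integer minimum with a strict greater-than. The following Python sketch only restates that comparison; it is not Ferret's implementation:

def qualifies_for_display(average_fsp, minimum_fsp):
    # The average FSP is a floating-point value; the -m value is an integer.
    # Only a strictly greater average qualifies the table for display.
    return average_fsp > minimum_fsp

print(qualifies_for_display(10.1, 10))   # True  -> displayed
print(qualifies_for_display(10.0, 10))   # False -> not displayed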
Usage Notes
All arguments and displays are on a per-AMP basis. Without parameters, SHOWFSP displays
all tables with recoverable cylinders, assuming the defaults used by PACKDISK. For more
details as to how the defaults of each option are determined, see their respective sections
above.
SHOWFSP provides only an approximation of the number of cylinders that can be freed or
consumed by PACKDISK, so the number of recoverable or consumed cylinders are subject to
small errors. This small error is because SHOWFSP assumes a continuous medium, meaning
that part of a data block can reside at the end of one cylinder while the remaining part can
reside at the beginning of the next cylinder. However, PACKDISK packs cylinders based on
discrete boundaries, meaning a data block only can be on a cylinder in its entirety or not be on
the cylinder at all.
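The continuous-medium estimate described above can be written as a one-line calculation. In the sketch below the cylinder size in sectors is an assumed figure, and the current-FSP value is taken loosely from the employees.t1 row in Example 2; the sketch is illustrative only and is not how PACKDISK itself works:

import math

def cylinders_after_packdisk(used_space, cylinder_size, desired_fsp):
    # Continuous-medium estimate: assume data blocks can straddle cylinder
    # boundaries, so only total used space and usable space per cylinder
    # matter. desired_fsp is the target free space percentage.
    usable_per_cylinder = cylinder_size * (1.0 - desired_fsp / 100.0)
    return math.ceil(used_space / usable_per_cylinder)

# Roughly the employees.t1 row from Example 2: 43 cylinders at about 17%
# current free space, repacked to a desired FSP of 10%. The 3872 sectors
# per cylinder is an assumed figure used only to make the example concrete.
current_cylinders = 43
used_space = current_cylinders * 3872 * (1.0 - 0.17)
estimate = cylinders_after_packdisk(used_space, 3872, 10)
print(current_cylinders - estimate)   # about 3 recoverable cylinders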
Examples
For the following examples, assume the following:
• The default system-wide FSP is set at 15%.
• The table master.backup has a table-level FSP of 25%.
• The table payroll.phoenixariz has a table-level FSP of 5%.
• All other tables do not have a table-level FSP assigned to them.
• The SCOPE is set to vproc 1 (by issuing a SCOPE VPROC 1 command).
Example 1
Example 1 shows all tables that could be affected by a PACKDISK run against the current
default FSPs, even if they will not recover any cylinders.
showfsp -a
Fri Jun 20, 2003 14:41:15 : showfsp will be started
                            On AMP vproc(s) 1
Fri Jun 20, 2003 14:41:15 : to find all the tables specified greater than 0 cylinders
Fri Jun 20, 2003 14:41:15 : which could free up/consume any number of cylinders
Fri Jun 20, 2003 14:41:15 : only tables with a current FSP greater than 0%
Fri Jun 20, 2003 14:41:15 : will be considered
Do you wish to continue based upon this scope?? (Y/N)
y
Fri Jun 20, 2003 14:41:42 : ShowFsp has been started
                            On AMP vproc(s) 1
vproc 1 (0001) response
There are 10 tables larger than 0 cylinders on amp 1
Database            Table           fsp   Recoverable   Current
Name                Name            %     Cylinders     Cylinders
------------------  --------------  ----  -----------   ---------
DBC                 TVM             16    0             1
customers           t0              17    0             35
employees           t1              17    1             43
employees_address   t2              17    0             9
customers_address   t3              16    0             8
PAYROLL             losangelesca    17    0             6
PAYROLL             albanyny        17    0             5
MASTER              backup          27    0             11
MASTER              stocks          16    0             32
PAYROLL             phoenixariz     6     0             41
1 of 1 vprocs responded with the above tables fitting the criteria
ShowFsp has completed
Based on the SHOWFSP results, if a PACKDISK command is executed without specifying an
FSP, SHOWFSP predicts that the table employees.t1 will free up one cylinder. All other tables
will be unaffected.
Example 2
Example 2 shows all tables that could free up more than one cylinder per AMP if executing a
PACKDISK with an FSP of 10%.
showfsp -d 10 -r 1
Fri Jun 20, 2003 16:07:29 : showfsp will be started
                            On AMP vproc(s) 1
Fri Jun 20, 2003 16:07:29 : to find all the tables specified greater than 0 cylinders
Fri Jun 20, 2003 16:07:29 : which could free up more than 1 cylinders
Fri Jun 20, 2003 16:07:29 : if packed to a desired FSP of 10%.
Fri Jun 20, 2003 16:07:29 : only tables with a current FSP greater than 0%
Fri Jun 20, 2003 16:07:29 : will be considered
Do you wish to continue based upon this scope?? (Y/N)
y
Fri Jun 20, 2003 16:07:30 : ShowFsp has been started
                            On AMP vproc(s) 1
vproc 1 (0001) response
There are 10 tables larger than 0 cylinders on amp 1
Database     Table      fsp   Recoverable   Current
Name         Name       %     Cylinders     Cylinders
-----------  ---------  ----  -----------   ---------
customer     t0         17    2             35
employees    t1         17    3             43
MASTER       backup     27    2             11
MASTER       stocks     16    2             32
1 of 1 vprocs responded with the above tables fitting the criteria
ShowFsp has completed
Note: The following command would result in the same output:
showfsp -d 10 -r 0
If you omit the -r option, then SHOWFSP shows all tables that could free up one or more
cylinders.
Example 3
Example 3 shows all tables that could free up or consume any amount of cylinders per AMP if
executing a PACKDISK command with an FSP of 10%.
showfsp -a -d 10
Fri Jun 20, 2003 16:10:05 : showfsp will be started
                            On AMP vproc(s) 1
Fri Jun 20, 2003 16:10:05 : to find all the tables specified greater than 0 cylinders
Fri Jun 20, 2003 16:10:05 : which could free up/consume any number of cylinders
Fri Jun 20, 2003 16:10:05 : if packed to a desired FSP of 10%.
Fri Jun 20, 2003 16:10:05 : only tables with a current FSP greater than 0%
Fri Jun 20, 2003 16:10:05 : will be considered
Do you wish to continue based upon this scope?? (Y/N)
y
Fri Jun 20, 2003 16:10:06 : ShowFsp has been started
                            On AMP vproc(s) 1
vproc 1 (0001) response
There are 10 tables larger than 0 cylinders on amp 1
Database            Table           fsp   Recoverable   Current
Name                Name            %     Cylinders     Cylinders
------------------  --------------  ----  -----------   ---------
DBC                 TVM             16    0             1
customers           t0              17    2             35
employees           t1              17    3             43
employees_address   t2              17    0             9
customer_address    t3              16    0             8
PAYROLL             losangelesca    17    0             6
PAYROLL             albanyny        17    0             5
MASTER              backup          27    2             11
MASTER              stocks          16    2             32
PAYROLL             phoenixariz     6     -2            41
1 of 1 vprocs responded with the above tables fitting the criteria
ShowFsp has completed
Example 4
Example 4 shows all tables that could free up or consume any amount of cylinders per AMP if
executing a PACKDISK command with an FSP of 5%. Consider only tables that currently
occupy more than 10 cylinders per AMP and have a current FSP of greater than 5%.
showfsp -a -d 5 -c 10 -m 5
Fri Jun 20, 2003 16:16:40 : showfsp will be started
                            On AMP vproc(s) 1
Fri Jun 20, 2003 16:16:40 : to find all the tables specified greater than 10
Fri Jun 20, 2003 16:16:40 : cylinders which could free up/consume any number of
Fri Jun 20, 2003 16:16:40 : cylinders if packed to a desired FSP of 5%.
Fri Jun 20, 2003 16:16:40 : only tables with a current FSP greater than 5%
Fri Jun 20, 2003 16:16:40 : will be considered
Do you wish to continue based upon this scope?? (Y/N)
y
Fri Jun 20, 2003 16:16:41 : ShowFsp has been started
                            On AMP vproc(s) 1
vproc 1 (0001) response
There are 5 tables larger than 10 cylinders on amp 1
Database     Table         fsp   Recoverable   Current
Name         Name          %     Cylinders     Cylinders
-----------  ------------  ----  -----------   ---------
customer     t0            17    4             35
employees    t1            17    5             43
MASTER       backup        27    2             11
MASTER       stocks        16    3             32
PAYROLL      phoenixariz   6     0             41
1 of 1 vprocs responded with the above tables fitting the criteria
ShowFsp has completed
Example 5
Example 5 shows all tables that could free up more than three cylinders per AMP if executing
a PACKDISK command with an FSP of 7%. Consider only tables with a current FSP of more
than 10% and currently occupying more than three cylinders per AMP.
showfsp -d 7 -c 3 -m 10 -r 3
Fri Jun 20, 2003 16:22:00 : showfsp will be started
                            On AMP vproc(s) 1
Fri Jun 20, 2003 16:22:00 : to find all the tables specified greater than 3 cylinders
Fri Jun 20, 2003 16:22:00 : which could free up more than 3 cylinders
Fri Jun 20, 2003 16:22:00 : if packed to a desired FSP of 7%.
Fri Jun 20, 2003 16:22:00 : only tables with a current FSP greater than 10%
Fri Jun 20, 2003 16:22:00 : will be considered
Do you wish to continue based upon this scope?? (Y/N)
y
Fri Jun 20, 2003 16:22:01 : ShowFsp has been started
                            On AMP vproc(s) 1
vproc 1 (0001) response
There are 9 tables larger than 3 cylinders on amp 1
Database     Table    fsp   Recoverable   Current
Name         Name     %     Cylinders     Cylinders
-----------  -------  ----  -----------   ---------
employees    t1       17    4             43
1 of 1 vprocs responded with the above tables fitting the criteria
ShowFsp has completed
Example 6
Example 6 shows all tables that could free up one or more cylinders per AMP if executing a PACKDISK command at an FSP of 0%. Consider only tables with a current FSP of 20% or more.
showfsp -d 0 -m 20
Fri Jun 20, 2003 16:53:55 : showfsp will be started
                            On AMP vproc(s) 1
Fri Jun 20, 2003 16:53:55 : to find all the tables specified greater than 0 cylinders
Fri Jun 20, 2003 16:53:55 : which could free up more than 0 cylinders
Fri Jun 20, 2003 16:53:55 : if packed to a desired FSP of 0%.
Fri Jun 20, 2003 16:53:55 : only tables with a current FSP greater than 20%
Fri Jun 20, 2003 16:53:55 : will be considered
Do you wish to continue based upon this scope?? (Y/N)
y
Fri Jun 20, 2003 16:53:56 : ShowFsp has been started
                            On AMP vproc(s) 1
vproc 1 (0001) response
There are 10 tables larger than 0 cylinders on amp 1
Database     Table    fsp   Recoverable   Current
Name         Name     %     Cylinders     Cylinders
-----------  -------  ----  -----------   ---------
MASTER       backup   27    2             11
1 of 1 vprocs responded with the above tables fitting the criteria
ShowFsp has completed
Another useful interpretation of this output is the following:
Show all tables that could free up more than 20% of their cylinders per AMP if packed at an
FSP of 0%.
SHOWSPACE
Purpose
The SHOWSPACE command shows storage space utilization. It displays the following
information based on the defined scope. For more information, see “SCOPE” on page 539.
• Number of cylinders allocated for permanent, journal, temporary, spool, DEPOT, and WAL log data.
• Average utilization per cylinder for permanent, journal, temporary, DEPOT, and spool cylinders.
• Number and percentage of available free cylinders.
Syntax
{ SHOWSPACE | SHOWS } [ /S | /M | /L ]
where:
Syntax element …
Specifies...
/S
a one-line summary display for each AMP as defined by the SCOPE
command.
This is the default.
/M
Same as the /L option.
/L
both the one-line summary display for each AMP, and a one-line display for
each pdisk storage device, as defined by the SCOPE command.
Example
The following is an example of the output that SHOWSPACE generates:
SHOWWHERE
Purpose
The SHOWWHERE command displays information about cylinder allocation, grade, and
temperature.
Syntax
{ SHOWWHERE | SHOWW } [ /S | /M | /L ]
where:
Syntax Element
Description
/S
Displays a summary listing of the cylinders showing one line for every unique
combination of cylinder type and grade (PERM FAST, PERM MEDIUM,
PERM SLOW, WAL FAST, WAL MEDIUM, and so forth).
This is the default.
/M
Displays a medium length listing of the cylinders with one line for every
unique combination of cylinder type and grade per AMP (vproc).
/L
Displays a long listing of the cylinders with one line for every unique
combination of cylinder type and grade per AMP (vproc) per storage device.
Usage Notes
Display output is limited to cylinders in the current SCOPE (one or more vprocs, tables, or the
WAL log). If no scope has been specified, SHOWWHERE shows information for all cylinders.
For more information, see “SCOPE” on page 539.
SHOWWHERE is meaningful only if the system storage allocation method has been set to
ONE_DIMENSIONAL or TWO_DIMENSIONAL, and if the storage devices have already
been profiled. For more information, contact the Teradata Support Center.
Example
Ferret ==>
showwhere
SHOWWHERE result for all AMPs
Vproc  DISK   # of Cyls   Type         Grade     %      %HOT   %WARM   %COLD
-----  ----   ---------   ----------   ------   -----   ----   -----   -----
                      9   PERM         MEDIUM   100%      0%    100%      0%
                      4   WAL          FAST     100%      0%    100%      0%
                      6   DEPOT        FAST     100%      0%    100%      0%
                     28   WAL POOL     FAST     100%      0%    100%      0%
                      4   SPOOL POOL   FAST     100%      0%    100%      0%
                      2   JRNL         FAST     100%      0%    100%      0%
TOTAL                44                FAST      83%      0%    100%      0%
                      9                MEDIUM    17%      0%    100%      0%
Ferret ==>
showwhere /m
SHOWWHERE result for Each AMP

Vproc  DISK   # of Cyls   Type         Grade     %      %HOT   %WARM   %COLD
-----  ----   ---------   ----------   ------   -----   ----   -----   -----
0                     0   PERM         FAST       0%      0%      0%      0%
                      4   PERM         MEDIUM   100%      0%    100%      0%
                      0   PERM         SLOW       0%      0%      0%      0%
                      2   WAL          FAST     100%      0%    100%      0%
                      0   WAL          MEDIUM     0%      0%      0%      0%
                      0   WAL          SLOW       0%      0%      0%      0%
                      3   DEPOT        FAST     100%      0%    100%      0%
                      0   DEPOT        MEDIUM     0%      0%      0%      0%
                      0   DEPOT        SLOW       0%      0%      0%      0%
                     14   WAL POOL     FAST     100%      0%    100%      0%
                      0   WAL POOL     MEDIUM     0%      0%      0%      0%
                      0   WAL POOL     SLOW       0%      0%      0%      0%
                      2   SPOOL POOL   FAST     100%      0%    100%      0%
                      0   SPOOL POOL   MEDIUM     0%      0%      0%      0%
                      0   SPOOL POOL   SLOW       0%      0%      0%      0%
                      1   JRNL         FAST     100%      0%    100%      0%
                      0   JRNL         MEDIUM     0%      0%      0%      0%
                      0   JRNL         SLOW       0%      0%      0%      0%
1                     0   PERM         FAST       0%      0%      0%      0%
                      5   PERM         MEDIUM   100%      0%    100%      0%
                      0   PERM         SLOW       0%      0%      0%      0%
                      2   WAL          FAST     100%      0%    100%      0%
                      0   WAL          MEDIUM     0%      0%      0%      0%
                      0   WAL          SLOW       0%      0%      0%      0%
                      3   DEPOT        FAST     100%      0%    100%      0%
                      0   DEPOT        MEDIUM     0%      0%      0%      0%
                      0   DEPOT        SLOW       0%      0%      0%      0%
                     14   WAL POOL     FAST     100%      0%    100%      0%
                      0   WAL POOL     MEDIUM     0%      0%      0%      0%
                      0   WAL POOL     SLOW       0%      0%      0%      0%
                      2   SPOOL POOL   FAST     100%      0%    100%      0%
                      0   SPOOL POOL   MEDIUM     0%      0%      0%      0%
                      0   SPOOL POOL   SLOW       0%      0%      0%      0%
                      1   JRNL         FAST     100%      0%    100%      0%
                      0   JRNL         MEDIUM     0%      0%      0%      0%
                      0   JRNL         SLOW       0%      0%      0%      0%
TOTAL                44                FAST      83%      0%    100%      0%
                      9                MEDIUM    17%      0%    100%      0%
                      0                SLOW       0%      0%      0%      0%
Ferret ==>
showwhere /l
You have indicated that you want to do a long display for
the SHOWWHERE command. Please be aware that the output can be
extremely large.
Do you wish to continue?? (Y/N)
y

SHOWWHERE result for Each AMP

Vproc  DISK   # of Cyls   Type         Grade     %      %HOT   %WARM   %COLD
-----  ----   ---------   ----------   ------   -----   ----   -----   -----
0      0              0   PERM         FAST       0%      0%      0%      0%
                      4   PERM         MEDIUM   100%      0%    100%      0%
                      0   PERM         SLOW       0%      0%      0%      0%
                      2   WAL          FAST     100%      0%    100%      0%
                      0   WAL          MEDIUM     0%      0%      0%      0%
                      0   WAL          SLOW       0%      0%      0%      0%
                      3   DEPOT        FAST     100%      0%    100%      0%
                      0   DEPOT        MEDIUM     0%      0%      0%      0%
                      0   DEPOT        SLOW       0%      0%      0%      0%
                     14   WAL POOL     FAST     100%      0%    100%      0%
                      0   WAL POOL     MEDIUM     0%      0%      0%      0%
                      0   WAL POOL     SLOW       0%      0%      0%      0%
                      2   SPOOL POOL   FAST     100%      0%    100%      0%
                      0   SPOOL POOL   MEDIUM     0%      0%      0%      0%
                      0   SPOOL POOL   SLOW       0%      0%      0%      0%
                      1   JRNL         FAST     100%      0%    100%      0%
                      0   JRNL         MEDIUM     0%      0%      0%      0%
                      0   JRNL         SLOW       0%      0%      0%      0%
SUB-TOTAL            22                FAST      85%      0%    100%      0%
                      4                MEDIUM    15%      0%    100%      0%
                      0                SLOW       0%      0%      0%      0%
0      1              0   PERM         FAST       0%      0%      0%      0%
                      0   PERM         MEDIUM     0%      0%      0%      0%
                      0   PERM         SLOW       0%      0%      0%      0%
                      0   WAL          FAST       0%      0%      0%      0%
                      0   WAL          MEDIUM     0%      0%      0%      0%
                      0   WAL          SLOW       0%      0%      0%      0%
                      0   DEPOT        FAST       0%      0%      0%      0%
                      0   DEPOT        MEDIUM     0%      0%      0%      0%
                      0   DEPOT        SLOW       0%      0%      0%      0%
                      0   WAL POOL     FAST       0%      0%      0%      0%
                      0   WAL POOL     MEDIUM     0%      0%      0%      0%
                      0   WAL POOL     SLOW       0%      0%      0%      0%
                      0   SPOOL POOL   FAST       0%      0%      0%      0%
                      0   SPOOL POOL   MEDIUM     0%      0%      0%      0%
                      0   SPOOL POOL   SLOW       0%      0%      0%      0%
                      0   JRNL         FAST       0%      0%      0%      0%
                      0   JRNL         MEDIUM     0%      0%      0%      0%
                      0   JRNL         SLOW       0%      0%      0%      0%
SUB-TOTAL             0                FAST       0%      0%      0%      0%
                      0                MEDIUM     0%      0%      0%      0%
                      0                SLOW       0%      0%      0%      0%
1      0              0   PERM         FAST       0%      0%      0%      0%
                      0   PERM         MEDIUM     0%      0%      0%      0%
                      0   PERM         SLOW       0%      0%      0%      0%
                      0   WAL          FAST       0%      0%      0%      0%
                      0   WAL          MEDIUM     0%      0%      0%      0%
                      0   WAL          SLOW       0%      0%      0%      0%
                      0   DEPOT        FAST       0%      0%      0%      0%
                      0   DEPOT        MEDIUM     0%      0%      0%      0%
                      0   DEPOT        SLOW       0%      0%      0%      0%
                      0   WAL POOL     FAST       0%      0%      0%      0%
                      0   WAL POOL     MEDIUM     0%      0%      0%      0%
                      0   WAL POOL     SLOW       0%      0%      0%      0%
                      0   SPOOL POOL   FAST       0%      0%      0%      0%
                      0   SPOOL POOL   MEDIUM     0%      0%      0%      0%
                      0   SPOOL POOL   SLOW       0%      0%      0%      0%
                      0   JRNL         FAST       0%      0%      0%      0%
                      0   JRNL         MEDIUM     0%      0%      0%      0%
                      0   JRNL         SLOW       0%      0%      0%      0%
SUB-TOTAL             0                FAST       0%      0%      0%      0%
                      0                MEDIUM     0%      0%      0%      0%
                      0                SLOW       0%      0%      0%      0%
1      1              0   PERM         FAST       0%      0%      0%      0%
                      5   PERM         MEDIUM   100%      0%    100%      0%
                      0   PERM         SLOW       0%      0%      0%      0%
                      2   WAL          FAST     100%      0%    100%      0%
                      0   WAL          MEDIUM     0%      0%      0%      0%
                      0   WAL          SLOW       0%      0%      0%      0%
                      3   DEPOT        FAST     100%      0%    100%      0%
                      0   DEPOT        MEDIUM     0%      0%      0%      0%
                      0   DEPOT        SLOW       0%      0%      0%      0%
                     14   WAL POOL     FAST     100%      0%    100%      0%
                      0   WAL POOL     MEDIUM     0%      0%      0%      0%
                      0   WAL POOL     SLOW       0%      0%      0%      0%
                      2   SPOOL POOL   FAST     100%      0%    100%      0%
                      0   SPOOL POOL   MEDIUM     0%      0%      0%      0%
                      0   SPOOL POOL   SLOW       0%      0%      0%      0%
                      1   JRNL         FAST     100%      0%    100%      0%
                      0   JRNL         MEDIUM     0%      0%      0%      0%
                      0   JRNL         SLOW       0%      0%      0%      0%
SUB-TOTAL            22                FAST      81%      0%    100%      0%
                      5                MEDIUM    19%      0%    100%      0%
                      0                SLOW       0%      0%      0%      0%
TOTAL                44                FAST      83%      0%    100%      0%
                      9                MEDIUM    17%      0%    100%      0%
                      0                SLOW       0%      0%      0%      0%
TABLEID
Purpose
The TABLEID command displays the table number of the specified table when given the
database name and table name.
Syntax
TABLEID
"databasename.tablename"
"databasename"."tablename"
'databasename.tablename'
'databasename'.'tablename'
1102A005
where:
Syntax element …
Specifies…
databasename
the name of the database containing the table for which the table number will
be displayed.
tablename
the name of the table for which the table number will be displayed.
Usage Notes
A table is identified in the data dictionary by a table number (tvm.tvmid). Each table number
is unique across the whole system, rather than local to a database. Therefore, a table number
uniquely identifies a table in the system.
The TABLEID command displays the table number of the table specified by databasename and
tablename. The output of the TABLEID command is a numeric subtable identifier (tid), which
consists of three numbers:
•
The first two comprise the table number. This pair of numbers is used to uniquely identify
a table in the system.
•
The third is the typeandindex value, which specifies a kind of subtable, such as a table
header, data subtable, or a particular index subtable. TABLEID always returns a
typeandindex value of zero (0), which specifies the table header.
For more information on how to interpret a tid, see “Specifying a Subtable Identifier (tid)” on
page 499.
The following rules apply when specifying databasename and tablename:
•
The period (.) is required to separate the database name from the table name.
•
You must use either single ( ' ) or double ( " ) quotation marks when typing a database
name and table name. The results are the same.
•	You can specify a fully qualified table name using any one of the methods suggested in the
syntax diagram with the following exceptions:
•	The object name has an apostrophe, in which case you must specify the object name in
double quotes.
Valid examples include the following:
•	tableid "xyz.mark’s table"
•	tableid "xyz"."mark’s table"
Invalid examples include the following:
•	tableid 'xyz.mark’s table'
•	tableid "xyz".'mark’s table'
•	The object name has a period, in which case you must type the fully qualifying
tablename in the form of "database"."tablename" or 'database'.'tablename'.
Valid examples include the following:
•	tableid "xyz.0’s"."mark’s table.2.53.00"
•	tableid 'xyz'.'table.0'
Invalid examples include the following:
•	tableid "xyz.0’s.mark’s table.2.53.00"
•	tableid 'xyz.0’s.mark’s table.2.53.00'
Example
The following example shows output generated by TABLEID:
Ferret ==> tableid "mydatabase.mytable"
The table id for MYDATABASE.MYTABLE is
1 1217 0 (0x0001 0x4C1 0x0000)
Note: You could get the same results with the following commands:
tableid 'mydatabase.mytable'
tableid "mydatabase"."mytable"
tableid 'mydatabase'.'mytable'
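In this sample output, the first two numbers (1 1217, shown in hexadecimal as 0x0001 0x4C1) are the unique table number, and the trailing 0 is the typeandindex value, which indicates the table header subtable.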
UPDATE DATA INTEGRITY FOR
Purpose
The UPDATE DATA INTEGRITY FOR command allows you to update the disk I/O integrity
checksum levels of a table type you modified using the DBS Control utility. For more
information, see Chapter 12: “DBS Control (dbscontrol).”
Syntax
UPDATE DATA INTEGRITY FOR
ALL
SYSTEM
SYSTEM JOURNAL
SYSTEM LOGGING
USER
PERMANENT JOURNAL
TEMPORARY
1102A043
where:
Syntax element …
Updates the checksum values on …
ALL
all table types (system, system journal, system logging, user, permanent
journal, and temporary tables) simultaneously.
SYSTEM
all system table types (data dictionaries, session information, and so forth)
and table numbers 0, 1 through 0, 999.
SYSTEM JOURNAL
transient journals, change tables, and recovery journals.
SYSTEM LOGGING
system and resource usage (RSS) tables.
USER
all user tables (table numbers 0, 1001 through 16383, 65535), which
includes the following:
• Stored procedures
• User-defined functions
• Join indexes
• Hash indexes
This also includes fallback for these and secondary indexes for tables and
join indexes.
PERMANENT
JOURNAL
all permanent journal tables (table uniq[0] IDs 16384 through 32767).
TEMPORARY
all temporary and spool tables (table uniq[0] IDs 32768 through 65535),
which includes the following:
• Global temporary tables
• Volatile tables
Usage Notes
This command provides functionality similar to issuing an ALTER TABLE statement on an
individual table, where the data integrity level is changed with the IMMEDIATE option.
Performing this operation causes every table in the specified table type to be read, its
checksums updated, and its DBD rewritten. Rewriting the DBD brings the checksums up to
the correct data integrity level automatically. Because the data integrity level is updated
immediately, you can convert back to a lower data integrity level if you previously had to run
at a higher level because of corruption problems that have since been fixed.
To enable disk I/O integrity checking on an individual table, use the ALTER TABLE
statement. For detailed information, see “SQL Data Definition Language.”
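For comparison, a per-table change of this kind is made with an ALTER TABLE statement similar to the following sketch (the table name is illustrative, and the available CHECKSUM values and exact syntax are documented in SQL Data Definition Language):
ALTER TABLE mydatabase.mytable, CHECKSUM = DEFAULT IMMEDIATE;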
Example
To update the data integrity level for System Journal Tables, type the following:
UPDATE DATA INTEGRITY FOR SYSTEM JOURNAL
CHAPTER 15
Gateway Control (gtwcontrol)
The Gateway Control utility, gtwcontrol, modifies the default values in the fields of the
gateway control Globally Distributed Object (GDO).
On MP-RAS systems, there is one gateway per node.
On Windows and Linux systems, there can be multiple gateways per node if the gateways
belong to different host groups and listen on different IP addresses.
Audience
Users of Gateway Control include the following:
•
Field Service engineers
•
Network administrators
•
Teradata Database system administrators
Users should be familiar with the configuration of Teradata Database and the performance of
the gateway.
User Interfaces
gtwcontrol runs on the following platforms and interfaces:
Platform
Interfaces
MP-RAS
Command line
(gtwcontrol is located in the /tgtw/bin directory.)
Database Window
Windows
Command line (“Teradata Command Prompt”)
Database Window
Linux
Command line
Database Window
For general information on starting the utilities from different platforms and interfaces, see
Appendix B: “Starting the Utilities.”
Syntax
Use the following syntax to run Gateway Control:
gtwcontrol option(s)
Options are prefaced with hyphens.
For convenience, multiple options that do not require values can be passed after a single
hyphen:
gtwcontrol -dACEX
Options that require values, however, should each have a hyphen:
gtwcontrol -a off -k 20
For a list of options and their explanations, see “Gateway Control Options” on page 575.
Typing:
gtwcontrol -h
causes Gateway Control to display a list of available options on screen.
Gateway Host Groups
The following table describes gateway host groups on MP-RAS, Windows, and Linux.
On...
The gateway...
MP-RAS
runs in the control AMP on that node. As a result, only one gateway can
exist per node.
Windows
runs in its own vproc. Therefore, multiple gateways can exist on a node.
Linux
runs in its own vproc. Therefore, multiple gateways can exist on a node.
All gateways that belong to the same host group (HG) defined in the vconfig file have a single
Assign Task that manages a set of PEs. The set of PEs that the Assign Task manages is
determined by using the host number (HN) in the vconfig file and mapping the HN to the
HN defined by the Configuration utility. The Assign Task is responsible for assigning a new
session to the Parsing Engine (PE) having the least number of sessions associated with that PE.
For more information on the host group and host number, see the ADD HOST command in
the Configuration utility.
Example 1
Example 1 represents an MP-RAS system.
[Figure: Example 1 configuration. Node 1 and node 2 each run one gateway. PEs 16383 and 16382 have HN = 1; PEs 16384 and 16381 have HN = 2. The gateway on node 1 has HGID 1; the gateway on node 2 has HGID 2.]
The gateway (assign task) on node 1 manages sessions for PEs 16383 and 16382. The gateway
on node 2 manages sessions for PEs 16384 and 16381.
In example 1, all sessions connected to node 1 are processed by PEs 16383 and 16384. The
gateway looks for all connections which open to port 1025, or, in TCP/IP terms, *.1025. This
configuration supports multiple LAN cards, which could be multihomed.
The network administrator must make sure that a distinct name exists for node 1 and node 2
(such as, NODE1COP1 and NODE2COP1).
Example 2
Example 2 represents a Windows system running multiple gateways on each node.
[Figure: Example 2 configuration. The same PEs as Example 1 (16383 and 16382 with HN = 1; 16384 and 16381 with HN = 2), with two gateway vprocs per node: gateway 8192 (HGID 1) listening on 192.168.1.1, gateway 8191 (HGID 2) on 192.168.1.2, gateway 8190 (HGID 2) on 192.168.1.3, and gateway 8189 (HGID 1) on 192.168.1.4.]
In example 2, the same PE configuration exists as shown in Example 1. However, the
configuration is extended to support multiple host groups per node. Each gateway must have
a separate set of IP addresses. In example 2, the gateway running in vproc 8192 only looks for
network connections on IP 192.168.1.1 port 1025. The gateway in vproc 8190 only looks for
network connections on IP 192.168.1.3 port 3. Example 2 does not require separate LAN
cards, but it does require that the IP addresses be unique.
In example 2, the network administrator would create the following host names/IP entries:
hgid1_cop1   192.168.1.1
hgid1_cop2   192.168.1.2
hgid2_cop1   192.168.1.3
hgid2_cop2   192.168.1.4
What are the advantages of the configuration in Example 2 over that in Example 1?
•
In example 1, if node 2 is down, then a user cannot connect to PEs 16384 and 16381.
Sufficient nodes must therefore remain available to support the configuration even if a
node goes down.
•
In example 2, the problem does not arise, since each node has a different host group.
Why would you want to define multiple host groups? If you submit very similar SQL requests
and have a separate host group to process those requests, you would get a better cache hit rate.
By controlling where the PEs and gateways for a host group are located and controlling which
jobs go to which host groups, you might be able to balance the gateway and PE workload
better.
Gateway Log Files
Function
The gateway log files include the assign and connect files. Depending on the type of operating
system running on your system, these files are located in a default directory as described in the
table below.
On...
The Default Directory is...
MP-RAS
/tmp
Note: The location of the log files on MP-RAS can be changed by using the
-p option of gtwcontrol.
Windows
the temporary directory you specified for PDE at installation time.
Note: On Windows, the assign and connect files are combined into a single
file.
Linux
the temporary directory you specified for PDE at installation time.
Note: On Linux, the assign and connect files are combined into a single file.
Log File Naming Conventions
Gateway Control uses the following conventions to generate log file names.
MP-RAS
g[a|c]YDDDhhmmsslog
where:
Name element
Meaning
g
Literal g character. All Gateway Control log files have names
beginning with g.
[a|c]
The second character of the log file name is a, for assign process logs,
or c, for connect process logs.
Y
The last digit of the four-digit year.
DDD
The day number within the year.
hh
Hour of the day (24-hour clock).
mm
Minute of the hour.
ss
Second of the minute.
log
Literal log characters. All Gateway Control log files have names
ending with log.
Examples:
ga6117140755log
gc6117140754log
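Reading the first example, ga6117140755log is an assign process log (a) written in a year ending in 6, on day 117 of that year, at 14:07:55.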
Windows and Linux
Gtw_vvvvv_YYYYMMDDhhmmss.log
where:
Name element
Meaning
Gtw
Literal Gtw characters. All Gateway Control log files have names
beginning with Gtw.
vvvvv
Gateway vproc number.
YYYY
The four-digit year.
MM
Number of the month within the year.
DD
Number of the day within the month.
hh
Hour of the day (24-hour clock).
mm
Minute of the hour.
ss
Second of the minute.
log
Literal log characters. All Gateway Control log files have names
ending with file extension .log.
Example:
Gtw_08192_20060607154331.log
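In this example, the log was written by gateway vproc 8192 on June 7, 2006 at 15:43:31.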
Gateway Control Options
Gateway Control options are case sensitive and must include the hyphen prefix.
The following example gives the syntax for the help option which lists the syntax for all other
gateway control options:
gtwcontrol -h
When a gateway option requires a field value, that option includes a field name where you
define the value.
For example, to select the host group number 1 on which to perform an action, use the option
-g Hostnumber and type:
gtwcontrol -g 1
where the Hostnumber for the option is 1.
You can combine options by typing them, separated by a space.
For example, to set the maximum number of sessions for host group 1 to 600, type:
gtwcontrol -g 1 -s 600
The following table describes the options for the Gateway Control utility, and indicates the
operating systems that support each option.
Option
Description
Operating System
-a ExternalAuthentication
Enables or disables external authentication. The settings are as
follows:
All
• OFF rejects external authentication and accepts traditional
logons.
• ON accepts both external authentication and traditional
logons.
• ONLY accepts external authentication and rejects traditional
logons.
The factory default is ON.
For additional information on External Authentication, see
Security Administration.
-c connectiontimeout
Controls the logon message timeout in seconds. The Gateway
terminates any session for which the message in the logon
sequence is not received in a timely manner. The turnaround time
for any message during the logon should be less than the value in
the connectiontimeout setting.
All
The value ranges from 5 to 3600 seconds.
The factory default is 60 seconds.
-d
Displays current setting of the Gateway GDO.
All
-e Eventcnt
Specifies the number of event trace entries. The factory default is
512 on MP-RAS, and 500 on Windows and Linux.
All
-F
Note: This option is deprecated, and should not be used.
All
Toggles “append domain names” for authentication schemes in
which domain names are required to define user identities
uniquely. The factory default is OFF.
For information about authentication methods, see Security
Administration.
-f Logfilesize
Specifies the maximum log file size.
All
On Windows and Linux, the valid range is 1000 through
2147483647.
The factory default is 5000000.
-g Hostnumber
Specifies a host group to which the host-specific settings in this
invocation of gtwcontrol will be applied. If you do not specify this
option, the host settings are applied to all host groups.
All
Hostnumber is an integer from 0 through 1023 that identifies a
host group.
The host-specific options are: -a, -b, -c, -i, -k, -m, -r, -s, -t, -A, -F,
-C (on Windows and Linux only), -S (on MP-RAS only),
and -T (on Windows and Linux only).
-h
Displays help on gtwcontrol options.
All
-i InitialIothreads
Specifies the number of threads of each type that are started
initially for the processing of LAN messages. When adjusting the
number of threads to match the load, the number of threads of
each type will never be reduced below this number.
Windows and Linux
Two types of threads exist:
• One handles traffic from the client (that is, TCP/IP
connections).
• One handles traffic from the database (that is, the PDE
msgsystem).
The factory default is 5.
-k keepalivetimeout
Specifies how long the connection between the gateway and a
client remains idle before the operating system begins probing to
see if the connection has been lost.
All
keepalivetimeout specifies the time in minutes, and can be any
integer from 1 through 120.
When a connection has been idle for the specified number of
minutes, the gateway’s operating system will send a keepalive
message over the connection to see if there is a response from the
client’s operating system. If there is no response, the gateway’s
operating system repeats the probe several times.
If there continues to be no response from the client’s operating
system, the gateway’s operating system closes the connection,
disconnecting the session using it.
The specific number of probes and the time between probes vary
by operating system type. Some systems allow these values to be
changed when networking is configured. If these values have not
been changed, it typically takes about 10 minutes from the first
probe until a dead connection is closed. If the keepalivetimeout
value is 5, the actual time until the connection is closed is
approximately 15 minutes.
The factory default is 10 minutes.
-L
Toggles enable logons. The factory default is ON.
All
-m MaximumIothreads
Specifies the maximum number of threads per type. When
adjusting the number of threads to match the load, the number of
threads of each type will never be increased above this number.
Windows and Linux
Two types of threads exist:
• One handles traffic from the client (that is, TCP/IP
connections).
• One handles traffic from the database (that is, the PDE
msgsystem).
The factory default is 50.
-n EnableDeprecatedMessages
Enables deprecated, descriptive logon failure error messages.
All
EnableDeprecatedMessages can be one of the following
• NO causes Teradata Database to return only generic logon
failure error messages to users who attempt to logon
unsuccessfully. This is the default setting.
• YES returns less secure, more descriptive logon failure error
messages.
Database errors that are returned to users during unsuccessful
logon attempts often provide information regarding the cause of
the logon failure. This information could pose a security risk by
helping unauthorized users gain entry to the system.
By default, Teradata Database returns only a generic logon error
message. Users who attempt to log on to the system
unsuccessfully will see a message indicating only that the logon
failed, without indicating the reason why.
Regardless of this setting, more detailed information about logon
failures is always logged to the system logs and, on Linux and
Windows, to the DBC.eventlog system table, which system
administrators can use to determine the reasons for specific logon
failures. Administrators can also inspect these logs for repeated
unsuccessful logon attempts that might indicate attempts to
breach system security.
-o default
Indicates that the other options specified in this invocation of
gtwglobal should be saved as a set of user-defined default values.
These defaults take precedence over the “factory-set” gateway
control defaults, and will be used for new host groups and
gateway vprocs when the system is reconfigured.
All
Note: Host groups and vprocs that existed before the
reconfiguration retain their previous settings. To apply the
custom defaults to all existing host groups and vprocs, use the -z
option.
gtwcontrol -o default can be run several times to set
individual default values or groups of values. Subsequent runs do
not cancel previous runs.
To clear the user-defined defaults and restore the factory defaults,
use the -Z option together with -o default.
Note: The -o option cannot be used together with the -g or -v
options.
-p Pathname
Specifies path to gateway log files. The factory default is /tmp.
MP-RAS
-r IoThreadCheck
Determines the frequency in minutes that the gateway checks to
see if all the threads are busy.
Windows and Linux
If they are all busy, a new thread of the appropriate type is started
unless it will exceed the maximum number of threads set by the
-m option.
If more than one thread has not run during the IoThreadCheck
period, the gateway stops a thread, unless it will leave fewer
threads than are specified by the -i option.
Two types of threads exist:
• One handles traffic from the client (that is, TCP/IP
connections).
• One handles traffic from the database (that is, the PDE
msgsystem).
The factory default is 10 minutes.
-s Sessions
Specifies maximum sessions for host group.
MP-RAS
Limited to Teradata Database system resources.
Specifies maximum sessions per vproc.
Windows and Linux
The valid range is 1 through 2147483647.
The factory default is 600.
-t Timeoutvalue
Determines how long a disconnected session has to reconnect in
minutes. If the client has not reconnected within the specified
time period, the client is logged off automatically.
All
Note: During this time period, the session still counts against the
number of sessions allocated to a PE.
The factory default is 20 minutes.
-v Vprocnumber
Specifies a vproc to which the vproc-specific settings in this
invocation of gtwcontrol will be applied. If you do not specify this
option, the vproc-specific settings apply to all vprocs.
All
Vprocnumber is an integer from 0 through 16383 that identifies a
vproc.
The vproc-specific options are: -C, -D, -E, -H, -J, -K, -M, -O, -R, -S (on Windows and Linux only), -T (on MP-RAS only), -W, and -Y.
-z
Sets gateway control to apply the user-defined defaults created
with the -o option to all current host groups and vprocs.
All
-Z
Sets gateway control to apply the original factory defaults to all
current host groups and vprocs.
All
If a set of user-defined defaults created with the -o option exists,
it will still be applied to new host groups and vprocs after a
reconfiguration. To reset these user-defined defaults to the
original factory defaults, so that new host groups and vprocs use
the factory defaults, use the -Z option in conjunction with the
-o default option:
gtwcontrol -o default -Z
Caution:
The following options should be used only for debugging the gateway under the direction of
Teradata Support Center personnel.
Option
Description
Operating System
-l logonname
For remote gateway global access.
Windows and Linux
-A
Toggles assign tracing. The factory default is OFF.
All
-C
Toggles connect wait. The factory default is OFF.
MP-RAS
Toggles connection tracing. The factory default is OFF.
Windows and Linux
-D
Toggles no gtwdie. The factory default is OFF.
All
-E
Toggles event trace. The factory default is OFF.
Windows and Linux
The E event trace does not log the actions.
-H
Toggles connect heap trace. The factory default is OFF.
All
-I
Toggles interactive mode. The factory default is OFF.
Windows and Linux
-J
Toggles log LAN errors. The factory default is OFF.
Windows and Linux
Logs any LAN-related errors even when properly handled by the
gateway.
-K
Toggles session ctx lock trace. The factory default is OFF.
Windows and Linux
This option shows the session locking to make the session context
multiprocessor safe.
-M
Toggles message tracing. The factory default is OFF.
All
-O
Toggles output LAN header on errors. The factory default is OFF.
Windows and Linux
Causes an error message to be written to the gateway log file.
-P
Toggles assign heap trace. The factory default is OFF.
All
-R
Toggles xport log all. The factory default is OFF.
Windows and Linux
By default, the xport trace does not log every LAN operation. The
xport log all option causes all LAN operations to be logged.
This option only takes effect if the Y trace is on.
-S
Toggles assign wait. The factory default is OFF.
MP-RAS
Toggles the action log. The factory default is OFF.
Windows and Linux
The S option turns on the action trace. The S option only takes
effect if the E trace is on.
-T
Toggles connect trace. The factory default is OFF.
MP-RAS
Toggles allow gateway testing. The factory default is OFF.
Windows and Linux
-U
Toggles tdgss trace. The factory default is OFF.
All
Note: The -U option causes tdgss-related errors to be logged into
the gateway log file in order to diagnose problems.
-W
Toggles connect and assign wait. The factory default is OFF.
MP-RAS
Toggles wait for debugger to attach. The factory default is OFF.
Windows and Linux
-X
Toggles xport trace. The factory default is OFF.
All
-Y
Toggles handle trace. The factory default is OFF.
Windows and Linux
Changing Maximum Sessions Per Node
The number of supported sessions on MP-RAS, Windows, and Linux is based on the
following:
•
The number of allotted Parsing Engines (PEs)
•
Usage (resources consumed)
•
Number of streams when the Teradata Database system is built (MP-RAS only)
If your site runs jobs that are CPU or I/O intensive, you might find that a lower session limit
gives better performance. You cannot set the maximum sessions to a negative number.
To view the current maximum sessions per node, type:
gtwcontrol -d
To set the maximum sessions to 1000, type:
gtwcontrol -g 1 -s 1000
where 1 is the number of the host group, and 1000 is the maximum sessions you want.
On Windows and Linux, the new limit is effective immediately.
On MP-RAS, the following table describes when the new limit takes effect.
IF the new maximum session limit is...
THEN the new limit takes effect...
less than 600 or less than the limit at the last
Teradata Database restart (whichever is bigger)
immediately.
more than 600 and more than the limit at the
last Teradata Database restart
after the next Teradata Database restart.
The current maximum session limit will default
to the value at the last Teradata Database restart.
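As an illustration, using hypothetical values: if the limit at the last Teradata Database restart was 800, then gtwcontrol -g 1 -s 700 takes effect immediately (700 is less than 800), while gtwcontrol -g 1 -s 900 (more than 600 and more than 800) takes effect only after the next Teradata Database restart.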
gtwcontrol notifies you of the following:
•
Whether a Teradata Database restart is required for the new maximum session limit to
become effective
•
The current maximum sessions limit and the new maximum sessions limit if you did not
restart the Teradata Database after a change
If more sessions are active than the new maximum sessions limit allows, then no new sessions
are started. No new sessions can log on until the number of sessions is below the maximum
sessions limit.
CHAPTER 16
Gateway Global (gtwglobal, xgtwglobal)
The Gateway Global utility, xgtwglobal (gtwglobal on Windows and Linux), allows you to
monitor and control the sessions of Teradata Database LAN-connected users. The gateway
software runs as a separate operating system task and is the interface between the network and
the Teradata Database.
Client programs that communicate through the gateway to the Teradata Database can be
resident on the Teradata Database system, or they can be installed and running on network-attached workstations. Client programs that run on channel-attached hosts bypass the
gateway completely.
Teradata Database on MP-RAS supports one gateway per node.
Teradata Database on Windows and Linux supports multiple gateways per node. The gateways
must belong to different host groups and listen on different IP addresses.
The number of supported sessions is based on the following:
•
The number of allotted Parsing Engines (PEs)
•
Usage (resources consumed)
•
Number of streams available when the Teradata Database system is built (MP-RAS only)
Each logical network attachment requires at least one PE. Each PE can support up to 120
sessions.
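For example, a host group served by two PEs can support at most 240 sessions through its gateway.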
When all the PEs in the DBS configuration are offline, Gateway Global exits, and the following
message appears:
xgtwglobal cannot proceed as no PEs are online.
When all of the PEs in some of the host groups in the DBS configuration are offline, then
Gateway Global displays the following notice and continues to process information for the
host groups with at least one PE online:
NOTICE: xgtwglobal cannot process all the host groups as all the PEs on
one or more of the host groups are offline.
The number of sessions per gateway is defined using the gtwcontrol utility. For information
on configuring gateway options, see Chapter 15: “Gateway Control (gtwcontrol).”
Note: Gateway errors are handled in the same manner as other Teradata Database system
errors.
Audience
Users of Gateway Global include the following:
•
Field service representatives
•
Network administrators
•
Teradata Database system administrators
Users should be familiar with the administration of their network environment and the
Teradata Database software.
User Interfaces
Gateway Global runs on the following platforms and interfaces:
Platform
Interfaces
MP-RAS
Command line
(xgtwglobal)
(xgtwglobal is located in the /tgtw/bin directory.)
Database Window
Note: On MP-RAS Gateway Global includes an X11-based graphical user
interface. To run the utility without the GUI, use the -nw command-line option.
The -nw option is required for running Gateway Global from the Database
Window under MP-RAS:
start xgtwglobal -nw
Windows
Command line (“Teradata Command Prompt”)
(gtwglobal)
Database Window
Linux
Command line
(gtwglobal)
Database Window
For general information on starting the utilities from different platforms and interfaces, see
Appendix B: “Starting the Utilities.”
On MP-RAS, Gateway Global can display a windowed graphical user interface (GUI), or can
run as a command-line utility.
•
GUI (available on MP-RAS only)
In order for Gateway Global to display a GUI, there must be an X server running on the
local machine. To run Gateway Global in windowing mode, use the X11 -display option
to specify the name or IP address of the current workstation and display. For example:
xgtwglobal -display displayspec &
where displayspec is the name or IP address of the local machine, followed by a colon and
the server number, usually 0 or 0.0. For example:
/tgtw/bin/xgtwglobal -display myworkstation.mycompany.com:0.0 &
or
/tgtw/bin/xgtwglobal -display 141.206.34.213:0.0 &
For more information about using the Gateway Global GUI, see “Gateway Global
Graphical User Interface” on page 615.
•
Non-windowing mode
In non-windowing (command-line) mode, Gateway Global presents its own command
prompt. Interaction with the program is accomplished using individual commands typed
at the prompt. Running Gateway Global in non-windowing mode does not require an X
server on the local machine.
To run Gateway Global in non-windowing mode on MP-RAS, or from Database Window
(DBW) on any platform, use the -nw command-line option:
/tgtw/bin/xgtwglobal -nw
On Windows and Linux, Gateway Global runs only in non-windowing mode. It is not
necessary to use the -nw option on these platforms. To start Gateway Global on Windows
and Linux, at the command line type either gtwglobal or xgtwglobal.
Gateway Global Commands Overview
You use Gateway Global commands to perform the following functions:
•
Display network and session information
•
Administer users and sessions
•
Perform routine and special diagnostics
The following sections summarize the functions of each command, whether the command
requires the use of SELECT HOST, and any platform restrictions if the command is not
available on all platforms.
Following this section, each command is discussed separately in detail in alphabetical order.
User Names and Hexadecimal Notation
The Gateway Global utility can display and accept hexadecimal (hex) notation to represent
user names that are not representable in the ASCII or Latin-1 character sets:
•
If a non-printing character appears in a user name, the name is automatically displayed as
a single-quoted hexadecimal string representation of the Teradata Database internal
encoding, followed by the special character combination xn, signifying that the preceding
string represents a hex-format user name.
•
You can enter names in hex format using the same convention of surrounding the hex
name itself with single quotes, and following the closing quote with the xn character
combination.
Example
This example shows how a hexadecimal format name would be entered as part of the
DISPLAY USER command:
di us '46d64c4b454e'xn
User '46d64c4b454e'xn has 1 session
Session  PE     User              IP Adr
-------  -----  ----------------  -------------
1242     16382  '46d64c4b454e'xn  131.222.36.73
Enter gateway command or enter h for Help:
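In this example, each pair of hexadecimal digits in '46d64c4b454e' represents one byte of the Teradata Database internal encoding of the user name, and the trailing xn marks the quoted string as a hex-format name.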
Specifying a Host
Some commands require you to specify a host before you can initiate the command action. To
specify a host, use the SELECT HOST command.
Command
Function
Platform
SELECT HOST
Allows input of a specific host number to define
the scope used by a subsequent command
function.
All
Note: Selecting host 0 resets the host selection.
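For example, to scope subsequent commands to host group 52 and later clear that selection, you might enter the following (se is the abbreviated form of SELECT shown in the command examples later in this chapter):
se host 52
se host 0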
Displaying Network and Session Information
DISPLAY commands allow you to display your network configuration, sessions, and vprocs
associated with the gateway, plus information about specific sessions.
Command
Function
Platform
DISPLAY
DISCONNECT
Displays a list of sessions that have disconnected.
All
DISPLAY FORCE
Displays sessions that have been successfully
killed or aborted using Performance Monitoring
and Production Control (PMPC) abort process.
Windows and Linux
DISPLAY GTW
Displays all sessions connected to the gateway.
All
DISPLAY NETWORK
Displays network configuration information.
All
DISPLAY SESSION
Displays the following information on a specific
session: User name, IP address, TCP socket
number, state, event, action, partition, and
authentication method.
All
Requires using the SELECT HOST command.
Requires using the SELECT HOST command.
DISPLAY STATS
Displays the RSS statistics for the gateway vproc.
Windows and Linux
DISPLAY TIMEOUT
Displays the timeout value.
All
Requires using the SELECT HOST command.
DISPLAY USER
Displays the session number, PE number, User
name, IP address, and current connection status.
All
Requires using the SELECT HOST command.
Administering Users and Sessions
These commands allow you to control gateway traffic and access to the Teradata Database.
Command
Function
Platform
DISABLE LOGONS
Disables all logons to the Teradata Database
through the gateway.
All
Requires using the SELECT HOST command.
ENABLE LOGONS
Enables logons to the Teradata Database through
the gateway.
All
Requires using the SELECT HOST command.
DISCONNECT USER
Disconnects all sessions owned by a user.
MP-RAS
Requires using the SELECT HOST command.
DISCONNECT
SESSION
Disconnects a specific session.
KILL SESSION
Terminates a specific session.
MP-RAS
Requires using the SELECT HOST command.
All
Requires using the SELECT HOST command.
KILL USER
Terminates all sessions of a specific user.
All
Requires using the SELECT HOST command.
SET TIMEOUT
Sets a timeout value.
All
Requires using the SELECT HOST command.
Performing Special Diagnostics
The TRACE commands allow you to debug internal gateway errors or anomalies.
Command
Function
Platform
ENABLE TRACE
Records internal gateway events.
All
Requires using the SELECT HOST command.
DISABLE TRACE
Turns off the recording of event tracing and
writing to the gateway log files.
All
Requires using the SELECT HOST command.
FLUSH TRACE
Directs the gateway to write the contents of its
internal trace buffers to the gateway log files.
All
Requires using the SELECT HOST command.
Logging Sessions Off Using KILL
When you issue a KILL command, any outstanding requests are first aborted. The specified
session is logged off. The following table describes the KILL command behavior.
IF the session is currently...
THEN...
disconnected
the assign task attempts a log off by setting the KILL flag
to indicate that an attempt has been made to log off.
logged on
a kill message is sent to the connect task to abort any
outstanding requests and then to log off.
Note: Aborting an outstanding request could take a significant amount of time. Therefore,
killing a specific session or all of a user’s sessions does not necessarily free up the resources of
those sessions immediately. For more information on KILL commands, see “KILL SESSION”
on page 611 and “KILL USER” on page 612.
Getting Help
To list all the Gateway Global commands, type the following at the command line.
help
The following table describes the help command.
Command
Function
Platform
HELP
Displays a menu of all the gtwglobal commands
alphabetically, including the syntax for each
command.
All
Note: h can be used as a synonym for HELP on
the command-line.
Gateway Global Commands
The following sections describe the Gateway Global commands.
DISABLE LOGONS
Purpose
The DISABLE LOGONS command prevents users from logging on to the Teradata Database on
the network using the gateway for the selected host.
Syntax
DISABLE
DISA
LOGONS
LOGO
GT07B003
Usage Notes
The default setting of the logon flag is enabled. However, after a Teradata Database system
restart, all host groups that were disabled remain disabled.
Any application that attempts to log on after DISABLE LOGONS results in the following error
message:
8033: Logon is disabled.
Note: Sessions already logged on are not affected by the DISABLE LOGONS command.
Before using the DISABLE LOGONS command, you must select the host from which the
session is running using the SELECT HOST command. For information, see “SELECT
HOST” on page 613.
For additional information, see “ENABLE LOGONS” on page 606.
Example
To prevent network users from logging on through the gateway to the Teradata Database, type:
disa logo
DISABLE TRACE
Purpose
The DISABLE TRACE command turns off the recording of event tracing and writing to the
gateway log files.
Syntax
DISABLE
DISA
TRACE
TRAC
GT07B004
Usage Notes
The default setting of the trace flag is disabled.
Note: If tracing has been enabled and you do not use the DISABLE TRACE command to turn
it off, the file system may become full.
Before using the DISABLE TRACE command, you must select the host from which the session
is running using the SELECT HOST command. For information, see “SELECT HOST” on
page 613.
For additional information, see “ENABLE TRACE” on page 607.
For information on the gateway log files, see Chapter 15: “Gateway Control (gtwcontrol).”
Example
To turn off the recording of event tracing and writing to the gateway log files, type:
disa trac
No report is generated.
DISCONNECT SESSION
Purpose
The DISCONNECT SESSION command disconnects a specific session from the gateway. This
command is available only on MP-RAS.
Syntax
DISCONNECT
DISC
SESSION
ses_no
SE
GT07C006
where:
Syntax element...
Is the...
ses_no
session number (in decimal).
Note: To list the possible values for ses_no, use the DISPLAY GTW or DISPLAY NETWORK
command.
Usage Notes
Before using the DISCONNECT SESSION command, you must select the host from which
the session is running using the SELECT HOST command. For information, see “SELECT
HOST” on page 613.
The DISCONNECT SESSION command disconnects only the session identified by ses_no.
This command does not require the actual processor number containing the session to be
entered. After 20 minutes, the session is logged off, if the client has not reconnected.
Note: The 20-minute value is only a default. You can change the value using the gtwcontrol or
gtwglobal command.
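For example, gtwcontrol -g 1 -t 30 (the -t timeout option described in Chapter 15) would give disconnected sessions on host group 1 up to 30 minutes to reconnect before they are logged off.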
The DISCONNECT SESSION command is useful for testing application recovery when
reconnecting is required.
Example
To disconnect session 1000, type the following from the gateway control utility:
disc se 1000
The following statement appears:
Session 1000 scheduled to be disconnected.
DISCONNECT USER
Purpose
The DISCONNECT USER command disconnects all sessions owned by a specific user
originating from a specific host. This command is available only on MP-RAS.
Syntax
DISCONNECT
DISC
USER
user_name
US
GT07C007
where:
Syntax element...
Is the...
user_name
name of the user.
Note: To list the valid users for user_name, use “DISPLAY GTW” on page 595 or “DISPLAY
SESSION” on page 599.
Usage Notes
Before using the DISCONNECT USER command, you must select the host from which the
sessions are running using the SELECT HOST command. For information, see “SELECT
HOST” on page 613.
The DISCONNECT USER command forces all sessions owned by the user_name to
disconnect. This command does not require the actual processor number containing the
sessions to be entered, just the user_name. After 20 minutes, the sessions are logged off, if the
client has not reconnected.
Note: The 20-minute value is only a default. You can change the value using the gtwcontrol or
gtwglobal command.
The DISCONNECT USER command is useful for testing application recovery when
reconnecting is required.
Example
disc us systemfe
User Systemfe has 2 sessions disconnected
SessionNo
---------
1000
1001
DISPLAY DISCONNECT
Purpose
The DISPLAY DISCONNECT command returns a list of sessions that have disconnected and
not yet reconnected.
Syntax
DISPLAY
DISCONNECT
DI
DISC
GT07B022
Usage Notes
The context for these sessions is maintained in the gateway control assign task. If the client
associated with the session fails to reconnect in the time allotted, the session is logged off.
Before using the DISPLAY DISCONNECT command, you must select the host from which
the session is running using the SELECT HOST command. For information, see “SELECT
HOST” on page 613.
Example
To display disconnected sessions, type:
di disc
The following report appears.
Host Group 52 has 5 disconnected sessions:
Session  PE     User  TimeToLive
1050     16383  DBC   1185 secs
1051     16383  DBC   1185 secs
1052     16383  DBC   1185 secs
1053     16383  DBC   1185 secs
1054     16383  DBC   1185 secs
DISPLAY FORCE
Purpose
The DISPLAY FORCE command displays sessions that have been forced off because of a KILL
command or a Performance Monitoring and Production Control (PMPC) ABORT SESSION
command. This command is available only on Windows and Linux.
Syntax
DISPLAY
FORCE
DI
FO
FF07D356
Usage Notes
The Teradata Database system displays this information:
•
Host number
•
Number of sessions
•
Session ID number
•
User name associated with the session
•
The length of time before the session information is discarded
The gateway retains the session information for the standard timeout period after the client
has been logged off. The gateway returns an 8055 error when the client reconnects:
Session forced off by PMPC or gtwglobal
After the timeout period expires, the gateway discards the session information.
Example
To display a session that has been killed or aborted, type:
DI FO
Host Group 1 has 2 forced sessions
Session User TimeToLive
1002    DBC  916 secs
1004    DBC  922 secs
DISPLAY GTW
Purpose
The DISPLAY GTW command displays all sessions connected to the gateway.
Syntax
DISPLAY
GTW
DI
gtwid
ALL
GT07C023
where:
Syntax element...
Is the...
gtwid
number of the GTW processor containing the sessions you want to
display.
ALL
keyword that displays all sessions regardless of selected Host Group.
Note: To list all possible values for gtwid, use “DISPLAY NETWORK” on page 597.
Usage Notes
For the selected gateway, the Teradata Database system displays this information:
•
Host number
•
Gateway vproc number
•
Number of sessions on the gateway
•
Session ID number
•
Internet address
•
User name associated with the session
•
Status
Example 1
On Windows or Linux, to display a gateway (for example, gateway 8192), type.
di gtw 8192
GTW 8192 has 18 sessions
Session  PE     User  IP Adr         Status
1001     16383  DBC   141.206.35.30  CONNECTED
1003     16383  DBC   141.206.35.30  CONNECTED
1005     16383  DBC   141.206.35.30  CONNECTED
1006     16382  DBC   141.206.35.30  CONNECTED
1007     16383  DBC   141.206.35.30  CONNECTED
1008     16382  DBC   141.206.35.30  CONNECTED
1009     16383  DBC   141.206.35.30  CONNECTED
1010     16382  DBC   141.206.35.30  CONNECTED
1011     16383  DBC   141.206.35.30  CONNECTED
1012     16382  DBC   141.206.35.30  CONNECTED
1013     16383  DBC   141.206.35.30  CONNECTED
1014     16382  DBC   141.206.35.30  CONNECTED
1015     16383  DBC   141.206.35.30  CONNECTED
1016     16382  DBC   141.206.35.30  CONNECTED
1017     16383  DBC   141.206.35.30  CONNECTED
1018     16382  DBC   141.206.35.30  CONNECTED
1019     16383  DBC   141.206.35.30  CONNECTED
1020     16382  DBC   141.206.35.30  CONNECTED
Example 2
To display all of the gateway sessions on host 1, type.
di gtw all
System WUSRH122051-JEA has 1 connected session
HG   GTW    Session  PE     User    IP Adr           Status
---  -----  -------  -----  ------  ---------------  ---------
1    8193   1005     16383  PERM01  141.206.38.109   CONNECTED
DISPLAY NETWORK
Purpose
The DISPLAY NETWORK command displays information about your network configuration
and its associated gateway.
Syntax
DISPLAY
DI
;
NETWORK
NE
LONG
LON
host_no
GT07C009
where:
Syntax element...
Is the...
host_no
host number (in decimal).
LONG
display of the gateway statistics for the particular network.
The LONG option applies to NT only.
Usage Notes
The default setting for the DISPLAY NETWORK command is the short form report, which
gives this information:
•
Host number(s)
•
Total number of PEs assigned to host
•
Total number of gateways assigned to host
•
Total number of active sessions on the host
•
Total number of disconnected sessions
•
Tracing information (either Enabled or Disabled)
•
Logon event information (either Enabled or Disabled)
The long form displays the following information in addition to the information displayed in
the short form report for the network:
•
Vproc number in a group (gateway)
•
Total number of sessions connected to the listed vproc
•
Total number of sessions connected to the listed PE
If host_no is used to request a specific host number, the long form cannot be requested.
Example 1
To display information about your network configuration and its associated gateway in the
short form, type:
di ne
Host 2 has 0 session(s) over 1 GTW(s) and 1 PE(s)
( 0 Active / 0 Disconnected / 0 Forced)
Gateway  Sessions  Logon   Trace
8193     0         Enable  Disable

Host 1 has 20 session(s) over 1 GTW(s) and 2 PE(s)
( 18 Active / 0 Disconnected / 2 Forced)
Gateway  Sessions  Logon   Trace
8192     18        Enable  Disable
Example 2
To display information about your network configuration and its associated gateway in the
long form, type:
di ne long
Host 52 has 10 session(s) over 1 GTW(s) and 2 PE(s)
( 10 Active / 0 Disconnected )
PE       Sessions
16382    5
16383    5
Gateway  Sessions  Logon    Trace
16384    20        Enabled  Enabled
DISPLAY SESSION
Purpose
The DISPLAY SESSION command displays information for a specified session.
Syntax
DISPLAY
SESSION
DI
ses_no
SE
LONG
LON
GT07C024
where:
Syntax element
Description
ses_no
The number (in decimal) of the session for which information will be
displayed.
LONG
Displays additional detailed information for the specified session.
Note: To list the possible values for ses_no, use “DISPLAY GTW” on page 595.
Usage Notes
Before using the DISPLAY SESSION command, you must select the host from which the
session is running using the SELECT HOST command. For information, see “SELECT
HOST” on page 613.
By default, the Teradata Database system displays the short-form report containing the
following information:
•
User Name
•
IP address
•
TCP socket number
•
State
•
Event
•
Action
•
Partition
•
Authentication method
The Authentication field has the values shown in the following table.
The value...
Indicates...
Database
that the database provided the authentication method.
Note: This was the normal logon method before the implementation of single
sign on.
TDGSS-returned
method
an authentication method returned by the Teradata Database Generic Security
Service (TDGSS).
For more information on TDGSS, see Security Administration.
The long-form report gives a detailed connect display for a selected session. It includes all of
the information displayed by the default short format, and adds the following information:
•
The Account associated with the User Name, if any.
•
Teradata Database system mailbox information.
•
On Windows and Linux, I/O statistics for the session.
The long-form report is useful for field service personnel when diagnosing gateway problems.
For information on the descriptions of the valid states, events, and actions for the DISPLAY
SESSION command, see Appendix C: “Session States, Events, and Actions.”
Displaying a Session
To display a session, do the following:
1
To select a host (for example, host 52), type:
se host 52
The Teradata Database system displays the following:
52>
2
To display a session (for example, session 1000), type:
di se 1000
Example 1
To display session 1040 in the default (or short) form, type:
di se 1040
gtwglobal displays the following output:
Session 1040 connected to GTW 8193 is assigned to PE 16383 of host 1
User Name                       IP Addr          Port
------------------------------  ---------------  ----------
DBC                             141.206.38.109   27399
State                          Event                     Action
-----------------------------  ------------------------  -----------------
CS_CLIENTWAITNOTRAN            CE_STARTMSGRSPNOTRAN      CA_SENDDBSRSP
Partition         Authentication
----------------  --------------
DBC/SQL           DATABASE
Example 2
To display session 1040 in the long form, type:
di se 1040 lon
gtwglobal displays the following output:
Session 1040 connected to GTW 8193 is assigned to PE 16383 of host 1
User Name             Account               IP Addr          Port
--------------------  --------------------  ---------------  ----------
DBC                                         141.206.38.109   27399
State                          Event                     Action
-----------------------------  ------------------------  -----------------
CS_CLIENTWAITNOTRAN            CE_STARTMSGRSPNOTRAN      CA_SENDDBSRSP
Partition         Authentication
----------------  --------------
DBC/SQL           DATABASE
StrMbx:  01 00 3fff 0050 0000 02 00
CntMbx:  02 00 3fff 0000 040d 02 00
AbtMbx:  02 00 3fff 0000 030d 43 53
HostMessageReads  : 10
HostBlockReads    : 5
HostReadBytes     : 699
DbsMessageReads   : 2
HostMessageWrites : 5
HostBlockWrites   : 5
HostWriteBytes    : 1245
DbsMessageWrites  : 2
DISPLAY STATS
Purpose
The DISPLAY STATS command allows you to display the statistics for the gateway vproc. This
command is available only on Windows and Linux.
Syntax
DISPLAY
STATS
gtwid
DI
ST
ALL
GS02A013
where:
Syntax element...
Is the...
gtwid
specified gateway vproc whose statistics are displayed.
ALL
selected host group.
If you select a host group, gtwglobal returns the statistics for all the
gateway vprocs in the selected host group.
If you do not select a host group, gtwglobal returns the statistics for all
gateway vprocs.
Example
di st all
Gtw Vprocid:        8193
HostMessageReads:   382
HostBlockReads:     125
HostReadBytes:      3829382
DbsMessageReads:    108
HostMessageWrites:  125
HostBlockWrites:    125
HostWriteBytes:     1167
DbsMessageWrites:   104

Gtw Vprocid:        8192
HostMessageReads:   2220
HostBlockReads:     1107
HostReadBytes:      92919
DbsMessageReads:    1103
HostMessageWrites:  1107
HostBlockWrites:    1107
HostWriteBytes:     3950263
DbsMessageWrites:   1103
DISPLAY TIMEOUT
Purpose
The DISPLAY TIMEOUT command allows you to display the logoff delay in minutes for
disconnected sessions.
Syntax
DISPLAY
DI
TIMEOUT
TI
GT07C028
Example
Before using the DISPLAY TIMEOUT command, you must select the host from which the
session is running using the SELECT HOST command. For information, see “SELECT
HOST” on page 613.
To display the timeout logoff delay, type:
di ti
Host 1 Timeout Value: 20 minutes
DISPLAY USER
Purpose
The DISPLAY USER command returns a list of connected sessions whose names match the
user name.
Syntax
DISPLAY
USER
DI
user_name
US
GT07C025
where:
Syntax element...
Is the...
user_name
name of the user.
Usage Notes
Before using the DISPLAY USER command, you must select the host from which the sessions
are running using the SELECT HOST command. For information, see “SELECT HOST” on
page 613.
The DISPLAY USER command displays this information:
•
Session number
•
Parsing Engine that the session is assigned to
•
User name
•
IP address
•
Status indicating the following (Windows and Linux only):
•
Connected
•
Forced
•
Gone
•
Killed
Example 1
To display a user (for example, user DBC), type.
di us dbc
User DBC has 20 sessions
Session  PE     User  IP Adr         Status
1001     16383  DBC   141.206.35.30  CONNECTED
1002     0      DBC   141.206.35.30  FORCED
1003     16383  DBC   141.206.35.30  CONNECTED
1004     0      DBC   141.206.35.30  FORCED
1005     16383  DBC   141.206.35.30  CONNECTED
1006     0      DBC   141.206.35.30  GONE
1007     16383  DBC   141.206.35.30  CONNECTED
1008     16382  DBC   141.206.35.30  CONNECTED
1009     16383  DBC   141.206.35.30  CONNECTED
1010     16382  DBC   141.206.35.30  CONNECTED
1011     16383  DBC   141.206.35.30  CONNECTED
1012     16382  DBC   141.206.35.30  CONNECTED
1013     16383  DBC   141.206.35.30  CONNECTED
1014     16382  DBC   141.206.35.30  CONNECTED
1015     16383  DBC   141.206.35.30  CONNECTED
1016     16382  DBC   141.206.35.30  CONNECTED
1017     16383  DBC   141.206.35.30  CONNECTED
1018     16382  DBC   141.206.35.30  CONNECTED
1019     16383  DBC   141.206.35.30  CONNECTED
1020     16382  DBC   141.206.35.30  CONNECTED
Example 2
To display a user name in hexadecimal format, type.
di us '46d64c4b454e'xn
User '46d64c4b454e'xn has 1 session
Session  PE     User              IP Adr
-------  -----  ----------------  -------------
1242     16382  '46d64c4b454e'xn  131.222.36.73
Enter gateway command or enter h for Help:
ENABLE LOGONS
Purpose
The ENABLE LOGONS command allows users to log on to the Teradata Database using the
network through the gateway for the selected host.
Syntax
ENABLE
EN
LOGONS
LOGO
GT07B012
Usage Notes
The default setting of the logons flag is enabled.
Before using the ENABLE LOGONS command, you must select the host from which the
session is running using the SELECT HOST command.
For additional information, see “SELECT HOST” on page 613 and “DISABLE LOGONS” on
page 589.
Example
To allow users to log on to the Teradata Database using the network through the gateway, type:
enable logons
ENABLE TRACE
Purpose
The ENABLE TRACE command turns on session tracing. The command records internal
gateway events.
Syntax
ENABLE
TRACE
EN
TRAC
GT07B013
Usage Notes
Session tracing may contain some of these elements:
Note: Entries vary depending on the events and action taken.
•	Session number (if available)
•	Associated file description (if available)
•	Event
	One or more of these events:
	•	Assigned
	•	Logged on
	•	Logged off
	•	Logoff forced (Teradata Database system killed session)
	•	Logon failed (usually caused by invalid password)
	•	Disconnected
	•	Reassigned
	•	Reconnected
•	Network address of the originating network-attached host
By default, tracing is disabled. After a Teradata Database system restart, tracing enabled earlier
using the ENABLE TRACE command is continued.
Caution:
If tracing has been enabled and you do not use the DISABLE TRACE command to turn it off,
the file system may become full.
Before using the ENABLE TRACE command, you must select the host from which the session
is running using the SELECT HOST command. For information, see “SELECT HOST” on
page 613.
The trace file is a text file, which can be viewed by any tool that displays text. The trace information is the same trace information controlled by Gateway Control, and is therefore located at the path set in Gateway Control.
The Gateway log files are the same as the event files.
For additional information, see “DISABLE TRACE” on page 590. For information on the
gateway log files, see Chapter 15: “Gateway Control (gtwcontrol).”
Example
To turn on session event tracing, type:
enable trace
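For example, a complete tracing sequence using commands documented in this chapter might look like the following sketch; the host number 1 is an assumed value:
select host 1
enable trace
flush trace
disable trace
After the events of interest have occurred, FLUSH TRACE writes any buffered trace data to the gateway log, and DISABLE TRACE turns tracing off so that the file system does not fill.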
FLUSH TRACE
Purpose
The FLUSH TRACE command directs the gateway to write the contents of its internal trace buffers to the gateway log file. Normally, trace data is buffered before being recorded in the gateway log files.
For the gateway log to accurately reflect all of the events processed by the gateway, you must
issue a FLUSH TRACE command.
Syntax
FLUSH TRACE
FL    TRCE
Usage Notes
The FLUSH TRACE command can be used when diagnosing gateway anomalies.
For additional information, see “ENABLE TRACE” on page 607 and “DISABLE TRACE” on
page 590.
Before using the FLUSH TRACE command, you must select the host from which the session is
running using the SELECT HOST command. For information, see “SELECT HOST” on
page 613.
For information on the gateway log files, see Chapter 15: “Gateway Control (gtwcontrol).”
Example
To direct the gateway to write the contents of its internal trace buffers to the gateway log file,
type:
flush trace
HELP
Purpose
The HELP command allows you to get help when using the xgtwglobal or gtwglobal
commands.
Syntax
HELP
H
where:
Syntax element...          Is the...
help                       displays syntax for all the gtwglobal or xgtwglobal commands.
                           H and h are synonyms for HELP on the command line.
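Example
To display the syntax of all the gtwglobal or xgtwglobal commands, type:
help
or, equivalently, the single-letter abbreviation:
h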
KILL SESSION
Purpose
The KILL SESSION command terminates the specified session on the Teradata Database by
aborting any active request on that session and logging the session off.
Note: Aborting an outstanding request could take a significant amount of time. Therefore,
killing a session or a user’s sessions does not necessarily free up the resources of those sessions
immediately.
Syntax
KILL SESSION ses_no
KI   SE
where:
Syntax element...          Is the...
ses_no                     session number (in decimal).
Note: To list the possible values for ses_no, use “DISPLAY GTW” on page 595 or “DISPLAY NETWORK” on page 597.
Usage Notes
Before using the KILL SESSION command, you must select the host from which the session is
running using the SELECT HOST command. For information, see “SELECT HOST” on
page 613.
The KILL SESSION command specifies that the session identified by ses_no be terminated.
This command leaves an audit trail in the error log.
Example
To terminate session 1000 on the network through the gateway, type:
kill se 1000
Session 1000 scheduled to be killed.
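Because you need the session number before you can kill a session, a typical sequence combines commands documented in this chapter, as in the following sketch; host 1, user DBC, and session 1003 are assumed values for illustration:
se ho 1
di us dbc
ki se 1003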
KILL USER
Purpose
The KILL USER command terminates all logged on sessions for the specified user name
restricted to the host group of the gateway. KILL USER aborts any active requests on those
sessions and logs the sessions off.
Syntax
KILL USER user_name
KI   US
where:
Syntax element...          Is the...
user_name                  name of the user.
Note: To list the possible values for user_name, use “DISPLAY GTW” on page 595 or
“DISPLAY SESSION” on page 599.
Usage Notes
Aborting an outstanding request can take a significant amount of time. Killing a session or a
user’s sessions does not necessarily free up the resources of those sessions immediately.
Before using the KILL USER command, select the host from which the session is running
using the SELECT HOST command. For information, see “SELECT HOST” on page 613.
The KILL USER command leaves an audit trail in the gateway log files, which shows the vproc
that the kill was issued on and the session number of the user being killed. Because the
attempt might be denied by Teradata Database, only the attempt is logged:
Gtwglobal issued kill command:
gtwassign.c @3205 (867): Fri Apr 18 13:39:01 2008
VprocId: 8193 Session Number: 1009
Example
To terminate all sessions for user PERM01, type:
kill us perm01
User PERM01 has 1 session killed
1005
SELECT HOST
Purpose
The SELECT HOST command selects the host for subsequent Gateway Control command
functions.
Syntax
SELECT HOST host_no
SE     HO
where:
Syntax element...          Is the...
host_no                    host number.
Usage Notes
The SELECT HOST command allows you to specify the host number of a network-attached host that will be affected by the Gateway Control command functions you type next.
Example
To select host 1, type:
select host 1
IF the Host number is...   THEN the following message appears...
found                      Host 1 has been selected.
not found                  Invalid Host 1 is entered. Please try again!
To deselect a host, type:
select host 0
SET TIMEOUT
Purpose
The SET TIMEOUT command allows you to set the time in minutes to delay a logoff for
disconnected sessions.
Syntax
SET TIMEOUT time_value
    TI
where:
Syntax element...          Is the...
time_value                 amount of time to delay logoff in minutes.
Usage Notes
The SET TIMEOUT command can be used to delay logoff for disconnected sessions when
working on network or database problems.
Before using the SET TIMEOUT command, you must select the host from which the sessions
are running using the SELECT HOST command. For information, see “SELECT HOST” on
page 613.
Example
To set a timeout delay for one hour, type:
SET TI 60
Timeout value on host 1 set to 60
Gateway Global Graphical User Interface
The X Windows-based GUI for Gateway Global provides much the same functionality as the command-line version, using various menus and windows.
Note: The Gateway Global graphical user interface (GUI) is available only on MP-RAS. An X
server must be running on the local machine.
For information on starting xgtwglobal in GUI mode, see “User Interfaces” on page 584.
Main Window
The Gateway Global Main window consists of six sections differentiated by function: Menu Bar, Display, Summary, Session, Mode, and Messages. The sections of the Main window are described in the following table.
Subsection   Function
Menu Bar     Displays the menu items.
Display      Displays information relating to the Teradata Database system configuration or current utilization.
Summary      Shows the sessions grouping based on the Display mode toggle button setting. The button setting also changes the list column.
Session      Shows all sessions that are grouped under an item you select in the Summary list.
Mode         Allows you to disconnect or kill specified sessions or users.
Messages     Contains various information related to the xgtwglobal operations. Usually, the actions from the Mode buttons initiate these messages.
Note: Press F1 with the cursor placed on a specific topic to display information about that
topic.
Menu Bar
The following table explains the menu bar items available in the Display section of the main
window.
Menu bar item   Submenu item    Description
File            Save Messages   Saves the capture buffer to a disk.
                Clear Messages  Deletes the contents of the capture buffer.
                Exit            Closes the main window display.
Sort            User            Sorts the current display by user name.
Filter          IP Address      Restricts the display to show only sessions that are connected from the selected IP address. A status message appears to indicate that filtering is enabled any time the display is updated.
                                Note: Sessions connecting from other IP addresses will not be displayed in the Gateway Global window, but are otherwise unaffected.
System          Logon           Toggles network-connected client (gateway) logons.
                Trace           Toggles trace facilities in the gateway connect and assign tasks.
                Timeout         Indicates the time in minutes that the gateway delays logoff processing for a disconnected session because of network or database problems.
                Autorefresh     Toggles autorefresh for xgtwglobal displays. When you set autorefresh On, xgtwglobal polls the connect and assign tasks in the Teradata Database system for information related to session status changes.
Display
The box beneath the main menu bar contains display options. When you select one of the
display options, information about that option appears in the box below and in the summary
window. The following table explains the function of each of the options.
Option   Function
HGID     Displays the active host group identifiers.
         When you select the HGID option, the Teradata Database system updates the summary window to reflect the current status of all the connected host groups.
         If you select a host group from the summary window, the Teradata Database system updates the session detail window with a list of sessions connecting to the selected host group.
         By selecting one or more of the sessions listed in the detail window, you can issue an action command against a subset of the sessions.
GTW      Displays the active gateway vprocs.
         If you select the GTW option, the Teradata Database system updates the summary window with the current active gateway vprocs.
         If you select a vproc in the summary window, the Teradata Database system updates the detail window with a scrolling selection list of sessions connected to that vproc.
PE       Displays the active gateway parsing engines.
         Like the HGID and GTW options, the PE option shows only the Communications Processor (COP)-type parsing engines.
USER     Displays the summary of the currently logged on users.
         If you select the USER option, the Teradata Database system updates the summary window with a list of users. The list is sorted by host group and then by user name.
         If you select an entry in the summary window, the Teradata Database system updates the detail window with a list of sessions.
         Note: When there is a non-printing character (or string) in a user name, the xgtwglobal USER option displays that user name in hexadecimal format, for example: '46d64c4b454e'xn
         For details on typing user names in hexadecimal format, see “User Names and Hexadecimal Notation” on page 585.
DISC     Displays summary information for the currently disconnected users.
         “Disconnected sessions” refers to those sessions that were logged off of the database by a reset or a Teradata Database system panic. The Teradata Database system gives these sessions a fixed period of time to reconnect.
         If the reconnect time expires without the client reconnecting, the gateway simulates a logoff.
FORC     Displays sessions that were killed or aborted (forced off) using PMPC, grouped by the Host Group.
Summary and Session
The Summary section shows the sessions grouping based on the Display mode toggle button
setting.
The Session section shows all sessions that are grouped under an item you select in the
Summary list.
The HGID display selection shows the following:
• The host groups
• The number of sessions connected
• The number of sessions forced off
• Other information about the sessions
Mode
The Mode options work in conjunction with the Session section of the Main window. The
following table describes the Mode options.
Option        Function
Session       Indicates that each session will have a KILL or DISCONNECT command issued.
              Session offers these options:
              Option       Function
              Disc         Sends a message to the gateway indicating the session or user is to be disconnected.
                           A session disconnected using the Disconnect command is allowed to reconnect.
                           If you choose the Disconnect command, a confirmation window appears, giving you a chance to cancel the command before the Teradata Database system proceeds.
              Kill         Sends a message to the gateway indicating the session or user to be acted upon.
                           The Kill command immediately forces the user off the Teradata Database system. Unlike the Disconnect command, the Kill command removes the session information about the user from the database entirely, and if the client wants to reconnect, the gateway denies it.
                           If you choose the Kill command, a confirmation window appears, giving you the chance to cancel the command before the Teradata Database system proceeds.
              Dsply Sess   Sends a message to the gateway requesting detail information about the status of the selected sessions.
                           For a sample of the Display session, see “Session Window” on page 621.
Sel/Clr All   Selects or deselects all of the sessions in the Session window.
User          Has the effect of sending a single command (per host group), forcing the gateway to determine which sessions should be killed.
To use a Mode option, do the following:
1  Highlight a session in the Mode area of the Main window.
2  Select an option.
Messages
The Messages section is located at the bottom of the Main window. Messages generated by the Teradata Database system as the result of a specified Mode option for a specific session are displayed here.
For example, if you initiate a kill or disconnect option in the Mode window, you might see the
following messages:
User DBC killed
Session xxx not found
Session Window
The Session window provides detailed information about a selected session.
To display the Session window, do the following:
1  In the Summary section of the Main window, select a session by clicking on it.
   The session is highlighted.
2  In the Mode section of the Main window, select the Session option.
3  In the Mode section of the Main window, select the Dsply Sess button.
   The Session window appears, showing user and configuration information for the selected session (for example, session number 1003).
The session window displays a list of the users that are connected to that particular session,
plus detailed configuration information.
APPENDIX A
How to Read Syntax Diagrams
This appendix describes the conventions that apply to reading the syntax diagrams used in
this book.
Syntax Diagram Conventions
Notation Conventions
Item           Definition / Comments
Letter         An uppercase or lowercase alphabetic character ranging from A through Z.
Number         A digit ranging from 0 through 9.
               Do not use commas when typing a number with more than 3 digits.
Word           Keywords and variables.
               • UPPERCASE LETTERS represent a keyword. Syntax diagrams show all keywords in uppercase, unless operating system restrictions require them to be in lowercase.
               • lowercase letters represent a keyword that you must type in lowercase, such as a Linux command.
               • lowercase italic letters represent a variable such as a column or table name. Substitute the variable with a proper value.
               • lowercase bold letters represent an excerpt from the diagram. The excerpt is defined immediately following the diagram that contains it.
               • UNDERLINED LETTERS represent the default value. This applies to both uppercase and lowercase words.
Spaces         Use one space between items such as keywords or variables.
Punctuation    Type all punctuation exactly as it appears in the diagram.
Paths
The main path along the syntax diagram begins at the left with a keyword, and proceeds, left
to right, to the vertical bar, which marks the end of the diagram. Paths that do not have an
arrow or a vertical bar only show portions of the syntax.
The only part of a path that reads from right to left is a loop.
Continuation Links
Paths that are too long for one line use continuation links. Continuation links are circled
letters indicating the beginning and end of a link:
... A
A ...
When you see a circled letter in a syntax diagram, go to the corresponding circled letter and
continue reading.
Required Entries
Required entries appear on the main path:
SHOW
If you can choose from more than one entry, the choices appear vertically, in a stack. The first
entry appears on the main path:
SHOW   CONTROLS
       VERSIONS
Optional Entries
You may choose to include or disregard optional entries. Optional entries appear below the
main path:
SHOW
       CONTROLS
If you can optionally choose from more than one entry, all the choices appear below the main
path:
READ
SHARE
ACCESS
Some commands and statements treat one of the optional choices as a default value. This
value is UNDERLINED. It is presumed to be selected if you type the command or statement
without specifying one of the options.
Strings
String literals appear in apostrophes:
'msgtext '
JC01A004
Abbreviations
If a keyword or a reserved word has a valid abbreviation, the unabbreviated form always
appears on the main path. The shortest valid abbreviation appears beneath.
SHOW   CONTROLS
       CONTROL
In the above syntax, the following formats are valid:
•
SHOW CONTROLS
•
SHOW CONTROL
Loops
A loop is an entry or a group of entries that you can repeat one or more times. Syntax
diagrams show loops as a return path above the main path, over the item or items that you can
repeat:
( cname [, cname ...] )     (comma separator; minimum 3 and maximum 4 entries)
Read loops from right to left.
The following conventions apply to loops:
IF...                                               THEN...
there is a maximum number of entries allowed        the number appears in a circle on the return path. In the example, you may type cname a maximum of 4 times.
there is a minimum number of entries required       the number appears in a square on the return path. In the example, you must type at least three groups of column names.
a separator character is required between entries   the character appears on the return path. If the diagram does not show a separator character, use one blank space. In the example, the separator character is a comma.
a delimiter character is required around entries    the beginning and end characters appear outside the return path. Generally, a space is not needed between delimiter characters and entries. In the example, the delimiter characters are the left and right parentheses.
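Applying these conventions to the example above, the following entries would be valid (the column names c1 through c4 are hypothetical):
(c1, c2, c3)
(c1, c2, c3, c4)
An entry with fewer than three or more than four column names would not be valid.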
Excerpts
Sometimes a piece of a syntax phrase is too large to fit into the diagram. Such a phrase is
indicated by a break in the path, marked by (|) terminators on each side of the break. The
name for the excerpted piece appears between the terminators in boldface type.
The boldface excerpt name and the excerpted phrase appear immediately after the main diagram. The excerpted phrase starts and ends with a plain horizontal line:
LOCKING  excerpt        HAVING con

excerpt
   where_cond
   cname [, cname ...]
   col_pos [, col_pos ...]
Multiple Legitimate Phrases
In a syntax diagram, it is possible for any number of phrases to be legitimate:
DATABASE   dbname
TABLE      tname
VIEW       vname
In this example, any of the following phrases are legitimate:
•
dbname
•
DATABASE dbname
•
tname
•
TABLE tname
•
vname
•
VIEW vname
Sample Syntax Diagram
CREATE VIEW viewname [( cname [, cname ...] )] AS
CV
   [ LOCKING [DATABASE dbname | TABLE tname | VIEW vname] [FOR | IN]
     LOCK    {ACCESS | SHARE | READ | WRITE | EXCLUSIVE | EXCL} [MODE] ]
   SEL expr [, expr ...]
   FROM tname [.aname] [, tname [.aname] ...] [qual_cond] [HAVING cond] ;

qual_cond
   [WHERE cond] [GROUP BY {cname [, cname ...] | col_pos [, col_pos ...]}]
Diagram Identifier
The alphanumeric string that appears in the lower right corner of every diagram is an internal
identifier used to catalog the diagram. The text never refers to this string.
APPENDIX B
Starting the Utilities
This appendix describes how to start Teradata Database utilities.
Interfaces for Starting the Utilities
Teradata Database offers several user interfaces from which the utilities may be started:
Interface                        Supported Platforms
Command line                     MP-RAS, Windows, Linux
Database Window (DBW)            MP-RAS, Windows
Host Utility Console (HUTCNS)    z/OS, z/VM
Not all utilities support all the user interfaces. For a listing of supported user interfaces for
each utility, see the documentation for each utility.
Note: Once started, some utilities present their own interactive command-line or graphical user interfaces. These utilities let you browse and enter information, and they continue running until you explicitly stop them.
present their own command environment are stopped by entering the QUIT command.
Utilities that present a graphical user interface are stopped by clicking the Exit or Close
command from the graphical menu.
Utilities that are running in DBW can also be stopped by issuing the stop window_number
command from the DBW Supervisor window, where window_number is the numeric
identifier of the DBW application window in which the utility is running. For more
information on DBW, see Utilities Volume 1.
MP-RAS
On MP-RAS, the utilities can be run from Database Window (DBW) and from the command
line.
Starting Utilities from Database Window
On MP-RAS, DBW is an X client program that requires an X server to be running on the local
machine. DBW supports standard X Windows display forwarding. To ensure that the
graphical user interface displays properly, you can use the standard -display option to
specify the host name or IP address of the local machine.
To start a utility from Database Window
1  Open DBW from the MP-RAS command line by typing:
   xdbw -display displayspec &
   where displayspec is the name or IP address of the local machine, followed by a colon and the server number, typically 0 or 0.0. For example:
   xdbw -display myworkstation.mycompany.com:0.0 &
   or
   xdbw -display 192.0.2.24:0.0 &
   The DBW main window opens.
2  Click the supvr button to open the Supervisor (supv) window.
3  Under Enter a command, type:
   start utilityname [options]
   where utilityname is the name of the utility, and options can include any of the available command-line options and arguments of that utility. utilityname is not case-sensitive.
   The following message appears:
   Started 'utilityname' in window x
   where x is the number of one of the four available application windows DBW provides.
Each utility started runs in one of the four application windows. The title bar of the
application window and the corresponding button in the DBW main window change to
reflect the name of the running utility. When the utility stops running, the application
window and main window button revert to the default text title (that is, Appl1, Appl2, and so
forth).
Note: Up to four utilities can be run concurrently in DBW. The message “All Interactive
Partitions are busy!!” indicates that all four application windows are occupied. In this case,
one of the four running utilities must be quit before another can be started.
For more information on DBW, and on options available with the START command, see
Utilities Volume 1.
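For example, starting the Lock Display utility (lokdisp, used as an example elsewhere in this appendix) from the Supervisor window might look like the following sketch; the window number and exact message text can vary:
start lokdisp
Started 'lokdisp' in window 1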
Starting Utilities from the Command Line
To start a utility from the command line
✔ On the command line type:
utilityname [options]
where utilityname is the name of the utility, and options can include any of the available
command-line options and arguments of that utility.
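For example, to start the Gateway Global utility described in Chapter 16 from the MP-RAS command line, you could type the following, assuming the utility is on your PATH:
gtwglobal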
X Windows GUIs
Some utilities on MP-RAS offer X Windows (X11) based graphical user interfaces. These
utilities, whose program names generally begin with x, are launched from the MP-RAS
command line. Use the X Windows -display option to specify the name or IP address and
display server screen (typically 0 or 0.0) when starting these utilities. For example:
xdbw -display myworkstation.mycompany.com:0.0 &
or
xdbw -display 192.0.2.24:0.0 &
Windows
On Windows, the utilities can be run from Database Window and from the Teradata
Command Prompt.
Starting Utilities from Database Window
To start a utility from Database Window
1  Open Database Window (DBW) from the Windows Start menu by clicking:
   Start>Programs>Teradata Database>Database Window
   The DBW main window opens.
2  Click the Supvr button to open the Supervisor (supv) window.
3  Under Enter a command, type:
   start utilityname [options]
   where utilityname is the name of the utility, and options can include any of the available command-line options and arguments of that utility. utilityname is not case-sensitive.
   The following message appears:
   Started 'utilityname' in window x
where x is the number of one of the four available application windows DBW provides.
Each utility started runs in one of the four application windows. The title bar of the
application window and the corresponding button in the DBW main window change to
reflect the name of the running utility. When the utility stops running, the application
window and main window button revert to the default text title (that is, Appl1, Appl2, and
so forth).
Note: DBW can run up to four utilities concurrently. If you see the following message, you
must quit one of the four running utilities before starting another.
CNSSUPV start: All Interactive Partitions are busy!!
For more information on DBW, and on options available with the START command, see
Utilities Volume 1.
Starting Utilities from the Teradata Command Prompt
The Teradata Command Prompt is a Windows command line that has been modified for use
with Teradata. Teradata Database adds certain directories to the PATH environment variable
on your computer, which allows most command-line utilities to be run by typing their name
at the Teradata Command Prompt.
To start a utility from the Teradata Command Prompt
1  Start the Teradata Command Prompt from Windows by clicking:
   Start>Programs>Teradata Database>Teradata Command Prompt
2  Type:
   utilityname [options]
where utilityname is the name of the utility, and options can include any of the available
command line options and arguments of that utility. utilityname is not case-sensitive.
You can also use the Windows start command to open the utility in a separate command
window. For example:
C:\Documents and Settings\Administrator> start lokdisp
Linux
On Linux, the utilities can be run from Database Window and from the command line.
Note: When starting a Linux session, run tdatcmd from the command line to set up the Teradata Database environment. It is only necessary to run tdatcmd (located by default in /usr/pde/bin) once per Linux session. It adds certain directories to the PATH environment variable on the local computer, which allows most command-line utilities to be run by typing their name.
Starting Utilities from Database Window
This version of Database Window is an X client program that requires an X server to be
running on the local machine. DBW supports standard X Windows display forwarding. To
ensure that the graphical user interface displays properly, you can use the standard -display
option to specify the host name or IP address of the local machine.
To start a utility from Database Window
1  If not already done, set up the Teradata Database environment by typing:
   tdatcmd
   at the Linux command line.
2  Open DBW from the Linux command line by typing:
   xdbw -display displayspec &
   where displayspec is the name or IP address of the local machine, followed by a colon and the server number, typically 0 or 0.0. For example:
   xdbw -display myworkstation.mycompany.com:0.0 &
   or
   xdbw -display 192.0.2.24:0.0 &
   The DBW main window opens.
3  Click the Supvr button to open the Supervisor (supv) window.
4  Under Enter a command, type:
   start utilityname [options]
   where utilityname is the name of the utility, and options can include any of the available command-line options and arguments of that utility. utilityname is not case-sensitive.
   The following message appears:
   Started 'utilityname' in window x
   where x is the number of one of the four available application windows DBW provides.
Each utility started runs in one of the four application windows. The title bar of the
application window and the corresponding button in the DBW main window change to
reflect the name of the running utility. When the utility stops running, the application
window and main window button revert to the default text title (that is, Appl1, Appl2, and
so forth).
Note: Up to four utilities can be run concurrently in DBW. The message “All Interactive
Partitions are busy!!” indicates that all four application windows are occupied. In this case,
one of the four running utilities must be quit before another can be started.
For more information on DBW, and on options available with the START command, see
Utilities Volume 1.
Starting Utilities from the Command Line
To start a utility from the command line
1  If not already done, set up the Teradata Database environment by typing:
   tdatcmd
   at the Linux command line.
2  On the command line, type:
   utilityname [options]
where utilityname is the name of the utility, and options can include any of the available
command-line options and arguments of that utility.
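For example, a new Linux session that runs the Lock Display utility (lokdisp, used as an example elsewhere in this appendix) might look like this:
tdatcmd
lokdisp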
z/OS and z/VM
The Host Utility Console (HUTCNS) is a console interface that runs a selected group of Teradata Database utilities from a channel-attached host terminal. These utilities include Query Session, Query Configuration, Showlocks, and Recovery Manager. The HUTCNS utility is only available on z/OS and z/VM terminals, not from network-attached hosts.
Starting Utilities from HUTCNS
To start a utility
1  On the command line, type:
   HUTCNS
   The HUTCNS screen appears.
Data Base Computer
Program: DBS Console Interface

Enter logon string as ‘TDPID/UserID,Password’
When you type a valid TDPID, user ID, and password (except a user ID designating a
pooled session), the terminal displays the following information:
Enter the Utility to execute
- SESsionStatus
- CONfiguration
- LOCKsDisplay
- RCVmanager
First 3 characters are acceptable
2  Type the name or abbreviation of the specified utility:
   utilityname
   where utilityname is the name or abbreviation of the utility you want to run.
The utilityname screen appears.
Note: To exit the utility from HUTCNS, type END or QUIT.
APPENDIX C
Session States, Events, and
Actions
This appendix contains descriptions of the valid states, events, and actions for the DISPLAY
SESSION command of the Gateway Global utility on MP-RAS, Windows, and Linux.
Session State Definitions
This section shows the valid states and definitions.
MP-RAS
The following table shows the valid states and definitions that can occur for a session.
Valid State
This state represents waiting for...
CS_ASNERRWAIT
an assign response from the assign task after an error has occurred.
CS_CONSSOWAIT
a connect after GSS/SSO.
CS_DISCWAITRSP
a response from the mux as a result of a DISCONNECT command.
CS_ESTABLISHED
any event to occur on an established session.
CS_GONEREASNERRWAIT
a reassign response after receiving a GONE message.
CS_GONESTATUSWAIT
the Status response after the session has been KILLed and GONE.
CS_GONEWAITRSP
a response from the mux as a result of a GONE message.
CS_IDLE
an assign request from the client.
CS_KILLASNERRWAIT
an assign response from the assign task after a KILL command.
CS_KILLLOGOFFWAIT
a logoff response as a result of receiving a KILL message.
CS_KILLLOGONWAITERR
a logon response after receiving a KILL message.
CS_KILLMUXLOGOFFWAIT
a logoff response after receiving a KILL message.
CS_KILLREASNERRWAIT
a reassign response from the assign task after a protocol error and KILL command.
CS_KILLREASNLOGOFFWAIT
a logoff response from session control after a KILL message.
CS_KILLSTATUSWAIT
the Status response after the session has been KILLed.
CS_KILLWAITRSP
a FIN response from the mux after a KILL message.
CS_LOGOFFWAITGONENOMUX
a logoff response from session control and the mux has terminated the virtual
circuit.
CS_LOGOFFWAITGONERSP
a logoff request where a GONE message has been received.
CS_LOGOFFWAITNOMUX
a logoff response from session control after the client closed the virtual circuit.
CS_LOGOFFWAITRSP
a logoff response from session control after the client sent a logoff request.
CS_LOGOFFWAITRSPNOMUX
a logoff response from session control and the mux has terminated the virtual
circuit.
CS_LOGONWAIT
a response from session control to a logon request.
CS_LOGONWAITERR
a response from session control to a logon request after an error has occurred.
CS_LOGONWAITGONERR
a response from session control to a logon request after a GONE message.
CS_REASNERRWAIT
a reassign response from the assign task after an error has occurred.
CS_RELOGON
a reconnect request in Relogon mode.
CS_RELOGONWAITERR
a logon response in Relogon error mode.
CS_RESSOWAUT
GSS/SSO communication message during reconnect.
CS_SSOWAIT
GSS/SSO communication message.
CS_WAITASNRSP
an assign response from the assign task.
CS_WAITASNRSPSSO
assign response and a SSO request is pending.
CS_WAITCON
a connect request from the client.
CS_WAITMUXGONERSP
a response from the mux to a GONE message.
CS_WAITMUXRSP
a response from the mux after the user issued a logoff request.
CS_WAITREASNRSP
a reassign response from the assign task.
CS_WAITREASNRSPSSO
a reassign response and an SSO request is pending.
CS_WAITRECON
a reconnect response from the client.
CS_WAITRELOGONRSP
the Re-Logon response from session control.
CS_WAITSTATUSRSP
the Status response from session control.
CS_WAITTAKOVRRSP
a response from the mux indicating it has taken over the session.
CS_XLOGON
a connect request in Express Logon mode.
Windows and Linux
The following table shows the valid states and definitions that can occur for a session.
Valid State
This state represents waiting for...
CS_ASNRSPWAIT
an assign response from the assign task.
CS_ASNRSPWAITERR
an assign response from the assign task after an error occurred.
CS_ASNRSPWAITERRCLOSED
an assign response from the assign task with the socket closed.
CS_ASNRSPWAITSSO
an assign response from the assign task with saved SSO.
CS_CLIENTWAITELICITDATARSP
a response from the client after deferred transfer of lob data.
CS_CLIENTWAITINDOUBT
a request from the client after indoubt trans.
CS_CLIENTWAITINTRAN
a request from the client after a transaction.
CS_CLIENTWAITNOTRAN
a request from the client after no transaction.
CS_CLOSEWAIT
the client to close the virtual circuit.
CS_CLOSEWAITTERM
the client to close the virtual circuit and notify assign to free context.
CS_CLOSEWAITTERMDISC
the client to close the virtual circuit and notify assign to keep context.
CS_CLOSEWAITTERMGONE
the client to close the virtual circuit and notify assign to force the context.
CS_CONSSOWAIT
an SSO CON response from the client.
CS_CONWAIT
a connect request from the client.
CS_ESTABLISHED
any event to occur on an established session.
CS_IDLE
an assign request from the client.
CS_LOGOFFWAITRSP
a logoff response from session control.
CS_LOGOFFWAITRSPCLOSED
a logoff response from session control and the virtual circuit is closed.
CS_LOGOFFWAITRSPCLOSEDDISC
a logoff response from session control and the virtual circuit was
disconnected.
CS_LOGOFFWAITRSPERR
a logoff response from session control.
CS_LOGOFFWAITRSPFORCED
a logoff response from session control and the client was killed.
CS_LOGOFFWAITRSPINDOUBT
a logoff response from session control after the client sent a logoff request.
CS_LOGOFFWAITRSPINDOUBTDISC
a logoff response from session control after the client sent a logoff request.
CS_LOGOFFWAITRSPINDOUBTFORCED
a logoff response from session control after the client sent a logoff request.
CS_LOGOFFWAITRSPINDOUBTTERM
a logoff response from session control after the client sent a logoff request.
CS_LOGOFFWAITRSPINTRAN
a logoff response from session control after the client sent a logoff request.
CS_LOGOFFWAITRSPINTRANDISC
a logoff response from session control after the client sent a logoff request.
CS_LOGOFFWAITRSPINTRANFORCED
a logoff response from session control after the client sent a logoff request.
CS_LOGOFFWAITRSPINTRANTERM
a logoff response from session control after the client sent a logoff request.
CS_LOGOFFWAITRSPNOTRAN
a logoff response from session control after the client sent a logoff request.
CS_LOGOFFWAITRSPNOTRANDISC
a logoff response from session control after the client sent a logoff request.
CS_LOGOFFWAITRSPNOTRANFORCED
a logoff response from session control after the client sent a logoff request.
CS_LOGOFFWAITRSPNOTRANTERM
a logoff response from session control after the client sent a logoff request.
CS_LOGONWAIT
a response from session control to a logon request.
CS_LOGONWAITCLOSED
a response from session control to a logon request after the client has gone
away.
CS_LOGONWAITCLOSEDFORCED
a response from session control to a logon request after the client has gone
away and was killed.
CS_LOGONWAITERR
a response from session control to a logon request after an error has
occurred.
CS_OFFWAITUCABTRSP
a UCAbort request after a successful logoff request.
CS_OFFWAITUCABTRSPFORCED
a UCAbort request after a successful logoff request where the client was
killed.
CS_OFFWAITUCABTRSPTERM
a UCAbort request after a successful logoff request where the client closed
the virtual circuit.
CS_REASNRSPWAIT
a reassign response from the assign task.
CS_REASNRSPWAITSSO
a reassign response from the assign task with saved SSO parcel.
CS_REASNWAITCLOSED
a reassign response from the assign task after an error has occurred.
CS_REASNWAITCLOSEDDISC
a reassign response from the assign task after an error has occurred and the
client disconnected.
CS_REASNWAITCLOSEDKILL
a reassign response from the assign task after receiving a GONE/KILL
message.
CS_REASNWAITERR
a reassign response from the assign task after an error has occurred.
CS_RECONSSOWAIT
a reconnect response from the client using SSO.
CS_RECONWAIT
a reconnect response from the client.
CS_RESSOWAIT
an SSO response from the client as part of reconnect.
CS_SSOWAIT
an SSO response from the client.
CS_WAITABORTRSP
a DBS response to an ABORT request.
CS_WAITCONTINUERSPINDOUBT
a DBS response from the Continue Request in an indoubt transaction.
CS_WAITCONTINUERSPINTRAN
a DBS response from the Continue Request in a transaction.
CS_WAITCONTINUERSPNOTRAN
a DBS response from the Continue Request not in a transaction.
CS_WAITCONTRSPINDOUBTDISC
a DBS response from the Continue Request in an indoubt transaction after
disconnect.
CS_WAITCONTRSPINDOUBTFORCED
a DBS response from the Continue Request in an indoubt transaction after
kill.
CS_WAITCONTRSPINDOUBTLOGOFF
a DBS response from the Continue Request in an indoubt transaction after
logoff.
CS_WAITCONTRSPINDOUBTTERM
a DBS response from the Continue Request in an indoubt transaction after
terminate.
CS_WAITCONTRSPINTRANDISC
a DBS response from the Continue Request in a transaction after
disconnect.
CS_WAITCONTRSPINTRANFORCED
a DBS response from the Continue Request in a transaction after kill.
CS_WAITCONTRSPINTRANLOGOFF
a DBS response from the Continue Request in a transaction after logoff.
CS_WAITCONTRSPINTRANTERM
a DBS response from the Continue Request in transaction after terminate.
CS_WAITCONTRSPNOTRANDISC
a DBS response from the Continue Request not in a transaction after
disconnect.
CS_WAITCONTRSPNOTRANFORCED
a DBS response from the Continue Request not in a transaction after kill.
CS_WAITCONTRSPNOTRANLOGOFF
a DBS response from the Continue Request not in a transaction after logoff.
CS_WAITCONTRSPNOTRANTERM
a DBS response from the Continue Request not in transaction after
terminate.
CS_WAITDIRECTRSP
a DBS response from a Directed Request.
CS_WAITDIRECTRSPDISC
a DBS response from a Directed Request.
CS_WAITDIRECTRSPFORCED
a DBS response from a Directed Request.
CS_WAITDIRECTRSPLOGOFF
a DBS response from a Directed Request.
CS_WAITDIRECTRSPTERM
a DBS response from a Directed Request.
CS_WAITFREEIOABORT
an outstanding network I/O request to complete.
CS_WAITSTARTDBSRSPDISC
a DBS response after a terminate received.
CS_WAITSTARTDBSRSPFORCED
a DBS response after a terminate received.
CS_WAITSTARTDBSRSPTERM
a DBS response after a terminate received.
CS_WAITSTARTRSP
a DBS response from the Start Request.
CS_WAITSTARTRSPLOGOFF
a start response after receiving a logoff request.
CS_WAITUCABTRSP
a DBS response to a UCAbort Request.
CS_WAITUCABTRSPDISC
a DBS response to a UCAbort Request.
CS_WAITUCABTRSPFORCED
a DBS response to a UCAbort Request.
CS_WAITUCABTRSPTERM
a DBS response to a UCAbort Request.
Session Event Definitions
This section shows the valid events and definitions.
MP-RAS
The following table shows the events and definitions that can occur for a session.
Valid Events
Definition
CE_ASNMSG
An [Re]Assign Msg was received.
CE_ASNMSGERR
An [Re]Assign message contained an error.
CE_ASNOK
An Assign Msg was processed and was valid.
CE_ASNRSP
A success Assign response received from the assign task.
CE_ASNRSPERR
An error Assign response received from the assign task.
CE_ASNSSOOK
An Assign message was valid and contained pclssoreq_t in it.
CE_BADSTATE
The Assign task returned a badstate error on reassign response.
CE_CONNECTMSG
A Connect message was received.
CE_CONNECTOK
A Connect message was processed and was valid.
CE_DISC
The Virtual Circuit was disconnected in the connect task.
CE_GONE
A PMPC abort session message was received.
CE_HDRERR
A Client message header error occurred.
CE_HSCERR
An error response received from session control.
CE_HSCSUC
A successful response received from session control.
CE_INTERNALERR
An internal error occurred - usually specific to state.
CE_KILL
A Gtwglobal kill command occurred.
CE_LGNRECONERR
An error in the [Re]Connect message.
CE_LOGOFFREQ
A Logoff Request reported by MUX.
CE_METHODMSG
A Methods Msg was received.
CE_METHODMSGAUTH
CA_METHODMSG has invalid security - Recursive
CE_METHODMSGERR
CA_METHODMSG Method Msg contained an error - Recursive
CE_METHODMSGOK
CA_METHODMSG Method Msg was valid - Recursive
CE_NODEFULL
No virtual circuits on NODE.
CE_REASNOK
A ReAssign Message was processed and was valid.
CE_REASNRSP
A successful ReAssign response received from Assign task.
CE_REASNRSPERR
An unsuccessful ReAssign response from Assign task.
CE_REASNSSOOK
A ReAssign message was valid and contained pclssoreq_t in it.
CE_RECONNECTOK
A ReConnect message was processed and was valid.
CE_RELOGON
A successful ReLogon ReAssign response from Assign Task.
CE_SIMDISC
Gtwglobal requested simulated disconnect.
CE_SSOCONTINUE
The SSO authentication required more exchanges.
CE_SSOERR
The SSO authentication was not successful.
CE_SSOMSG
Single Sign-on Message was received.
CE_SSOMSGERR
An error was encountered processing the SSO message.
CE_SSOMSGOK
An SSO message has been received and is OK.
CE_SSOOK
The SSO authentication was successful.
CE_STATUSERR
An Error status response from session control.
CE_STATUSSUC
A Successful status response from session control.
CE_TAKBAKCLOSE
A TakBak message resulting from a orderly release.
CE_TAKBAKDISC
A TakBak message resulting from a disconnect received.
CE_TAKBAKFORCE
A TakBak message resulting from Gone/KILL received.
CE_TAKBAKLOGOFF
A TakBak message resulting from a success logoff or Mux error.
CE_TAKEOVERACK
A positive response from a TakeOver received.
CE_TERMINATE
An orderly release received while Connect has file descriptor.
CE_XLOGON
A successful Express Assign response from Assign Task.
Windows and Linux
Two types of session events occur on Windows and Linux:
•
Fundamental external events
•
Events that occur only if the given Action occurs in the State
Fundamental External Events
The following table shows the fundamental external events that can occur for a session.
Valid Events
Type of Event
Definition
CE_ABORTMSG
Network
Abort Msg was received.
CE_ASNMSG
Network
[Re]Assign Msg was received.
CE_ASSIGNRSP
Gateway
[Re]Assign response received from the assign task.
CE_CONFIGMSG
Network
Config message was received.
CE_CONNECTMSG
Network
Connect message was received.
CE_CONTINUEMSG
Network
Continue message was received.
CE_DBSRSP
Database
DBS Response was received.
CE_DIRECTMSG
Network
Directed Message was received.
CE_DIRECTMSGRSP
Database
DBS response to a directed request was received.
CE_DISCONNECT
Network
Error occurred on virtual circuit.
CE_ELICITDATAMSGRSP
Network
Elicitdata Message was received.
CE_FREESESCTX
Network
Generated to try to free sesctx after I/O outstanding.
CE_GONE
Database
PMPC abort session message was received.
CE_HDRERR
Network
A Client message header error occurred.
CE_KILL
Gateway
Gtwglobal kill command occurred.
CE_LOGOFFMSG
Network
Logoff Request was received.
CE_LOGONLOGOFFRSP
Database
DBS response to a logon/logoff request received.
CE_METHODMSG
Network
Methods Msg was received.
CE_SENDDISCONNECT
Network
A network send failed after the socket disconnected.
CE_SENDTERMINATE
Network
A network send failed after the session was terminated.
CE_SSOMSG
Network
Single Sign-on Message was received
CE_STARTMSG
Network
Start message has been received.
CE_TERMINATE
Network
Orderly release or Reset on virtual circuit.
CE_TESTMSG
Network
Test message was received.
Events Occurring if the Action Occurs in the State
The following table shows the events that can occur for a session if the given action occurs in
the state.
Valid Events
Type of
Event
Action
Definition
CE_ABORTMSGAUTH
Recursive
CA_ABORTMSG
Abort Message had invalid security.
CE_ABORTMSGERR
Recursive
CA_ABORTMSG
Abort Message contained an error.
CE_ABORTMSGOK
Recursive
CA_ABORTMSG
Abort Message was processed and was
valid.
CE_ABORTMSGSKIP
Recursive
CA_ABORTMSG
Abort Message was not for current
request.
CE_ABORTMSGUCOK
Recursive
CA_ABORTMSG
Abort Message was contained a
pclucabort.
CE_ASNMSGERR
Recursive
CA_ASNMSG
[Re]Assign message contained an error.
CE_ASNMSGOK
Recursive
CA_ASNMSG
Assign Msg was processed and was valid.
CE_ASNMSGSSOOK
Recursive
CA_ASNMSG
Assign Msg with SSO was processed and
was valid.
CE_ASNRSPERR
Recursive
CA_ASSIGNRSP
Error [Re]Assign response received from
the assign task.
CE_ASNRSPOK
Recursive
CA_ASSIGNRSP
Successful Assign response received from
the assign task.
CE_ASNRSPSSOCONTINUE
Recursive
CA_ASNRSPSSO
AsnRspOk and SSO parcel needs
continue.
CE_ASNRSPSSOERR
Recursive
CA_ASNRSPSSO
AsnRspOk and SSO parcel is in error.
CE_ASNRSPSSOOK
Recursive
CA_ASNRSPSSO
AsnRspOk and SSO parcel is Complete.
CE_BADSTATE
Recursive
CA_ASSIGNRSP
Badstate Assign response received from
the assign task.
CE_CONNECTMSGAUTH
Recursive
CA_CONNECTMSG
Connect message had invalid security.
CE_CONNECTMSGERR
Recursive
CA_CONNECTMSG
Connect message contained an error.
CE_CONNECTMSGOK
Recursive
CA_CONNECTMSG
Connect message was processed and was
valid.
CE_CONTINUEMSGAUTH
Recursive
CA_CONTINUEMSG
Continue message had invalid security.
CE_CONTINUEMSGERR
Recursive
CA_CONTINUEMSG
Continue message contained an error.
CE_CONTINUEMSGOK
Recursive
CA_CONTINUEMSG
Continue message was processed and was
valid.
CE_CONTINUEMSGRSPERR
Recursive
CA_CONTINUEMSGRSP
Continue response from the DBS
contained an error.
CE_CONTINUEMSGRSPOK
Recursive
CA_CONTINUEMSGRSP
Continue response from the DBS was OK.
CE_DIRECTMSGAUTH
Recursive
CA_DIRECTMSG
Direct message has invalid security.
CE_DIRECTMSGERR
Recursive
CA_DIRECTMSG
Direct message contained an error.
CE_DIRECTMSGOK
Recursive
CA_DIRECTMSG
Direct message was processed and was
valid.
CE_DIRECTMSGRSPERR
Recursive
CA_DIRECTMSGRSP
Direct Message response contained an
error.
CE_DIRECTMSGRSPOK
Recursive
CA_DIRECTMSGRSP
Direct message response was OK.
CE_ELICITDATAMSGREQ
Recursive
CA_STARTMSGRSP
An elicit data msg received; start request
response still pending.
CE_ELICITDATAMSGRSPAUTH
Recursive
CA_ELICITDATAMSGRSP
Elicit data message response had invalid
security.
CE_ELICITDATAMSGRSPERR
Recursive
CA_ELICITDATAMSGRSP
Elicit data message response contained an
error.
CE_ELICITDATAMSGRSPOK
Recursive
CA_ELICITDATAMSGRSP
Elicit data message response was OK.
CE_FREEOUTSTANDINGIO
Recursive
CA_FREE
There was an outstanding I/O event.
CE_LOGOFFMSGRSPERR
Recursive
CA_LOGOFFMSGRSP
An error response received from session control.
CE_LOGOFFMSGRSPOK
Recursive
CA_LOGOFFMSGRSP
A successful response received from session control.
CE_LOGONERR
Recursive
CA_SENDLOGON
An error in the logon message.
CE_LOGONMSGRSPERR
Recursive
CA_LOGONMSGRSP
An error response received from session control.
CE_LOGONMSGRSPOK
Recursive
CA_LOGONMSGRSP
A successful response received from session control.
CE_LOGONMSGRSPRUN
Recursive
CA_LOGONMSGRSP
A bad runrsp was received from session
control.
CE_LOGONSSOERR
Recursive
CA_SENDLOGONSSO
An error in the SSO logon message.
CE_METHODMSGAUTH
Recursive
CA_METHODMSG
Method Msg had invalid security.
CE_METHODMSGERR
Recursive
CA_METHODMSG
Method Msg contained an error.
CE_METHODMSGOK
Recursive
CA_METHODMSG
Method Msg was valid.
CE_NODEFULL
Recursive
CA_ASSIGNRSP
Node full assign response was received
from assign.
CE_REASNMSGOK
Recursive
CA_ASNMSG
ReAssign Message was processed and was
valid.
CE_REASNMSGSSOOK
Recursive
CA_ASNMSG
ReAssign Message with SSO was processed
and was valid.
CE_REASNRSPOK
Recursive
CA_ASSIGNRSP
Successful ReAssign response received
from Assign task.
CE_REASNRSPOKSSO
Recursive
CA_ASSIGNRSP
Successful ReAssign response with SSO
received from Assign task.
CE_REASNRSPSSOCONTINUE
Recursive
CA_REASNRSPSSO
AsnRspOk and SSO parcel needs
continue.
CE_REASNRSPSSOERR
Recursive
CA_REASNRSPSSO
AsnRspOk and SSO parcel is in error.
CE_REASNRSPSSOOK
Recursive
CA_REASNRSPSSO
AsnRspOk and SSO parcel is Complete.
CE_RECONERR
Recursive
CA_RECON
Reconnect validation unsuccessful.
CE_RECONNECTMSGOK
Recursive
CA_CONNECTMSG
ReConnect message was processed and
was valid.
CE_RECONOK
Recursive
CA_RECON
Reconnect validation successful.
CE_RECONSSOERR
Recursive
CA_RECONSSO
Reconnect validation unsuccessful.
CE_RECONSSOOK
Recursive
CA_RECONSSO
Reconnect validation successful.
CE_RESSOCONTINUE
Recursive
CA_SENDRESSORSP
The Reconnect SSO authentication
required more exchanges.
CE_RESSOERR
Recursive
CA_SENDRESSORSP
The Reconnect SSO authentication was
not successful.
CE_SSOCONTINUE
Recursive
CA_SENDRESSORSP
The SSO authentication required more
exchanges.
CE_SSOERR
Recursive
CA_SENDRESSORSP
The SSO authentication was not
successful.
CE_SSOMSGAUTH
Recursive
CA_SSOMSG
An SSO message had invalid security.
CE_SSOMSGERR
Recursive
CA_SSOMSG
An error was encountered processing the
SSO message.
CE_SSOMSGOK
Recursive
CA_SSOMSG
An SSO message has been received and is
OK.
CE_STARTMSGAUTH
Recursive
CA_STARTMSG
A start message has invalid security.
CE_STARTMSGERR
Recursive
CA_STARTMSG
An error was encountered processing the
start message.
CE_STARTMSGOK
Recursive
CA_STARTMSG
A start message has been received and is
OK.
CE_STARTMSGRSPERR
Recursive
CA_STARTMSGRSP
A start request response is in error.
CE_STARTMSGRSPINDOUBT
Recursive
CA_STARTMSGRSP
A start request response for an indoubt
transaction has been received.
CE_STARTMSGRSPINTRAN
Recursive
CA_STARTMSGRSP
A start request response in a transaction
has been received.
CE_STARTMSGRSPNOTRAN
Recursive
CA_STARTMSGRSP
A start request response not in a
transaction has been received.
CE_UCABORTMSGRSPERR
Recursive
CA_USABORTMSGRSP
UCAbort Response was not received.
CE_UCABORTMSGRSPOK
Recursive
CA_USABORTMSGRSP
UCAbort Response was received.
Session Action Definitions
This section shows the valid actions and definitions.
MP-RAS
The following table shows the valid actions and definitions that can occur for a session.
Valid Action
Definition
CA_ASNMSG
Validate assign request from the client.
CA_ASNRSP
Process valid assign response from assign task.
CA_BUILDLOGON
Build the logon request to be sent to session control.
CA_CLOSE
Close the file description in connect task.
CA_CONNECTMSG
Validate connect request from the client.
CA_CONTAKEOVER
Build and send takeover message to the mux.
CA_DISC
Build and send disconnect message to assign task.
CA_DORESSO
Handle the SSO request received on reconnect.
CA_DOSSO
Handle the SSO request received.
CA_FREE
Free the session context.
CA_GONE
Process a Gone message from session control.
CA_KILL
Process a Kill message from xgtwglobal.
CA_LOG
Log an event.
CA_LOGAUTHERR
Log an authentication error.
CA_LOGOFF
Build and send a logoff message to session control.
CA_LOGON
Send the normal logon request to session control.
CA_METHODMSG
Validate method request from client.
CA_NOP
Placeholder - do nothing.
CA_NOTIFYMUX
Build and send takeback message to mux.
CA_NOTIFYMUXOK
Build Success msg and send takeback message to mux.
CA_REASNNEWRSP
Process valid Re-Logon reassign response from assign task.
CA_REASNRSP
Process valid reassign response from assign task.
CA_RECONTAKEOVER
Build and send reconnect takeover message to the mux.
CA_RELOGON
Re-send the logon request to session control.
CA_SENDASNREQ
Build and assign request to assign task.
CA_SENDASNRSP
Build and send (re)assign response to the client. Include configrsp and ssorsp if required.
CA_SENDERR
Build and send error parcel to the client.
CA_SENDERR_inline
Build and send error parcel from statetable to the client.
CA_SENDMETHODRSP
Build and send methods response to the client.
CA_SENDPERR
Forward error parcel from session control to the client.
CA_SENDREASNREQ
Build and send reassign request to assign task.
CA_SENDREASNRSP
Build and send reassign response to the client.
CA_SENDSSORSP
Send SSO response already built in session context by CA_DOSSO/CA_DORESSO.
CA_SSOMSG
Validate SSOMSG.
CA_STARTCONTIMER
Start connect timer.
CA_STATUS
Process the logon response from session control and send the Status request to session
control.
CA_STOPCONTIMER
Stop connect timer.
CA_TERM
Build and send terminate request to assign task.
CA_TERMFORCE
Build and send terminate by force request to assign task.
CA_XLOGON
Send the fast logon request to session control.
Windows and Linux
The following table shows the valid actions and definitions that can occur for a session.
Note: Actions of the form CA_ERRGTWxxxx will cause the CA_SENDERR_inline function
to be called with the value specified by ERRGTWxxxx.
Valid Action
Definition
CA_ABORTMSG
Validate abort request from the client.
CA_ASNMSG
Validate assign request from the client.
CA_ASNRSP
Process valid assign response from assign task.
CA_ASNRSPSSO
Process valid assign response from assign task with SSO parcel.
CA_ASSIGNRSP
Process [re]assign response from assign task.
CA_CLOSE
Close the file descriptor in connect task.
CA_CONFIGMSG
Process a config message and send the response.
CA_CONNECTMSG
Validate connect request from the client.
CA_CONTINUEMSG
Validate continue request from the client.
CA_CONTINUEMSGRSP
Process continue response from the DBS.
CA_DIRECTMSG
Validate directed request from the client.
CA_DIRECTMSGRSP
Process the response to a directed message.
CA_ELICITDATAMSGRSP
Process the response from the client to an elicit data request.
CA_FREE
Free the session context if all I/O complete.
CA_ILLEGALEVENT
An unexpected event occurred.
CA_LOG
Log an event.
CA_LOGAUTHERR
Log an authentication error.
CA_LOGOFF
Build and send a logoff message to session control.
CA_LOGOFFMSGRSP
Process logoff response from the DBS.
CA_LOGONMSGRSP
Process logon response from the DBS.
CA_METHODMSG
Validate method request from client.
CA_NOP
Placeholder - do nothing.
CA_REASNRSP
Process valid reassign response from assign task.
CA_REASNRSPSSO
Process valid reassign response from assign task with SSO parcel.
CA_RECON
Process reconnect message from client.
CA_RECONSSO
Process reconnect message from client using SSO.
CA_SENDABORT
Send an abort request to the DBS.
CA_SENDASN
Build and send assign request to assign task.
CA_SENDASNRSP
Build and send assign response to the client.
CA_SENDASNRSPSSO
Build and send assign response and SSO parcel to the client.
CA_SENDCONRSP
Send the connect response to the client.
CA_SENDCONTINUE
Send the continue request to the DBS.
CA_SENDDBSRSP
Send the DBS response to the client.
CA_SENDDIRABORT
Send an abort request for a directed message.
CA_SENDDIRECT
Send the directed request to the DBS.
CA_SENDDIRECTRSP
Send the Direct Request response to the client.
CA_SENDELICITDATA
Send the Elicit Data to the DBS.
CA_SENDELICITDATAREQ
Send the Elicit Data Request response to the client.
CA_SENDERR
Build and send error parcel to the client.
CA_SENDERR_inline
Build and send error parcel from the state table to the client.
CA_SENDLOGOFFOKRSP
Build OK logoff response and forward to the client.
CA_SENDLOGOFFRSP
Forward logoff response to the client.
CA_SENDLOGON
Send the normal logon request to session control.
CA_SENDLOGONSSO
Build and send SSO logon request to session control.
CA_SENDMETHODRSP
Build and send methods response to the client.
CA_SENDPERR
Forward error parcel from session control to the client.
CA_SENDREASN
Build and send reassign request to assign task.
CA_SENDREASNRSP
Build and send reassign response to the client.
CA_SENDREASNRSPSSO
Build and send reassign response with SSO parcel to the client.
CA_SENDRECONRSP
Build and send reconnect response to the client.
CA_SENDRESSORSP
Build and send reconnect SSO response to the client.
CA_SENDSSORSP
Build and send SSO response to the client.
CA_SENDSTART
Forward start request to the DBS.
CA_SENDTERM
Build and send terminate request to assign task.
CA_SENDTERMDISC
Build and send a disconnect terminate request to assign task.
CA_SENDTERMFORCE
Build and send a force terminate request to assign task.
CA_SENDTERMFREE
Build and send a free terminate request to assign task.
CA_SENDTERMGONE
Build and send a gone terminate request to assign task.
CA_SENDUCABORT
Send a UC abort request to the DBS.
CA_SHUTDOWN
Shut down the virtual circuit.
CA_SSOMSG
Validate SSOMSG.
CA_STARTCONTIMER
Start connect timer.
CA_STARTMSG
Validate Start message.
CA_STARTMSGRSP
Process Start Response from DBS.
CA_STOPCONTIMER
Stop connect timer.
CA_TESTMSG
Process the Test Message from the client.
CA_UCABORTMSGRSP
Validate whether the DBS response is to a UCABORT.
Glossary
2PC
Two-phase Commit
AG
Allocation Group
AMP
Access Module Processor
ARC
Archive/Recovery
AWS
Administration Workstation
AWT
AMP Worker Task
BLK Block
BTEQ Basic Teradata Query
CI Cylinder Index
CICS
Customer Information Control System
CID
Cylinder Index Descriptor
CNS
Teradata Database Console Subsystem
CJ
Changed Row Journal
COP Communications Processor
DB
Data Block
DBQL
Database Query Log
DBS
Database System or Database Software
DBW
Database Window
DDL
Data Definition Language
DEM
Database Extensibility Mechanism
DML
Data Manipulation Language
Dwin Display Window
FIB
File Information Block
FSE
Free Sector
FSG
File Segment
FSP Free Space Percent
GDB Teradata GNU Debugger
GDO Globally Distributed Object
GLOP Global and persistent
GUI
Graphical User Interface
hex
Hexadecimal
HG
Host Group
HN Host Number
HOST
Teradata Database Host Software
HUT
Host Utility
HUTCNS
Host Utility Console
IMS
Information Management System
LAN
Local Area Network
LOB Large Object
LSN Log Sequence Number
LSNSPEC
LSN Specification
LUN
Logical Unit
MDS
Metadata Services
MI Master Index
MLOAD
MultiLoad
MPP Massively Parallel Processing
NoPI
No Primary Index
NPPI
Nonpartitioned Primary Index
NTA Nested Top Action
NUSI Non-Unique Secondary Index
OCES
Optimizer Cost Estimation Subsystem
OJ
Ordered System Change Journal
OLTP Online Transaction Processing
PCT
PPI Cache Threshold
PDE
Parallel Database Extensions
PDEGPL
GNU General Public License Version of Parallel Database Extensions
PDN
Package Distribution Node
PE
Parsing Engine
PG
Performance Group
PID
Process ID
PJ
Permanent Journal
PMPC
Performance Monitoring and Production Control
PPI
Partitioned Primary Index
PROC
Procedural Management Subsystem
PSA
Priority Scheduler Administrator
PUT
Parallel Upgrade Tool
QIC
Quarter-inch Cartridge
RBM
Resolver Base Module
ResUsage Resource Usage
RFC
Reservation Flow Control
RJ Recovery Journal
RP
Resource Partition
RSG
Relay Services Gateway
RSS
Resource Subsystem
SDF
Source Specification for Data Formatting
SECTORSPEC
Sector Specification
SMP
Symmetric MultiProcessing
stdin
Standard Input
stdout
Standard Output
supv Supervisor Window
Supvr Supervisor Icon
sysinit
System Initializer
SysView
System View
TCHN
Teradata Channel
TDGSS
Teradata Database Generic Security Services
TDP
Teradata Director Program
Teradata ASM
Teradata Active System Management
Teradata DWM
Teradata Dynamic Workload Manager
TGTW
Teradata Gateway
TJ
Transient Journal
TLE
Target Level Emulation
TPA
Trusted Parallel Application
TZ
Time Zone
UDF
User-Defined Function
UDM
User-Defined Method
UDT
User-Defined Type
UNFES
Unfree Sector
UPI Unique Primary Index
USI
Unique Secondary Index
UTC
Universal Coordinated Time
vdisk
Virtual Disk
vproc
Virtual Processor
WAL
Write Ahead Logging
WCI
WAL Cylinder Index
WD Workload Definition
WDB
WAL Data Block
WDBD
WAL Data Block Descriptors
WLC
Workload Class
WLSN
WAL Log Sequence Number
WMI
WAL Master Index
Index
Symbols
.OS command, DUL utility 482
A
ABORT command, DUL utility 470
Abort Host utility
aborting transactions 32
overview 31
starting 31
user interfaces 31
ABORT SESSION command, Database Window 248
access rights, creating with DIPACR, DIP utility 233
AccessLockForUncomRead, DBS Control utility 312
Active Row Filtering, Ctl utility 172
ADD AMP command, Configuration utility 114
ADD HOST command, Configuration utility 116
ADD PE command, Configuration utility 117
AMP Load utility 33
starting 33
user interfaces 33
ampload utility. See AMP Load utility
AMPs, Configuration utility 109
AWT Monitor utility. See awtmon utility
awtmon utility
overview 35
starting 35
syntax 36
user interfaces 35
B
BEGIN CONFIG command, Configuration utility 119
Bkgrnd Age Cycle Interval field, DBS Control utility 313
Break Stop setting, Ctl utility 170
C
Century Break field, DBS Control utility 314
Character sets, creating with DIPCCS, DIP utility 233
CHECK command, CheckTable utility 50
Check levels, CheckTable utility 56
Checksum fields, DBS Control utility 446
CheckTable Table Lock Retry Limit field,
DBS Control utility 316
CheckTable utility
aborting a table check 48
aborting the CHECK command 48
CHECK command 50
check levels 56
CHECKTABLEB command 55
COMPRESSCHECK option 55
concurrent mode 44
data checking 42
data subtables 60, 64, 76
deadlocks 91
error handling 94
general checks 81
level-one checking 58
level-three checking 75
level-two checking 62
overview 41
pendingop-level checking 58
recommendations for running 43
referential integrity terminology 42
rules for using wildcard syntax 99
starting 41
table check status 47
user interfaces 41
using commands 45
using function keys 45
using valid characters in names 95
using wildcard characters in names 97
viewing help 45
wildcard syntax 97
CheckTable utility commands
CHECK 50
CHECKTABLEB 55
Client character sets, creating with DIPCCS, DIP utility 233
Client Reset Timeout field, DBS Control utility 317
CLIEnvFile field, Cufconfig utility 185
CLIIncPath field, Cufconfig utility 186
CLILibPath field, Cufconfig utility 187
Clique Failure setting, Ctl utility 168
CNSGET command, Database Window 250
Cnsrun utility 103
CNSSET LINES command, Database Window 251
CNSSET STATEPOLL command, Database Window 252
CNSSET TIMEOUT command, Database Window 253
CompilerPath field, Cufconfig utility 188
CompilerTempDirectory field, Cufconfig utility 189
concurrent mode, CheckTable utility 44
config utility. See Configuration utility
Configuration maps, Configuration utility 108
Configuration utility
about 108
activities 110
AMP commands 111
AMPs 109
command overview 111
command types 111
examples 152
host commands 112
hosts 109
maps 108
messages 153
overview 107
PE commands 112
PEs 109
physical processors 108
Reconfiguration utility activities and 110
session control commands 111
starting 107
system attribute commands 111
user interfaces 107
vprocs 108
Configuration utility commands
overview 111
ADD AMP 114
ADD HOST 116
ADD PE 117
BEGIN CONFIG 119
DEFAULT CLUSTER 120
DEL AMP 123
DEL HOST 125
DEL PE 126
END CONFIG 128
LIST 129
LIST AMP 131
LIST CLUSTER 133
LIST HOST 135
LIST PE 137
MOD AMP 139
MOD HOST 140
MOD PE 141
MOVE AMP 142
MOVE PE 144
SHOW CLUSTER 146
SHOW HOST 147
SHOW VPROC 149
STOP 151
Control GDO Editor. See Ctl utility
CostProfileId field, DBS Control utility 318
Crashdumps 233
Crashdumps database, DUL utility 461
Ctl command shell 157
Ctl utility
commands 157
overview 155
starting 155
syntax 156
user interfaces 155
Ctl utility commands
EXIT 158
HARDWARE 159
HELP 161
PRINT group 162
PRINT variable 163
QUIT 164
READ 165
SCREEN 166
SCREEN DBS 166
SCREEN DEBUG 169
SCREEN RSS 172
SCREEN VERSION 174
variable=setting 177
WRITE 179
Ctl utility settings
Break Stop 170
Clique Failure 168
Cylinder Read 168
Cylinder Slots/AMP 169
FSG Cache Percent 168
Maximum Dumps 171
Mini Dump Type 171
Minimum Node Action 167
Minimum Nodes Per Clique 167
Node Logging Rate 172
Restart After DAPowerFail 169
RSS Active Row Filter Mode Enable 172
RSS Collection Rate 172
RSS Summary Mode Enable 172
RSS Table Logging Enable 172
Save Dumps 171
Snapshot Crash 171
Start DBS 170
Start PrgTraces 171
Start With Debug 170
Start With Logons 170
Vproc Logging Rate 172
Ctl Vproc Logging Rate, Ctl utility 172
Cufconfig utility
fields 182
overview 181
settings 182
starting 181
syntax 182
user interfaces 181
Cufconfig utility fields
CLIEnvFile 185
CLIIncPath 186
CLILibPath 187
CompilerPath 188
CompilerTempDirectory 189
GLOPLockTimeout 190
GLOPLockWait 191
GLOPMemMapPath 192
JavaBaseDebugPort 193
JavaEnvFile 195
JavaHybridThreads 196
JavaLibraryPath 197
JavaLogPath 198
JavaServerTasks 199
JavaVersion 200
JREPath 201
JSVServerMemPath 202
LinkerPath 203
MallocLimit 204
MaximumCompilations 205
MaximumGLOPMem 206
MaximumGLOPPages 207
MaximumGLOPSize 208
ModTime 209
ParallelUserServerAMPs 210
ParallelUserServerPEs 211
SecureGroupMembership 212
SecureServerAMPs 213
SecureServerPEs 214
SourceDirectoryPath 215
summary 182
SWDistNodeID 216
TDSPLibBase 217
UDFEnvFile 218
UDFIncPath 219
UDFLibPath 220
UDFLibraryPath 221
UDFServerMemPath 222
UDFServerTasks 223
Version 224
CurHashBucketSize field, DBS Control utility 321
Cylinder Read setting, Ctl utility 168
Cylinder Slots/AMP setting, Ctl utility 169
Cylinders Saved for PERM field, DBS Control utility 322
D
data checking, CheckTable utility 42
DATABASE command, DUL utility 471
Database Initialization Program. See DIP utility
Database Utilities overview 25
Database Window
commands 246
granting CNS permissions 243
help 245
main window 242
repeating commands 244
running scripts 245
saving output 244
saving the buffer 245
scripts 245
starting 241
user interfaces 241
viewing log and buffer files 245
Database Window commands
ABORT SESSION 248
CNSGET 250
CNSSET LINES 251
CNSSET STATEPOLL 252
CNSSET TIMEOUT 253
DISABLE ALL LOGONS 254
DISABLE LOGONS 254
ENABLE ALL LOGONS 256
ENABLE DBC LOGONS 255
ENABLE LOGONS 256
GET ACTIVELOGTABLE 257
GET CONFIG 259
GET EXTAUTH 263
GET LOGTABLE 264
GET PERMISSIONS 266
GET RESOURCE 267
GET SUMLOGTABLE 268
GET TIME 269
GET VERSION 270
GRANT 272
LOG 274
QUERY STATE 275
RESTART TPA 277
REVOKE 278
SET ACTIVELOGTABLE 280
SET EXTAUTH 282
SET LOGTABLE 283
SET RESOURCE 285
SET SESSION COLLECTION 288
SET SUMLOGTABLE 289
START 291
STOP 293
Databases
creating crashdumps database, DIP utility 233
creating SQLJ with DIPSQLJ, DIP utility 235
creating Sys_Calendar with DIPCAL, DIP utility 233
creating SYSLIB with DIPDEM, DIP utility 233
creating SYSUDTLIB with DIPUDT, DIP utility 235
DATE command, Ferret utility 508
DateForm field, DBS Control utility 323
DBC.AccLogRule macro, DIP utility 232
DBQL Log Last Resp field, DBS Control utility 326
DBQLFlushRate field, DBS Control utility 324
DBS Control commands 296
DBS Control utility
DISPLAY command 297
enabling checksums on individual tables 448
field groups 296
HELP command 300
MaxRequestsSaved field 388
MODIFY command 302
overview 295
QUIT command 304
starting 295
user interfaces 295
using 296
WRITE command 305
DBS Control utility fields
AccessLockForUncomRead 312
Bkgrnd Age Cycle Interval 313
Century Break field 314
Checksum fields 446
CheckTable Table Lock Retry Limit field 316
Client Reset Timeout field 317
CostProfileId field 318
CurHashBucketSize field 321
Cylinders Saved for PERM 322
DateForm field 323
DBQL Log Last Resp 326
DBQLFlushRate field 324
DBSCacheCtrl 329
DBSCacheThr 330
DeadlockTimeOut field 334
DefaultCaseSpec 336
DefragLowCylProd 337
DictionaryCacheSize 339
DisablePeekUsing 341
DisableSyncScan 343
DisableUDTImplCastForSysFuncOp field 340
DisableWAL 344
DisableWALforDBs 345
EnableCostProfileTLE field 346
EnableSetCostProfile field 348
Export Width Table ID field 349
ExternalAuthentication field 356
Free Cylinder Cache Size 358
FreeSpacePercent 359
HashFuncDBC field 362
HTMemAlloc 363
IAMaxWorkloadCache 365
IdCol Batch Size field 366
IVMaxWorkloadCache 368
LargeDepotCylsPerPdisk 369
LockLogger Delay Filter 373
LockLogger Delay Filter Time 374
LockLogger field 371
LockLogSegmentSize 376
MaxDecimal 377
MaxDownRegions 378
MaxJoinTables 380
MaxLoadAWT 381
MaxLoadTasks 384
MaxParseTreeSegs 387
MaxRequestsSaved 388
MaxRowHashBlocksPercent 389
MaxSyncWALWrites 390
MDS Is Enabled 391
Memory Limit Per Transaction 393
MiniCylPackLowCylProd 394
MonSesCPUNormalization 396
MPS_IncludePEOnlyNodes 398
NewHashBucketSize 399
ObjectUseCountCollectRate 401
Permanent Journal Tables 453
PermDBAllocUnit 403
PermDBSize 405
PPICacheThrP 407
PrimaryIndexDefault field 411
Read Ahead Count 415
ReadAhead 413
ReadLockOnly 416
RedistBufsize 417
RepCacheSegSize 419
RevertJoinPlanning 420
RollbackPriority 421
RollbackRSTransaction 423
RollForwardLock 424
RoundHalfwayMagUp 425
RSDeadLockInterval 426
SessionMode 427
SkewAllowance 428
Small Depot Slots 429
Spill File Path 431
StandAloneReadAheadCount 432
StepSegmentSize 433
SyncScanCacheThr 434
SysInit 436
System Journal Tables 450
System Logging Tables 451
System Tables 449
System TimeZone Hour 437
System TimeZone Minute 438
Target Level Emulation 439
TempLargePageSize 440
Temporary Storage Page Size 441
Temporary Tables 454
User Tables field 452
UtilityReadAheadCount 443
Version 444
WAL Buffers 445
WAL Checkpoint Interval 446
DBSCacheCtrl field, DBS Control utility 329
DBSCacheThr field, DBS Control utility 330
DBW. See Database Window
DeadlockTimeOut field, DBS Control utility 334
DEFAULT CLUSTER command, Configuration utility 120
DefaultCaseSpec field, DBS Control utility 336
DefragLowCylProd field, DBS Control utility 337
DEFRAGMENT command, Ferret utility 509
DEL AMP command, Configuration utility 123
DEL HOST command, Configuration utility 125
DEL PE command, Configuration utility 126
DictionaryCacheSize field, DBS Control utility 339
DIP utility
creating access rights 233
creating calendar view 233
creating client character sets 233
creating crashdumps database 233
creating error message logs 233
creating online help 234
creating ResUsage macros and views 235
creating ResUsage tables 234
creating Sys_Calendar 233
creating system views 236
creating SystemFE macros 235
DIP scripts 232
overview 231
restricting password words 234
scripts 232
starting 232
user interfaces 232
DIP utility scripts
DIPACC 232
DIPACR 233
DIPALL 233
DIPCAL 233
DIPCCS 233
DIPCRASH 233
DIPDEM 233
DIPERR 233
DIPOCES 234
DIPOLH 234
DIPPATCH 234
DIPPWRSTNS 234
DIPRCO 234
DIPRSS 234
DIPRUM 235
DIPSQLJ 235
DIPUDT 235
DIPVIEW 236
DISABLE ALL LOGONS command, Database Window 254
DISABLE command, Ferret utility 512
DISABLE LOGONS command, Database Window 254
DISABLE LOGONS command, Gateway Global utility 589
DISABLE TRACE command, Gateway Global utility 590
DisablePeekUsing field, DBS Control utility 341
DisableSyncScan field, DBS Control utility 343
DisableUDTImplCastForSysFuncOp field,
DBS Control utility 340
DisableWAL field, DBS Control utility 344
DisableWALforDBs field, DBS Control utility 345
DISCONNECT SESSION command,
Gateway Global utility 591
DISCONNECT USER command, Gateway Global utility 592
DISPLAY command, DBS Control utility 297
DISPLAY DISCONNECT command,
Gateway Global utility 593
DISPLAY FORCE command, Gateway Global utility 594
DISPLAY GTW command, Gateway Global utility 595
DISPLAY NETWORK command, Gateway Global utility 597
DISPLAY SESSION command, Gateway Global utility 599
DISPLAY STATS command, Gateway Global utility 602
DISPLAY TIMEOUT command, Gateway Global utility 603
DISPLAY USER command, Gateway Global utility 604
down subtables and regions
CheckTable level-one checking 62
CheckTable level-three checking 81
CheckTable level-two checking 74
DROP command, DUL utility 472
DUL utility
about 455
crashdumps database 461
DULTAPE utility 456
entering commands 469
mailing dump tapes to Teradata 463
modes of operation 457
overview 455
privileges 461
restarting 464
return codes 465
saving dumps to tape 462
space allocation 461
space requirements 461
starting and running 457
transferring Windows dump files 464
user interfaces 455
what DUL does 455
what DULTAPE does 456
DUL utility commands
.OS 482
ABORT 470
DATABASE 471
DROP 472
END 474
HELP 475
LOAD 476
LOGOFF 479
LOGON 480
QUIT 483
SEE 484
SELECT 485
SHOW TAPE 487
SHOW VERSIONS 488
UNLOAD 489
DULTAPE utility. See DUL utility
Dump Unload/Load utilities. See DUL utility
E
ENABLE ALL LOGONS command, Database Window 256
ENABLE command, Ferret utility 513
ENABLE DBC LOGONS command, Database Window 255
ENABLE LOGONS command, Database Window 256
ENABLE LOGONS command, Gateway Global utility 606
ENABLE TRACE command, Gateway Global utility 607
EnableCostProfileTLE field, DBS Control utility 346
EnableSetCostProfile field, DBS Control utility 348
END command, DUL utility 474
END CONFIG command, Configuration utility 128
Error handling, CheckTable utility 94
Error message logs, creating with DIPERR, DIP utility 233
Error messages
Configuration utility 153
Ferret utility 507
ERRORS command, Ferret utility 514
EXIT command, Ctl utility 158
Export Width Table ID field, DBS Control utility 349
ExternalAuthentication field, DBS Control utility 356
F
Ferret utility
classes of tables 503
command scope 505
command summary 506
command syntax 496
command usage rules 497
cylinders 494
data blocks 494
entering commands 496
entering multitoken parameters 498
entering numeric input 499
error messages 507
overview 493
performance considerations 494
redirecting input 494
redirecting output 494
scope 505
specifying a subtable identifier (tid) 499
starting 493
user interfaces 493
using parameters 498
vproc numbers 504
Ferret utility commands
DATE 508
DEFRAGMENT 509
DISABLE 512
ENABLE 513
ERRORS 514
HELP 516
INPUT 517
OUTPUT 518
overview 505
PACKDISK 520
PRIORITY 525
QUIT 526
RADIX 527
SCANDISK 528
SCOPE 539
SHOWBLOCKS 550
SHOWDEFAULTS 553
SHOWFSP 554
SHOWSPACE 561
TABLEID 567
TIME 508
UPDATE DATA INTEGRITY FOR 569
field groups, DBS Control utility 296
FLUSH TRACE command, Gateway Global utility 609
Free Cylinder Cache Size field, DBS Control utility 358
FreeSpacePercent field, DBS Control utility 359
FSG Cache Percent setting, Ctl utility 168
G
Gateway Control utility
changing maximum sessions per node 581
gateway host groups 572
gateway log files 574
options 575
overview 571
starting 571
syntax 572
user interfaces 571
Gateway Global utility
administering sessions and users 587
commands overview 585
diagnostics 587
displaying network information 586
displaying session information 586
getting help 588
logging sessions off 588
Main window 615
Main window display options 617
Main window Messages section 620
Main window Mode options 619
Main window Summary section 618
overview 583
Session window 621
specifying a host 586
user interfaces 584
user names and hexadecimal notation 585
xgtwglobal, X Window interface 615
Gateway Global utility commands
DISABLE LOGONS 589
DISABLE TRACE 590
DISCONNECT SESSION 591
DISCONNECT USER 592
DISPLAY DISCONNECT 593
DISPLAY FORCE 594
DISPLAY GTW 595
DISPLAY NETWORK 597
DISPLAY SESSION 599
DISPLAY STATS 602
DISPLAY TIMEOUT 603
DISPLAY USER 604
ENABLE LOGONS 606
ENABLE TRACE 607
FLUSH TRACE 609
HELP 610
KILL SESSION 611
KILL USER 612
SELECT HOST 613
SET TIMEOUT 614
Gateway Host Groups, Gateway Control utility 572
GET ACTIVELOGTABLE command, Database Window 257
GET CONFIG command, Database Window 259
GET EXTAUTH command, Database Window 263
GET LOGTABLE command, Database Window 264
GET PERMISSIONS command, Database Window 266
GET RESOURCE command, Database Window 267
GET SUMLOGTABLE command, Database Window 268
GET TIME command, Database Window 269
GET VERSION command, Database Window 270
GLOPLockTimeout field, Cufconfig utility 190
GLOPLockWait field, Cufconfig utility 191
GLOPMemMapPath field, Cufconfig utility 192
GRANT command, Database Window 272
gtwcontrol. See Gateway Control utility
gtwglobal. See Gateway Global utility 583
H
HARDWARE command, Ctl utility 159
HashFuncDBC field, DBS Control utility 362
HELP command
Ctl utility 161
DBS Control utility 300
DUL utility 475
Ferret utility 516
Gateway Global utility 610
Host, Configuration utility 109
HTMemAlloc field, DBS Control utility 363
I
IAMaxWorkloadCache field, DBS Control utility 365
IdCol Batch Size field, DBS Control utility 366
INPUT command, Ferret utility 517
IVMaxWorkloadCache field, DBS Control utility 368
J
JavaBaseDebugPort field, Cufconfig utility 193
JavaEnvFile field, Cufconfig utility 195
JavaHybridThreads field, Cufconfig utility 196
JavaLibraryPath field, Cufconfig utility 197
JavaLogPath field, Cufconfig utility 198
JavaServerTasks field, Cufconfig utility 199
JavaVersion field, Cufconfig utility 200
JREPath field, Cufconfig utility 201
JSVServerMemPath field, Cufconfig utility 202
K
KILL SESSION command, Gateway Global utility 611
KILL USER command, Gateway Global utility 612
L
Large object (LOB) subtables
level-one checking 60
level-two checking 66
LargeDepotCylsPerPdisk field, DBS Control utility 369
LinkerPath field, Cufconfig utility 203
Linux, starting utilities on 634
LIST AMP command, Configuration utility 131
LIST CLUSTER command, Configuration utility 133
LIST command, Configuration utility 129
LIST HOST command, Configuration utility 135
LIST PE command, Configuration utility 137
LOAD command, DUL utility 476
LockLogger Delay Filter field, DBS Control utility 373
LockLogger Delay Filter Time field, DBS Control utility 374
LockLogger field, DBS Control utility 371
LockLogSegmentSize field, DBS Control utility 376
LOG command, Database Window 274
Logging rates, Ctl utility 172
LOGOFF command, DUL utility 479
LOGON command, DUL utility 480
M
MallocLimit field, Cufconfig utility 204
MaxDecimal field, DBS Control utility 377
MaxDownRegions field, DBS Control utility 378
Maximum Dumps setting, Ctl utility 171
MaximumCompilations field, Cufconfig utility 205
MaximumGLOPMem field, Cufconfig utility 206
MaximumGLOPPages field, Cufconfig utility 207
MaximumGLOPSize field, Cufconfig utility 208
MaxJoinTables field, DBS Control utility 380
MaxLoadAWT field, DBS Control utility 381
MaxLoadTasks field, DBS Control utility 384
MaxParseTreeSegs field, DBS Control utility 387
MaxRequestsSaved field, DBS Control utility 388
MaxRowHashBlocksPercent field, DBS Control utility 389
MaxSyncWALWrites field, DBS Control utility 390
MDS Is Enabled field, DBS Control utility 391
Memory Limit Per Transaction field, DBS Control utility 393
Microsoft Windows, starting utilities on 632
Mini Dump Type setting, Ctl utility 171
MiniCylPackLowCylProd field, DBS Control utility 394
Minimum Node Action setting, Ctl utility 167
Minimum Nodes Per Clique setting, Ctl utility 167
MOD AMP command, Configuration utility 139
MOD HOST command, Configuration utility 140
MOD PE command, Configuration utility 141
MODIFY command, DBS Control utility 302
ModTime field, Cufconfig utility 209
MonSesCPUNormalization field, DBS Control utility 396
MOVE AMP command, Configuration utility 142
MOVE PE command, Configuration utility 144
MP-RAS, starting utilities on 629
MPS_IncludePEOnlyNodes field, DBS Control utility 398
N
NewHashBucketSize field, DBS Control utility 399
no primary index (NoPI) tables 411
Node Logging Rate, Ctl utility 172
Nonunique secondary indexes
CheckTable level-one checking 61
CheckTable level-three checking 79
CheckTable level-two checking 70
O
ObjectUseCountCollectRate field, DBS Control utility 401
OCES, initializing with DIPOCES, DIP utility 234
Online help, creating with DIPOLH, DIP utility 234
Optimizer Cost Estimation Subsystem. See OCES
OUTPUT command, Ferret utility 518
P
PACKDISK command, Ferret utility 520
ParallelUserServerAMPs field
Cufconfig utility 210
ParallelUserServerPEs field, Cufconfig utility 211
Password words, restricting with DIPPWRSTNS,
DIP utility 234
PDE Control program. See Ctl utility
Permanent Journal Tables field, DBS Control utility 453
PermDBAllocUnit field, DBS Control utility 403
PermDBSize field, DBS Control utility 405
PEs, Configuration utility 109
Physical processors, Configuration utility 108
PPICacheThrP field, DBS Control utility 407
PrimaryIndexDefault field, DBS Control utility 411
PRINT group command, Ctl utility 162
PRINT variable command, Ctl utility 163
PRIORITY command, Ferret utility 525
Q
QUERY STATE command, Database Window 275
QUIT command
Ctl utility 164
DBS Control utility 304
DUL utility 483
Ferret utility 526
R
RADIX command, Ferret utility 527
Read Ahead Count field, DBS Control utility 415
READ command, Ctl utility 165
ReadAhead field, DBS Control utility 413
ReadLockOnly field, DBS Control utility 416
RedistBufSize field, DBS Control utility 417
Reference index subtables,
CheckTable level-three checking 80
Reference indexes, CheckTable level-two checking 73
referential integrity, CheckTable utility and 42
RepCacheSegSize field, DBS Control utility 419
Resource Usage
creating tables with DIPRSS, DIP utility 234
creating views and macros with DIPRUM, DIP utility 235
Restart After DAPowerFail setting, Ctl utility 169
RESTART TPA command, Database Window 277
RevertJoinPlanning field, DBS Control utility 420
REVOKE command, Database Window 278
RollbackPriority field, DBS Control utility 421
RollbackRSTransaction field, DBS Control utility 423
RollForwardLock field, DBS Control utility 424
RoundHalfwayMagUp field, DBS Control utility 425
RSDeadLockInterval field, DBS Control utility 426
RSS Active Row Filter Mode Enable, Ctl utility 172
RSS Collection Rate, Ctl utility 172
RSS Summary Mode Enable, Ctl utility 172
RSS Table Logging Enable, Ctl utility 172
S
Save Dumps setting, Ctl utility 171
SCANDISK command, Ferret utility 528
SCOPE command, Ferret utility 539
SCREEN command, Ctl utility 166
SCREEN DBS command, Ctl utility 166
SCREEN DEBUG command, Ctl utility 169
SCREEN RSS command, Ctl utility 172
SCREEN VERSION command, Ctl utility 174
SecureGroupMembership field, Cufconfig utility 212
SecureServerAMPs field, Cufconfig utility 213
SecureServerPEs field, Cufconfig utility 214
SEE command, DUL utility 484
SELECT command, DUL utility 485
SELECT HOST command, Gateway Global utility 613
Session Action definitions 651
Session Event definitions 644
Session State definitions 639
SessionMode field, DBS Control utility 427
SET ACTIVELOGTABLE command, Database Window 280
SET EXTAUTH command, Database Window 282
SET LOGTABLE command, Database Window 283
SET RESOURCE command, Database Window 285
SET SESSION COLLECTION command,
Database Window 288
SET SUMLOGTABLE command, Database Window 289
SET TIMEOUT command, Gateway Global utility 614
SHOW CLUSTER command, Configuration utility 146
SHOW HOST command, Configuration utility 147
SHOW TAPE command, DUL utility 487
SHOW VERSIONS command, DUL utility 488
SHOW VPROC command, Configuration utility 149
SHOWBLOCKS command, Ferret utility 550
SHOWDEFAULTS command, Ferret utility 553
SHOWFSP command, Ferret utility 554
SHOWSPACE command, Ferret utility 561
SkewAllowance field, DBS Control utility 428
Small Depot Slots field, DBS Control utility 429
Snapshot Crash setting, Ctl utility 171
SourceDirectoryPath field, Cufconfig utility 215
Spill File Path field, DBS Control utility 431
StandAloneReadAheadCount field, DBS Control utility 432
START command, Database Window 291
Start DBS setting, Ctl utility 170
Start PrgTraces setting, Ctl utility 171
Start With Debug setting, Ctl utility 170
Start With Logons setting, Ctl utility 170
StepsSegmentSize field, DBS Control utility 433
STOP command, Configuration utility 151
STOP command, Database Window 293
Summary Mode, Ctl utility 172
SUSE Linux, starting utilities on 634
SWDistNodeID field, Cufconfig utility 216
SyncScanCacheThr field, DBS Control utility 434
Syntax diagrams, how to read 623
SysInit field, DBS Control utility 436
System Journal Tables field, DBS Control utility 450
System Logging Tables field, DBS Control utility 451
System Tables field, DBS Control utility 449
System TimeZone Hour field, DBS Control utility 437
System TimeZone Minute field, DBS Control utility 438
SystemFE macros, creating with DIPSYSFE, DIP utility 235
T
Table headers, level-one checking 59
Table Logging, Ctl utility 172
TABLEID command, Ferret utility 567
tables without primary indexes (NoPI tables) 411
Target Level Emulation field, DBS Control utility 439
TDSPLibBase field, Cufconfig utility 217
TempLargePageSize field, DBS Control utility 440
Temporary Storage Page Size field, DBS Control utility 441
Temporary Tables field, DBS Control utility 454
Teradata Database file system overview, Ferret utility 494
Teradata Database Utilities overview 25
TIME command, Ferret utility 508
U
UDF GDO Configuration utility. See Cufconfig utility
UDFEnvFile field, Cufconfig utility 218
UDFIncPath field, Cufconfig utility 219
UDFLibPath field, Cufconfig utility 220
UDFLibraryPath field, Cufconfig utility 221
UDFServerMemPath field, Cufconfig utility 222
UDFServerTasks field, Cufconfig utility 223
Unique secondary indexes
CheckTable level-one checking 61
CheckTable level-three checking 78
CheckTable level-two checking 67
UNLOAD command, DUL utility 489
UPDATE DATA INTEGRITY FOR command,
Ferret utility 569
User Tables field, DBS Control utility 452
Utilities
alphabetical listing 25
running on different platforms 629
starting on Linux 634
starting on MP-RAS 629
starting on Windows 632
starting on z/OS 637
starting on z/VM 637
stopping 629
UtilityReadAheadCount field, DBS Control utility 443
V
variable=setting command, Ctl utility 177
Version field, Cufconfig utility 224
Version field, DBS Control utility 444
Vprocs, Configuration utility 108
W
WAL Buffers field, DBS Control utility 445
WAL Checkpoint Interval field, DBS Control utility 446
Wildcard syntax, CheckTable utility 97, 99
Windows, starting utilities on 632
WRITE command
Ctl utility 179
DBS Control utility 305
X
xdbw. See Database Window
xgtwglobal. See Gateway Global utility
Z
z/OS, starting utilities on 637
z/VM, starting utilities on 637