Teradata Archive/Recovery Utility
Reference
Release 16.00
B035-2412-086K
November 2016
The product or products described in this book are licensed products of Teradata Corporation or its affiliates.
Teradata, Applications-Within, Aster, BYNET, Claraview, DecisionCast, Gridscale, MyCommerce, QueryGrid, SQL-MapReduce, Teradata
Decision Experts, "Teradata Labs" logo, Teradata ServiceConnect, Teradata Source Experts, WebAnalyst, and Xkoto are trademarks or registered
trademarks of Teradata Corporation or its affiliates in the United States and other countries.
Adaptec and SCSISelect are trademarks or registered trademarks of Adaptec, Inc.
Amazon Web Services, AWS, [any other AWS Marks used in such materials] are trademarks of Amazon.com, Inc. or its affiliates in the United
States and/or other countries.
AMD Opteron and Opteron are trademarks of Advanced Micro Devices, Inc.
Apache, Apache Avro, Apache Hadoop, Apache Hive, Hadoop, and the yellow elephant logo are either registered trademarks or trademarks of
the Apache Software Foundation in the United States and/or other countries.
Apple, Mac, and OS X all are registered trademarks of Apple Inc.
Axeda is a registered trademark of Axeda Corporation. Axeda Agents, Axeda Applications, Axeda Policy Manager, Axeda Enterprise, Axeda
Access, Axeda Software Management, Axeda Service, Axeda ServiceLink, and Firewall-Friendly are trademarks and Maximum Results and
Maximum Support are servicemarks of Axeda Corporation.
CENTOS is a trademark of Red Hat, Inc., registered in the U.S. and other countries.
Cloudera, CDH, [any other Cloudera Marks used in such materials] are trademarks or registered trademarks of Cloudera Inc. in the United
States, and in jurisdictions throughout the world.
Data Domain, EMC, PowerPath, SRDF, and Symmetrix are registered trademarks of EMC Corporation.
GoldenGate is a trademark of Oracle.
Hewlett-Packard and HP are registered trademarks of Hewlett-Packard Company.
Hortonworks, the Hortonworks logo and other Hortonworks trademarks are trademarks of Hortonworks Inc. in the United States and other
countries.
Intel, Pentium, and XEON are registered trademarks of Intel Corporation.
IBM, CICS, RACF, Tivoli, and z/OS are registered trademarks of International Business Machines Corporation.
Linux is a registered trademark of Linus Torvalds.
LSI is a registered trademark of LSI Corporation.
Microsoft, Active Directory, Windows, Windows NT, and Windows Server are registered trademarks of Microsoft Corporation in the United
States and other countries.
NetVault is a trademark or registered trademark of Dell Inc. in the United States and/or other countries.
Novell and SUSE are registered trademarks of Novell, Inc., in the United States and other countries.
Oracle, Java, and Solaris are registered trademarks of Oracle and/or its affiliates.
QLogic and SANbox are trademarks or registered trademarks of QLogic Corporation.
Quantum and the Quantum logo are trademarks of Quantum Corporation, registered in the U.S.A. and other countries.
Red Hat is a trademark of Red Hat, Inc., registered in the U.S. and other countries. Used under license.
SAP is the trademark or registered trademark of SAP AG in Germany and in several other countries.
SAS and SAS/C are trademarks or registered trademarks of SAS Institute Inc.
SPARC is a registered trademark of SPARC International, Inc.
Symantec, NetBackup, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the United States
and other countries.
Unicode is a registered trademark of Unicode, Inc. in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other product and company names mentioned herein may be the trademarks of their respective owners.
THE INFORMATION CONTAINED IN THIS DOCUMENT IS PROVIDED ON AN "AS-IS" BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT. SOME
JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSION MAY NOT APPLY TO YOU. IN NO EVENT WILL
TERADATA CORPORATION BE LIABLE FOR ANY INDIRECT, DIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS
OR LOST SAVINGS, EVEN IF EXPRESSLY ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
The information contained in this document may contain references or cross-references to features, functions, products, or services that are
not announced or available in your country. Such references do not imply that Teradata Corporation intends to announce such features,
functions, products, or services in your country. Please consult your local Teradata Corporation representative for those features, functions,
products, or services available in your country.
Information contained in this document may contain technical inaccuracies or typographical errors. Information may be changed or updated
without notice. Teradata Corporation may also make improvements or changes in the products or services described in this information at any
time without notice.
To maintain the quality of our products and services, we would like your comments on the accuracy, clarity, organization, and value of this
document. Please email: [email protected].
Any comments or materials (collectively referred to as "Feedback") sent to Teradata Corporation will be deemed non-confidential. Teradata
Corporation will have no obligation of any kind with respect to Feedback and will be free to use, reproduce, disclose, exhibit, display, transform,
create derivative works of, and distribute the Feedback and derivative works thereof without limitation on a royalty-free basis. Further, Teradata
Corporation will be free to use any ideas, concepts, know-how, or techniques contained in such Feedback for any purpose whatsoever, including
developing, manufacturing, or marketing products or services incorporating Feedback.
Copyright © 1997-2016 by Teradata. All Rights Reserved.
Preface
Purpose
This book provides information about Teradata Archive/Recovery Utility (Teradata ARC),
which is a Teradata® Tools and Utilities product. Teradata Tools and Utilities is a group of
products designed to work with Teradata Database.
Teradata ARC writes and reads sequential files on a Teradata client system to archive, restore,
recover, and copy Teradata Database object data. Through its associated script language, it also
provides an interface between Teradata’s Backup Application Software solutions and the
Teradata Database.
Audience
This book is intended for use by:
• System administrators
• Database administrators
• Other technical personnel responsible for maintaining a Teradata Database
Supported Releases
This book supports the following releases:
• Teradata Database 16.00
• Teradata Tools and Utilities 16.00
• Teradata Archive/Recovery Utility 16.00
Note: See “Verifying Teradata ARC Version Number” on page 22 to verify the Teradata
Archive/Recovery Utility version number.
To locate detailed supported-release information:
1 Go to http://www.info.teradata.com/.
2 Under Online Publications, click General Search.
3 Type 3119 in the Publication Product ID box.
4 Under Sort By, select Date.
5 Click Search.
6 Open the version of the Teradata Tools and Utilities ##.# Supported Platforms and Product Versions spreadsheet associated with this release.
The spreadsheet includes supported Teradata Database versions, platforms, and product release numbers.
Prerequisites
The following prerequisite knowledge is required for this product:
• Basic computer technology
• Teradata system hardware
• Teradata Database
• System console environment
• X Windows
Changes to This Book
The following changes were made to this book in support of the current release. Changes are
marked with change bars. For a complete list of changes to the product, see the Teradata Tools
and Utilities Release Definition associated with this release.
Date and Release: November 2016, 16.00
Description:
• Updated release to 16.00 in the Preface and Chapter 2.
• Added new DBC tables and descriptions to Table 2: Tables Archived for Database DBC.
• Added a note in Chapter 1 about limited support for the "Multiple Hash Maps (MHM)" feature.
• Replaced references to "Windows XP/Server 2003" with "Windows."
• Added footnote (c) text in Appendix B.
• Removed error code 123 from the table below return code 12; added error code 123 to return code 8 in Chapter 5.
• Updated references to Windows Server 2003 to Windows Server 2008.
• Updated the ARC startup banner.
• Added 4320, 4321, 4323, and 4324 to the WARNING section of Table 17 in Chapter 5.
• Added Schema to the Database Object Types in Table 22 and as a 16.00 supported item at the end of Appendix A.
• Added Schema to the Selective Backup and Restore rules in Appendix B (Table 23).
• Added Schema to the Exclude Tables Object Types in Appendix C (Table 24).
• In Chapter 2, documented how ONLINE ARCHIVE and restore work.
• In Chapter 4, added a new section, "Restart Log File Size Limit," for the RESTARTLOG parameter.
• In the Preface, added one entry to the "Additional Information" table; in Chapter 2, added a new section titled "Data Compression."
• In Chapter 2, added a statement to the section titled "Restoring Database TD_SYSFNLIB."
Additional Information
Additional information that supports this product and Teradata Tools and Utilities is available
at the web sites listed in the table that follows. In the table, mmyx represents the publication
date of a manual, where mm is the month, y is the last digit of the year, and x is an internal
publication code. Match the mmy of a related publication to the date on the cover of this book.
This ensures that the publication selected supports the same release.
Type of Information: Release overview; late information
Description: Use the Release Definition for the following information:
• Overview of all of the products in the release
• Information received too late to be included in the manuals
• Operating systems and Teradata Database versions that are certified to work with each product
• Version numbers of each product and the documentation for each product
• Information about available training and the support center
Access to Information:
1 Go to http://www.info.teradata.com/.
2 Under Downloadable Publications, click General Search.
3 Type 2029 in the Publication Product ID box.
4 Click Search.
5 Select the appropriate Release Definition from the search results.

Type of Information: Additional product information
Description: Use the Teradata Information Products web site to view or download specific manuals that supply related or additional information to this manual.
Access to Information:
1 Go to http://www.info.teradata.com/.
2 Under the Downloadable Publications subcategory, Browse by Category, click Data Warehousing.
3 Do one of the following:
• For a list of Teradata Tools and Utilities documents, click Teradata Tools and Utilities, and then select an item under Releases or Products.
• Select a link to any of the data warehousing publications categories listed.
Specific books related to Teradata Archive/Recovery Utility are as follows:
• Data Dictionary, B035-1092-mmyx
• Messages, B035-1096-mmyx
• Teradata BAR Backup Application Software Release Definition, B035-3114-mmyx
• Teradata Call-Level Interface Version 2 Reference for Workstation-Attached Systems, B035-2418-mmyx
• Teradata Tools and Utilities for Microsoft Windows Installation Guide, B035-2407-mmyx
• Teradata Tools and Utilities for IBM z/OS Installation Guide, B035-3128-mmyx
• Teradata Tools and Utilities Installation Guide for UNIX and Linux, B035-2459-mmyx

Type of Information: CD-ROM images
Description: Access a link to a downloadable CD-ROM image of all customer documentation for this release. Customers are authorized to create CD-ROMs for their use from this image.
Access to Information:
1 Go to http://www.info.teradata.com/.
2 Under the Downloadable Publications subcategory, Browse by Category, click Data Warehousing.
3 Click Documentation CD Images – Teradata Database and Teradata Tools & Utilities.
4 Follow the ordering instructions.

Type of Information: General information about Teradata
Description: The Teradata home page provides links to numerous sources of information about Teradata. Links include:
• Executive reports, case studies of customer experiences with Teradata, and thought leadership
• Technical information, solutions, and expert advice
• Press releases, mentions, and media resources
Access to Information:
1 Go to Teradata.com.
2 Select a link.

Type of Information: Data Migration documents
Description: Use the Teradata Information Products web site to view or download specific data migration documents.
Access to Information:
1 Go to http://www.info.teradata.com/.
2 Under the Downloadable Publications subcategory, Browse by Category, click Data Warehousing.
3 Click Teradata Database Node Software IUMB.
4 Under Releases, select the desired release.
5 Select Teradata Database Node Software Migration Guide (B035-59xx) to download the documentation.

Type of Information: Database Query Log (DBQL) utility
Description: Located in Chapter 16 of the Database Administration manual, release 16.00.
Access to Information:
1 Go to http://www.info.teradata.com/.
2 Under the Downloadable Publications subcategory, Browse by Category, click Data Warehousing.
3 Click Teradata Database.
4 Under Titles, on the right side of the page, click Database Administration.
5 Click the 16.00 release version to download the documentation.

Type of Information: Data Compression information
Description: Located in Chapter 13 of the Database Design manual, release 16.00. There are also some references in the Database Administration manual.
Access to Information:
1 Go to http://www.info.teradata.com/.
2 Under the Downloadable Publications subcategory, Browse by Category, click Data Warehousing.
3 Click Teradata Database.
4 Under Titles, on the right side of the page, click Database Design and/or Database Administration.
5 Click the 16.00 release version to download the documentation.
Product Safety Information
This document may contain information addressing product safety practices related to data or
property damage, identified by the word Notice. A notice indicates a situation which, if not
avoided, could result in damage to property, such as equipment or data, but not related to
personal injury.
Example:
Notice:
Improper use of the Reconfiguration utility can result in data loss.
Table of Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Supported Releases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
Changes to This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
Additional Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
Product Safety Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
Chapter 1:
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
What is Teradata ARC? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Verifying Teradata ARC Version Number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Teradata ARC-Specific Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
How Teradata ARC Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Starting Teradata ARC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Uses of Teradata ARC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
What is ARCMAIN? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Starting ARCMAIN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Starting ARCMAIN from z/OS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Starting ARCMAIN from Linux and Windows XP/Server 2008 . . . . . . . . . . . . . . . . . . . . 27
Canceling Teradata ARC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Sample Teradata ARC Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Validating Teradata ARC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Chapter 2:
Archive/Recovery Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Database DBC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Database SYSUDTLIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Logical Link to DBC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Archiving. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .36
Restoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .36
Copying. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .36
Deleting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .36
Release Lock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .37
User-Defined Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .37
User-Defined Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .37
Copying UDTs and UDMs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .38
Extended Object Names (EON) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .38
Teradata Active System Management (TASM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .38
Always On - Redrive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .39
Archiving Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .39
Concurrency Control. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .40
All AMPs Online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .40
Dictionary Archive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .47
Cluster Archive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .47
Offline AMPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48
Large Objects (LOBs). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .49
Non-Hashed and Partially Loaded Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .49
Join and Hash Indexes in Archiving. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .50
Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .50
Archiving AMPs Locally . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .50
Archiving DBC with Multi-Stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .51
Archiving Online. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .51
Teradata Database versions prior to 14.00. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53
Teradata Database 14.00 and later . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .54
Migrating Online Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .55
Determining Whether Online Archive Logging is in Use . . . . . . . . . . . . . . . . . . . . . . . . . .55
Online Archive Example (using the LOGGING ONLINE ARCHIVE ON statement) . . .56
Online Archive Example (using the ONLINE option with the ARCHIVE statement) . . . . . .56
Restoring Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .57
Conditions Needed for a Restore Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .58
Considerations Before Restoring Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .59
Restoring the Database DBC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .62
Restoring a User Database or Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .64
Restoring With All AMPs Online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .70
Restoring with a Specific-AMP Archive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .71
Restoring Using the EXCLUDE Option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .71
Restoring Database TD_SYSFNLIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .72
Restoring With AMPs Offline. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .73
Restoring With One AMP Offline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .73
Restoring Cluster Archives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Restoring After a Reconfiguration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Restoring with a Larger Number of AMPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Restoring a Specific AMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Restoring with Multi-Stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Restoring with Partial-AMPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Restoring AMPs Locally . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Recovering Tables and Databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Recovering With Offline AMPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Recovering a Specific AMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Copying Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Copy vs. Restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Conditions for Using the COPY Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
COPY Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Using Host Utility Locks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Transaction vs. Utility Locks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Teradata ARC Locks During an Archive Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Teradata ARC Locks During a Restore Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Locks Associated with Other Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Monitoring for Deadlock Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Setting Up Journal Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Location of Change Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Local Journaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Archiving Journal Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Journal Impact on Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Controlling Journal Checkpoint Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Checkpoint Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Submitting a CHECKPOINT Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Checkpoint and Locks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Completing a Checkpoint With Offline AMPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Skipping a Join Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Skipping a Stat Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Data compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Chapter 3:
Environment Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
ARCDFLT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
ARCENV and ARCENVX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Chapter 4:
Runtime Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107
CATALOG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .110
CHARSETNAME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .115
CHECKPOINT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .121
CHECKSUM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .123
COPYSJI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .124
DATAENCRYPTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .125
DBSERROR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .126
DEFAULT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .128
ENCODINGCHARSET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .130
ERRLOG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .131
FATAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .132
FILEDEF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .133
HALT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .135
HEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .136
IOMODULE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .137
IOPARM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .138
LINEWRAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .139
LOGON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .140
LOGSKIPPED . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .141
NOTMSM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .142
NOWARN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .143
OUTLOG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .145
PARM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .146
PAUSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .149
PERFFILE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .150
RESTART. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .152
RESTARTLOG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .153
Restart Log File Size Limit. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .153
SESSIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .155
STARTAMP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .157
UEN (Utility Event Number) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .158
VERBOSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .159
WORKDIR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .164
Chapter 5:
Return Codes and UNIX Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Return Codes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
UNIX Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Stack Trace Dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Usage Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Chapter 6:
Archive/Recovery Control Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
ANALYZE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
ARCHIVE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
BUILD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
CHECKPOINT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
COPY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
DELETE DATABASE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
DELETE JOURNAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
LOGDATA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
LOGGING ONLINE ARCHIVE OFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
LOGGING ONLINE ARCHIVE ON. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
LOGMECH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
LOGOFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
LOGON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
RELEASE LOCK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
RESTORE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Resizing the Teradata Database with DBSIZEFILE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
REVALIDATE REFERENCES FOR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
ROLLBACK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
ROLLFORWARD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
SET QUERY_BAND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Chapter 7:
Restarting Teradata ARC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Restart Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Restarting Teradata ARC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
During a Database DBC Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .262
During an Archive Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .262
During a Restore Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .262
During a Recovery Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .263
Restart After Client or Teradata ARC Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .263
Restarting an Archive Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .263
Restarting a Restore Operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .264
Restarting a Checkpoint Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .265
Restart After a Teradata Database Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .265
Restarting an Archive Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .265
Restarting a Restore Operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .266
Restarting a Recovery Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .266
Restarting a Checkpoint Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .266
Removing HUT Locks After a Restart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .266
Recovery Control Catalog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .267
Recording Row Activity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .267
Recording AMP Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .270
Recording Device Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .270
Appendix A:
Database Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .271
Database Object Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .271
Individual Object Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .272
Appendix B:
Restoring and Copying Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .275
Selective Backup and Restore Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .275
Meaning of Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .276
Appendix C:
Excluding Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .279
Exclude Tables Object Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .279
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
List of Tables
Table 1: Minimum Region Sizes Running from z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Table 2: Tables Archived for Database DBC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Table 3: Dictionary Rows and Tables Archived for User Database . . . . . . . . . . . . . . . . . . . . . 41
Table 4: Database DBC Restore Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Table 5: Error Table Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Table 6: User Database Data Dictionary Rows Restored. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Table 7: User Table Dictionary Rows Restored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Table 8: Read Locks During Archive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Table 9: Journal Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Table 10: Journal Change Row Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Table 11: Skip Join Index Use Case Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Table 12: Environment Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Table 13: Runtime Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Table 14: Teradata-Defined Character Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Table 15: Data Session Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Table 16: Return Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Table 17: Database Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Table 18: Client-Generated Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Table 19: Summary of Teradata ARC Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Table 20: File Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Table 21: RCEvent Table Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Table 22: Database Object Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Table 23: Selective Backup and Restore Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Table 24: Exclude Tables Object Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
CHAPTER 1
Introduction
This chapter contains these topics:
• What is Teradata ARC?
• What is ARCMAIN?
• Starting ARCMAIN
• Canceling Teradata ARC
• Sample Teradata ARC Scripts
• Validating Teradata ARC
What is Teradata ARC?
Teradata ARC writes and reads sequential files on a Teradata client system to archive, restore,
and recover, as well as to copy, Teradata Database object data.
For a detailed list of objects supported by Teradata ARC, see “Appendix A Database Objects”
on page 271.
Teradata ARC allows you to specify all objects as individual objects in an Archive, Restore, or
Copy statement. Some objects require Teradata Database 13.00 or higher. For a list of object
support by Teradata Database version, see Appendix A: “Database Objects.”
To see a detailed list of platforms that Teradata ARC supports:
1 Go to http://www.info.teradata.com.
2 Navigate to General Search > Publication Product ID.
3 Enter 3119.
4 Open the version of the Teradata Tools and Utilities ##.# Supported Platforms and Product Versions spreadsheet associated with this release.
Note: Starting with TTU 15.10.00.00, the use of Permanent Journal tables is deprecated by the Teradata Archive/Recovery Utility. This includes all statements and options related to journal tables that are mentioned in this manual.
Note: Starting with TTU 16.00.00.00, ARC provides limited support for the "Multiple Hash Maps (MHM)" feature. As part of Multiple Hash Map support, ARCMAIN archives, restores, and copies only objects that belong to an all-AMPs map.
Verifying Teradata ARC Version Number
The version number for Teradata ARC is always displayed in the startup banner in Teradata
ARC output. For example:
07/30/2015 06:21:14  Copyright 1989-2015, Teradata Corporation.
07/30/2015 06:21:14  All Rights Reserved.
07/30/2015 06:21:14  [ARCMAIN ASCII-art banner]
07/30/2015 06:21:14  PROGRAM:  ARCMAIN
07/30/2015 06:21:14  RELEASE:  16.00.00.00
07/30/2015 06:21:14  BUILD:    160108LX64 (Jul 13 2015)
07/30/2015 06:21:14  RESTARTLOG = ARCLOG150730_062114_30862.rlg
07/30/2015 06:21:14  PARAMETERS IN USE:
07/30/2015 06:21:14  CHARACTER SET IN USE: ASCII
Verify the Teradata ARC version number with operating-system-specific commands. For example, on Linux, execute rpm -q teradata_arc to display the release version number for teradata_arc. On Windows, click Start > Control Panel > Add or Remove Programs to display a list of installed programs. The entry for Teradata ARC should list the release version number.
Teradata ARC-Specific Terminology
The terms backup and dump are often used interchangeably with archive.
Restore, which is a specific Teradata ARC keyword and the name of a Teradata ARC operation,
is part of the recovery process. In addition to restore, recovery entails other operations, such as
returning data tables to their state following their modification (called rolling forward),
returning data tables to the state they were in before they were modified (called rolling back),
and so on. For further explanation, see Chapter 6: “Archive/Recovery Control Language.”
There is no Teradata ARC statement called “recover.”
The difference between copy and restore is in the kind of operation being performed:
• A restore operation moves data from archived files back to the same Teradata Database from which it was archived, or to a different Teradata Database so long as the database DBC is already restored.
• A copy operation moves data from an archived file back to a Teradata Database, not necessarily to the same system, and creates a new table if one does not already exist on the target database. When a selected partition is copied, the table must exist and be a table that was previously copied as a full-table copy.
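The following is a minimal sketch of the difference; the database names (Payroll, Payroll_Test) and the logical file name ARCHIVE are hypothetical, and the exact RESTORE and COPY syntax is documented in Chapter 6: "Archive/Recovery Control Language."

LOGON DBC,DBC ;
RESTORE DATA TABLES (Payroll) ALL,
  RELEASE LOCK,
  FILE=ARCHIVE;
COPY DATA TABLES (Payroll_Test) (FROM (Payroll)),
  RELEASE LOCK,
  FILE=ARCHIVE;
LOGOFF;

The RESTORE statement returns the archive to the database it came from, while the COPY statement re-creates the archived tables under a different database name, possibly on a different system.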
How Teradata ARC Works
Teradata ARC creates files when databases, individual data tables, selected partitions of partitioned primary index (PPI) tables, or permanent journal tables are archived from the Teradata Database. When those objects are restored to the Teradata Database, the data in those Teradata ARC files is read back and restored.
Teradata ARC also includes recovery with rollback and rollforward functions for data tables defined with a journal option. These journals can be checkpointed with a synchronization point across all AMPs, and selected portions of the journals can be deleted.
Starting Teradata ARC
Teradata ARC runs in either online or batch mode on:
• IBM z/OS
• Linux SUSE
• Windows Server 2003
Although Teradata ARC is normally started in batch mode, it can be run interactively. In the
interactive mode, do not expect a user-friendly interface in online sessions.
Teradata ARC is started with a program module called ARCMAIN.
Beginning with Teradata Tools and Utilities 13.10.00, Teradata ARC is now supported as a
64-bit application on 64-bit Windows and Linux systems. On IBM mainframe systems,
Teradata ARC can only be run as a 32-bit application.
Uses of Teradata ARC
Teradata ARC supports:
• Archiving a database, individual table, or selected partitions of a PPI table from a Teradata Database to a client resident file.
• Restoring a database, individual table, or selected partitions of a PPI table back to a Teradata Database from a client resident archive file.
• Copying an archived database, table, or selected partitions of a PPI table to a Teradata Database on a different hardware platform than the one from which the database or table was archived.
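As an illustration of these three granularities, the script below archives a whole database, a single table, and selected partitions of a PPI table. The object names, the date column, and the logical file names are hypothetical, and the precise ARCHIVE syntax, including the PARTITIONS WHERE option, is described under "ARCHIVE" in Chapter 6.

LOGON DBC,DBC ;
ARCHIVE DATA TABLES (Sales) ALL,
  RELEASE LOCK,
  FILE=FULLARC;
ARCHIVE DATA TABLES (Sales.DailyTotals),
  RELEASE LOCK,
  FILE=TABARC;
ARCHIVE DATA TABLES (Sales.Orders)
  (PARTITIONS WHERE (! OrderDate BETWEEN DATE '2016-10-01' AND DATE '2016-10-31' !)),
  RELEASE LOCK,
  FILE=PARTARC;
LOGOFF;

Each FILE=name refers to a logical archive file: a DDNAME on z/OS, or a name mapped to a disk or tape file on workstation platforms (for example, with the FILEDEF runtime parameter).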
Note: Beginning with TTU 13.00.00, Teradata ARC supports:
• Archiving, restoring, and copying all objects as individual objects, except for triggers, which cannot be copied. For a complete list of objects supported by Teradata ARC, see "Appendix A Database Objects" on page 271.
• Archiving and restoring join and hash index definitions; automatically rebuilding the indexes after a restore or copy operation.
Teradata ARC also supports:
• Placing a checkpoint entry in a journal table.
• Recovering a database to an arbitrary checkpoint by rolling it back or rolling it forward, using change images from a journal table.
• Deleting change image rows from a journal table.
What is ARCMAIN?
ARCMAIN is the program name of the Teradata ARC utility. ARCMAIN uses these files:
• Input – (required) contains archive and recovery control statements that have been created.
• Output log – contains all runtime messages that are output from the utility. The file indicates, statement by statement, the activity that occurs from the statements in the input file. This file is generated automatically.
• Restart log – contains restart recovery information for internal use. The utility places into this non-readable file the information that it needs if a task is terminated prematurely and then restarted (with the RESTART runtime parameter). This file is automatically generated.
• Stack Trace Dump file – contains stack trace information for internal use. This file is generated automatically for certain error conditions.
• Archive – contains archival data. An output archive file is the target of the data provided by an ARCHIVE statement. An input archive file is the source of data for restoring a database or table. Any number of archive files can be used during a utility run. Use the FILE parameter of the RESTORE/COPY or ARCHIVE statement to identify the names of input and output archive files. This file is required only if the task involves the creation of an archive or the restoration of a database or table.
Although Teradata ARC can process up to 350 ARCMAIN jobs at a time, each with up to 1024 sessions, avoid running that many jobs concurrently.
Note: For information on file size limits, see "Restart Log File Size Limit" on page 153.
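As a sketch of how these files relate on a workstation platform (all names here are hypothetical), an input file such as nightly.arc might contain:

LOGON DBC,DBC ;
ARCHIVE DATA TABLES (Personnel) ALL,
  RELEASE LOCK,
  INDEXES,
  FILE=ARCHIVE;
LOGOFF;

Redirecting this file into ARCMAIN with runtime parameters such as OUTLOG=nightly.out and RESTARTLOG=nightly.rlg names the output log and restart log explicitly; if those parameters are omitted, both logs are still generated automatically. FILE=ARCHIVE identifies the archive file that the run writes, by default a disk file in the current directory.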
Starting ARCMAIN
To view the JCL provided with the release, see the file on the release tape called ARCJOB in
dbcpfx.PROCLIB.
Starting ARCMAIN from z/OS
This sample JCL shows how to start Teradata ARC from z/OS:
Figure 1: Sample JCL for Starting ARCMAIN from z/OS

//USERJOB  JOB <job info>
//ARCJOB   PROC ARCPARM=,DBCPFX=,
//         DUMP=DUMMY,DUMPDSN=,DSEQ=,DVOL=,
//         RESTORE=DUMMY,RSTRDSN=,DBCLOG=
//*
//STEP1    EXEC PGM=ARCMAIN,
//         PARM='&ARCPARM'
//STEPLIB  DD  DSN=&DBCPFX..AUTHLOAD,
//         DISP=SHR
//ARCHIVE  DD  &DUMP,DSN=&DBCPFX..&DUMPDSN,DISP=(,CATLG),
//         UNIT=TAPE,LABEL=(&DSEQ,SL),
//         VOL=SER=(&DVOL)
//ARCIN    DD  &RESTORE,DSN=&DBCPFX..&RESTORE&RSTRDSN,DISP=OLD
//DBCLOG   DD  DSN=&DBCPFX..&DBCLOG,DISP=OLD
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//         PEND
//*
//S1       EXEC ARCJOB,ARCPARM='SESSIONS=100 HEX',
//         DBCPFX=DBCTEST,DUMP='DUMP.DATA',DSEQ=1,
//         DVOL='TAPE01,TAPE02',DBCLOG='ARCLOG.DATA'
//SYSIN    DD  DATA,DLM=##
LOGON DBC,DBC ;
ARCHIVE DATA TABLES (PERSONNEL) ALL,
  RELEASE LOCK,
  INDEXES,
  FILE=ARCHIVE;
LOGOFF;
##
Note: If specified, Teradata ARC honors a DCB=BLKSIZE value in the ARCHIVE DD statement; otherwise, Teradata ARC uses the optimum BLKSIZE value supplied by the z/OS operating system.
ARCMAIN 14.10 (and later) writes a new format archive data set which always has the
attribute DCB=RECFM=VBA. ARCMAIN 14.10 will read archive data sets of the new format
and the old format, which it recognizes by the data set attribute DCB=RECFM=U.
The new archive data set format is not compatible with the old archive data set format.
ARCMAIN depends on recognizing the archive data set format from the RECFM DCB attribute
of the data set. On z/OS, if the RECFM DCB attribute is specified in the DD statement which
identifies the archive input data set, that specification overrides and hides the actual RECFM
DCB attribute of the data set itself, potentially misleading ARCMAIN as to the archive format.
Recommendation: Remove the DCB attributes RECFM and BLKSIZE, if present, from the DD statement for an input archive data set. On z/OS, it is never appropriate to specify those DCB attributes on input DD statements. It is always best to allow the input data set itself to supply the RECFM, LRECL, and BLKSIZE DCB attributes.
Starting with ARCMAIN 13.10, the RECFM, LRECL, and BLKSIZE DCB attributes for output
archive data sets are set by default to optimum values for the output device supplied by z/OS.
With ARCMAIN 13.10 and later, Teradata recommends removing RECFM, LRECL, and
BLKSIZE DCB attribute specifications from all archive data set DD statements, whether
ARCMAIN is reading or writing the archive data set.
The JCL example uses these syntax elements:
• DUMP is the archive keyword, with DUMPDSN as the data set name for the archive output file. DUDISP specifies whether to KEEP or CATLG the output file.
• DBCLOG is the log file data set name created by Teradata ARC.
• DBCPFX is the high-level qualifier for Teradata Database load libraries.
The DBCLOG card indicates the restart log file, and the data set must have RECFM of F, FBS, or FS. For better results, set BLKSIZE near 32K. The ARCHIVE card is not needed in this example; it is shown for illustration purposes. Create and catalog the DBCLOG data set once and reuse it as needed. To create the DBCLOG file, see the file on the release tape called ARCLOG, in dbcpfx.PROCLIB.
Calculating Region Size
Table 1 is applicable to Teradata ARC 8.0 or earlier. Use the information in this table to
estimate the minimum region size (in kilobytes) needed to run Teradata ARC from z/OS.
Note: This table provides estimates for archive or restore operations only. Copy operations
might require more memory.
Table 1: Minimum Region Sizes Running from z/OS

Number of   Region Size        Region Size          Region Size
Sessions    For 200 Objects    For 2,000 Objects    For 8,000 Objects
10          2,500 KB           3,200 KB             5,620 KB
50          4,420 KB           5,140 KB             7,540 KB
100         6,820 KB           7,540 KB             9,940 KB
where:

Table Element       Description
Number of Objects   The number of databases + the number of tables and stored procedures to be archived, restored, or copied. The ALL keyword expands the number according to this query:
                    SELECT COUNT(*) FROM DBC.CHILDREN WHERE PARENT = '<db>'
Sessions            The number of data sessions to use in the operation.
Example
In this example, database db2 and table db3.a are not children of database db.
ARCHIVE DATA TABLES (db) ALL,(db2),(db3.a),
RELEASE LOCK,
FILE=ARCHIVE;
If the SELECT query returns 5 for db, the total number of objects in the ARCHIVE DATA
TABLES statement is 7 (the 5 objects counted for db, plus db2 and db3.a).
Starting ARCMAIN from Linux and Windows XP/Server 2008
Examples in this section are applicable to Linux and Windows XP/Server 2008. The first
example is a command-line invocation of Teradata ARC from Windows XP/Server 2008:
ARCMAIN SESSIONS=8 CATALOG OUTLOG=ARCALL.OUT <ARCALL.IN
The above command line calls the ARCMAIN executable, uses eight sessions, and enables the
catalog feature.
ARCALL.IN is an input file that contains ARCMAIN commands. The '<' redirects the input
file to ARCMAIN.
By default, data is written to (or read from) a disk file in the current directory. With a tape
product, use IOMODULE and IOPARM command-line parameters. To determine the proper
values for the IOMODULE and IOPARM command-line parameters, see “IOMODULE” on
page 137 and “LOGON” on page 140.
Two of the Backup Application Software products, Quest® NetVault and Symantec Veritas
NetBackup, provide additional archive and restore functionality to Teradata ARC, including a
graphical user interface (GUI) to perform archive and restore operations. See the Quest
NetVault Backup Plugin User's Guide for Teradata or the Teradata Extension for NetBackup
Administrator Guide for information on how to install, configure, and use the Teradata access
module for these products.
Using a Configuration File or Environment Variables
Frequently used command-line options can be saved in an environment variable or a
configuration file. For example, suppose the configuration file T:\DEFAULTS\CONFIG.ARC
defines the following runtime parameters:
CATALOG
FILEDEF=(ARCHIVE,ARCHIVE_%UEN%)
SESSIONS=8
and one or both of the following environment variables:
• ARCDFLT
• ARCENV or ARCENVX (These are the same environment variables, except that ARCENVX has override priority.)
are set temporarily at the command prompt:
SET ARCDFLT=T:\DEFAULTS\CONFIG.ARC
SET ARCENV=WDIR=C:\TEMP\ARC\
or set permanently, using Start > Settings > Control Panel > System > Environment, then
ARCMAIN can be called:
ARCMAIN RLOG=JOB980813 <INPUT.ARC
In the above command, the runtime parameter RLOG=JOB980813 is not defined in a
configuration file or in an environment variable because it is job-specific. The call to
ARCMAIN is equivalent to the following command-line invocation of Teradata ARC:
ARCMAIN CATALOG FILEDEF=(ARCHIVE,ARCHIVE_%UEN%) SESSIONS=8
        RLOG=JOB980813 WDIR=C:\TEMP\ARC <INPUT.ARC
When called, ARCMAIN automatically uses the context of any set environment variable. The
example assumes that ARCENV is set to WDIR=C:\TEMP\ARC\.
Using Environment Variables
Most runtime parameters can be set as defaults in the ARCENV environment variable, the file
pointed to by the ARCDFLT environment variable, or in some combination of the two.
These environment variables are known to ARCMAIN, which loads the contents of ARCENV,
or of the default file, internally when building its runtime parameters. If a parameter is specified
multiple times in different places, the override priority is:
1. Parameters set in ARCENVX
2. Actual runtime parameters on the command line
3. Parameters set in ARCENV
4. Parameters set in the default file pointed to by ARCDFLT
For example, for everyone to use a specific runtime parameter, such as CATALOG, create a
network default file that contains the parameter, then point ARCDFLT to it.
Local defaults or frequently used parameters such as IOMODULE can be set in ARCENV,
which overrides parameters set in the network default file. Parameters on the command line
override parameters set in ARCENV; parameters set in ARCENVX override actual runtime
parameters on the command line.
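As an illustrative sketch (the SESSIONS values below are hypothetical and not taken from this reference), if the environment variables and command line were set as follows on Windows, ARCMAIN would run with SESSIONS=16, because ARCENVX overrides the command line, which in turn overrides ARCENV:

REM Hypothetical values, for illustration only
SET ARCENV=SESSIONS=4
SET ARCENVX=SESSIONS=16
ARCMAIN SESSIONS=8 <INPUT.ARC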
ARC Executable File Changes for Linux
The following information is for the Linux platform only.
On Linux platforms, the ‘arcmain’ executable file is available in a 32-bit version and in a 64-bit
version. During installation, both versions of the ‘arcmain’ executable file are installed on the
Linux system.
Prior to ARC 15.10, both versions of the executable file had the same name (‘arcmain’). To
prevent a name conflict, each executable file was installed in a different directory location. The
32-bit version was the default: entering ‘arcmain’ at the Linux command prompt executed the
32-bit version of arcmain.
Several different ways were provided to allow you to invoke ‘arcmain’ to access the desired
executable file:
arcmain                                    -- 32-bit ‘arcmain’
/usr/bin/arcmain                           -- 32-bit ‘arcmain’ softlink
/usr/bin64/arcmain                         -- 64-bit ‘arcmain’ softlink
/opt/teradata/client/15.10/bin/arcmain     -- 32-bit ‘arcmain’ (full path)
/opt/teradata/client/15.10/bin64/arcmain   -- 64-bit ‘arcmain’ (full path)
Starting with ARC 15.10, the following changes have been implemented:
• There will no longer be separate 32-bit and 64-bit ‘bin’ directories in /opt/teradata/client/15.10. There will only be a single ‘bin’ directory, and both versions of ‘arcmain’ will reside in it.
• In order to uniquely identify each version of ‘arcmain’ in the ‘bin’ directory, the name of the 32-bit ‘arcmain’ executable file will be changed to ‘arcmain32’. The name of the 64-bit ‘arcmain’ executable file will remain ‘arcmain’.
• Due to this name change, the 64-bit version of ‘arcmain’ will now become the default version on the Linux platform. If you just enter ‘arcmain’ at your Linux prompt, the 64-bit version of arcmain will now execute as the default. If the 32-bit version of ‘arcmain’ is desired, the user will have to explicitly specify the ‘arcmain32’ executable name.
• Both ‘arcmain’ softlinks (/usr/bin/arcmain and /usr/bin64/arcmain) will still be set up, but they will now have the following links:
  /usr/bin/arcmain   -> /opt/teradata/client/15.10/bin/arcmain32
  /usr/bin64/arcmain -> /opt/teradata/client/15.10/bin/arcmain
• The full path to /opt/teradata/client/15.10/bin will be set up in your system’s path during installation.
In summary, you can now execute ‘arcmain’ by entering one of the following on your Linux
system:
arcmain32                                  -- 32-bit ‘arcmain’
arcmain                                    -- 64-bit ‘arcmain’
/usr/bin/arcmain                           -- 32-bit ‘arcmain’ softlink
/usr/bin64/arcmain                         -- 64-bit ‘arcmain’ softlink
/opt/teradata/client/15.10/bin/arcmain32   -- 32-bit ‘arcmain’ (full path)
/opt/teradata/client/15.10/bin/arcmain     -- 64-bit ‘arcmain’ (full path)
Canceling Teradata ARC
Use standard operating system or terminal commands to cancel a Teradata ARC task.
To resume an aborted operation, specify the RESTART parameter when resubmitting the task.
If a restore operation aborts, there is no need to delete the partially restored database. Partially
restored data is overwritten when an attempt is made to restart the operation.
Utility locks are not released when an operation is aborted; therefore, explicitly release the
locks on the entity that was being processed when the operation aborted.
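For example, a minimal sketch of an explicit lock release, assuming the aborted job had placed HUT locks on a hypothetical database named PERSONNEL and that the same user submits the release:

LOGON DBCID/USERID,PASSWORD;
RELEASE LOCK (PERSONNEL);
LOGOFF;

If the locks were placed under a different user, the RELEASE LOCK statement also accepts an OVERRIDE option, as shown later for SYSUDTLIB.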
Sample Teradata ARC Scripts
Example 1: Archive Script
LOGON DBCID/USERID,PASSWORD;
ARCHIVE DATA TABLES (DB1) ALL,(DB2),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
This script performs these archive operations:
1. ARCMAIN logs on to the Teradata Database pointed to by DBCID.
2. ARCMAIN places Host Utility (HUT) locks on the databases to be archived before starting the archive operation.
3. Database DB1, all its descendent databases, and DB2 are backed up. The operation is performed database by database in alphabetical order.
4. The result of the backup is written to the file pointed to by FILE=.
   For Linux and Windows XP/Server 2008: If no IOMODULE and IOPARM runtime parameters are specified (as in the above example), data is written to hard disk.
   If IOMODULE and IOPARM are specified, data is written to the tape drive or other media specified and pointed to by IOMODULE and defined by the initialization string to which IOPARM is set.
   For an example, see “Starting ARCMAIN from Linux and Windows XP/Server 2008” on page 27.
5. The RELEASE LOCK option releases the HUT locks after the archive of each database is complete.
6. After all databases are archived, the LOGOFF command disconnects ARCMAIN from the Teradata Database.
Example 2: Restore Script
LOGON DBCID/USERID,PASSWORD;
RESTORE DATA TABLES (DB1) ALL, (DB2),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
This script performs these restore operations:
1. ARCMAIN logs on to the Teradata Database pointed to by DBCID.
2. ARCMAIN places Host Utility (HUT) locks on the databases to be restored before starting the restore operation.
3. Database DB1, all its descendent databases, and DB2 are restored. The operation is performed database by database in alphabetical order.
   Note: For each database, these operations are performed table by table in alphabetical order:
   • Primary data is restored.
   • If applicable, fallback and index data are built from primary data.
4. The RELEASE LOCK option releases the HUT locks after the restore of each database is complete.
5. After all databases are restored, the LOGOFF command disconnects ARCMAIN from the Teradata Database.
Validating Teradata ARC
The sample scripts below can be used to validate Teradata ARC.
Example 1: Archive Script
LOGON DBCID/USERID,PASSWORD;
ARCHIVE DATA TABLES (CRASHDUMPS),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
Note: The user needs to modify DBCID, USERID, and PASSWORD in the archive script
accordingly before executing the script.
The above archive script performs a backup of user/database CRASHDUMPS and terminates
with return code 0 as shown in the following sample output.
Sample Output: Archive
08/06/2014 08:20:32 ARCHIVE DATA TABLES (CRASHDUMPS),
08/06/2014 08:20:32 RELEASE LOCK,
08/06/2014 08:20:32 FILE=ARCHIVE;
08/06/2014 08:20:32 UTILITY EVENT NUMBER - 2580
08/06/2014 08:20:32 LOGGED ON     4 SESSIONS
08/06/2014 08:20:32 ARCHIVING DATABASE "CRASHDUMPS"
08/06/2014 08:20:33 "CRASHDUMPS" - LOCK RELEASED
08/06/2014 08:20:33 DUMP COMPLETED
08/06/2014 08:20:33 STATEMENT COMPLETED
08/06/2014 08:20:33 LOGOFF;
08/06/2014 08:20:33 LOGGED OFF    6 SESSIONS
08/06/2014 08:20:33 STATEMENT COMPLETED
08/06/2014 08:20:33 ARCMAIN TERMINATED WITH SEVERITY 0
Example 2: Restore Script
LOGON DBCID/USERID,PASSWORD;
RESTORE DATA TABLES (CRASHDUMPS),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
Note: The user needs to modify DBCID, USERID, and PASSWORD in the restore script
accordingly before executing the script.
The above restore script performs a restore of user/database CRASHDUMPS and terminates
with return code 0 as shown in the following sample output.
Sample Output: Restore
08/06/2014 08:23:34 RESTORE DATA TABLES (CRASHDUMPS),
08/06/2014 08:23:34 RELEASE LOCK,
08/06/2014 08:23:34 FILE=ARCHIVE;
08/06/2014 08:23:34 UTILITY EVENT NUMBER - 2581
08/06/2014 08:23:35 LOGGED ON     4 SESSIONS
08/06/2014 08:23:35 STARTING TO RESTORE DATABASE "CRASHDUMPS"
08/06/2014 08:23:35 DICTIONARY RESTORE COMPLETED
08/06/2014 08:23:35 "CRASHDUMPS" - LOCK RELEASED
08/06/2014 08:23:35 STATEMENT COMPLETED
08/06/2014 08:23:35 LOGOFF;
08/06/2014 08:23:35 LOGGED OFF    6 SESSIONS
08/06/2014 08:23:35 STATEMENT COMPLETED
08/06/2014 08:23:35 ARCMAIN TERMINATED WITH SEVERITY 0
CHAPTER 2
Archive/Recovery Operations
This chapter contains these topics:
• Database DBC
• Database SYSUDTLIB
• Extended Object Names (EON)
• Teradata Active System Management (TASM)
• Always On - Redrive
• Archiving Objects
• Archiving AMPs Locally
• Restoring Objects
• Recovering Tables and Databases
• Copying Objects
• Using Host Utility Locks
• Setting Up Journal Tables
• Controlling Journal Checkpoint Operations
• Skipping a Join Index
• Skipping a Stat Collection
Database DBC
The DBC database contains critical system tables that define the user databases in the Teradata
Database.
Table 2 lists the tables that are archived for database DBC.
Note: If a DBC user table is not listed in Table 2, it will not be archived for database DBC.
Instead, use SQL to copy the DBC user table into a user-level table in another database for
Teradata ARC to back up. The information cannot be copied back to the DBC user table
during a restore process.
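For example, a hedged sketch of such a copy, where BackupDB and the copied table name are illustrative and <dbc_table> stands for the non-archived DBC table:

-- BackupDB and DBC_TableCopy are hypothetical names; replace <dbc_table> with the DBC table to preserve
CREATE TABLE BackupDB.DBC_TableCopy AS
  (SELECT * FROM DBC.<dbc_table>) WITH DATA;

BackupDB can then be included in a normal ARCHIVE DATA TABLES job.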
Table 2: Tables Archived for Database DBC

Table Name                  Description
AccessRights                Specifies all granted rights.
AccLogRuleTbl               Defines access logging rules generated by executing BEGIN/END LOGGING statements.
Accounts                    Lists all authorized account numbers.
AsgdSecConstraints          Stores information about the name and value pairs of security constraints that have been assigned to users or profiles.
BusinessCalendarException   Stores information about days that are exceptions to the active business calendar pattern.
BusinessCalendarPattern     Stores information about different patterns of business days and non-business days for a given calendar.
CollationTbl                Defines MULTINATIONAL collation.
ConnectRulesTbl             Contains information about CONNECT THROUGH privileges.
ConstraintFunctions         Stores information about the UDFs for security constraint objects.
ConstraintValues            Stores information about the name and value pairs for security constraint objects.
DBase                       Defines each database and user.
DBQLRuleCountTbl            Is reserved for internal use only.
DBQLRuleTbl                 Specifies the rule table for DBQL.
Hosts                       Defines information about user-defined character sets used as defaults for client systems.
LogonRuleTbl                Defines information about logon rules generated by a GRANT LOGON statement.
Maps                        Defines all hash maps.
MapGrants                   Stores information about hash map grants to users and roles.
NetSecPolicyLogRuleTbl      Defines the rules when the network security policy components are to be logged.
Next                        Generates table, stored procedure, and database identifiers (internal table).
OldPasswords                Lists passwords that are no longer in use, including the user to which the password was assigned, the date the password was changed, and encrypted password string.
Owners                      Defines all databases owned by another.
Parents                     Defines all parent/child relationships between databases.
PasswordRestrictions        Restricts a password from being created that contains any word listed in the table.
Profiles                    Defines the available roles and profiles on the Teradata machine.
RCConfiguration             Records the configuration for RCEvent rows.
RCEvent                     Records all archive and recovery activities.
RCMedia                     Records all removable devices used in archive activities.
ReconfigDeleteOrderTbl      Stores information about the order in which tables are processed during the Deletion phase of reconfiguration done using the Reconfig utility.
ReconfigInfoTbl             Stores summary and status information about reconfiguration done using the Reconfig utility.
ReconfigRedistOrderTbl      Stores information about the order in which tables are processed during the Redistribution phase of reconfiguration done using the Reconfig utility.
ReconfigTableStatsTbl       Stores reconfig statistics about each table processed during the redistribution phase and the deletion phase of a reconfig.
RoleGrants                  Defines the available roles and profiles on the Teradata machine.
Roles                       Defines the available roles and profiles on the Teradata machine.
SecConstraints              Stores information about security constraint objects.
SysSecDefaults              Defines the system security defaults of a Teradata Database, for example, the minimum and maximum characters allowed in a password.
Translation                 Defines hexadecimal codes that form translation tables for non-English character sets.
UDTCast                     Contains information on the source and target data types that are involved in the casting operation.
UDTInfo                     Captures the specifics contained within the CREATE TYPE statement.
UDTTransform                Contains the transform group name and the routine identifiers.
Zones                       Defines all secure zones.
ZoneGrants                  Stores information about secure zone access grants to users and roles.
Note: Full access privileges are required for archiving or restoring DBC. If DBC does not have
full privileges, they need to be granted prior to performing the archive. If the source system is
not available, such as restoring from tape, then once DBC has been restored, full access
privileges have to be explicitly granted for each DBC object, view, and macro before running
the post_dbc_restore script and DIP.
If included in an archive operation, database DBC is always archived first, regardless of
alphabetical order. If included in a restore or copy operation, database DBC is always restored
or copied first. Refer to “Restoring the Database DBC” on page 62 for specific instructions on
restoring database DBC.
For a list of individual object types and whether they can be restored or copied under different
conditions, see Appendix B.
Database SYSUDTLIB
A system database, SYSUDTLIB, is created and maintained by the Teradata Database and
contains the definition of all of the User Defined Types (UDTs) and the User Defined Methods
(UDMs) defined in the Teradata Database. Refer to “User-Defined Types” on page 37,
“User-Defined Methods” on page 37, and SQL External Routine Programming for more
information.
Logical Link to DBC
Database SYSUDTLIB is logically linked to the DBC database. Therefore, whenever an
operation involves the DBC database, the SYSUDTLIB database is usually involved also. The
next paragraphs discuss exceptions.
Archiving
When database DBC is archived, Teradata ARC automatically archives database SYSUDTLIB
also. DBC is always archived first and SYSUDTLIB archived second, before any other objects
are archived. Do not archive SYSUDTLIB as a separate object.
Using the EXCLUDE option to exclude DBC on an ARCHIVE statement excludes
SYSUDTLIB also.
Restoring
When database DBC is restored, Teradata ARC automatically restores database SYSUDTLIB
and its UDT definitions. DBC is always restored first and SYSUDTLIB restored second, before
any other objects are restored. Do not restore SYSUDTLIB as a separate object.
Using the EXCLUDE option to exclude DBC on a RESTORE statement excludes SYSUDTLIB
also.
Refer to “Restoring the Database DBC” on page 62 for specific instructions on restoring
database DBC.
Copying
Only specify database DBC or SYSUDTLIB on a COPY statement using the COPY FROM
format. When using the COPY FROM format, DBC and SYSUDTLIB can be copied to
another database but no data can be copied into DBC or SYSUDTLIB. Either DBC or
SYSUDTLIB can be specified as a separate source object, but neither can be specified as a
separate target object.
Deleting
To prepare a system for restoring Database DBC, use the DELETE command to delete all
objects in the system except for permanent journal tables. (To drop permanent journal tables,
use either the MODIFY USER or MODIFY DATABASE statement.)
delete database (DBC) ALL, exclude (DBC);
When the EXCLUDE DBC option is used with the (DBC) ALL option on a DELETE
DATABASE statement, SYSUDTLIB is delinked from DBC. This allows SYSUDTLIB to be
deleted so that DBC can be restored. However, SYSUDTLIB is deleted last, after all other
databases have been deleted. Any objects that have definitions based on the UDTs stored in
SYSUDTLIB are deleted before the UDTs themselves are deleted.
If the EXCLUDE option is used to exclude DBC on a DELETE DATABASE statement, but the
(DBC) ALL option is not used, SYSUDTLIB continues to be linked to DBC. DBC and
SYSUDTLIB are excluded from the delete operation.
Do not specify SYSUDTLIB as an object of the DELETE DATABASE statement.
Release Lock
Any leftover HUT locks on database SYSUDTLIB and its members can be explicitly released
using the Release Lock statement:
release lock (SYSUDTLIB), override;
User-Defined Types
User-Defined Types (UDTs) allow the creation of custom data types that model the structure
and behavior of data in applications. UDTs augment the Teradata Database with data types
that have capabilities not offered by Teradata predefined (built-in) data types. A UDT can only
exist on the Teradata Database server.
Teradata Database supports two types of UDTs: distinct and structured.
• A distinct UDT is based on a single predefined data type such as INTEGER or VARCHAR. It redefines an already-defined SQL data type.
  Example: Define 'Salary' as a Decimal (9,2), but with a function that prevents someone from entering a value of less than 20,000 or more than 2,000,000.
• A structured UDT is a collection of fields called attributes, each of which is defined as a predefined data type or other UDT (which allows nesting).
  Example: If 'Address' is defined as a UDT with subfields for Street, Apartment, City, State, and Zip Code, transform functions must be provided that convert input and output data into and from an 'Address' object. Input and output data are passed as a string with all the fields concatenated, using fixed field lengths or a delimiter such as a comma between them.
User-Defined Methods
Custom functions can also be created that are explicitly connected to UDTs; these are known
as user-defined methods (UDMs).
When creating UDMs, CREATE METHOD must indicate that the file(s) containing the
definitions of the methods are located on the server. Creating UDMs from file(s) located on
the client is not supported.
Copying UDTs and UDMs
When copying UDFs, UDMs, or external stored procedures from one system to another, and
the definition (CREATE or REPLACE statement) includes an EXTERNAL NAME clause that
specifies a library or package, make sure that library or package is on the target system before
copying the UDF, UDM, or external stored procedure. Otherwise an error will occur.
SQL External Routine Programming provides further information about UDTs and UDMs in
general, and about the error described in the prior paragraph.
Extended Object Names (EON)
Beginning with the 14.10 releases of Teradata ARC and Teradata Database, support for object
names has been extended with the Extended Object Name (EON) feature. EON provides
support for 128-character, 512-byte object names. This allows each character to be a full
multi-byte character of up to 4 bytes long, depending on character set. Users can now specify a
full 128-character UNICODE name.
In Teradata ARC and Teradata Database 13.10 releases, support for object names was extended
with the Extended Object Name Parcels (EONP) feature. EONP provided support for
30-character, 120-byte object names where each character could be a full multi-byte character
of up to 4 bytes per character. With EONP, users can specify a full 30-character UNICODE
name, where each character is 4-bytes long.
Prior to 13.10, Teradata ARC and Teradata Database only supported 30-character object
names in a 30-byte field. Depending on the character set, this could result in a maximum
object name that is less than 30 characters long. For example, if using UTF-8 (which supports
up to 3 bytes per character), the maximum number of characters that can fit in the 30-byte
object name field is 10 characters.
Teradata Active System Management (TASM)
Teradata Active System Management (TASM) is an ongoing initiative designed to improve the
overall systems management capabilities of the Teradata Relational Database Management
Software (RDBMS). A component of TASM is the Utility Manager, which is responsible for
managing load utilities such as Archive/Restore.
Teradata ARC 13.10 and above now support the Utility Manager by doing the following:
• Notification of ARC activity
• Changing the number of sessions based on the Utility Manager response
Always On - Redrive
The new Teradata Database feature, Always On, includes a Redrive component. The Redrive
component enables Teradata Databases to store incoming SQL requests from client products
so the Teradata Database can automatically restart them. Teradata Tools and Utilities clients
should not have to handle retries of SQL requests, because Teradata Database automatically
reissues them.
Teradata ARC does not support the Redrive component for the Teradata Database 14.0
implementation of Always On. Teradata ARC reissues SQL request retries the same as it has in
previous versions.
Teradata ARC does not support the Recoverable Network Protocol (RNP) component of
Always On.
Archiving Objects
An archive operation copies database information from the Teradata Database to one or two
client resident files. The archive can be from any of the following:
• All AMPs
• Specified clusters of AMPs
• Specified AMPs
In general, Teradata ARC archives have these characteristics:
• If an ARCHIVE statement specifies more than one database, Teradata ARC archives the databases (except for DBC and SYSUDTLIB) in alphabetical order. Within each database, all related objects are archived in alphabetical order.
• If the statement specifies multiple individual objects, Teradata Archive/Recovery Utility archives the objects in alphabetical order, first according to database and then by object.
• If an archive of database DBC is interrupted for any reason, the entire archive must be performed again. It is not possible to restart an archive operation on database DBC.
• Archives cannot contain both data tables and journal tables. A journal table archive contains only the saved portion of the table.
• Journal table archives do not delete change images from the journal table. To accumulate change images for several activity periods, execute a checkpoint with save after each period. Then archive the table without deleting the journal.
Note: When performing an archive, do not make any DDL modifications to change the definition of any of the objects being archived.
Note: Beginning with TTU 13.00.00, there is support for archiving all objects as individual
objects. For a complete list of objects supported by Teradata ARC, see “Appendix A Database
Objects” on page 271.
Concurrency Control
When not running an Online Archive, the ARCHIVE statement places a HUT read lock on the:
• Database when archiving the database
• Table when archiving the table
If there is an existing SQL read lock or access lock (created by using the LOCKING FOR
ACCESS modifier) on the database, Teradata ARC can obtain its read lock and start the
archive. If there is an existing SQL write lock or an exclusive lock on the database or table,
Teradata ARC is blocked until the blocking lock is released. Once the read or access lock
is obtained, it is maintained on that object until its data is archived, and then the lock is
released.
When running an Online Archive, the ARCHIVE statement places a transaction table read
lock on all of the object(s) to be archived to establish a consistency point so that Online
Logging can be started on all of the involved objects at the same time. After the consistency
point is established and Online Logging is enabled, the read lock is released prior to archiving
any of the table data.
Establishing the consistency points defines the state of the tables to which they will be
restored. While waiting for the table read locks required to set the consistency point, new write
transactions will wait behind those read lock requests. If it is important to reduce impact to
the write transactions, then there are some important considerations for reducing the time to
establish a consistency point. New write transactions can be expected to wait for at least as
long as the longest running write transaction in the system that accesses one of the tables in
the consistency point. There are four options to shorten the wait time on the new write transactions:
• First, you may exclude tables being written to from this archive.
• Second, you may pause the start-up of all or most of the write transactions while the consistency point is being established.
• Third, you may shorten or break up the long-running write transactions so that they finish quicker.
• Fourth, you may break a consistency point up into smaller groups of tables, making multiple smaller consistency points.
Introduction to Teradata contains details on concurrency control and transaction recovery.
All AMPs Online
An all-AMPs archive (or dictionary archive) of a database or table contains the Teradata
Database dictionary rows that are needed to define the entity.
Table 3 alphabetically lists the dictionary rows or tables that are archived by an all-AMPs data
archive (or a dictionary tables archive) of a user database or table.
Table 3: Dictionary Rows and Tables Archived for User Database

Tables and Table Rows   Description
ConstraintNames         Constraints that have names
DBCAssociation          Retrieves information about tables that have been ported using the Dump/Restore facility
Dependency              Stores relationships among a UDT, its dependent routines, User-Defined Casts, User-Defined Transforms, User-Defined Orderings, and any dependency on any other database object
IdCol                   Identity column
Indexes                 Columns that are indexes
IndexNames              Indexes that have names
QueryStatsTbl           Saves information about collected statistics on the queries
ReferencedTbls          Referenced columns (i.e., Parent Key) for Parent tables
ReferencingTbls         Referencing columns (i.e., Foreign Key) for Child tables
StatsTbl                Saves information about collected statistics on the base tables, views, or queries
TableConstraints        Table-level Check constraints
TextTbl                 System table containing overflow DDL text
TriggersTbl             Definition of all triggers in the database
TVFields                All columns in data tables and views
TVM                     All objects in the database
UDFInfo                 Information on user-defined functions
UnresolvedReferences    Unresolved referential constraints
Example
The following example archives all databases when all AMPs are online:
LOGON DBC,DBC;
ARCHIVE DATA TABLES (DBC) ALL,
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
When the archive operation is complete, check the output log to verify that all AMPs
remained online during the archive.
If an AMP goes offline during an archive, the output log reports the processor number that is
offline. If one or more AMPs go offline, see “Offline AMPs” on page 48 for more information.
Archiving Selected Partitions of PPI Tables
An all-AMPs archive on one or more partitions of a table is supported: it is not necessary to
perform a full-table backup and restore. The ability to select partitions from PPI tables is
limited to all-AMP archives. Dictionary, cluster, and journal archives are not supported.
Use partitioning to:
• Archive only a subset of data and avoid archiving data that has already been backed up. (This can minimize the size of the archive and improve performance.)
• Restore data in a table that is partially damaged.
• Copy a limited set of data to a disaster recovery machine or to a test system.
Before using this feature, read “Potential Data Risks When Archiving/Restoring Selected
Partitions” on page 45.
For information about the keywords that are specific to archiving and restoring selected
partitions of PPI tables, see “Using Keywords with ARCHIVE” on page 185 and “Archiving
Selected Partitions of PPI Tables” on page 189.
Description
PPI allows the division of data in a table into separate partitions based on a specific
partitioning scheme. With Teradata ARC, individual partitions can be archived according to
user-defined partitioning expressions. For more information about options for PPI archives/
restores, see Chapter 6: “Archive/Recovery Control Language.”
Considerations
Consider the following when archiving selected partitions in PPI tables:
• A restore operation always deletes the selected partitions of the target table before restoring the rows that are stored in the archive.
• Archiving selected partitions operates on complete partitions within tables, meaning that the selection of a partial partition implies the entire partition.
• PPI and non-PPI tables are permissible in a single command. Both table types can be managed in a single database with the EXCLUDE TABLES option.
• Partitioning is based on one or more columns specified in the table definition.
• Partition elimination restricts a query to operating only in the set of partitions that are required for the query.
• Incremental archives are possible by using a partition expression that is based on date fields, which indicate when a row is inserted or updated.
• An archive or restore of selected partitions only places full-table HUT locks. HUT locks on individual partitions are not supported.
• Re-collect table statistics after a restore of selected partitions (see the sketch after this list). Statistics are part of the table dictionary rows, which are not restored during a partition-level restore.
• If a table has a partitioning expression that is different from the partitioning expression used in the PPI archive, a PPI restore is possible as long as no other significant DDL changes are made to the table.
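As a hedged illustration of the statistics re-collection mentioned in the list above (the table name reuses the SYSDBA.TransactionHistory example used later in this section; adjust to your own objects), a single SQL statement recollects the statistics already defined on the restored table:

-- Recollect all statistics previously defined on the (illustrative) restored table
COLLECT STATISTICS ON SYSDBA.TransactionHistory;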
The archival of selected partitions has limitations. For more information, see “Potential Data
Risks When Archiving/Restoring Selected Partitions” on page 45 and “Considerations Before
Restoring Data” on page 59.
The next example shows a partitioning expression that follows the PARTITION BY keyword.
Data is partitioned for the TransactionHistory table, based on the month when the transaction
occurred:
CREATE TABLE TransactionHistory
  (TransactionID     INTEGER,
   TransactionDate   DATE FORMAT 'yyyy-mm-dd',
   TransactionParam1 INTEGER,
   ...
  )
PRIMARY INDEX (TransactionID)
PARTITION BY RANGE_N
  (TransactionDate BETWEEN DATE '2000-01-01' AND DATE '2004-12-31'
   EACH INTERVAL '1' MONTH
  );
Procedure for Backing Up Selected Partitions
The procedure in this section is a generic example of how to set up archive and restore scripts
for selected partitions. This example is based on the TransactionHistory table previously
described.
Assume that this backup schedule is desired for the TransactionHistory table:
• An incremental backup of the currently active partition will be done nightly.
• At the beginning of each month, the final incremental backup for the previous month will be done; this backup will be saved until the next differential or full-table backup is done.
• Every three months, a differential backup will be done, containing the data for the last three months.
• Every year, a full-table backup will be done.
To back up PPI data in the TransactionHistory table:
1. Perform a full-table archive:
   ARCHIVE DATA TABLES
   (SYSDBA.TransactionHistory),
   RELEASE LOCK,
   FILE=ARCHIVE;
2. Set up incremental archives:
   ARCHIVE DATA TABLES
   (SYSDBA.TransactionHistory)
   ( PARTITIONS WHERE
   (! TransactionDate BETWEEN CURRENT_DATE - 3 AND CURRENT_DATE
   !) ),
   RELEASE LOCK,
   FILE=ARCHIVE;
Note: In this example, 'CURRENT_DATE - 3' archives a partition even after it becomes
non-active, in case the final archive of the partition fails or the value of CURRENT_DATE
changes during the final backup.
3. Set up differential backups:
   ARCHIVE DATA TABLES
   (SYSDBA.TransactionHistory)
   ( PARTITIONS WHERE
   (! TransactionDate BETWEEN DATE '2004-01-01' AND DATE '2004-03-31'
   !) ),
   RELEASE LOCK,
   FILE=ARCHIVE;
4. (Optional) Perform a separate partition backup if updating a partition that is not archived by the incremental backup step (step 2):
   ARCHIVE DATA TABLES
   (SYSDBA.TransactionHistory)
   ( PARTITIONS WHERE
   (! TransactionDate = DATE '2004-03-15'
   !) ),
   RELEASE LOCK,
   FILE=ARCHIVE;
To fully restore the TransactionHistory table:
1. Perform a complete restoration of the full-table archive:
   RESTORE DATA TABLES
   (SYSDBA.TransactionHistory),
   RELEASE LOCK,
   FILE=ARCHIVE;
2. Perform a full restoration of the differential and incremental archives (in order):
   RESTORE DATA TABLES (SYSDBA.TransactionHistory) (ALL PARTITIONS),
   RELEASE LOCK,
   FILE=ARCHIVE;
3. (Optional) If a separate partition backup is performed due to an update of a non-active partition, restore the separate partition backup after (or instead of) the differential or incremental backup that contains the updated partition:
   RESTORE DATA TABLES
   (SYSDBA.TransactionHistory)
   ( PARTITIONS WHERE
   (! TransactionDate = DATE '2004-03-15'
   !) ),
   RELEASE LOCK,
   FILE=ARCHIVE;
Individual partitions can also be restored from a full-table or selected-partition archive. To
restore individual partitions, specify the partitions to be restored in a PARTITIONS WHERE
expression:
RESTORE DATA TABLES
(SYSDBA.TransactionHistory)
( PARTITIONS WHERE
(! TransactionDate BETWEEN DATE '2004-05-01' AND DATE '2004-05-31' !)
),
RELEASE LOCK,
FILE=ARCHIVE;
Potential Data Risks When Archiving/Restoring Selected Partitions
Be careful when archiving partitioned tables: a number of undesirable conditions can occur.
For additional issues that might occur during restore operations, see “Considerations Before
Restoring Data” on page 59.
Note: The following cases generally do not display an error or give any indication that a
problem has occurred. In most instances, the only indication is that data is incorrect or is
missing from a table.
• Use Correct Specifications – The incorrect use of specifications causes the following problems:
  • An incorrect PARTITIONS WHERE specification during backup can result in an incomplete archive or difficulties during a restore operation.
  • An incorrect PARTITIONS WHERE or ALL PARTITIONS specification during restore can result in data lost from a table or the restoration of stale data to a table if the archive being restored contains partial, incomplete, or stale versions of an already existing partition.
• Restrict Updates to Active Partitions – It is not possible to determine which partitions have been modified since the last backup. If changed partitions are not re-archived, the changes are lost when restored.
  For example, if the following scenarios exist for a table:
  • The backup strategy is to back up only the active (latest) partition of the table.
  • A change is made to a non-active partition (to fix an incorrect update).
  then the only way to archive the change is to archive the changed partitions separately.
  The remedy for this situation is to restrict updates to the active partitions only (by using views to control which rows/partitions are updated) or to re-archive all modified partitions.
• Do Not Change Values of Functions or Variables – If a built-in SQL function or variable is used in the PARTITIONS WHERE condition, and the value of the function or variable changes during the job, a different set of partitions might be archived (or restored) for some objects in that single archive.
  For example, if an archive job uses the CURRENT_DATE built-in function to determine which is the active partition, and the backup runs past midnight, the date change causes a different partition to be selected. This means that objects archived after midnight will archive the new (and probably empty) partition.
  The remedy for this situation is to do one of the following:
  • Avoid using a changing function or variable in the PARTITIONS WHERE condition.
  • Run the backup at a time when the value will not change.
  • Modify the PARTITIONS WHERE condition to take the value change into account when selecting partitions. For example, define a range, such as 'BETWEEN CURRENT_DATE - n AND CURRENT_DATE', to archive the active partition even if the date changes.
• Always Specify PARTITIONS WHERE or ALL PARTITIONS – If PARTITIONS WHERE or ALL PARTITIONS is not specified for a RESTORE or COPY operation, the default action is to overwrite the entire table with the archived table definition and data. Essentially, this is the same as a full-table restore.
  For example, if PARTITIONS WHERE is omitted when restoring a single-partition backup, data is dropped from the table and the single partition stored on the archive is restored.
  To solve this problem, always specify PARTITIONS WHERE or ALL PARTITIONS when restoring partitions into an existing table. Otherwise, the existing table will be overwritten.
• Know What Partitions are Being Deleted – In a RESTORE or COPY operation, all partitions that match the PARTITIONS WHERE condition are deleted, even if they are not stored on the archive.
  For example, if an archive is restored that:
  • Contains the data for April 2007
  • Has a PARTITIONS WHERE condition that matches both March and April 2007
  then the data for March and April 2007 is deleted, and only April 2007 is restored.
  Therefore, be careful when using PARTITIONS WHERE. If there is any doubt about which partitions are affected, COPY the selected partition backup to a staging table, and manually copy the desired partition(s) into the target table using INSERT ... SELECT and/or DELETE, as in the sketch after this list.
• Avoid Restoring From a Previous Partitioning Scheme – When changing the partitioning expression for a table, changing the boundaries of existing partitions is feasible. If these partitions are restored, Teradata might drop more data than expected or restore less data than expected, if the archive does not include data for all of the selected partitions.
  For example, if an archive is done on a table partitioned by month with the archive data corresponding to March 2004, and the table is re-partitioned by week, then a PPI restore of the March backup (using ALL PARTITIONS) overwrites the data for all weeks that contain at least one day in March. As a result, the last few days of February and the first few days of April might be deleted and not restored.
  Therefore, avoid restoring partition backups from a previous partitioning scheme to an updated table. Or, use LOG WHERE for the weeks that contain days in both March and February/April, and manually copy the rows into the table.
• Track the Partitions in Each Archive – Manual steps are required to determine which partitions are archived by a given backup job, or to determine which backup job has the latest version of a given partition. ANALYZE displays the Teradata-generated bounding condition that defines the archived partitions. (This differs from a user-entered condition that might only qualify partial partitions.) In this case, inconsistent or old data might be restored to the table if the wrong archive is restored for a partition, or if partition-level archives are restored out-of-order and the archives contain an overlapping set of partitions.
  For example, updated data is lost in the following situation. Assume that a final backup for a March 2007 partition is performed on April 1, 2007. On April 5, a mistake is found in a row dated March 16, so the row is updated, and a new backup of the March partition is done. If, for instance, the table is accidentally deleted a month later, and an attempt is made to restore the April 1 backup instead of the April 5 backup, the updated data is lost.
  To determine the partitions in each archive, keep track of the partition contents of each archived table, retain the output listing associated with a tape, or run ANALYZE jobs on archives.
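As a hedged sketch of the staging-table approach mentioned under "Know What Partitions are Being Deleted" (the Staging database and the date range are illustrative only), after copying the selected-partition backup into Staging.TransactionHistory, only the desired partition is moved into the target table with SQL:

-- Staging.TransactionHistory and the date range are illustrative
INSERT INTO SYSDBA.TransactionHistory
SELECT *
FROM Staging.TransactionHistory
WHERE TransactionDate BETWEEN DATE '2007-04-01' AND DATE '2007-04-30';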
Dictionary Archive
A dictionary-only archive archives only the Teradata Database dictionary rows that are needed
to define a database or table. No table data is archived. When using the multi-stream feature, a
dictionary archive must be run as a single-stream job.
Examples
LOGON USER,PASSWORD;
ARCHIVE DICTIONARY TABLES (USERDB), FILE=DBDICT;
LOGOFF;
LOGON USER,PASSWORD;
ARCHIVE DICTIONARY TABLES (USERDB.USERTB), FILE=TBDICT;
LOGOFF;
Cluster Archive
An alternative to archiving data tables from all AMPs into a single archive is to archive into a
set of archive files called a cluster archive.
To create a cluster archive, archive the data tables by groups of AMP clusters so that the
complete set of archive files contains all the data from all of the AMPs. This saves time by
archiving jobs in parallel.
Cluster Archive vs. Dictionary Archive
A cluster archive does not contain dictionary information (the data stored in the dictionary
tables for the tables in the database). Create a dictionary archive:
• Before running a cluster archive for the first time
• Whenever modifying the structure of tables in the cluster archive
Consequently, if archiving by cluster, archive the dictionary and maintain it separately. Do not
create a cluster archive of journal tables or database DBC.
Cluster archives contain only table data; therefore, perform a dictionary archive for any
databases or tables that are included in the cluster archive:
1. Create the dictionary archive immediately prior to the cluster archives.
2. Then, without releasing Teradata Archive/Recovery Utility locks, follow with a set of cluster archive requests covering all AMPs.
To restore successfully, the table data in the cluster archives must match the dictionary
definition contained in the dictionary archive. A restore operation fails when it tries to restore
cluster archives that do not match the dictionary archive.
Example
This example assumes the system has four clusters, each consisting of two AMPs. Each archive
is of two clusters. The procedure is divided into three jobs. Run Job 1 first, then run Job 2 and
Job 3 in parallel. Finally, run Job 4. This example makes a dictionary archive and two cluster
archives.
Job 1:
LOGON USER,USER;
ARCHIVE DICTIONARY TABLES (USERDB),
FILE = ARCHDICT;
LOGOFF;

Job 2 (run in parallel with Job 3):
LOGON USER, USER;
ARCHIVE DATA TABLES (USERDB),
CLUSTER = 0,1,
FILE = CLUSTER1;
LOGOFF;

Job 3 (run in parallel with Job 2):
LOGON USER, USER;
ARCHIVE DATA TABLES (USERDB),
CLUSTER = 2,3,
FILE = CLUSTER2;
LOGOFF;

Job 4:
LOGON USER, USER;
RELEASE LOCK (USERDB);
LOGOFF;
Offline AMPs
The Teradata Database configuration determines how to archive when AMPs are offline. If a
table has fallback protection or dual after-image journaling, Teradata ARC makes a complete
copy of the data rows, even if an AMP (per cluster) is offline during the archive. For a cluster
archive, only AMPs within specified clusters are considered.
All Teradata Database dictionary rows are defined with fallback. Therefore, Teradata ARC
always performs a complete archive of the necessary dictionary entries, even if AMPs are
offline. This is also true when database DBC is archived.
Consider a situation where:
• AMPs are offline
• A request is made for an all-AMPs or a cluster archive involving one of the following:
  • Nonfallback tables
  • Journal table archive that contains single images
In this situation, a specific AMP archive for the offline processors after the AMPs are back
online is required.
Single after-images (remote or local) are maintained on single AMPs, therefore after-images
for offline AMPs are not included in the archive. If one of these archives is restored with a
rollforward operation, data from some of the online AMPs is not rolled forward.
Example 1
This example archives all databases when one AMP is offline with Teradata Database. In this
case, the offline AMP is not out of service because of a disk failure.
LOGON DBC,DBC;
ARCHIVE DATA TABLES (DBC) ALL,
RELEASE LOCK,
INDEXES,
FILE=ARCHIVE;
LOGOFF;
When the archive operation is complete, check the print file to verify that the offline AMP did
not come back online during the archive. If the offline AMP came back online during the
archive, the print file reports the following with Teradata Database, where n is the number of
the AMP:
AMP n BACK ON-LINE
When the offline AMP returns to online operation, run another archive procedure to archive
rows from the offline AMP, even if the AMP came back online during the archive.
Example 2
This example archives all nonfallback data from an AMP that was offline during a previous
Teradata Database archive operation.
LOGON DBC,DBC;
ARCHIVE NO FALLBACK TABLES (DBC) ALL,
AMP=3,
EXCLUDE (DBC),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
Large Objects (LOBs)
Teradata ARC supports the archive operation for tables that contain large object columns as
long as the database systems are enabled for large object support. Starting with Teradata ARC
14.00, large object columns can be restored to a system that uses a hash function different
from the one used for the archive.
An archive of selected partitions with LOBs is supported, but the restore is not. To restore
selected partitions of LOBs, perform a full-table restore.
Non-Hashed and Partially Loaded Tables
Each table has information about itself stored in a special row called a table header. Table
headers are duplicated on all AMPs.
An archive operation captures only table headers for:
• Nonhashed tables
• Tables with loading in progress by the FastLoad or MultiLoad utilities. When these tables are restored, Teradata ARC restores them empty.
Join and Hash Indexes in Archiving
Beginning with Teradata Tools and Utilities 13.00.00, and subject to the other requirements listed in “Join
and Hash Indexes in Restoring” on page 59, when archiving a user table, all join and hash
indexes defined on that table are automatically archived. Individual join and hash index
definitions can be backed up by specifying the join or hash index name in the list of objects to
archive.
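For example, a hedged sketch of backing up a single join index definition, where SYSDBA.TransHistJI is a hypothetical join index name used only for illustration:

LOGON DBCID/USERID,PASSWORD;
ARCHIVE DATA TABLES (SYSDBA.TransHistJI),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;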
Performance
To improve performance when archiving, Teradata ARC optimizes local sessions when
assigning AMPs. This optimization requires:
•
Teradata ARC 13.00.00 or later
•
Teradata Database 13.00.00 or later
•
That the Teradata ARC user has privileges to log onto a MONITOR session (described in
Application Programming Reference)
The optimization works best if Teradata ARC sessions are connected to as many nodes as
possible. This increases the likelihood of having a local session available on each node for each
AMP. To accomplish this requires the correct configuration of node Communications
Processor (COP) names and addresses. Refer to information on establishing a session in
Teradata Call-Level Interface Version 2 Reference for Workstation-Attached Systems.
Archiving AMPs Locally
Where possible, Teradata ARC attempts to archive AMP data using sessions that are local to
each of the AMPs. This minimizes the amount of data that must be transferred between nodes
for the backup to complete.
When performing all-AMP backups, it is best to have sessions logged on to as many nodes as
possible; this ensures that at least one session is local to each AMP, and increases the chance
that the local session is used to archive the data.
When performing multi-stream backups, it is best to have each stream connect to only a
subset of the nodes in the system. Teradata ARC automatically divides AMPs between the
streams so that each stream receives AMPs that are local to its sessions. This improves the
scalability of the solution, as well as further increases the chance of a local session being used
to archive each AMP.
In order for this feature to work, Teradata ARC must be able to log on a MONITOR session to
determine AMP location information. To grant this privilege, the following SQL can be used:
GRANT MONITOR TO <arcuser>;
where <arcuser> is the user account which ARC is using to perform the backup.
Note: The Local AMP feature is not supported on Mainframe systems.
Archiving DBC with Multi-Stream
When using the multi-stream feature, database DBC can only be archived as a single-stream
job. Only the single-stream option is allowed when specifying the characteristics of the job. An
attempt to specify more than a single stream for an archive causes Teradata ARC to display the
following error message and then terminate the job:
ARC0001:SHDUMP.DMPHASE1: DBC archive with Multi-Stream is not allowed
Due to this restriction, if a multi-stream archive includes database DBC, the archive should be
split into two separate jobs. This maintains the performance improvements when using
multiple parallel streams:
1. Archive database DBC as a single-stream job:
   LOGON DBC,DBC;
   ARCHIVE DATA TABLES (DBC),
   RELEASE LOCK,
   FILE=ARCHIVE;
   LOGOFF;
2. Archive the user data as a multi-stream job:
   LOGON DBC,DBC;
   ARCHIVE DATA TABLES (DBC) ALL,
   EXCLUDE (DBC),
   RELEASE LOCK,
   FILE=ARCHIVE;
   LOGOFF;
If multi-stream is not required (for example, when performance is not an issue), then the
entire archive job can be run as one single-stream archive job:
LOGON DBC,DBC;
ARCHIVE DATA TABLES (DBC) ALL,
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
Note: Multi-stream archives which include database DBC and were created in Teradata
ARC versions prior to 14.00 should be rerun as single-stream archives.
Archiving Online
Online archiving allows concurrent updates to tables during the archive process. Any updates
during the archive are logged so that they can be rolled back to a consistent point during a
restore.
There are different ways to start and stop online archiving. Use the ONLINE keyword in the
ARCHIVE statement or the LOGGING ONLINE ARCHIVE ON and LOGGING ONLINE
ARCHIVE OFF statements.
The ONLINE keyword in the ARCHIVE statement specifies that the objects listed in the
statement are to be archived online. Then, Teradata Archive/Recovery Utility automatically
enables logging before archiving the listed objects and automatically disables logging after the
listed objects are archived. This method can be used for all-AMP archives and non-cluster
multi-stream archives.
The LOGGING ONLINE ARCHIVE ON statement explicitly starts online logging for one or
more objects in the system. Subsequent ARCHIVE statements on these objects (without using
the ONLINE keyword) will perform an online archive, allowing concurrent updates. Use this
method for cluster archives and Dictionary archives, or to separate groups of tables into smaller sets of consistency points and reduce the impact on write transactions.
The LOGGING ONLINE ARCHIVE OFF statement stops online logging on the specified
objects. This statement must be submitted for all objects after the online archive is complete if
the LOGGING ONLINE ARCHIVE ON statement was used, or if an online archive job fails. If
LOGGING ONLINE ARCHIVE OFF is not submitted, logging will continue on the objects
indefinitely.
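For illustration only, a minimal sketch of both methods follows, reusing the SYSDBA database and ARCHIVE file name that appear in other examples in this chapter. The placement of the ONLINE option in the ARCHIVE statement and the object-list form of the LOGGING ONLINE ARCHIVE statements are assumptions here; verify them against the statement syntax chapter before use.
Using the ONLINE keyword (logging is enabled and disabled automatically):
LOGON DBC,DBC;
ARCHIVE DATA TABLES (SYSDBA),
ONLINE,
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
Using explicit LOGGING ONLINE ARCHIVE ON and OFF statements around the archive:
LOGON DBC,DBC;
LOGGING ONLINE ARCHIVE ON FOR (SYSDBA);
ARCHIVE DATA TABLES (SYSDBA),
RELEASE LOCK,
FILE=ARCHIVE;
LOGGING ONLINE ARCHIVE OFF FOR (SYSDBA);
LOGOFF;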
When the Online table is restored or copied, the changes recorded in the Online log are used to roll the table back to the consistency point that was established when online logging was started on that table. The consistency point for an individual table may differ from the consistency point of other tables in the Online Archive job, based on when online logging was enabled on that table.
The NOSYNC option can be specified on the ARCHIVE statement, but only if the ONLINE option is also specified. The NOSYNC option affects how Online Logging is enabled on the involved tables.
The primary purpose of NOSYNC is to provide an option that has minimal impact on
transactions writing to the database. When trying to establish a common consistency point
across a large number of tables to enable logging on all of those tables at the same time, it is
best to pause the write transactions on those tables. This pause in activity on those tables
allows the DBS to establish a common consistency point. This pause is not needed for the
other phases of Online Archive. If this pause is not practical, then NOSYNC is an alternative
way to enable logging on a large number of tables without having to establish a single
common consistency point.
NOSYNC may require some additional CPU and elapsed time because individual Online Logging requests may have to be sent to the Teradata Database and Teradata ARC may spend some time blocked on locks. If a sufficiently large number of tables is involved, it may be desirable to break up the archive into multiple smaller jobs.
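As a sketch only, an online archive that uses NOSYNC to enable logging table by table might look like the following; the placement of the NOSYNC keyword directly after ONLINE is an assumption based on the option descriptions in this section, so verify it against the ARCHIVE statement syntax before use.
LOGON DBC,DBC;
ARCHIVE DATA TABLES (SYSDBA),
ONLINE NOSYNC,
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;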
If any table is still enabled for Online Logging either after the archive is finished or due to a
graceful abnormal termination of Teradata ARC, a warning message appears indicating that
one or more tables are still enabled for online logging. A series of these messages may appear
after archiving an object, at the end of an Archive statement, and when Teradata ARC is
terminated with a FATAL error (severity level 12 or higher).
If one of these messages appears, determine which table(s) are still logging online (either from the job output or by using the query in “Determining Whether Online Archive Logging is in Use” on page 55). Then determine whether those tables should continue to log. If not, use the LOGGING ONLINE ARCHIVE OFF statement to turn off online logging for the tables that should not be logging, as shown in the sketch below.
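For example, if the job output or the DBC.ArchiveLoggingObjsV query reports that table SYSDBA.TransactionHistory is still logging, a cleanup job along the following lines turns logging off. The object-list form shown here is an assumption; verify it against the LOGGING ONLINE ARCHIVE OFF statement syntax.
LOGON DBC,DBC;
LOGGING ONLINE ARCHIVE OFF FOR (SYSDBA.TransactionHistory);
LOGOFF;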
When starting ONLINE ARCHIVE on one or more objects, the following conditions are
handled differently based on the target database version.
Teradata Database versions prior to 14.00
• If Online Logging is requested on a table that is involved in a load operation (by FastLoad or MultiLoad), Teradata ARC displays the following DBS warning message:
*** Warning 9136: Online archive logging cannot be enabled on one or more tables.
Error code 9136 is returned by the DBS for one of the following conditions:
• The table is a MultiLoad or FastLoad target table that was aborted in the apply phase
• The table is an aborted MultiLoad work table
When Teradata ARC receives the 9136 error code, Online Logging is not enabled on the table, and the table is skipped when Teradata ARC attempts to archive it. Although the table is skipped, the table header is still archived but no data is archived. When the table is restored from the archive file, only the table header is restored, resulting in an empty table after the restore or copy operation.
• If Online Logging is requested for a table that is already enabled for logging, the Teradata Database will return an error and Teradata Archive/Recovery Utility will abort. An exception is when the NOSYNC option has been specified. In this case, the error will be ignored and the archive job will continue.
• If the Teradata ARC user does not have the DUMP access right (or permission) on the table to be enabled for ONLINE ARCHIVE, Teradata ARC will display an error message and the Teradata Archive/Recovery Utility will abort. If the NOSYNC option is specified, the error will be ignored and the archive job will continue.
• If the table on which Online Logging is being enabled already has a lock on it, Teradata ARC is blocked on that lock until it is freed and Teradata ARC can obtain it.
NOSYNC Considerations
The NOSYNC option tells Teradata ARC to handle all of the tables in a DB-level online
archive request as if they were TB-level objects. Teradata ARC takes the DB-level request and
breaks it down to send a TB-level online archive request for each table in the database.
By using the NOSYNC option, no read lock or access lock is placed on that database and no
synchronization is done to make sure that all of the tables are enabled with online archive
before access to those tables is allowed. This means that these tables do not have a single
common consistency point. Each table has its own consistency point in the event of a restore.
Each table restores to the state it was in when Online Archive NOSYNC processing activated the Online Archive change log for that particular table.
Teradata Database 14.00 and later
• If Online Logging is requested on a table that is involved in a load operation (by FastLoad or MultiLoad), Teradata ARC handles this condition the same as in earlier releases, except that it now displays the list of tables that are in a FastLoad/MultiLoad state.
• If Online Logging is requested for a table that is already enabled for logging, Teradata ARC handles this condition the same as in earlier releases, except that it now displays the list of tables that are already logging.
• If the Teradata ARC user does not have the DUMP access right (or permission) on the table to be enabled for ONLINE ARCHIVE, Teradata ARC displays the following message and then lists each table with the access right error prior to aborting the job:
The user does not have 'DUMP' access right for [number] object[s]:
[db or tb name]
[db or tb name]
[db or tb name]
If the NOSYNC option is specified, the error will be ignored and the archive job will continue.
• If the table to be enabled for Online Logging already has a lock on it, Teradata ARC is blocked. If the NOSYNC option is not specified, no special action is taken and Teradata ARC remains blocked on that lock until it is freed up and Teradata ARC can obtain it.
NOSYNC Considerations
The NOSYNC option tells Teradata ARC to handle lock conflicts in a special way. Without the
NOSYNC option, Teradata ARC is blocked when a table it attempts to lock is already locked
by a different process. There is no indication that a lock is blocking Teradata ARC; it simply appears that Teradata ARC is hanging.
If NOSYNC is specified, Teradata ARC will not initially block on those tables that are already
locked by a different process. Instead, Teradata ARC will retain control and display the
following message:
"Unable to enable Online Logging on %s tables due to lock conflicts."
If the VB3 runtime parameter is specified, Teradata ARC will follow the above message with a
list of tables that are blocked by an existing lock.
After reporting the lock conflict(s), Teradata ARC will retry each blocked Online Log request,
one table at a time, displaying the following message for each table as it does so:
"Retrying Online Log request for table '%s' - waiting to obtain
lock."
This time, if the lock is still owned by another process, Teradata ARC blocks until the lock is freed and Teradata ARC can obtain it. When Teradata ARC obtains the lock and then enables Online Logging on that table, it displays the following message:
"Retry of Online Log request for table '%s' was successful."
After Teradata ARC successfully enables Online Logging on a blocked table, it continues with
the next table until it has successfully enabled Online Logging on all of the blocked tables. The
Archive process does not start until Teradata ARC has successfully enabled Online Logging on
all of the involved tables.
By using this methodology, Teradata ARC can let the user know which table it is currently
blocked on. This allows the user to decide whether it is desirable to make any changes which
could free up that lock, so that Teradata ARC can continue.
All objects in the initial request to enable Online Logging will have the same consistency point,
except for those table(s) that were blocked by a lock. Each of the blocked table(s) will have a
different consistency point as those tables are retried with a separate Online Logging request,
one at a time.
Migrating Online Tables
Teradata ARC does not allow a table that was archived with the ONLINE ARCHIVE feature
to be copied to a target system that has a different hash function than the source system. The
hash function changed in Teradata Database 13.10, so Teradata ARC displays the ARC1273
error message to indicate this situation when attempting to migrate an online table from a
pre-Teradata Database 13.10 system to a Teradata Database 13.10 or later system.
Note: The ONLINE ARCHIVE feature is supported on Mainframe systems.
Determining Whether Online Archive Logging is in Use
To determine which tables and databases are currently being logged for online archive, run a
query against the ArchiveLoggingObjsV view. An example of a query is:
select CreateTimeStamp,
DatabaseName (VARCHAR(30)),
TVMName (VARCHAR(30))
from DBC.ArchiveLoggingObjsV;
*** Query completed. One row found. 3 columns returned.
*** Total elapsed time was 1 second.
CreateTimeStamp       DatabaseName  TVMName
--------------------  ------------  -------
2007-04-27 16:57:37   jmd           t1
Online Archive Example (using the LOGGING ONLINE ARCHIVE ON statement)
• Online Logging is enabled at 4PM by executing a stand-alone LOGGING ONLINE ARCHIVE ON statement and table TEST has 5 rows:
  1 AAA
  2 BBB
  3 CCC
  4 DDD
  5 EEE
• Online archive starts at 5PM by executing an ARCHIVE DATA TABLES statement. At that time, table TEST has only 4 rows: rows 1, 2, 3, and 4. Row 5 was deleted.
  1 AAA
  2 BBB
  3 CCC
  4 DDD
• During the ONLINE archive: a job modifies row 3 to ‘3 XXX’ and deletes row 2.
• At the end of the ONLINE Archive job, table TEST has 3 rows:
  1 AAA
  3 XXX
  4 DDD
• A restore of table TEST is done from the online backup and the contents of table TEST after the restore should be:
  1 AAA
  2 BBB
  3 CCC
  4 DDD
  5 EEE
Explanation of the Restore Result:
This is the contents of the table when Online Logging was enabled at 4PM by the LOGGING
ONLINE ARCHIVE ON statement. Once logging is enabled, changes will start getting logged
even if the online archive job has not started yet. The changes to table TEST that will be logged
into the Online log should include:
a) Delete row 5 (change made before the online archive job)
b) Modify row 3 to ‘3 XXX’ (change made during the online archive job)
c) Delete row 2 (change made during the online archive job)
These changes in the Online log are archived along with the table. When the table is restored, the Online log is also restored, and every change in the restored Online log is rolled back. The resulting table is in the same state that existed when Online Logging was first enabled on the table at 4PM.
Online Archive Example (using the ONLINE option with the ARCHIVE statement)
• At 4PM, table TEST has 5 rows:
  1 AAA
  2 BBB
  3 CCC
  4 DDD
  5 EEE
• Online archive starts at 5PM by executing an ARCHIVE DATA TABLES statement with the ONLINE option. At that time, table TEST has only 4 rows: rows 1, 2, 3, and 4. Row 5 was deleted.
  1 AAA
  2 BBB
  3 CCC
  4 DDD
• During the ONLINE archive: a job modifies row 3 to ‘3 XXX’ and deletes row 2.
• At the end of the ONLINE Archive job, table TEST has 3 rows:
  1 AAA
  3 XXX
  4 DDD
• A restore of table TEST is done from the online backup and the contents of table TEST after the restore should be:
  1 AAA
  2 BBB
  3 CCC
  4 DDD
Explanation of the restore result:
This is the contents of the table when Online Logging was enabled at 5PM by the ARCHIVE
statement. Once logging is enabled, changes will start getting logged as the online archive job
proceeds. The changes to table TEST that will be logged into the Online log should include:
b) Modify row 3 to ‘3 XXX’ (change made during the online archive job)
c) Delete row 2 (change made during the online archive job)
These changes in the Online log are archived along with the table. When the table is restored, the Online log is also restored, and every change in the restored Online log is rolled back. The resulting table is in the same state that existed when Online Logging was first enabled on the table at 5PM.
Note that the first change to table TEST, a) Delete row 5 (change made before the online archive job), was made before the Online Archive job started and before Online Logging was enabled; therefore, it is not included in the Online log and is not rolled back during the restore.
Restoring Objects
A restore operation transfers database information from archive files on the client to one of
the following:
• Database DBC
• Database SYSUDTLIB
• User database or table
• All AMPs
• Clusters of AMPs
• Specified AMPs
An archive of database DBC contains the definition of all user databases in the Teradata
Database. Therefore, restoring database DBC to the Teradata Database automatically defines
for that system the user databases from a database DBC archive.
Restoring database DBC also restores database SYSUDTLIB automatically. SYSUDTLIB
contains the definition of all UDTs defined in the system.
These types of information are restored:
• An all-AMPs dictionary tables archive of a user database contains the dictionary definitions for each object in the database.
• An all-AMPs data tables archive of a user database contains the dictionary definitions plus any data for each object in the database.
• An all-AMPs restore of a database archive automatically restores all objects from the archive (including the dictionary definitions and any data). Similarly, a restore of a dictionary archive restores the dictionary definitions of all objects in the archive but no data. For a complete list of objects supported by Teradata ARC, see “Appendix A Database Objects” on page 271.
• An all-AMPs restore of selected partitions restores only the partitions specified by the PARTITIONS WHERE condition. Dictionary, cluster, and journal restores are not supported.
• Restores of selected partitions are always to existing tables, and certain changes to the target table definition are allowed. For more information, see “Changes Allowed to a PPI Table” on page 190.
• Secondary indexes are also updated for restores.
Beginning with Teradata Tools and Utilities 13.00.00, Teradata ARC supports restoring all
objects as individual objects. For a complete list of objects supported by Teradata ARC, see
“Appendix A Database Objects” on page 271.
Conditions Needed for a Restore Operation
• Data Dictionary Definitions – Archived tables and databases can only be restored to a Teradata Database if the data dictionary contains a definition of the entity to be restored.
• Locks – Teradata ARC uses a utility (HUT) lock; therefore, a single user cannot initiate concurrent restore tasks. The Teradata ARC lock avoids possible conflicts when releasing locks. To restore concurrently, supply a separate user identifier for each restore task. For restores of selected partitions, Teradata ARC applies a HUT write lock.
• UtilVersion Match – To restore selected partitions, the archive UtilVersion must match the UtilVersion on the target system if they are both non-zero. If UtilVersion is zero on the archive, the Version on the archive must match the UtilVersion on the target system. For more information, see “Archiving Selected Partitions of PPI Tables” on page 41.
Considerations Before Restoring Data
Before performing a restore operation, consider the following items. For a list of potential
issues regarding the archiving of selected partitions, see “Potential Data Risks When
Archiving/Restoring Selected Partitions” on page 45.
Dropped Database, Users, Tables, Views, and Macros
A restore of database DBC drops all new databases or users created since the archive was created. Additionally, all new UDTs are dropped (because database SYSUDTLIB will also be restored with database DBC).
A restore of a user database drops any new objects created since the archive of the database.
For a complete list of objects supported by Teradata ARC, see “Appendix A Database Objects”
on page 271.
Restoring Undefined Tables with COPY
Do not restore a database or table to another system that does not contain an equivalent definition of the entity (for example, the same name and internal identifier). Otherwise, conflicting database and table internal identifiers can occur. To restore data tables that are not already defined in the data dictionary, use the COPY statement.
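As a rough sketch of that approach, a COPY job for a table that is not yet defined on the target system might look like the following; the FROM clause and option ordering are assumptions, so verify them against the COPY statement syntax before use.
LOGON DBC,DBC;
COPY DATA TABLES (SYSDBA.TransactionHistory)
(FROM (SYSDBA.TransactionHistory)),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;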
Insufficient Memory for Large Tables
Teradata ARC uses the same methodology as the Teradata SQL CREATE INDEX function to
rebuild secondary table indexes. If there is insufficient available disk space, it might not be
possible to restore a very large table because of the amount of temporary disk space that is
required to recreate a secondary index.
Join and Hash Indexes in Restoring
Prior to Teradata Tools and Utilities 13.00.00
Teradata Archive/Recovery Utility does not archive or restore join indexes. If you attempt to
restore a database containing a join index, the restore fails with DBS error 5468. If you attempt
to restore an individual table which has a join index defined on it, the restore fails with DBS
error 5467. This happens because the DBS does not allow deleting a database if it has a join
index or a table with a join index defined on it. The DBS also does not allow deleting a table with a join index defined on it.
To restore a database or a table with an associated join index from the archive file, you must
manually drop all the join and hash indexes associated with the databases and tables being
restored from the archive file before running the restore job. After the restore is completed, recreate the join indexes.
To drop and recreate a join index
1. Extract the join index definition by executing a SHOW JOIN INDEX command and saving the join index definition.
2. Drop the join index.
3. Restore the database or the table.
4. Recreate the join index by executing a CREATE JOIN INDEX command with the saved join index definition.
5. Collect the new statistics.
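As an illustration only, a sketch of these steps using a hypothetical join index SYSDBA.OrderJI (defined on a hypothetical SYSDBA.Orders table) follows; the index, table, and column names are placeholders, so substitute the definition returned by SHOW JOIN INDEX on your system.
In a SQL session (Steps 1 and 2):
SHOW JOIN INDEX SYSDBA.OrderJI;
DROP JOIN INDEX SYSDBA.OrderJI;
As a Teradata ARC job (Step 3):
LOGON DBC,DBC;
RESTORE DATA TABLES (SYSDBA),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
Back in the SQL session (Steps 4 and 5):
CREATE JOIN INDEX SYSDBA.OrderJI AS
SELECT custid, orderdate, totalprice
FROM SYSDBA.Orders;
COLLECT STATISTICS ON SYSDBA.OrderJI COLUMN (custid);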
Teradata Tools and Utilities 13.00.00 and later
Beginning with Teradata Tools and Utilities 13.00.00, join and hash indexes can be archived
and restored if these requirements are met:
• The site must use Teradata Tools and Utilities 13.00.00 or later.
• The site must use Teradata Database 13.00.00 or later.
• The user must have SELECT rights on the DBC.TablesV and DBC.JoinIndicesV views.
Join and hash indexes cannot be restored from a pre-13.00.00 archive file because the indexes
were not archived prior to 13.00.00.
When restoring a user table, any join and hash indexes defined in that table are automatically
dropped prior to restoring the table. After the table is restored, the join and hash indexes are
automatically recreated. The indexes are rebuilt from the archived definition if they do not
currently exist on the system. If an archived join or hash index currently exists on the table, the
existing definition is used to rebuild the join or hash index.
Individual join or hash index definitions can be restored by specifying the index name in the
list of objects to restore.
When restoring a user table that has a join or hash index defined on it (or database containing
such a table), the name of the user table or database must be specified in the RESTORE
statement. If ALL FROM ARCHIVE is used to restore the table or database containing the user
table with a join index, the job is aborted with a DBS error (error 5467 for a table and error
5468 for a database). This occurs because Teradata ARC is not given the name of the table or the database being restored and is therefore unable to drop the associated join or hash index before restoring the table or database from the archive file. When using the ALL FROM ARCHIVE option, use the pre-TTU 13.0 procedure to manually drop all of the join and hash indexes associated with the databases and tables being restored from the archive file before running the job. See “To drop and recreate a join index” on page 59.
Restoring Join and Hash Indexes for a Cluster Archive
Join and hash indexes might not be recreated automatically when restoring a cluster archive
because data in the base tables is not restored until all phases of the cluster restore are
complete.
To restore join or hash indexes from a cluster archive
• Restore the dictionary and cluster archives. During the dictionary restore, a warning appears for any join or hash indexes that could not be recreated.
• After all cluster archives have been restored, issue a BUILD statement on any databases or tables involved in the cluster archive.
• After the BUILD statement completes, issue a RESTORE DICTIONARY TABLES statement using the dictionary archive file. Specify only the join and hash index objects that could not be rebuilt.
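A sketch of the last two bullets, assuming the cluster archives of database USERDB have already been restored, the dictionary archive was written to a file named DICTFILE, and a join index named USERDB.OrderJI could not be rebuilt (the file and index names are placeholders):
LOGON USER1,USER1;
BUILD DATA TABLES (USERDB) ALL,
RELEASE LOCK;
RESTORE DICTIONARY TABLES (USERDB.OrderJI),
RELEASE LOCK,
FILE=DICTFILE;
LOGOFF;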
Matching Hash Functions for Large Objects
Teradata ARC supports the restore operation for tables that contain large object columns as
long as the database system is enabled for large object support. Starting with Teradata ARC
14.00, large object columns can be restored to a system that uses a hash function that is
different from the one used for the archive.
Certain Statements Force the Restoration of the Entire Database
A Teradata SQL DROP or RENAME statement causes the definition of an entity to be removed from the dictionary. This same definition cannot be recreated using a Teradata SQL CREATE statement because a create operation is a new logical definition.
As a result, do not restore a dropped:
• Table unless you restore the entire database
• Database unless you restore database DBC first
A Database DBC archive must also contain a definition of the dropped database.
Archive individual tables with care, and periodically archive the dictionary.
If restoring all of a user database or database DBC (that is, all of the Teradata Database) is
required because of a catastrophic event, restore the dictionary information for the database at
the database level before restoring the individual tables. Restoring the dictionary first restores
the table definitions, so that the tables can be successfully restored.
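A sketch of that sequence for one dropped table follows, assuming a dictionary archive of USERDB in a file named DICTFILE and a data archive in a file named ARCHIVE; the database, table, and file names are placeholders.
LOGON DBC,DBC;
RESTORE DICTIONARY TABLES (USERDB),
RELEASE LOCK,
FILE=DICTFILE;
RESTORE DATA TABLES (USERDB.Orders),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;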
Restoring/Copying NoPI Tables
When restoring/copying a No Primary Index (NoPI) table to a destination system that has a
different number of AMPs than the source system, there is a high probability that the data will
be skewed on the destination system after the restore/copy. This is due to the algorithm that is
used by the DBS to distribute the data rows onto the AMPs for a NoPI table.
Restoring the Database DBC
To restore Database DBC, the recommended procedure for Teradata ARC 16.00 and Teradata
Database 16.00 is shown below. For the recommended procedure for prior releases, please see
the Database Migration document for your specific release. For instructions on how to access
the Data Migration documents, see the “Data Migration documents” section of the table on
page 7, under “Additional Information.”
Table 4 provides a summary and description of each step in the Database DBC restore process.
Table 4: Database DBC Restore Steps
Step 1 – sysinit
Reinitialize the system by running sysinit.
Step 2 – If needed, reconfigure the system
If needed, reconfigure the system to the desired configuration.
Step 3 – Run DIPERR and DIPVIEWS
Run the DBS Database Initialization Program (DIP) and execute the DIPERR (initialize the error message tables) and DIPVIEWS (initialize the system views) scripts. Do not run any of the other DIP scripts at this time, including DIPALL. See the Teradata Database Node Software IUMB document for information on running DIP. Please view the specific version of this document that matches the database version that you are restoring to. For instructions on how to access the Data Migration documents, see the “Data Migration documents” section of the table on page 7, under “Additional Information.”
Step 4 – Enable DBC logons
Only user DBC is allowed to restore database DBC. In addition, to avoid possible restart loops that could result in a sysinit, logons should be disabled for all users except user DBC. To do this, enter the following in the DBS supervisor console window:
DISABLE LOGONS;
ENABLE DBC LOGONS;
After database DBC is restored, logons for all users can be re-enabled by entering the following in the DBS supervisor console window:
ENABLE LOGONS;
Step 5 – Drop all user objects (ARC script option)
If user objects are defined in the Teradata Database, drop them before restoring database DBC. (For a complete list of objects supported by Teradata ARC, see “Appendix A Database Objects” on page 271.) Use the following statement to drop all such objects, except for permanent journal tables:
DELETE DATABASE (DBC) ALL, EXCLUDE (DBC);
DELETE DATABASE also deletes all UDTs from the SYSUDTLIB database.
To drop permanent journal tables, use the MODIFY USER or MODIFY DATABASE statement. (See the MODIFY DATABASE/USER descriptions in SQL Data Definition Language.)
Step 6 – Restore database DBC (ARC script)
Restore database DBC by itself by running a script similar to the following:
LOGON DBC,DBC;
DELETE DATABASE (DBC) ALL, EXCLUDE (DBC);
RESTORE DATA TABLES (DBC),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
Run DBC restores as single-stream jobs unless the archive is multi-stream. If multi-stream, use the same number of streams that were used during the archive. Using fewer streams during the restore can cause the job to fail. For more information, see “Restoring with Multi-Stream” on page 78.
Step 7 – Run post_dbc_restore
After restoring database DBC, run the post DBC restore script (post_dbc_restore). The post_dbc_restore script will also run additional DIP scripts, as needed. For more information on the post_dbc_restore script, see “Migration Scripts” below.
Step 8 – Restore user data (ARC script)
If restoring user data, follow the post_dbc_restore script by running a script similar to the following:
LOGON DBC,DBC;
RESTORE DATA TABLES (DBC) ALL,
EXCLUDE (DBC), (TD_SYSFNLIB),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
If using multi-stream, use the same number of streams that were used during the archive. For more information, see “Restoring with Multi-Stream” on page 78.
Step 9 – Run post_data_restore
After restoring the user data, run the post data restore script (post_data_restore). The post_data_restore script also runs additional DIP scripts as needed. For more information on the post_data_restore script, see “Migration Scripts” on page 63.
Interrupted or Failed Restores
If a restore of database DBC or SYSUDTLIB is interrupted for any reason, perform the entire restore again. Do not restart a restore operation on database DBC or SYSUDTLIB. In some cases, if the restore of database DBC or SYSUDTLIB is interrupted, the database and disks might need to be reinitialized with the database software and dictionary tables before rerunning the restore of database DBC or SYSUDTLIB.
Migration Scripts
The following information applies to DBS release 12.0 and above.
Whenever database DBC is restored, run the post_dbc_restore migration script before
continuing with the restore of the Teradata system. If the post_dbc_restore migration script
fails or is interrupted, reinitialize the system and restart the migration from the beginning.
Whenever table data is restored to a system as part of a full system restore, to a system with a
different release, or to a different platform, run the post_data_restore migration script after
the data restore is completed. If the post_data_restore migration script fails or is interrupted, run the data restore and the post_data_restore migration script again.
In most cases, Teradata ARC prints a message that specifies that a specific migration script
needs to be run, and then automatically exits to allow the user to run the script. After the
script runs, the restore can be continued.
For detailed information regarding the post_dbc_restore and post_data_restore migration
scripts, refer to the appropriate Data Migration document for the DBS release on the target
system. To find the related Data Migration documents, see the Data Migration documents
section in the table on page 7, under “Additional Information”. The Data Migration
documents that discuss the post_dbc_restore and post_data_restore migration scripts have
the general title, Teradata Database Node Software IUMB. Please view the specific version of
this document that matches the database version that you are restoring to.
Restoring a User Database or Table
All-AMPs Restore of a Complete Database
An all-AMPs restore of a complete database from an archive of a complete database:
1. Drops all objects in the resident version of the database. Join and hash indexes created prior to Teradata Tools and Utilities 13.00.00 are also dropped. See “Join and Hash Indexes in Restoring” on page 59 for information on join and hash indexes in Teradata Tools and Utilities 13.00.00 and later.
2. Restores all objects in the archive. For a complete list of objects supported by Teradata ARC, see “Appendix A Database Objects” on page 271.
All-AMPs Restore of a Selected Table
An all-AMPs restore of a selected data table from an archive of a complete database restores
only the dictionary rows and data for the requested table. Other objects in the database are not
restored. (For a complete list of objects supported by Teradata ARC, see “Appendix
A Database Objects” on page 271.) Additionally, join and hash indexes created prior to
Teradata Tools and Utilities 13.00.00 are not restored. See “Join and Hash Indexes in
Restoring” on page 59 for information on those indexes in Teradata Tools and Utilities
13.00.00 and later.
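For example, restoring a single table from a database-level archive might look like the following sketch, reusing the SYSDBA.TransactionHistory table and ARCHIVE file name used elsewhere in this chapter:
LOGON DBC,DBC;
RESTORE DATA TABLES (SYSDBA.TransactionHistory),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;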
All-AMPs Restore of a Full Database from a Selected Table Archive
An all-AMPs restore of a full database from an archive that was created for specific data tables
restores only the dictionary rows and data for the tables that were originally archived.
Restoring Selected Partitions
Selected partitions can be directly backed up and restored with a PARTITIONS WHERE
option that restricts the list of rows processed. The PARTITIONS WHERE option operates on
complete table partitions. A RESTORE (or COPY) operation wipes out selected partitions
(specified by the PARTITIONS WHERE option) of an existing target table before recovering
the rows stored on the backup tape. A backup operation on selected partitions is allowed with
the same all-AMP BAR styles as for full tables.
Restoring selected partitions is affected by the maintenance activities that can occur on a table.
For example, a user can add or delete a column, add or delete a secondary index, or change the
partitioning expression.
If a selected partition was archived prior to one of these changes, restoring that partition back to the table might not be possible. Restoring selected partition data back to a table is allowed even if the target table has some characteristics that differ from the source stored on tape. However, not all changes are allowed, and some will prevent the restoration of the selected partition data. For a list of acceptable and non-acceptable differences, see “Changes Allowed to a PPI Table” on page 190 and “Restrictions on Archiving Selected Partitions” on page 189.
Note: Restoring selected partitions might consume significant CPU resources.
For an overview of backing up partitioned data with Teradata ARC, and the keywords for
restoring partitioned data, see “Archiving Selected Partitions of PPI Tables” on page 41.
All-AMPs Restore of Selected Partitions
The following options restore partitioned data:
• PARTITIONS WHERE specifies the conditional expression that determines which rows to restore to the table. Use this option only if the following conditions are true:
  • The object is an individual table rather than a database.
  • The source and target tables have a defined PARTITION BY expression.
  • The restore is an all-AMP restore.
  • The table is excluded from the database object (via EXCLUDE TABLES) if the table belongs to a database object that is specified in the RESTORE script.
• LOG WHERE conditionally inserts (logs) archived rows that fall outside the partitions specified by the PARTITIONS WHERE conditional expression into a Teradata-generated error table. Teradata ARC inserts into the error table any rows that match the LOG WHERE conditional expression and fall outside the partitions specified by the PARTITIONS WHERE conditional expression. (A hedged syntax sketch follows this list.)
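As a sketch only, a selected-partitions restore that also logs out-of-range rows might look like the following; the placement of the LOG WHERE clause alongside PARTITIONS WHERE is an assumption based on the option descriptions above, so verify it against the RESTORE statement syntax before use.
RESTORE DATA TABLES
(SYSDBA.TransactionHistory)
( PARTITIONS WHERE
(! TransactionDate = DATE '2004-03-15' !),
LOG WHERE
(! TransactionDate > DATE '2004-01-01' !) ),
RELEASE LOCK,
FILE=ARCHIVE;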
DB-level Selected Partitions Restore from a DB-level Archive
One of the options for restoring selected partitions is to do a DB-level selected partitions
restore of a PPI table from a DB-level archive:
Example 1
DB-level Archive:
ARCHIVE DATA TABLES
(SYSDBA),
RELEASE LOCK,
FILE=ARCHIVE;
DB-level Selected Partitions Restore from the DB-level Archive:
RESTORE DATA TABLES
(SYSDBA)
(EXCLUDE TABLES (TransactionHistory)),
(SYSDBA.TransactionHistory)
( PARTITIONS WHERE
(! TransactionDate = DATE '2004-03-15' !) ),
RELEASE LOCK,
FILE=ARCHIVE;
This format of the restore statement combines a DB-level restore with a TB-level selected
partitions restore of a PPI table using the same database object in the same restore statement.
In some instances, executing this format of the restore statement may fail with an error similar
to the following:
*** Failure ARC0805:Access Module returned error code 34:
Error BAM1300: 5:The sequence of calls is incorrect.
This error indicates a failure to correctly process the second part of the statement. To process this statement, Teradata ARC needs to process the database object twice to complete the restore. During the first pass, Teradata ARC restores all of the objects in the database object except for the excluded PPI table. Teradata ARC must then make a second pass through the database object to find the PPI table again so that the TB-level Selected Partitions Restore of the PPI table can be processed. To do the second pass, Teradata ARC must reposition back to the beginning of the database object and re-read it. It is when this backward reposition is attempted that the "The sequence of calls is incorrect" error may occur.
If you get this error (or something similar), break the restore job down into two separate restore steps:
1. Execute a DB-level restore statement to restore the DB-level object while excluding the PPI table.
2. Execute a separate TB-level restore statement to do the selected partitions restore of the PPI table.
These two restore statements may be included in the same job or split into two separate jobs.
Example 2
DB-level Archive:
ARCHIVE DATA TABLES
(SYSDBA),
RELEASE LOCK,
FILE=ARCHIVE;
DB-level Restore with Exclude tables from the DB-level Archive:
RESTORE DATA TABLES
(SYSDBA)
(EXCLUDE TABLES (TransactionHistory)),
RELEASE LOCK,
FILE=ARCHIVE;
TB-level Selected Partitions Restore from the DB-level Archive:
RESTORE DATA TABLES
(SYSDBA.TransactionHistory)
( PARTITIONS WHERE
(! TransactionDate = DATE '2004-03-15' !) ),
RELEASE LOCK,
FILE=ARCHIVE;
When you break the restore down into two separate restore statements, each restore statement requires a rewind back to the beginning of the archive file. When the TB-level Selected Partitions Restore statement is executed, you do not need to do a backward reposition because you are already at the beginning of the archive file. You can simply read forward in the archive file to find the PPI table. This avoids the "The sequence of calls is incorrect" error.
Error Table Name and Structure
The restore/copy process for selected partitions creates Teradata-generated error tables. Rows
are stored in error tables for the following reasons:
• A row that is in the range of selected partitions being restored fails the target table integrity checks
• A user does not want to restore a row but it satisfies the LOG WHERE condition
• An error occurs while evaluating these conditions for a row
The restore/copy process for selected partitions creates an error table with the characteristics
listed in Table 5.
Table 5: Error Table Characteristics
Names
Database Name
If a database name is not specified (in the ERRORDB option or as a database qualifier of the error table name in the ERRORTABLES option), the error table resides in the same database as the target table being restored or copied. Otherwise, the error table resides in the specified database.
Error Table Name
If the error table name is not specified (in the ERRORTABLES option), the error table has the name RS_targettablename, truncated on the right as needed to 30 characters; if truncated, a warning message is issued. For example, if restoring table TB1, the default name for the error table is RS_TB1.
If a table has the same name as an existing error table in a database, an error occurs. Before the restore/copy job can be submitted, drop or rename this table, or specify another table or database name. Review the contents of the existing table to determine whether to drop it:
• If it is an error table, review it for logged errors.
• If it is not an error table, consider using a different database or a different name for the error table.
Indexes
Primary Indexes
The primary index of an error table is non-unique and is not partitioned. The primary index columns are identical to the primary index columns of the target table.
Secondary Indexes
An error table does not have any secondary indexes or participate in any referential integrity relationships.
See “Error Tables and Indexes” on page 68 for an example.
Restrictions for Error Tables
Do not share an error table between two or more restore/copy jobs. Each table targeted by a restore/copy job must have its own error table to ensure jobs run correctly.
Do not drop the error table until the restore/copy job finishes. The error table must not exist for a non-restart of a restore/copy job. It must already exist for a restart of a restore/copy job.
Other Characteristics
Columns
All columns have the same data types and attributes as the target table except for column-level integrity constraints and not null attributes.
DBCErrorCode
DBCErrorCode has INTEGER data type, not null. It indicates the Teradata error causing the row to be inserted into the error table.
DBCOldROWID
DBCOldROWID has BYTE(10) data type, not null. It indicates the internal partition number, row hash, and row uniqueness value of the row at the time the backup was created.
Empty Table
If the error table is empty when the restore/copy job for the target table finishes, it is dropped.
No Constraints from Target Table
An error table does not have any of the NOT NULL, table-level, or column-level integrity constraints of the target table. This ensures that rows can be inserted into the error table that might fail the integrity constraints.
Protection Type
An error table has the same FALLBACK or NO FALLBACK protection type as the associated target table.
SET Table
An error table is always a SET table even if the target table is a MULTISET table. Duplicate rows are discarded.
Error Tables and Indexes
As noted in Table 5, error tables do not have secondary indexes, and primary indexes are non-unique. For example, assume that selected partitions of the following target table are being restored:
CREATE TABLE SalesHistory
(storeid INTEGER NOT NULL,
productid INTEGER NOT NULL CHECK(productid > 0),
salesdate DATE FORMAT 'yyyy-mm-dd' NOT NULL,
totalrevenue DECIMAL(13,2),
totalsold INTEGER,
note VARCHAR(256))
UNIQUE PRIMARY INDEX (storeid, productid, salesdate)
PARTITION BY RANGE_N(salesdate BETWEEN
DATE '1997-01-01' AND DATE '2000-12-31'
EACH INTERVAL '7' DAY)
INDEX (productid)
INDEX (storeid);
The restore job creates the following error table by default. Note the pertinent differences from the target table definition: the error table is a SET table, the NOT NULL and CHECK constraints are removed, the DBCErrorCode and DBCOldROWID columns are added, the primary index is non-unique and not partitioned, and the secondary indexes are omitted.
CREATE SET TABLE RS_SalesHistory, FALLBACK
(storeid INTEGER,
productid INTEGER,
salesdate DATE FORMAT 'yyyy-mm-dd',
totalrevenue DECIMAL(13,2),
totalsold INTEGER,
note VARCHAR(256),
DBCErrorCode INTEGER NOT NULL,
DBCOldROWID BYTE(10) NOT NULL
)
PRIMARY INDEX (storeid, productid, salesdate)
Restoring from a Dictionary Archive
If a database from a dictionary archive created by specifying individual tables is restored, only
the dictionary rows for the specified tables are restored. Table 6 alphabetically lists the
dictionary rows that are restored by an all-AMPs data restore or a dictionary tables restore of a
user database.
Table 6: User Database Data Dictionary Rows Restored
  Indexes – Definition of all indexed columns for all objects in the database
  IndexName – Definition of named indexes in the table
  TVM – Definition of all objects in the database
  TVFields – Definition of all columns for all objects in the database
  TriggersTbl – Definition of all triggers in the database
Table 7 lists the dictionary table rows that are restored by an all-AMPs restore of a specific user
data table or journal table.
Table 7: User Table Dictionary Rows Restored
  Indexes – Columns in tables that are indexes
  IndexName – Indexes in the table that have names
  TVM – Table names
  TVFields – Columns for the table
Restoring from a Journal Archive
If an all-AMPs archive of a journal table is restored, the restored permanent journal goes to a
different subtable than the journal currently writing the updates.
A restored journal overlays any change images that were restored to the same AMPs. During
normal processing, the Teradata Database writes after-change images to a different processor
than the one with the original data row. If after-change images from an archive are restored,
the rows are written to the processor that contains the data row to which the change applies.
Restoring With All AMPs Online
It is feasible to restore even if AMPs are added to or deleted from the configuration after the
archive.
Example
This example restores all databases with all AMPs online and assumes that none of the
databases have journal table definitions. Journal tables must be dropped before running this
job.
LOGON DBC,DBC;
DELETE DATABASE (DBC) ALL, EXCLUDE (DBC);
RESTORE DATA TABLES (DBC),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
Run the post DBC restore script (post_dbc_restore).
LOGON DBC,DBC;
RESTORE DATA TABLES (DBC) ALL,
EXCLUDE (DBC), (TD_SYSFNLIB),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
Run the post data restore script (post_data_restore).
The DELETE DATABASE statement is included so that only the tables in database DBC are
present. If the DELETE statement is omitted, the restore cannot finish.
When the operation is complete, check the print file to verify that all AMPs involved in the
restore process remained online. If one or more AMPs went offline, see “Restoring With AMPs
Offline” on page 73 for more information.
Restoring with a Specific-AMP Archive
Before restoring database DBC:
1. Reconfigure the system from a single AMP to the desired configuration.
2. Run the DBS Database Initialization Program (DIP) and execute the DIPERR and DIPVIEWS scripts. Do not run any of the other DIP scripts at this time, including DIPALL.
Example
This example demonstrates how to restore all databases from an all-AMPs archive and a
specific-AMP archive. All AMPs are online for the restore, although AMPs may have been
added to or deleted from the configuration since the archive was taken.
The next example assumes that none of the databases have journal table definitions. Journal
tables must be dropped before running this job.
LOGON DBC,DBC;
DELETE DATABASE (DBC) ALL, EXCLUDE (DBC);
RESTORE DATA TABLES (DBC),
FILE=ARCHIVE;
LOGOFF;
Run the post DBC restore script (post_dbc_restore).
LOGON DBC,DBC;
RESTORE DATA TABLES (DBC) ALL,
EXCLUDE (DBC), (TD_SYSFNLIB),
NO BUILD,
FILE=ARCHIVE;
RESTORE DATA TABLES (DBC) ALL,
EXCLUDE (DBC), (TD_SYSFNLIB),
NO BUILD,
FILE=ARCHIVEX;
BUILD DATA TABLES (DBC) ALL,
EXCLUDE (DBC),
RELEASE LOCK;
LOGOFF;
Run the post data restore script (post_data_restore).
In this example, file ARCHIVE is an all-AMPs archive; ARCHIVEX is a single AMP archive.
When the restore is complete, check the print file to verify that all AMPs remained online
throughout the process. If one or more AMPs went offline, see “Restoring With AMPs
Offline” on page 73 for more information.
To decrease the total time required to complete the restore process, specify the NO BUILD
option for the restore operation, then submit a BUILD statement after the specific-AMP
restore. In addition, unique secondary indexes remain valid on nonfallback tables.
Restoring Using the EXCLUDE Option
Before restoring database DBC:
1. Reconfigure the system from a single AMP to the desired configuration.
2. Run the DBS Database Initialization Program (DIP) and execute the DIPERR and DIPVIEWS scripts. Do not run any of the other DIP scripts at this time, including DIPALL.
3. Use the EXCLUDE option to:
   • Make one database, ADMIN, and its descendants available first; and make the other databases in the system available immediately thereafter.
   • Exclude one large database, U12, from being restored.
Example
In the next example, an entire system is restored from an all-AMPs archive and one specific
AMP archive. File ARCHIVE is an all-AMPs archive; ARCHIVEX is a single AMP archive.
LOGON DBC,DBC;
DELETE DATABASE (DBC) ALL, EXCLUDE (DBC);
RESTORE DATA TABLES (DBC),
FILE=ARCHIVE;
LOGOFF;
Run the post DBC restore script (post_dbc_restore).
LOGON DBC,DBC;
RESTORE DATA TABLES (ADMIN) ALL,
NO BUILD,
FILE=ARCHIVE;
RESTORE DATA TABLES (ADMIN) ALL,
NO BUILD,
FILE=ARCHIVEX;
BUILD DATA TABLES (ADMIN) ALL,
RELEASE LOCK;
RESTORE DATA TABLES (DBC) ALL,
EXCLUDE (ADMIN) ALL, (U12), (DBC), (TD_SYSFNLIB),
NO BUILD,
FILE=ARCHIVE;
RESTORE DATA TABLES (DBC) ALL,
EXCLUDE (ADMIN) ALL, (U12), (DBC), (TD_SYSFNLIB),
NO BUILD,
FILE=ARCHIVEX;
BUILD DATA TABLES (DBC) ALL,
EXCLUDE (ADMIN) ALL, (U12), (DBC),
RELEASE LOCK;
LOGOFF;
Run the post data restore script (post_data_restore).
When the operation is complete, check the output to verify that all AMPs remained online
during the restore.
Restoring Database TD_SYSFNLIB
When doing a full system restore (RESTORE DATA TABLES (DBC) ALL), database
TD_SYSFNLIB must be excluded from the restore. If database TD_SYSFNLIB is not excluded,
the restore job aborts with the following error message:
"*** Failure 9806:Invalid Restore of database TD_SYSFNLIB."
The correct syntax for a full system restore is:
LOGON DBC,DBC;
RESTORE DATA TABLES (DBC) ALL,
EXCLUDE (DBC), (TD_SYSFNLIB),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
Since you cannot restore database TD_SYSFNLIB, you must run the 'dipudt' DIP script after
restoring the rest of your data to recreate and repopulate database TD_SYSFNLIB.
Restoring With AMPs Offline
Restoring a Table with Fallback
If a data table with fallback from an all-AMPs data tables or cluster archive is restored while
one or more AMPs are offline, Teradata ARC generates the information to restore the data on
the offline AMPs when they return to operation. The system recovery process restores the
offline AMPs when they return to online status.
Restoring a Table Without Fallback
If a data table without fallback (or from a specific-AMP archive) is restored while one or more
AMPs that must also be restored are offline, restore the data table to each of the offline AMPs
as soon as the AMPs come back online.
The exclusive utility lock is automatically placed on the data table for the all-AMPs restore and
on each offline processor when the AMPs return to an online status. The lock allows Teradata
Archive/Recovery Utility to finish the restoring process before the table can be accessed.
Unique Secondary Indexes
If a table without fallback is restored with AMPs offline, unique secondary indexes defined for
the table are invalidated. This prevents future updates to the table until the unique secondary
indexes are regenerated. Drop and recreate them or use the BUILD statement.
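For example, after the offline AMPs return and the specific-AMP restores complete, a BUILD similar to the following sketch (the database name is a placeholder) regenerates the unique secondary indexes:
LOGON DBC,DBC;
BUILD DATA TABLES (SYSDBA) ALL,
RELEASE LOCK;
LOGOFF;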
Restoring Change Images
The Teradata Database always restores change images to the AMP that contains the data row,
unless the row is from a single after-image journal. If a journal table contains change images
for data tables with fallback, the database automatically creates the change images necessary
for the offline AMPs when a recovery operation uses the restored journal.
If the journal table contains change images for tables without the fallback option, repeat the
journal table restore for any offline AMPs when they return to online status. Any recovery
activity using the restored journal must be repeated when the AMPs return to online status.
Restoring With One AMP Offline
The restore operation can occur even if AMPs have been added to or deleted from the
configuration since the archive was taken.
To restore with an AMP offline
1. Reconfigure the system from a single AMP to the desired configuration (optional).
2. Run the DBS Database Initialization Program (DIP) and execute the DIPERR and DIPVIEWS scripts. Do not run any of the other DIP scripts at this time, including DIPALL. See the Teradata Database Node Software IUMB document for information on running DIP. Please view the specific version of this document that matches the database version that you are restoring to. For instructions on how to access the Data Migration documents, see the “Data Migration documents” section of the table on page 7, under “Additional Information.”
3. Restore all AMPs by performing the operation described in “Restoring With All AMPs Online” on page 70.
4. When the offline AMP is returned to online operation, perform a specific-AMP restore for that AMP.
Example
This example performs a specific-AMP restore for an AMP that was offline during the all-AMP restore.
LOGON DBC,DBC;
RESTORE NO FALLBACK TABLES (DBC) ALL,
AMP=1,
EXCLUDE (DBC), (TD_SYSFNLIB),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
When the operation is complete, check the print file to verify that all AMPs remained online
during the restore. If one or more AMPs went offline, see “Restoring With AMPs Offline” on
page 73 for more information.
To decrease the total time required to complete the restore process, specify the NO BUILD
option for the restore operation, then submit a BUILD statement after the specific AMP
restore. In addition, unique secondary indexes remain valid on nonfallback tables.
Restoring Cluster Archives
When restoring all AMPs from a cluster archive, restore the dictionary archive first. Restoring the dictionary archive:
• Replaces the dictionary information for all objects being restored in the dictionary tables that reside in database DBC. For a complete list of objects supported by Teradata ARC, see “Appendix A Database Objects” on page 271.
• Deletes any data (excluding journals) that belong to the database or tables being restored.
• Prevents fallback tables that are being restored from accepting Teradata SQL requests.
• Prevents nonfallback tables with indexes that are being restored from accepting Teradata SQL requests.
Remove tables from the restoring state with the BUILD DATA TABLES statement. This
statement also revalidates any existing secondary indexes. The build operation should follow
the restore of all cluster archives that represent the complete archive.
Note: To avoid duplicate rows, do not run multiple copy operations from a single cluster
archive.
Restoring Cluster Archives in Parallel
If restoring cluster archives in parallel, execute multiple jobs at the same time. Each of these
jobs must have a different Teradata Database user identification because Teradata Archive/Recovery Utility does not allow concurrent restore operations by the same user.
Exclusive Locks
Each cluster restore requests exclusive locks within the various clusters. Therefore, the locks
applied during the dictionary restore must be released prior to concurrent cluster restores.
Build Data Tables Operation
BUILD is run automatically for the primary data and all NUSIs. If the NO BUILD option is specified for a cluster restore and a USI exists on one or more objects, a separate BUILD must be run on those objects after the entire cluster restore has completed. After completing the restore of all cluster archives, perform the build data tables operation against the tables that were restored. This generates the fallback copy of fallback tables and any indexes of fallback and nonfallback tables.
Rollforward Operation
If the build operation completes the recovery process and no rollforward operation is
performed, releasing the locks is permissible.
If a rollforward is required, do it after the build operation using the guidelines described in
“Recovering Tables and Databases” on page 80.
To learn about the rollforward operation, see “ROLLFORWARD” on page 255.
Example
This example shows how to restore a cluster. The procedure is divided into four jobs. Run Job 1 first, then run Jobs 2 and 3 in parallel. Finally, run Job 4.
Job 1
LOGON USER1,USER1;
RESTORE DICTIONARY TABLES (USERDB),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
Job 2 (run in parallel with Job 3)
LOGON USER1,USER1;
RESTORE DATA TABLES (USERDB),
CLUSTER=0, 1,
FILE=ARCHIVE;
LOGOFF;
Job 3 (run in parallel with Job 2)
LOGON USER2,USER2;
RESTORE DATA TABLES (USERDB),
CLUSTER=2, 3,
FILE=ARCHIVE;
LOGOFF;
Job 4
LOGON USER1,USER1;
BUILD DATA TABLES (USERDB),
RELEASE LOCK;
LOGOFF;
Restoring After a Reconfiguration
If AMPs have been added or dropped since the archive was created, restore cluster and specific-AMP archives to all AMPs. Reconfiguration redistributes data among AMPs; therefore, rows in the archive do not distribute exactly to the AMPs from which the rows came.
When restoring or copying a No Primary Index (NoPI) table to a destination system that has a
different number of AMPs than the source system, there is a high probability that the data will
be skewed on the destination system after the restore or copy. This is due to the algorithm that
is used by Teradata Database to distribute the data rows onto the AMPs for a NoPI table.
When PPI tables are restored after a change in configuration or platform, it is necessary for the
primary index to be re-validated prior to using the table. For all-AMP and multi-stream
restores, Teradata ARC automatically attempts to perform this revalidation. If a problem
occurs, Teradata ARC displays a warning indicating the cause of the failure and continues the
restore job. After the restore job completes, the user must attempt to rerun the revalidate step
manually for PPI tables by using either of the following SQL statements:
ALTER TABLE <database>.<table> REVALIDATE;
ALTER TABLE <database>.<table> REVALIDATE PRIMARY INDEX;
When Column Partition (CP) tables are restored after a change in configuration or platform,
it is also necessary for the table to be revalidated prior to using it. For all AMP and
multi-stream restores, Teradata ARC automatically attempts to perform this revalidation. If a
problem occurs, Teradata ARC displays a warning indicating the cause of the failure and
continues the restore job. After the restore job completes, the user must attempt to rerun the
revalidate step manually for CP tables by using the following SQL statement:
ALTER TABLE <database>.<table> REVALIDATE;
Teradata ARC does not automatically revalidate the primary index for dictionary or cluster
backups. Additionally, Teradata ARC does not automatically revalidate if the NO BUILD
option is specified on the restore, or if RELEASE LOCK is not specified.
Note: After running the revalidate primary index step, you can no longer execute a
ROLLFORWARD or ROLLBACK operation using journal table rows taken on the source
system before the restore. This restriction only applies to PPI tables and CP tables.
Order of Restores
To restore a cluster archive to a reconfigured Teradata Database, first restore the dictionary
archive and then individually restore each cluster archive to all AMPs. Only one restore
operation is permitted at a time.
NO BUILD Option
When restoring cluster archives to all AMPs, use the NO BUILD option on each cluster restore operation. If the NO BUILD option is omitted, Teradata ARC prints a warning and enables the option automatically. After all the cluster archives have been restored, run a BUILD statement.
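A hedged sketch of one such cluster restore follows (the names match the placeholders in the earlier cluster example, and the position of NO BUILD within the statement is an assumption):
LOGON USER1,USER1;
RESTORE DATA TABLES (USERDB),
CLUSTER=0, 1,
NO BUILD,
FILE=ARCHIVE;
LOGOFF;
After the last cluster archive has been restored, submit a BUILD DATA TABLES (USERDB), RELEASE LOCK; statement, as shown in Job 4 of the earlier example.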
If an all-AMPs archive with nonfallback tables is restored to all AMPs, and the all-AMPs
archive is missing an AMP that has a specific-AMP archive, restore the all-AMPs archive using
the NO BUILD option. In this case, nonfallback tables are not revalidated. Following
completion of the restore of the all-AMPs archive, restore the specific-AMP archive to all
AMPs. Then follow this specific-AMP archive restore with a build operation.
Logical vs. Physical Space
When restoring to a target system that is smaller than the source, the database space might be
reported as a negative value when the operation finishes. This indicates there is no more
logical space, rather than physical space. If this occurs, reduce the perm space for some of the
databases created on the system by doing the following:
1. Reduce the maximum perm space of user databases and rerun RESTORE.
2. If negative space is reported again, reset the Teradata Database, run UPDATESPACE to decrease the maximum perm space of user databases, and rerun RESTORE.
3. Resize one or more users or databases with the DBSIZEFILE=filename option, described in "RESTORE" on page 231.
Restoring with a Larger Number of AMPs
When running a complete restore (including database DBC) on a system with a larger number
of AMPs, Teradata ARC sometimes returns an error that indicates the user database is out of
space. This usually occurs only with databases that contain a large number of stored
procedures or tables with poorly distributed data.
The reason for the error is that the perm space for a database is divided evenly among the
AMPs on a system. When the target system has more AMPs, the perm space available on each
AMP is lower. Tables with poorly distributed data might exceed the perm space allocation for
one or more AMPs. If this error occurs, increase the perm space for the database and restart
the restore operation.
Note: A reconfiguration that defines fewer disks will always be logically smaller than the
original, even if it uses the same number of AMPs. To prevent the restore from resulting in
negative database space, reduce the maximum perm space of user databases before initiating a
restore operation.
Restoring a Specific AMP
Each table has information about itself stored in a special row called a table header. Table
headers are duplicated on all AMPs.
A request for a specific-AMP restore of a table is only allowed on tables without the fallback
option. This type of restore is accepted only if a table header for the table already exists on the
specified AMP. If a header does not exist, Teradata ARC rejects the request.
When restoring a single processor after a disk failure, use the TABLE REBUILD utility to
create the header for the table.
If a header already exists for the table, it must have the same structure as the header on the
archive. If the definition of the table is changed without rerunning the archive, Teradata ARC
rejects the restore request.
Any data definition statement (for example, ALTER TABLE or DROP INDEX) that is executed
for a table also changes the structure (table header) of the table.
If the header structures match, Teradata ARC makes another check for invalid unique
secondary indexes for the table. This check ensures that the specific AMP restore does not
create a mismatch between data rows and secondary index rows.
If Teradata ARC finds any valid unique secondary indexes, it invalidates them automatically
on all AMPs. An all-AMPs restore of such a table invalidates any unique secondary indexes on
the table.
Restoring with Multi-Stream
When archiving/restoring database DBC, the recommendation is to run both archive and
restore as single-stream jobs. This recommendation was not enforced in versions previous to
Teradata ARC 14.00. Beginning with Teradata ARC 14.00, the user can no longer create
multi-stream archives of database DBC. However, multi-stream restores of database DBC still
support restores from older archives that used multiple streams. For more information, see
“Archiving DBC with Multi-Stream” on page 51.
If a multi-stream restore includes database DBC, use the procedure in Table 4 on page 62.
When restoring database DBC, use the same number of streams during the restore that were
used during the archive.
Note: Using multiple streams to restore database DBC can have unpredictable results; using fewer streams than were used during the archive can cause the DBC restore to fail.
When restoring user data, the same number of streams should also be used during the restore
that were used during the archive (single-stream or multi-stream). If the same number of
streams (for example, tape drives) are not available on the target system, then you must break
up your user data restore job into multiple jobs with fewer streams and run Partial-AMPs
restores. For more information, see “Restoring with Partial-AMPs” on page 79.
Whenever database DBC is restored, the post_dbc_restore script must be run to perform
conversion activities against the data in database DBC and SYSUDTLIB. To enforce this
requirement, Teradata ARC terminates any restore job which includes database DBC after
restoring databases DBC and SYSUDTLIB. By terminating the job, Teradata ARC allows the
user to run the post_dbc_restore script before restoring any user data.
Restoring with Partial-AMPs
If restoring from a non-cluster multi-stream archive and the target system has fewer available
streams (data transfer devices) than the source system, then the restore must be broken down
into multiple parts in order to restore all of the data from the archive. This is a “Partial-AMPs
Restore” and is similar to a cluster restore.
For example, you can perform a six-stream multi-stream archive on a source system if that
system has six tape drives available on it. To restore that six-stream archive to a target system
that only has three tape drives available, run two three-stream restore jobs in order to restore
all of the data from the six-stream archive. The first restore job could restore streams #1-#3,
where stream #1 would be the master stream and streams #2 and #3 would be the child
streams. Stream #1 in the first restore job should be Stream #1 from the archive job. The
second restore job would restore streams #4-#6, where stream #4 would 'act' as the master
stream and streams #5 and #6 would be the child streams.
Similar to a cluster restore, when doing a Partial-AMPs restore, you must run with the NO
BUILD option so that Teradata ARC does not attempt to build the restored tables after each
restore job. When all of the restore jobs have run and all of the data has been restored, run a
standalone BUILD job to build the tables involved in the restore. The BUILD job moves the
tables out of the 'restoring' state to allow external applications to access the table data.
When using a Partial-AMP restore to restore database DBC to a target system, the restore is
only supported in 'non-catalog' mode. Therefore, do not specify either the CATALOG or
CATALOGFILE runtime parameters (or their synonyms) when doing a Partial-AMP restore of
database DBC.
When using a Partial-AMP restore to migrate data to a target system that is running with a
different database version than the source system, special care must be taken based on the
hash function on the source and target systems; otherwise, the restore could cause the database to restart:
1. If the hash function between the two Teradata Database versions is the same, the NOSORT runtime parameter must be used on the restore job(s).
2. If the hash function between the two Teradata Database versions is different, Teradata ARC determines whether running the Partial-AMPs restore could cause the Teradata Database to restart. If no, the Partial-AMPs restore is allowed to continue. If yes, Teradata ARC aborts the job with the following error message:
*** Failure ARC1280:Partial AMP RESTORE/COPY not allowed when
re-hashing data.
Note: These restrictions only apply during Partial-AMP Restores and Partial-AMP Copies.
Restoring AMPs Locally
Whenever possible, Teradata ARC attempts to restore/copy AMP data using sessions that are
local to each of the AMPs. This minimizes the amount of data that must be transferred
between nodes for the restore to complete.
The Local AMP feature is supported during restores/copies on all-AMP jobs, cluster jobs, and
multi-stream jobs.
In order for the Local AMP feature to work, Teradata ARC must be able to log on to a
MONITOR session to determine AMP location information. To grant this privilege, enter the
following SQL statement:
GRANT MONITOR TO <arcuser>;
where <arcuser> is the user account which ARC is using to perform the backup.
For additional information regarding the Local AMP feature, see “Archiving AMPs Locally”
on page 50.
Note: The Local AMP feature is not supported on Mainframe systems.
Recovering Tables and Databases
Teradata ARC provides functions to roll back or roll forward databases or tables from a
journal table. To identify the databases and/or data tables to recover, use a Teradata ARC
statement. Multiple databases or data tables must all use the same journal table because a
single recovery operation only uses change images from one journal.
Note: When a PPI table or a Column Partition (CP) table is restored, Teradata ARC
automatically executes the Revalidate step during the BUILD phase. The Revalidate process
increases the Util version of the table. This is considered to be a structural change to the table.
Because of this change, a PPI table or a CP table cannot be recovered using either the RollBack
or RollForward operation after it is restored.
A Teradata ARC statement also identifies whether the restored or current journal subtable is to
be used in the recovery activity.
To recover data tables from more than one archived journal, perform the recovery operation
in three steps:
1. Restore an archived journal.
2. Recover the data tables using the restored journal.
3. Repeat steps 1 and 2 until all the tables are recovered.
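A hedged sketch of one pass through steps 1 and 2 follows (PJ, USERDB, and ARCHIV1 are placeholder names; the exact keyword sequence of the ROLLFORWARD statement is an assumption here, so confirm it against "ROLLFORWARD" on page 255, and release the HUT locks only after the final pass):
LOGON DBC,DBC;
RESTORE JOURNAL TABLES (PJ) ALL,
FILE=ARCHIV1;
ROLLFORWARD DATA TABLES (USERDB) ALL,
USE RESTORED JOURNAL;
LOGOFF;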
Teradata ARC completes the requested recovery action as long as the table structure is the
same as the structure of the change images. If the structures are not the same, Teradata ARC
stops the requested recovery for the data table and returns an error message. When recovering
multiple tables, the stopped recovery of one table does not affect recovery of the other tables.
Teradata Database does not generate transient journal images during a rollback or rollforward
operation. If the rollback or rollforward operation is not completed, the data tables being
recovered are left in an unknown state.
If the rollback or rollforward is not completed because of a:
• Hardware failure, Teradata ARC automatically restarts the recovery operation.
• Client failure, resubmit Teradata ARC with the RESTART option.
If the HUT locks on the data tables are released prior to restarting and completing the
Teradata ARC rollback or rollforward operation, the state of the data is unpredictable.
Teradata Database does not generate permanent journal images during a rollback or
rollforward operation. Consequently, a rollback operation might not be completed properly.
For example, assume:
1. A batch job updates a data table and creates both before- and after-images.
2. The batch job finishes.
3. Later (because of an error in the batch update) the table is rolled back using the before-images.
4. A disk is replaced and the most recent archive is restored. A rollforward is performed to incorporate any changes that have occurred since the archive was created.
If the same journal that was used in the rollback described above is used in the rollforward, it
might be expected that the updates created by the batch job would not be reapplied because
the batch job had been rolled back. Instead, the Teradata Database does not modify a journal
table as the result of a recovery operation and the updates executed by the batch program are
reapplied to the data tables during the rollforward.
Recovering With Offline AMPs
If a recovery is performed while AMPs are offline and the tables being recovered have fallback,
the Teradata Database automatically generates the necessary information to recover the tables
when the offline AMPs come back online. However, if the recovery includes tables without
fallback, or if the recovery is from a cluster archive, restart recovery when the AMPs come back
online.
Be especially careful of performing a rollforward operation on tables without fallback and
with a single (local or remote) journal. If this type of operation uses the current journal as
input with an offline AMP, the AMP that uses the offline AMP as its backup is not rolled
forward. However, if the RELEASE LOCK option is specified on the rollforward operation,
HUT locks are released on all AMPs except those that are down and nonfallback tables that
have single after images and a down backup.
A similar option, the BACKUP NOT DOWN option, is available for the RELEASE LOCK
statement.
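As a hedged sketch (USERDB is a placeholder, and the statement form is an assumption based on the option named above; see the RELEASE LOCK statement reference for the exact syntax), releasing HUT locks everywhere except where a backup AMP is down might look like:
LOGON DBC,DBC;
RELEASE LOCK (USERDB) ALL,
BACKUP NOT DOWN;
LOGOFF;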
Roll forward nonfallback tables with HUT locks remaining (because of a down backup AMP)
when the down AMP is brought back online. Also, roll forward the down AMP when bringing
it back online.
Recovering a Specific AMP
When restoring a nonfallback table with after-image journaling to a specific AMP after a disk
failure, use a ROLLFORWARD statement followed by a BUILD statement of the nonfallback
table.
If the nonfallback table has unique indexes, rollforward time may be improved by using the
PRIMARY DATA option. This option instructs the rollforward process to skip unique
secondary index change images in the journal. These indexes would be invalid from the
specific-AMP restore operation; therefore, the PRIMARY DATA option might save a
significant amount of I/O. Revalidate the indexes following the rollforward with the BUILD
statement.
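A hedged sketch of this sequence follows (USERDB is a placeholder; the exact ROLLFORWARD keyword sequence and the specific-AMP specification are assumptions, so confirm them against "ROLLFORWARD" on page 255):
LOGON DBC,DBC;
ROLLFORWARD DATA TABLES (USERDB) ALL,
USE CURRENT JOURNAL,
PRIMARY DATA,
AMP=1;
BUILD DATA TABLES (USERDB),
RELEASE LOCK;
LOGOFF;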
Copying Objects
Teradata ARC can copy (or restore) an object to a different Teradata Database environment.
Use the COPY statement to:
• Replace an object in a target database
• Create an object in a target database
• Move an archived file to a Teradata Database other than the one from which the archive was made
• Move an archived file to the same Teradata Database from which the archive was made
Note: The ability to copy all objects as individual objects is a feature of TTU 13.00.00 and
later. Triggers cannot be copied. For a complete list of objects supported by Teradata ARC, see
“Appendix A Database Objects” on page 271.
Copy vs. Restore
The difference between copy and restore depends on the kind of operation being performed:
• A restore operation moves data from archived files back to the same Teradata Database from which it was archived, or moves data to a different Teradata Database so long as database DBC is already restored.
• A copy operation moves data from an archived file to any existing Teradata Database and creates a new object if one does not already exist on that target database. (The target object does not have to exist; however, the database must already exist.)
When selected partitions are copied, the table must exist and be a table that was previously
copied as a full-table copy.
Conditions for Using the COPY Statement
A COPY statement allows the creation of objects and giving them different names. In
database-level copy operations, the statement allows copying to a database with a different
name.
To use the COPY statement, these conditions must be met:
• Restore access privileges on the target database or table are required.
• A target database must exist to copy a database or an individual object.
• When copying a single table that does not exist on the target system, you must have CREATE TABLE and RESTORE database access privileges for the target database.
• If the archived database contains a journal table that needs to be copied, the target database must also have a journal table.
Copying Database DBC
Do not use database DBC as the target database name in a COPY statement. Using DBC as the
source database in a COPY FROM statement is allowed, but specifying a target database name
that is different than DBC is required. This allows DBC to be copied from a source system to a
different database on a target system. When used as the source database in a COPY FROM
statement, database DBC is not linked to database SYSUDTLIB. Therefore, only database
DBC is copied.
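For illustration, a hedged sketch of copying an archive of database DBC to a differently named target database (DBCCOPY is a hypothetical database that must already exist on the target system; ARCHIVE is a placeholder file name):
LOGON DBC,DBC;
COPY DATA TABLE (DBCCOPY)
(FROM (DBC)),
FILE = ARCHIVE;
LOGOFF;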
Copying Database SYSUDTLIB
Do not use database SYSUDTLIB as the target database name in a COPY statement. Using
SYSUDTLIB as the source database in a COPY FROM statement is allowed, but specifying a
target database name that is different than SYSUDTLIB is required. This allows SYSUDTLIB
to be copied from a source system to a different database on a target system. When used as the
source database in a COPY FROM statement, database SYSUDTLIB is not linked to database
DBC. Therefore, only database SYSUDTLIB is copied.
Copying Data Table Archives
Always copy data tables before copying any associated journal tables. When a data table is
copied to a new environment, Teradata ARC creates a new table or replaces an existing table
on the target Teradata Database. COPY only replaces existing permanent journal tables; it
cannot create permanent journal tables. For data table archives, note the following:
• If the target database does not have tables with the intended names, the copy operation creates them.
• If the target database has tables with the intended names, they are replaced by the archived table data. The existing table data and table definition are replaced by the data from the archive.
When copying data tables, these operations are allowed:
• Disabling journaling
• Specifying a journal table in a database different from the database that is receiving the table
• Changing a fallback table to a nonfallback table
Copying Selected Partitions
Copying selected partitions in a PPI table works the same way as restoring selected partitions.
For more information, see “Restoring Selected Partitions” on page 64.
HUT Locks in Copy Operations
Copy operations apply exclusive utility (HUT) locks on the object to be copied. Therefore, if a
full database is copied from a complete database archive, Teradata ARC locks the entire
database. Similarly, if a single table is copied from a table-level archive, Teradata ARC locks
that table (but only that table). To copy selected partitions, Teradata ARC applies a HUT write
lock.
When an SQL-UDF is copied, the corresponding DBC.DBASE row is locked with a write hash
lock. This prevents the user from doing any DDL on that database for the duration of the copy
operation.
Copying Large Objects (LOBs)
Teradata ARC supports copying tables that contain large object columns as long as the
database systems are enabled for large object support and the copy is not for selected
partitions. Starting with Teradata ARC 14.00, large object columns can be copied to a system
that uses a hash function that is different from the one used for the archive. To copy an archive
of selected partitions of LOBs, perform a full-table copy.
Copying System Join Indexes (SJIs)
Teradata ARC now supports copying System Join Indexes (SJIs) to a different database. This
feature requires logic in both ARC and the DBS. This logic is integrated in the 15.00 versions
(and higher) of both products so that no additional changes need to be made to use the
feature when those versions are used. Any time that an earlier version of either product is
used, the logic to copy SJIs to a different database must be enabled on both sides. If the logic is
not enabled on both sides, the SJIs will not be copied to their intended target database. Both sets
of changes must be enabled during both the ARCHIVE step and the COPY FROM step.
The ARC changes have also been released into ARC 14.10 (but not to earlier ARC releases) and
must be enabled by specifying the COPYSJI runtime parameter. The DBS changes have also
been released into versions 14.0 and 14.10 and must be enabled by setting the general
DBSControl flag #361 (DumpSJIWithoutDBName) to TRUE.
COPY Examples
The following examples illustrate data and journal table copies to all AMPs. Examples include
copying:
• A table to a different configuration
• A data table to a new database
• A table to a new table or database
• A database to a new database
• Two databases with fallback
• A journal table from two archives
Copying a Table to a Different Configuration
An archived data table named Personnel.Department is copied to a different Teradata
Database in this example:
COPY DATA TABLE (Personnel.Department)
,FILE = ARCHIVE;
The table has the same name in the target database that it had in the archived database, and
belongs to a database named Personnel on the target Teradata Database. In the example, if:
• A database named Personnel does not exist on the target database, Teradata ARC rejects the copy operation.
• A table named Personnel.Department does not exist in the target database, Teradata ARC creates one.
• The data table being restored has permanent journaling on the source system, it has permanent journaling on the target system as well.
• Teradata ARC creates a new table for the copy operation, the target journal table is the default journal for database Personnel. Otherwise, the target journal table is the journal table associated with the table that is being replaced on the target system.
• Teradata ARC creates a new table for the copy operation and the Personnel database does not have a default journal, then Teradata ARC rejects the copy operation.
Copying a Data Table to a New Database
In this example, an archived data table is copied under a new database name to a different
Teradata Database:
COPY DATA TABLE (Personnel.Department)
(FROM (OldPersonnel), NO JOURNAL, NO FALLBACK)
,FILE = ARCHIVE;
In this example:
• The new database name is Personnel; the old database name is OldPersonnel. In both databases, the table name is Department.
• If a database named Personnel does not exist in the target system, Teradata ARC rejects the copy operation.
• If a data table named Department does not exist in the Personnel database on the target system, Teradata ARC creates one with that name.
• The NO JOURNAL option indicates that this table should not have permanent journaling in the target database.
• The NO FALLBACK option indicates that the new table should be nonfallback on the target system, even if it was a fallback table on the archived system.
Copying a Table to a New Table or Database
An archived data table is copied under new database and table names to a different Teradata
Database in this example:
COPY DATA TABLE (Personnel.Department)
(FROM (OldPersonnel.Dept)
,WITH JOURNAL TABLE = PersonnelJnlDB.PermanentJournal
,NO FALLBACK)
,FILE = ARCHIVE;
In the example:
• The new database name is Personnel; the old database name is OldPersonnel. The new data table name is Department; the old data table name is Dept.
• If a database named Personnel does not exist in the target system, Teradata Archive/Recovery Utility rejects the copy operation.
• If a data table named Department does not exist in the Personnel database on the target system, Teradata ARC creates one with that name.
• The WITH JOURNAL option indicates to carry journaling over from the old Teradata Database to the new Teradata Database for this table. In the example, the user copies journal images into the permanent journal table named PersonnelJnlDB.PermanentJournal.
• The NO FALLBACK option indicates that the new table is to be nonfallback on the target system, even if it was a fallback table on the archived system.
• If the table was nonfallback on the archive, then it retains its journaling options on the target system (that is, the journal options do not change).
• If the table was fallback on the archive, then its images are dual on the target system.
Copying a Database to a New Database
In this example, an archived database is copied with a new name to a different Teradata
Database:
COPY DATA TABLE (Personnel)
(FROM (OldPersonnel)
,WITH JOURNAL TABLE = PersonnelJnlDB.PermanentJournal
,NO FALLBACK)
,FILE = ARCHIVE;
In the example:
• The new database name is Personnel; the old database name is OldPersonnel.
• If a database named Personnel does not exist in the target system, Teradata ARC rejects the copy operation.
• The WITH JOURNAL option indicates to carry journaling over from the old Teradata Database to the new Teradata Database for any tables in the copied database that have journaling enabled on the archived system. In the example, the user copies journal images into the permanent journal table named PersonnelJnlDB.PermanentJournal. Although the tables are copied with the specified journal, the target database default does not change.
• The NO FALLBACK option indicates that the new table is to be nonfallback on the target system, even if it was a fallback table on the archived system.
• If the table was nonfallback on the archive, then it retains its journaling options on the target system (that is, the journaling options do not change).
• If the table was fallback on the archive, its images are dual on the target system. Although Teradata ARC copies the tables as nonfallback, the default for the target database does not change. For example, if the database named Personnel is defined with fallback, the database keeps that default after the copy operation is complete. The NO FALLBACK option applies only to the tables within the archive.
Copying Two Databases With Fallback
Here, two databases are copied that have different fallback attributes:
COPY DATA TABLE (Personnel)
(FROM (OldPersonnel), NO FALLBACK)
,(Finance) (NO JOURNAL)
,FILE = ARCHIVE;
In the example:
• The database named Personnel is restored from an archived database named OldPersonnel with all tables defined as nonfallback after the copy operation.
• The database named Finance is restored from an archived database also named Finance with all journaling associated with its data tables disabled.
Copying a Journal Table From Two Archives
This example illustrates how a journal table that has some nonfallback images spread across
two separate archive files (because an AMP was down at the time of an all-AMPs archive
operation) is restored:
COPY JOURNAL TABLE (PersonnelJnlDB.PermanentJournal)
(APPLY TO (Personnel.Employee, Personnel.Department))
,NO BUILD
,FILE = ARCHIVE1;
COPY JOURNAL TABLE (PersonnelJnlDB.PermanentJournal)
(APPLY TO (Personnel.Employee, Personnel.Department))
,FILE = ARCHIVE2;
When restoring to a different configuration, Teradata ARC redistributes journal images
among all the AMPs. Teradata ARC builds the journal only after it completes the all-AMPs
data table archive operation.
To perform this operation:
• Copy the all-AMPs archive and specify the NO BUILD option.
• Copy the specific-AMP archive, but do not specify the NO BUILD option.
This operation is only allowed if the applicable data tables are already copied into the target
system. In the example, the archive of all AMPs (except the AMP that was offline) is called
ARCHIVE1. The archive of the AMP that was offline at the time of the main archive is called
ARCHIVE2.
Teradata ARC copies only checkpoint journal images and images associated with the tables
named Personnel.Employee and Personnel.Department to the target system. Teradata ARC
discards all other journal images.
Using Host Utility Locks
A Host Utility (HUT) lock, also referred to as a utility lock in this book, is the lock that
Teradata ARC places when most Teradata ARC commands are executed. Exceptions are
CHECKPOINT, DELETE DATABASE, and DELETE JOURNAL. When Teradata ARC places a
HUT lock on an object, for example, with an ARCHIVE or RESTORE command, the following
conditions apply:
• HUT locks are associated with the currently logged-on user who entered the statement, not with a job or transaction.
• HUT locks are placed only on the AMPs that are participating in a Teradata ARC operation.
• A HUT lock that is placed for a user on an object at one level never conflicts with another level of lock on the same object for the same user.
The ShowLocks utility, described in Utilities, shows the HUT locks that are currently applied
to a Teradata Database.
Transaction vs. Utility Locks
From a blocking perspective, the behavior of a HUT lock is the same as a transaction lock. For
example, a read lock prevents another job from claiming a write lock or exclusive lock.
Conversely, a write lock prevents a read lock, write lock, or exclusive lock. An exclusive lock
prevents any other type of lock. Conflicting HUT and transaction locks on an object block
each other, just as if both were transaction locks or both were HUT locks.
The two differences between transaction locks and HUT locks are:
• Permanence: A transaction lock exists only for the duration of a transaction; after the transaction completes or is aborted, the lock is released. A HUT lock is more permanent. It is only removed if an explicit RELEASE LOCK command is issued, or if the locking operation (for example, execution of an ARCHIVE or RESTORE command) completes successfully and has the RELEASE LOCK option specified. The lock remains even if the command fails, the ARC job terminates, or if the Teradata system has a restart.
• Scope: A transaction lock has session scope, meaning that it applies to any command submitted by the session running the locking transaction. Another session running against the same object(s) must claim its own locks, even if it is running under the same user.
A HUT lock has user scope, meaning that it applies to any Teradata ARC operation that the user is performing on the object. A user can have only one HUT lock against an object at a time. The HUT lock allows any Teradata ARC operation from that user to access the locked object(s), but blocks other users from acquiring a conflicting HUT lock on the object(s). Additionally, the HUT lock blocks all conflicting transaction lock attempts on the object, even if the transaction locks are from the same user. There is only one HUT lock. Therefore, a RELEASE LOCK operation for that user on an object always releases an existing HUT lock, even if other jobs for that user are still accessing the object.
Notice:
Releasing a utility lock on a database or table that is being accessed by another Teradata
Archive/Recovery Utility job could result in data corruption and unexpected errors from
ARCMAIN or the database.
HUT locks that are not released are automatically reinstated following a restart.
For information on transaction locking and processing, refer to SQL Request and
Transaction Processing.
Teradata ARC Locks During an Archive Operation
A Teradata ARC operation locks objects that are being archived. To see where Teradata ARC
places read utility locks during an archive, refer to Table 8.
Table 8: Read Locks During Archive
• Entire database: Teradata ARC places the read utility lock at the database level before it archives any tables.
• Individual table: Teradata ARC places the read utility lock on each table before it archives that table.
If the RELEASE LOCK option is specified on the ARCHIVE statement, Teradata ARC releases
the read utility lock when it successfully completes the archive of a database or table.
Therefore, only a full database archive creates consistency among tables archived from the
same database.
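A hedged sketch of a full-database archive that requests lock release on success (USERDB and ARCHIVE are placeholder names):
LOGON DBC,DBC;
ARCHIVE DATA TABLES (USERDB),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;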
GROUP READ LOCK
A group read HUT lock can be specified for tables that are defined with an after-image journal
option. If the GROUP READ LOCK option is used to perform an online backup of a database-level object, define after-journaling for every table in the database. This restriction also applies
to excluded tables; if a table is excluded as part of a database-level GROUP READ LOCK
archive, it must still have an after-journal table defined so ARC can accept the GROUP READ
LOCK option.
When a group read HUT lock is specified, the following events occur:
• The AMPs first place an access HUT lock on the entire affected table to prevent users from changing the definition of the table (that is, DDL changes) while the table is being archived.
• The AMPs then place a series of rolling read HUT locks on blocks of (approximately) 64,000 bytes of table data. Each AMP places a read HUT lock on a block of data while the AMP sends the data block across the IFP or PE to the client.
When a block transmission finishes, the AMP releases the read HUT lock on that block and
places another read HUT lock on the next block of 64,000 bytes. The AMPs repeat this process
until they archive the entire table.
Concurrent Transactions: Archives and Updates
When an archive with a group read HUT lock is finished, the archive may contain partial
results of a concurrently executing update transaction. Consider the following example:
1. The archive operation reads row 1 and sends it to the client.
2. Transaction A reads and updates row 3.
3. The archive operation reads row 2 and sends it to the client.
4. Transaction A reads and updates row 1.
5. Transaction A ends, releasing its transaction locks on rows 1 and 3.
6. The archive operation reads row 3 and sends it to the client.
The archive described above contains:
• A copy of row 3 after an update by transaction A, and
• A copy of row 1 before its update by the transaction.
For this reason, define an after-image permanent journal for tables to be archived using a
group read HUT lock. After the archive finishes, archive the corresponding journal. The after-images on the journal archive provide a consistent view of any transaction that executes
concurrently with the archive.
Archiving Using a GROUP READ LOCK
Archiving a table with a group read HUT lock at the same time users are updating it is
allowed.
Tables archived with the group read HUT lock option do not archive with secondary indexes.
However, the secondary indexes can be rebuilt after that table is restored.
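A hedged sketch of an online archive of this kind, followed by the journal archive recommended above (USERDB, PJ, ARCHIVE, and JNLARCH are placeholders; the group read lock option is written here as this manual names it, so confirm the exact keyword form against the ARCHIVE statement syntax):
LOGON DBC,DBC;
ARCHIVE DATA TABLES (USERDB),
GROUP READ LOCK,
RELEASE LOCK,
FILE=ARCHIVE;
CHECKPOINT (PJ) ALL, WITH SAVE;
ARCHIVE JOURNAL TABLES (PJ) ALL,
RELEASE LOCK,
FILE=JNLARCH;
LOGOFF;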
Restoring a GROUP READ LOCK Archive
To restore tables that are archived under a group read HUT lock, define them with an after-image journal. Always follow the restore of an archived data set taken under a group read HUT
lock by performing a rollforward operation, using the after-image journal that was created
during the archive. Rollforward ensures that the data in the tables is consistent. To learn more
about the rollforward operation, see “ROLLFORWARD” on page 255.
An archive taken under a group read HUT lock does not archive secondary indexes defined for
the tables being archived. Therefore, rebuilding the indexes after restoring the archive is
required. If the NO BUILD option is not specified for the restore, Teradata ARC builds
nonunique secondary indexes as part of the restore process. Teradata ARC does not rebuild
unique secondary indexes. To learn more about the NO BUILD option, see “NO BUILD
Option” on page 77.
The Teradata ARC restore process marks all existing unique secondary indexes as invalid for
the table being restored. Rebuild or drop all secondary indexes by doing one of the following:
• Issue a BUILD DATA TABLES statement.
• Drop and recreate the index.
Rebuild unique secondary indexes only after the rollforward of the data tables is complete.
Restoring a GROUP READ LOCK Archive of Selected Partitions
When specifying the GROUP READ LOCK option for an archive of selected partitions, use
the next procedure to perform the restore. Ensure that the structure version of the tables being
restored match the structure version of the tables stored on the backup file.
1. Restore or copy the selected partitions to the target table using the NO BUILD option.
2. Restore or copy journal tables of the journal backup associated with the time period of interest.
3. Roll forward the restored tables using the USE RESTORED JOURNAL and PRIMARY DATA options.
4. Submit a BUILD DATA TABLES statement to complete the build of the restored tables.
During this procedure, an “out-of-range” error condition can occur if the selected partitions
for the restore are inconsistent with the change rows being rolled forward (that is, the journal
contains rows outside of the selected partitions for the restore). ROLL processing continues on
the indicated table and the out-of-range changes are made to the target table.
Notice:
This procedure corrects the structural validity of the table. However, it does not reveal the
nature of the updates applied by the ROLL operation that fall outside of the selected partitions
for the restore. There might be significant partitions that were missed in the restore and that
were only partially updated by the out-of-range changes. Audit the entire recovery process to
ensure that no data is missing. If missing data is suspected, repeat the above procedure for the
affected partitions.
Teradata ARC Locks During a Restore Operation
A restore operation uses an exclusive HUT lock when data tables are restored.
• To restore an entire database, Teradata ARC places the exclusive lock at the database level.
• To restore selected tables, Teradata ARC places the exclusive lock on each table before it starts to restore the table.
• To restore (or copy) selected partitions of PPI tables, Teradata ARC uses a write lock.
• To restore selected partitions of PPI tables with GROUP READ LOCK, the entire table locks rather than a part of the table.
Restoring Tables
If the RELEASE LOCK keyword in the RESTORE statement is specified and that statement
specifies that selected tables are to be restored, Teradata ARC releases the exclusive HUT lock
on each table as soon as it finishes restoring it.
Restoring a Journal Table
When a journal table is restored, Teradata ARC places a write HUT lock on the table. The
restored journal table and the journal table that is recording changed rows are different
physical tables. Consequently, the write HUT lock exerted by a journal table restore does not
prevent change images from recording to the journal table. The write HUT lock only prevents
concurrent restore operations to the same journal.
Locks Associated with Other Operations
Teradata also uses the following locks.
Build Operation Exclusive Utility Lock
A build operation places an exclusive utility lock on databases where all tables are being built.
Locks are applied only to the tables that are being built.
Checkpoint Operation Transaction Locks
The CHECKPOINT statement places transaction locks rather than utility (HUT) locks.
Transaction locks are automatically released when the checkpoint operation terminates. A
checkpoint operation places:
• A write lock on the journal table for which checkpoints are being set.
• Read locks on the data tables that may contribute change images to the journal table for which checkpoints are being set.
The operation places these locks and then writes the checkpoint. After the operation writes
the checkpoint, Teradata ARC releases the locks.
Note: If a journal is marked for later archiving using CHECKPOINT with the SAVE option,
the AMP places an access lock on the data tables instead of a read lock.
Rollback/Rollforward Operations Exclusive and Read Utility Locks
Rollback and rollforward operations place these utility (HUT) locks:
• Exclusive locks on the tables to be recovered.
• A read utility lock on the journal table that is input to the recovery operation.
A journal table with a read utility lock can still log change images. The read utility lock on the
journal table does prevent concurrent Teradata ARC statements, such as DELETE JOURNAL.
Delete Journal Operations Write Lock
DELETE JOURNAL places transaction locks, not utility (HUT) locks. When the operation
finishes, Teradata Archive/Recovery Utility automatically releases all transaction locks.
DELETE JOURNAL places a write lock on the journal table to be deleted. DELETE JOURNAL
deletes only the saved journal; therefore, the active portion of the journal can still log change
images during the delete operation. Other archive or recovery operations are not permitted.
Monitoring for Deadlock Conditions
When attempting to obtain one or more locks, Teradata ARC may be competing with another external process for those locks. When such a situation occurs, it is possible for both processes to get into a 'deadlock' situation where each process obtains one or more of the locks that it needs but is blocked because the other process owns the remaining lock(s) that it needs. If neither process is willing to give up the locks that it owns, both processes wait forever to get the locks they need from the other process. The only way to clear up such a 'deadlock' condition is to abort one of the processes to free up the locks that it owns, so that the remaining process can obtain the remaining lock(s) that it needs to start running. The main issue is how to determine whether a Teradata ARC job is in a 'deadlock' condition and which other processes are involved, so that the user can decide which process should be aborted and which should continue.
Starting with TTU 15.10, the Database Query Log (DBQL) utility provides a method for using
the LOCK LOGGER utility to monitor jobs for deadlock conditions. If the Teradata ARC user
is interested in setting up his system to monitor Teradata ARC jobs for deadlock conditions,
please see Chapter 16 of the Database Administration manual, release 15.10 or later, for details
on how to set that up (also search for 'Lock Log' in other parts of the manual). See “Additional
Information” on page 5 for a reference to the Database Administration manual.
Setting Up Journal Tables
Select the following journal options either at the database level or the table level. Specify single
image or dual image journals.
The terms in Table 9 are used to describe journaling.
Note: Teradata Archive/Recovery Utility supports the generation of both before-image and
after-image permanent journals, both of which can be either local or remote.
Note: Starting with TTU 15.10.00.00, the use of Permanent Journal tables is deprecated by the
Teradata Archive/Recovery Utility. This includes all statements and options that are related to
Journal tables that are mentioned in this Manual.
Table 9: Journal Options
• After image: The image of the row after changes have been made to it.
• Before image: The image of the row before changes have been made to it.
• Dual image: Two copies of the journal that are written to two different AMPs.
• Primary AMP: The AMP on which the primary copy of a row resides.
• Fallback AMP: The AMP on which the fallback copy of a row resides. The Teradata Database distributes duplicate data rows to fallback processors by assigning the hash code of the row to a different AMP in the same cluster.
• Backup AMP: The AMP on which copies of journal rows for nonfallback tables reside, including journals with single after-images, one copy of the dual after-images, and one copy of the dual before-images. Backup AMPs differ from fallback AMPs because journal images are not distributed to backup AMPs through a hashing algorithm. Instead, all images for one AMP go to a single backup. The backup for each AMP is always in the same cluster. For example, if AMPs A, B, and C are in the same cluster, A backs up B, B backs up C, and C backs up A.
Location of Change Data
Teradata ARC places change images in whatever journal table is defined. The table can be in
the same database as the data tables or it can be in another database.
Each database can contain only one journal table, but any journal table and the data tables that
use it can reside in the same or in different databases. Set up journaling to have a single journal
table for all data tables in the Teradata Database, a separate journal table for each table within
a database, or a combination of these two.
If a data table does not have fallback protection, Teradata ARC always writes its after images to
another AMP (backup AMP) in the same cluster as the one containing the data being
changed.
Table 10 shows where Teradata ARC writes journal change rows based on all the possible
combinations of table protection and the specified journal option.
Table 10: Journal Change Row Location
Journal Option    Fallback    Location of Change Data
After             Yes         Primary and Fallback AMPs
Before            Yes         Primary and Fallback AMPs
Dual After        Yes         Primary and Fallback AMPs
Dual Before       Yes         Primary and Fallback AMPs
After             No          Backup AMP
Local After       No          Primary AMP
Before            No          Primary AMP
Dual After        No          Primary and Backup AMP
Dual Before       No          Primary and Backup AMP
If the dual option is specified, Teradata ARC writes an after-image on the primary and backup
AMP. For a fallback table with a single after-image journal, Teradata ARC writes a journal row
to the primary and fallback AMP. Teradata ARC also writes single before-images for
nonfallback tables to the same processor as the data row being changed.
Subtables
Teradata Database maintains journals in tables similar to data tables. Each journal table
consists of active, saved and restored subtables. The active and saved subtables are the current
journal. Teradata Archive/Recovery Utility appends change images from updates to a data
table to the active subtable. Teradata ARC also writes rows marking a checkpoint on the
journal to the active subtable.
Before archiving a journal table, use the CHECKPOINT statement with the SAVE option. The
checkpoint operation logically terminates the active portion of the journal and appends it to
the current saved portion. Archive this saved portion to the client or delete it from the
Teradata Database.
When an archived journal is restored to the Teradata Database, Teradata ARC places change
images in the restored subtable. For more efficient recovery operations, Teradata ARC always
places the change images on the processor they apply to, instead of placing them on a backup
processor. To delete a restored journal subtable, use the DELETE JOURNAL table statement.
Roll operations can use either the current journal or the restored journal. When the current
journal is specified, Teradata ARC automatically uses both the active and saved subtables of
the journal table.
Local Journaling
Teradata ARC allows specifying whether single after-image journal rows for nonfallback data
tables are written on the same AMP as the changed data rows (called local single after-image
journaling) or written to another AMP in the cluster (called remote single after-image
journal).
Use the local single after-image journal only with nonfallback tables. It is specified with the
JOURNAL option in these SQL statements (a hedged example follows the list):
• CREATE DATABASE
• CREATE USER
• MODIFY DATABASE
• MODIFY USER
• CREATE TABLE
• ALTER TABLE
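The following is a hedged sketch in Teradata SQL (the database, table, column, and journal names are hypothetical, and the exact ordering of the table-level options should be confirmed against the SQL reference); it defines a nonfallback table that writes local single after-images to an existing permanent journal:
CREATE TABLE SalesDB.Orders,
NO FALLBACK,
WITH JOURNAL TABLE = SalesJnlDB.SalesJournal,
LOCAL AFTER JOURNAL,
NO BEFORE JOURNAL
(OrderId INTEGER NOT NULL,
OrderAmt DECIMAL(12,2))
UNIQUE PRIMARY INDEX (OrderId);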
The system dictionary table/view columns describing journaling types are modified to be able
to describe LOCAL single after-image journaling.
Teradata ARC rules for local single after-image journal and single before-image journal are the
same except that a single local after-image journal is used with ROLLFORWARD only and
single before-image journal is used with ROLLBACK only.
Local single after-image journaling reduces AMP path length. Without file system fault
isolation support, the recoverability of user data tables against certain types of software errors
with local single after-image journaling is less efficient than recovering with remote single
after-image journaling.
Archiving Journal Tables
Prior to archiving journal tables, use the CHECKPOINT statement to bookmark each journal
table to prepare for archival. For more information, see “CHECKPOINT” on page 196.
In this example, journal tables for database PJ are archived to a file with a filename of
ARCHIV1. The assumption is that database PJ has one or more journal tables.
LOGON TDPID/DBC,DBC;
CHECKPOINT (PJ) ALL, WITH SAVE;
ARCHIVE JOURNAL TABLES (PJ) ALL,
RELEASE LOCK,
FILE=ARCHIV1;
LOGOFF;
In this example, all of the journal tables in the Teradata Database are archived to a file named
ARCHIV2. The assumption is that each journal table in the system has already been
bookmarked with the CHECKPOINT command to prepare for archival.
LOGON TDPID/DBC,DBC;
ARCHIVE JOURNAL TABLES (DBC) ALL,
EXCLUDE (DBC),
NONEMPTY DATABASES,
RELEASE LOCK,
FILE=ARCHIV2;
LOGOFF;
You can include a single CHECKPOINT statement to bookmark all journal tables in the
system. If there are any databases without a PERMANENT journal, the Teradata Database will
automatically flag those databases. Arcmain will display a non-fatal 3566 error (Database
does not have a PERMANENT journal) for each database in response to the
CHECKPOINT request.
LOGON TDPID/DBC,DBC;
CHECKPOINT (DBC) ALL, WITH SAVE;
ARCHIVE JOURNAL TABLES (DBC) ALL,
EXCLUDE (DBC),
NONEMPTY DATABASES,
RELEASE LOCK,
FILE=ARCHIV2;
LOGOFF;
Journal Impact on Recovery
If a logical disk is damaged, data tables with local single after-image journaling might not be
recovered beyond the last archived data. On the other hand, data tables with remote single
after-image journaling might be recovered to the most recent transaction.
While the risks of cylinder corruption are minimal, a cylinder containing a user table and
master journal, or a master index, can become corrupted. If this occurs, local journaling does
not offer as much protection as remote journaling.
Controlling Journal Checkpoint Operations
A checkpoint places a marker at the chronological end of the active journal subtable. When
the checkpoint is taken, the Teradata Database assigns an event number to it and returns that
number as a response to the CHECKPOINT statement. A name can be supplied with
CHECKPOINT and used to reference the checkpoint in future operations.
During a checkpoint with a save operation, the Teradata Database logically appends the active
subtable of the journal to the end of the saved subtable, then it automatically initiates a new
active subtable.
Archiving or deleting only the saved subtable of a journal is possible. An ARCHIVE
JOURNAL TABLE statement copies the saved subtable to the client. Use a DELETE JOURNAL
table statement to physically purge the saved subtable.
Checkpoint Names
Checkpoint names must be unique within a journal subtable to avoid ambiguity. If checkpoint
names are not unique, the Teradata Database uses one or both of the following:
• The last instance of a named checkpoint found in rollback operations
• The first instance of a named checkpoint found in rollforward operations.
Qualify a named checkpoint by the event number assigned to it by the Teradata Database.
Submitting a CHECKPOINT Statement
CHECKPOINT can be submitted as a Teradata ARC statement or as a Teradata SQL statement
(such as, through BTEQ or a user program). If an SQL CHECKPOINT statement is specified,
the active journal is not saved.
If CHECKPOINT is used with the SAVE option, the Teradata Database creates the saved
subtable. Enter this option only through the Teradata ARC form of the CHECKPOINT
statement.
Checkpoint and Locks
The checkpoint with a save operation can be taken with either a read lock or an access lock. If
a read lock is used, the Teradata Database suspends update activity for all data tables that
might write change images to the journal table for which checkpoints are being set. This
action provides a clean point on the journal.
When a save operation is performed using an access lock, the Teradata Database takes all
transactions that have written change images to the journal (which have not been committed)
and treats them as though they started after the checkpoint was written.
A checkpoint with a save operation under an access lock is useful only for coordinating
rollforward activities first from the restored journal and then from the current journal. (This
is because you do not know how the Teradata Database treats particular transactions.)
Completing a Checkpoint With Offline AMPs
Issuing a checkpoint statement while AMPs are offline is permissible. If AMPs are offline when
a checkpoint is issued, Teradata ARC automatically generates a system log entry that takes the
checkpoint on the offline AMPs as soon as they return to online status. The system startup
process generates the checkpoint and requires no further action.
In addition to generating the checkpoint entry in the journal, system recovery updates the
journal table with change images that were generated while the AMP was offline. These
change images are for fallback tables or dual image journals that should be on the AMP that
was offline when the system took the checkpoint.
Skipping a Join Index
The SKIP JOIN INDEX option has been added to the archive/restore/copy/build statements.
The purpose of the SKIP JOIN INDEX option is to skip the Join Index handling in those
respective statements. For archive, this removes the logic to read and archive the DDL for
recreating the Join/Hash indexes. For restore/copy and build, this removes the logic to drop
any existing Join/Hash indexes on the tables to be restored/copied or built on the target system
before doing the restore/copy or build.
When SKIP JOIN INDEX is specified, the user must revert to manual handling of Join/Hash
indexes that existed prior to Teradata Tools and Utilities 13.00.00. Particularly for restore/copy
and build, the user must manually drop any existing Join/Hash indexes on the target system
and manually recreate them after restoring, copying, or building the specified tables (see “To
drop and recreate a join index” on page 59).
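For example, a minimal sketch of the manual sequence follows; the join index name and its
definition are placeholders and must be replaced with the DDL captured from the source system
before the tables were archived.
DROP JOIN INDEX arctest01_ji.orders_ji;
/* restore, copy, or build the tables, then recreate the index */
CREATE JOIN INDEX arctest01_ji.orders_ji AS
  SELECT o_custkey, o_orderdate, o_totalprice
  FROM arctest01_ji.orders;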
If the user does not drop the Join/Hash index when it is necessary, the Teradata Database will
return the following error message when trying to restore the specified tables:
5468: Cannot delete database because either this database has a join
or hash index in it or one of its tables has a join or hash index
defined on it.
The following table describes use cases where the SKIP JOIN INDEX option would be
designated within the archive, restore/copy, and build scripts. Additionally, the table indicates
if the use case requires the user to manually drop the Join/Hash index before a restore/copy or
build operation.
Table 11: Skip Join Index Use Case Scenarios

            Skip Join Index Settings
Use Case    Archive    Restore    Build    Drops Join Index    Likely Use Case?
Case 1      0 (a)      0          0        Teradata ARC        yes
Case 2      0          0          1        User                no
Case 3      0          1 (b)      0        User                no
Case 4      0          1          1        User                no
Case 5      1          0          0        Teradata ARC        yes
Case 6      1          0          1        User                no
Case 7      1          1          0        User                no
Case 8      1          1          1        User                no

a. A value of zero means skip join index is not specified in the script.
b. A value of one means skip join index is specified in the script.
When specifying the SKIP JOIN INDEX option:
• If specified on archive, the archive file will not contain any join index information.
• If specified on restore, no join index information is restored.
• If specified on build, no join index information is built.
The following are examples of archive, restore, and build scripts setting the skip join index
option. Based on the table above, just remove the line that designates skip join index for cases
where it is not needed.
ARCHIVE Script
.LOGON brunello/DBC,DBC;
ARCHIVE DATA TABLES (arctest01_ji) ALL,
skip join index,
RELEASE LOCK,
FILE = DATA1;
LOGOFF;
RESTORE Script
.LOGON brunello/DBC,DBC;
RESTORE DATA TABLE (arctest01_ji) ALL,
skip join index,
RELEASE LOCK,
FILE = DATA1;
LOGOFF;
BUILD Script
.LOGON brunello/DBC,DBC;
BUILD DATA TABLES (arctest01_ji) ALL,
skip join index,
RELEASE LOCK;
LOGOFF;
Skipping a Stat Collection
The SKIP STAT COLLECTION option has been added to the archive/restore/copy statements.
The purpose of the SKIP STAT COLLECTION option is to skip the Stat Collection handling
in those respective statements. For archive, it removes the logic to read and archive the DDL
for recreating the Stat Collections. For restore/copy, it removes the logic to recreate the Stat
Collections after restoring/copying the table data for the associated object.
When SKIP STAT COLLECTION is specified, the user must revert to the manual handling of
Stat Collections that existed prior to Teradata Tools and Utilities 14.00. The user must
manually make a copy of the Stat Collection DDL from the source system. After the table data
has been restored/copied to the target system, the user must manually recreate the Stat
Collections by re-executing the Stat Collection DDLs that were saved from the source system.
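As a hedged sketch of that manual procedure, the statements below capture and then re-execute
the statistics DDL. The table name arctest.orders and the column name are placeholders; verify
the exact statement forms for your database release.
/* on the source system: capture the statistics DDL */
SHOW STATISTICS ON arctest.orders;

/* on the target system, after the restore or copy completes,
   re-execute the captured COLLECT STATISTICS statements, for example: */
COLLECT STATISTICS COLUMN (o_custkey) ON arctest.orders;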
When specifying the SKIP STAT COLLECTION option:
• If specified on archive, the archive file will not contain any Stat Collection information.
• If specified on restore, no Stat Collection information is restored.
• If specified on copy, no Stat Collection information is copied.
The following are examples of archive, restore, and copy scripts setting the SKIP STAT
COLLECTION option. If you want arcmain to automatically archive/restore/copy the Stat
Collection information, which is the default behavior, remove the line that designates SKIP
STAT COLLECTION for the cases where it is not needed or desired.
ARCHIVE Script
.LOGON brunello/DBC,DBC;
ARCHIVE DATA TABLES (arctest) ALL,
skip stat collection,
RELEASE LOCK,
FILE = DATA1;
LOGOFF;
RESTORE Script
.LOGON brunello/DBC,DBC;
RESTORE DATA TABLE (arctest) ALL,
skip stat collection,
RELEASE LOCK,
FILE = DATA1;
LOGOFF;
COPY Script
.LOGON brunello/DBC,DBC;
COPY DATA TABLE (arctest) ALL,
skip stat collection,
RELEASE LOCK,
FILE = DATA1;
LOGOFF;
Data compression
ARC does not compress/decompress data as it is received from or sent to the DBS. Any data
compression/decompression is done by the DBS. To learn more about what data compression/
decompression features are available in the DBS, please refer to the Database Design manual
and the Database Administration manual. See “Additional Information” on page 5 for
references to the Database Design manual and the Database Administration manual.
CHAPTER 3
Environment Variables
This chapter describes the variables used in Teradata ARC.
Table 12 summarizes Teradata ARC environment variables. For more detail, refer to the
sections that follow the table.
Table 12: Environment Variables

Variable    Description
ARCDFLT     The environment variable that points to the file containing the system-wide
            default parameter values, publicly accessible on the network.
ARCENV      The environment variable in which any valid runtime parameters may be specified.
ARCENVX     Same as ARCENV, except that ARCENVX has the highest override priority; any runtime
            parameter set in ARCENVX is guaranteed to be used. See ARCENV and ARCENVX in this
            chapter.
ARCDFLT
Purpose
ARCDFLT is the environment variable that points to the file that contains the default
runtime parameters.
Usage Notes
Here is an example using ARCDFLT:
SET ARCDFLT=C:\TESTARC\CONFIG.ARC
The example above is equivalent to:
ARCMAIN DEFAULT=C:\TESTARC\CONFIG.ARC
Almost all runtime parameters can be set as defaults in the file that is pointed to by ARCDFLT.
This avoids long command-line arguments. ARCMAIN loads the contents of the file pointed
to by ARCDFLT when ARCMAIN builds runtime parameters using these rules:
• If the path of a DEFAULT parameter file includes embedded spaces or special characters,
  enclose the path with single (') or double (") quote marks.
• Permanently set ARCDFLT using Start > Control Panel > System > Environment. However, if a
  parameter is specified multiple times in different places, the override priority is:
  a. Parameters set in ARCENVX
  b. Actual runtime parameters on the command line
  c. Parameters set in ARCENV
  d. Parameters set in the default file pointed to by ARCDFLT
ARCENV and ARCENVX
Purpose
ARCENV and ARCENVX are environment variables in which any valid Teradata ARC
runtime parameters can be specified.
Usage Notes
Here is an example using ARCENV and ARCENVX:
SET ARCENV=WDIR=C:\TEMP\ARC\
SET ARCENVX=SESSIONS=20
Almost all runtime parameters can be set as defaults in ARCENV or ARCDFLT or in some
combination of both. ARCMAIN internally loads the contents of ARCENV, or the contents of
the default file to which ARCDFLT points, when ARCMAIN builds runtime parameters using
these rules:
• If a parameter is specified multiple times in different places, the override priority is:
  a. Parameters set in ARCENVX
  b. Actual runtime parameters on the command line
  c. Parameters set in ARCENV
  d. Parameters set in the default file pointed to by ARCDFLT
• Specify runtime parameters in any order as long as PARM / DEFAULT is specified first.
• Commas or white spaces can be used as delimiters to separate ARCMAIN runtime parameter
  syntax elements. If any of the runtime parameters require commas or white spaces in their
  values, the values must be delimited by double or single quotes:
WORKDIR='\my workdir'
IOPARM='device=tape1 tapeid=100001 volset=weeklybackup'
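As a hedged illustration of the override priority, consider the following sketch; the parameter
values and the script name arc.in are arbitrary:
SET ARCENV=SESSIONS=8
SET ARCENVX=SESSIONS=20
ARCMAIN SESSIONS=12 < arc.in
In this sketch ARCMAIN runs with SESSIONS=20, because a value set in ARCENVX overrides both the
command-line value and the value set in ARCENV.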
CHAPTER 4
Runtime Parameters
When specifying Teradata ARC runtime parameters, specify PARM/DEFAULT first, then the
other parameters in any order.
For z/OS systems, specify runtime parameters in ARCPARM or PARM on the EXEC card of
the JCL.
Table 13 summarizes Teradata ARC runtime parameters. For more detail, refer to the sections
that follow the table.
Table 13: Runtime Parameters

CATALOG (All Platforms)
  Enables direct tape positioning for restore and copy operations.

CHARSETNAME (All Platforms)
  Enables support of Chinese/Korean character sets when connecting to the database.

CHECKPOINT (All Platforms)
  Places restart information in the restart log each time the specified number of data blocks
  are processed.

CHECKSUM (All Platforms)
  Allows the Teradata Database and Teradata ARC to verify data packets during ARCHIVE or
  RESTORE.

COPYSJI (All Platforms)
  Enables the ARC side logic to allow System Join Indexes (SJIs) to be copied to a different
  database.

DATAENCRYPTION (All Platforms)
  Instructs Teradata ARC to use the payload encryption feature that is supported by Teradata
  Call-Level Interface (CLI).

DBSERROR (All Platforms)
  Allows the user to override the severity of a specified error code. The new severity
  determines how Teradata ARC handles the error.

DEFAULT (All Platforms)
  Defines the file path name that contains default options for Teradata ARC.

ENCODINGCHARSET (All Platforms)
  Enables ARCMAIN to handle UTF16 strings when interacting with its interfaces.

ERRLOG (All Platforms)
  Creates a file to store Teradata ARC execution errors.

FATAL (All Platforms)
  Sets the program to reclassify a normally nonfatal error condition as fatal, and abend if the
  selected message is issued. (Specify the prefix NO to disable if previously enabled, even in
  the PARM file.)

FILEDEF (All Platforms)
  Maps the internal name of a file to an external name, if specified.

HALT (All Platforms)
  Terminates the current task if, during archive or recovery, the Teradata Database fails or
  restarts, and the new configuration differs from the configuration that preceded the failure.
  (Specify the prefix NO to disable if previously enabled, even in the PARM file.)

HEX (All Platforms)
  Displays object names in the standard output in hexadecimal notation. (Specify the prefix NO
  to disable if previously enabled, even in the PARM file.)

IOMODULE (Linux, Windows)
  Specifies the name of an access module.
  Note: The value of IOMODULE is passed directly to the tape access module. ARCMAIN does not
  interpret the value.

IOPARM (Linux, Windows)
  Specifies the initialization string for the access module named in IOMODULE.
  Note: The value of IOPARM is passed directly to the access module. ARCMAIN does not interpret
  the value.

LINEWRAP (All Platforms)
  Specifies the number of characters at which Teradata ARC wraps lines on its output log.

LOGON (All Platforms)
  Enables the runtime definition of the logon string.

LOGSKIPPED (All Platforms)
  Instructs Teradata ARC to log all of the tables that are skipped during an ARCHIVE operation.

NOTMSM (All Platforms except IBM z/OS mainframe)
  Turns off Teradata Multi-System Manager (TMSM) event logging.

NOWARN (All Platforms)
  Allows the user to override the severity of specified warning error codes to Info
  (severity 0). The new severity determines how Teradata ARC handles the warning.

OUTLOG (Windows)
  Duplicates standard (stdout) output to a file.

PARM (IBM z/OS)
  Specifies that frequently used runtime parameters are in a parameter file.
  Important: This parameter must be first when other parameters are specified.

PAUSE (All Platforms)
  Pauses execution if, during archive or recovery, the Teradata Database fails or restarts, and
  the new configuration differs from the configuration that preceded the failure. (Specify the
  prefix NO to disable if previously enabled, even in the PARM file.)

PERFFILE (All Platforms)
  Provides a performance logging option for Teradata ARC to export performance data for job
  analysis.

RESTART (All Platforms)
  Specifies the current operation is a restart of a previous operation that was interrupted by
  a client failure.

RESTARTLOG (All Platforms)
  Specifies the restart log name for ARCMAIN.

SESSIONS (All Platforms)
  Specifies the number of Teradata sessions available for archive and recovery operations.

STARTAMP (All Platforms)
  Specifies a starting AMP for an all-AMPs archive rather than starting on a random AMP or
  starting with AMP0. (Specify the prefix NO to disable if previously enabled, even in the PARM
  file.)

UEN (Windows)
  Specifies the value that will be used to define %UEN% (Utility Event Number) in a
  RESTORE/COPY operation.

VERBOSE (All Platforms)
  Sends Teradata ARC progress status information to the standard output device (for example,
  SYSPRINT for IBM platforms). (Specify the prefix NO to disable if previously enabled, even in
  the PARM file.)

WORKDIR (Windows)
  Specifies the name of the working directory.
CATALOG
Purpose
CATALOG provides direct tape positioning for restore and copy operations when a table or a
database is archived with the CATALOG runtime parameter enabled.
Syntax
(Syntax diagram. The accepted keyword forms, described in the syntax element list below, are:
CATALOG or CTLG, CATALOGALLTABLES or CTLGALL, CATALOGOBJONLY or CTLGOBJ, CATALOGFILE or
CTLGFILE, OLDCATALOG or OLDCTLG, NOCATALOG or NOCTLG, CATALOGFILENAME, and NOCATALOGFILE or
NOCTLGFILE.)
where:

CATALOG or CTLG
  Enables CATALOG at the table level for ARCHIVE, RESTORE/COPY and/or ANALYZE statements.

CATALOGALLTABLES or CTLGALL
  Synonyms for CATALOG.

CATALOGOBJONLY or CTLGOBJ
  Enables CATALOG at the object level for ARCHIVE, RESTORE/COPY and/or ANALYZE statements.

CATALOGFILE or CTLGFILE
  Enables CATALOG to write all catalog information to a file on the client machine. The default
  file name for the catalog is 'CATALOG' on mainframe systems, and 'CATALOG_<uen>.CTG' on other
  systems, where <uen> is the Utility Event Number for the archive job.

OLDCATALOG or OLDCTLG
  When specified along with CATALOG, enables use of the multi-table implementation of CATALOG,
  instead of the single-table method, which does not create a new table for each ARCHIVE.

NOCATALOG or NOCTLG
  Disables CATALOG for ARCHIVE, RESTORE/COPY and/or ANALYZE statements.

CATALOGFILENAME or CTLGFILE
  Enables changing the file name for the catalog.

NOCATALOGFILE or NOCTLGFILE
  Disables CATALOGFILE so that catalog information is not created during ARCHIVE, and not used
  during RESTORE or COPY.
The following syntax is not supported:
• CATALOGALL
• CTLGALLTABLES
• CATALOGOBJ
• CTLGOBJONLY
Usage Notes
CATALOG has two levels of granularity: OBJECT and ALL tables. If CATALOG is specified at
the object level, the repositioning information is inserted into the CATALOG table for the
database header record only.
Change the database in which the CATALOG table is created by adding this runtime
parameter:
CATALOGDB=dbname
If CATALOGDB is not specified, Teradata ARC uses $ARC as the default CATALOG database.
Because catalog information is saved in the database, enabling CATALOG has a slight impact on
performance. When archiving a database that contains many small or empty tables, it is
recommended to disable CATALOG or, alternatively, to enable CATALOG at the object level.
Teradata ARC automatically excludes an active CATALOG database from its object list to
avoid HUT lock conflicts. To archive, restore, or copy a CATALOG database, use
NOCATALOG.
Automatic Activation of CATALOG
Teradata ARC automatically checks for the following macro before starting an archive
operation if CATALOG is specified as a runtime parameter:
If CATALOG is desired for all archive operations, create the following macro:
CREATE MACRO $ARC.ACTIVATE_CATALOG as (;);
Automatic activation of CATALOG only works for archive operations. Explicitly specify
CATALOG for restore/copy operations.
The CATALOG Table
The default CATALOG database is $ARC (or $NETVAULT_CATALOG when using NetVault).
It must be created and appropriate access privileges must be granted before the first use of the
CATALOG runtime parameter.
Catalog information for all archive jobs is kept in the one CATALOG table. To continue using
the previous multi-table implementation of CATALOG, specify the OLDCATALOG
parameter in addition to the CATALOG parameter. Using old catalog tables during RESTORE
or COPY is valid, even if OLDCATALOG is not specified. ARCMAIN searches for a table with
the old naming style, and uses it if one exists.
Example
For UEN 1234, the following table is automatically created at the beginning of an archive
operation:
$ARC.CATALOG_1234
If an error occurs when creating the CATALOG table, Teradata ARC terminates with a fatal
error. Correct the error and resubmit the archive job.
During an archive operation, a CATALOG table row is inserted at the following instances:
• For each object, a database level CATALOG row is inserted at the end of the dictionary phase.
• For each table in the database, if the object level has not been specified, a table level
  CATALOG row is inserted at the end of the data phase.
The CATALOG tables can be used for detailed archive history in conjunction with the
database DBC.RCEvent table. For information on the database DBC.RCEvent table, see the
Data Dictionary.
Changing the Primary Index in the CATALOG Table
In previous versions of Teradata ARC, CATALOG tables were created with inefficient primary
indexes that sometimes caused rows to be skewed on a limited number of AMPs. To correct this,
follow these steps to change the primary index of an existing catalog table.
Note: If a new catalog table is created by Teradata ARC, these steps are not necessary.
1. Create a new catalog table with the same DDL as the existing table:
   CREATE TABLE CATALOGNEW AS CATALOG WITH NO DATA;
2. Modify the new catalog table to use the correct primary index:
   ALTER TABLE CATALOGNEW
   MODIFY PRIMARY INDEX(EVENTNUM, DATABASENAME, TABLENAME);
3. Copy all data from the existing catalog table to the new catalog table:
   INSERT INTO CATALOGNEW SELECT * FROM CATALOG;
4. Remove the existing catalog table:
   DROP TABLE CATALOG;
5. Rename the new catalog table to "CATALOG":
   RENAME TABLE CATALOGNEW AS CATALOG;
CATALOG Operations
Archive
CATALOG no longer creates a table for each ARCHIVE statement; instead, CATALOG uses a
single table named CATALOG. Multiple catalog tables are only created if the OLDCATALOG
parameter is specified.
Restore
Use CATALOG in a restore operation only to selectively restore one or some databases or
tables. To restore the majority of a tape or an entire tape, do not use CATALOG.
The UEN is saved in an archive header. Therefore, ARCMAIN knows which CATALOG table
in the CATALOG database to access to retrieve catalog information for a restore operation.
When ARCMAIN is about to search for the database header, it retrieves the CATALOG row for
the database and uses its repositioning information for direct tape repositioning.
These operations occur if CATALOG was enabled for the archive operation:
• If CATALOG was enabled at the table level, ARCMAIN queries the CATALOG table for that
  specific table and uses the repositioning information for the table header search.
• If CATALOG was enabled at the object level at the time of the archive operation, then only
  database level repositioning is done using the catalog information; the tapes are scanned
  thereafter.
Copy
Use CATALOG in a copy operation only to selectively copy one or some databases or tables. If
most or all of a tape is being copied, do not use CATALOG.
Copy operations to the same system from which data was archived work similarly to restore
operations with respect to the use of catalog information. To use CATALOG in a copy
operation to a different system, knowing the UEN of the tape is necessary. Obtain the UEN by
using an ANALYZE statement or by reading the output from an ARCHIVE statement. Use the
UEN to archive and copy the corresponding CATALOG table from the source system to the
target system before running a copy operation.
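A hedged sketch of that preparation step follows; the ARCHIVE runs on the source system and the
COPY on the target system. The per-UEN table name follows the CATALOG_<uen> example given
earlier (with the single-table implementation the table is simply $ARC.CATALOG), and the
archive file name CATARC is arbitrary.
ARCHIVE DATA TABLES ($ARC.CATALOG_1234),
RELEASE LOCK,
FILE=CATARC;

COPY DATA TABLES ($ARC.CATALOG_1234) (FROM ($ARC.CATALOG_1234)),
RELEASE LOCK,
FILE=CATARC;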
Mirrored Files
When two different data sets are archived by specifying File twice in an ARCHIVE statement,
repositioning information is only saved in the CATALOG table for the primary, not the second
(called the mirrored) data set.
If the mirrored data set is used in a restore operation with CATALOG, Teradata ARC detects
that the mirrored data set is being used and automatically disables CATALOG.
Analyze
A CATALOG table can be generated offline using an ANALYZE statement that contains the
CATALOG keyword; however, the ANALYZE statement does not recognize CATALOG
specified as a runtime parameter. Tapes are scanned and the CATALOG rows are inserted into
the CATALOG table for the database header records and the table header records in the tape
set. After the CATALOG table is generated, it can be used in restore/copy operations.
Specifying CATALOG as a keyword in an ANALYZE statement is useful when a subset of
objects must be restored or copied from a recataloged table set multiple times. However, for
one-time operations, use a RESTORE or COPY statement without specifying CATALOG as a
runtime parameter; catalog generation using an ANALYZE statement requires scanning the
entire tape set anyway.
CATALOGFILE
Use the CATALOGFILE parameter to change the file name for the catalog with the
CATALOGFILENAME command-line option. Disable CATALOGFILE by specifying
NOCATALOGFILE or NOCTLGFILE. The effect is the same as NOCATALOG. Catalog
information is not created during ARCHIVE, and is not used during RESTORE or COPY.
When using the CATALOG option of ANALYZE along with the CATALOGFILE /
CATALOGFILENAME parameters, ANALYZE writes to both the CATALOG table on the
database and the specified client-side file.
Note: In mainframe platforms (for example, z/OS), limit file names to eight characters.
NOCATALOG
NOCATALOG or NOCTLG disables CATALOG. No CATALOG table is created during an
archive operation; no catalog information is used during a restore/copy operation.
Dictionary Archive
The internal processing of a dictionary archive is equivalent to archiving a database with
empty tables. There is no performance benefit for archive or restore/copy operations for a
dictionary archive. Therefore, Teradata ARC automatically lowers the granularity of
cataloging to the object level for dictionary archives. To generate table-level details for
content/history purposes, run an ANALYZE of the archive file while using the CATALOG
keyword.
CHARSETNAME
Purpose
The CHARSETNAME parameter adds support for the BIG5, GB, UTF-8, and Korean
character sets to Teradata ARC. Use the CHARSETNAME parameter when invoking the
ARCMAIN program at startup.
Syntax
CHARSETNAME=name
where:

name
  Name of the selected character set.
Usage Notes
Teradata ARC can be used with a database that uses an alternate character set. Alternate
character sets available for use with Teradata ARC include kanji ideographs, katakana, Korean,
and traditional and simplified Chinese.
Use either CSNAME or CS as shorthand for CHARSETNAME.
Teradata ARC uses the specified character set when connecting to the database. If the option is
not specified, Teradata ARC uses one of these default character sets for the client system:
• EBCDIC for channel-attached
• ASCII for network-attached
Available Character Sets
Use any of the character sets in “Table 14: Teradata-Defined Character Sets” to define the
name parameter. Refer to SQL Fundamentals for information on defining your own character
set.
Table 14: Teradata-Defined Character Sets

Operating System     Character Set Name       Language Usage
Channel-attached     EBCDIC                   default
                     KATAKANAEBCDIC           katakana
                     KANJIEBCDIC5026          kanji 5026
                     KANJIEBCDIC5035          kanji 5035
                     TCHEBCDIC937_3IB         traditional Chinese
                     SCHEBCDIC935_2IJ         simplified Chinese
                     HANGULEBCDIC933_1II      Korean
Network-attached     ASCII                    default
                     UTF8                     general
                     KANJIEUC_0U              kanji EUC
                     KANJISJIS_0S             kanji shift-JIS
                     TCHBIG5_1R0              traditional Chinese
                     SCHGB2312_1T0            simplified Chinese
                     HANGULKSC5601_2R4        Korean
For compatibility with previous Teradata ARC versions, individually specify the following
character sets on the command line. It is not necessary to use the CHARSETNAME
parameter.
Channel-Attached Client Systems     Network-Attached Client Systems
EBCDIC                              ASCII
KATAKANAEBCDIC                      KANJIEUC_0U
KANJIEBCDIC5026_0I                  KANJISJIS_0S
KANJIEBCDIC5035_0I
It is not necessary to modify Teradata ARC jobs that used kanji character sets available in
previous Teradata ARC versions.
Establishing a Character Set
Establishing a character set for a particular archive/recovery operation is supported. However,
after the character set is established, do not attempt to change it while Teradata ARC is
executing an operation. When a character set is established, that character set accepts only
Teradata ARC statements input in that character set.
Establish a character set in one of the following ways:
• At startup, invoke a parameter in JCL from z/OS. This user-specified character set overrides
  all other character sets previously specified or placed in default.
• Specify a character set value in the HSHSPB parameter module for IBM mainframes. This takes
  second precedence.
• If a character set is not user-specified or specified in HSHSPB, the default is the value in
  the system table, database DBC.Hosts.
If relying on the database DBC.Hosts table for the default character set name, ensure that the
initial logon from the channel-attached side is in EBCDIC. Otherwise, Teradata ARC does not
know the default character set before logon.
Examples
On z/OS, this JCL invokes ARCMAIN:
//ARCCPY EXEC PGM=ARCMAIN,PARM=’SESSIONS=8 CHARSETNAME=KATAKANAEBCDIC’
Character Set Limitations
Using an alternate character set allows naming tables and databases using an extended set of
characters. However, the internal representation of these extended characters depends on the
session character set.
Although Teradata ARC can be used with a Teradata Database that uses an alternate character
set, special care must be taken when archive and restore operations are used on a Teradata
Database with an alternate character set:
• When archiving objects with names that use non-Latin characters, use the same character set
  defined using the multinational collation option for the database. Multinational collation
  provides more culturally aware ordering of data in the database. If a different character set
  is used, Teradata ARC might have difficulty restoring the archive. To learn more about
  multinational collation, refer to SQL Fundamentals.
• To restore an archive made with an alternate character set, use the same character set used
  in the archive.
• Do not restore an archive on a system that cannot handle the character sets used to create
  the archive.
• When the KATAKANAEBCDIC character set is passed to Teradata ARC, all messages are uppercase
  because that character set does not support lowercase Latin letters.
• Teradata ARC does not support the use of control characters (0x00 through 0x1F) in object
  names.
• Teradata ARC treats multi-byte character object names as case sensitive. The user must
  specify a multi-byte object name in the exact case that was used when the object was created
  in the Teradata Database; otherwise, Teradata ARC may not be able to locate that object.
  Character sets that may contain multi-byte characters include any of the non-Latin character
  sets. The ASCII and EBCDIC character sets are not included in this restriction.
When a Teradata ARC run begins while using an alternate character set, the software displays
the following message, where character_set_name is the name of the character set defined in
the runtime parameter:
CHARACTER SET IN USE: character_set_name
Note: Contact the system administrator or Teradata field support representative to learn more
about the alternate character sets supported for the Teradata Database installation.
Limitations for Japanese Character Sets and Object Names
For certain character sets (including multibyte characters), Teradata ARC accepts non-Latin
characters as part of an object name. An object name can be a database name, table name, user
name, password, or checkpoint name in a Teradata ARC statement. Object names must be no
longer than 30 characters.
Teradata ARC also supports input and display of object names in hexadecimal notation.
Teradata Database can store and display objects in hexadecimal format to allow object names
to be archived and restored from one platform to another. To learn more about using non-Latin characters in object names, refer to SQL Fundamentals.
For example, to archive a database created on a UNIX® platform from an IBM mainframe, do
not specify the UNIX EUC character set on the IBM mainframe. Instead, specify the database
name in the internal hexadecimal format understood by the Teradata Database.
Teradata ARC allows specifying object names in any of these formats:
• Without quotes. For example, for database DBC:
  DBC
• With quotes. For example, for database DBC:
  "DBC"
• External hexadecimal:
  X'<object name in external hexadecimal format>'
  This indicates that the specified hexadecimal string represents a name in the client
  (external) format. For example, for database DBC:
  X'C4C2C3'
• Internal hexadecimal:
  '<object name in internal hexadecimal format>'XN
  This indicates that the specified hexadecimal string represents a name in the Teradata
  Database internal format. For example, for database DBC:
  '444243'XN
A Teradata ARC statement can contain any combination of object name notations. For
example, the hexadecimal notation is a valid object name:
’4142’XN.TABLEONE
Limitations for Chinese and Korean Character Sets and Object Names
When using Chinese and Korean character sets on channel- and network-attached platforms,
object names are limited to:
• A-Z, a-z
• 0-9
• Special characters such as $ and _
• No more than 30 bytes
Note: For more information on Chinese and Korean character set restrictions, refer to SQL
Fundamentals.
Troubleshooting Character Set Use
Archive objects in the same character set class as the character set class from which they
originated. It might not be possible to restore an object to a database that does not support
alternate character sets. The name of a new table and a table that is part of the archive created
using an alternate character set will not appear to be the same, and Teradata ARC considers
the table as dropped during its restore of the archive.
If the Teradata Database fails to restore or copy a table or database that is archived, specify the
following to perform a restore or copy operation:
• XN form for this object
• Session character set as the same character set as the one used to archive the object
If it is not possible to use the same character set for a restore operation as was used for the
archive, specify the XN form of the object name for the particular table. The XN form will not
be the normal internal form that was stored in the dictionary. Instead, it is the hexadecimal
form of the object name stored in the archive. For example, to specify the XN name of an
object in the COPY/RESTORE command:
COPY DATA TABLES
(mkw.'8ABF8E9A6575635F3235355F636F6C5F746162'xn)
(from ('616263'xn.'8ABF8E9A6575635F3235355F636F6C5F746162'xn)),
release lock,
FILE = ARCHIV1;
Teradata ARC might still not be able to locate the object during the archive because of
collation differences between character sets and single-byte uppercasing applied to the object.
Teradata ARC might fail to locate an object that was part of an archive when copying the table
using ASCII or EBCDIC character sets because single-byte uppercasing was applied to the
object.
If Teradata ARC cannot locate the object on the archive via the XN format, copy the object to
a system that uses the character set of the archive, then archive it again. If the object name has
non-Latin characters, rename the object to a valid Latin character name before archiving.
Note: Object names must be no longer than 30 characters.
Sample JCL
The following sample line of code from the JCL illustrates how the name of a session character
set is passed via the PARM runtime parameter to ARCMAIN. In the example, the character set
is “KANJIEBCDIC5035_0I”:
//COPY EXEC PGM=ARCMAIN, PARM='KANJIEBCDIC5035_0I'
CHECKPOINT
Purpose
The CHECKPOINT parameter saves restart information in the restart log at the intervals
specified by this parameter.
Syntax
CHECKPOINT=n
CHKPT=n
where:

n
  Number of data blocks processed before the checkpoint is taken. The maximum number of blocks
  is 10000000. Suffix K replaces three zeros (000). Setting CHECKPOINT to 0 disables ARC's
  GetPos request. If CHECKPOINT is not specified, the default number of blocks is 10000 for IBM
  and 32000 for Windows.
Usage Notes
Both archive and restore/copy operations take checkpoints in the data phase. Every time the
specified number of data blocks are processed, Teradata ARC saves the tape positioning and
other processing information in the restart log. The frequency of checkpoint operations is
controlled by the number of data blocks processed.
Checkpoint operations cause I/Os and additional processing overhead; therefore, too many
checkpoints might adversely impact performance. Each Teradata ARC data block contains up
to 32K.
The CHECKPOINT parameter also controls the frequency of VERBOSE display, if active. To
see the amount of data processed every minute with an approximate archive rate of 1MB/
second, set the CHECKPOINT value between 2500 and 4500.
• Setting a checkpoint frequency too high (a low CHECKPOINT value) might impact performance.
• Setting a checkpoint frequency too low (a high CHECKPOINT value) causes Teradata ARC to
  reprocess a large number of data blocks in case of a restart.
Note: The CHECKPOINT runtime parameter only applies to mainframe systems (z/OS).
Checkpointing to allow for a restart of Teradata ARC is no longer supported on network-attached systems.
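For example, a hedged z/OS sketch that takes a checkpoint every 4000 data blocks; the job step
name and parameter values are illustrative only:
//ARCBKP EXEC PGM=ARCMAIN,PARM='SESSIONS=8 CHECKPOINT=4000'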
Setting CHECKPOINT to 0
If CHECKPOINT is set to 0:
• Teradata ARC does not save positioning information into the restart log. This means there is
  no restart/reconnect support. Some RESTORE/COPY operations also fail with an error because no
  positioning information is saved.
• CATALOG is not supported and is automatically disabled. Restart processing terminates with a
  fatal error.
For some tape drives, tape positioning information is not available or is very expensive.
Setting CHECKPOINT to 0 allows more efficient processing in these cases.
Note: Setting CHECKPOINT to 0 is not the same as setting CHECKPOINT to a very large
number. If CHECKPOINT is set to a very large number, Teradata ARC still saves some
checkpoint information in the restart log, such as the beginning of database/table, and restart/
reconnect is supported at database/table boundaries.
Note: Some applications that use Teradata ARC do not support positioning activity. To
prevent positioning activity, set CHECKPOINT to 0.
CHECKSUM
Purpose
The CHECKSUM parameter allows the Teradata Database and/or Teradata ARC to verify data
packets during ARCHIVE or RESTORE.
Syntax
CHECKSUM=n
where:

n
  Checksum value, which can be one of the following:
  • 0 = checksum option is disabled.
  • 1 = the checksum is calculated by the Teradata Database.
  • 2 = the checksum is verified by Teradata ARC during ARCHIVE, and by both Teradata ARC and
    the Teradata Database during RESTORE.
  Note: CHECKSUM can also be used with the ANALYZE statement. If the VALIDATE option is used
  with ANALYZE, Teradata ARC validates the checksum for each data block in the archive.
Usage Notes
Expect the following effects, depending on the value set for CHECKSUM:
• Setting CHECKSUM=1 helps ensure data integrity during RESTORE, but this might cause a slight
  performance drop due to the fact that the checksum value for the data blocks is calculated
  only by the Teradata Database.
• Setting CHECKSUM=2 provides verification of data integrity during both ARCHIVE and RESTORE,
  but because both the Teradata Database and Teradata ARC calculate the checksum, this can
  result in a significant performance drop.
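As a hedged example of enabling checksum verification for an archive job on a network-attached
client; the script and output file names are arbitrary:
arcmain CHECKSUM=2 < archive_script.arc > archive_job.out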
COPYSJI
Purpose
The COPYSJI parameter enables logic to allow System Join Indexes (SJIs) to be copied to a
different database.
Syntax
COPYSJI
Usage Notes
This parameter is only needed for ARC 14.10 releases that are running with ARC-8522. If this
parameter is used with a version of ARC 14.10 that does not have ARC-8522 installed, ARC
will fail with the following fatal error:
*** Failure ARC0004:Sorry, I do not recognize parameter COPYSJI.
For 15.0 and higher, the COPYSJI runtime parameter is not needed: the logic that allows System
Join Indexes to be copied to a different database is integrated into ARC 15.00 and is the
default behavior.
The ARC logic to copy SJIs is dependent on DBS DR 164709. DBS DR 164709 must also be
enabled to allow the SJIs to be copied. The DBS logic is integrated into V15.00 and will be the
default behavior. DBS DR 164709 has also been back-ported to V14.0 and V14.10 and must be
enabled by setting general DBSControl flag #361 (DumpSJIWithoutDBName) to TRUE.
If the logic is not enabled on either side, the SJIs will not be copied to its intended target
database. Both sets of changes must be enabled during both the ARCHIVE step and the COPY
FROM step. The following is an example of syntax:
arcmain COPYSJI < in_script > out_file
DATAENCRYPTION
Purpose
The DATAENCRYPTION parameter instructs Teradata ARC to use the payload encryption
feature (for security) that is supported by Teradata Call-Level Interface (CLI).
Syntax
DATAENCRYPTION
Usage Notes
This parameter encrypts all Teradata ARC data that is sent across a network, but it does not
encrypt data written to tape.
Note: Encryption and decryption require additional CPU resources. Therefore, using this
parameter can significantly slow the backup and restoration of data sent to and received by
Teradata ARC.
For additional information about statements related to this parameter, see “LOGDATA” on
page 218 and “LOGMECH” on page 224.
For more information about the encryption feature, see Teradata Call-Level Interface Version 2
Reference for Workstation-Attached Systems.
DBSERROR
Purpose
The DBSERROR parameter changes the severity of Teradata Database error codes.
Syntax
DBSERROR=(code, level)
where:

code
  Teradata Database error code.

level
  Specifies the new severity level for the specified error code. Teradata ARC only accepts the
  following integer values for the severity level for the DBSERROR parameter:
  0 = normal exit
  4 = warning
  8 = non-fatal error
  12 = fatal error
  16 = internal error (fatal)
Usage Notes
The severity of any Teradata Database error code can be increased from its default severity
level. The only Teradata Database error code that can be decreased from its default severity
level is 2843. If a lower severity level is specified for any Teradata Database error code other
than 2843, Teradata ARC displays error message ARC0124, then aborts. Specifying a Teradata
Database error code's default severity level is always acceptable. See Table 17 for a list of
selected DBS error codes and their default severity levels.
Use the DBSERROR parameter, for example, to trigger a fatal error in a situation that would
normally produce a warning or a non-fatal error.
If Teradata Database error 2843 occurs, use the DBSERROR parameter to decrease the severity
of the error from fatal error to warning. The error, which indicates that there is no more space
available during a restore operation, will be skipped. This allows the restore operation to
continue.
If a specified severity level does not match one of the values specified above for any Teradata Database
error, Teradata ARC displays error message ARC0125, then aborts.
The DBSERROR parameter can be specified multiple times for the same or different Teradata
Database errors. If the same DBS error is specified multiple times with the same or different
severity levels, the last severity level specified will be used.
Example
ARCMAIN DBSERROR=(2805,12)
In this example, error code 2805 (maximum row length exceeded) has its severity level
increased from its default level of warning (4) to fatal error (12).
ARCMAIN DBSERROR=(2843,4)
In this example, error code 2843 (database out of space) has its severity level decreased from
its default level of fatal error (12) to warning (4).
ARCMAIN DBSERROR=(2843,4) DBSERROR=(2805,8)
In this example, error code 2843 (database out of space) has its severity level decreased from
its default level of fatal error (12) to warning (4) and error code 2805 (maximum row length
exceeded) has its severity level increased from its default level of warning (4) to non-fatal error
(8).
ARCMAIN DBSERROR=(2805,4) DBSERROR=(2805,12) DBSERROR=(2805,8).
In this example, error code 2805 (maximum row length exceeded) is specified three times, each
with a different severity level, changing from its default level of warning (4) to fatal error
(12) to non-fatal error (8). Because non-fatal error (8) is the last severity level specified,
it is the severity level used.
DEFAULT
Purpose
The DEFAULT parameter defines the file path name that contains default options for Teradata
ARC.
Syntax
DEFAULT=filename
where:

filename
  Name of the file that contains Teradata ARC default options.
Usage Notes
• If DEFAULT is used, it must be the first runtime parameter specified in the list of
  parameters on the command line or in the environment variables.
• If the file path of the DEFAULT parameter includes embedded spaces or special characters,
  enclose the path with single (') or double (") quote marks.
• If ARCDFLT is specified, or if DEFAULT= starts either ARCENV or the actual command-line
  argument, the file pointed to by ARCDFLT or DEFAULT is read into memory and its content
  prepended to the command-line string.
• On IBM platforms, PARM is the runtime parameter that provides the analogous functionality to
  DEFAULT.
The following example illustrates the use of DEFAULT:
DEFAULT=C:\TESTARC\CONFIG.ARC
Default Configuration File
To use a certain set of runtime parameters every time Teradata ARC is invoked, save them in a
default configuration file pointed to by the DEFAULT runtime parameter. Each line in the
default configuration file can contain one or more parameters. Add comments after two
semicolons. If two semicolons are encountered in a line, the remaining characters of the line
are ignored.
For example, the following default configuration file called C:\TESTARC\CONFIG.ARC
includes a comment in the file and characters that are ignored:
CATALOG;;enables direct positioning
VERBOSE
;;IOPARM= 'device=tape1 tapeid=1-5 volset=980731backup autoload=120'
FILEDEF=(archive,ARCHIVE_%UEN%.DAT)
CHKPT=1000
ENCODINGCHARSET
Purpose
The ENCODINGCHARSET parameter adds support for the UTF-16 encoding character sets
to Teradata ARC. Use the ENCODINGCHARSET, ENNAME, or EN parameters when
specifying the UTF-16 encoding character set. Use the ENCODINGCHARSET parameter
when invoking the ARCMAIN program at startup.
Syntax
ENCODINGCHARSET=UTF16 CHARSETNAME=UTF8
ENNAME=UTF16 CHARSETNAME=UTF8
EN=UTF16 CHARSETNAME=UTF8
Note: The CHARSETNAME parameter set to UTF8 must always accompany the UTF16
encoding character set parameter because it is the only valid character set name used in the
interaction of Teradata ARC with DBS.
Usage Notes
ARCMAIN has the ability to handle UTF-16 strings when interacting with its interfaces. These
interfaces could be input files or output files. The data within parcels (data structures) and
SQL strings sent to the DBS is converted from the UTF-16 format to the DBS format. The data
received from the DBS is converted from the DBS format to UTF-16 format. Currently the
DBS only supports the UTF-8 format.
Thus ARCMAIN is able to:
• read input files encoded using UTF-16
• write output files encoded using UTF-16
Two restrictions apply to the encoding character set parameter:
• The only allowed value for the ENCODINGCHARSET parameter is UTF16. ARC does not allow any
  other value to be specified here.
• The only valid value for the CHARSETNAME parameter is UTF8, as the Teradata ARC interaction
  with DBS must occur in another Unicode-compatible charset, namely UTF8.
Example
arcmain ENCODINGCHARSET=UTF16 CHARSETNAME=UTF8 < arc.in
Where the arc.in file contains one or more archive/recovery control language statements.
ERRLOG
Purpose
The ERRLOG parameter duplicates all the error messages generated by Teradata ARC and
stores them in an alternate file. All error messages remain present in the standard output
stream.
Syntax
ERRLOG=name
where:

name
  Name of the alternate error log file for storing these messages.
Usage Notes
The use of the ERRLOG parameter depends on the operating system. On z/OS systems, name
refers to a DD name defined within the step that calls ARCMAIN and is also limited to eight
characters. This limit does not apply to other platforms.
Note: On all other platforms, name refers to a disk file in the current directory.
Specify NOERRLOG to disable the ERRLOG parameter.
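For example, a hedged network-attached invocation that duplicates error messages to a disk file;
the error log and script names are arbitrary:
arcmain ERRLOG=arc_errors.log < archive_script.arc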
FATAL
Purpose
The FATAL parameter provides a runtime option to the ARCMAIN client to reclassify a
normally nonfatal error condition as fatal, and abend if the selected message is issued.
Syntax
FATAL=(errorno [,errorno ...])
where:

errorno
  Changes specified errors to a completion code of 12. This is a fatal condition which
  terminates Teradata ARC.
Usage Notes
FATAL changes the severity of any Teradata ARC error to FATAL, exit code = 12. For instance,
if ARC0802 is specified as a fatal error, Teradata ARC immediately exits at ARC0802.
In the next example, if an ARC0802 I/O error or ARC1218 warning message occurs, FATAL
allows the immediate termination of Teradata ARC rather than having the program run to
completion. Either normally nonfatal error condition is interpreted as fatal, and the program
terminates:
ARCMAIN FATAL=(802,1218)
The FATAL option allows stopping the job immediately when one copy of the dump fails. This
ensures quick recovery action when only one good copy exists of the archive dump.
ARC1218 warns that a problem table will be skipped and the next table processed. If
immediate termination is preferable to continuing, the FATAL parameter is invoked.
Delimiters, such as =, are mandatory.
Please see Table 18 on page 168 for a list of the Teradata ARC error codes with their default
severity levels.
FILEDEF
Purpose
The FILEDEF parameter maps the internal name of a file to an external name, if specified.
%UEN% enables ARCMAIN to embed the UEN (Utility Event Number) in a filename
Syntax
FILEDEF=(internal name,'external name')
FILE=(internal name,'external name')
FDEF=(internal name,'external name')
where:

external name
  External name of a file.

internal name
  Internal name of a file specified after the FILE keyword in the following Teradata ARC
  statements:
  • ANALYZE
  • ARCHIVE
  • RESTORE
  • COPY
Usage Notes
At execution, internal name, as specified in FILEDEF, is mapped to external name. But not all
internal names need to be defined with a FILEDEF parameter. If a FILEDEF parameter is not
specified, the internal name itself is used as the external name.
Multiple FILEDEFs may be specified (one filename per FILEDEF). FILEDEFs do not override
each other. Thus, there can be a list of FILEDEFs:
FILEDEF=(archive,ARCHIVE_%UEN%.DAT)
FILEDEF=(ARCHIVE1,C:\TESTARC\ARCHIVE1)
FILEDEF=(ARCHDICT,DICTIONARY_%UEN%)
If %UEN% is part of external name, then at execution of the Teradata ARC statement it is
expanded to the actual UEN (Utility Event Number) or substituted with the value specified in
the UEN runtime parameter in a RESTORE/COPY statement.
For example, if FILEDEF is defined as follows in an environment variable or in a default
configuration file:
FILEDEF=(archive,ARCHIVE_%UEN%.DAT)
then in the following Teradata ARC script:
ARCHIVE DATA TABLES (DB), RELEASE LOCK, FILE=ARCHIVE
ARCHIVE is the internal name, ARCHIVE_%UEN%.DAT is the external name, and the final
external name is something like ARCHIVE_1234.DAT, assuming that the UEN is 1234.
HALT
Purpose
The HALT parameter saves the current state into the restart log and terminates the current
task if, during archive or recovery, the Teradata Database fails or restarts, and the new
configuration differs from the configuration that preceded the failure.
Syntax
HALT
Usage Notes
Use this parameter only when online configuration changes and non-fallback tables are
involved. Processing is aborted when any table is non-fallback. If all tables are fallback, HALT
is ignored. HALT affects the database as follows:
• If HALT is not specified, Teradata ARC displays a message and skips to the next table.
• If HALT and PAUSE are both specified, Teradata ARC displays an error message. Specify HALT or
  PAUSE, but not both.
Note: If the HALT option is specified and a disk is replaced after an archive or recovery
operation has been started, the operation cannot be restarted from where it left off. Restart the
operation from the beginning.
Specify NOHALT to disable the HALT parameter.
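For example, a hedged invocation that enables HALT for a recovery job on a network-attached
client; the script name is arbitrary:
arcmain HALT < restore_script.arc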
HEX
Purpose
The HEX parameter enables the display of object names in hexadecimal notation. This display
is in internal Teradata Database hexadecimal format, except that the HEX display of object
names from the ANALYZE statement is in the external client format (X’...’).
Syntax
HEX
Usage Notes
Use the HEX option to archive/restore table or database names that are created in a character
set that is different from the currently specified character set.
If the HEX option is not used when object names are created in a character set that is not the
currently specified character, the displayed object names might be unreadable or unprintable.
The HEX option is retained across client and Teradata Database restarts.
Following is an example of the HEX parameter from z/OS.
//ARCCPY EXEC PGM=ARCMAIN,PARM='SESSIONS=8 HEX'
Specify NOHEX to disable the HEX parameter.
IOMODULE
Purpose
The IOMODULE parameter specifies the name of an access module.
Syntax
IOMODULE=name
where:

name
  Name of an access module file.
Usage Notes
The value of this parameter passes directly to the tape access module. ARCMAIN does not
interpret the value.
IOPARM
Purpose
The IOPARM parameter specifies the initialization string for the access module named in the
IOMODULE runtime parameter.
Syntax
IOPARM='InitString'
where:

'InitString'
  Access module initialization string.
Usage Notes
The value of this parameter passes directly to the tape access module. ARCMAIN does not
interpret the value.
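The two parameters are typically supplied together. In the following sketch the module name and initialization string are hypothetical placeholders; the values that are actually valid are defined by the access module being used:

    arcmain IOMODULE=mytapemod IOPARM='BLOCKSIZE=65536 DEVICE=TAPE01' < archive_db.arc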
LINEWRAP
Purpose
The LINEWRAP parameter specifies the number of characters at which Teradata ARC wraps
its lines in the output log, or alternatively turns off wrapping.
Syntax
LINEWRAP = val
where:

Syntax Element    Description
val               Specifies the maximum number of characters that Teradata ARC outputs on a line.
                  Specify one of the following:
                  • A value of 0 to disable line wrapping.
                  • A value between 80 and 255; any other value resets the line width to 80 characters.
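For example, the following invocations widen the output log lines and disable wrapping, respectively; the script file name is a placeholder:

    arcmain LINEWRAP=132 < archive_db.arc
    arcmain LINEWRAP=0 < archive_db.arc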
LOGON
Purpose
The LOGON parameter enables the runtime definition of the logon string.
Syntax
LOGON = 'logon string'
Usage Notes
To avoid defining a password in a Teradata ARC script (for security reasons), use the LOGON
parameter to define a logon string at execution time.
If the $LOGON token is specified in the LOGON statement, the logon string is replaced at
execution.
Delimiters are mandatory.
(Teradata Wallet users) Because Teradata Wallet usernames and passwords contain special
characters, you must use double quotation marks to log on. For example:
.logon bohrum/"$tdwallet(rsl_dbc_user)","$tdwallet(rls_dbc_pw)"
Example
If LOGON is specified on the command line, as follows:
ARCMAIN LOGON='TDP6/USER1,PWD1'
and the Teradata ARC script contains:
LOGON $LOGON;
then $LOGON is expanded to TDP6/USER1,PWD1 at runtime.
LOGSKIPPED
Purpose
The LOGSKIPPED parameter instructs Teradata ARC to log all of the tables that are skipped
during an ARCHIVE operation.
Syntax
LOGSKIPPED
Usage Notes
The LOGSKIPPED parameter instructs Teradata ARC to log all of the tables that are skipped
during an ARCHIVE operation. The tables are logged in a table named SkippedTables, which
is created in the ARC catalog database ($ARC by default, or $NETVAULT_CATALOG when
using NetVault).
Teradata ARC skips a table when the table is:
• Currently involved in a restore operation from a different instance of Teradata ARC
• Currently involved in a FastLoad or MultiLoad operation
• Missing a table header or a dictionary row, or is otherwise damaged
For each table that is skipped, Teradata ARC adds a row in the SkippedTables table. The
columns included in the SkippedTables table are:
• EVENTNUM—Event number of the archive job where the table was skipped.
• DATABASENAME—Database containing the skipped table.
• TABLENAME—Name of the table that was skipped.
• ERRORTIME—TIMESTAMP value of when the error occurred on the table.
• ERRORCODE—Error code that caused Teradata ARC to skip the table.
• ERRORTEXT—Error text for the corresponding error code.
Teradata ARC does not automatically retry tables that are added to the SkippedTables table.
Therefore, ensure that these tables are separately archived when an error condition is fixed.
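For example, skipped-table logging might be enabled on the command line with the other runtime parameters; the script file name is a placeholder:

    arcmain LOGSKIPPED SESSIONS=8 < archive_db.arc

After the job completes, the rows added to the SkippedTables table in the ARC catalog database identify the tables that must be archived again once the underlying errors are corrected.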
NOTMSM
Purpose
The NOTMSM parameter turns off ARC TMSM event logging.
Syntax
NOTMSM
Usage Notes
Use this parameter to turn off ARC TMSM event logging. Without this parameter,
applications that use Teradata ARC and TMSM would see events logged twice. The NOTMSM
parameter allows the application to turn off the TMSM event logging in Teradata ARC.
NOWARN
Purpose
The NOWARN parameter provides a runtime option that lets the ARCMAIN client selectively reclassify a condition that normally produces a warning as normal, ensuring a normal exit if the selected message is issued.
Syntax
NOWARN = (errorno [, errorno ...])
where:
Syntax Element    Definition
errorno           Changes specified warning error codes to a severity code of 0
Usage Notes
NOWARN changes the severity of a Teradata ARC warning code to NORMAL (severity code 0). This runtime parameter is useful for those who do not want specific warning codes to generate an exit code with severity level of 4 (WARNING).
The warning message will still be displayed in the job output but it will be classified as an
'INFO' message with severity 0 instead of a 'WARNING' message with severity 4. If no other
conditions are found, the job will terminate with severity 0.
This option only applies to warning codes; any non-warning code entered with this option
will be considered to be an invalid warning code and will result in a fatal error condition and
cause ARCMAIN to abort.
For example:
ARCMAIN NOWARN=(1032,1033,1034)
Delimiters, such as =, are mandatory.
To see a list of the Teradata ARC warning codes, please see Table 18 on page 168 in the
WARNING Severity Level section.
Notice:
If this runtime parameter is used, the user will be less likely to see the warning
messages that are being filtered out. For this reason, when this runtime parameter is
used, the user is solely responsible for any consequences caused by ignoring the
specified warning messages.
Example
Without the NOWARN runtime parameter: ARCMAIN

08/22/2013 15:43:44
.
.
08/22/2013 15:43:44  *** Warning ARC1032:Object "ARCTESTDB_DB1"
                     has one or more Online tables that is still logging.
08/22/2013 15:43:44  DUMP COMPLETED
08/22/2013 15:43:44  *** Warning ARC1033:After archive, one or more Online
                     tables is still logging.
08/22/2013 15:43:44  STATEMENT COMPLETED
08/22/2013 15:43:44  LOGGED OFF     7 SESSIONS
08/22/2013 15:43:44  STATEMENT COMPLETED
08/22/2013 15:43:44  ARCMAIN TERMINATED WITH SEVERITY 4
With the NOWARN runtime parameter: ARCMAIN NOWARN=(1032,1033)

08/22/2013 18:35:24  SEVERITY LEVEL CHANGED TO NORMAL(0): ARC1032, ARC1033
08/22/2013 18:35:24
.
.
08/22/2013 18:35:31  *** Info ARC1032:Object "ARCTESTDB_DB1" has one or more
                     Online tables that is still logging.
08/22/2013 18:35:31  DUMP COMPLETED
08/22/2013 18:35:31  *** Info ARC1033:After archive, one or more Online
                     tables is still logging.
08/22/2013 18:35:31  STATEMENT COMPLETED
08/22/2013 18:35:31  LOGGED OFF     7 SESSIONS
08/22/2013 18:35:31  STATEMENT COMPLETED
08/22/2013 18:35:31  ARCMAIN TERMINATED WITH SEVERITY 0
With an invalid WARNING code: ARCMAIN NOWARN=(1030)

The job will terminate with the following error:

08/22/2013 19:08:33  *** Failure ARC0016:TERMINATING BECAUSE OF
                     UNANTICIPATED CONDITIONS. 1030 is not a valid
                     warning number for NOWARN.
OUTLOG
Purpose
The OUTLOG parameter duplicates standard (stdout) output to a file.
Syntax
OUTLOG = name
Usage Notes
If OUTLOG is specified, all output lines that go to standard (stdout) output are duplicated
and saved in the file pointed to by OUTLOG.
Any existing file with the same name is overwritten.
On mainframe platforms (that is, z/OS), limit file names to eight characters.
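For example, the following invocation saves a copy of the standard output in the working directory; the file and script names are placeholders:

    arcmain OUTLOG=arcjob.out LOGON='tdpid/user,password' < archive_db.arc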
PARM
Purpose
The PARM parameter identifies the file that contains frequently used runtime parameters.
Syntax
PARM = ddname
where:
Syntax Element    Definition
ddname            ddname (z/OS platform) of the file containing the frequently used runtime parameters
Usage Notes
The PARM runtime parameter saves frequently used runtime parameters in a separate
parameter file. Its functionality is analogous to DEFAULT.
ARCMAIN reads the PARM file, then builds the parameter list line and appends the actual
parameter list to it. PARM or PARM=ddname must be the first parameter in the parameter
list, meaning that PARM cannot appear in the middle of the parameter list.
On IBM platforms, if PARM is specified without a ddname, then the default ddname is
SYSPARM.
Use the following NO parameters to disable any previously specified runtime parameters,
including those specified in PARM:
• NOHALT
• NOPAUSE
• NORESTART
• NOHEX
• NOVERBOSE
• NOERRLOG
The SESSIONS and CHECKPOINT parameters can be specified more than once; each instance overrides the previous one, and the last instance is the value that is used. For example, if these runtime parameters are specified once in PARM and then one or more times in the parameter list, only the last instance of the parameter is used.
Example
The next example illustrates the use of PARM, including an instance when more than one
parameter (in this case, SESSIONS) is specified in PARM:
//REST EXEC PGM=ARCMAIN,PARM='SESSIONS=80 SESSIONS=40 VERBOSE'
In the example, 40 sessions are used, not 80, because SESSIONS=40 is the last instance of the
parameter specified in PARM.
JCL for the Above Example
Following is sample JCL that includes a PARM file and a parameter list.
//REST     EXEC PGM=ARCMAIN,PARM='PARM NOHALT CHECKPOINT=2000'
//*-------------------------------------------------------------
//* A SEPARATE DATASET CAN BE USED AND SYSPARM JUST POINT TO IT.
//* SYSPARM IS EXPLICITLY SHOWN HERE FOR ILLUSTRATION PURPOSE.
//SYSPARM  DD *
VERBOSE
CHECKPOINT=5000
SESSIONS=80
HALT
/*
//DBCLOG   DD DSN=&&T86CLOG,DISP=(NEW,PASS),
//            UNIT=SCR,SPACE=(TRK,(8,8),RLSE)
//ARCHIVE  DD DSNAME=TEST.TEST.TEST,
//            DISP=(OLD,KEEP,KEEP)
//SYSPRINT DD SYSOUT=*
//SYSTERM  DD SYSOUT=*
//SYSABEND DD SYSOUT=*
//SYSIN    DD *
LOGON TDQA/TEST,TEST;
COPY DATA TABLES (TEST)
,RELEASE LOCK
,DDNAME=ARCHIVE;
LOGOFF;
/*
Note: For z/OS, DDNAME and the FILE keyword are interchangeable. For example,
DDNAME=ARCHIVE is the same as FILE=ARCHIVE.
Sample Output
07/09/2014 03:25:53  READING DEFAULTS FROM FILE=[SYSPARM]
07/09/2014 03:25:53  Copyright 1989-2014, Teradata Corporation.
07/09/2014 03:25:53  All Rights Reserved.
                     (ARCMAIN ASCII-art banner omitted)
07/09/2014 03:25:53  PROGRAM: ARCMAIN
07/09/2014 03:25:53  RELEASE: 15.10.00.00
07/09/2014 03:25:53  BUILD:   150108I (Jun 19 2014)
07/09/2014 03:25:53  PARAMETERS IN USE:
07/09/2014 03:25:53  SESSIONS 80
07/09/2014 03:25:53  CHECKPOINT FREQUENCY CHANGED TO 2000
07/09/2014 03:25:53  VERBOSE LEVEL 1
07/09/2014 03:25:53  NO HALT
07/09/2014 03:25:53  CHARACTER SET IN USE: EBCDIC
PAUSE
Purpose
The PAUSE parameter saves the current state into the restart log and pauses execution if,
during archive or recovery, the Teradata Database fails or restarts and the new configuration
differs from the configuration that preceded the failure.
Syntax
PAUSE
Usage Notes
Use this parameter for non-fallback tables only.
PAUSE causes Teradata ARC to wait for five minutes and then check whether the current configuration has returned to its previous state. PAUSE remains in its delay loop until the Teradata Database either returns to its previous state or a time limit is exceeded:
• If the AMPs are in the original configuration, Teradata ARC continues processing the archive or restore operation from where it left off.
• If the configuration still differs, Teradata ARC pauses for another five minutes and repeats the process described above.
  Teradata ARC repeats the five minute pause and check process 48 times (about four hours). If after 48 times the configuration is still different, Teradata ARC saves the current state into the restart log and terminates the current task.
• If PAUSE is not specified with non-fallback tables, the archive is aborted and Teradata ARC skips to the next table.
• If HALT and PAUSE are both specified, Teradata ARC displays an error message. Either HALT or PAUSE may be specified, but not both.
Note: If the PAUSE option is specified and a disk is replaced after an archive or recovery
operation is started, the operation cannot be restarted from where it left off. Restart the
operation from the beginning.
Specify NOPAUSE to disable the PAUSE parameter.
PERFFILE
Purpose
The PERFFILE parameter produces a log of Teradata ARC performance data as a source for
job performance monitoring, analysis, and debugging.
Syntax
PERFFILE = PLOG
PLOG is the user-specified name for the Performance file. This is a separate file from the
archive file and will be located in either the working directory or the path that is specified. See
“Usage Notes” on page 151 for an example.
Each line of the Performance file contains a two-character record type followed by the data
provided for this type, as shown in the next table.
BG <timestamp> <IOPARM>
    Beginning of performance log.
    <IOPARM> is the list of parameters given in the IOPARM command-line option (if any).

EN <timestamp> <exitcode>
    Indicates the end of the Teradata ARC job.
    <exitcode> is the final exit code returned by ARC.

ER <timestamp> <exitcode> <errorcode> <string>
    Indicates a warning, error, or failure by Teradata ARC.
    <exitcode> is the severity of the error (4 for warning, 8 for error, 12 for failure, and 16 for internal error).
    <errorcode> is the Teradata ARC or Teradata Database error code for the error.
    <string> is the string output by Teradata ARC indicating the error.

LG <timestamp> <logon-str>
    Displays the logon time and the logon string given to Teradata ARC in the ARC script.

OE <timestamp> <TS-end> <objtype> <name>
    Indicates the end time for the named object.
    <TS-end> is the end time for the object.
    <objtype> is the type of object (database-level, 'D', or table-level, 'T').
    <name> is the name of the database (and table for table-level objects).

OS <timestamp> <objtype> <name>
    Indicates the start time for the named object.
    <objtype> is the type of object (database-level, 'D', or table-level, 'T').
    <name> is the name of the database (and table for table-level objects).

ST <timestamp> <statement>
    Displays the currently executing statement (from the ARC script).

TB <TS-begin> <TS-build> <TS-end> <bytes> <name>
    Indicates the output at the end of archiving or restoring each table.
    <TS-begin> is the time the table started.
    <TS-build> is the time the build process started.
    <TS-end> is the ending time.
    <bytes> is the number of bytes of data archived or restored.
    <name> is the name of the database and table archived or restored.

VB <timestamp> <string>
    Indicates that a verbose message is output by Teradata ARC.
    <string> is the message that is displayed.
Usage Notes
On mainframe platforms (that is, z/OS), limit file names to eight characters.
The following example demonstrates using PERFFILE to create a Performance log file:
ARCMAIN PERFFILE=C:\baruser\ARC_Test_perf.txt
Contents of Performance log: C:\baruser\ARC_Test_perf.txt
BG 12/02/2008 16:37:43
LG 12/02/2008 16:37:43 LOGON TDPID/dbc,;
ST 12/02/2008 16:37:46
ST 12/02/2008 16:37:46 copy data tables
ST 12/02/2008 16:37:46 (DB1),
ST 12/02/2008 16:37:46 release lock,
ST 12/02/2008 16:37:46 file=ARCH1;
VB 12/02/2008 16:37:47 ARCHIVED AT 12-02-08 13:34:03
VB 12/02/2008 16:37:47 ARCHIVED FROM ALL AMP DOMAINS
VB 12/02/2008 16:37:47 UTILITY EVENT NUMBER IN ARCHIVE - 770
VB 12/02/2008 16:37:47 RELEASE:13.00.00.00; VERSION:13.00.00.00;
OS 12/02/2008 16:37:47 D "DB1".""
ER 12/02/2008 16:37:47 4 3803 *** Warning 3803:Table 'tb1' already
exists.
VB 12/02/2008 16:37:50 Starting table "tb1"
VB 12/02/2008 16:37:50 Table "tb1" is not a deferred object
TB 12/02/2008 16:37:50 12/02/2008 16:39:07 12/02/2008 16:39:07
436,208,086 "DB1"."tb1"
VB 12/02/2008 16:39:07 Clearing build flag for index 0.
OE 12/02/2008 16:37:47 12/02/2008 16:39:07 D "DB1".""
ST 12/02/2008 16:39:07
ST 12/02/2008 16:39:07 logoff;
EN 12/02/2008 16:39:07 4
RESTART
Purpose
The RESTART parameter specifies that the current operation is a restart of a previous
operation that was interrupted by a client failure.
Syntax
RESTART
Usage Notes
In the case of restart, Teradata ARC uses the SESSIONS parameter that was specified the first time the utility was run. Do not change the input file between the initial execution and restart.
Specify NORESTART to disable the RESTART parameter.
Note: The RESTART runtime parameter only applies to Mainframe systems (z/OS). Restart of
Teradata ARC is no longer supported on network-attached systems.
For further information about dealing with client failures during an archive or recovery
operation, see Chapter 7: "Restarting Teradata ARC."
RESTARTLOG
Purpose
The RESTARTLOG parameter specifies the restart log name for ARCMAIN.
Syntax
RESTARTLOG = filename
RLOG = filename
Usage Notes
The RESTARTLOG runtime option is available only on Windows platforms. (The restart log
name on z/OS platforms cannot be changed by RESTARTLOG.)
Use the RESTARTLOG as follows:
• Teradata ARC adds the extension type RLG to the name of the file specified in RESTARTLOG. Therefore, do not use this extension in the name of the restart log.
• The restart log is created under the current directory or the working directory, if defined, unless the full path is specified.
• If RESTARTLOG is not specified, database DBCLOG is the default name of the restart log on the IBM platform. On other platforms, yymmdd_hhmmss is date and time, and <n> is the process ID in the following example of a default restart log name: ARCLOGyymmdd_hhmmss_<n>.RLG.
• Teradata ARC does not automatically remove the restart log files after the successful completion, therefore clean the files periodically.
• Delimiters depend on the name of the restart log being specified.
• On non-IBM platforms, all restart log names should be unique to prevent unintentional sharing of the restart log among multiple jobs. The restart log contains status information about the ARC job being executed and if multiple jobs specify the same restart log name those jobs could share usage of the same restart log if those jobs are run at the same time. Shared usage of a restart log could lead to unpredictable results since multiple jobs will be updating the same file at the same time. This is particularly important for cluster jobs and multi-stream jobs but would also apply to any set of ARC jobs that are executed at the same time.
Note: The RESTARTLOG runtime parameter only applies to Mainframe systems (z/OS).
Restart of Teradata ARC is no longer supported on network-attached systems.
Restart Log File Size Limit
Even though a Restart of Teradata ARC is only supported on mainframe systems, the Restart
Log is integral to arcmain's operations and is therefore still used by arcmain on all supported
platforms. When using 32-bit versions of arcmain, the 32-bit version of the 'C' library is
compiled into the arcmain executable file and it may limit the size of the files (including the
Restart Log) that arcmain creates to 2GB in size. If the Restart Log exceeds this size, you may
get the following error message when arcmain tries to write another record to the Restart Log:
*** Internal Failure ARC0001:NO DATA ARE WRITTEN TO 'DBCLOG'.
Reaching the 2GB file size limit will cause your arcmain job to abort. The ARC0001 error
message is a generic message that can be generated by many different error conditions. To
determine if the 2GB file size limit is the cause of your particular error, look at the size of your
Restart Log file to see if it has reached the 2GB threshold. The size of the Restart Log is
dependent on how many objects are being processed. Therefore, one work-around to this 2GB
file size limit is to break up your archive/copy/restore job into smaller pieces so that each piece
processes fewer objects, thus requiring a smaller Restart Log. If you are executing a 32-bit
version of arcmain on a 64-bit machine, another work-around is to switch to the 64-bit
version of arcmain, which is compiled with the 64-bit version of the 'C' library. The 64-bit
version of the 'C' library should allow for files larger than 2GB in size. Note that using the 64-bit version of arcmain is not possible on a 32-bit machine.
Example
RESTARTLOG=RESTARTLOG
RLOG='C:\TESTARC\JOB12345'
SESSIONS
Purpose
The SESSIONS parameter specifies the number of Teradata Database sessions that are
available for archive and recovery operations. This number does not include any additional
sessions that might be required to control archive and recovery operations.
Syntax
SESSIONS = nnn
where:
Syntax Element    Definition
nnn               Number of data sessions. The default is 4.
Usage Notes
In general, the amount of system resources (that is, memory and processing power) that are
required to support the archive or recovery increases with the number of sessions. The impact
on a particular system depends on the specific configuration.
Teradata ARC uses at least two (or three) control sessions to control archive and recovery
operations. The LOGON statement always connects these sessions no matter what type of operation is being performed. Teradata ARC connects additional data sessions based on the
number indicated in the SESSIONS parameter. These sessions are required for the parallel
processing that occurs in archive and restore or copy operations.
If additional data sessions are required, Teradata ARC connects them at one time. Teradata
ARC calculates the number of parallel sessions it can use, with maximum available being the
number of sessions indicated with this parameter. Any connected sessions that are not actually
used in the operation result in wasted system resources.
Table 15 indicates the data session requirements for archive and restore operations.
Table 15: Data Session Requirements

Operation                    Data Session Requirements
All-AMPs archive             No more than one session per AMP.
Cluster or specific archive  Calculates the nearest multiple of online AMPs for the operation and assigns tasks evenly among the involved AMPs.
                             For example, if four AMPs are archived with nine sessions specified, Teradata ARC actually uses eight sessions (two per AMP). The extra connected session wastes system resources since it is unused.
Restore/copy                 Uses all the sessions specified with the SESSIONS parameter.
Note: If Teradata Active System Management (TASM) is running on the system where a
Teradata ARC job is running, TASM may override the SESSIONS value that the user has
requested based on system utilization. If this should happen, ARC displays a message similar
to the following:
ARC HAS REQUESTED 14 SESSIONS, TASM HAS GRANTED IT 4 SESSIONS.
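For example, a restore job might request more data sessions than the default; the script file name and logon string are placeholders:

    arcmain SESSIONS=20 LOGON='tdpid/user,password' < restore_db.arc

For a cluster or specific-AMP archive, choosing a multiple of the number of AMPs involved avoids connecting sessions that remain unused.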
STARTAMP
Purpose
The STARTAMP parameter allows specification of a starting AMP for every all-AMP
ARCHIVE job.
Syntax
STARTAMP = ampid
AMP = ampid
where:
Syntax Element    Definition
ampid             Starting AMP for a single all-AMP ARCHIVE job
Usage Notes
Using STARTAMP during multiple all-AMP ARCHIVE jobs can increase archive
performance. Specifying a different AMP for each job minimizes potential file contention.
If STARTAMP is not used, Teradata ARC randomly selects a starting AMP for each archived
table.
Example
STARTAMP=1
or
AMP=1
UEN (Utility Event Number)
Function
The UEN parameter specifies the value that defines %UEN% in a RESTORE/COPY
operation. UEN is automatically generated during an archive.
Syntax
UEN = n
where:
Syntax Element    Definition
n                 Utility event number, for example, UEN=12345
Usage Notes
In a RESTORE/COPY statement, %UEN%, if used, is expanded using the Utility Event
Number specified in UEN.
Example
If FILEDEF is defined as follows in an environment variable or in a default configuration file:
FILEDEF=(archive,ARCHIVE_%UEN%.DAT)
then in the following Teradata ARC script, ARCHIVE is the internal name, and
ARCHIVE_%UEN%.DAT is the external name:
ARCHIVE DATA TABLES (DB), RELEASE LOCK, FILE=ARCHIVE
The final external name of this ARC script is ARCHIVE_1234.DAT, assuming that the UEN is
1234. (The output of this archive displays the UEN.)
Therefore, if a restore is run with UEN=1234 and the above FILEDEF is used, the external
name is ARCHIVE_1234.DAT in the following Teradata ARC script:
RESTORE DATA TABLES (DB), RELEASE LOCK, FILE=ARCHIVE
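To run that restore, the UEN that was reported by the earlier archive is supplied as a runtime parameter; the script file name is a placeholder:

    arcmain UEN=1234 < restore_db.arc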
VERBOSE
Purpose
The VERBOSE parameter directs Teradata ARC progress status information to the standard
output device (for example, SYSPRINT for IBM platforms). The amount of status
information sent is determined by the level specified in the parameter.
Syntax
VERBOSE, VERB, or VB
VERBOSEn, VERBn, or VBn
NOVERBOSE, NOVERB, or NOVB
where:
Syntax Element    Definition
n                 Level of program status information sent to the standard output device, where n represents 2 or 3.
                  VERBOSE without an n value provides the lowest level of status information.
                  The amount of process status information sent to standard output increases with a higher n value.
Usage Notes
Progress status report is displayed on a standard output device. On the IBM mainframe
platform, the progress status is sent to the SYSPRINT DD with other regular Teradata ARC
output lines.
VERBOSE provides three levels of progress status (1 through 3):
• Level 1 is specified as: VERBOSE, VERB, or VB
• Levels 2 and 3 are specified as:
  Level 2 - VERBOSE2, VERB2, or VB2
  Level 3 - VERBOSE3, VERB3, or VB3
While VERBOSE level 1 is designed for normal operation, the other two levels are for
diagnosis and debugging. The performance of Teradata ARC is affected when these higher
VERBOSE levels are used, especially if a low CHECKPOINT value is specified.
The output of VERBOSE is preceded by three dashes (---) for easy identification and filtering
by other utilities.
For ARCHIVE and RESTORE/COPY operations, the amount of data processed in the data
phase is displayed at each checkpoint. Teradata ARC internally takes checkpoints for every n
number of data blocks processed for restart purposes. Therefore, it is possible to control the
frequency of display of status information when VERBOSE is specified by the CHECKPOINT
parameter setting. See “CHECKPOINT” on page 121.
For the RESTORE/COPY operations, each build request is displayed in addition to the data
processed before the request is sent to the Teradata Database. However, the client has no
control of the actual build execution. Therefore, after the request is sent, Teradata ARC waits
for the completion of the build request.
Specify NOVERBOSE to disable the VERBOSE parameter. The default is NOVERBOSE.
Example
Following is sample output that results from specifying VERBOSE level 1.
Note: The organization and contents of informational progress report lines change without
notice with new releases of Teradata ARC. Therefore, do not develop scripts or programs that
exactly copy this example.
07/09/2014 03:25:53  READING DEFAULTS FROM FILE=[SYSPARM]
07/09/2014 03:25:53  Copyright 1989-2014, Teradata Corporation.
07/09/2014 03:25:53  All Rights Reserved.
                     (ARCMAIN ASCII-art banner omitted)
07/09/2014 03:25:53  PROGRAM: ARCMAIN
07/09/2014 03:25:53  RELEASE: 15.10.00.00
07/09/2014 03:25:53  BUILD:   150108I (Jun 19 2014)
07/09/2014 03:25:53  PARAMETERS IN USE:
07/09/2014 03:25:53  SESSIONS 80
07/09/2014 03:25:53  CHECKPOINT FREQUENCY CHANGED TO 5
07/09/2014 03:25:53  VERBOSE LEVEL 1
07/09/2014 03:25:53  NO HALT
07/09/2014 03:25:53  CHARACTER SET IN USE: EBCDIC
07/09/2014 03:25:53  LOGON TDSL/DBC,;
07/09/2014 03:25:54  LOGGED ON 2 SESSIONS
07/09/2014 03:25:55  DBS LANGUAGE SUPPORT MODE Standard
07/09/2014 03:25:55  DBS RELEASE 15.10i.00.181
07/09/2014 03:25:55  DBS VERSION 15.10i.00.181
07/09/2014 03:25:55  STATEMENT COMPLETED
07/09/2014 03:25:55  ARCHIVE DATA TABLES
07/09/2014 03:25:55  (SYSTEMFE),
07/09/2014 03:25:55  RELEASE LOCK,
07/09/2014 03:25:55  FILE = ARC1;
07/09/2014 03:25:56  UTILITY EVENT NUMBER - 35954
07/09/2014 03:26:13  LOGGED ON     80 SESSIONS
07/09/2014 03:26:13  ARCHIVING DATABASE "SYSTEMFE"
07/09/2014 03:26:18  MACRO "AllRestarts" - ARCHIVED
07/09/2014 03:26:18  MACRO "BynetEvents" - ARCHIVED
07/09/2014 03:26:18  VIEW "CleanupQCF" - ARCHIVED
07/09/2014 03:26:18  --- Starting table "CleanupQCFVer"
07/09/2014 03:26:18  --- Table "CleanupQCFVer" is not a spanned object
07/09/2014 03:26:18  --- CHKPT:          2,063 bytes, 28 rows received
07/09/2014 03:26:18  --- Data subtable:  2,063 bytes, 28 rows received
07/09/2014 03:26:18  TABLE "CleanupQCFVer" 2,063 BYTES, 28 ROWS ARCHIVED
07/09/2014 03:26:18  VIEW "CreateQCF" - ARCHIVED
07/09/2014 03:26:18  --- Starting table "CreateQCFVer"
07/09/2014 03:26:18  --- Table "CreateQCFVer" is not a spanned object
07/09/2014 03:26:18  --- CHKPT:          71,286 bytes, 523 rows received
07/09/2014 03:26:18  --- Data subtable:  71,286 bytes, 523 rows received
07/09/2014 03:26:18  TABLE "CreateQCFVer" - 71,286 BYTES, 523 ROWS ARCHIVED
07/09/2014 03:26:18  MACRO "DiskEvents" - ARCHIVED
07/09/2014 03:26:18  MACRO "EventCount" - ARCHIVED
07/09/2014 03:26:18  MACRO "FallBack_All" - ARCHIVED
07/09/2014 03:26:18  MACRO "FallBack_DB" - ARCHIVED
07/09/2014 03:26:18  MACRO "ListErrorCodes" - ARCHIVED
07/09/2014 03:26:18  MACRO "ListEvent" - ARCHIVED
07/09/2014 03:26:18  MACRO "ListRestart_Logon_Events" - ARCHIVED
07/09/2014 03:26:18  MACRO "ListSoftware_Event_Log" - ARCHIVED
07/09/2014 03:26:18  MACRO "MiniCylpacks" - ARCHIVED
07/09/2014 03:26:18  MACRO "NoFallBack_All" - ARCHIVED
07/09/2014 03:26:18  MACRO "NoFallBack_DB" - ARCHIVED
07/09/2014 03:26:18  --- Starting table "opt_cost_table"
07/09/2014 03:26:18  --- Table "opt_cost_table" is not a spanned object
07/09/2014 03:26:18  --- Data subtable:  7,400 bytes, 0 rows received
07/09/2014 03:26:18  TABLE "opt_cost_table" 7,400 BYTES, 0 ROWS ARCHIVED
07/09/2014 03:26:18  --- Starting table "Opt_DBSCtl_Table"
07/09/2014 03:26:18  --- Table "Opt_DBSCtl_Table" is not a spanned object
07/09/2014 03:26:18  --- Data subtable:  620 bytes, 0 rows received
07/09/2014 03:26:18  TABLE "Opt_DBSCtl_Table" - 620 BYTES, 0 ROWS ARCHIVED
07/09/2014 03:26:18  --- Starting table "opt_ras_table"
07/09/2014 03:26:18  --- Table "opt_ras_table" is not a spanned object
07/09/2014 03:26:18  --- Data subtable:  8,646 bytes, 0 rows received
07/09/2014 03:26:18  TABLE "opt_ras_table" 8,646 BYTES, 0 ROWS ARCHIVED
07/09/2014 03:26:18  --- Starting table "Opt_XMLPlan_Table"
07/09/2014 03:26:18  --- Table "Opt_XMLPlan_Table" is not a spanned object
07/09/2014 03:26:18  --- Data subtable:  750 bytes, 0 rows received
07/09/2014 03:26:18  TABLE "Opt_XMLPlan_Table" - 750 BYTES, 0 ROWS ARCHIVED
07/09/2014 03:26:18  MACRO "PackDisk" - ARCHIVED
07/09/2014 03:26:18  MACRO "ReconfigCheck" - ARCHIVED
07/09/2014 03:26:18  --- Starting table "Temp_ReconfigSpace"
07/09/2014 03:26:18  --- Table "Temp_ReconfigSpace" is not a spanned object
07/09/2014 03:26:18  --- Data subtable:  464 bytes, 0 rows received
07/09/2014 03:26:18  TABLE "Temp_ReconfigSpace" - 464 BYTES, 0 ROWS ARCHIVED
07/09/2014 03:26:18  "SYSTEMFE" - LOCK RELEASED
07/09/2014 03:26:18  DUMP COMPLETED
07/09/2014 03:26:18  STATEMENT COMPLETED
07/09/2014 03:26:18  RESTORE DATA TABLES
07/09/2014 03:26:18  (SYSTEMFE),
07/09/2014 03:26:18  RELEASE LOCK,
07/09/2014 03:26:18  FILE = ARC1;
07/09/2014 03:26:18  UTILITY EVENT NUMBER - 35957
07/09/2014 03:26:19  --- ARCHIVED AT 07-09-14 03:26:13
07/09/2014 03:26:19  --- ARCHIVED FROM ALL AMP DOMAINS
07/09/2014 03:26:19  --- UTILITY EVENT NUMBER IN ARCHIVE - 35954
07/09/2014 03:26:19  --- SOURCE RDBMS VERSION: LANGUAGE SUPPORT MODE:Standard;
07/09/2014 03:26:20  --- RELEASE:15.10i.00.181; VERSION:15.10i.00.181;
07/09/2014 03:26:24  STARTING TO RESTORE DATABASE "SYSTEMFE"
07/09/2014 03:26:24  "AllRestarts" - MACRO RESTORED
07/09/2014 03:26:24  "BynetEvents" - MACRO RESTORED
07/09/2014 03:26:24  "CleanupQCF" - VIEW RESTORED
07/09/2014 03:26:24  "CleanupQCFVer" - DICTIONARY ROWS RESTORED
07/09/2014 03:26:24  "CreateQCF" - VIEW RESTORED
07/09/2014 03:26:24  "CreateQCFVer" - DICTIONARY ROWS RESTORED
07/09/2014 03:26:24  "DiskEvents" - MACRO RESTORED
07/09/2014 03:26:24  "EventCount" - MACRO RESTORED
07/09/2014 03:26:24  "FallBack_All" - MACRO RESTORED
07/09/2014 03:26:24  "FallBack_DB" - MACRO RESTORED
07/09/2014 03:26:24  "ListErrorCodes" - MACRO RESTORED
07/09/2014 03:26:24  "ListEvent" - MACRO RESTORED
07/09/2014 03:26:24  "ListRestart_Logon_Events" - MACRO RESTORED
07/09/2014 03:26:24  "ListSoftware_Event_Log" - MACRO RESTORED
07/09/2014 03:26:24  "MiniCylpacks" - MACRO RESTORED
07/09/2014 03:26:24  "NoFallBack_All" - MACRO RESTORED
07/09/2014 03:26:24  "NoFallBack_DB" - MACRO RESTORED
07/09/2014 03:26:24  "opt_cost_table" - DICTIONARY ROWS RESTORED
07/09/2014 03:26:24  "Opt_DBSCtl_Table" - DICTIONARY ROWS RESTORED
07/09/2014 03:26:24  "opt_ras_table" - DICTIONARY ROWS RESTORED
07/09/2014 03:26:24  "Opt_XMLPlan_Table" - DICTIONARY ROWS RESTORED
07/09/2014 03:26:24  "PackDisk" - MACRO RESTORED
07/09/2014 03:26:24  "ReconfigCheck" - MACRO RESTORED
07/09/2014 03:26:24  "Temp_ReconfigSpace" - DICTIONARY ROWS RESTORED
07/09/2014 03:26:25  DICTIONARY RESTORE COMPLETED
07/09/2014 03:26:25  --- Starting table "CleanupQCFVer"
07/09/2014 03:26:26  --- Table "CleanupQCFVer" is not a deferred object
07/09/2014 03:26:26  "CleanupQCFVer" 2,063 BYTES, 28 ROWS RESTORED
07/09/2014 03:26:26  --- Starting table "CreateQCFVer"
07/09/2014 03:26:27  --- Table "CreateQCFVer" is not a deferred object
07/09/2014 03:26:27  "CreateQCFVer" - 71,286 BYTES, 523 ROWS RESTORED
07/09/2014 03:26:27  --- Starting table "opt_cost_table"
07/09/2014 03:26:27  --- Table "opt_cost_table" is not a deferred object
07/09/2014 03:26:27  "opt_cost_table" 7,400 BYTES, 0 ROWS RESTORED
07/09/2014 03:26:27  --- Starting table "Opt_DBSCtl_Table"
07/09/2014 03:26:27  --- Table "Opt_DBSCtl_Table" is not a deferred object
07/09/2014 03:26:27  "Opt_DBSCtl_Table" - 620 BYTES, 0 ROWS RESTORED
07/09/2014 03:26:27  --- Starting table "opt_ras_table"
07/09/2014 03:26:27  --- Table "opt_ras_table" is not a deferred object
07/09/2014 03:26:27  "opt_ras_table" 8,646 BYTES, 0 ROWS RESTORED
07/09/2014 03:26:27  --- Starting table "Opt_XMLPlan_Table"
07/09/2014 03:26:27  --- Table "Opt_XMLPlan_Table" is not a deferred object
07/09/2014 03:26:27  "Opt_XMLPlan_Table" - 750 BYTES, 0 ROWS RESTORED
07/09/2014 03:26:27  --- Starting table "Temp_ReconfigSpace"
07/09/2014 03:26:27  --- Table "Temp_ReconfigSpace" is not a deferred object
07/09/2014 03:26:27  "Temp_ReconfigSpace" - 464 BYTES, 0 ROWS RESTORED
07/09/2014 03:26:29  --- Building fallback subtable for index 0.
07/09/2014 03:26:29  --- Clearing build flag for index 0.
07/09/2014 03:26:29  --- Building fallback subtable for index 0.
07/09/2014 03:26:30  --- Clearing build flag for index 0.
07/09/2014 03:26:30  --- Building fallback subtable for index 0.
07/09/2014 03:26:31  --- Clearing build flag for index 0.
07/09/2014 03:26:31  --- Building fallback subtable for index 0.
07/09/2014 03:26:32  --- Clearing build flag for index 0.
07/09/2014 03:26:33  --- Building fallback subtable for index 0.
07/09/2014 03:26:34  --- Clearing build flag for index 0.
07/09/2014 03:26:34  --- Building fallback subtable for index 0.
07/09/2014 03:26:35  --- Clearing build flag for index 0.
07/09/2014 03:26:35  --- Building fallback subtable for index 0.
07/09/2014 03:26:36  --- Clearing build flag for index 0.
07/09/2014 03:26:36  "SYSTEMFE" - LOCK RELEASED
07/09/2014 03:26:36  STATEMENT COMPLETED
07/09/2014 03:26:36  LOGOFF;
07/09/2014 03:26:36  LOGGED OFF 82 SESSIONS
07/09/2014 03:26:36  STATEMENT COMPLETED
07/09/2014 03:26:36  ARCMAIN TERMINATED WITH SEVERITY 0
WORKDIR
Purpose
The WORKDIR parameter specifies the name of the working directory.
Syntax
WORKDIR = directory path
WDIR = directory path
Usage Notes
When a directory is specified with WORKDIR, the restart log, the output log, the error log, the stack trace dump file, and the hard disk archive files are created in and read from this directory. If WORKDIR is not specified, the current directory is the default working directory.
Delimiters depend on the name of the working directory being specified.
Note: For information on file size limits, see "Restart Log File Size Limit" on page 153.
Example
WORKDIR=C:\TESTARC
WORKDIR=’C:\TEST ARC’
CHAPTER 5
Return Codes and UNIX Signals
This chapter describes these topics:
• Return Codes
• UNIX Signals
• Stack Trace Dumps
Return Codes
Teradata ARC provides a return (or condition) code value to the client operating system
during termination logic. The return code indicates the final status of a task.
Return code integers increase in value as the events causing them increase in severity, and the
presence of a return code indicates that at least one message that set the return code is printed
in the output file. Any number of messages associated with a lesser severity might also be
printed in the output file. For more information about error messages, see Messages.
Table 16 lists the Teradata ARC return codes, their descriptions, and the recommended
solution for each.
Table 16: Return Codes

Return Code 0
  Description: A normal termination of the utility.

Return Code 4
  Description: At least one warning message is in the output listing.
  Recommended Action: Review the output to determine which warnings and possible consequences are printed.

Return Code 8
  Description: At least one nonfatal error message is in the output listing. A nonfatal error normally indicates that a request could not be met. A nonfatal error does not adversely affect other requests nor does it terminate task execution.
  Recommended Action: Review the output to determine which errors are listed, and then deal with each error accordingly.

Return Code 12
  Description: A fatal error message is in the output listing. A fatal error terminates execution.
  Recommended Action: Restarting the task might be possible.

Return Code 16
  Description: An internal software error message is in the output listing. An internal error terminates execution.
  Recommended Action: Review the output to determine which error occurred and its cause. Gather as much information as possible (including the output listing), and contact Teradata customer support.
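On network-attached clients, the return code can be tested by the calling script after ARCMAIN exits. The following sketch assumes a POSIX shell and a placeholder script name; on z/OS the condition code would instead be tested with JCL COND or IF/THEN logic:

    arcmain < archive_db.arc
    rc=$?
    if [ $rc -ge 8 ]; then
        echo "Teradata ARC job ended with return code $rc"
    fi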
Table 17 lists the Teradata Database error message numbers, their associated severity levels,
and the associated return codes.
The six error messages listed as NORMAL severity are restartable errors that indicate the
Teradata Database failed. Teradata ARC waits for the Teradata Database to restart before it
automatically resubmits or restarts any affected requests or transactions.
Any other message numbers not shown in this table are treated as FATAL.
Table 17: Database Error Messages
Database Error Message Numbers
2825
2828
3120
2826
3119
3897
2123
2921
4321
7485
8271
2631
2971
4322
7486
8274
2639
2972
4323
7535
8295
2641
3111
4324
8234
8297
2654
3598
5312
8258
9012
2805
3603
5419
8261
9017
2835
3803
5512
8267
9019
2837
3804
5588
8268
9020
2838
3805
5729
8269
9021
2840
4320
6933
8270
9136
2815
3523
3656
3807
3916
2830
3524
3658
3824
3933
2897
3531
3706
3853
5310
2920
3566
3737
3873
8232
2926
3613
3802
3877
8298
2538
2843
8230
8246
8252
2541
2866
8231
8247
8253
2644
2868
8242
8248
8254
2801
3596
8243
8249
2809
8228
8244
8250
2828
8229
8245
8251
None
Severity Level
Return Code
NORMAL
0
WARNING
4
ERROR
8
FATAL
12
INTERNAL
16
Note: The DBSERROR command line parameter allows the specification of a different
severity level for any of the Teradata Database error codes listed above. See “DBSERROR” on
page 126 for more details.
Table 18 lists the error message numbers that are generated on the client side, their associated
severity levels, and associated return codes.
Table 18: Client-Generated Error Messages
Client Generated Error Message Number (ARCxxxx)
5
113
1003
1025
1218
1249
1278
6
118
1004
1026
1223
1250
1401
12
120
1005
1027
1225
1251
1402
13
122
1006
1029
1226
1254
1403
18
126
1010
1031
1228
1256
1404
20
225
1011
1032
1232
1257
1501
22
707
1012
1033
1235
1263
1800
101
708
1014
1034
1239
1266
1801
103
721
1015
1036
1240
1267
1803
104
725
1016
1201
1241
1269
1804
106
726
1022
1213
1242
1272
1805
107
809
1023
1214
1244
1274
1900
111
1001
1024
1217
1245
1277
2202
108
729
1030
1227
1407
2102
119
730
1035
1248
1408
2103
123
802
1200
1268
1409
2104
715
1013
1202
1273
1410
2107
727
1017
1206
1406
2100
2201
Severity Level
Return Code
WARNING
4
ERROR
8
728
Table 18: Client-Generated Error Messages (continued)
Client Generated Error Message Number (ARCxxxx)
3
109
216
243
800
1219
1275
4
110
217
244
801
1220
1276
7
112
218
245
803
1221
1279
8
114
219
246
804
1222
1280
9
115
220
247
805
1230
1400
10
116
221
600
806
1231
1405
11
117
222
601
807
1233
1500
14
121
223
700
808
1234
1802
15
124
224
701
1000
1236
1806
16
125
226
702
1002
1237
1901
17
127
227
704
1018
1238
2000
19
200
228
705
1019
1243
2001
21
201
229
710
1020
1246
2002
23
202
230
711
1021
1247
2003
24
204
231
712
1028
1252
2004
25
205
232
713
1037
1253
2101
26
207
233
714
1201
1255
2105
27
208
234
716
1203
1258
2106
28
209
235
717
1204
1259
2200
29
210
236
718
1207
1260
2203
30
211
237
719
1208
1261
2204
31
212
238
720
1209
1262
2205
32
213
239
722
1210
1264
2206
100
214
240
723
1211
1265
102
215
241
724
1215
1270
242
731
1216
1271
105
1
206
Severity Level
Return Code
FATAL
12
INTERNAL
16
UNIX Signals
UNIX signals are predefined messages sent between two UNIX processes to communicate the
occurrence of unexpected external events or exceptions.
Knowledge of UNIX signals is important when running Teradata ARC on a UNIX operating system. The Teradata ARC UNIX signals are not supported in modules or routines written for use with Teradata ARC; when one of these signals is received, Teradata ARC terminates with a return code of 16.
Teradata ARC uses these UNIX signals:
• SIGILL (illegal signal)
• SIGINT (interrupt signal)
• SIGSEGV (segmentation violation signal)
• SIGTERM (terminate signal)
Stack Trace Dumps
Certain error conditions will now automatically generate a Stack Trace Dump file under the
Windows and Linux platforms. A Stack Trace Dump will no longer be available for mainframe
platforms. The error conditions that can generate a Stack Trace Dump file will include Access
Violation errors for Windows and SIGSEGV errors for Linux. Other error conditions may also
generate a Stack Trace Dump.
The error condition that generated the Stack Trace Dump will be identified in the Stack Trace
Dump file.
The name of the Stack Trace Dump file will have the following format:
Arcmain_Crash_<pid>.txt
The Stack Trace Dump file is a separate file that is created in the 'local' directory where the
user is located when executing ARCMAIN or in the 'working' directory, if one is defined.
The Stack Trace Dump file contains stack trace debugging information and is for internal use
only.
Usage Notes
Windows
To help generate the Stack Trace Dump file on Windows platforms, another file must now be
installed in the \bin directory along with arcmain.exe. This new file is arcmain.pdb.
In earlier versions of ARC on Windows platforms, an Access Violation error would generate
the following output:
12/02/2010 09:55:25
*** Internal Failure ARC0001:UNEXPECTED
TERMINATION,REASON=ACCESS VIOLATION at address
00533412, accessing memory at 0053c636
12/02/2010 09:55:25 --- Disconnecting all sessions.
12/02/2010 09:55:25 LOGGED OFF
7 SESSIONS
================================================================
Execution traceback not available on this platform.
================================================================
12/02/2010 09:55:25 ARCMAIN TERMINATED WITH SEVERITY 16
If a Stack Trace Dump is generated, ARC will now display the following:
12/02/2010 09:55:25
*** Internal Failure ARC0001:UNEXPECTED
TERMINATION,REASON=ACCESS VIOLATION at address
00535292, accessing memory at 0053e636
12/02/2010 09:55:25 --- Disconnecting all sessions.
12/02/2010 09:55:25 LOGGED OFF
7 SESSIONS
================================================================
Stack trace dump file generated: Arcmain_Crash_2788.txt.
================================================================
12/02/2010 09:55:25 ARCMAIN TERMINATED WITH SEVERITY 16
If a Stack Trace Dump is not generated, ARC will now display the following:
12/02/2010 09:55:25
*** Internal Failure ARC0001:UNEXPECTED
TERMINATION,REASON=ACCESS VIOLATION at address
00533412, accessing memory at 0053c636
12/02/2010 09:55:25 --- Disconnecting all sessions.
12/02/2010 09:55:25 LOGGED OFF
7 SESSIONS
================================================================
Execution traceback not available.
================================================================
12/02/2010 09:55:25 ARCMAIN TERMINATED WITH SEVERITY 16
Windows Example
On Windows platforms, the contents of the Stack Trace Dump file (Arcmain_Crash_2788.txt) look like the following:
Stack Dump file:
Arcmain_Crash_2788.txt:
UNEXPECTED TERMINATION, REASON=ACCESS VIOLATION at address 00535292,
accessing memory at 0053e636
Registers:
EAX:0053E636 EBX:7FFDE000 ECX:00000030 EDX:0053E65B
ESI:01CB9249 EDI:E1C5FAC4 EBP:0010C6F4 ESP:0010C44C Flags:00010206
CS:EIP:001B:00535292 DS:0023 ES:0023 FS:003B GS:0000 SS:0023
1: 00535292 UTXPrint+1F2 c:\...\utprint.c:230
2: 00483984 SetObjHasDataFlag+84 c:\...\dmpsvc1.c:777
3: 004823A4 GetTBIDs+1C4 c:\...\dmpsvc1.c:2084
4: 004810BC DMDICT+D6C c:\...\dmpsvc1.c:1758
5: 0050CC95 SHStHndl+415 c:\...\shsthndl.c:377
6: 004EB9FE DMPhase1+DE c:\...\shdump.c:1615
7: 0050CC95 SHStHndl+415 c:\...\shsthndl.c:377
8: 004EDDF0 DMObject+480 c:\...\shdump.c:2485
9: 004ECBC7 SHDump+627 c:\...\shdump.c:2991
10: 00475921 CTCMain+12F1 c:\...\ctcmain.c:1775
11: 0046CA15 main+2D5 c:\...\arcmain.c:195
12: 00539336 __tmainCRTStartup+1A6 f:\...\crtexe.c:586
13: 0053917D mainCRTStartup+D f:\...\crtexe.c:403
14: 7C817067 RegisterWaitForInputIdle+49 Unknown file
Each line in the Stack Dump file contains the following information:
1: 00535292 UTXPrint+1F2 c:\...\utprint.c:230

1) '1:'                -- line item number
2) '00535292'          -- memory address of last line executed in current routine
3) 'UTXPrint+1F2'      -- routine name plus memory offset into the routine
4) 'c:\...\utprint.c'  -- full path of the module containing the routine
5) ':230'              -- line offset within the module
Linux
In earlier versions of ARC on the Linux platform, an illegal memory access error would generate
the following output:
12/03/2010 11:42:48
*** Internal Failure ARC0001:OpsSHndl:An illegal
memory access was attempted
12/03/2010 11:42:48 LOGGED OFF
5 SESSIONS
================================================================
Execution traceback not available on this platform.
================================================================
12/03/2010 11:42:48 ARCMAIN TERMINATED WITH SEVERITY 16
If a Stack Trace Dump is generated, Teradata ARC displays the following:
12/03/2010 09:39:37
*** Internal Failure ARC0001:OpsSHndl:An illegal
memory access was attempted
12/03/2010 09:39:37 LOGGED OFF
5 SESSIONS
================================================================
Stack trace dump file generated: Arcmain_Crash_22434.txt.
================================================================
12/03/2010 09:39:37 ARCMAIN TERMINATED WITH SEVERITY 16
If a Stack Trace Dump is not generated, Teradata ARC displays the following:
12/03/2010 09:39:37
*** Internal Failure ARC0001:OpsSHndl:An illegal
memory access was attempted
12/03/2010 09:39:37 LOGGED OFF
5 SESSIONS
================================================================
Execution traceback not available.
================================================================
12/03/2010 09:39:37 ARCMAIN TERMINATED WITH SEVERITY 16
Linux Example
On Linux platforms, the contents of the Stack Trace Dump file (Arcmain_Crash_22434.txt) look like the following:
Stack Dump file:
Arcmain_Crash_22434.txt:
Signal Kind 11 [Segmentation fault]
Stack Trace:
[bt]: (1) /lib64/libc.so.6 [0x2aaaaabf6fc0]
[bt]: (2) arcmain(RstData+0xdd) [0x460b7d]
[bt]: (3) arcmain(MiniARC_RestoreTable+0x1d0) [0x426b20]
[bt]: (4) arcmain(MiniARC_GoRestore+0x250) [0x427120]
[bt]: (5) arcmain(SHMiniARC+0x532) [0x429762]
[bt]: (6) arcmain(CTCMain+0xfec) [0x40e87c]
[bt]: (7) arcmain(main+0x9) [0x40c3f9]
[bt]: (8) /lib64/libc.so.6(__libc_start_main+0xf4) [0x2aaaaabe4304]
[bt]: (9) arcmain(strcat+0x52) [0x40c34a]
Each line in the Stack Dump file contains the following information:
[bt]: (2) arcmain(RstData+0xdd) [0x460b7d]

1) '[bt]:'            -- backtrace
2) '(2)'              -- line item number
3) 'arcmain'          -- name of the program
4) '(RstData+0xdd)'   -- routine name plus memory offset into the routine
5) '[0x460b7d]'       -- memory address of last line executed in current routine
CHAPTER 6
Archive/Recovery Control Language
The syntax of Teradata ARC control language is similar to Teradata SQL, that is, each
statement is composed of words and delimiters and is terminated by a semicolon. Teradata
ARC writes each statement (and a response to each statement) to an output file.
Note: For simplified descriptions of the Teradata ARC control language, refer to the Archive/
Recovery chapter of the Teradata Tools and Utilities Command Summary, which provides only
syntax diagrams and a brief description of the syntax variables for each Teradata client utility.
The Teradata ARC control statements perform these types of activities:
• Session statements begin and end utility tasks
• Archive statements perform archive and restore activities, and build indexes
• Recovery statements act on journals, and rollback or rollforward database or table activity
Note: Starting with TTU 15.10.00.00, the use of Permanent Journal tables is deprecated by the
Teradata Archive/Recovery Utility. This includes all statements and options that are related to
Journal tables that are mentioned in this Manual.
Table 19 summarizes Teradata ARC statements. For more detail, refer to the sections that
follow the table.
Table 19: Summary of Teradata ARC Statements

Statement                   Activity         Function
ANALYZE                     Archive          Reads an archive tape and displays content information
ARCHIVE                     Archive          Archives a copy of database content to a client resident file in the specified format.
                                             Beginning with TTU 13.00.00, there is support for archiving all objects as individual objects.
BUILD                       Archive          Builds indexes for data tables and fallback data for fallback tables
CHECKPOINT                  Recovery         Marks a journal table for later archive or recovery activities
COPY                        Archive          Recreates copies of archived databases or objects; also restores archived files from one system to another.
                                             Beginning with TTU 13.00.00, there is support for copying all objects as individual objects.
                                             Note: Triggers cannot be copied.
DELETE DATABASE             Miscellaneous    Deletes a partially restored database
DELETE JOURNAL              Recovery         Deletes a saved or restored journal
LOGDATA                     Security         Specifies any additional logon or authentication data required by an extended security mechanism
LOGGING ONLINE ARCHIVE OFF  Archive          Ends the online archive process
LOGGING ONLINE ARCHIVE ON   Archive          Begins the online archive process
LOGMECH                     Security         Sets the extended security mechanism used by Teradata ARC to log onto a Teradata Database
LOGOFF                      Session Control  Ends a Teradata session and terminates Teradata ARC
LOGON                       Session Control  Begins a Teradata session
QUIT                        Session Control  See LOGOFF
RELEASE LOCK                Miscellaneous    Releases a utility lock on a database or table
RESTORE                     Archive          Restores objects from an archive file to specified AMPs.
                                             Beginning with TTU 13.00.00, there is support for restoring all objects as individual objects.
REVALIDATE REFERENCES FOR   Archive          Validates referenced indexes that have been marked inconsistent during restore
ROLLBACK                    Recovery         Restores a database or tables to a state that existed before some change
ROLLFORWARD                 Recovery         Restores a database or tables to its (their) state following a change
SET QUERY_BAND              Miscellaneous    Provides values for a session to precisely identify the origins of a query
ANALYZE
Purpose
The ANALYZE statement reads an archive tape and displays information about its content.
Syntax
ANALYZE { ALL | (dbname) | (dbname1) TO (dbname2) } , FILE = name
    [ , CATALOG ]
    [ , DISPLAY [ LONG ] ]
    [ , VALIDATE ] ;
where:
Syntax Element          Description
ALL *                   Analyzes all databases in the archive file
CATALOG                 Generates/rebuilds the CATALOG table in the CATALOG database.
                        See CATALOG in Chapter 4: "Runtime Parameters".
(dbname)                Name of the database to analyze
(dbname1) TO (dbname2)  Alphabetical range of client databases.
                        The delimiting database names need not identify actual databases.
                        Database DBC is not included as part of the range.
DISPLAY                 Displays information about the identified databases
FILE                    Analyzes a file
LONG                    Displays the names of all objects within the specified databases
name                    Name of the archive data set to analyze
VALIDATE                Reads each archive record for the specified database to check that each block on the file can be read
Teradata Archive/Recovery Utility Reference, Release 16.00
177
Chapter 6: Archive/Recovery Control Language
ANALYZE
Usage Notes
The ANALYZE statement provides the following information about the specified databases:
• Time and date the archive operation occurred.
• Whether the archive is of all AMPs, clusters of AMPs, or specific AMPs.
• The name of each database, the name of each object in each database, and the fallback status of the tables (if the LONG option is specified).
• The number of bytes and rows in each table (if the LONG or VALIDATE option is specified).
• If an archive file contains a selected partition archive of a table, the bounding condition used to select the archived partitions is displayed with an indication as to whether the bounding condition is well-defined.
• Online logging information.
• User-Defined Method (UDM) and User-Defined Type (UDT) information.
• If the archive file contains Stat Collection DDL, the DDL can be displayed by specifying the DISPLAY LONG option.
The DISPLAY and VALIDATE options can both be specified in a single ANALYZE statement.
The default is DISPLAY.
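For example, the following request (the database range and the archive file name are placeholders) reads the archive, lists the objects in the specified databases, and verifies that every block of the file can be read:
ANALYZE (SYSDBA) TO (TESTDBA),
DISPLAY LONG,
VALIDATE,
FILE=ARCHIVE;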
Alternate Character Set Support
The ANALYZE statement requires a connection to the Teradata Database when object names
are specified in internal RDBMS hexadecimal format (’<...>’XN) on the input statement,
because Teradata ARC cannot normally accept object names in internal RDBMS format.
Although the ANALYZE statement does not typically require a connection to the Teradata
Database, under this circumstance the object names in internal hexadecimal must be
translated by the Teradata Database into the external client format (X’...’) to be used by the
ARCMAIN program. This connection is done by the LOGON statement.
Failure to make a connection to the Teradata Database results in the object name being
skipped by the ANALYZE operation, which produces the following warning message:
ARC0711 DBS connection required to interpret object name. %s skipped.
In addition, when the HEX runtime parameter is executed, the HEX display of object names
from the ANALYZE statement is in the external client format (X’...’). This is because object
names in ANALYZE statements may contain “hybrid” EBCDIC and kanji EUC, or EBCDIC
and kanji ShiftJIS single-byte and multibyte characters, which cannot be translated into
internal RDBMS format, but can be translated into external client format.
Note: Hybrid object names result when an object is archived that was created:
• In the KANJIEUC or KANJISJIS character set from a KANJIEBCDIC mainframe client
• Without using the hexadecimal representation of the object being archived
In such a circumstance, the Teradata Database cannot properly translate the character set and
produces a hybrid object name composed of single byte characters from the EBCDIC
character set and multibyte characters from the creating character set (UNIX OS or DOS/V).
The hybrid object name cannot be understood by the Teradata Database.
ARCHIVE
Purpose
The ARCHIVE statement archives a copy of a database or table to some type of portable
media. Use ARCHIVE to extract data from a Teradata Database. Use the COPY or RESTORE
statement to import data to a Teradata Database.
DUMP is an alias of ARCHIVE.
Note: Beginning with TTU 13.00.00, there is support for archiving all objects as individual objects.
Syntax
ARCHIVE { DATA | DICTIONARY | NO FALLBACK | JOURNAL } { TABLES | TABLE }
   { (dbname) [ ALL ] | (dbname.tablename) } [ (options) ] [ , ... ]
   [ , EXCLUDE { (xdbname) [ ALL ] | (xdbname1) TO (xdbname2) } [ , ... ] ]
   [ , CLUSTERS = nnn [ , ... ] ]   (up to 4096 clusters)
   [ , AMP = n [ , ... ] ]          (up to 5 AMPs)
   [ , RELEASE LOCK ]
   [ , FORCED ]
   [ , INDEXES ]
   [ , ABORT ]
   [ , ONLINE ]
   [ , NOSYNC ]
   [ , KEEP LOGGING ]
   [ , USE [ GROUP ] READ LOCK ]
   [ , SKIP JOIN INDEX ]
   [ , SKIP STAT COLLECTION ]
   [ , NONEMPTY DATABASES | , NONEMPTY DATABASE ]
   [ , FILE = name ] ;
Options, specified in parentheses after an object, are one or both of:
   (EXCLUDE { TABLE | TABLES } (xtablename | xdbname.xtablename [ , ... ]))
   (PARTITIONS WHERE (! conditional expression !))
where:
Syntax Element
Description
DATA TABLE or DATA
TABLES
Archives fallback, non-fallback, or both types of tables from all AMPs or from clusters of
AMPs
DICTIONARY TABLE or
DICTIONARY TABLES
Archives only the dictionary rows of an object
A dictionary archive of a database includes the definition of all objects in that database,
including the dictionary entries for stored procedures. If an individual object is specified,
then the archive only includes the definition for that object.
NO FALLBACK TABLE or
NO FALLBACK TABLES
Archives non-fallback tables from specific AMPs to complete a previous all-AMPs or cluster
archive taken with processors offline
JOURNAL TABLE or
JOURNAL TABLES
Archives dictionary rows for the journal table and the saved subtable of the journal table
(dbname)
Name of the database from which tables are archived
This causes all tables of the specified type in the database to be archived.
ALL
Archives all tables from the named database and its descendants, in alphabetical order
(dbname.tname)
Name of a table within the named database to archive
All tables must be distinct within the list. Duplicate tables are ignored.
EXCLUDE
Prevents the listed databases from being archived
Note that individual objects cannot be excluded using this option. Individual objects can be
excluded using the EXCLUDE TABLE(S) option after each database. See more in
EXCLUDE TABLE, below.
(xdbname)
Name of a database to exclude
ALL
Excludes the named database and its descendants
(xdbname1) TO (xdbname2)
Alphabetical list of client databases to exclude
The delimiting database names need not identify actual databases. Database DBC is not
included within a range.
EXCLUDE TABLE or
EXCLUDE TABLES
Prevents individual objects in the listed database from being archived
(xtablename)
Name of an individual object in the designated database to exclude.
Multiple objects are separated by commas.
This form (without a database name prefix) is used only when ALL has not been specified
for the current object.
(xdbname.xtablename)
List of fully-qualified objects (prefixed by database name) to exclude.
PARTITIONS WHERE
Specifies the conditional expression for selecting partitions
If the condition selects a partial partition, the entire partition is archived.
(!conditional expression!)
Conditional expression for specifying selected partitions.
CLUSTERS = nnn or
CLUSTER = nnn
Specifies AMP clusters to archive, where nnn is the number of the clusters, up to 4096
clusters.
AMP = n
Specifies the AMP (or a list of AMPs) to archive for Teradata Database, where n is the
number of the AMP, up to five AMPs.
RELEASE LOCK
Releases utility locks on the identified objects when the archive operation completes
successfully.
Notice: Releasing a HUT lock while another Teradata ARC job is running could result in
data corruption and/or unexpected errors from ARCMAIN or the Teradata
Database. Refer to “RELEASE LOCK Keyword” on page 186 for more information.
FORCED
Instructs Teradata ARC to try and release any placed locks if the archive fails.
Note: The HUT lock is not guaranteed to be released in all cases. The following cases will
result in a leftover lock:
• If Teradata ARC is forcibly aborted by the user or operating system.
• If communication to the Teradata Database is lost and cannot be reestablished.
• If an internal failure occurs in Teradata ARC, such that program control cannot
proceed to, or complete, the release lock step.
INDEXES
Archives data blocks used for secondary indexing.
This option is not available for archiving selected partitions of PPI tables, although it is
allowed for a full-table archive of a PPI table.
ABORT
Aborts an all-AMPs or cluster operation if an AMP goes offline during the archive of non-fallback tables or single-image journal tables
ONLINE
Initiates an online archive on a table or database
NOSYNC
Enables online archive on all tables involved in the archive but does not require all tables to
be enabled before access to those tables is allowed. Tables that were blocked during the
initial enabling request will be retried as individual requests and will therefore have
different sync points. As each table or group of tables is enabled for online logging, it will
become available for access by other jobs.
KEEP LOGGING
Overrides the automatic termination of an online archive that was initiated with the
ONLINE option
USE READ LOCK or USE
GROUP READ LOCK
Applies a read or group HUT lock on the entity being archived
This option is available only when archiving data tables.
Specify the GROUP parameter only when archiving all AMPs or cluster AMPs. The
GROUP keyword is rejected in a specific AMP archive operation.
If the GROUP parameter is specified during an archive, that archive is unusable if restoring
to a system that has a different hash function or configuration than the source system.
Therefore, perform an archive with no GROUP parameter prior to a major release upgrade
or reconfiguration.
To use the GROUP READ LOCK option, after-image journaling must be enabled for the
tables.
NONEMPTY DATABASES
or NONEMPTY DATABASE
Excludes any empty users or databases from the ARCHIVE operation
FILE
Names an output archive file
This option works with all types of archives. If a journal is archived, databases without a
journal are excluded.
Specifying this option twice in the same ARCHIVE statement is supported for mainframe
platforms. If the option is specified twice, Teradata ARC generates two identical archive
files concurrently.
Note: For all Windows and Linux platforms, only one FILE option is supported per
ARCHIVE statement.
Notice:
If more than one ARCHIVE statement is specified in a script, the value for the
FILE option should be different for each statement. Otherwise, the file created
by the second ARCHIVE statement will overwrite the file created by the first
ARCHIVE statement.
name
Name of the output archive file
SKIP JOIN INDEX
Skip the archive of join and hash indexes
SKIP STAT COLLECTION
Skip the archive of Stat Collections
Access Privileges
To archive a database or a table in a database, the user named in the LOGON statement must
have one of the following:
•
The DUMP privilege on the database or table to archive
•
Ownership of the database or table
To archive database DBC, the user name must either be “User DBC” or be granted the DUMP
and SELECT privileges by user DBC.
To archive a journal table, the user name must have the DUMP privilege on one of the
following:
•
The journal table itself
•
The database that contains the journal table
To archive a UIF object, the user named in the LOGON statement must have:
• The DUMP privilege on the database to be archived
Note: You cannot grant the DUMP privilege on the UIF object itself, but with the database-level DUMP privilege you can do a table-level archive of the individual UIF object.
Archiving Process
Database SYSUDTLIB is linked with database DBC and is only archived if DBC is archived.
SYSUDTLIB cannot be specified as an individual object in an ARCHIVE statement.
If database DBC is involved in an archive operation, it is always archived first, followed by
database SYSUDTLIB. Any additional databases that are being archived follow SYSUDTLIB in
alphabetical order.
By default, Teradata ARC counts output sectors and output rows. The row count is the
number of primary data rows archived when the DATA or NO FALLBACK option is specified.
Both counts are included in the output listing.
Archives have these limitations:
• When using the multi-stream feature, a dictionary-only archive must be run as a single-stream job.
• Journal and data (or dictionary) tables cannot be stored on the same archive file.
• When a journal table is archived, the saved subtable portion of the journal is archived. If two archives are performed without doing a CHECKPOINT WITH SAVE operation in between, the two archive files are identical.
• If an archive is created when the saved subtable is empty, the archive file contains only control information used by Teradata ARC.
Archive Messages
During an archive of a Teradata Database, Teradata ARC issues messages in the format shown
below as it archives different types of objects. The format of the byte and row counts agree
with the format of these counts in the restore or copy output.
FUNCTION "UDFname" - n,nnn,nnn BYTES, n,nnn,nnn ROWS ARCHIVED
JOURNAL "Journalname" - n,nnn,nnn BYTES ARCHIVED
MACRO "macroname" - ARCHIVED
METHOD "methodname" - ARCHIVED
METHOD "methodname" - n,nnn,nnn BYTES, n,nnn,nnn ROWS ARCHIVED
PROCEDURE "procedurename" - n,nnn,nnn BYTES, n,nnn,nnn ROWS ARCHIVED
TABLE "tablename" - n,nnn,nnn BYTES, n,nnn,nnn ROWS ARCHIVED
TRIGGER "triggername" - ARCHIVED
TYPE "UDTname" - ARCHIVED
VIEW "viewname" - ARCHIVED
When the objects are archived, Teradata ARC issues this message:
DUMP COMPLETED
At the start of each database archive operation or after a system restart, the output listing
shows all offline AMPs.
If an archive file contains a selected partition archive of a table, a message reports the
bounding condition that was used to select the archived partitions and whether the bounding
condition is well-defined.
If online logging is initiated for one or more tables during an archive, online logging
information is displayed for each of the tables in the output listing.
Archive Results
After each archive operation, review the listing to determine whether any non-fallback tables
were archived while AMPs were offline. If any AMPs were offline during the archive, create a
specific AMPs archive for those tables. When the offline AMPs are brought back online, use
this archive to complete the operation.
Teradata ARC does not completely archive data tables that are in the process of being restored
or data tables that are involved in a load operation (by FastLoad or MultiLoad). If tables are
being restored or loaded, Teradata ARC archives only enough data to allow tables to be
restored empty and issues a message.
Archive Limitation
Tables with unresolved referential integrity constraints cannot be archived. An unresolved
constraint occurs when a CREATE TABLE (child) statement references a table (parent) that
does not exist. Use the SQL command CREATE TABLE to create the parent (that is,
referenced) table and resolve these constraints. For more information, refer to the Introduction
to Teradata.
Using Keywords with ARCHIVE
ARCHIVE is affected by the following keywords. For information about keywords that are
specific to archiving selected partitions in a PPI table, see “Archiving Selected Partitions of PPI
Tables” on page 189.
AMP Keyword
With Teradata Database, use the AMP=n option to specify up to five AMPs. If a specified
processor is not online when the archive is for a particular database or table, Teradata ARC
does not archive any data associated with that AMP.
This option archives individual processors and is useful for following up an all-AMPs or
cluster archive when the AMPs were offline. Use this option only with the NO FALLBACK
TABLES or JOURNAL TABLES option. Specify the option for (database DBC) ALL to
automatically exclude database DBC.
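For example, assuming the AMP numbers are given as a comma-separated list (as the repetition in the syntax indicates), a follow-up archive of non-fallback tables from two previously offline AMPs might look like this; the database, AMP numbers, and file name are placeholders:
ARCHIVE NO FALLBACK TABLES (SYSDBA) ALL,
AMP=3,4,
RELEASE LOCK,
FILE=AMPARC;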
CLUSTER Keyword
When using the CLUSTER keyword, maintain a dictionary archive for the same databases,
stored procedures, or tables. Cluster archiving improves archive and recovery performance of
very large tables and simplifies specific AMP restoring of non-fallback tables.
A complete cluster archive can restore all databases, stored procedures, and tables if it
includes:
•
A dictionary archive of the databases, tables, or stored procedures
•
A set of cluster archives
•
A set of specific AMP archives (if archiving non-fallback tables and AMPs were offline
during the cluster archive) so all data is contained in archive files
Create a dictionary archive whenever the tables in the cluster archive might be redefined.
Although it is necessary to generate a dictionary archive only once if tables are not redefined,
make a dictionary archive regularly in case an existing dictionary archive is damaged.
If a cluster archive is created over several days and a table is redefined in that archive, create an
all-AMPs archive of the changed tables. Otherwise, the table structure might differ between
archive files.
When archiving journal tables, archive the saved subtable created by a previous checkpoint.
Consequently, it is not necessary to use locks on the data tables that are using the journal table.
The Teradata Database places an internal lock on the saved journal subtable to prevent a
concurrent deletion of the subtable.
The following rules govern the CLUSTER keyword:
•
The option is allowed only when archiving data tables.
•
A dictionary archive must be performed before a cluster archive.
•
If any AMPs in the clusters are offline, archive non-fallback tables from those AMPs when
they are brought back online.
•
Utility locks are placed and released at the cluster level.
The dictionary archive contains data stored in the dictionary tables of the Teradata Database
for the different objects being archived. Do the following to ensure that the table data in the
cluster archive matches the dictionary definition in the dictionary archive:
•
Archive clusters on tables with definitions that never change.
•
Create the dictionary archive immediately prior to the cluster archives, without releasing
Teradata ARC locks. Then initiate a set of cluster archive requests covering all AMPs.
Do not archive database DBC in cluster archive operations. However, specifying DBC (ALL) is
supported. In this case, database DBC is excluded automatically by Teradata ARC.
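For example, assuming cluster numbers are given as a comma-separated list and that the placement of RELEASE LOCK follows the site's locking strategy, a dictionary archive followed by a set of cluster archives covering all AMPs might be scripted as follows (database, cluster numbers, and file names are placeholders):
ARCHIVE DICTIONARY TABLES (SYSDBA) ALL, FILE=DICTARC;
ARCHIVE DATA TABLES (SYSDBA) ALL, CLUSTERS=0,1,2, FILE=CLSTARC1;
ARCHIVE DATA TABLES (SYSDBA) ALL, CLUSTERS=3,4,5, RELEASE LOCK, FILE=CLSTARC2;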
RELEASE LOCK Keyword
Notice:
Releasing a HUT lock while another Teradata ARC job is running could result in data
corruption and/or unexpected errors from ARCMAIN or the Teradata Database.
This can occur if a HUT lock is explicitly released during an ARC job, or if multiple jobs are
run concurrently on the same object with the same Teradata user, with each job specifying the
RELEASE LOCK option. For example, a database-level archive, with one or more tables
excluded, is run concurrently with a table-level archive of the excluded tables. In this situation,
the RELEASE LOCK for the database-level job will release the table-level locks if the job
completes first.
Use different Teradata users for each archive job when running concurrent archives against the
same objects. This ensures that locks placed by one job are not released by another.
For the purposes of locking, a set of cluster archive jobs, with each job archiving different
clusters, counts as only a single job running against the object. If all concurrent jobs running
on an object archive different sets of clusters, it is acceptable to use the same Teradata user for
each of the cluster archive jobs.
INDEXES Keyword
Specifying the INDEXES keyword increases the time and media required for an archive. This
option does not apply to journal table archives. The option is ignored for cluster and specific
AMP archives.
The INDEXES keyword usually causes Teradata ARC to archive unique and non-unique
secondary indexes on all data tables. However, if an AMP is offline, Teradata ARC only
archives unique secondary indexes on fallback tables. Therefore, for the INDEXES keyword to
be most effective, do not use it when AMPs are offline.
If the archive applies a group read HUT lock, Teradata ARC writes no indexes to the archive
file.
ABORT Keyword
For cluster archives, Teradata ARC only considers AMPs within the specified clusters. The
ABORT keyword does not affect specific AMP archives.
GROUP Keyword
To avoid the full read HUT lock that Teradata ARC normally places, specify USE GROUP
READ LOCK for a rolling HUT lock. (It is also possible to avoid the full read HUT lock by
initiating an online archive using the ONLINE option during an ALL AMPS archive, or by
using LOGGING ONLINE ARCHIVE ON.)
If the GROUP keyword is specified, the Teradata Database:
1. Applies an access HUT lock on the whole table.
2. Applies a read HUT lock sequentially on each group of rows in the table as it archives them, then:
   • Releases that read HUT lock.
   • Applies another read HUT lock on the next group of rows to be archived.
3. After the last group of rows is archived, releases the last group read HUT lock and the access HUT lock on the table.
Using the GROUP keyword locks out updates to a table for a much shorter time than an
archive operation without the option. The size of the locked group is approximately 64,000
bytes.
A transaction can update a table while the table is being archived under a group read HUT
lock. If an update occurs, the archive might contain some data rows changed by the
transaction and other data rows that were not changed. In this case, the archived copy of the
table is complete only when used in conjunction with the after image journal created by the
transaction.
Use a group read HUT lock only for tables with an after image journal. This means that online
backups of database-level objects also require that every table in the database have after-journaling
defined. If a table is excluded as part of a database-level GROUP READ LOCK archive, it must
still have an after-journal table defined so ARC can accept the GROUP READ LOCK option.
If a group read HUT lock is specified for tables without an after image journal, the Teradata
Database does not archive the table, and it prints an error message.
If the GROUP keyword is not specified and the archive operation references a database, the
Teradata Database locks the database. This HUT lock is released after all data tables for the
database have been archived.
If ARCHIVE specifies the archive of individual data tables, the Teradata Database locks each
table before it is archived and releases the HUT lock only when the archive of the table is
complete. HUT locks are released only if the RELEASE LOCK keywords are specified.
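For example, a rolling-lock archive of a single table might be requested as follows; the table and file names are placeholders, and the table is assumed to have an after-image journal defined:
ARCHIVE DATA TABLES (SYSDBA.TransactionHistory),
USE GROUP READ LOCK,
RELEASE LOCK,
FILE=GRPARC;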
ONLINE Keyword
The ONLINE keyword initiates an online archive on a table or database. During an online
archive, the system creates and maintains a log for the specified table or a separate log for each
table in the specified database. All changes to a table are captured in its log. Each log of
changed rows is then archived as part of the archive process. When the archive process
completes, the online archive is terminated and the change logs are deleted.
When a table is restored, the table's change log is restored also. The changed rows in the log are
rolled back to the point that existed prior to the start of the archive process. The table is then
rolled back to that same point. All tables within the same transaction in the online archive are
restored to the same point in the transaction.
For tables involved in a load operation, see section “Archiving Online” on page 51.
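For example, the following request (database and file names are placeholders) initiates an online archive of an entire database:
ARCHIVE DATA TABLES (SYSDBA) ALL,
ONLINE,
RELEASE LOCK,
FILE=OLARC;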
NOSYNC Keyword
For information on the NOSYNC keyword as it relates to Online Archive, see “Archiving
Online” on page 51.
KEEP LOGGING Keyword
The KEEP LOGGING keyword overrides automatic termination of:
•
an ALL AMPs online archive that was initiated with the ONLINE option
•
a cluster online archive that was initiated with LOGGING ONLINE ARCHIVE ON
When KEEP LOGGING is used, the online archive process continues after the archive process
completes. Changed rows continue to be logged into the change log. Use this keyword with
caution: online logging continues until it is terminated manually with LOGGING ONLINE
ARCHIVE OFF. If online logging is not terminated, there is a possibility that the logs could fill
all available perm space.
If any table is still enabled for Online Logging either after the archive is finished or due to a
graceful abnormal termination of Teradata ARC, a warning message appears that indicates
one or more tables are still enabled for online logging. For more details, see section “Archiving
Online” on page 51.
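For example (placeholder names), an online archive that keeps logging active after the archive completes might be requested as follows; a LOGGING ONLINE ARCHIVE OFF statement must later be submitted to end the logging:
ARCHIVE DATA TABLES (SYSDBA) ALL,
ONLINE,
KEEP LOGGING,
RELEASE LOCK,
FILE=OLARC;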
NONEMPTY DATABASE Keywords
Teradata ARC writes no information to tape about empty objects. If there are many empty
users on a system, using these keywords can appreciably reduce the archive time.
The NONEMPTY keyword is only meaningful when used in conjunction with the ALL
option. Using NONEMPTY, without ALL for at least one object, produces an ARC0225
warning message. In this case, the NONEMPTY option is ignored and execution continues.
Teradata ARC uses the ARC_NonEmpty_List macro to determine which databases to archive.
The ARC_NonEmpty_List macro is a DBS macro located in database DBC and is created by
the DIP scripts. If this macro was not installed or if you do not have the EXECUTE privilege,
Teradata ARC still archives all objects. Thus, if this macro is not installed, the time savings is
not realized.
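For example, the following request (names are placeholders) archives everything under database SYSDBA while skipping empty users and databases:
ARCHIVE DATA TABLES (SYSDBA) ALL,
NONEMPTY DATABASES,
RELEASE LOCK,
FILE=FULLARC;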
Archiving Selected Partitions of PPI Tables
An all-AMPs archive on one or more partitions of a table can be performed rather than
performing a full-table backup and restore. This feature is limited to all-AMP archives.
Dictionary, cluster, and journal archives are not supported.
Use selected partitions to archive only a subset of data and avoid archiving data that has
already been backed up. (Minimizing the size of the archive can improve performance.)
Before using this feature, be sure to understand “Potential Data Risks When Archiving/
Restoring Selected Partitions” on page 45.
For procedures and script examples of selecting partitions, see “Archiving Selected Partitions
of PPI Tables” on page 41.
Restrictions on Archiving Selected Partitions
These restrictions apply to archiving selected partitions of a PPI table:
• Cluster, dictionary, and journal archives are not supported.
• It is recommended that tables containing large objects (BLOB and CLOB columns) not be archived with selected partitions (using PARTITIONS WHERE) because a number of manual steps are needed to restore the data. Instead of using selected partitions, use a full-table archive and restore for tables that contain both PPIs and LOBs.
For additional information, see “Potential Data Risks When Archiving/Restoring Selected
Partitions” on page 45 and “Considerations Before Restoring Data” on page 59.
Keywords for Archiving Selected Partitions
To perform an archive of selected partitions, specify a conditional expression in the
PARTITIONS WHERE option in an ARCHIVE script. This conditional expression should
only reference the column(s) that determine the partitioning for the table being archived.
PARTITIONS WHERE Keyword
Use the PARTITIONS WHERE option to specify the conditional expression, which contains a
definition of the rows to be archived. To be effective, limit this expression to the columns that
determine the partitioning for the table to be archived.
These restrictions apply to the use of PARTITIONS WHERE:
• The object is an individual table (not a database).
• The source table has a PARTITION BY expression defined.
• The archive is an all-AMP archive (not a dictionary, cluster, or journal archive).
• The INDEXES option is not used.
• If the table belongs to a database that is specified in the ARCHIVE statement, the table is excluded from the database-level object (with EXCLUDE TABLES) and is individually specified.
• Any name specified in the conditional expression is within the table being specified. (Using table aliases and references to databases, tables, or views that are not specified within the target table results in an error.) It is recommended that the only referenced columns in the PARTITIONS WHERE condition be the partitioning columns or the system-derived column PARTITION of the table. References to other columns do not contribute to partition elimination and might accidentally qualify more partitions than intended.
Examples of ARCHIVE Keywords
This example executes an archive of all the rows in the TransactionHistory table in the
SYSDBA database for the month of July 2002:
ARCHIVE DATA TABLES
(SYSDBA.TransactionHistory)
(PARTITIONS WHERE
(! TransactionDate BETWEEN DATE '2002-07-01' AND DATE '2002-07-31' !) ),
RELEASE LOCK,
FILE=ARCHIVE;
Changes Allowed to a PPI Table
The following changes can be made to a PPI table without affecting or preventing the
restoration of data from an archive of selected partitions:
• Table-level options that are unrelated to semantic integrity, such as FALLBACK protection, journaling attributes, free space percentage, block size, and others.
• Use of the MODIFY PRIMARY INDEX option, which changes the partitioning expression.
• Changes to table-level CHECK CONSTRAINTS.
• Changes that add, drop, or modify a CHECK CONSTRAINT on a column.
• Secondary indexes may be added or dropped.
Other changes to a PPI table are more significant in that they affect the DBC.TVM.UtilVersion.
Do not restore archives of selected partitions if the archive UtilVersion does not match the
table UtilVersion. DBC.TVM.UtilVersion is initially set to 1. ALTER TABLE increases the value
of DBC.TVM.UtilVersion to match the table DBC.TVM.Version whenever the following
significant changes occur to the referential integrity of a child table:
•
Modification of the columns of a primary index
•
Addition or deletion of columns
•
Modification of the definition of an existing column
•
Addition, deletion, or modification of the referential integrity among tables
When DBC.TVM.UtilVersion is updated for a table, previous archives are invalid for future
restores or copies of selected partitions to that table, but a full table restore or copy is still
valid. For more information, see the Data Dictionary.
Table-Level Exclude Option with ARCHIVE
The table-level exclude object option is allowed in an ARCHIVE statement (and in COPY and
RESTORE statements). This option is only accepted in a database-level object in a DATA
TABLES of all-AMPs or cluster operations. A database-level object that has one or more
excluded objects is a partial database.
An archive of a partial database contains the dictionary information and if the excluded object
has data, the table header row of the excluded object. However, the actual data rows are not
archived. If a partial database archive is restored, the table header row is restored for the
excluded objects with data, but no data rows.
If an object is excluded, Teradata ARC restores the dictionary information and the table
header row (if present), but leaves the table in restore state. This prevents the table from being
accessed by another application before the table level restore is performed. A table-level
restore for the excluded objects is expected to follow the partial database restore to fully
restore a partial database.
If a table-level restore of the excluded object(s) is not going to be run, run a BUILD statement
for the excluded objects. The excluded objects become accessible but empty, and can then be
dropped.
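For example, a partial-database archive and the table-level archive that completes it might be scripted as follows; the database, table, and file names are placeholders:
ARCHIVE DATA TABLES (SYSDBA) (EXCLUDE TABLES (WorkTable)),
RELEASE LOCK,
FILE=DBARC;
ARCHIVE DATA TABLES (SYSDBA.WorkTable),
RELEASE LOCK,
FILE=TBLARC;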
If the ALL keyword is not specified after the object name, either non-qualified object names
(without a database name prefix) or fully qualified object names (databasename.tablename)
can be specified in the list of EXCLUDED TABLES. If a non-qualified object name is specified,
it is presumed to belong to the source database that it is associated with. If the ALL keyword is
specified after the object name, then only fully qualified object names in the form of
databasename.tablename are accepted in the list of EXCLUDE TABLES.
Do not use EXCLUDE TABLES with any of the following options: a table-level object (db.tb), DICTIONARY, JOURNAL, NO FALLBACK, or AMP=.
Teradata ARC issues ARC0215 if any of the above conditions is detected.
For a list of individual object types that can be specified in the EXCLUDE TABLES clause for
Archive, see Appendix C.
Notice:
During a full database-level restore of an archive with excluded objects, the data dictionaries
and the table headers of all objects, including excluded objects, are replaced. As a result, all of
the existing rows in the excluded objects are deleted. To leave an existing excluded object intact
during a restore or copy, use the EXCLUDE TABLES option in the RESTORE or COPY
statement. For more details, see “RESTORE” on page 231 and “COPY” on page 199.
Restoring individual objects from a database-level archive with excluded objects is supported.
In the RESTORE statement, individually specify all the objects to be restored, except the
excluded objects. By omitting the excluded objects, the data dictionaries and table headers of
the excluded objects are preserved. The database is restored from the archive without altering
the excluded objects.
Table-Level Exclude Errors
The following error messages can occur when using the table-level exclude object option in an
ARCHIVE statement.
ARC0106: “User excluded table(s) (%s) does/do not belong to database
%s”
One or more tables specified in the EXCLUDE TABLES object option do not belong to the
database object.
This is only a warning. No action is required; the archival will continue.
ARC0107: “%s is a %s. Not excluded.”
Views, macros, stored procedures, and triggers are not allowed in the EXCLUDE TABLE list.
Only tables are allowed in the EXCLUDE TABLE list. Teradata ARC ignores this entry and
continues the operation.
Remove the entry from the EXCLUDE TABLE list.
ARC0108: “Invalid table name in EXCLUDE TABLE list: %s.%s”
This error occurs when a table specified in EXCLUDE TABLE list does not exist in the
database. Correct the table name and resubmit the job.
ARC0109: “One or more tables in EXCLUDE TABLE list for (%s) ALL is/
are not valid”
This error occurs when validation of one or more tables in the EXCLUDE TABLE list failed.
The invalid table names are displayed prior to this error. Correct the database/table name and
resubmit the job.
ARC0110: “%s is not parent database of %s in EXCLUDE TABLE list”
This error occurs when a table specified in EXCLUDE TABLE list does not belong to any of the
specified parent’s child databases. Correct the database/table name and resubmit the job.
ARC0710: "Duplicate object name has been specified or the table level
object belongs to one of database level objects in the list: %s"
The severity of ARC0710 has been changed to FATAL (12) from WARNING (4).
Do not specify excluded tables in a statement that also contains the database excluding them
(unless using the PARTITIONS WHERE option to archive selected partitions of PPI tables, in
which excluded tables are allowed). For example:
ARCHIVE DATA TABLES (db) (EXCLUDE TABLES (a)), (db.a), RELEASE LOCK,
FILE=ARCHIVE; /* ARC0710 */
This fatal error resulted from one of the following:
•
the specified object is a duplicate of a previous object in the list
•
the specified object is a table level object that belongs to one of the database-level objects
in the list
•
the ALL option has been specified and one of the child databases is a duplicate of an object
explicitly specified in the list
•
the object name was longer than 30 characters
Remove one of the duplicate names or the table level object, or edit the length of the object
name. Resubmit the script.
ARC1242 “Empty Table restored/copied: %s”
When restoring a table excluded by a user, this warning will be displayed. It normally displays
during archiving for the tables that have been excluded. No data rows were restored or copied.
Only the dictionary definition and the table header row were restored or copied.
This warning indicates that a table-level restore/copy must follow in order to fully restore the
database.
The table is currently in restore state. Do one of the following:
•
Complete the partial restore of the database by running a table level restore/copy of the
table, or
•
Access the table by running an explicit build. This results in an empty table.
BUILD
Purpose
The BUILD statement generates:
•
Indexes for fallback and non-fallback tables
•
Fallback rows for fallback tables
•
Journal tables by sorting the change images
Syntax
BUILD { DATA TABLES | JOURNAL TABLES | NO FALLBACK TABLES | NO FALLBACK TABLE }
   { (dbname) [ ALL ] | (dbname.tablename) } [ , ... ]
   [ , EXCLUDE { (dbname) [ ALL ] | (dbname1) TO (dbname2) } [ , ... ] ]
   [ , RELEASE LOCK ]
   [ , ABORT ]
   [ , SKIP JOIN INDEX ] ;
where:
Syntax Element
Description
DATA TABLES, JOURNAL TABLES, NO FALLBACK TABLES, or NO FALLBACK TABLE
Keywords specifying the type of table to build
The default is NO FALLBACK TABLE.
Specify DATA TABLES when building fallback, non-fallback, or both types of tables from all
AMPs. This option normally follows the restore of a cluster archive.
Specify NO FALLBACK TABLE only when building indexes for non-fallback tables.
(dbname)
Name of a database
ALL
Keyword to indicate the operation affects the named database and its descendants
(dbname.tablename)
Name of a non-fallback table for which indexes are built
EXCLUDE
Keyword to prevent indexes from being built in specified databases
(dbname)
Name of database excluded from the build operation
ALL
Keyword to indicate the operation affects the named database and its descendants
(dbname1) TO (dbname2)
Alphabetical list of client databases excluded from the build operation
The delimiting database names need not identify actual databases.
Database DBC is not included as part of a range.
RELEASE LOCK
Keywords to release utility locks on the specified objects when the build operation
completes successfully
ABORT
Keyword to abort the build operation if an AMP is offline and the object being built is
either a non-fallback table or a database that contains a non-fallback table
SKIP JOIN INDEX
Skip the build of join and hash indexes
Access Privileges
The user name specified in LOGON must have one of the following:
•
RESTORE privileges on the specified database or table
•
Ownership of the database or table
Usage Notes
Teradata ARC processes databases, tables, and stored procedures in alphabetical order.
Database DBC does not contain non-fallback tables and cannot be restored by clusters.
Therefore, Teradata ARC does not try to build indexes or fallback data for any table in
database DBC.
A build operation creates unique and non-unique secondary indexes for a specified table. The
build operation also generates the fallback copy of fallback tables if DATA TABLES keywords
are specified.
Typically, this statement is used after one of the following:
•
Specific AMP restore of non-fallback tables
•
Cluster archive restoring
•
Series of all-AMPs restores that specified the NO BUILD option (usually a restore to a
changed hardware configuration)
Note: To speed the recovery of all AMPs, specify the NO BUILD option of the RESTORE
statement to prevent non-fallback tables from being built after the restore is complete.
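For example, after restoring a set of cluster archives with NO BUILD, a follow-up build of the restored database (placeholder name) could be submitted as:
BUILD DATA TABLES (SYSDBA) ALL,
RELEASE LOCK;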
If an AMP is offline during a build operation, all unique secondary indexes of non-fallback
tables are left invalidated. As a result, requests may run more slowly when secondary indexes
are used for search and selection. Insert and update operations cannot occur.
When a database or table copies or restores from an older version source system with a
different configuration, it affects the time and space required to restore data to the destination
system. The data must be redistributed among the new set of AMPs. The system uses a
different algorithm to copy and restore when the number of AMPs is different between the
source and destination systems. This new algorithm is usually faster when the configurations
are different. The old algorithm may be faster in some restricted situations.
CHECKPOINT
Purpose
CHECKPOINT places a bookmark in a permanent journal or prepares a permanent journal
for archiving.
Syntax
CHECKPOINT { (dbname) [ ALL ] | (dbname.tablename) } [ , ... ]
   [ , EXCLUDE { (dbname) [ ALL ] | (dbname1) TO (dbname2) } [ , ... ] ]
   [ , WITH SAVE ]
   [ , USE { ACCESS | READ } LOCK ]
   [ , NAMED chkptname ] ;
where:
Syntax Element
Description
(dbname)
Name of the database with the journal table to checkpoint
ALL
Indicates that the operation affects the journals in the named database and all the descendants
(dbname.tablename)
Name of a specific journal table to checkpoint
All tables should be distinct within the list. Duplicate tables are ignored.
EXCLUDE
Prevents journal tables from being check pointed
(dbname)
Name of database excluded from the checkpoint operation
ALL
Excludes the named database and its descendants
(dbname1) TO
(dbname2)
Alphabetic list of client databases
The delimiting database names need not identify actual databases.
Database DBC is not included as part of a range.
WITH SAVE
Logically moves the contents of the active subtable of the identified journals to the saved subtable
USE ACCESS LOCK
or USE READ LOCK
Applies an access or read lock on data tables that contribute journal rows to the journal tables on
which Teradata ARC is setting checkpoints
USE READ LOCK is the default.
NAMED
References the entry in database recovery activities
chkptname
Name of the database recovery event
Access Privileges
The user name specified in LOGON must have one of the following:
•
CHECKPOINT privileges on the journal table or on the database that contains the journal
table
•
Ownership of the database that contains the journal table
Usage Notes
Specifying CHECKPOINT with the SAVE keyword prepares a journal table for archiving.
Teradata ARC writes a checkpoint row to the active journal table. This subtable is then
appended to any existing saved table of the named journal. Archive this saved portion by using
ARCHIVE JOURNAL TABLES.
The Teradata Database returns an event number that Teradata ARC assigns to the checkpoint
entry.
Checkpoint functionality with regard to permanent journals is supported on both mainframe
systems (z/OS) and network-attached systems.
WITH SAVE Keywords
Only the saved subtable can be archived.
If a saved subtable for any identified journals exists, Teradata ARC appends the active subtable
to it. Teradata ARC writes the checkpoint record to the active subtable and then moves the
active subtable to the saved subtable.
Without this option, Teradata ARC leaves the active subtable as is after writing a checkpoint
record.
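For example (journal, checkpoint, and file names are placeholders), a saved checkpoint followed by an archive of the saved subtable might be scripted as:
CHECKPOINT (SYSDBA.FinanceJrnl), WITH SAVE, NAMED DAILYCKPT;
ARCHIVE JOURNAL TABLES (SYSDBA.FinanceJrnl),
RELEASE LOCK,
FILE=JRNARC;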
USE LOCK Keywords
USE LOCK specifies the lock level Teradata ARC applies to data tables that contribute rows to
the journal tables on which Teradata ARC is setting checkpoints.
With USE LOCK, request the specified lock level on the data tables for a journal in the object
list. When Teradata ARC obtains the requested lock, it writes the checkpoint and then releases
the lock. Teradata ARC then requests locks for data tables for the next journal table in the
object list. Teradata ARC places journal tables in the object list in alphabetical order by
database name.
If an ACCESS LOCK is specified, making updates to data tables while generating the
checkpoint is supported. If a checkpoint is taken with an ACCESS LOCK, any transaction that
has written images to the journal and has not been committed is logically considered to have
started after the checkpoint was taken.
When specifying an ACCESS LOCK, also specify the WITH SAVE option.
READ LOCK is the default, resulting in Teradata ARC suspending all updates to the tables
until it writes the checkpoint record. Use READ LOCK to identify the transactions included in
the archive.
NAMED Keyword
If the chkptname duplicates an existing entry in a journal, qualify the entry by the system
assigned event number.
Alpha characters in a checkpoint name are not case sensitive. For example, chkptname ABC is
equal to chkptname abc.
An alternative to using chkptname is referencing the entry by its event number.
COPY
Purpose
The COPY statement copies a database or table from an archived file to the same or different
Teradata Database from which it was archived.
Use COPY to move data from an archived file back to the Teradata Database. The target
database must exist on the target Teradata Database, however COPY creates a new table if it
does not already exist on the target database. If a copy of selected partitions is requested, the
target table must exist on the target Teradata Database.
COPY between the same database version is supported. COPY from an older database version
to a newer database version is supported. COPY of a newer database version back to an older
database version is not supported.
Note: Beginning with TTU 13.00.00, there is support for copying all objects as individual
objects. Triggers cannot be copied. For a complete list of objects supported by Teradata ARC,
see “Appendix A Database Objects” on page 271.
Syntax
COPY { DATA | DICTIONARY | JOURNAL | NO FALLBACK } { TABLES | TABLE }
   { ALL FROM ARCHIVE | (dbname) | (dbname.tablename) } [ (options) ] [ , ... ]
   [ , EXCLUDE { (xdbname) [ ALL ] | (xdbname1) TO (xdbname2) } [ , ... ] ]
   [ , CLUSTERS = nnn [ , ... ] ]   (up to 4096 clusters)
   [ , AMP = n [ , ... ] ]          (up to 5 AMPs)
   [ , NO BUILD ]
   [ , ABORT ]
   [ , RELEASE LOCK ]
   [ , SKIP JOIN INDEX ]
   [ , SKIP STAT COLLECTION ]
   [ , FILE = name ] ;
The options that can follow each object (in parentheses) include FROM, NO FALLBACK, REPLACE CREATOR, NO JOURNAL, WITH JOURNAL TABLE, APPLY TO, EXCLUDE TABLE or EXCLUDE TABLES, PARTITIONS WHERE, LOG WHERE, ALL PARTITIONS, QUALIFIED PARTITIONS, ERRORDB, and ERRORTABLES, as described below.
where:
Syntax Element
Description
DATA TABLE or DATA
TABLES
Copies data tables.
DICTIONARY TABLE or
DICTIONARY TABLES
Copies dictionary tables.
JOURNAL TABLE or
JOURNAL TABLES
Copies journal tables.
NO FALLBACK TABLE
or NO FALLBACK
TABLES
Copies non-fallback tables to specific AMPs.
ALL FROM ARCHIVE
Copies all databases/tables in the given archive file
EXCLUDE
Prevents specified databases from being copied
(dbname)
Name of database to exclude from copy
ALL
Excludes the named database and all descendants
(xdbname1) TO
(xdbname2)
Alphabetical list of client databases to exclude from the copy
The delimiting database names need not identify actual databases.
Database DBC is not part of a range.
(dbname) or
(dbname.tablename)
Name of the target object
FROM
Name of the source object when copying to a different location or name on the target system
NO FALLBACK
Copies fallback tables into non-fallback tables
Teradata ARC replaces the target object with the object from the archive.
If the archived table is already non-fallback, this option has no effect.
REPLACE CREATOR
Replaces the LastAlterUID, creator name, and Creator ID of the tables in the target database
with the user ID and the current user name, i.e., the user name specified in the LOGON
command
NO JOURNAL
Copies all tables with journaling disabled, whether journaled in the archive or not
WITH JOURNAL
TABLE
Specifies that a copied database has journaling for the specified database and journal table
APPLY TO
Identifies the tables in the target system where the change images apply
NO BUILD
Prevents the Fallback and Secondary Index rows from being created
If NO BUILD is requested when restoring database DBC, the request is ignored.
If the NO BUILD keywords are used during a RESTORE statement, a separate BUILD
statement must be run for all databases and/or tables that were restored. The tables will not be
accessible until a BUILD statement is run.
ABORT
Aborts the copy operation if an AMP to which a non-fallback or journal table is to be restored is
offline
This option affects only an all-AMPs copy.
RELEASE LOCK
Releases client utility locks when the copy operation completes
FILE
Copies a file
name
Name of the file that contains the archive to copy
EXCLUDE TABLE or
EXCLUDE TABLES
Prevents individual objects in the listed database from being copied
(xtablename)
Name of an individual object in the designated database to exclude
For a list of individual object types that can be specified in the EXCLUDE TABLES clause for
Copy, see Appendix C.
Multiple objects are separated by commas.
This form, without a database name prefix, is used only when ALL has not been specified for
the current object.
(xdbname.xtablename)
List of fully qualified objects (prefixed by database name) to exclude
PARTITIONS WHERE
Specifies the conditional expression for partition-level operations
(!conditional expression!)
The conditional expression for specifying selected partitions
LOG WHERE
Specifies the rows that are logged to an error table for manual insertion and deletion
(!conditional expression!)
The conditional expression for specifying rows to log to the error table when copying selected
partitions
ALL PARTITIONS
Restores all archived partitions for an archived PPI object
See “ALL PARTITIONS Keyword” on page 212 for conditions that must be met.
QUALIFIED
PARTITIONS
Copies the same partitions specified in a previous selected-partition copy
ERRORDB and
ERRORTABLES
Specifies the location of the error log for partition-level operations
SKIP JOIN INDEX
Skip the creation of join and hash indexes
SKIP STAT
COLLECTION
Skip the copy of Stat Collections
Access Privileges
The COPY statement requires these access privileges for database-level and table-level objects:
•
Create Macro (CM)
•
Create Table (CT)
•
Create View (CV)
•
Restore (RS)
The target database must also exist before making the copy.
To copy a UIF object, the user named in the LOGON statement must have:
• The RESTORE privilege on the database targeted for the copy
Note: You cannot grant the RESTORE privilege on the UIF object itself, but with the
database-level RESTORE privilege, you can do a table-level copy of the individual UIF object.
Usage Notes
Do not use database DBC as the target database name in a COPY statement. Use DBC as the
source database in COPY FROM, but specify a target database name that is different than
DBC. This allows DBC to be copied from a source system to a different database on a target
system. When used as the source database in COPY FROM, database DBC is not linked to
database SYSUDTLIB. Therefore, only database DBC is copied.
Do not use database SYSUDTLIB as the target database name in a COPY statement. Use
SYSUDTLIB as the source database in COPY FROM, but specify a target database name that is
different than SYSUDTLIB. This allows SYSUDTLIB to be copied from a source system to a different
database on a target system. When used as the source database in COPY FROM, database
SYSUDTLIB is not linked to database DBC. Therefore, only database SYSUDTLIB is copied.
Teradata ARC supports duplicate rows in a copy operation. However, if a restart occurs during
the copy of a table with duplicate rows, the duplicate rows involved in the restart may be
removed.
When restoring a cluster archive to a reconfigured system, run the cluster archives serially. If
this restore scenario is detected by ARCMAIN, the NO BUILD option is automatically
enabled, if not specified, and an explicit BUILD command must be run after all restore jobs
are completed.
COPY moves an archive copy of a database or table to a new environment. Copy options allow
renaming archive databases or creating tables in existing databases. The table does not have to
exist in the database it is being copied into. However, because the Teradata Database internal
table identifier is different for the new table, access privileges for the table are not copied with
the data.
When a database or table copies from an older version source system with a different
configuration, it affects the time and space required to copy data to the destination system.
The data must be redistributed among the new set of AMPs. The system uses a different
algorithm to copy and restore when the number of AMPs is different between the source and
destination systems. This new algorithm is usually faster when the configurations are
different. The old algorithm may be faster in some restricted situations.
Do not copy a table name generated in a character set different from the default character set
or the character set specified in the Teradata ARC invocation. For example, a table name from
a multibyte character set archive cannot be copied into a single-byte character set database.
To work around this restriction:
1. ANALYZE the archive tape with the HEX runtime option specified.
   Result: The table names are displayed in hexadecimal format (X'C2C9C72A750A3AC5E4C5E4C36DD9D6E6').
2. Run the COPY table statements using the hexadecimal table names.
To copy more than one table or database with one COPY statement, separate the names with
commas. Teradata ARC reads the archive data for each COPY statement it processes, therefore
one COPY statement yields better performance than multiple COPY statements.
Example
In the next example, the first table is copied to a different table name, and the next two tables
are copied with the same table name.
COPY DATA TABLES (test.test1) (FROM (oldtest.test1))
,(test2.test)
,(test3.test)
,RELEASE LOCK
,FILE = ARCHIVE;
If the configuration of the source and target platforms are different, copy cluster archives only
to all AMPs or specific AMPs.
Use these options with all types of copy operations:
•
FROM (dbname.tablename)
•
RELEASE LOCK
•
NO BUILD
•
CLUSTER = nnn
•
AMP = n
•
FILE = name
These options are suitable only for data table copy operations, and only with dictionary and
data table copies:
•
WITH JOURNAL TABLE (dbname.tablename)
•
NO FALLBACK
•
NO JOURNAL
Journal copy options are not supported after the target system is reconfigured. Journal table
copy options define target (receiving) tables for change image journaling.
Teradata ARC uses the journal table copy options to:
•
Select the set of journal images from an archive to copy.
•
Name the tables in the receiving system to which the images apply during rollforward or
rollback operations.
When NO BUILD is specified, Teradata ARC leaves the restored journal in a prebuild state.
Teradata ARC does not allow roll operations on journal tables in this state, and generates an
error message if an attempt is made to roll this journal back or forward.
If data tables are copied into a database that does not already have tables with those names,
Teradata ARC creates the required tables. This is true only for data tables. Journal tables can
never be created by a copy operation. They must exist in the receiving database. Similarly, do
not create receiving databases with COPY.
COPY preserves table names without converting them to upper case and returns these table
names in their original format.
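For example, assuming the copy options are combined in the parenthesized option list that follows the target object (as in the FROM example earlier in this section), a copy that renames the source table and drops fallback and journaling on the target might look like this; all object and file names are placeholders:
COPY DATA TABLES (TestDB.Sales)
(FROM (ProdDB.Sales), NO FALLBACK, NO JOURNAL),
RELEASE LOCK,
FILE=ARCHIVE;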
Database DBC.DBCAssociation
Teradata ARC creates a row in database DBC.DBCAssociation table for each dictionary row
Teradata ARC copies successfully. This row contains columns with information that associate
the copied table back to its originating Teradata Database.
The row in database DBC.DBCAssociation also contains columns that link it to the TVM
table, which associates the row to its receiving table, and to the DBase table, which associates
the row to its receiving database.
Teradata ARC also creates a column in each row of database DBC.DBCAssociation that links
them to the RCEvent table. This link is the restore event number. For successful copy
operations, the EventType in the RCEvent table is COPY.
Locks
COPY places exclusive client utility locks during its operations. If an entire database is copied
from a full database archive, the entire database is locked for the duration of that operation. If
a table level archive or a single table is copied, Teradata ARC locks only that table.
Views, Macros, Stored Procedures, Triggers, UDFs, and UDTs
Copying a full database archive is the same as a restore operation. Teradata ARC drops all
existing objects and dictionary information in the receiving system. The data in the archive is
then copied to the receiving database.
However, triggers cannot be copied. If a trigger is defined in a database, <trigger> NOT
COPIED is displayed when the COPY statement is processed. This is not an error or warning.
Triggers must be manually recreated with SQL.
If the views, stored procedures, and macros have embedded references to databases and
objects that are not in the receiving environment, those views, stored procedures, and macros
will not work. To make any such views, stored procedures, and macros work, recreate or copy
the references to which they refer into the receiving database.
If the views, stored procedures, and macros have embedded references to databases and
objects that are in the receiving environment, they will work correctly.
Note: Fully qualify all table, stored procedure, and view names in a macro and all table names
in a view. Otherwise, an error can result. When COPY is used, partial names are fully qualified
to the default database name. In some cases, this might be the name of the old database.
Referential Integrity
After an all-AMPs copy, copied tables do not have referential constraints. Referential
constraints are not copied into the dictionary definition tables, database DBC.ReferencedTbls
and database DBC.ReferencingTbls, for either a referenced (parent) or referencing (child)
table copied into a Teradata Database. All referential index descriptors are deleted from the
archived table header before it is inserted into the copied table.
Reference constraints remain, however, on any table for which the copied table is a referenced
table (a parent table) or any table the copied table (as a child table) references. Hence, when
either a referenced or a referencing table is copied, the reference may be marked in the
dictionary definition tables as inconsistent.
While a table is marked as inconsistent with respect to referential integrity, no updates, inserts
or deletes are permitted. The table is fully usable only when the inconsistencies are resolved.
• Use ALTER TABLE DROP INCONSISTENT REFERENCES to drop inconsistent references.
• Use ALTER TABLE ADD CONSTRAINT FOREIGN KEY REFERENCES to add referential constraints to the copied table.
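For illustration, assuming a copied child table named Sales.Orders (a hypothetical name), the inconsistent references could be dropped with a Teradata SQL statement of this general form:
ALTER TABLE Sales.Orders DROP INCONSISTENT REFERENCES;
Referential constraints can then be redefined on the copied table with an ALTER TABLE statement that adds the foreign key references, as noted in the list above.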
For an all-AMPs copy, REVALIDATE REFERENCES FOR is not applicable to COPY.
Partial-AMPs Copy
A Partial-AMPs copy occurs when copying with fewer streams in a multi-stream copy than
were used in a multi-stream archive. For more information, see “Restoring with Partial-AMPs” on page 79.
System Join Index Copy
When using the COPY FROM syntax, System Join Indexes (SJIs) can now be copied to a
different database. For more information, see “Copying System Join Indexes (SJIs)” on
page 84 and “COPYSJI” on page 124.
Using Keywords with COPY
COPY is affected by the following keywords. For information about the keywords that apply
to copying partitioned data, see “Restores of Selected Partitions” on page 239.
NO FALLBACK Keywords
This option applies only during a copy of a dictionary archive or an all-AMPs archive.
If a fallback table has permanent journaling on the archive, the table has dual journaling of its
non-fallback rows after the copy when Teradata ARC applies the NO FALLBACK option
(unless NO JOURNAL is specified).
FROM Keyword
The object specified in the FROM keyword identifies the archive object. This option applies
only during a copy of a dictionary archive or an all-AMPs archive.
Journal enabled tables in the original archive carry their journaling forward to the copy unless
NO JOURNAL keywords are specified.
The NO JOURNAL keywords apply to all tables in a database when a database is copied. This
option has no effect on a target database’s journaling.
If the object specified in the FROM option is a table, Teradata ARC copies only that table.
If the object specified in the FROM option is a database, one of the following occurs:
• If the receiving object is a table, then Teradata ARC searches through the archived database for the named table and copies the table.
• If the receiving object is a database, then Teradata ARC copies the entire database into the receiving database.
Copying into database DBC is not valid; however, specifying database DBC as the object of the
FROM option is valid. In other words, database DBC can be copied into another database.
Similarly, do not copy into database SYSUDTLIB, although specifying database SYSUDTLIB as
the object of the FROM option is valid. That is, database SYSUDTLIB can be copied into another database.
When specifying the FROM and EXCLUDE TABLES options in the same COPY statement,
specify the FROM clause first. Then Teradata ARC can generate a default source database
name for the excluded tables from the FROM clause if a database name is not specified in the
EXCLUDE TABLES clause.
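For illustration, a COPY statement of the following general form places the FROM clause ahead of the EXCLUDE TABLES clause so that the excluded table name can default to the source database. The database and table names (Sales_Copy, Sales, OldOrders) are hypothetical, and the placement of the object-level options follows the COPY syntax shown earlier in this chapter:
COPY DATA TABLES (Sales_Copy) (FROM (Sales), EXCLUDE TABLES (OldOrders))
,RELEASE LOCK
,FILE = ARCHIVE;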
WITH JOURNAL TABLE Keywords
This option only applies during a copy of a dictionary archive or an all-AMPs archive. To use
this option, INSERT access privileges to the referenced journal table are required. The source
table must have a journal or this option has no effect.
When copying a database, the journaling specified with this option applies only to those tables
that had journaling enabled in the original database. This option has no effect on a receiving
database’s default journaling.
If the database has default journaling specified, Teradata ARC uses those options. This option
only overrides the journal table in the receiving database, and is only valid if the originating
table had journaling enabled.
To carry journaling from the original table forward without specifying the WITH JOURNAL
TABLE option, Teradata ARC uses one of the following:
• The default journal of the receiving database if the table is to be created. If there is no default journal, Teradata ARC rejects the copy.
• The journal specified in the WITH JOURNAL option when the table was created.
• The journal of the table being replaced if the table already exists.
APPLY TO Keywords
This option is required when copying journal images. RESTORE access privileges are required
on each table to which the images are applied. Journal images can be applied to as many as
4096 tables; however, the combined table names in the APPLY TO option cannot exceed 8 KB.
When a journal table is copied, Teradata ARC restores only checkpoint rows and those rows
that apply to the receiving environment. The tables to which those images apply and their
journaling options must have already been copied. Thus, to perform a COPY JOURNAL with
an APPLY TO option, the appropriate COPY DATA TABLE statement must have been issued
at least once prior to using COPY JOURNAL to establish the source/target tableid mapping in
database DBC.DBCAssociation table. Teradata ARC copies only those images that have the
correct table structure version (as defined by the originating table).
If a journal image for the source table is applied to more than one target table, Teradata ARC
writes two occurrences of the journal image into the restored journal.
CLUSTER Keyword
This option is valid only if the source archive is a cluster archive and the copy is to the identical
Teradata Database. The option is useful in restoring or copying a dropped table to a different
database on the same Teradata Database.
AMP Keyword
With Teradata Database, use the AMP=n option to specify up to five AMPs.
Only the NO FALLBACK TABLE or JOURNAL TABLE options are applicable. It is useful
following all-AMPs copies where all AMPs were not available at the time of the copy operation
and non-fallback or journal receiving tables were involved.
NO BUILD Keywords
Use the NO BUILD keywords for all archives except the last one. Omit the NO BUILD option
for the last archive, so that Teradata ARC can:
• Build indexes for tables.
• Sort change images for journal tables.
If the NO BUILD keywords are used during a RESTORE or COPY statement, run a separate
BUILD statement for all databases and/or tables that were restored. The tables are not
accessible until a BUILD statement is run.
If the NO BUILD keywords are used while copying a database or table that contains Join or
Hash indexes, the database/table will be in a 'restoring' state when Teradata ARC attempts to
copy the Join/Hash indexes. When in the 'restoring' state, the Teradata Database will not allow
any access to the database/table. This will cause the copy of the Join/Hash index to fail.
In order to copy the Join/Hash indexes, follow this procedure:
1. Run a separate BUILD job to build the fallback rows and secondary indexes for the database/table. This will clear the 'restoring' state and allow user access to the database/table.
2. Copy the Join/Hash index by rerunning the copy job and just specifying the Join/Hash indexes to be copied as individual objects in the copy list.
If the NO BUILD keywords are used while copying a database or table that contains Stat
Collections, use the same procedure above for Join/Hash indexes in order to copy them.
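A sketch of this procedure, using hypothetical names (ProdDB, TestDB, and a join index OrdJI) and assuming the BUILD statement syntax described earlier in this chapter, might look like the following:
COPY DATA TABLES (TestDB) (FROM (ProdDB)), NO BUILD, FILE = ARCHIVE;
BUILD DATA TABLES (TestDB) ALL, RELEASE LOCK;
COPY DATA TABLES (TestDB.OrdJI) (FROM (ProdDB.OrdJI)), RELEASE LOCK, FILE = ARCHIVE;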
ALL FROM ARCHIVE Keywords
The ALL FROM ARCHIVE keywords take the place of the database and/or table names that
are normally specified after the DATA, DICTIONARY, JOURNAL, or NO FALLBACK
TABLES keywords.
Do not specify any other database or table names to be copied when using ALL FROM
ARCHIVE. Exercise caution when using this option, as all databases and tables in the given
archive file will be copied, and any existing databases or tables will be overwritten.
CATALOG and Fastpath are not supported while using ALL FROM ARCHIVE. If CATALOG
(or Fastpath) is enabled when ALL FROM ARCHIVE is specified, it will be disabled and a
warning message will be given.
ALL FROM ARCHIVE cannot be used to copy database DBC, and database DBC must be
excluded by the user if it is present in the archive being copied.
COPY supports the EXCLUDE option; the syntax and function of the option is identical to the
RESTORE version of EXCLUDE. Unlike RESTORE, COPY only supports the EXCLUDE
option when using ALL FROM ARCHIVE.
The FROM, WITH JOURNAL TABLE, APPLY TO, and EXCLUDE TABLES object options are
not allowed when using ALL FROM ARCHIVE.
The ALL FROM ARCHIVE command cannot be used to copy a table that has a join or hash
index defined or a database that contains a table that has such indexes defined. See “Join and
Hash Indexes in Restoring” on page 59 for more details.
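For illustration, a statement of the following general form copies every object in the archive while excluding database DBC, as required by the preceding notes; the archive file name is defined by the job script:
COPY DATA TABLES ALL FROM ARCHIVE, EXCLUDE (DBC), RELEASE LOCK, FILE = ARCHIVE;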
EXCLUDE TABLES Keyword
During a database-level COPY, the EXCLUDE TABLES option can be used to skip the copy of
one or more objects from the archive file to the target database.
If an object is excluded, Teradata ARC will not copy any of the dictionary information or any
of the object data for the specified objects. If any of the specified objects already exist in the
target database, they are unaffected by the COPY operation. By not copying any information
or data for the specified object, the EXCLUDE TABLES option can also be used to prevent the
deletion of existing objects from the target database during a copy.
Use the EXCLUDE TABLES option to skip the copy of an object from the archive file. If the
object already exists in the target database, it will be unaffected by the COPY operation. If the
COPY is run without the EXCLUDE TABLES option, the target object is dropped and
replaced by the dictionary information and data stored in the archive file for that object.
Use the EXCLUDE TABLES option to skip the copy of an excluded object from the archive file.
When an object is excluded during the archive, the only things archived for that object are the
dictionary information and the table header, but no object data. If the excluded object is
copied, the target object is dropped and replaced by the dictionary information for that object.
But since there is no data in the archive file for that object, no data is copied, which produces
an empty object. To prevent replacing an existing object with an empty object due to that
object being excluded during the archive, use the EXCLUDE TABLES option to skip the copy
of that object. By using the EXCLUDE TABLES option, that object is unaffected by the COPY
operation if it currently exists in the target database.
Use the EXCLUDE TABLES option to prevent the deletion of new objects from the target
database. During a normal COPY of a database-level object, all objects in the target database
are dropped before the objects from the archive file are copied to it. If a new object has been
added to the target database since the archive was taken, a situation exists where the archive
file does not have any archived data for that new object. If a copy is run without the EXCLUDE
TABLES option, Teradata ARC drops that new object from the target database even though
there is no data in the archive file to copy for that object. To prevent dropping the new object
during the copy, use the EXCLUDE TABLES option to specify the new object in the target
database. This communicates to Teradata ARC to not drop that object from the target
database during the copy. By using the EXCLUDE TABLES option, the new object in the target
database is unaffected by the COPY operation.
Either non-qualified object names (without a database name prefix) or fully qualified object
names (databasename.tablename) can be specified in the list of EXCLUDE TABLES. If a
non-qualified object name is specified, it is presumed to belong to the source database that it is
associated with in the COPY statement.
The EXCLUDE TABLES option can only be used with database-level objects.
The EXCLUDE TABLES option cannot be used when ALL FROM ARCHIVE is specified.
If a specified object is in neither the archive file nor the target database, the copy of the
database will be skipped.
If the FROM and EXCLUDE TABLES options are specified in the same COPY statement,
specify the FROM clause first. Then Teradata ARC can generate a default source database
name for the excluded objects from the FROM clause if a database name is not specified in the
EXCLUDE TABLES clause.
If the FROM and EXCLUDE TABLES options are specified in the same COPY statement, the
excluded objects in the EXCLUDE TABLES clause can be qualified with either the source
database name or the target database name. When the FROM clause is not used, the excluded
objects in the EXCLUDE TABLES clause can only be qualified by the source database name.
For a list of individual object types that can be specified in the EXCLUDE TABLES clause for
Copy, see Appendix C.
Notice:
There is no explicit maximum length for an EXCLUDE TABLES clause, but if the EXCLUDE
TABLES clause is too long to fit in the SQL buffer, the SQL request cannot be sent to the
database. The buffer used to send the EXCLUDE TABLES clause to the database has a
maximum length, and the EXCLUDE TABLES clause must fit completely within that buffer.
Because other data in the buffer may be variable in length, a maximum size for an EXCLUDE
TABLES clause cannot be given. If this situation occurs, Teradata ARC displays the following error message:
ARC0127: “EXCLUDE TABLES list is too long to fit in the SQL buffer.”
Modify the EXCLUDE TABLES list to reduce its size and resubmit the job.
Copying Partitioned Data
Selected partitions of PPI tables can be copied. This means that one or more partitions of a
table can be backed up so that only a subset of data in a table can be archived, restored, and
copied.
Before attempting to copy selected partitions, read “Potential Data Risks When Archiving/
Restoring Selected Partitions” on page 45.
Restrictions on Copying Partitioned Data
Normally, the table to be copied does not need to exist in the target database. The table must
exist, however, in the target database to copy selected partitions to the table. Additionally, the
target table must be a full table, not just selected partitions.
Copying selected partitions is not allowed to a table that has undergone any of the following
major DDL changes:
• Changing primary index columns
• Changing from PPI to NPPI, or vice versa
• Adding or changing referential integrity constraints
Other restrictions exist for archiving selected partitions. See “Restrictions on Archiving
Selected Partitions” on page 189.
Keywords for Copying Selected Partitions
The options available for copying selected partitions include:
• PARTITIONS WHERE
• LOG WHERE
• ERRORDB/ERRORTABLES
• ALL PARTITIONS
• QUALIFIED PARTITIONS
PARTITIONS WHERE Keyword
Use the PARTITIONS WHERE option to specify the conditional expression for selected
partitions. The expression must contain the definition of the partitions that will be copied.
The following restrictions apply to the use of PARTITIONS WHERE:
• The object is an individual table (not a database).
• The source and target tables have a PARTITION BY expression defined.
• The copy is an all-AMP copy (not a dictionary, cluster, or journal copy).
• If the table belongs to a database that is specified in COPY, the table is excluded from the database-level object (with EXCLUDE TABLES) and is individually specified.
• Any name specified in the conditional expression is within the table being specified. (Using table aliases and references to databases, tables, or views that are not specified within the target table results in an error.) It is recommended that the only referenced columns in the conditional expression be the partitioning columns or the system-derived column PARTITION of the table. References to other columns do not contribute to partition elimination and might accidentally qualify more partitions than intended.
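For illustration, assuming a hypothetical PPI table SalesDB.DailySales that is partitioned on a SalesDate column, one quarter of the table might be copied with a statement of the following general form; the names and dates are placeholders and the object-option placement follows the COPY syntax shown earlier in this chapter:
COPY DATA TABLES (SalesDB.DailySales)
(FROM (SalesDB.DailySales)
,PARTITIONS WHERE (!SalesDate BETWEEN DATE '2016-01-01' AND DATE '2016-03-31'!))
,RELEASE LOCK
,FILE = ARCHIVE;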
LOG WHERE Keyword
If the PARTITIONS WHERE option does not capture all the rows that need to be copied, use
the LOG WHERE option. This option inserts, into a Teradata-generated error table, archived
rows that both fall outside the partitions specified by the PARTITIONS WHERE conditional
expression and match the LOG WHERE conditional expression.
Use the option only if PARTITIONS WHERE is also specified for the object. If LOG WHERE
is omitted, the default is to log to the error table only the rows in the partitions being restored
that have errors.
ERRORDB/ERRORTABLES Keyword
The ERRORDB and ERRORTABLES options are mutually exclusive; specify only one option
for an object. Also, specify either the PARTITIONS WHERE or ALL PARTITIONS option
when using either ERRORDB or ERRORTABLES.
• If ERRORTABLES is specified without a database name, or if neither ERRORTABLES nor ERRORDB is specified, the error table is created in the same database as the base table.
• If ERRORTABLES is specified, by default the naming convention for the error table is the name of the base table plus the prefix “RS_”. For example, the error table for a table named “DataTable” is “RS_DataTable.” Names are truncated to 30 bytes.
ALL PARTITIONS Keyword
Use the ALL PARTITIONS option to copy all of the archived partitions in a table. These
restrictions apply:
• The object being copied is an individual table, or the ALL FROM ARCHIVE option is specified.
• The source and target tables have a PARTITION BY expression defined.
• The copy is an all-AMP copy rather than a dictionary, cluster, or journal copy.
• PARTITIONS WHERE is not specified for the object.
• The partition bounding condition must have been well-defined when the backup was performed. A bounding condition is well-defined if the PARTITION BY expression on the source table consists of a single RANGE_N function, and if the specified range does not include NO RANGE or UNKNOWN. (Use ANALYZE to determine whether a selected partition is well-defined.)
If a conditional expression is not well-defined, Teradata ARC issues an error. Use
PARTITIONS WHERE for the restore operation rather than ALL PARTITIONS.
QUALIFIED PARTITIONS Keyword
Use this option only to copy a specific-AMP archive after copying selected partitions from an
all-AMP archive that was done while an AMP was down.
Alternate Character Set Support
Teradata ARC runs on IBM mainframes, which use the kanjiEBCDIC character set. However,
object names created in kanji character sets (UNIX OS or MS/DOS platforms) other than
EBCDIC can be copied from archive, but not translated into internal RDBMS format names.
The reason for this is that “hybrid” EBCDIC and kanjiEUC, or EBCDIC and kanjiShiftJIS
single byte and multibyte characters cannot be translated into internal RDBMS format. If the
object name is in internal RDBMS format, the Teradata Database can translate it into an
external client format name.
Note: Hybrid object names result when archiving an object created in the KANJIEUC or
KANJISJIS character set from a KANJIEBCDIC mainframe client and the hexadecimal
representation of the object being archived is not used. In such a circumstance, the Teradata
Database cannot properly translate the character set and produces a hybrid object name
composed of single byte characters from the EBCDIC character set and multibyte characters
from the creating character set (UNIX OS or DOS/V). The hybrid object name cannot be
understood by the Teradata Database. Correct this in one of two ways:
• Use the internal hexadecimal object name, in format ('......'XN). This is the preferred method. The ARCMAIN program uses the Teradata Database to translate the internal hexadecimal format names to external hexadecimal format for the purpose of searching the archive file.
• If the internal hexadecimal object name is unknown:
a. Run the ANALYZE statement on the archive tape without making a connection to the Teradata Database by means of the LOGON statement.
b. ANALYZE identifies the database in a hybrid format of the name.
c. Use the hybrid format of the name in the “FROM” syntax to specify that the object name in the archive file is different from the name of the target object.
DELETE DATABASE
Purpose
DELETE DATABASE deletes the contents of one or more databases, including all objects
except for journal tables.
To drop the contents and structure of a journal table, use the SQL statement MODIFY DATABASE
with the DROP JOURNAL TABLE option. Refer to SQL Request and Transaction Processing for
more information.
Syntax
DELETE DATABASE (dbname) [ALL] [, (dbname) [ALL]] ...
  [, EXCLUDE (dbname) [ALL] | (dbname1) TO (dbname2) [, ...]] ;
where:
Syntax Element
Description
(dbname)
Name of the database.
ALL
Indicates the operation affects the named database and its
descendants.
EXCLUDE
Protects the named database from deletion.
(dbname)
Name of the database to protect from deletion.
ALL
Indicates the operation protects the named database and its
descendants.
(dbname1) TO (dbname2)
Alphabetical list of databases to protect from deletion.
The delimiting database names need not identify actual databases.
Database DBC is not part of a range.
Usage Notes
To delete a database, the user name specified in LOGON must have DROP privilege on the
database to be deleted.
To delete a database that is partially restored but unusable, delete the database with DELETE
DATABASE. Executing the statement deletes all access privileges for all objects in the database.
To release the locks that remain after the restore, run RELEASE LOCK before using DELETE
DATABASE.
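For illustration, with a hypothetical database TestDB that was left locked by an incomplete restore, the sequence might be:
RELEASE LOCK (TestDB) ALL;
DELETE DATABASE (TestDB) ALL;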
Deleting All Objects from the Teradata Database
In order to restore database DBC, no objects can exist in any database outside of DBC. Delete
all objects outside of DBC prior to restoring DBC by using this command:
DELETE DATABASE (DBC) ALL, EXCLUDE (DBC);
To drop permanent journal tables, use MODIFY USER or MODIFY DATABASE. (See the
MODIFY DATABASE/USER description in SQL Data Definition Language.)
Enter the command exactly as shown above. Specifying SYSUDTLIB as an object of this
DELETE DATABASE statement is not required. Teradata ARC removes the usual link between
SYSUDTLIB and DBC so that only DBC is excluded from the delete operation. SYSUDTLIB is
deleted so that DBC can be restored, however, it is deleted after all other databases have been
deleted. This allows any objects that have definitions based on UDTs stored in SYSUDTLIB to
be deleted before the UDTs themselves are deleted.
DELETE JOURNAL
Purpose
DELETE JOURNAL removes a journal subtable from the Teradata Database.
Syntax
DELETE {SAVED | RESTORED} JOURNAL
  (dbname) [ALL] | (dbname.tablename) [, ...]
  [, EXCLUDE (dbname) [ALL] | (dbname1) TO (dbname2) [, ...]] ;
where:
Syntax Element
Description
SAVED
Deletes the portion of the current journal that has previously been
saved with CHECKPOINT JOURNAL
RESTORED
Deletes the portion of a journal table that was restored from archive
for use in a recovery activity
(dbname)
Name of the database that contains the journal to delete
ALL
Indicates the operation affects the named database and its
descendants
(dbname.tablename)
Journal table within the named database to delete
All tables should be distinct within the list. Duplicate tables are
ignored.
EXCLUDE
Protects the named journal tables from deletion
(dbname)
Name of the protected journal table
ALL
Keyword to protect the named journal and all its descendants
(dbname1) TO (dbname2)
Alphabetical list of databases to be protected from deletion
The delimiting database names need not identify actual databases.
Database DBC is not part of a range.
Access Privileges
To delete a journal table, the user name specified in LOGON must have one of the following:
• The RESTORE privilege on the database or journal table being deleted
• Ownership of the database that contains the journal table
Usage Notes
Do not use the journal archive to recover all AMPs when all of the following conditions exist:
• The checkpoint with an access lock creates a saved journal.
• The journal is not a dual journal.
• AMPs are offline.
Transactions between the all-AMPs archive and the single-AMP archive might not be
consistent; therefore, do not delete a saved journal while an AMP that has no dual journal is offline.
LOGDATA
Purpose
LOGDATA specifies any additional logon or authentication data that is required by an
extended security mechanism.
Syntax
{ LOGDATA | .LOGDATA } "logon-data" ;
Usage Notes
The format of data in LOGDATA varies, depending on the logon mechanism being used.
However, to take effect, LOGDATA must be specified (with a LOGMECH statement) before
LOGON.
The data that is required by the logon mechanism is defined as logon-data, as seen in the
syntax diagram above. This logon-data must be valid for all of the Teradata ARC sessions that
will be logging on. For most logon mechanisms, logon-data is case-sensitive, and it must be
enclosed in double quotes if it contains any white space characters or non-alphanumeric
characters.
For more information about extended security mechanisms and LOGDATA, see Teradata
Call-Level Interface Version 2 Reference for Workstation-Attached Systems.
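For illustration only, a job script that uses an extended security mechanism might begin with statements of the following general form, where the mechanism name and the contents of logon-data are placeholders that depend on the mechanism and on the site's security configuration:
LOGMECH mechanism-name;
LOGDATA "mechanism-specific-logon-data";
LOGON tdpid/;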
LOGGING ONLINE ARCHIVE OFF
Purpose
LOGGING ONLINE ARCHIVE OFF ends the online archive process.
Syntax
LOGGING ONLINE ARCHIVE OFF FOR
  (databasename) [ALL] | (databasename.tablename) [, ...]
  [, EXCLUDE (databasename) [ALL] | (database1) TO (database2) [, ...]]
  [, OVERRIDE] ;
where:
Syntax Element
Description
database
Indicates the database that is included in online archive logging
ALL
[Optional] Indicates that the specified database and all of its child
databases are included in the statement
databasename.tablename
Indicates the table and database that is included in online archive
logging
EXCLUDE
[Optional] Specifies the databases that are excluded from the
statement
OVERRIDE
[Optional] Allows online archive logging to be stopped:
• By someone other than the user who initiates the process
• After the dictionary dump
Usage Notes
An online archive is a process in which rows are archived from a table at the same time update,
insert, or delete operations on the table are taking place. When an online archive is initiated
on a table or a database, Teradata ARC creates a log for the specified table or a separate log for
each table in the specified database. LOGGING ONLINE ARCHIVE OFF ends the online
archive. The log, which contains all changes to the table, is archived as a part of the archive
process. When the table is restored or copied, the changes recorded in the table’s log are used
to roll back the changes in the table. In other words, the table is rolled back to the point that
was established when online archive logging was started on that table.
LOGGING ONLINE ARCHIVE OFF is used after the archive process has completed, unless
the OVERRIDE option is used. If the OVERRIDE option is specified, logging stops even if a
table is in the process of being archived. This option is useful for archiving only the dictionary
rows of a database, without archiving the database itself.
LOGGING ONLINE ARCHIVE OFF explicitly ends online archive logging on the tables and
databases that were just archived or in the process of being archived. When the online archive
logging process is terminated, the logging of changed rows is stopped, and the log is deleted.
If the online archive logging process is not terminated on a table after the table has been
archived, logging continues, consuming system resources such as CPU cycles and perm space.
In time, it is possible to run out of perm space, causing the Teradata Database to crash.
If any table is still enabled for Online Logging either after the archive is finished or due to a
graceful abnormal termination of Teradata ARC, a warning message appears that indicates
one or more tables are still enabled for online logging. For more details, see section “Archiving
Online” on page 51.
Example
This example stops online archive logging on Table1.
LOGGING ONLINE ARCHIVE OFF FOR (DatabaseA.Table1);
Example
This example stops online archive logging on all the tables in the DatabaseA.
LOGGING ONLINE ARCHIVE OFF FOR (DatabaseA);
Example
This example archives only the dictionary rows of DatabaseA, without archiving the database
itself.
LOGGING ONLINE ARCHIVE ON FOR (DatabaseA);
ARCHIVE DICTIONARY TABLES (DatabaseA), RELEASE LOCK, FILE=ARCHIVE;
LOGGING ONLINE ARCHIVE OFF FOR (DatabaseA), OVERRIDE;
LOGGING ONLINE ARCHIVE ON
Purpose
LOGGING ONLINE ARCHIVE ON begins the online archive process.
Use this statement for cluster archives.
Syntax
LOGGING ONLINE ARCHIVE ON FOR
  (databasename) [ALL] | (databasename.tablename) [, ...]
  [, EXCLUDE (databasename) [ALL] | (database1) TO (database2) [, ...]]
  [, NOSYNC] ;
where:
Syntax Element
Description
database
Indicates the database that is included in online archive logging
ALL
[Optional] Indicates that the specified database and all of its child databases are included in
the statement
databasename.tablename
Indicates the table and database that is included in online archive logging
EXCLUDE
[Optional] Specifies the databases that are excluded from the statement
NOSYNC
Enables online archive on all tables involved in the archive but does not require all tables to be
enabled before access to those tables is allowed. Tables blocked during initial enabling request
are retried as individual requests; as a result, those tables will have different sync points. As
each table or group of tables is enabled for online logging, it becomes available for access by
other jobs.
Access Privileges
To execute LOGGING ONLINE ARCHIVE ON, the user name specified in the logon
statement must have one of these privileges:
• DUMP privilege on the database or table that is being logged. The privilege is granted to a user or an active role for the user.
• Ownership of the database or table.
Usage Notes
An online archive is a process in which rows are archived from a table at the same time update,
insert, or delete operations on the table are taking place. When an online archive is initiated
on a table or a database, Teradata ARC creates a log for the specified table or a separate log for
each table in the specified database. The log, which contains all changes to the table, is saved as
a part of the archive process. The changes recorded in the log can be applied to any changes to
the table that occurred during the archive, such as a restore operation.
EXAMPLE
This example starts online archive logging on Table1.
LOGGING ONLINE ARCHIVE ON FOR (DatabaseA.Table1);
EXAMPLE
This example starts online archive logging on Table1, Table2 and Table3.
LOGGING ONLINE ARCHIVE ON FOR
(DatabaseA.Table1),
(DatabaseA.Table2),
(DatabaseA.Table3);
Use LOGGING ONLINE ARCHIVE ON to start online archive logging for specified objects
before submitting an archive job for those objects. After logging is started for a target table,
any changed rows to that table are recorded into a log that grows until the database runs out of
space or LOGGING ONLINE ARCHIVE OFF is used to end online archive logging.
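For illustration, a job that brackets an archive with online logging might use statements of the following general form; the database name and archive file name are hypothetical:
LOGGING ONLINE ARCHIVE ON FOR (DatabaseA);
ARCHIVE DATA TABLES (DatabaseA), RELEASE LOCK, FILE=ARCHIVE;
LOGGING ONLINE ARCHIVE OFF FOR (DatabaseA);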
Database objects can be created during online archive. The new database objects created
during online archive may be included in the archive if it is a full database archive. If the new
database object is a table, then that table will be archived without the benefit of Online Archive,
meaning that a read lock will be applied to the table at the beginning of the archive and will
remain in place until the archive is complete, thereby blocking any write transactions to that
table while it is being archived.
The total number of tables that can be in an active state of online archive logging at the same
time is 1,000,000.
For information on the NOSYNC keyword as it relates to Online Archive, see “Archiving
Online” on page 51.
For tables involved in a load operation, see “Archiving Online” on page 51.
If any table is still enabled for Online Logging either after the archive is finished or due to a
graceful abnormal termination of Teradata ARC, a warning message appears that indicates
one or more tables are still enabled for online logging. For more details, see section “Archiving
Online” on page 51.
Disallowed Tables
There are restrictions on the types of tables used with LOGGING ONLINE ARCHIVE ON.
Do not specify these tables, or the online archive process will abort:
• Temporary tables
• Permanent journal tables
• Tables that already have online archive logging activated
LOGMECH
Purpose
LOGMECH sets the extended security mechanism used by Teradata ARC to log onto a
Teradata Database.
Syntax
{ LOGMECH | .LOGMECH } logon-mechanism ;
Usage Notes
To take effect, specify LOGMECH with LOGDATA before specifying LOGON.
Use logon-mechanism as the name of the logon mechanism for a specific archive job. The
specified logon mechanism is then used for all Teradata ARC sessions that log on.
For a complete list of supported logon mechanisms, see Teradata Call-Level Interface Version 2
Reference for Workstation-Attached Systems.
LOGOFF
Purpose
LOGOFF ends all Teradata Database sessions logged on by the task and terminates Teradata
ARC.
Syntax
{ LOGOFF | QUIT | .LOGOFF | .QUIT } ;
Usage Notes
When Teradata ARC accepts the LOGOFF statement, it issues this message and ends the
session:
ARCMAIN HAS LOGGED OFF n SESSIONS
Teradata ARC control language parser ignores any statements following the LOGOFF
statement.
LOGON
Purpose
LOGON specifies the name of the Teradata machine that Teradata ARC should connect to, as
well as the user name and password that should be used.
Note: A LOGON string that does not contain a userid and a password is interpreted as an SSO
(single sign-on) logon.
Syntax
{ LOGON | .LOGON } [tdpid/] [userid] [, password] [, 'acctid'] ;
where:
Syntax Element
Description
tdpid
The identifier of the Teradata machine that will be used for this job
Note: tdpid is followed by a “/”.
The tdpid must be a valid identifier configured for the site.
If a value is not entered, tdpid defaults to the id established by the system administrator.
userid
User identifier
A userid can be up to 30 characters.
Your userid must be authorized to perform the operations specified by subsequent utility statements.
Note: This field is optional if single sign-on (SSO) is implemented
password
Password associated with the user name
A password can be up to 30 characters.
For a null password, include a comma before the semicolon.
Note: This field is optional if single sign-on (SSO) is implemented
’acctid’
Account identifier associated with the userid
If this value is omitted, Teradata ARC uses the default account identifier defined when your userid was
created.
Usage Notes
The LOGON statement causes Teradata ARC to log on only two control sessions for the user.
When Teradata ARC encounters ARCHIVE, RESTORE, or COPY, it automatically logs on any
additional data sessions specified in the SESSIONS runtime parameter. ARCHIVE, RESTORE,
and COPY are the only statements that need these sessions.
LOGON establishes sessions by:
•
Identifying the user to the Teradata Database
•
Specifying the account to charge for system resources
When LOGON is accepted, this message is displayed:
2 SESSIONS LOGGED ON
If the user who is specified by userid is logged onto the Teradata Database through a program
other than Teradata ARC, Teradata ARC terminates as soon as it encounters BUILD, COPY,
RESTORE, ROLLBACK, or ROLLFORWARD. To determine whether a user is logged onto the
Teradata Database, Teradata ARC performs a SELECT on database DBC.sessionInfo.
Therefore, the user attempting to log on must have SELECT privileges on DBC.sessionInfo.
If a LOGON statement has already been run by Teradata ARC, Teradata ARC logs off all
Teradata Database sessions from the previous logon.
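For illustration, a logon that supplies the optional tdpid, password, and account identifier might look like the following; all of the values shown are hypothetical:
.LOGON prodtdp/arcuser,arcpassword,'acct01';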
The implementation of the SSO feature provides the ability to log onto a workstation once
and then access the Teradata Database without having to provide a user name and password.
SSO saves the time required to enter these fields, which are now optional. Additionally, certain
authentication mechanisms (e.g., Kerberos, NTLM) do not send passwords across the
network to further heighten security. SSO requires support of both the database and the client
software.
RELEASE LOCK
Purpose
RELEASE LOCK removes a utility (HUT) lock from the identified databases or tables.
Syntax
RELEASE LOCK
  (dbname) [ALL] | (dbname.tablename) [, ...]
  [, EXCLUDE (dbname) [ALL] | (dbname1) TO (dbname2) [, ...]]
  [, {CLUSTER | CLUSTERS} = nnn [, ...]]
  [, AMP = n [, ...]]
  [, ALL]
  [, OVERRIDE]
  [, BACKUP NOT DOWN] ;
where:
Syntax Element
Description
(dbname)
Name of the database on which the HUT lock is to be released
ALL
Indicates the operation affects the named database and its descendants
(dbname.tablename)
Name of the table on which the HUT lock is to be released
EXCLUDE
Prevents HUT locks on the named databases from being released
(dbname)
Name of database to exclude from release
ALL
Excludes the named database and its descendants
(dbname1) TO (dbname2)
Alphabetical list of databases to exclude from release
The delimiting database names need not identify actual databases.
Database DBC is not included as part of a range.
CLUSTERS = nnn or
CLUSTER = nnn
Specifies AMP cluster for which locks are released
nnn is the number of the cluster on which locks are released.
Specify up to 4096 cluster numbers.
Locks are not released on any offline AMPs in the specified clusters unless the ALL option is
specified.
ALL
Releases locks on AMPs that are offline when RELEASE LOCK is used
Teradata ARC releases HUT locks when the AMPs return to online operation.
Note: Do not use the ALL keyword with the AMP option.
OVERRIDE
Allows HUT locks to be released by someone other than the user who set them
BACKUP NOT DOWN
Allows HUT locks to remain on non-fallback tables (with single after image journaling) for
those AMPs where the permanent journal backup AMP is down
Teradata ARC releases all other HUT locks requested.
Access Privileges
To release the HUT locks on a database or a table in a database, the user name specified in
LOGON must have one of the following:
• The DUMP or RESTORE privilege on the database or table where HUT locks are to be released
• Ownership of the database or table where HUT locks are to be released
When the OVERRIDE keyword is specified, the user name specified in LOGON must have
one of the following:
• The DROP DATABASE privilege on the specified database or table
• Ownership of the database or table
Usage Notes
During an archive or recovery operation, Teradata ARC places HUT locks on the objects
affected by the operation. These locks remain active during a Teradata Database or client
restart, and must be explicitly released by the RELEASE LOCK statement or by the RELEASE
LOCK keywords available on the ARCHIVE, REVALIDATE REFERENCES FOR, ROLLBACK,
ROLLFORWARD, RESTORE and BUILD statements. Teradata ARC issues a message to report
that the release operation is complete. It releases HUT locks when the AMPs return to online
operation.
If the user name specified in the logon statement placed the HUT locks, RELEASE LOCK or the
equivalent option in a utility statement releases all HUT locks placed on the specified database
or table. If the OVERRIDE keyword is specified in RELEASE LOCK, Teradata ARC releases all
HUT locks on the specified database or table.
If HUT locks were placed at the database-level, they must be released at the database-level.
A table-level release lock against a database-level lock has no effect. Conversely, a database-level
release lock releases all locks on the database, including any database-level lock and all
table-level locks.
Do not specify ALL with the AMP keyword.
SYSUDTLIB can now be specified as an individual database name in the RELEASE LOCK
statement.
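For illustration, a statement of the following general form (with a hypothetical database name) releases all HUT locks on a database and its descendants, including locks placed by another user:
RELEASE LOCK (PayrollDB) ALL, OVERRIDE;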
BACKUP NOT DOWN Keywords
With Teradata Database, BACKUP NOT DOWN allows HUT locks to remain on non-fallback
tables with remote single after image journaling when the backup AMP is offline.
RESTORE
Purpose
RESTORE moves data from archived files to the same Teradata Database from which it was
archived, or moves data to a Teradata Database other than the one used for the archive, if the
database DBC is already restored. To use RESTORE to import data to the Teradata Database,
the table being restored must exist on the target Teradata Database.
RESTORE between the same database version is supported. RESTORE from an older database
version to a newer database version is supported. RESTORE of a newer database version back
to an older database version is not supported.
Note: Beginning with TTU 13.00.00, there is support for restoring all objects as individual
objects. For a complete list of objects supported by Teradata ARC, see “Appendix A Database
Objects” on page 271.
Syntax
RESTORE { DATA | DICTIONARY | NO FALLBACK | JOURNAL } { TABLES | TABLE }
  (dbname) [ALL] [options] | (dbname.tablename) [options] | ALL FROM ARCHIVE [, ...]
  [, EXCLUDE (xdbname) [ALL] | (xdbname1) TO (xdbname2) [, ...]]
  [, {CLUSTER | CLUSTERS} = nnn [, ...]]
  [, AMP = n [, ...]]
  [, RESTORE FALLBACK]
  [, NO BUILD]
  [, ABORT]
  [, RELEASE LOCK]
  [, DBSIZEFILE = filename]
  [, SKIP JOIN INDEX | SKIP STAT COLLECTION]
  [, FILE = name] ;
where:
Syntax Element
Description
DATA TABLE or DATA
TABLES
Restores both fallback and non-fallback tables to all AMPs or clusters of AMPs
DICTIONARY TABLE or
DICTIONARY TABLES
Restores only dictionary rows
This option is only allowed for:
• Archives created using the DICTIONARY option of the ARCHIVE statement
• All-AMPs journal archives
NO FALLBACK TABLE or
NO FALLBACK TABLES
Restores tables without fallback to complete a previous all-AMPs or cluster restore where
AMPs were offline, or to restore non-fallback data to specific AMPs recovering from disk
failure
JOURNAL TABLE or
JOURNAL TABLES
Restores an archived journal for subsequent use in a roll operation
ALL FROM ARCHIVE
Specifies that all databases and tables in the given archive file will be restored and that any
existing databases and tables will be overwritten
This option is used in place of database/table names normally specified.
No other database or table names can be specified to be restored with this option.
(dbname)
Name of the database to restore
This form restores all tables of the type specified, and all stored procedures (if specified
type is FALLBACK) that are in the named database and on the input file.
ALL
Restores the named database and all descendants
(dbname.tablename)
Name of a table within the named database to restore
All tables should be distinct within the list. Duplicate table names are ignored.
EXCLUDE
Prevents specified databases from being restored
(xdbname)
Name of database to exclude from restore
ALL
Excludes the named database and all descendants
(xdbname1) TO (xdbname2)
Alphabetical list of client databases to exclude from the restore
The delimiting database names need not identify actual databases.
Database DBC is not part of a range.
CLUSTERS = nnn or
CLUSTER = nnn
Specifies AMP clusters to restore
nnn is the cluster number.
Specify up to 4096 clusters.
AMP = n
Specifies the AMP (or a list of AMPs) to restore for Teradata Database, where n is the
number of the AMP
Specify up to five AMPs with this option.
RESTORE FALLBACK
Restores the fallback copy of primary and unique secondary indexes while restoring the
data
This option applies only to data table restores of fallback tables.
Note: When the RESTORE FALLBACK option is specified and the system formats or hash
functions of the source and target systems are different, Teradata ARC disables the
RESTORE FALLBACK request.
NO BUILD
Prevents the Fallback and Secondary Index rows from being created
If NO BUILD is requested when restoring database DBC, the request will be ignored.
If the NO BUILD keywords are used during a RESTORE statement, a separate BUILD
statement must be run for all databases and/or tables that were restored. The tables will not
be accessible until a BUILD statement is run.
RELEASE LOCK
Releases utility (HUT) locks on the named databases when the restore completes
successfully
ABORT
Aborts the restore if an AMP to which a non-fallback table is to be restored is not online
This option affects only an all-AMPs restore. It does not affect a specific AMP restore.
EXCLUDE TABLE or
EXCLUDE TABLES
Prevents individual objects in the listed database from being restored
(xtablename)
Name of an individual object in the designated database to exclude
Multiple objects are separated by commas.
This form (without a database name prefix) is used only when ALL has not been specified
for the current object.
(xdbname.xtablename)
List of fully qualified objects (prefixed by database name) to exclude
PARTITIONS WHERE
Specifies the conditional expression for partition-level operations
(!conditional expression!)
The conditional expression for specifying selected partitions
LOG WHERE
Specifies the rows that are logged to an error table for manual insertion and deletion
(!conditional expression!)
The conditional expression for specifying rows to log to the error table when restoring
selected partitions
ALL PARTITIONS
Restores all archived partitions for an archived PPI object
QUALIFIED PARTITIONS
Restores the same partitions specified in a previous selected-partition restore
ERRORDB and
ERRORTABLES
Specifies the location of the error log for partition-level operations
DBSIZEFILE
Prompts Teradata ARC to read the file specified in filename for the desired perm space for
each specified database or user
“Resizing the Teradata Database with DBSIZEFILE” on page 246 has more information.
filename
Specifies the name of the text file that contains pairs of database or user names and sizes
The information is used in resizing a database.
SKIP JOIN INDEX
Skip the restore of join and hash indexes
SKIP STAT COLLECTION
Skip the restore of Stat Collections
Access Privileges
To restore a database or a data table within the database, the user name specified in LOGON
must have one of the following:
• The RESTORE privilege on the database or table that is being restored
• Ownership of the database or table
Only these users can restore database DBC:
• User DBC
• Users granted the RESTORE privilege on database DBC
Note: Full access privileges are required for archiving or restoring DBC. If DBC does not have full
privileges, those rights need to be granted prior to performing the archive. If the source system is not
available, such as when restoring from tape, then once DBC has been restored, full privileges have to be
explicitly granted for each DBC object, view, and macro before running the post_dbc_restore script and
DIP.
To restore a journal table, the user name must have one of the following:
• The RESTORE privilege on the journal table itself or the database that contains the journal table
• Ownership of the database that contains the journal table
To use the restored journal in recovery activities also requires RESTORE privilege on the data
tables being rolled forward.
To restore a UIF object, the user named in the LOGON statement must have:
• The RESTORE privilege on the database to be restored
Note: You cannot grant the RESTORE privilege on the UIF object itself, but with the
database-level RESTORE privilege, you can do a table-level restore of the individual UIF
object.
Usage Notes
Database SYSUDTLIB is linked with database DBC and is only restored if DBC is restored.
SYSUDTLIB cannot be specified as an individual object in a RESTORE statement.
If database DBC is involved in a restore operation, it is always restored first, followed by
database SYSUDTLIB. If additional databases are being restored, they follow SYSUDTLIB in
alphabetical order.
An all-AMPs restore or a specific AMP restore can use archive files created by any of the
following:
• An all-AMPs archive
• A cluster archive
• A specific AMP archive
• An archive of selected partitions
During a restore of a Teradata Database, Teradata ARC issues messages in the following
format as it restores different types of objects:
"UDFname" - n,nnn,nnn BYTES, n,nnn,nnn ROWS RESTORED
"procedurename" - n,nnn,nnn BYTES, n,nnn,nnn ROWS RESTORED
"macroname" - MACRO RESTORED
"methodname" - METHOD RESTORED
"methodname" - n,nnn,nnn BYTES, n,nnn,nnn ROWS RESTORED
"tablename" - n,nnn,nnn BYTES, n,nnn,nnn ROWS RESTORED
"triggername" - TRIGGER RESTORED
"UDTname" - TYPE RESTORED
"viewname" - VIEW RESTORED
When all tables are restored, Teradata ARC issues this message:
STATEMENT COMPLETED
At the start of a restore, or after a system restart, all offline AMPs are listed in the output log.
After each restore, compare the contents of the output log with listings of tables that were
restored to determine which tables were not restored to offline AMPs.
Teradata ARC supports duplicate rows in a restore operation. However, if a restart occurs
during the restore of a table with duplicate rows, the duplicate rows involved in the restart
might be removed.
When a database or table restores from an older version source system with a different
configuration, it affects the time and space required to restore data to the destination system.
The data must be redistributed among the new set of AMPs. The system uses a different
algorithm to copy and restore when the number of AMPs is different between the source and
destination systems. This new algorithm is usually faster when the configurations are
different. The old algorithm may be faster in some restricted situations.
Unique secondary indexes are distributed across all AMPs. Therefore, RESTORE does not
rebuild unique secondary indexes, even if the indexes are in a cluster that is not being restored.
Use the BUILD statement to build and revalidate unique indexes.
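For illustration, an all-AMPs restore followed by a separate build might use statements of the following general form; the database name and archive file name are hypothetical, and the BUILD statement syntax is described earlier in this chapter:
RESTORE DATA TABLES (HRDB) ALL, NO BUILD, FILE = ARCH01;
BUILD DATA TABLES (HRDB) ALL, RELEASE LOCK;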
All-AMP Restores
If an archive file from an all-AMPs archive is used for an all-AMPs restore of data tables,
Teradata ARC deletes the current copy of the data tables being restored before it restores the
archive copy.
Cluster AMP Restores
When restoring a cluster archive to a reconfigured system, run the cluster archives serially. If
this restore scenario is detected by ARCMAIN, the NO BUILD option is automatically
enabled, if not specified, and an explicit BUILD command must be run after all restore jobs
are completed.
When using NO BUILD for a cluster restore operation, submit an all-AMPs BUILD statement
after all the clusters are restored. Do not use the NO BUILD keywords for fallback table cluster
restore operations. If a cluster loses an AMP before the all-AMPs build, restore that cluster
manually.
If an archive file from a cluster archive for an all-AMPs restore of data tables is used, a
dictionary restore must precede the all-AMPs data tables restore.
Do not access fallback tables until a build operation is completed. Do not update non-fallback
tables until a build operation is completed.
Specific-AMP Restores
Use the specific-AMP restore to restore non-fallback data tables. Only perform this recovery
when the AMP to be restored is online to normal Teradata Database operations. If an archive
file from a specific-AMP archive is used, Teradata ARC does not delete the current copy of any
non-fallback table being restored before it restores the archive copy.
If one or more AMPs are offline during archive or restore, it might be necessary to perform
specific-AMP restores after completing an all-AMPs or cluster restore. For example, if an AMP
is online at the start of a restore and goes offline during the process, it might be necessary to
restore some non-fallback tables to the AMP when it comes back online. If, however, an AMP
is offline at the start of a restore and comes back online before the procedure completes,
Teradata ARC restores subsequent non-fallback tables to the AMP. In either case, specify the
NO FALLBACK TABLES option of RESTORE.
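For illustration, a specific-AMP restore of non-fallback tables to a single AMP might take the following general form; the database name, AMP number, and archive file name are hypothetical:
RESTORE NO FALLBACK TABLES (HRDB) ALL, AMP=5, RELEASE LOCK, FILE = ARCH01;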
Partial-AMPs Restores
A Partial-AMPs restore occurs when restoring with fewer streams in a multi-stream restore
than were used in a multi-stream archive. For a discussion regarding Partial-AMPs restore, see
“Restoring with Partial-AMPs” on page 79.
Journal Table Restores
Restore dictionary definitions of the journal tables with the RESTORE DICTIONARY
statement against a journal archive. This creates empty journal tables.
If archive data sets containing journal tables are concatenated by some operation outside of
the Teradata Database, subsequent journal tables are not accessed from the concatenation by
the restore process.
Restore the definition of the journal table to a reconfigured system by performing a
DICTIONARY RESTORE from an all-AMPs journal archive.
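For example, a minimal sketch (the database name and file name are hypothetical), assuming JNLARCH identifies an all-AMPs journal archive:
RESTORE DICTIONARY TABLES (Payroll) ALL,
RELEASE LOCK,
FILE=JNLARCH;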
Table Restores
When restoring an entire table, restore an all-AMPs archive before any specific-AMP archive
files. If restoring a single AMP from a disk failure, use the specific-AMP archive. Table restores
accomplish the following:
• An all-AMPs restore restores all specified tables that also exist on the input archive file.
• An all-AMPs data table restore from an all-AMPs or dictionary tables archive file also restores the Data Dictionary entries for all the objects that belong to the databases being restored.
Dictionary Table Restores
An all-AMPs dictionary tables restore must precede cluster restores involving fallback tables. A
DICTIONARY restore leaves all fallback tables recovered in a state of restoring and leaves all
indexes invalidated for non-fallback tables.
Restoring After a Disk Failure
To recover from a disk failure, run the Disk Copy and Table Rebuild utilities to restore
database DBC and all user data tables defined with the fallback option, then perform a specific
AMP restore.
To specify the non-fallback tables that were not restored while the AMP was inoperative,
specify database names or names of individual tables with the EXCLUDE keyword.
When restoring part of a non-fallback table during an all-AMPs restore is not possible because
an AMP is offline, the system invalidates unique secondary indexes for the table. After the
AMP is back online and the table has been restored, either drop and then recreate the unique
secondary indexes for the table, or use BUILD to recreate secondary indexes.
To restore from a catastrophic condition (for example, to restore database DBC) on a system
that has permanent journaling, restore the system using an archive created from an ARCHIVE
JOURNAL TABLES statement in the following order:
1 RESTORE DATA TABLES (DBC) ...;
2 RESTORE DICTIONARY TABLES (DBC) ALL, EXCLUDE (DBC), (TD_SYSFNLIB)..., FILE=ARCHIVE;
Note: For this example, ARCHIVE is a journal archive.
3 RESTORE DATA TABLES (DBC) ALL, EXCLUDE (DBC), (TD_SYSFNLIB)...;
4 RESTORE JOURNAL TABLES (DBC) ALL, EXCLUDE (DBC), (TD_SYSFNLIB);
Note: Follow Step 4 only if rolling the journal images forward or backward.
RESTORE builds the fallback copy of primary and non-unique secondary index subtables
within a cluster. If the system fails during this process, RESTORE rebuilds fallback copies
automatically.
Referential Integrity
When a table is restored into a Teradata Database, the dictionary definition of that table is also
restored. The dictionary definitions of both the referenced (parent) and referencing (child)
table contain the complete definition of a reference.
However, restoring tables can create an inconsistent reference definition in the Teradata
Database. Consequently, when either a referenced (parent) or a referencing (child) table
is restored, the reference might be marked as inconsistent in the dictionary definition tables.
When a table is marked as inconsistent with respect to referential integrity, no updates, inserts
or deletes are permitted. The table is fully usable only when the inconsistencies are resolved.
• If both the referenced (parent) and the referencing (child) tables are restored, use REVALIDATE REFERENCES FOR to validate references. See “REVALIDATE REFERENCES FOR” on page 249. If inconsistent constraints remain after a REVALIDATE REFERENCES FOR statement has been executed, use ALTER TABLE DROP INCONSISTENT REFERENCES to remove inconsistent constraints.
• If either the referenced (parent) table or the referencing (child) table is restored (but not both tables), the REVALIDATE REFERENCES FOR statement is not applicable. Use ALTER TABLE DROP INCONSISTENT REFERENCES to drop the inconsistent references.
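For example, a minimal sketch of the two-step cleanup (the database and table names are hypothetical, and ALTER TABLE is a Teradata SQL statement submitted outside of Teradata ARC):
REVALIDATE REFERENCES FOR (Sales.Orders), RELEASE LOCK;
ALTER TABLE Sales.Orders DROP INCONSISTENT REFERENCES;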
Restores of Selected Partitions
Restoring selected partitions of a PPI table allows archiving and restoring only a subset of data
in a PPI table.
Restrictions on Restoring Selected Partitions
Before attempting to restore selected partitions, read “Potential Data Risks When Archiving/
Restoring Selected Partitions” on page 45 and “Changes Allowed to a PPI Table” on page 190.
The following restrictions apply to restoring selected partitions:
• Restoring selected partitions is not allowed to a machine with a hash function that is different from the source machine, but a different configuration is allowed.
• To restore or copy selected partitions, a table must already exist on the target system. For a copy, the existing table must have been created by a full-table copy from the source machine.
• Restoring selected partitions is not allowed to a table that has undergone any of the following major DDL changes:
  • Adding, modifying, or dropping columns
  • Certain changes during RESTORE and COPY operations. For more information, see “Restrictions on Copying Partitioned Data” on page 210.
  • Changing primary index columns
  • Changing from PPI to NPPI, or vice versa
  • Adding or changing referential integrity constraints
Other restrictions exist for archiving selected partitions of PPI tables. For more information,
see “Restrictions on Archiving Selected Partitions” on page 189.
Keywords for Restoring Selected Partitions
These options are available for restoring selected partition archives:
• PARTITIONS WHERE
• LOG WHERE
• ERRORDB/ERRORTABLES
• ALL PARTITIONS
• QUALIFIED PARTITIONS
The next sections describe how to use the options for selecting partitions of PPI tables.
PARTITIONS WHERE Keyword
Use the PARTITIONS WHERE option to specify the conditional expression that defines the
partitions to be restored. The following restrictions apply to PARTITIONS WHERE:
• The object is an individual table (not a database).
• The source and target tables have a PARTITIONS BY expression defined.
• The restore is an all-AMP restore (not a dictionary, cluster, or journal restore).
• If the table belongs to a database that is specified in RESTORE, the table is excluded from the database-level object (with EXCLUDE TABLES) and is individually specified.
• Any name specified in the conditional expression is within the table being specified. (Using table aliases and references to databases, tables, or views that are not specified within the target table results in an error.) It is recommended that the only referenced columns in the conditional expression be the partitioning columns or the system-derived column PARTITION of the table. References to other columns do not contribute to partition elimination, and might accidentally qualify more partitions than intended.
LOG WHERE Keyword
If the PARTITIONS WHERE option does not capture all the rows that need to be restored, use
the LOG WHERE option. This option inserts, into a Teradata-generated error table, the
archived rows that fall outside the partitions specified by the PARTITIONS WHERE conditional
expression and that match the LOG WHERE conditional expression.
Use the option only if PARTITIONS WHERE is also specified for the object. If LOG WHERE
is omitted, the default is to log to the error table only the rows in the partitions being restored
that have errors.
ERRORDB/ERRORTABLES Keyword
The ERRORDB and ERRORTABLES options are mutually exclusive; specify only one option
for an object. Also, specify either the PARTITIONS WHERE or ALL PARTITIONS option
when using either ERRORDB or ERRORTABLES.
• If ERRORTABLES is specified without a database name, or if neither ERRORTABLES nor ERRORDB is specified, the error table is created in the same database as the base table.
• If ERRORTABLES is not specified, by default the naming convention for the error table is the name of the base table plus the prefix “RS_”. For example, the error table for a table named “DataTable” is “RS_DataTable.” Names are truncated if they exceed 30 bytes.
ALL PARTITIONS Keyword
Use the ALL PARTITIONS option to restore all of the archived partitions in a table. These
restrictions apply:
• The object being restored is an individual table, or the ALL FROM ARCHIVE option is specified.
• The source and target tables contain a defined PARTITIONS BY expression.
• The restore is an all-AMP restore rather than a dictionary, cluster, or journal restore.
• PARTITIONS WHERE is not specified for the object.
• The partition bounding condition must have been well-defined when the backup was performed. A bounding condition is well-defined if the PARTITION BY expression on the source table consists of a single RANGE_N function, and if the specified range does not include NO RANGE or UNKNOWN. (Use ANALYZE to determine whether a selected partition is well-defined.)
If a conditional expression is not well-defined, Teradata ARC issues an error. Use
PARTITIONS WHERE for the restore operation rather than ALL PARTITIONS.
QUALIFIED PARTITIONS Keyword
Use this option only to restore a specific-AMP archive after restoring selected partitions from
an all-AMP archive done while an AMP is down.
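For example, a hypothetical sketch, assuming that QUALIFIED PARTITIONS is specified as a table-level option in the same position as ALL PARTITIONS and that AMP 3 was offline during the earlier all-AMP restore (all names are illustrative):
RESTORE DATA TABLES
(SYSDBA.TransactionHistory)
(QUALIFIED PARTITIONS),
AMP=3,
RELEASE LOCK,
FILE=ARCHIVE;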
Examples of Keywords for Restoring Selected Partitions
All of the rows of the TransactionHistory table for the month of July 2002 are restored in this
example:
RESTORE DATA TABLES
(SYSDBA.TransactionHistory)
(PARTITIONS WHERE
(! TransactionDate BETWEEN DATE ‘2002-07-01’ AND DATE ‘2002-07-31’ !)
),
RELEASE LOCK,
FILE=ARCHIVE;
All of the rows for the TransactionHistory table for the month of July 2001 are restored and all
rows for the month of August 2001 are logged to an error table called TransError in this
example:
RESTORE DATA TABLES
(SYSDBA.TransactionHistory)
(PARTITIONS WHERE
(! TransactionDate BETWEEN DATE ‘2001-07-01’ AND DATE ‘2001-07-31’ !),
LOG WHERE
(! TransactionDate BETWEEN DATE ‘2001-08-01’ AND DATE ‘2001-08-31’ !),
ERRORTABLES SYSDBA.TransError
),
RELEASE LOCK,
FILE=ARCHIVE;
The following example restores all data for all tables in database SYSDBA, including all
partitions archived for table TransactionHistory:
RESTORE DATA TABLES
(SYSDBA)
(EXCLUDE TABLES (TransactionHistory)),
(SYSDBA.TransactionHistory)
(ALL PARTITIONS),
RELEASE LOCK,
FILE=ARCHIVE;
Using Keywords with RESTORE
RESTORE is affected by the following keywords. For information about the keywords that
apply to restoring partitioned data, see “Restores of Selected Partitions” on page 239.
RESTORE DICTIONARY TABLE Keywords
Use the all-AMPs journal archive to restore journal dictionary rows to a Teradata Database
that has been reconfigured since the journal archive was created. If the dictionary is restored
for an entire database, Teradata ARC restores the definitions for all the objects belonging to
that database. If specific tables are restored, Teradata ARC restores only the definitions for
those tables.
AMP Keyword
Do not specify this option for:
•
Dictionary table restores
•
Database DBC when restoring data tables
With Teradata Database, use the AMP=n option to specify up to five AMPs.
This option applies only with the NO FALLBACK TABLE or JOURNAL TABLE options. It is
useful following all-AMPs restores when all AMPs are not available at the time of the
operation and non-fallback or journal receiving tables are involved.
When restoring journal tables to specific processors, specify the processor to recover with the
journal.
If the journal to restore contains after images on a backup processor, the journal rows restored
by Teradata ARC are sent to the backup processor for the AMP that is specified in the
statement.
CLUSTER Keyword
The following rules apply to this option:
• The option is allowed only when restoring data tables.
• Restore the dictionary before restoring the cluster.
• Restore a cluster only from a cluster archive.
• Perform a build operation on a non-fallback table with unique secondary indexes before updating it using Teradata SQL.
• The building of fallback data during cluster restores depends on the Teradata Database software release.
Teradata ARC builds all indexes except unique indexes. Do not update a fallback table with
unique indexes using Teradata SQL after a cluster restore until a build is performed.
If any AMPs in the list of clusters are offline, restore any non-fallback table to those AMPs
when they are brought back online. A specific AMP restore can be from a specific AMP or
cluster archive.
Teradata ARC releases HUT locks at the cluster level.
RESTORE FALLBACK Keywords
If this option is not specified, Teradata ARC restores only a single copy and then builds the
fallback copy.
If an AMP is down during either the archive or the restore, Teradata ARC does not restore
non-unique indexes.
If the RESTORE FALLBACK and NO BUILD options are both specified, one of the following
occurs:
• If the archive is for all AMPs, then Teradata ARC ignores NO BUILD.
• If the archive and restore are both cluster level, Teradata ARC ignores NO BUILD.
• If the table has unique indexes, the restore operation is not complete until an all-AMPs BUILD statement is applied to the fallback tables. Teradata ARC builds non-unique indexes within the cluster.
If the RESTORE FALLBACK and NO BUILD options are both specified for a cluster-level
archive and the restore is to all AMPs, Teradata ARC does not build indexes or revalidate the
primary subtables. In this case, the table is not fully restored. To access the table using Teradata
SQL, do one of the following first:
• Perform another restore operation without the NO BUILD option and with the RESTORE FALLBACK option.
• Perform another restore operation and apply an all-AMPs BUILD statement to the fallback tables.
When restoring a set of cluster archives to a reconfigured Teradata Database, restore each
cluster archive to all-AMPs. Use the RESTORE FALLBACK option on all of the cluster restore
operations or on none of the cluster restore operations. Otherwise, if the archive contains
fallback tables, Teradata ARC reports an error and terminates the restore operation.
Use NO BUILD for all but the last all-AMPs restore operation. NO BUILD is optional for the
last all-AMPs restore operation.
When restoring a set of cluster archives to the same configuration, always perform cluster level
restore operations.
If the RESTORE FALLBACK option is specified without the NO BUILD option, the restore
operation continues processing on a reconfigured system instead of restarting.
Note: If a fallback table is being processed, this option disables the HALT and PAUSE runtime
parameters until the restore completes.
NO BUILD Keywords
These keywords prevent fallback data and secondary indexes from being built on fallback
tables when restoring cluster archives to all AMPs. Use this option if restoring cluster archives
to a Teradata Database that was reconfigured after the cluster archives were created.
When restoring journal tables to a Teradata Database that has been reconfigured since the
journal archive was made, specify NO BUILD to prevent sorting of the restored journal and
distribution of fallback rows. This procedure is useful when restoring more than one journal
archive (that is, restoring an all-AMPs journal archive and one or more specific AMP journal
archives).
If the NO BUILD keywords are used with RESTORE, run a separate BUILD statement for all
databases and/or tables that were restored. The tables will not be accessible until a BUILD
statement is run.
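For example, a minimal sketch of this two-step sequence (the database name and archive file name are hypothetical):
RESTORE DATA TABLES (Payroll) ALL,
NO BUILD,
FILE=ARCHIVE;
BUILD DATA TABLES (Payroll) ALL,
RELEASE LOCK;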
If the NO BUILD keywords are used while restoring a database or table that contains Join or
Hash indexes, the database/table will be in a 'restoring' state when Teradata ARC attempts to
restore the Join/Hash indexes. When in the 'restoring' state, the Teradata Database will not
allow any access to the database/table. This will cause the restore of the Join/Hash index to fail.
In order to restore the Join/Hash indexes, follow this procedure:
1 Run a separate BUILD job to build the fallback rows and secondary indexes for the database/table. This will clear the 'restoring' state and allow user access to the database/table.
2 Restore the Join/Hash index by rerunning the restore job and specifying just the Join/Hash indexes to be restored as individual objects in the restore list.
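For example, a hypothetical sketch of the two steps (the database, join index, and archive file names are illustrative only):
BUILD DATA TABLES (Sales) ALL,
RELEASE LOCK;
RESTORE DATA TABLES (Sales.OrdersJI),
FILE=ARCHIVE;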
If the NO BUILD keywords are used while restoring a database or table that contains Stat
Collections, use the same procedure above for Join/Hash indexes in order to restore them.
ALL FROM ARCHIVE Keywords
The ALL FROM ARCHIVE keywords take the place of the database and/or table names that
are normally specified after the DATA, DICTIONARY, JOURNAL, or NO FALLBACK
TABLES keywords.
Do not specify any other database or table names to be restored when using ALL FROM
ARCHIVE. All databases and tables in the given archive file will be restored, and any existing
databases or tables will be overwritten.
CATALOG and Fastpath are not supported while using ALL FROM ARCHIVE. If CATALOG
(or Fastpath) is enabled when ALL FROM ARCHIVE is specified, it will be disabled and a
warning message will be given.
ALL FROM ARCHIVE cannot be used to restore database DBC, and DBC must be excluded
by the user if it is present in the archive being restored.
The EXCLUDE TABLES object option is not allowed when using ALL FROM ARCHIVE.
The ALL FROM ARCHIVE command cannot be used to restore a table that has a join or hash
index defined or a database that contains a table that has such indexes defined. See “Join and
Hash Indexes in Restoring” on page 59 for more details.
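For example, a minimal sketch (the archive file name is hypothetical, and the placement of the EXCLUDE clause is an assumption) that restores every object in the archive while excluding database DBC, as required above:
RESTORE DATA TABLES ALL FROM ARCHIVE,
EXCLUDE (DBC),
RELEASE LOCK,
FILE=ARCHIVE;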
EXCLUDE TABLES Keyword
During a database-level RESTORE, the EXCLUDE TABLES option can be used to skip the
restore of one or more objects from the archive file to the target database. If an object is
excluded, Teradata ARC will not restore any of the dictionary information or any of the object
data for the specified object(s). If any of the specified object(s) already exist in the target
database, they are unaffected by the RESTORE operation. By not restoring any information/
data for the specified object, the EXCLUDE TABLES option can also be used to prevent the
deletion of existing objects from the target database during a restore.
Use the EXCLUDE TABLES option to skip the restore of an object from the archive file. If the
object already exists in the target database, it will be unaffected by the RESTORE operation. If
the RESTORE is run without the EXCLUDE TABLES option, the target object is dropped and
replaced by the dictionary information and data stored in the archive file for that object.
Use the EXCLUDE TABLES option to skip the restore of an excluded object from the archive
file. When an object is excluded during the archive, the only things archived for that object are
the dictionary information and the table header, but no object data. If the excluded object is
restored, the target object will be dropped and replaced by the dictionary information for that
object. But since there is no data in the archive file for that object, no data will be restored,
which will produce an empty object. To prevent replacing an existing object with an empty
object due to that object being excluded during the archive, use the EXCLUDE TABLES option
to skip the restore of that object. By using the EXCLUDE TABLES option, that object will be
unaffected by the RESTORE operation if it currently exists in the target database.
Use the EXCLUDE TABLES option to prevent the deletion of new objects from the target
database. During a normal RESTORE of a database-level object, all objects in the target
database are dropped before the objects from the archive file are restored to it. If a new object
has been added to the target database since the archive was taken, you will have a situation
where the archive file does not have any archived data for that new object. If you were to run a
restore without the EXCLUDE TABLES option, Teradata ARC will drop that new object from
the target database even though there is no data in the archive file to restore for that object. To
prevent dropping the new object during the restore, use the EXCLUDE TABLES option to
specify the new object in the target database. This will tell Teradata ARC to not drop that
object from the target database during the restore. By using the EXCLUDE TABLES option,
the new object in the target database will be unaffected by the RESTORE operation.
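For example, a minimal sketch (the database and table names are hypothetical) that restores database HR while leaving a newly added table, HR.AuditLog, untouched:
RESTORE DATA TABLES
(HR) (EXCLUDE TABLES (AuditLog)),
RELEASE LOCK,
FILE=ARCHIVE;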
If the ALL keyword is not specified after the object name, either non-qualified object names
(without a database name prefix) or fully qualified object names (databasename.tablename)
can be specified in the list of EXCLUDE TABLES. If a non-qualified object name is specified, it
is presumed to belong to the database that it is associated with in the RESTORE statement.
If the ALL keyword is specified after the object name, then only fully qualified object names in
the form of databasename.tablename are accepted in the list of EXCLUDE TABLES.
The EXCLUDE TABLES option can only be used with database-level objects.
The EXCLUDE TABLES option can not be used when ALL FROM ARCHIVE is specified.
If a specified object is in neither the archive file nor the target database, the restore of the
database will be skipped.
For a list of individual object types that can be specified in the EXCLUDE TABLES clause for
Restore, see Appendix C.
Notice:
There is no explicit maximum length for an EXCLUDE TABLES clause, but when the
EXCLUDE TABLES clause is too long to fit in the SQL buffer, the SQL request cannot be sent
to the database. The buffer that is used to send the EXCLUDE TABLES clause to the database
has a maximum length, and the EXCLUDE TABLES clause must fit completely within that
buffer. Because other data may exist in the buffer and may be variable in length, a maximum
size for an EXCLUDE TABLES clause cannot be given. If this situation occurs, ARC displays
the following error message:
ARC0127: “EXCLUDE TABLES list is too long to fit in the SQL buffer.”
Modify the EXCLUDE TABLES list to reduce its size and resubmit the job.
Resizing the Teradata Database with DBSIZEFILE
To resize a database during a restore operation, use the DBSIZEFILE option. DBSIZEFILE is
valid only when restoring one of the following:
• Database DBC by itself
• '(DBC) ALL'
The option, 'DBSIZEFILE=filename', prompts Teradata ARC to read the file indicated by
filename. The file specifies the perm space size for each specified database or user. The format
is one or more pairs that indicate each of the following:
• Database or user name
• Perm space size
Each name and size pair must appear on a separate line. Specify the new size in bytes, for
example:
'database_name1 5000000'
'foo_database 3000000'
'user1 1000000'
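For example, a minimal sketch (the size file name is hypothetical, and the placement of DBSIZEFILE among the other RESTORE options is an assumption):
RESTORE DATA TABLES (DBC) ALL,
RELEASE LOCK,
DBSIZEFILE="dbsize.txt",
FILE=ARCHIVE;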
Table 20 lists requirements for the file specified by filename.
Table 20: File Characteristics
File Extension
The file extension must be:
• Valid for the operating system
• Valid on the statement
When specifying one or both of the following as part of the file name, enclose the entire file name in double quotes:
• A file extension
• Path
EXAMPLE
dbsizefile="<path>filename.extension"
Perm Space
The perm space amount can be any value. This is useful when reducing the size of a source database or user, which may have
unused perm space. When the source database or user is restored to a target system, it fits within the target system's perm
space.
File Location
Store the file anywhere on the system where Teradata ARC is running. Teradata ARC must have the appropriate permissions
to access the file.
Quotation Marks and Parentheses
Do not use quotation marks or parentheses to enclose the name and size pair. If specified with the name, the quotation marks
or parentheses are included and used as part of the name.
Spaces
• Only spaces are considered whitespace. Other non-printable characters are valid characters.
• A space delimits each element of the name and size pair, plus any comments at the end of the line.
• In the areas where space acts as a delimiter, any number of consecutive spaces are valid.
Special Characters
When including special characters in the name, use the escape character '\' followed by the special character. Valid special characters include, but are not limited to:
• '\ ' inserts a single space
• '\\' inserts a single '\' character
• '\#' inserts a single '#' character
• '\a' inserts a single 'a' character (this can be any alphabetic character)
• '\*' inserts a single '*' character
The value of the perm size can include commas; however, their positioning in the value is not validated. ARC strips all commas to form a single number.
The value of the perm size can be specified fully or in exponential format (for example, '123000000' or '123E6').
File Comments
• Begin a comment with '#' (without the '\' character).
• No character is required to end a comment.
• A comment can start at the beginning of a line (the entire line would be a comment), or after the name and size pair (the
rest of the line would be a comment).
• Comments do not wrap to the next line. However, multiple line comments are valid if each comment line begins with '#'.
Teradata ARC restores DBC first, then resizes it according to the perm value stored in the file.
After the resize operation is completed, the restore job continues restoring and resizing,
starting with database SYSUDTLIB.
If Teradata ARC encounters errors while preparing to resize (for example, read errors on the
file, errors in syntax on the resize file contents, or any problems in communications with the
Teradata Database), it aborts the Teradata ARC restore job. Correct the errors and resubmit
the job.
After Teradata ARC starts the resize operation, it reports but ignores any errors that it
encounters (for example, missing privileges or database not found). Teradata ARC attempts
all resize requests and reports those results. If there was a resize failure, the restore operation
will fail again at a later point if enough perm space is still not available. If this occurs,
manually resize the user or database that failed or correct the errors causing the resize failures.
Resubmit the restore job.
REVALIDATE REFERENCES FOR
Purpose
When a referenced (parent) or referencing (child) table is restored, the reference is marked as
inconsistent in the database dictionary definitions. The table cannot be subject to updates,
inserts, or deletes.
REVALIDATE REFERENCES FOR validates reference indexes that are marked as inconsistent.
Syntax
REVALIDATE REFERENCES FOR
   { (dbname) [ALL] | (dbname.tablename) } [, ...]
   [, EXCLUDE { (dbname) [ALL] | (dbname1) TO (dbname2) } [, ...] ]
   [, RELEASE LOCK]
   [, ERRORDB dbname] ;
where:
(dbname): Name of the database to be revalidated.
ALL: Revalidates all child databases of the specified database. All database tables with inconsistent indexes are processed.
(dbname.tablename): Name of a table in the database to be revalidated.
EXCLUDE: Omits the identified databases from processing.
(dbname): Name of a database excluded from processing.
ALL: Excludes all child databases of the specified database.
(dbname1) TO (dbname2): Alphabetical range of databases to be excluded from processing. Database DBC is not included in this range; it is excluded only if specifically identified.
RELEASE LOCK: Releases the HUT lock on the table upon completion of processing (optional).
ERRORDB: Generates error tables in the designated database (optional).
(dbname): Name of the database that contains the generated error tables. If ERRORDB is not specified, error tables are created in the database containing the table being processed.
Access Privileges
To revalidate the reference indexes of a table, the user name specified in LOGON must have
one of the following:
• RESTORE privileges on the table being revalidated
• Ownership privileges on the database or table
Duplicate names in database or table name lists are ignored. The list of excluded names
takes precedence over the list of included names. If a database or table name is covered in
both lists, only the exclusion list is effective.
Usage Notes
Many referential constraints can be defined for a table. When such a table is restored, these
constraints are marked inconsistent. REVALIDATE REFERENCES FOR validates these
inconsistent constraints against the target table.
If inconsistent constraints remain after REVALIDATE REFERENCES FOR has been executed,
the SQL statement ALTER TABLE DROP INCONSISTENT REFERENCES must be used to
remove them. See “RESTORE” on page 231.
REVALIDATE REFERENCES FOR creates error tables containing information about data
rows that failed referential constraint checks.
One error table is created for each referential constraint when the corresponding reference
index becomes valid again. The name of each error table consists of the referencing (child)
table name appended with the reference index number. Each error table has the same field
definitions as the corresponding referencing table. The rows it contains are exact copies of the
rows that violated the referential constraint in the referencing table.
For inconsistent reference indexes that can be validated, REVALIDATE REFERENCES FOR:
• Validates the inconsistent reference index on the target table and its parent/child tables
• Creates an error table
• Inserts rows that fail the referential constraint specified by the reference index into the error table
If an inconsistent reference cannot be validated, the index is skipped and reported in a
message at the end of the operation.
For information on referential integrity, see the SQL Fundamentals.
Example
Suppose an Employee table has just been restored with two inconsistent references defined on
the table. One indicates the Employee table references Department table; the other indicates
Employee table references Project table.
Now, Project table is dropped before a restore operation and the following statement is
submitted:
REVALIDATE REFERENCES FOR (Employee);
Assuming the referential constraint specifying Employee references Department has reference
index number 4, the following occurs:
1 Error table Employee_4 is created with information about the data rows in Employee that reference Department.
2 The reference index specifying that the Employee table references the Department table is validated.
3 The reference index specifying that the Employee table references the Project table remains inconsistent. A message reports that the Employee table still contains inconsistent reference indexes.
4 Submit the SQL statement:
ALTER TABLE EMPLOYEE DROP INCONSISTENT REFERENCES;
5 Query error table Employee_4 to correct the data rows in the Employee table.
ROLLBACK
Purpose
ROLLBACK recovers specified data tables, with before journals, to the state they were in before
they were modified.
Syntax
ROLLBACK
   { (dbname) [ALL] | (dbname.tablename) } [, ...]
   [, EXCLUDE { (dbname) [ALL] | (dbname1) TO (dbname2) } [, ...] ]
   [, TO { chkptname | chkptname, eventno | eventno } ]
   [, AMP = n [, ... up to 5 AMPs] ]
   [, PRIMARY DATA]
   [, RELEASE LOCK]
   [, NO DELETE | DELETE]
   [, ABORT]
   [, USE { CURRENT | RESTORED } JOURNAL] ;
where:
(dbname): Name of the database to be rolled back.
ALL: Indicates that the rollback affects the named database and its descendants. ALL recovers all database tables with after images.
(dbname.tablename): Name of the table. Specify only distinct tables within the list. If recovering a specific AMP, the specified table must be non-fallback, because fallback tables are completely recovered by any previous all-AMPs rollback.
EXCLUDE: Prevents the named database from being recovered.
(dbname): Name of the excluded database.
ALL: Excludes the named database and its descendants from being recovered.
(dbname1) TO (dbname2): Alphabetical list of client databases to be excluded from the recovery operation. The delimiting database names need not identify actual databases. Database DBC is not included as part of a range.
chkptname, or chkptname, eventno, or eventno: Specifies the termination point for the rollback.
PRIMARY DATA: Applies primary and fallback row images during the rollback process. Teradata ARC ignores secondary index rows and fallback rows for online AMPs.
RELEASE LOCK: Releases utility locks on the named databases automatically when the rollback operation completes.
NO DELETE or DELETE: Deletes, or refrains from deleting, the restored journal subtable after the rollback is complete. The default is DELETE.
ABORT: Stops the operation if an AMP to which a non-fallback archive is to be recovered is offline. This option does not affect specific-AMP operations.
USE CURRENT JOURNAL or USE RESTORED JOURNAL: Uses the current journal or a subtable that was previously restored to the Teradata Database. CURRENT uses the active journal subtable followed by any saved subtable.
Access Privileges
To roll back a database or a table in the database, the user name specified in LOGON must
have one of the following:
• The RESTORE privilege on the objects to recover
• Ownership of the objects to recover
Usage Notes
ROLLBACK erases all changes made to a database or selected tables of a database. Use
ROLLBACK to roll back data tables on all or a subset of the processors. A rollback uses a single
journal table as input. All objects in the object list must share the same journal table.
A rollback is not allowed over a data definition statement that changed a table’s structure. If
Teradata ARC encounters such a change to a table during a rollback, the rollback for that table
stops and the program prints a message in the output listing.
If an AMP is offline and a nonfallback table is being recovered with unique secondary indexes,
Teradata ARC invalidates unique secondary indexes on the table. After the AMP returns
online, drop and recreate the unique secondary indexes for that table.
The RELEASE LOCK statement can be used instead of the RELEASE LOCK keywords. Use the
RELEASE LOCK statement to release all HUT locks and the RELEASE LOCK keywords to
release specific HUT locks.
To roll forward using a current single image journal, first roll back to the checkpoint. After the
rollback has completed, use ROLLFORWARD to restore the data.
Note: When a PPI table or a Column Partition (CP) table is restored, Teradata ARC
automatically executes the Revalidate step during the BUILD phase. The Revalidate process
increases the Util version of the table. This is considered to be a structural change to the table.
Because of this change, a PPI table or a CP table cannot be recovered using either the RollBack
or RollForward operation after it is restored.
chkptname Parameter
Use CHECKPOINT to generate the checkpoint for the journal used in the recovery (the one
for the data tables that will be rolled back).
With this option, Teradata ARC scans the journal table for the identified checkpoint. If that
checkpoint does not exist, Teradata ARC returns an error.
If an unqualified chkptname is used and there are multiple rows in the journal table that all
have the same name, Teradata ARC uses the last chronological entry made. In the
ROLLFORWARD statement, on the other hand, Teradata ARC uses the first chronological
entry made.
Alpha characters in a chkptname are not case sensitive. For example, chkptname ABC is equal
to chkptname abc. If this option is not specified, Teradata ARC uses the entire journal table.
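For example, a minimal sketch (the database and checkpoint names are hypothetical, and the ordering of the options is an assumption), assuming the checkpoint was created earlier with a CHECKPOINT statement:
ROLLBACK (Payroll) ALL,
TO nightly_ckpt,
USE CURRENT JOURNAL,
RELEASE LOCK;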
DELETE Keyword
Do not specify any DELETE options when using the current journal for the recovery. Use NO
DELETE when recovering selected tables if you plan to later recover the balance of the tables
that might have changes in the journal.
ABORT Keyword
ABORT aborts an all-AMPs roll operation that includes non-fallback tables or single copy
journal tables if an AMP is down. This option does not affect specific AMP roll operations.
PRIMARY DATA Keywords
Use this keyword phrase to reduce the amount of I/O during rollback. Using this option
improves the rollback performance when recovering a specific AMP from a disk failure.
If a specific AMP is recovered, unique indexes are invalid. Always follow a ROLLBACK
statement that uses the PRIMARY DATA option with a BUILD statement.
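For example, a minimal sketch (the database name and AMP number are hypothetical, and the ordering of the options is an assumption) of a specific-AMP rollback followed by the required build:
ROLLBACK (Payroll) ALL,
AMP=3,
PRIMARY DATA,
USE CURRENT JOURNAL;
BUILD DATA TABLES (Payroll) ALL,
RELEASE LOCK;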
ROLLFORWARD
Purpose
ROLLFORWARD recovers specified data tables, with after journals, to their state following a
modification.
Syntax
ROLLFORWARD
   { (dbname) [ALL] | (dbname.tablename) } [, ...]
   [, EXCLUDE { (dbname) [ALL] | (dbname1) TO (dbname2) } [, ...] ]
   [, TO { chkptname | chkptname, eventno | eventno } ]
   [, AMP = n [, ... up to 5 AMPs] ]
   [, PRIMARY DATA]
   [, RELEASE LOCK]
   [, NO DELETE | DELETE]
   [, ABORT]
   [, USE { CURRENT | RESTORED } JOURNAL] ;
where:
(dbname): Name of the database to be rolled forward.
ALL: Rolls forward the named database and its descendants.
(dbname.tablename): Name of the table to be rolled forward. When recovering a specific AMP, the specified table must be non-fallback, because fallback tables are completely recovered by any previous activity.
EXCLUDE: Prevents the named database from recovery.
(dbname): Name of the database excluded from recovery.
ALL: Excludes from recovery the named database and its descendants.
(dbname1) TO (dbname2): Alphabetical list of client databases to be excluded from recovery. The delimiting database names need not identify actual databases. Database DBC is not included as part of a range.
chkptname, or chkptname, eventno, or eventno: Specifies the termination point for the rollforward operation.
PRIMARY DATA: Applies primary and fallback row images during the rollforward process. Teradata ARC ignores secondary index rows and fallback rows for online AMPs.
RELEASE LOCK: Releases utility (HUT) locks on the named databases automatically when rollforward completes.
NO DELETE or DELETE: Deletes, or refrains from deleting, the restored journal subtable after the rollforward is complete. The default is DELETE.
ABORT: Aborts the roll operation if an AMP to which a non-fallback archive is to be recovered is offline. This option affects only all-AMPs roll operations; it does not affect specific-AMP operations.
CURRENT or RESTORED: Uses the current journal or a subtable that was previously restored to the Teradata Database. CURRENT uses the active journal subtable followed by any saved subtable.
Access Privileges
To roll forward a database or a table in the database, the user name specified in LOGON must
have one of the following:
• The RESTORE privilege on the objects being recovered
• Ownership of the objects
Usage Notes
A rollforward operation uses a single journal table as its input. Define all tables specified for
recovery with the same journal table. ROLLFORWARD can recover all data tables or a subset
of the tables that are protected by the same journal table.
Use ROLLFORWARD to rollforward data tables on all or a subset of the AMPs.
Do not execute a rollforward over a data definition statement that changed a table’s structure.
If Teradata ARC encounters such a change to a table during a rollforward, the rollforward for
that table stops and an error message is placed in the output listing.
When recovering a nonfallback table with unique secondary indexes and an AMP is offline,
Teradata ARC invalidates unique secondary indexes on the table. After the AMP returns
online, drop and then recreate the unique secondary indexes for that table or use BUILD
DATA TABLES on the table.
Using BUILD DATA TABLES is much faster than dropping and recreating the unique
secondary indexes.
If the table being recovered has a single after image journal and the recovery operation uses
the current journal, do not roll forward data rows on the AMP that are backed up by an AMP
that is offline. If the table being recovered has a local single after image journal, it may be
recovered only to the last archived data.
If the tables to be rolled forward have a dual after-image journal, then the rollforward
operation (by default) uses the journal tables that exist on each AMP to roll forward those
AMPs.
Performing a restore and rollforward brings the affected data tables on the AMPs to their state
after a modification. Specific-AMP rollforwards are typically performed to do one of the
following:
• Complete a previous all-AMPs rollforward that happened while some AMPs were offline.
• Recover a single AMP from a disk failure.
Note: When a PPI table or a Column Partition (CP) table is restored, Teradata ARC
automatically executes the Revalidate step during the BUILD phase. The Revalidate process
increases the Util version of the table. This is considered to be a structural change to the table.
Because of this change, a PPI table or a CP table cannot be recovered using either the RollBack
or RollForward operation after it is restored.
chkptname Parameter
To set the point in time that ROLLFORWARD (or ROLLBACK) rolls to, a CHECKPOINT
statement must have been previously issued with the same value for chkptname. For example,
if CHECKPOINT statements with different checkpoint names (for example, check1, check2,
check3) were issued, use the same names in the ROLLFORWARD or ROLLBACK statement.
Doing so returns the table to the state it was when each CHECKPOINT statement was
originally issued.
If this option is specified, Teradata ARC first scans the journal table for the identified
checkpoint. If the checkpoint does not exist, Teradata ARC returns an error.
If an unqualified chkptname is used, and there are multiple checkpoint rows in the journal
table with the same name, Teradata ARC uses the first chronological entry made. Note this
difference from the ROLLBACK statement, where Teradata ARC uses the last entry made.
Alpha characters in a chkptname are not case sensitive; for example, chkptname ABC is equal
to chkptname abc. If this option is not specified, Teradata ARC uses the entire journal table.
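For example, a minimal sketch (the database and checkpoint names are hypothetical), assuming a restored journal that contains a checkpoint named check1:
ROLLFORWARD (Payroll) ALL,
TO check1,
USE RESTORED JOURNAL,
RELEASE LOCK;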
PRIMARY DATA Keywords
Use this keyword phrase to reduce the amount of I/O during rollforward. Using this option
improves the rollforward performance when recovering a specific AMP from a disk failure.
When a specific AMP is recovered, unique indexes are invalid. Always follow a
ROLLFORWARD statement that uses the PRIMARY DATA option with a BUILD statement.
RELEASE LOCK Keywords
With Teradata Database, RELEASE LOCK does not release HUT locks on non-fallback tables
with remote single after image journaling when the backup AMP is offline.
DELETE Keyword
Use the NO DELETE keyword phrase to recover selected tables if the balance of the tables will
be recovered later with changes in the journal.
Do not specify any DELETE options when using the current journal for the recovery.
ABORT Keyword
The ABORT keyword aborts an all-AMPs roll operation that includes non-fallback tables or
single copy journal tables if an AMP is down. This option does not affect specific AMP roll
operations.
SET QUERY_BAND
Purpose
SET QUERY_BAND provides values for a session to precisely identify the origins of a query.
Syntax
{ SET | .SET } QUERY_BAND = 'query_band_string;' ;
where:
query_band_string: Associates a name with a value.
Example
SET QUERY_BAND = 'Job=payroll;' ;
Usage Notes
A query band is a set of name and value pairs that can be set on a session to identify where a
query originated. These identifiers are in addition to the current set of session identification
fields, such as user id, account string, client id, and application name. Query bands offer a way
to identify the user and application, for example, when SQL-generating tools and web
applications use pooling mechanisms that hide the identity of users because each connection
in the pool logs into the database using the same user account. Without query bands, there is
no way to tell the source of the request when the request comes from a multitiered application.
Another potential use of query bands is troubleshooting, when it is necessary to provide the
specific user, application, or report that issued a request.
All SET QUERY_BAND statements submitted by Teradata ARC are session query bands. A
session query band is stored in a session table and recovered after a system reset. (Teradata
ARC does not use transaction query bands.)
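For example, a sketch with multiple name and value pairs (the names and values are illustrative only), following the same quoting convention shown in the example above:
SET QUERY_BAND = 'Job=nightly_backup;App=ARC;Report=payroll;' ;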
CHAPTER 7
Restarting Teradata ARC
This chapter describes how to recover if the client system, Teradata ARC, or the Teradata
Database fail during an archive/recovery operation. Topics include:
• Restart Log
• Restarting Teradata ARC
• Restart After Client or Teradata ARC Failure
• Restart After a Teradata Database Failure
• Recovery Control Catalog
Note: The information in this chapter only applies to Mainframe systems (z/OS). Restart of
Teradata ARC is no longer supported on network-attached systems.
Restart Log
Teradata ARC does not record most of the modifications it makes to a database in the
transient journal. Instead, Teradata ARC uses a file called restart log to maintain the operation
status of the job being run.
The restart log catalogs the effects of utility operations on a database or table. Teradata ARC
writes restart entries in the log at the beginning and end of each significant step (and at
intermediate points in a long running task). If a job is interrupted for some reason, Teradata
ARC uses this log to restart the archive or restore operation that was being performed.
When Teradata ARC is started, it first places markers into the restart log to keep track of the
current input and logon statements. Then Teradata ARC copies the command stream to the
restart log. Teradata ARC uses these commands during restart to continue with the most
recent activity using the same logon user originally entered.
During a restore, Teradata ARC defers all build steps for a table until all table data has been
completely restored. Then the deferred build steps are executed to completely restore the table.
As each subtable is restored, the build step for that subtable is stored in an internal deferred
build list in the Restart Log. If Teradata ARC must be restarted, Teradata ARC rebuilds the
deferred build list from the build steps that are stored in the Restart Log for the table being
restarted. Teradata ARC can store a maximum of 100 build steps per table in the Restart Log.
If a table has more than 100 subtables, the build steps exceeding 100 are lost during the restart
because the build steps cannot be reconstructed. If a table has more than 100 subtables, and a
restart is requested for that table, Teradata ARC displays message 1281, which directs the user
to run a standalone BUILD statement to rebuild that table after it has been restored.
Note: For information on file size limits, see “Restart Log File Size Limit” on page 153.
Restarting Teradata ARC
The Teradata Database uses task execution logic to restart an archive or restore job. The logic
differs depending on the operation, as follows.
During a Database DBC Operation
If a client system or Teradata ARC failure interrupts an archive or restore of database DBC, do
not restart the operation. Perform the entire restore process again, including reinitializing the
Teradata Database and the dictionary tables.
Use the RESTART option if a multiple database archive or restore that includes database DBC
is interrupted during the archive or restore of some database other than database DBC. Before
restoring database DBC, initialize the Teradata Database:
1 Reconfigure the system from a single AMP to the desired configuration.
2 Run the DBS Database Initialization Program (DIP) and execute the DIPERR and DIPVIEWS scripts. Do not run any of the other DIP scripts at this time, including DIPALL.
See the Teradata Database Node Software IUMB document for information on running
DIP. Use the version of that document that matches the database version you are restoring
to. For instructions on how to access the Data Migration documents, see the “Data
Migration documents” section of the table on page 7, under “Additional Information.”
During an Archive Operation
Restarting an archive operation causes Teradata ARC to:
• Reposition the output archive file at the data block indicated in the last restart point recorded in the restart log before the failure occurred.
• Restart the operation at the beginning of the last table or secondary index subtable that was being archived before the failure. Teradata ARC performs this step only if the AMP configuration has changed, that is, if any of the AMPs were online before the failure but are now offline after the failure.
In the case of an interrupted archive of database DBC, resubmit the ARCHIVE statement
without specifying the RESTART parameter.
During a Restore Operation
Restarting a restore operation causes Teradata ARC to:
• Reposition the input archive file at the data block indicated in the last restart point recorded in the restart log before the failure occurred.
• Restart the operation at the beginning of the last table or secondary index subtable that was being restored before the failure. Teradata ARC performs this step only if the AMP configuration has changed, that is, if any of the AMPs were online before the failure but are now offline after the failure.
• For the table you are restarting, rebuild the deferred build list from the build steps saved in the Restart Log.
During a Recovery Operation
Restarting a recovery (rollback or rollforward) operation causes Teradata ARC to:
• Resubmit the original recovery request to the Teradata Database.
• Reinitiate the complete recovery action.
The Teradata Database reads the entire input journal subtable and ignores recovery actions for
change images already used.
Restart After Client or Teradata ARC Failure
If the client system or Teradata ARC fails during an archive, restore, or recovery procedure,
restart the operation. Teradata ARC responds to the failure by logging all its sessions off the
Teradata Database. The Teradata Database does not journal Teradata ARC statements,
therefore Teradata ARC takes no recovery action at logoff. Database locks placed by a utility
statement on databases or tables remain intact following a failure.
To restart a utility operation automatically, specify the RESTART runtime option when
starting Teradata ARC. The restart log is used to determine the state of the pending operation.
Command execution continues using the copy of the original source statement file. Teradata
ARC reads the restart log to determine its restart point, then reenters the statement that was
being processed at the time of the failure.
If a client or utility failure interrupted an archive or restore of database DBC, do not restart the
operation.
If an archive or restore of multiple databases including database DBC is interrupted, resubmit
the ARCHIVE statement. Or, specify the RESTART option.
Restarting an Archive Operation
Restarting an archive operation causes Teradata ARC to:
• Reposition the output archive file at the data block indicated in the last restart point recorded in the restart log before the failure occurred.
• If the AMP configuration changed (AMPs were online before the failure but offline afterward), restart begins with the last table being archived before the failure.
If archive of database DBC is interrupted, resubmit the ARCHIVE statement without
specifying the RESTART option.
Example
The following example shows the restart of an all-AMPs archive on z/OS.
Note: The RESTART parameter has been added to the EXEC.
Restarting an Archive on z/OS
//DBCDMP1 JOB 1,'DBC OPERATIONS',REGION=2048K,MSGCLASS=A
//DUMP     EXEC PGM=ARCMAIN,PARM='RESTART'
//STEPLIB  DD DSN=DBC.AUTHLOAD,DISP=SHR
//         DD DSN=DBC.TRLOAD,DISP=SHR
//DBCLOG   DD DSN=DBC.ARCLOG.DATA,DISP=OLD
//DUMP100  DD DSN=DBC.DUMP100,DISP=OLD
//SYSPRINT DD SYSOUT=*,DCB=(RECFM=F,BLKSIZE=132,LRECL=132)
//SYSUDUMP DD SYSOUT=*
//SYSIN    DD DATA,DLM=##
LOGON DBC,DBC;
ARCHIVE DATA TABLES (DBC) ALL,
RELEASE LOCK,
INDEXES,
FILE=DUMP100;
LOGOFF;
##
In this example, the tape with VOLSER=DGJA is mounted at the time that Teradata ARC
abends.
Note: When archiving on tape for z/OS, specify the tape level information and all tape volume
serial numbers, including the one mounted at the time of the abend.
Restarting a Restore Operation
Restarting a restore operation causes Teradata ARC to:
• Reposition the input archive file at the data block indicated in the last restart point recorded in the restart log before the failure occurred.
• If the AMP configuration changed (AMPs were online before the failure but offline afterward), restart begins with the last table being restored before the failure.
• For the table you are restarting, rebuild the deferred build list from the build steps saved in the Restart Log.
If a restore of database DBC is interrupted for any reason, perform the entire restoration
again, including reinitializing the Teradata Database. Even if the Teradata Database was
initialized prior to database DBC restore, initialize it again.
Example
For z/OS, the only difference between the original job and the restarted job is to add the
RESTART parameter to the PARM statement of the EXEC.
Restarting a Restore on z/OS
//DBCRST1 JOB 1,'DBC OPERATIONS',REGION=2048K,MSGCLASS=A
//RESTORE  EXEC PGM=ARCMAIN,PARM='RESTART'
//STEPLIB  DD DSN=DBC.AUTHLOAD,DISP=SHR
//         DD DSN=DBC.TRLOAD,DISP=SHR
//DBCLOG   DD DSN=DBC.ARCLOG.DATA,DISP=OLD
//DUMP100  DD DSN=DBC.DUMP100,DISP=OLD
//SYSPRINT DD SYSOUT=*,DCB=(RECFM=F,BLKSIZE=132,LRECL=132)
//SYSIN    DD DATA,DLM=##
LOGON DBC,DBC;
RESTORE DATA TABLES (PERSONNEL) ALL,
RELEASE LOCK,
FILE=DUMP100;
LOGOFF;
##
Restarting a Checkpoint Operation
When CHECKPOINT writes rows to a journal, the transient journal also keeps a record of
these rows. Consequently, if the client or Teradata ARC fails, the results of the checkpoint
operation are rolled back by the transient journal.
• If a Teradata ARC statement specified checkpoints on multiple journal tables, Teradata ARC rolls back only the checkpoint being taken at the time of the client or utility failure.
• If CHECKPOINT terminates abnormally, all locks placed by Teradata ARC are removed. When Teradata ARC restarts, the CHECKPOINT statement completes.
Restart After a Teradata Database Failure
If the Teradata Database fails during an archive, restore, or recovery procedure, Teradata ARC attempts to restart the operation automatically at the point of failure. During this process, Teradata ARC restarts all sessions and then continues the current operation by reissuing the last request made prior to the database failure.
If an automatic restart is not possible, Teradata ARC aborts the job. If the job is aborted, resubmit the job and specify the RESTART option.
Restarting an Archive Operation
Restarting an archive operation causes Teradata ARC to:
• Reposition the output archive file at the data block indicated in the last restart point recorded in the restart log before the failure occurred.
• If the AMP configuration changed (AMPs that were online before the failure are offline afterward), restart begins with the last table that was being archived before the failure.
If the Teradata Database fails during an archive of database DBC, Teradata ARC’s default
action is to wait for the Teradata Database to come back online and then try the archive
operation again. Some options (for example, ABORT, HALT, and PAUSE) can affect whether
Teradata ARC stops, but the default is to try again after the system recovers.
Restarting a Restore Operation
Restarting a restore operation causes Teradata ARC to:
• Reposition the input archive file at the data block indicated in the last restart point recorded in the restart log before the failure occurred.
• If the AMP configuration changed (AMPs that were online before the failure are offline afterward), restart begins with the last table that was being restored before the failure.
• Not rebuild the deferred build list; rebuilding is unnecessary because the deferred build list remains intact after a Teradata Database restart.
If the failure occurred during a restore of database DBC, repeat the entire restore operation,
including reinitializing the Teradata Database. Some options (for example, ABORT, HALT,
and PAUSE) can affect whether Teradata ARC stops, but the default is to try again after the
system recovers.
Restarting a Recovery Operation
Restarting a recovery (rollback or rollforward) operation causes Teradata ARC to:
• Resubmit the original recovery request to the Teradata Database.
• Reinitiate the complete recovery action.
The Teradata Database reads the entire input journal subtable, ignoring recovery actions for
change images already processed.
Restarting a Checkpoint Operation
When CHECKPOINT writes rows to a journal, the transient journal also keeps a record of
these rows. Consequently, if the Teradata Database fails, the results of the checkpoint
operation are rolled back by the transient journal.
• If a Teradata ARC statement specified checkpoints on multiple journal tables, Teradata ARC rolls back only the checkpoint being taken at the time of the hardware failure.
• If CHECKPOINT terminates abnormally, all locks placed by Teradata ARC are removed. When Teradata ARC restarts, the CHECKPOINT statement completes.
Removing HUT Locks After a Restart
Teradata ARC places HUT locks on tables and databases to prevent modifications to the tables
and databases before the job is restarted. Consequently, when a Teradata ARC operation does
not finish, HUT locks might remain on tables and databases.
Occasionally, you might decide that a job cannot be restarted. In this case, explicitly remove the HUT locks so that the databases or tables can be accessed again. To remove HUT locks:
1. Run a utility statement that specifies the RELEASE LOCK statement for the currently locked database.
2. Specify a new restart log. (The restart log that was in use when the failure occurred indicates that a restart is necessary, which prevents the removal of Teradata ARC HUT locks.)
Example
The following example shows the statements needed to release the locks on a database named
Database_Name.
LOGON TDPI/DBC,DBC;
RELEASE LOCK (Database_Name),
OVERRIDE;
LOGOFF;
For additional information on the RELEASE LOCK statement and the options available on
this statement, refer to “RELEASE LOCK” on page 228.
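When the locks are released for a job that will not be restarted, run the RELEASE LOCK script with a restart log other than the one used by the failed job, as described in step 2 above. The following command-line sketch is illustrative only: it assumes the RESTARTLOG=filename form of the runtime parameter and a hypothetical script file containing the statements shown above.

arcmain RESTARTLOG=release.rlog < release_lock.arc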
Recovery Control Catalog
The Teradata Database maintains a Recovery Control Catalog (RCC) to record all archive and
recovery activity. Use the RCC to monitor restart and recovery activity, or use the
RCVMANAGER utility.
The RCC consists of the following standard tables in database DBC:
• RCEvent contains a row for each archive or recovery activity.
• RCConfiguration contains a row for each archive or recovery activity that did not include all AMPs defined in the hardware configuration.
• RCMedia contains a row for each removable device used in either an archive or recovery activity.
User DBC must authorize access to these tables for selection and deletion operations. The Teradata Database automatically generates all inserts to the tables; remove unnecessary rows from the tables as needed.
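For example, recent activity can be reviewed with a simple SQL query against the RCEvent table. The following is an illustrative sketch only; the column names are assumptions based on the fields described below and should be verified against the dictionary definitions on your system.

/* Hypothetical column names; confirm them in the data dictionary before use. */
SELECT EventNum, EventType, DatabaseName, UserName
FROM DBC.RCEvent
ORDER BY EventNum;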
Recording Row Activity
Each row in the RCEvent table contains multiple columns. Some columns (the standard fields) are recorded for every event; others are optional and depend on the type of event being recorded. Table 21 describes the fields in the RCEvent table.
Table 21: RCEvent Table Fields

Standard Fields

Event Number: A unique integer value assigned by the Teradata Database to each recorded event. Teradata ARC assigns event numbers in ascending order; that is, events that occur earlier in time have smaller numbers. The event number (not the date and time stamp) is the most accurate way to determine the chronology of events.

Date and Time: The value of the Teradata Database time-of-day clock when the event is recorded. Earlier events can have later time stamps than later events; therefore, do not use this value to determine the chronology of checkpoint events. Use event numbers instead.

User Name: Logon ID for the user who initiated the event.

Event Type: A text string identifying the event. The event types are BUILD, CHECKPOINT, COPY, DELETE JOURNAL, DUMP (ARCHIVE), RESTORE, ROLLBACK, and ROLLFORWARD.

Database Name: The name of the database affected by the activity. If multiple databases are affected, a row is recorded for each database.

Object Type: A single character that defines the type of object involved. The character values and their meanings are: D (total database), T (data table), and J (journal table).

Object Identifier: A four-byte field containing either a database ID or the uniqueness portion of a table identifier. This field further defines the object involved in the event. These fields are derived from the DBASE and TVM dictionary tables and are maintained by the Teradata Database.

All-AMPs Flag: The value A means that the operation involved all AMPs; rows in RCConfiguration define any offline AMPs. For operations that did not involve all AMPs, rows in RCConfiguration define the list of AMPs involved.

Restart Count: If the event is restarted and the online AMP configuration has changed, this column increases by one. If the event was completed without a configuration change, this field contains a zero.

Operation in Progress: This field contains a Y when the specified Teradata ARC operation is not complete. Operations that terminate abnormally and are not restarted are left with Y. When the specified Teradata ARC operation completes, this field contains an N.

Optional Fields

Dataset Name: The client file name specified for an archive or recovery activity.

Table Name: The table name specified for an archive or recovery activity. Contains one row for each table affected unless all tables are affected. If all tables are affected, BTEQ shows a question mark, which indicates a NULL field.

Checkpoint Name: The label given to the checkpoint.

Linking Event Number: The number assigned to some other event that influenced the current event. The termination point for a rollback is a linking event number.

Journal Used: A single character indicating which journal table the event used: C (current journal table), R (restored portion of the journal table), or S (saved portion of the journal table).

Journal Saved: Y or N to indicate whether the CHECKPOINT statement contained the SAVE option.

Index Present: Y or N to indicate whether the archive included the index option.

Dup Archive Set: Y or N to indicate whether this is a duplicate file created by an archive request. If two files are created by a single archive request, two event rows are generated: one has the flag set to Y to indicate that it is a duplicate; the other has a value of N to indicate that it is the primary copy.

Lock Mode: A single character that identifies the HUT lock applied by an archive operation: A (the operation executed under either an access lock or a group read lock) or R (the operation executed under a read lock).

The RCEvent table also has other fields not listed in the table, which are reserved for future use.
Recording AMP Information
Teradata ARC inserts rows into the RCConfiguration table for each archive activity that does not affect all of the AMPs in the configuration:
• If the activity is for all of the AMPs, but some AMPs are offline, Teradata ARC records a row for each offline AMP.
• If the activity is for specific AMPs, Teradata ARC records a row for each AMP that is both specified and online.
A row contains these attributes:
• The event number assigned to the activity.
• The logical identifier for the processor.
• The cabinet and slot number for the processor.
• A state flag of D (down) to indicate that a processor was offline during an all-AMPs operation, or a flag of U to indicate that the processor participated in a specific-AMPs operation.
• A restart count to indicate during which restart of the event the row was inserted.
If the Teradata Database fails during an archive or recovery activity, and the operation is restarted and the configuration changes, then Teradata ARC writes rows to the RCEvent and RCConfiguration tables. The restart count ties these rows together.
Recording Device Information
The utilities insert rows into the RCMedia table for each removable device used in an archive or recovery activity. A row contains these attributes:
• The event number assigned to the activity.
• The six-character volume serial number assigned to the removable device.
• A sequence number that assigns the device its position within the set.
• A duplicate-file flag to indicate whether the device belongs to a duplicate file created by an archive request. Y indicates a duplicate; N indicates the primary file.
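As an illustration, the volumes recorded for a single archive event could be listed with a query such as the following sketch. The event number shown is hypothetical, and the name of the event-number column is an assumption; verify it against the dictionary definition of RCMedia.

/* Hypothetical event number and column name. */
SELECT *
FROM DBC.RCMedia
WHERE EventNum = 12345;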
APPENDIX A
Database Objects
This appendix lists database objects that are supported by Teradata ARC.
Database Object Types
Table 22 gives a brief description of different database object types.
Table 22: Database Object Types

Object Type | Object | Description
Indexes | Join Index, Hash Index | These objects do not have data; therefore, only dictionary information is processed.
Macro | Macro | These objects do not have data; therefore, only dictionary information is processed.
Procedures | Stored Procedure, External Stored Procedure, Java JAR Stored Procedure | These objects have data; therefore, dictionary and data information is processed.
Tables | Journal Table, Partitioned Primary Index (PPI) Table, Table, No Primary Index (NOPI) Table | These objects have data; therefore, dictionary and data information is processed. Tables can be specified as individual objects with any version of the Teradata Database.
Triggers | Trigger | These objects do not have data; therefore, only dictionary information is processed.
User Defined Functions (UDF) | Aggregate UDF, Scalar UDF, Statistical UDF, Table UDF, Combined Aggregate and Statistical UDF, Table Function (Operator), Parser Contract Function | These objects may have data depending on their usage; therefore, the information is processed accordingly.
User Defined Functions (UDF) | UDF Authorization | These objects do not have data; therefore, only dictionary information is processed.
User Defined Methods (UDM) | UDM | These objects may have data depending on their usage; therefore, the information is processed accordingly.
User Defined Types (UDT) | UDT | These objects do not have data; therefore, only dictionary information is processed.
User Installed Objects (UIF) | UIF | These objects have data; therefore, dictionary and data information is processed.
Views | View | These objects do not have data; therefore, only dictionary information is processed.
Globally Persistent Object (GLOP) | GLOP | These objects do not have data; therefore, only dictionary information is processed.
Server | Remote Server | These objects do not have data; therefore, only dictionary information is processed.
Schema | Schema | These objects do not have data; therefore, only dictionary information is processed.
Individual Object Support
Teradata ARC allows you to specify all objects as individual objects in an ARCHIVE, RESTORE, or COPY statement. Support for some objects depends on the Teradata Database version; an illustrative statement appears at the end of this section.
With Teradata Database 12.00 or earlier, only tables can be specified as individual objects.
Specify the following objects as individual objects with Teradata Database 13.00 or later:
• GLOPs
• Indexes (Join and Hash)
• Macros
• Stored procedures
• Triggers (cannot be copied)
• User Defined Functions (UDFs)
• User Defined Methods (UDMs)
• User Defined Types (UDTs)
• Views
Specify the following objects as individual objects with Teradata Database 14.10 or later:
• Table Function (Operator)
• Parser Contract Function
Specify the following objects as individual objects with Teradata Database 15.00 or later:
• User Installed File (UIF)
• Remote Servers
Specify the following objects as individual objects with Teradata Database 16.00 or later:
• Schema
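As an illustration, an individual object is named in an ARCHIVE statement in the same databasename.objectname form used for tables. The following sketch is not taken from this manual: the database, procedure, and archive file names are hypothetical, and the statement assumes a Teradata Database release that supports stored procedures as individual objects (13.00 or later).

LOGON TDPI/DBC,DBC;
ARCHIVE DATA TABLES (Payroll.Monthly_Close_SP),
RELEASE LOCK,
FILE=ARCHIVE1;
LOGOFF;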
APPENDIX B
Restoring and Copying Objects
This appendix lists individual object types, whether they can be restored or copied, and under
what conditions.
Selective Backup and Restore Rules
When performing a selective backup and restore on a table or database, observe the following rules in order for the operation to be successful.
• Methods and UDTs are stored in SYSUDTLIB; SYSUDTLIB is only backed up or restored as part of a DBC backup.
• While an operation may be allowed, there is no guarantee the object will function as desired once the operation is complete.
• For some copy operations, certain dependencies must be met before the operation can be successful. For example, to copy a database with a journal table, the destination database must be created with a journal table.
Table 23: Selective Backup and Restore Rules
Each cell shows DB Level / Object Level.

Type | Restore to same DBS | Copy to same DBS | Copy and Rename to same DBS | Copy to different DBS | Copy and Rename to different DBS
Aggregate Function | Yes/Yes | Yes/Yes | Yes/No | Yes/Yes | Yes/No
JAR | Yes/Yes | Yes/Yes | Yes/No | Yes/Yes | Yes/No
External Stored Procedures | Yes/Yes | Yes/Yes | Yes/No | Yes/Yes | Yes/No
Standard Function | Yes/Yes | Yes/Yes | Yes/No | Yes/Yes | Yes/No
Trigger | Yes/Yes | No/No | No/No | No/No | No/No
Instant or Constructor Method | N/A | N/A | N/A | N/A | N/A
Join Index | Yes/Yes | Yes/Yes | No/No | Yes/Yes | No/No
System Join Index (a) | Yes/Yes | Yes/Yes | Yes/No | Yes/Yes | Yes/No
Macro | Yes/Yes | Yes/Yes | Yes/No | Yes/Yes | Yes/No
Hash Index | Yes/Yes | Yes/Yes | No/No | Yes/Yes | No/No
Stored Procedure | Yes/Yes | Yes/Yes | Yes/No | Yes/Yes | Yes/No
Queue Table | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes
Table Function | Yes/Yes | Yes/Yes | Yes/No | Yes/Yes | Yes/No
Table Function (Operator) | Yes/Yes | Yes/Yes | Yes/No | Yes/Yes | Yes/No
Table | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes
PPI Table | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes
NOPI Table | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes
User-Defined Data Type (UDT) | N/A | N/A | N/A | N/A | N/A
View | Yes/Yes | Yes/Yes | Yes/No | Yes/Yes | Yes/No
Authorization | Yes/Yes | Yes/Yes | Yes/No | Yes/Yes | Yes/No
Parser Contract Function | Yes/Yes | Yes/Yes | Yes/No | Yes/Yes | Yes/No
User Installed File (UIF) | Yes/Yes | Yes/Yes | Yes/No | Yes/Yes | Yes/No
Remote Server (b)(c) | Yes/Yes | Yes/Yes | No/No | Yes/Yes | No/No
Schema | Yes/Yes | Yes/Yes | No/Yes | Yes(b)/Yes(b) | No/Yes(b)

a. System Join Indexes (SJIs) can be copied to a different database. The DB name can be changed but the object name cannot.
b. All dependent objects used by the remote server or schema object must exist prior to the copy.
c. Remote Server objects must reside in the TD_SERVER_DB database and cannot be copied out to a different database. However, individual objects in the TD_SERVER_DB database can be copied and renamed as long as the Remote Server objects are copied back into the TD_SERVER_DB database.
Meaning of Columns
• Restore to Same DBS – restore back to the same database name, same object name, and same DBS system.
• Copy to Same DBS – copy back to the same database name, same object name, and same DBS system.
• Copy and Rename to Same DBS – COPY FROM to the same DBS system, changing either the DB name or the object name or both.
• Copy to Different DBS – copy back to the same database name and same object name in a different DBS system.
• Copy and Rename to Different DBS – COPY FROM to a different DBS system, changing either the DB name or the object name or both.
Note: Even though statistics collections are not objects, they follow the objects with which they are associated and have the same restore and copy characteristics. If the associated object can be restored or copied to a target name and/or location (see Table 23 for valid combinations), its associated statistics collections are restored or copied to the same target name and/or location.
APPENDIX C
Excluding Tables
This appendix lists the individual object types that can or cannot be specified in the EXCLUDE TABLES clause of the ARCHIVE, RESTORE, and COPY statements. An illustrative statement follows Table 24.
Exclude Tables Object Types
Table 24: Exclude Tables Object Types

Object type | Exclude tables during archive | Exclude tables during restore | Exclude tables during copy
Table | yes | yes | yes
CP Table | yes | yes | yes
PPI Table | yes | yes | yes
NOPI Table | yes | yes | yes
Journal Table | no | no | no
Queue Table | yes | yes | yes
View | no | yes | yes
Macro | no | yes | yes
Trigger | no | yes | yes
User-Defined Type (UDT) | no | no (b) | no (b)
Glop | no | yes | yes
Stored Procedures | yes | yes | yes
External Stored Procedures | yes | yes | yes
Java (JAR) procedure | yes | yes | yes
Join Index | yes (a) | yes (a) | yes (a)
Hash Index | yes (a) | yes (a) | yes (a)
Authorization Function | no | yes | yes
Standard (Scalar) Function (with data) | yes | yes | yes
Standard (Scalar) Function (without data) | no | yes | yes
Aggregate Function (with data) | yes | yes | yes
Aggregate Function (without data) | no | yes | yes
Statistical Function (with data) | yes | yes | yes
Statistical Function (without data) | no | yes | yes
AggrStat Function (with data) | yes | yes | yes
AggrStat Function (without data) | no | yes | yes
Table Function (with data) | yes | yes | yes
Table Function (without data) | no | yes | yes
Instant or Constructor Method (with data) | yes | no (b) | no (b)
Instant or Constructor Method (without data) | no | no (b) | no (b)
Parser Contract Function (with data) | yes | yes | yes
Parser Contract Function (without data) | no | yes | yes
Table Function (Operator) (with data) | yes | yes | yes
Table Function (Operator) (without data) | no | yes | yes
User Installed File (UIF) | yes | yes | yes
Remote Server (without data) | no | yes | yes
Schema (without data) | no | yes | yes

a. Do not specify these objects in the exclusion list; use SKIP JOIN INDEX instead.
b. The EXCLUDE TABLES clause cannot be applied to SYSUDTLIB or to UDTs/UDMs, because UDTs/UDMs are stored in the SYSUDTLIB database and a restore or copy of SYSUDTLIB is not allowed.
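As an illustration, a database-level archive can exclude an object that Table 24 marks as excludable during archive. The following sketch is illustrative only: the database, table, and file names are hypothetical, and it assumes the EXCLUDE TABLES (databasename.tablename) form of the clause.

LOGON TDPI/DBC,DBC;
ARCHIVE DATA TABLES (Sales) ALL,
EXCLUDE TABLES (Sales.Staging_Temp),
RELEASE LOCK,
FILE=ARCHIVE1;
LOGOFF;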
Glossary
A
access lock The requestor is willing to accept minor inconsistencies of the data while
accessing the database (an approximation is sufficient).
An access lock permits modifications on the underlying data while the SELECT operation is in
progress.
access module processor (AMP) A virtual processor that receives steps from a parsing
engine (PE) and performs database functions to retrieve or update data. Each AMP is
associated with one virtual disk, where the data is stored. An AMP manages only its own
virtual disk and not the virtual disk of any other AMP.
AMP  See access module processor (AMP).
AP  Application Processor
APRC  Application Processor Reset Containment
ARC  Teradata's Archive/Recovery Utility
B
BTEQ
Basic Teradata Query
build step Teradata ARC runs a build step for each subtable (primary data, indexes, LOG,
LOB columns) that is included as part of a table.
C
Call-Level Interface (CLI) A programming interface designed to support SQL access to databases from shrink-wrapped application programs. SQL/CLI provides an international standard, implementation-independent CLI to access SQL databases. Client-server tools can easily access databases through dynamic link libraries. It supports and encourages a rich set of client-server tools.
Column Partitioned (CP) tables Tables partitioned based on one or more columns.
D
database A related set of tables that share a common space allocation and owner. A collection of objects that provide a logical grouping for information. The objects may include tables, views, macros, triggers, and stored procedures.
DBC Database computer. Name of database with Teradata Database system tables.
DIP Database DBC Initialization Procedure
E
exclusive lock The requester has exclusive rights to the locked resource. No other process
can read from, write to, or access the locked resource in any way.
G
GUI
Graphical user interface
I
IFP
Interface Processor
I/O Input/output
J
JCL
Job Control Language
join A select operation that combines information from two or more tables to produce a
result.
K
KB Kilobytes
L
LOB Large object
log A file that records events. Many programs produce log files. Often a log file is used to
determine what is happening when problems occur. Log files have the extension “.log”.
M
multi-stream The multi-stream feature internally breaks a single job into multiple pieces as
streams. This results in multiple sub-jobs, where each stream performs a subset of the work
defined in your job. This improves job parallelism, which can increase job performance.
Multi-stream is only available when using a tape backup product. When using the associated
GUI to interface with a tape backup product, you can specify how many streams to use for
your job. However, if you run ARC from the command line, there is no way to specify multi-stream, so the job always runs as a single-stream job.
The multi-stream feature is not supported on mainframe systems.
O
Open Database Connectivity (ODBC) Under ODBC, drivers are used to connect
applications with databases. The ODBC driver processes ODBC calls from an application, but
passes SQL requests to the Teradata Database for processing.
P
partitioning expression The expression defined in the table definition for a table with a
partitioned primary index (PPI) that specifies how the data for the table is to be partitioned.
PDE  Parallel Database Extension
PE  Parsing Engine
PPI  Partitioned Primary Index
R
RCC Recovery Control Catalog
read lock Several users can hold read locks on a resource, during which time the system
permits no modification of that resource.
Read locks ensure consistency during read operations such as those that occur during a
SELECT statement.
Relational Database Management System (RDBMS) A database management system in
which complex data structures are represented as simple two-dimensional tables consisting of
columns and rows. For Teradata SET, RDBMS is referred to as “Teradata Database.”
S
SQL
See Structured Query Language (SQL).
stream See Multi-stream.
Structured Query Language (SQL) The initials, SQL, are pronounced either see-kwell or as
separate letters. SQL is a standardized query language for requesting information from a
database. SQL consists of a set of facilities for defining, manipulating, and controlling data in
a relational database.
T
table A two-dimensional structure made up of one or more columns with zero or more
rows that consist of fields of related information. See also database.
U
UDF User-defined function. By providing a mechanism that supports the creation of SQL
functions, scalar and aggregate UDFs allow users to add their own extensions to Teradata
Database SQL. These functions can be used to operate on any Teradata Database data type and
can be utilized wherever built-in functions are used. If a function doesn’t exist to perform a
specific task or algorithm, write one that does. UDFs can also simplify complex SQL syntax.
UDT User-defined type. UDTs introduce object-oriented technology to SQL. Users can
define custom data types that model the structures and behaviors of the data in their
applications.
UIF User-installed files. UIFs are user-developed scripts written in external languages such as Ruby, Python, Perl, and R.
V
VM  Virtual machine
W
write lock The requester has exclusive rights to the locked resource, except for readers not
concerned with data consistency.
Index
Symbols
* character
ANALYZE statement 177
$ARC
database added by LOGSKIPPED 141
default database 111
A
ABORT keyword
ARCHIVE statement 182, 187
BUILD statement 195
COPY statement 201
RESTORE statement 234
ROLLBACK statement 253, 254
ROLLFORWARD statement 256, 258
access privileges
ARCHIVE statement 183
BUILD statement 195
CHECKPOINT statement 197
COPY statement 202
DELETE JOURNAL statement 217
RELEASE LOCK statement 229
RESTORE statement 235
REVALIDATE REFERENCES FOR statement 250
ROLLBACK statement 253
ROLLFORWARD statement 256
AccessRights table 34
accid parameter, LOGON statement 226
AccLogRuleTbl table 34
Accounts table 34
ALL 191
ALL FROM ARCHIVE keywords
COPY statement 200, 208
RESTORE statement 233, 244
ALL keyword
ANALYZE statement 177
ARCHIVE statement 181
BUILD statement 194, 195
CHECKPOINT statement 196
COPY statement 200
DELETE DATABASE statement 214
DELETE JOURNAL statement 216
RELEASE LOCK statement 228, 229
RESTORE statement 233
REVALIDATE REFERENCES FOR statement 249
ROLLBACK statement 252, 253
ROLLFORWARD statement 255
ALL PARTITIONS
potential risks 45
all-amps
archive 189
restore 65
Always On 39
AMP keyword
ARCHIVE statement 182, 185
COPY statement 207
RESTORE statement 233, 242
AMPs
archive configuration 39
offline
archives 48
checkpoints 98
recovering 81, 82
restoring
after reconfiguration 76
all AMPs online 70
offline 73
specific 78
ANALYZE statement
* character 177
ALL keyword 177
CATALOG keyword 177
character set support 178
DISPLAY keyword 177
FILE keyword 177
kanji 178
LONG keyword 177
VALIDATE keyword 177
APPLY TO keywords
COPY statement 201, 207
ARC
return codes 165
script samples 29
terminology 82
ARCDFLT
about defaults 28
environment variable 103, 104
ARCENV
about defaults 28
environment variable 103
usage 105
ARCENVX
about defaults 28
environment variable 103
usage 105
archive
AMPs 39
cluster archive 47
databases 39
dictionary archive 47
general characteristics 39
large objects 49
locks during archive of selected partitions 42
nonhashed tables 49
offline AMPs 48
potential risks for selected partitions 45
procedure for selected partitions 43
sample script 29
selected partitions of PPI tables 41
tables 39
Teradata Database 14.0 and later 54
Teradata Database versions prior to 14.0 53
ARCHIVE script 99
ARCHIVE statement
ABORT keyword 182, 187
access privileges 183
ALL keyword 181
AMP keyword 182, 185
CATALOG operations 112
CLUSTER/CLUSTERS keyword 182, 185
DATA TABLE/TABLES keywords 181
DICTIONARY TABLE/TABLES keywords 181
EXCLUDE keyword 181
EXCLUDE TABLE/TABLES 181, 201, 234
FILE keyword 183
GROUP keyword 183, 187
INDEXES keyword 182, 187
JOURNAL TABLE/TABLES keywords 181
NO FALLBACK TABLE/TABLES keywords 181
NONEMPTY DATABASE/DATABASES keywords 183,
189
referential integrity 185
RELEASE LOCK keywords 182
USE GROUP READ/USE GROUP READ LOCKS
keywords 183
archiving online 51
archiving, DDL modifications 39
ARCMAIN
defined 24
required files for 24
starting
from Windows 27
from z/OS 24
Teradata ARC 23
statements
ARCHIVE 179
BUILD 194
CHECKPOINT 196
COPY 199
DELETE DATABASE 214
DELETE JOURNAL 216
LOGOFF 225
LOGON 226
RELEASE LOCK 228
RESTORE 231
REVALIDATE REFERENCES FOR 249
ROLLBACK 251, 252
ROLLFORWARD 255
AsgdSecConstraints 34
B
BACKUP NOT DOWN keywords
RELEASE LOCK statement 229, 230
backup rules 275
batch mode 23
BUILD 191
BUILD script 99
BUILD statement
ABORT keyword 195
access privileges 195
ALL keyword 194, 195
DATA TABLE/TABLES keywords 194
EXCLUDE keyword 194
JOURNAL TABLES keywords 194
NO FALLBACK TABLE/TABLES keywords 194
RELEASE LOCK keywords 195
BusinessCalendarException 34
BusinessCalendarPattern 34
C
cancel operation 29
CATALOG
ANALYZE statement 177
creating multiple tables 112
default database 111
primary index in 112
runtime parameter
defined 107
syntax 110
change images
restored 73
character set
ANALYZE statement 178
COPY statement 212
establishing 116
CHARSETNAME runtime parameter
defined 107
syntax 115
CHECKPOINT runtime parameter
defined 107
syntax 121
CHECKPOINT statement
access privileges 197
ALL keyword 196
chkptname parameter 197
EXCLUDE keyword 196
NAMED keyword 197, 198
USE ACCESS LOCK keywords 197
USE LOCK keywords 197
USE READ LOCK keywords 197
WITH SAVE keywords 196, 197
CHECKSUM runtime parameter
defined 107
syntax 123
Chinese, simplified 115
chkptname parameter
CHECKPOINT statement 197
ROLLBACK statement 253, 254
ROLLFORWARD statement 256, 257
client SQL requests 39
CLUSTER 207
cluster archive
NO BUILD option 77
See also archive
CLUSTER/CLUSTERS keyword
ARCHIVE statement 182, 185
COPY statement 207
RELEASE LOCK statement 229
RESTORE statement 233, 242
CollationTbl table 34
concurrency control 40
conditional expression
considerations during archive of selected partitions 42
example 42
ConstraintFunctions 34
ConstraintNames 41
ConstraintValues 34
copy
database 86
large objects 84
COPY statement
ABORT keyword 201
access privileges 202
ALL FROM ARCHIVE keywords 200, 208
ALL keyword 200
AMP keyword 207
APPLY TO keywords 201, 207
character set support 212
CLUSTER/CLUSTERS keyword 207
DATA TABLE/TABLES keywords 200
DBC.DBCAssociation table 204
DICTIONARY TABLE/TABLES keywords 200
examples 84, 203
EXCLUDE keyword 200
FILE keyword 201
FROM keyword 201, 206
HUT locks 205
JOURNAL TABLE/TABLES keywords 200
kanji 212
NO BUILD keywords 201, 208
NO FALLBACK keywords 201, 206
NO FALLBACK TABLE/TABLES keywords 200
NO JOURNAL keywords 201
referential integrity 205
RELEASE LOCK keywords 201
REPLACE CREATOR keywords 201
WITH JOURNAL TABLE keywords 201, 207
CP table restoration
revalidation failure 76
table structure change 254, 257
CREATE INDEX keyword 59
creating tables for ARCHIVE statements 112
CURRENT keyword
ROLLFORWARD statement 256
D
data compression/decompression 101
data dictionary definitions
restoration of 58
DATA TABLE/TABLES keywords
ARCHIVE statement 181
BUILD statement 194
COPY statement 200
RESTORE statement 233
database
archive, see archive
copying 82
Database DBC
interrupted restore 63
restoring 62
database SYSUDTLIB 36
DATAENCRYPTION runtime parameter
defined 107
syntax 125
DBase table 34
DBC Initialization Procedure (DIP)
during restore with one AMP offline 73
DBC, archiving with Multi-Stream 51
DBC.DBCAssociation table
COPY statement 204
DBCAssociation 41
DBCLOG
starting from z/OS 25
DBCPFX
starting from z/OS 25
DBQLRuleCountTb1 34
DBQLRuleTb1 34
DBSERROR parameter 127
DEFAULT runtime parameter
defined 107
syntax 128
DELETE DATABASE statement
ALL keyword 214
EXCLUDE keyword 214
DELETE JOURNAL statement
ALL keyword 216
EXCLUDE keyword 216
RESTORED keyword 216
SAVED keyword 216
DELETE keyword
ROLLBACK statement 253, 254
ROLLFORWARD statement 256, 258
DELETE statement 70
Dependency 41
DICTIONARY 191
dictionary archive, see archive
DICTIONARY TABLE/TABLES keywords
ARCHIVE statement 181
COPY statement 200
RESTORE statement 233, 242
DISPLAY keyword
ANALYZE statement 177
DUMP
keyword to start on z/OS 25
See also ARCHIVE
E
environment variables
ARCDFLT 103
ARCENV 103, 105
ARCENVX 103, 105
override priority 28
ERRLOG runtime parameter
defined 107
syntax 131
error tables
characteristics 67
ERRORDB keyword
REVALIDATE REFERENCES FOR statement 249
eventno parameter
ROLLFORWARD statement 256
EXCLUDE keyword
ARCHIVE statement 181
BUILD statement 194
CHECKPOINT statement 196
COPY statement 200
DELETE DATABASE statement 214
DELETE JOURNAL statement 216
RELEASE LOCK statement 228
RESTORE statement 233
REVALIDATE REFERENCES FOR statement 249
ROLLBACK statement 252
ROLLFORWARD statement 255
EXCLUDE TABLE/TABLES keywords
all-AMP restore of selected partitions 65
ARCHIVE statement 181, 201, 234
EXCLUDE TABLES 191, 279
EXCLUDE TABLES keyword 209, 245
F
failed restores 63
fallbacks
restoring 73
FATAL runtime parameter
syntax 132
FILE keyword
ANALYZE statement 177
ARCHIVE statement 183
COPY statement 201
FILEDEF runtime parameter
defined 108
syntax 133
FROM keyword 206
COPY statement 201, 206
full-table locks
during archive of selected partitions 42
G
Globally Persistent Object (GLOP) 272
GROUP keyword
ARCHIVE statement 183, 187
group read locks
restoring selected partitions 91
H
HALT runtime parameter
defined 108
syntax 135
HELP DATABASE keyword 59
HEX runtime parameter
defined 108
syntax 136
host utility locks, see HUT locks
Hosts table 34
HUT locks
archiving and 89
COPY statement 205
other operations and 92
restoring 91
using 88
I
IdCol 41
incremental archives 42
indexes
hash 60
join 60
limitation on archive/restore 59
secondary tables 59
unique secondary 73
INDEXES keyword
ARCHIVE statement 182, 187
insufficient memory
during restore 59
for large tables 59
interactive mode 23
interrupted restores 63
IOMODULE 30
IOMODULE runtime parameter
defined 108
syntax 137
IOPARM 30
IOPARM runtime parameter
defined 108
syntax 138
J
Join Index 98
join indexes
limitation on archive/restore 59
requirements for archive and restore 60
JOURNAL 191
JOURNAL TABLE/TABLES keywords
BUILD statement 194
COPY statement 200
RESTORE statement 233
journal tables
archiving 95, 96
change data location 94
checkpoint operations, controlling 97
creating 94
data location 94
local 95
remote 95
setting up 93
subtable
active 95
restored 95
saved 95
K
kanji
ANALYZE statement 178
ARC support for 117
character sets 115
COPY statement 212
katakana character sets 115
keywords
ABORT 187, 254
ALL FROM ARCHIVE 208, 244
AMP 185, 207, 242
APPLY TO 207
BACKUP NOT DOWN 230
CLUSTER 185, 207, 242
DELETE 254, 258
DICTIONARY TABLE 242
EXCLUDE TABLES 209, 245
FROM 206
GROUP 187
INDEXES 187
NAMED 198
NO BUILD 208, 244
NO FALLBACK 206
NONEMPTY DATABASE 189
PRIMARY DATA 257
RELEASE LOCK 258
RESTORE FALLBACK 243
USE LOCK 197
WITH JOURNAL TABLE 207
WITH SAVE 197
Korean character sets 115
L
large objects (LOBs)
archiving 49
copying 84
limitations
character set 117
local journaling, see journal tables
locks
during an all-AMP restore 73
in copy operations 84
restoring 91
See also HUT locks
See utility locks
LOG WHERE 65
LOGGING ONLINE ARCHIVE OFF statement 219
LOGGING ONLINE ARCHIVE ON statement 221
logical space 77
LOGOFF statement 225
LOGON runtime parameter
defined 108
syntax 140
LOGON statement
accid parameter 226
password parameter 226
tdpid parameter 226
userid parameter 226
LogonRuleTbl table 34
LOGSKIPPED runtime parameter
defined 108
syntax 141
LONG keyword
ANALYZE statement 177
M
migration, online tables 55
multiple CATALOG tables 112
Multi-Stream 51
Multi-Stream, restoring with 78
N
NAMED keyword
CHECKPOINT statement 197, 198
NetBackup
using with Teradata ARC 27
Next table 34
NO BUILD keywords
COPY statement 201, 208
RESTORE statement 234, 244
NO BUILD option 77
NO DELETE keywords
ROLLBACK statement 253
ROLLFORWARD statement 256
NO FALLBACK 191
NO FALLBACK keywords
COPY statement 201, 206
NO FALLBACK TABLE/TABLES keywords
ARCHIVE statement 181
BUILD statement 194
COPY statement 200
RESTORE statement 233
NO JOURNAL keywords
COPY statement 201
NONEMPTY DATABASE/DATABASES keywords
ARCHIVE statement 183, 189
NOSYNC Keyword
Teradata Database 14.0 and later 54
Teradata Database versions prior to 14.0 53
NOSYNC keyword 182
O
offline AMPs 48
OLDCATALOG parameter 112
OldPasswords table 34
online archiving 51
ONLINE keyword 182
OUTLOG runtime parameter
defined 108
syntax 145
OVERRIDE keyword
RELEASE LOCK statement 229
overrides
by ARCENV 28
Owners table 34
P
Parents table 34
PARM runtime parameter
defined 108
syntax 146
Partial-AMPs Copy 206
Partial-AMPs, restoring with 79
partitioning, see selected partitions
PARTITIONS BY 65
PARTITIONS WHERE keyword
define rows for archive 190
potential risks 45
restore partitioned data 65
password parameter
LOGON statement 226
PasswordRestrictions 34
PAUSE runtime parameter
defined 108
syntax 149
PERFFILE runtime parameter
defined 109
syntax 150
perm space
affecting restore 77
physical space 77
platform support 21
PPI Table
backup and restore rules 276
PPI table restoration 254, 257
PPI tables
archiving selected partitions 41
restoring 76
PRIMARY DATA keywords
ROLLBACK statement 254
ROLLFORWARD statement 253, 256, 257
primary index
in CATALOG table 112
product version numbers 3
Profiles table 34
Q
query band 259
QueryStatsTbl 41
R
RCConfiguration table 34, 270
RCEvent table 34, 267
RCMedia table 34, 270
ReconfigDeleteOrderTbl 35
ReconfigInfoTbl 35
ReconfigRedistOrderTbl 35
reconfiguration
restoring after 76
recovering
general characteristics 80
offline AMPs 81
specific AMP 82
tables and databases 80
Redrive 39
ReferencedTbls 41
ReferencingTbls 41
referential integrity
after all-AMPs copy 205
ARCHIVE statement 185
RESTORE statement 238
reinitialization prior to restore 63
RELEASE LOCK keywords
ARCHIVE statement 182
BUILD statement 195
COPY statement 201
RESTORE statement 234
REVALIDATE REFERENCES FOR statement 249
ROLLBACK statement 253
ROLLFORWARD statement 256, 258
RELEASE LOCK statement
access privileges 229
ALL keyword 228, 229
BACKUP NOT DOWN keywords 229, 230
CLUSTER/CLUSTERS keyword 229
EXCLUDE keyword 228
OVERRIDE keyword 229
REPLACE CREATOR keywords
COPY statement 201
RESTART runtime parameter
defined 109
syntax 152
RESTARTLOG runtime parameter
defined 109
syntax 153
restore
after AMP reconfiguration 76
all databases, example 71
change images 73
conditions of 58
considerations before 59
data dictionaries 58
defined 58
difference in perm space 77
dropped items 59
entire database 64
error tables for selected partitions 67
example for selected partitions 44
EXCLUDE option 71
failed or interrupted 63
fallbacks 73
full database from selected table archive 64
general characteristics 58
NO BUILD option 77
potential risks for selected partitions 45
sample script 30
selected partitions 64
selected table 64
specific AMP 78
types of 64
restore rules 275
RESTORE script 99
RESTORE statement
ABORT keyword 234
access privileges 235
ALL FROM ARCHIVE keywords 233, 244
ALL keyword 233
AMP keyword 233, 242
CLUSTER/CLUSTERS keyword 233, 242
DATA TABLE/TABLES keywords 233
DICTIONARY TABLE/TABLES keywords 233, 242
EXCLUDE keyword 233
JOURNAL TABLE/TABLES keywords 233
NO BUILD keywords 234, 244
NO FALLBACK TABLE/TABLES keywords 233
referential integrity 238
RELEASE LOCK keywords 234
RESTORE FALLBACK keywords 234, 243
RESTORED keyword
DELETE JOURNAL statement 216
ROLLFORWARD statement 256
restoring
CP tables 254, 257
PPI tables 254, 257
restoring Database DBC 62
restoring with all AMPs online 70
restoring with Multi-Stream 78
restoring with partial amps 79
restoring with specific AMP archive 71
resume operation, see cancel operation
REVALIDATE REFERENCES FOR statement
access privileges 250
ALL keyword 249
ERRORDB keyword 249
example 251
EXCLUDE keyword 249
RELEASE LOCK keywords 249
RoleGrants table 35
Roles table 35
ROLLBACK statement
ABORT keyword 253, 254
access privileges 253
ALL keyword 252, 253
chkptname parameter 253, 254
DELETE keyword 253, 254
EXCLUDE keyword 252
NO DELETE keywords 253
RELEASE LOCK keywords 253
USE CURRENT JOURNAL keywords 253
USE RESTORED JOURNAL keywords 253
ROLLFORWARD statement
ABORT keyword 256, 258
access privileges 256
ALL keyword 255
chkptname parameter 256, 257
CURRENT keyword 256
DELETE keyword 256, 258
eventno parameter 256
EXCLUDE keyword 255
NO DELETE keywords 256
PRIMARY DATA keywords 253, 256, 257
RELEASE LOCK keywords 256, 258
RESTORED keyword 256
runtime parameters
CATALOG 110
CHARSETNAME 115
CHECKPOINT 121
CHECKSUM 123
DATAENCRYPTION 125
DEFAULT 128
ERRLOG 131
FATAL 132
FILEDEF 133
HALT 135
HEX 136
IOMODULE 137
IOPARM 138
LOGON 140
LOGSKIPPED 141
on z/OS systems 107
OUTLOG 145
PARM 146
PAUSE 149
PERFFILE 150
RESTART 152
RESTARTLOG 153
SESSIONS 155
STARTAMP 157
UEN 133, 158
VERBOSE 159
WORKDIR 164
S
sample scripts
archive 29
restore 30
SAVED keyword
DELETE JOURNAL statement 216
scripts
ARCHIVE 99
BUILD 99
RESTORE 99
SecConstraints 35
secondary indexes
during a restore without fallback 73
secondary table indexes 59
selected partitions
all-amps
restore 65
archive 41
archiving PPI tables 189
error tables 67
full-table locks 42
group read locks 91
keywords for 189
limitations 189
locks 58
PARTITIONS WHERE keyword 190
potential risks 45
restoring 64
UtilVersion match 58
selected table
restoring 64
session query band 259
SESSIONS runtime parameter
defined 109
syntax 155
SET QUERY_BAND statement 259
SHOW JOIN INDEX keyword 59
SIGILL (illegal signal) 170
SIGINT (interrupt signal) 170
SIGSEGV (segmentation violation signal) 170
SIGTERM (terminate signal) 170
Single Sign-On (SSO) 226, 227
SKIP JOIN INDEX 98
RESTORE 234
syntax 194
SKIP JOIN INDEX statement 183
SKIP STAT COLLECTION statement 100
SkippedTables table 141
software releases
supported 3
specifications 23
SQL requests, client 39
SSO (Single Sign-On) 226, 227
stack trace dumps 170
STARTAMP runtime parameter
defined 109
syntax 157
starting Teradata ARC 23
statistics 42
StatsTbl 41
SysSecDefaults table 35
system database 36
system tables 33
SYSUDTLIB 275
T
table indexes 59
TableConstraints 41
table-level exclude
ARCHIVE statement 191
tables
copying 82
dropped during restore 59
excluding 279
insufficient memory 59
migrating, online 55
nonhashed, archiving 49
PPI tables 41
recovering 80
restoring tables 73
statistics during archive of selected partitions 42
tdpid parameter
LOGON statement 226
Teradata Relational Database Management System, see
Teradata Database
Teradata Wallet considerations 140
terms used by Teradata ARC 22
TextTbl 41
Translation table 35
TriggersTbl 41
troubleshooting
risks for selected partitions 45
TVFields 41
TVM 41
U
UDFInfo 41
UDTCast table 35
UDTInfo table 35
UDTTransform table 35
UEN runtime parameter
defined 109
in a file name 133
syntax 158
UNIX signals
SIGILL 170
Teradata Archive/Recovery Utility Reference, Release 16.00
SIGINT 170
SIGSEGV 170
SIGTERM 170
UnresolvedReferences 41
USE ACCESS LOCK keywords
CHECKPOINT statement 197
USE CURRENT JOURNAL keywords
ROLLBACK statement 253
USE GROUP READ keywords
ARCHIVE statement 183
USE GROUP READ LOCKS keywords
ARCHIVE statement 183
USE LOCK keywords
CHECKPOINT statement 197
USE READ LOCK keywords
CHECKPOINT statement 197
USE RESTORED JOURNAL keywords
ROLLBACK statement 253
User Defined Methods (UDM) 272
User Defined Types (UDT) 272
User Installed Files (UIF) 272, 273, 276, 280
userid parameter
LOGON statement 226
users
dropped during restore 59
Utility Event Number, see UEN runtime parameter
utility locks
during restores 58
UtilVersion
matching for selected partitions 58
V
VALIDATE keyword
ANALYZE statement 177
variables
override priority 28
VERBOSE runtime parameter
defined 109
syntax 159
verifying version number 22
Veritas NetBackup
using with Teradata ARC 27
version numbers 3, 22
views
dropped during restore 59
W
Windows
starting ARCMAIN 27
WITH JOURNAL TABLE keywords
COPY statement 201, 207
WITH SAVE keywords
CHECKPOINT statement 196, 197
WORKDIR runtime parameter
defined 109
syntax 164
write locks
during a restore of selected partitions 58
Z
z/OS
starting ARCMAIN 24