What would you do if you knew?™
Teradata Database Node Software
Migration Guide
Linux
Release 15.0
B035-5942-034K
December 2015
The product or products described in this book are licensed products of Teradata Corporation or its affiliates.
Teradata, Active Data Warehousing, Active Enterprise Intelligence, Applications-Within, Aprimo Marketing Studio, Aster, BYNET,
Claraview, DecisionCast, Gridscale, MyCommerce, QueryGrid, SQL-MapReduce, Teradata Decision Experts, "Teradata Labs" logo, Teradata
ServiceConnect, Teradata Source Experts, WebAnalyst, and Xkoto are trademarks or registered trademarks of Teradata Corporation or its
affiliates in the United States and other countries.
Adaptec and SCSISelect are trademarks or registered trademarks of Adaptec, Inc.
AMD Opteron and Opteron are trademarks of Advanced Micro Devices, Inc.
Apache, Apache Avro, Apache Hadoop, Apache Hive, Hadoop, and the yellow elephant logo are either registered trademarks or trademarks
of the Apache Software Foundation in the United States and/or other countries.
Apple, Mac, and OS X all are registered trademarks of Apple Inc.
Axeda is a registered trademark of Axeda Corporation. Axeda Agents, Axeda Applications, Axeda Policy Manager, Axeda Enterprise, Axeda
Access, Axeda Software Management, Axeda Service, Axeda ServiceLink, and Firewall-Friendly are trademarks and Maximum Results and
Maximum Support are servicemarks of Axeda Corporation.
Data Domain, EMC, PowerPath, SRDF, and Symmetrix are registered trademarks of EMC Corporation.
GoldenGate is a trademark of Oracle.
Hewlett-Packard and HP are registered trademarks of Hewlett-Packard Company.
Hortonworks, the Hortonworks logo and other Hortonworks trademarks are trademarks of Hortonworks Inc. in the United States and other
countries.
Intel, Pentium, and XEON are registered trademarks of Intel Corporation.
IBM, CICS, RACF, Tivoli, and z/OS are registered trademarks of International Business Machines Corporation.
Linux is a registered trademark of Linus Torvalds.
LSI is a registered trademark of LSI Corporation.
Microsoft, Active Directory, Windows, Windows NT, and Windows Server are registered trademarks of Microsoft Corporation in the United
States and other countries.
NetVault is a trademark or registered trademark of Dell Inc. in the United States and/or other countries.
Novell and SUSE are registered trademarks of Novell, Inc., in the United States and other countries.
Oracle, Java, and Solaris are registered trademarks of Oracle and/or its affiliates.
QLogic and SANbox are trademarks or registered trademarks of QLogic Corporation.
Quantum and the Quantum logo are trademarks of Quantum Corporation, registered in the U.S.A. and other countries.
Red Hat is a trademark of Red Hat, Inc., registered in the U.S. and other countries. Used under license.
SAP is the trademark or registered trademark of SAP AG in Germany and in several other countries.
SAS and SAS/C are trademarks or registered trademarks of SAS Institute Inc.
SPARC is a registered trademark of SPARC International, Inc.
Symantec, NetBackup, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the United States
and other countries.
Unicode is a registered trademark of Unicode, Inc. in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other product and company names mentioned herein may be the trademarks of their respective owners.
The information contained in this document is provided on an "as-is" basis, without warranty of any kind, either express
or implied, including the implied warranties of merchantability, fitness for a particular purpose, or non-infringement.
Some jurisdictions do not allow the exclusion of implied warranties, so the above exclusion may not apply to you. In no
event will Teradata Corporation be liable for any indirect, direct, special, incidental, or consequential damages, including
lost profits or lost savings, even if expressly advised of the possibility of such damages.
The information contained in this document may contain references or cross-references to features, functions, products, or services that are
not announced or available in your country. Such references do not imply that Teradata Corporation intends to announce such features,
functions, products, or services in your country. Please consult your local Teradata Corporation representative for those features, functions,
products, or services available in your country.
Information contained in this document may contain technical inaccuracies or typographical errors. Information may be changed or
updated without notice. Teradata Corporation may also make improvements or changes in the products or services described in this
information at any time without notice.
To maintain the quality of our products and services, we would like your comments on the accuracy, clarity, organization, and value of this
document. Please e-mail: [email protected]
Any comments or materials (collectively referred to as "Feedback") sent to Teradata Corporation will be deemed non-confidential. Teradata
Corporation will have no obligation of any kind with respect to Feedback and will be free to use, reproduce, disclose, exhibit, display,
transform, create derivative works of, and distribute the Feedback and derivative works thereof without limitation on a royalty-free basis.
Further, Teradata Corporation will be free to use any ideas, concepts, know-how, or techniques contained in such Feedback for any purpose
whatsoever, including developing, manufacturing, or marketing products or services incorporating Feedback.
Copyright © 2014 - 2015 by Teradata. All Rights Reserved.
Table of Contents
Preface.............................................................................................................................................................7
Purpose.................................................................................................................................................................. 7
Audience................................................................................................................................................................ 7
Prerequisites.......................................................................................................................................................... 7
Revision History................................................................................................................................................... 7
Supported Releases............................................................................................................................................... 8
Additional Information........................................................................................................................................8
Related Links..................................................................................................................................................8
Product Safety Information.................................................................................................................................8
Chapter 1:
Migration Planning............................................................................................................................ 11
Migration Paths.................................................................................................................................................. 11
Migrations between Operating Systems and Database Releases.......................................................... 11
About Platform Requirements..........................................................................................................................12
Software Requirements...................................................................................................................................... 12
Backup and Restore Network Software.................................................................................................... 12
Plan Additional Space on Destination System........................................................................................ 14
Delete Crashdumps..................................................................................................................................... 15
Join and Hash Indexes................................................................................................................................ 15
Re-write UDFs, UDTs, Stored Procedures and External Stored Procedures.......................................16
Database Query Logging Data Setup and Save....................................................................................... 16
Statistics Tables............................................................................................................................................ 16
Security Configurations..............................................................................................................................17
Support for Earlier Teradata Tools and Utilities Releases......................................................................18
Products Upgrades...................................................................................................................................... 18
Security Impact............................................................................................................................................18
Internet Connection Needed..................................................................................................................... 18
PUT Package Updates.................................................................................................................................19
Obtaining PUT Software............................................................................................................................ 19
Restoring Data to a Smaller System................................................................................................................. 19
Workload Management Pre-Migration Tool.................................................................................................. 20
Pre-Migration System Inspections................................................................................................................... 20
Migration Timetable...........................................................................................................................................21
Chapter 2:
Preparing Source Data for Migration...............................................................................23
Working with the dbc Password........................................................................................................................ 23
Setting Up a Temporary Directory.................................................................................................................... 23
Running Pre-Upgrade Preparation Script........................................................................................................23
Inspecting Pre-Upgrade Preparation Script Output....................................................................................... 24
Working With Reserved Words.................................................................................................................. 25
Saving DBQL Log Data................................................................................................................................26
Referential Integrity Revalidation on LOB Tables....................................................................................27
Fixing Stored Procedures.............................................................................................................................28
Disabling MDS Flag..................................................................................................................................... 28
Fixing User DBC Default Roles......................................................................................................................... 29
Saving Security Configuration........................................................................................................................... 29
Opening a Database Window ............................................................................................................................29
About SCANDISK and CheckTable..................................................................................................................30
Running SCANDISK (Pre-Upgrade).........................................................................................................30
Running CheckTable (Pre-Upgrade)......................................................................................................... 31
Finishing Pending Operations........................................................................................................................... 33
Chapter 3:
Archiving Source Data.................................................................................................................... 35
Data Archival Basics............................................................................................................................................35
Archival Mechanisms...................................................................................................................................35
Archival Process Overview..........................................................................................................................35
Archive Size................................................................................................................................................... 35
Source Data Preservation............................................................................................................................ 36
Data Archival Using DSA................................................................................................................................... 36
Backing Up DBC and User Data................................................................................................................ 36
Data Archival Using ARC...................................................................................................................................37
Archiving Source Data Using ARC Archive Command......................................................................... 38
Chapter 4:
Preparing the Destination System..................................................................................... 39
Initializing the Destination System................................................................................................................... 39
Installing Site Security Configuration...............................................................................................................39
The tdlocaledef Utility Output Format............................................................................................................. 40
Recording DBC.Hosts Information................................................................................................................. 41
Copying Site-Defined Client Character Sets...................................................................................................41
Chapter 5:
Restoring Data.......................................................................................................................................43
Data Restoration Basics..................................................................................................................................... 43
Time Zone Setting Management.......................................................................................................................43
Checking Time Zone Setting Status.......................................................................................................... 43
Disabling the Time Zone Setting............................................................................................................... 44
Enabling the Time Zone Setting................................................................................................................ 45
Data Restoration Using DSA............................................................................................................................ 45
Restoring DBC and User Data...................................................................................................................45
Data Restoration Using ARC............................................................................................................................ 48
Working with the dbc Password................................................................................................................48
RESTORE Command................................................................................................................................. 48
Migration Rules........................................................................................................................................... 48
Log on Information for Scripts..................................................................................................................49
Restoring Database DBC............................................................................................................................49
Running the post_dbc_restore Script....................................................................................................... 50
Restoring SYSLIB Database....................................................................................................................... 51
Setting Session Mode.................................................................................................................................. 51
Migrating User Data Tables....................................................................................................................... 52
Restoring a Single Table or Database........................................................................................................55
Migrating LOB Tables to Systems with Different Hash Functions....................................................... 55
Chapter 6:
Setting Up Teradata Database.............................................................................................. 57
About Modifying Security Settings.................................................................................................................. 57
Inspecting Stored Procedures........................................................................................................................... 57
Inspecting Java External Stored Procedures................................................................................................... 58
Inspecting UDFs and UDTs.............................................................................................................................. 58
Setting Up DBQL Rules Table...........................................................................................................................59
Verifying DBC.Hosts Configuration................................................................................................................59
Running SCANDISK (Post-Upgrade)............................................................................................................. 59
Running CheckTable (Post Upgrade).............................................................................................................. 60
Re-Synchronizing MDS and DBS.....................................................................................................................61
Enabling Logons................................................................................................................................................. 62
Preface
Purpose
This book contains the steps required for migrating an entire Teradata Database system,
including access rights.
Audience
This guide is intended for use by Teradata customers, Teradata Customer Service
Representatives, and other Teradata associates.
Prerequisites
Before attempting to complete the procedures in this book, you must:
• Be trained on the administration and maintenance of the Teradata Database.
• Understand and know how to work with your operating system:
  • SUSE Linux Enterprise Server 10
  • SUSE Linux Enterprise Server 11, SP1
• Customers should complete the applicable Teradata Education Network Web-Based Training course(s):
  • Teradata 15.0 Differences, course number 51841
Note: If you do not have a membership that includes web-based training courses, you can
purchase these courses at www.teradata.com/TEN/.
Revision History
December 2015
• Removed DBCEXTENSION from the group of databases to exclude when archiving and restoring
• Clarified how to set the TimeZoneString value to disable the time zone setting

June 2015
• Added new topics and edited existing content to address the option of using DSA for migration

April 2015
• Removed SYSUDTLIB and SYSBAR from the group of databases to exclude when archiving and restoring

February 2015
• Added "Referential Integrity Revalidation on LOB Tables" topic
• Clarified the DIP scripts run in preparation for database DBC restoration
• Edited restoration warning messages and ignorable restoration error codes to provide greater detail
• Updated file naming conventions where applicable
• Reorganized topics where necessary to more accurately reflect process sequence
• Consolidated topics where appropriate to eliminate unnecessary duplication
• Edited various topics for clarity as necessary

December 2014
• Maintenance release

September 2014
• Maintenance release

March 2014
• Initial release
Supported Releases
This book supports migrating to Teradata Database Release 15.00 on all supported Teradata
platforms.
Additional Information
Related Links
• www.teradata.com: External site for product, service, resource, support, and other customer information.
• www.info.teradata.com: External site for published Teradata customer documentation.
• https://tays.teradata.com: External site for access to the Teradata software server. Only accessible with an active service contract.
Product Safety Information
This document may contain information addressing product safety practices related to data
or property damage, identified by the word Notice. A notice indicates a situation which, if not
avoided, could result in damage to property, such as equipment or data, but not related to
personal injury.
Example
Notice: Improper use of the Reconfiguration utility can result in data loss.
CHAPTER 1
Migration Planning
Migration Paths
All migrations are performed under the guidance of Teradata Customer Services.
Many methods and options are available to move data from one system to another. This
document focuses only on migrating an entire system, including access rights. To perform
this type of migration, you must first archive database DBC followed by all user databases
and objects from the source system, and then restore these items in the same order on the
destination system after initializing it using the sysinit command.
For migrations from Linux systems running Teradata Database releases earlier than 14.10 or
from non-Linux systems, you must use the Teradata Archive/Recovery (ARC) utility in
conjunction with Teradata Backup, Archive, and Restore (BAR) application software for
archival and restoration. Starting with Teradata Database release 14.10, you can use the
Teradata Data Stream Archive (DSA) or the ARC utility.
Note: This document focuses only on use of ARC. For information on using DSA to archive
and restore data for migration, see Teradata Data Stream Architecture User Guide. You can
download this document from http://www.info.teradata.com.
Migrations between Operating Systems and Database Releases
Teradata Database release 15.00 can run on the following operating systems:
• Novell SUSE LINUX Enterprise Server 10 Service Pack 3
• Novell SUSE LINUX Enterprise Server 11 Service Pack 1
Migrations from earlier Teradata Database releases running on MP-RAS, Windows, and
Linux platforms to Teradata Database release 15.00 are supported. If upgrading an operating
system, you must follow the appropriate Change Control procedures and validate the
destination system before migrating any data. Contact Teradata Customer Services for
assistance.
Teradata Database supports direct migration between releases when the source Teradata
Database release is no more than two major releases earlier than the destination Teradata
Database release. Otherwise, an intermediate migration or upgrade is required.
Teradata Database release 15.00 supports direct migrations from releases 13.00.00.00+.
Migrations from earlier releases require an intermediate migration or upgrade to a
13.00.00.00+ release before migrating to 15.00.
Teradata Database does not support migrations from a later to an earlier major or minor
release.
For information on the minimum version requirements to perform migrations between
Teradata Database releases, see Knowledge Article IDA00108C82, Minimum Starting Version
and Free Space Requirements for Teradata Database Upgrades and Migrations.
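These eligibility rules can be summarized as a small check. The following Python sketch is illustrative only: the helper name and the major-version parsing are our own, it compares major release numbers only, and real eligibility also depends on the minimum-version and free-space requirements in the Knowledge Article cited above.

```python
def direct_migration_supported(source_release, destination_release="15.00"):
    """Illustrative check of the rules above: direct migration requires a
    source release of 13.00 or later that is no more than two major
    releases earlier than the destination, and never later than it.
    Note: compares major numbers only; later-to-earlier *minor* release
    migrations are also unsupported but are not detected here."""
    src = int(source_release.split(".")[0])
    dst = int(destination_release.split(".")[0])
    if src < 13:
        return False  # needs an intermediate migration/upgrade to 13.00+ first
    if src > dst:
        return False  # later-to-earlier migrations are not supported
    return dst - src <= 2

print(direct_migration_supported("13.00"))  # True: within two major releases of 15.00
print(direct_migration_supported("12.00"))  # False: needs an intermediate hop
```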
Notice: This document focuses only on direct migration, that is, migration from a Teradata Database
release no more than two major releases earlier than the release to which this document
applies. If you need to perform an intermediate migration before migrating to this release,
additional actions may be necessary to ensure a successful migration. In such scenarios, be
sure to review the Teradata Database Node Software Migration Guide for the intermediate
release in addition to this document before undertaking a migration.
About Platform Requirements
The instructions in this book assume the destination system has been properly set up and
Teradata Database is tested and running properly. For installation details, refer to the
applicable installation documents for the particular platform and operating system. Refer to
the Base System Release Definition for a complete listing of compatible platforms and disk
arrays.
Software Requirements
Teradata Tools and Utilities Release 15.00 supports the new security features of Teradata
Database Release 15.00. You must upgrade all client systems to this release if you want to use
the new security features. The procedures in this manual assume that this upgrade has been
completed.
Backup and Restore Network Software
For migrations from Linux systems running Teradata Database releases earlier than 14.10 or
from non-Linux systems, the ARC utility in conjunction with BAR application software must
be used to back up and restore data; otherwise, the DSA or the ARC utility can be used.
Note: The ARC utility and BAR software can co-exist with DSA on the same BAR hardware.
However, BAR software restores only from BAR archives, and DSA restores only from DSA
archives.
With the ARC utility, you use the command line to archive data. Teradata DSA allows you to
back up and restore data using Teradata Viewpoint portlets and Symantec NetBackup.
Teradata DSA
Teradata DSA allows you to archive and restore data using Teradata Viewpoint portlets and
Symantec NetBackup.
DSA Configuration
Before using DSA for data archival and restoration, you must set up third-party components
and use the Viewpoint Monitored Systems portlet to add and enable the applicable systems,
making them available in the BAR Setup portlet. For information, see Teradata Viewpoint
User Guide. In addition, you must configure the systems, nodes, backup solutions, and target
groups in the BAR Setup portlet to make them available in the BAR Operations portlet. For
information, see Teradata Data Stream Architecture User Guide.
DSA Hardware Compression Driver Requirement
When using DSA to restore an archive made on a source system with hardware block-level
compression, you must install the hardware compression driver package (teradata-expressdx) on the destination system. Because it allows the destination system to read the
compressed archive, this requirement applies even if the destination system is not set up for
hardware compression.
Note: The teradata-expressdx driver package is provided with systems that are
equipped with compression hardware.
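One way to verify the driver package is present on a destination node is to query the RPM database. This is a hypothetical check, not a step from the guide; it assumes an RPM-based SLES node and uses only the standard `rpm -q` query.

```python
import subprocess

def package_installed(name):
    """Hypothetical helper: return True if an RPM package is installed.
    Returns False if the package is absent or rpm itself is unavailable."""
    try:
        completed = subprocess.run(["rpm", "-q", name], capture_output=True)
    except FileNotFoundError:
        return False  # not an RPM-based system
    return completed.returncode == 0

# For example, before restoring a block-compressed DSA archive:
# package_installed("teradata-expressdx")
```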
DSA Dependencies
Table 1: DSC Server Software Specifications
• Operating system: Teradata SUSE Linux Enterprise Server 11. (Note: This installation media is based on the original Novell installation media and customized by Teradata OS Engineering for installation on Teradata Enterprise Data Warehouse systems.)
• Teradata Database (for the DSC repository): Version 14.10.04.xx or later
• Teradata JDK package: Version 7 (teradata-jdk7)
• Teradata ActiveMQ: Version 5.6.0.1-3 or later, which requires teradata-jdk6
Note: The Relay Services Gateway (RSG) virtual processor is required for DSA installation
for both the TPA nodes and the DSA - DSC Server. Configure the RSG virtual processor for
your installation, but do not install the RSG package.
Note: The Teradata Managed Server for DSA - DSC Server can be either a virtual machine
(VM) or a TMSS.
Table 2: Media Server Software Specifications
• Operating system: SUSE Linux Enterprise Server 11
• Teradata JRE: Version 7 (teradata-jre7); required only if the DSA command line (BARCmdline) is installed on this server
Table 3: DSC Server Hardware Specifications
• Dell 720: Conforms to current Teradata Managed Server for DSA with DSC specifications
• Dell 710: Upgrade to conform to current Teradata Managed Server for DSA with DSC specifications

Table 4: Media Server Hardware Specifications
• Dell 720: Conforms to current Teradata Managed Server for DSA Media Server specifications
• Dell 710: Upgrade to conform to current Teradata Managed Server for DSA Media Server specifications

Table 5: Related Software Specifications
• Teradata Viewpoint: Version 15.00 or higher

Table 6: Teradata Managed Server for Storage (TMSS) - R720XD Software Specifications
• nfsserver: Manages an NFS environment
Teradata ARC Utility
With the ARC utility, you use the command line to archive data.
For information on the ARC utility and BAR application software that support this Teradata
Database release, refer to Teradata Archive/Recovery Utility Reference and Teradata BAR
Backup Application Software Release Definition, respectively. You can download these
documents from http://www.info.teradata.com.
Plan Additional Space on Destination System
Several conditions may require an increase in permspace for a database or user on the
destination system to allow for successful restoration of data from the source system. As a
rule of thumb, 20% free permspace on the destination system database is recommended,
but still may not be sufficient in some cases.
If the migration involves a hash function change, there must be enough free space in each
database to hold an extra temporary copy of the largest table in that database. A hash
function change may require every row in a table to be located on a different AMP on the
destination system than the source system. When a row is redistributed to another AMP, the
row gets copied into a buffer to be sent to the new AMP. The space for the original copy of the
row(s) is not freed until the entire table has been restored and redistributed.
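The extra-space requirement described above reduces to a simple comparison per database. The following Python sketch is purely illustrative (it is not part of the Teradata tooling, and the sizes are hypothetical):

```python
def enough_space_for_hash_change(free_perm_bytes: float, largest_table_bytes: float) -> bool:
    """During a hash-function change, each database needs enough free permspace
    to hold a temporary extra copy of its largest table (see the text above)."""
    return free_perm_bytes >= largest_table_bytes

# Hypothetical example: 50 GB free but an 80 GB largest table means the
# database needs more permspace before the restore can succeed.
print(enough_space_for_hash_change(50e9, 80e9))
```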
The most common reasons that pre-existing database space is insufficient for a newer system
include the following:
• New features and functionality add fields to existing system tables or create new system
tables.
• Table headers increase in size.
• The new system encompasses a higher number of AMPs, each of which contains a copy
of every table header.
• Hash function changes may result in data skewing related to NOPI tables or PPI tables.
The following query shows both the used and available space for each database:
SELECT DatabaseName, SUM(CurrentPerm), SUM(MaxPerm)
FROM DiskSpace
GROUP BY 1
WITH SUM(CurrentPerm), SUM(MaxPerm)
ORDER BY 1;
If SUM(CurrentPerm)/SUM(MaxPerm)>80%, the available free space on the system is
already below the recommended threshold for migration. If the migration involves a hash
function change, the available free space required depends on the size of the tables being
restored and could require more than 20% available free space. It is not just large databases
that can run out of space.
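The 80% threshold above is a ratio of the two sums returned by the query. As a rough illustration only (this is not part of the Teradata tooling, and the database names and figures are hypothetical), the check can be expressed as:

```python
def needs_more_space(current_perm: int, max_perm: int, threshold: float = 0.80) -> bool:
    """Return True when used permspace exceeds the recommended 80% threshold,
    i.e. SUM(CurrentPerm)/SUM(MaxPerm) > 0.80 from the DiskSpace query above."""
    return current_perm / max_perm > threshold

# Hypothetical per-database figures in bytes (CurrentPerm, MaxPerm):
databases = {
    "Sales":   (850_000_000, 1_000_000_000),   # 85% used: below the recommended margin
    "Finance": (400_000_000, 1_000_000_000),   # 40% used: acceptable
}

for name, (current, maximum) in databases.items():
    if needs_more_space(current, maximum):
        print(f"{name}: over 80% used; increase permspace before migration")
```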
If the destination system has insufficient space for a database, the restoration of that
database fails. Additional space must be given to the database on the destination system
before the database can be restored successfully. In addition, other databases included in the
same restoration job that were successfully restored prior to running out of space must be
restored again in order to migrate the statistics, join indexes and hash indexes. Statistics, join
indexes, and hash indexes are processed during the build phase of the restoration process,
and the information is kept in memory. If the restoration job fails to complete, that
information is lost unless the entire restoration job is restarted. For this reason, breaking
user data into multiple smaller archives in preparation for migration is recommended over
creating a single archive containing all databases.
Delete Crashdumps
Unless migrating from one 15.00 Linux system to another, the crashdump files from the
source system are neither compatible with nor relevant to the destination system. Therefore,
migrating these files serves no purpose, and deleting them from the source system before
creating the archive is the easiest course of action.
Join and Hash Indexes
Both the ARC utility and DSA manage saving and rebuilding all join and hash indexes as
part of the archive and restore operations. For further details, see Teradata Archive/Recovery
Utility Reference and Teradata Data Lab User Guide.
If a system running Teradata Database release 14.10 or later is backed up using DSA, join
and hash indexes are likewise programmatically archived and rebuilt. For more information,
see Teradata Data Stream Architecture User Guide.
The ARC utility performs join and hash index maintenance during the build portion of a
restoration job. If the restoration fails before completion, all join index, hash index, and
statistics information is lost unless the entire job is restarted from the beginning.
DSA performs join and hash index maintenance during the post-script phase of a restoration
job, and statistics maintenance during the dictionary phase. As long as the restoration
reaches the data phase of restoration, statistics information is not lost if the job subsequently
fails.
Re-write UDFs, UDTs, Stored Procedures and External Stored Procedures
User-Defined Functions (UDFs), User-Defined Types (UDTs), Stored Procedures and
External Stored Procedures may execute code that is platform-dependent. Such objects may
not recompile or execute correctly when migrated to another platform. UDTs provided by
Teradata are platform-independent.
During the Restore operation, UDFs, UDTs, Stored Procedures and External Stored
Procedures are recompiled automatically. The Restore operation will complete even if there
are compilation failures. However, depending on the failed objects some programs, objects
and possibly entire databases may be unusable until the failed objects can be recompiled
successfully. Teradata recommends that the customer prepare for the migration by copying a
representative sample of the UDTs, UDFs and Stored Procedures to the new platform and
recompiling them prior to the actual migration so that any problems during the migration
can be easily resolved.
Database Query Logging Data Setup and Save
Database Query Logging (DBQL) data is not archived or restored. DBQLRuleTbl and
DBQLRuleCountTbl will be archived/restored as part of database DBC.
If you want to migrate the DBQL logging data to the new system the data must be copied
from the DBQL tables to a user table prior to the archive/restore.
Teradata suggests you take advantage of the Teradata Professional Services offering to set up
standard DBQL rules and logging on your new system.
Statistics Tables
Migrating Statistics Information
Before release 14.0, Teradata Database stored statistics information in columns in the
TVFields and Indexes tables. Migrating to Teradata Database release 14.0 or later converts
statistics information and saves it in system tables created specifically for this type of data.
Statistics Access Rights
Before Teradata Database release 14.0, any user with DROP TABLE and DROP INDEX
access rights could collect statistics. Teradata Database release 14.0 or later requires new
access rights for statistics collection. The database administrator can limit the COLLECT
STATISTICS rights to a subset of users. When statistics information is processed during
migration, a program generates a script that gives the new access rights to any user that
previously had DROP TABLE or DROP INDEX rights. However, this script does not
automatically execute. After the migration, the database administrator can edit the script as
required before manually executing it.
The program that migrates the statistics information and generates the access rights script
for statistics collection is called upgradestats.pl and is located in the PUTTools
directory. In addition to this script, the program generates a number of bteq scripts, which
the database administrator can also manually execute as appropriate. The following log file
example illustrates the generated scripts.
$logfile = "/var/log/upgradestatslog.out";
open_logfile($logfile);    # Open script specific logfile as documented by IE.
$tmp_dir = "/var/opt/teradata/PUTTools/upgradestats_dbs_release/";
$grantstatsrights      = "$tmp_dir/grantstatsrights.bteq";
$showoldstatsvalues    = "$tmp_dir/showoldstatsvalues.bteq";
$showoldstatstables    = "$tmp_dir/showoldstatstables.bteq";
$dropoldstatstables    = "$tmp_dir/dropoldstatstables.bteq";
$dropremainingoldstats = "$tmp_dir/dropremainingoldstats.bteq";
$collstatsvalues       = "$tmp_dir/collstatsvalues.bteq";
$recollectstats        = "$tmp_dir/recollectstats.bteq";
Security Configurations
If you have created a custom security configuration for the source system by editing the
default TdgssUserConfigFile.xml file, the configuration is not automatically migrated
to the new destination system. After migrating data to the freshly installed Teradata
Database on the destination system, only the new default security configuration is available.
To transfer a custom security configuration to the new destination system, you must
complete the following steps:
1. Save the edited TdgssUserConfigFile.xml file from the source system.
2. Replace the TdgssUserConfigFile.xml file on the destination system with the one
saved from the source system.
3. Activate the custom configuration on the destination system.
If row level security is applied in the source system, SYSLIB has to be archived and restored
before and after migration procedures.
Note: Prior to copying row level security tables, all copied constraints must be re-associated
with their respective constraint functions by reassigning the constraint functions to each
constraint.
After the migration, refer to Security Administration to plan and implement your new
security features. This document is available from www.info.teradata.com.
Related Topics
Saving Security Configuration, on page 29
Installing Site Security Configuration, on page 39
Support for Earlier Teradata Tools and Utilities Releases
Teradata Tools and Utilities software permits communication between a Teradata Client
workstation and the Teradata Database system and includes many important utilities. Many
of the Teradata Tools and Utilities products are installed on all Client platforms.
Each new major release of Teradata Database will operate properly with the previous
Teradata Tools and Utilities major release. Upgrades can be phased in as needed after the
migration. You can also choose the old method of installing the new Teradata Tools and
Utilities release before migrating to the new Teradata Database release. This wider support
allows much more flexibility in scheduling resources and purchases.
For a comprehensive look at the supported Tools and Utilities for your release:
1. Go to www.info.teradata.com.
2. Type 3119 in the Publication Product ID field and click Search.
3. Click the link to the Teradata Tools and Utilities Supported Platforms and Product Versions
for your Teradata Database release.
Product Upgrades
It is recommended that any TTU applications installed on the destination system be current
with the Teradata Database release installed on the destination system.
Certain specific software products are very closely connected with the Teradata Database
functionality and the latest releases of these products must be installed before migrating to
the new Teradata Database release. If you are using any of the following software products,
you must upgrade them to the new release level before migrating to the new Teradata
Database release level.
• Open Teradata Backup software
• Wizards
• Visual Explain
• Teradata System Emulation Tool
• Query Director
• Metadata Services
Security Impact
If you choose to continue running the previous Teradata Tools and Utilities release, you
cannot take advantage of new security features available in the new Teradata Database
release.
Internet Connection Needed
You need to use an internet connection during the migration to access online sources of
script files and documentation. The internet connection can be on one of the system nodes,
or it can be on a separate computer system. If you use a separate computer system, you will
need to download files to the separate computer system temporarily. You must then transfer
all downloaded files to the Control Node on the source system.
PUT Package Updates
Before beginning a data migration, you must download and install the PUT packages:
• TDput: The PUT application itself. This must be installed first.
• PUTSRC [optional]: System Readiness Check (SRC). Must be installed separately, using PUT's Install/Upgrade Software operation, after PUT is installed.
• PUTTools: A package of tools designed to assist with system changes. This package contains the Upgrade and Migration Scripts necessary to prepare a system for these activities. The files and scripts in the PUTTools package are installed in /opt/teradata/PUTTools/. This package must be installed using PUT's Install/Upgrade Software operation after PUT is installed.
Obtaining PUT Software
All PUT packages are available from Teradata At Your Service. It is highly recommended
that the latest version of PUT be installed from a download at:
https://tays.teradata.com
Restoring Data to a Smaller System
Most full-system migrations involve destination systems with sufficient space for the
operation. In rare exceptions, or when a disaster-recovery system is not the same
configuration as the production system, the destination system may be smaller than the
source system. In such cases, restoration may fail due to insufficient space. This problem can
sometimes be resolved by giving a failing database more space from DBC database or by
selectively restoring only the most critical databases. For information on using the
DBSIZEFILE option to resize a database during a restore operation, see Teradata Archive/
Recovery Utility Reference.
If the destination system is smaller than the source system, contact Teradata Customer
Services for site-specific instructions to ensure a successful migration.
Workload Management Pre-Migration Tool
On supported Teradata platforms, the Workload Management Pre-migration Tool (known as
the Pre-Migration Tool) enables DBAs to refine the otherwise automatic results of converting
workloads when migrating from SLES10 SP3 to SLES11 SP1. Used prior to migration, the
tool writes output directives to a table called tdwm.premigrate. At upgrade time, TDWMDIP, a
utility within the post_data_restore script, migrates SLES10 SP3 TDWM rule sets into
SLES11 rule sets based on the directives in the tdwm.premigrate table, if any.
Note: Not setting or incorrectly setting workload management values prior to migration may
result in performance problems.
The Pre-Migration Tool allows you to select which rule sets you want to migrate. In this case,
other rule sets are not converted and are not migrated to SLES 11. If you do not use the tool,
TDWMDIP migrates all rule sets. To speed workload migration, delete rule sets that you
have not used and do not intend to use in the future before using the tool and upgrading to
SLES 11.
Note: The ability to control which rule sets are migrated applies only to the following
versions of Teradata Database: 13.10; 14.0 lower than 14.0.4.1; and 14.10 lower than
14.10.1.1. Otherwise, all SLES 10 rule sets and workloads are migrated, regardless of whether
the tool is used.
If you have in-depth understanding of Priority Definition (PD) set configuration and want to
use the Pre-Migration Tool when migrating from a Priority Scheduling Facility (PSF) system,
you must first do the following while still running Teradata Database 13.0 or earlier:
1. Use Priority Scheduler Administrator (PSA) to capture schmon settings in PD sets. For
more information, see Teradata Manager User Guide (B035-2428) and Utilities, Volume 2,
L - Z (B035-1102).
2. Use Teradata Workload Analyzer (TWA) to convert PD sets to rule sets. For more
information, see the Teradata Workload Analyzer User Guide.
Note: The Pre-Migration Tool provides direct access to PSA and TWA.
For further details on workload migration when upgrading from SLES10 SP3 to SLES11 SP1,
see the Workload Pre-Migration User Guide.
Pre-Migration System Inspections
Before you begin the migration, you must run an inspection script on your source system to
identify Teradata Database conditions that are not compatible with your new release. The
inspection scripts locate:
• New Reserved Words that you must change in your data.
• Stored procedures that cannot recompile automatically during the migration.
Migration Timetable
Planning and performing a migration is a complex process requiring many activities to
ensure that all the required software and information has been gathered. All migrations are
performed under the guidance of Teradata Customer Services staff, who work with the
customer to develop a plan for system migration. The key to a successful migration is
preparation.
• Six to eight weeks before migration:
Download all necessary documentation. Install the latest PUTTools package on the system. Run the Pre-Upgrade Preparation script to identify and fix any problems.
• Four weeks before migration:
Prepare the destination system if different from the source system. Install the desired software release, including the latest Teradata Tools and Utilities. Upgrade firmware on all systems to the latest Teradata certified releases.
• One week before migration:
Run SCANDISK and CheckTable on the source system to check the integrity of the database and report problems, such as aborted load operations and tables in PENDINGOP state. Clean up the source system in preparation for archival/restoration. Contact Teradata Customer Services for a Change Control number.
• One day before migration:
Confirm readiness with Change Control. Verify that the latest PUTTools package is on the system and, if not, install the package.
• Start of migration:
Expect the time required to restore the data to be at least twice as long as the time required to archive the data.
CHAPTER 2
Preparing Source Data for Migration
Working with the dbc Password
Due to security concerns related to password exposure during the execution of
post_dbc_restore, the DBA must change the dbc password to a temporary value for the
migration prior to executing post_dbc_restore, and reset the password to its desired value
after the migration has been completed.
Setting Up a Temporary Directory
1 Create a directory on the control node of the source system to store the downloaded
packages and software.
For example:
/var/opt/teradata/ccnumber
Running Pre-Upgrade Preparation Script
Prerequisite:
The latest PUTTools package must be downloaded and installed on all TPA nodes before this
task can be performed.
The pre_upgrade_prep.pl script is executed on the pre-upgraded system to check various
aspects of the pre-upgraded system for viable upgrade to a new release.
Each Teradata Database release has a unique subfolder in the PUTTools directory for the
scripts applicable to that release. For example, the Linux folder for Teradata Database 15.00
would be:
/opt/teradata/PUTTools/td15.xx.xx.xx/preupgrade
1 Create a temporary directory and change directory (cd) to that directory.
2 From a temporary directory, run the command:
perl /opt/Teradata/PUTTools/td15.xx.xx.xx/preupgrade/
pre_upgrade_prep.pl system_name/dbc,dbc_password output_filename
Example:
perl /opt/Teradata/PUTTools/td15.xx.xx.xx/preupgrade/
pre_upgrade_prep.pl tdsysl/dbc,dbc_password prep.out
The report files created by the pre_upgrade_prep.pl script are stored in the local,
temporary directory.
Inspecting Pre-Upgrade Preparation Script Output
The results of the Pre-Upgrade Script are stored in the script output file. Problems must be
resolved before continuing the migration.
Each of the pre-upgrade checks performed by the script writes its results to the file named on
the command line. For easy identification, results for each of the individual checks are
preceded by an output line such as **** Checking for Reserved Words …
There are pre-upgrade checks for each of the following:
• Reserved Words: Use of any reserved words in the user-defined tables and stored procedures.
• DBQL Data: Non-empty tables are reported because DBQL tables are not archived/restored.
• Stored Procedures with No Source Code: Stored procedures without source code cannot be recompiled on the destination system.
• MDS Rows: The MDS table is not archived/restored, so MDS must be resynchronized after migration completes.
• DBC Startup String: Check for a startup string assigned to user DBC.
Solution: Set StartupString to NULL for user DBC before the upgrade because it may prevent logons after the version switch to the new release.
• Identity Columns With Permanent Journal Data: Check for the existence of an Identity Column Table that uses a permanent journal.
Solution: The permanent journal data must be moved to a non-identity column table before the upgrade.
• Triggers on TDWM.RULEBYPASS: Check for triggers on the TDWM.RULEBYPASS table.
Solution: Any triggers on the TDWM.RULEBYPASS table MUST be removed before an upgrade can be performed.
• Orphaned Access Rights: Check for orphaned accessrights rows in DBC.AccessRights.
Solution: Open an incident, assign it to the TGSC DBS Support group, and attach the orphaned rows report file.
• Statistics Access Rights: Check for objects that need STATISTICS accessrights granted.
Solution: Execute the created script, grantstatsrights.bteq, before the upgrade. If the grant rights script is run, it restores all of the default access rights associated with the QSTATS database. If custom access rights are required for this system, apply them manually. When running the script, use the UTF8 character set; otherwise, error 3704 may occur. To run the script, log on and enter:
.SET SESSION CHARSET 'UTF8'
.RUN FILE=grantstatsrights.bteq
• DBC Statistics: Check for statistics on DBC tables which must be re-enabled after the upgrade.
Solution: Execute the script, dbcstats.rpt, after the upgrade to re-enable statistics collection on the DBC tables.
• Storage Profiled: Check if the storage has been profiled.
Solution: The migration type is set to ONE_DIMENSIONAL but storage has not been profiled. The storage must be profiled, or the migration type must be set to TERADATA_TRADITIONAL prior to the upgrade.
• Geospatial: Check for the external UDT (pre-13.0) version of Geospatial, the existence of which prevents installation of the Teradata-supplied (13.0) implementation of Geospatial as part of the migration.
Solution: If you want to use the Teradata implementation of Geospatial, either uninstall the pre-13.0 version before migrating or exclude it when archiving or restoring during migration. In both cases, the Teradata-supplied implementation of Geospatial then installs as part of the migration.
A summary section at the end of the report indicates system readiness for migration based
on identified problems, noting either System READY for upgrade, if problems were not
found, or Cleanup Required before upgrade!!!, if problems were found.
Note: You can ignore any In-Doubt Transaction messages at this time. These transactions are
part of the Replication operation and do not need to be corrected as part of the Pre-Upgrade
Script process. In-Doubt Transaction problems are corrected at another step in the
migration process.
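Because the summary section uses the two fixed markers quoted above, checking readiness can be automated. A hypothetical helper (not part of PUTTools; only the marker strings come from this guide) might look like:

```python
# Markers emitted in the summary section of the pre_upgrade_prep.pl report,
# as described in this guide.
READY = "System READY for upgrade"
CLEANUP = "Cleanup Required before upgrade!!!"

def is_ready(report_text: str) -> bool:
    """Return True only when the report's summary says the system is ready."""
    if CLEANUP in report_text:
        return False
    return READY in report_text
```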
Working With Reserved Words
About Reserved Words
This topic contains important information about reserved words in Teradata.
• Each new release of the Teradata Database adds entries to the reserved words list.
• Words are never dropped from the list.
• The reserved words list contains all previous and currently added words and is part of the
Reserved Words script.
• The complete list of reserved words is published in SQL Reference: Fundamentals and
includes non-reserved and future reserved words.
• A short list of the new reserved words for each database release is in the Release Summary.
This list does not show the non-reserved and future reserved words.
• The check_reserved_words_tpt.bteq script checks for reserved words used by the
TPT utility. Output is contained in the report file named reservedwords_tpt.rpt. If
the customer uses the TPT utility, the output of that script should also be checked.
Checking Reserved Words
One of the pre_upgrade_prep.pl script checks is for reserved word usage in the pre-upgraded
system when targeting an upgrade to the new release. However, the reserved words
list can be checked without executing the pre_upgrade_prep.pl script, by compiling and
installing the SQLRestrictedWords_TBF UDF using PUTTools.
1 Enter the command:
/opt/Teradata/PUTTools/TDrelease/IUMB_scripts/install_udfs.pl
[dbs_version] system/dbc,dbc_password
where TDrelease is td15.00.xx.xx, and dbs_version is optional and specifies the target
release of Teradata Database. If dbs_version is not included, the release is extracted from
TDrelease.
Fixing Reserved Words
Whenever words in the databases and applications are the same as any on the reserved words
list, those words must be changed or placed in quotes before moving to a new version of the
Teradata Database.
1 View the output report file reservedwords.rpt for discovered reserved words.
2 Correct the found words.
For example:
Reserved word: FUNCTION found in the database or application.
Use one of the following methods to correct the problem:
• Change the column name to THEFUNCTION or FUNCTION_
• Replace every occurrence of FUNCTION with "FUNCTION"
Note: The Reserved Words script may not catch all instances of the reserved words
because Client databases are not checked and the names can be obscured through aliases
and embedded SQL statements.
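The two correction methods above (renaming, or double-quoting the identifier) can be sketched as follows. This is a hypothetical illustration only: the sample word list is not the published reserved words list, and the column names are invented.

```python
# Hypothetical sample; the complete reserved words list is published in
# SQL Reference: Fundamentals.
RESERVED_WORDS = {"FUNCTION", "TITLE", "VALUE"}

def fix_identifier(name: str) -> str:
    """Return a migration-safe form of an identifier.

    Reserved words are double-quoted, one of the two corrections the guide
    suggests (the other is renaming, e.g. FUNCTION -> THEFUNCTION or FUNCTION_).
    """
    if name.upper() in RESERVED_WORDS:
        return f'"{name}"'
    return name

columns = ["account_id", "FUNCTION", "balance"]
print([fix_identifier(c) for c in columns])  # FUNCTION is quoted; others pass through
```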
Saving DBQL Log Data
The DBQLRuleTbl and DBQLRuleCountTbl are archived and restored only if database DBC
is migrated but none of the other DBQL tables are archived or restored. To migrate data from
those tables, the tables must be copied into a user database/table. For more information
about saving DBQL log data, see Teradata Database Administration, available from http://
www.info.teradata.com.
Note: Definitions of the DBQL log tables often change between major releases, and the data
in DBQL log tables from a prior release does not get converted to match the data collected
using the definitions in the subsequent release.
All of the DBQL rules that were present on the source system are saved and restored, so the same
information is collected on the destination system.
It may be helpful to take advantage of the Teradata Professional Services offering to set up
standard DBQL rules and logging on the destination system.
The output from the pre_upgrade_prep.pl script includes a dbql_out.rpt file.
This file details how many rows currently exist in each of the DBQL tables, as illustrated in
the following table.
• DBQLExplainTbl: 152478
• DBQLObjTbl: 3078221
• DBQLogTbl: 219991
• DBQLRuleTbl: 1
• DBQLSqlTbl: 220024
• DBQLStepTbl: 1073967
• DBQLSummaryTbl: 0
Referential Integrity Revalidation on LOB Tables
Referential integrity is the term used to define the keys and checks that ensure that related
values and indexes can be maintained between tables within the database. These
relationships are also known as foreign key references. Referential integrity is a connection
between multiple tables such that changes to one table cause changes in the other table.
After a migration where data rows may have been redistributed to different AMPs, the
Referential Indexes (RIs) must be revalidated before the tables with referential integrity can
be accessed. Establishment of a foreign key reference creates a Referential Integrity error
table. Because of the way table names are constructed, any data rows in the RI error table
contain information that is valid only on that system. If the RI error table is migrated to a
different system, any attempt to revalidate referential integrity on the new system fails. If an
RI error table is migrated to a different system, the table should be emptied.
Referential integrity on LOB tables should be dropped prior to archival. Otherwise, the
rehashlobs.pl script fails because it cannot drop the rows on the original table involved in
a reference.
Fixing Stored Procedures
The migration process automatically recompiles stored procedures if the corresponding
source code exists. Running the pre_upgrade_prep.pl script generates the
sp_nospllist.txt report, which lists stored procedures that were not saved with their
source code and must, therefore, be recreated.
Note: Stored procedures created with the NOSPL option (no source code) do not
automatically recompile until you re-create and save them without the NOSPL option.
Stored procedures need to be recompiled only for major upgrades.
1 View the sp_nospllist.txt report.
Note: The report provides stored procedure names in ASCII text and in hexint forms.
2 Recreate each listed stored procedure and save it with its source code.
Disabling MDS Flag
Meta Data Services (MDS) is used to track changes to the Teradata Database Data
Dictionary. Normally, changes to the Data Dictionary are made using SQL Data Definition
Language (DDL) statements that MDS recognizes and reports to the MDS system. During
migrations, significant changes are made to the Data Dictionary using non-SQL DDL
statements, such as the sysinit and Restore commands, which MDS does not recognize.
In such cases, the MDS becomes out of date with the Teradata Database, and you must
resynchronize the MDS system.
To ascertain whether MDS is enabled on the system, display the dbscontrol record,
and then review the value for flag 38. If MDS is enabled, it must be disabled before starting
the migration and re-enabled after completing the migration. After re-enabling MDS, you
must resynchronize MDS and Teradata Database again.
Notice: This task must be performed after archiving data, but before performing a sysinit.
1 Open the Database Window (DBW) on your remote workstation using the following
command:
# xdbw -display Workstation_IP_Address:0.0 &
2 Open the Supervisor window and, in the command line, start the DBSControl utility
using:
start dbscontrol
3 In the command line, enter:
display general
The returned dbscontrol record includes flag 38 to indicate MDS activity status.
4 Proceed based on whether MDS is enabled:
• If flag number 38 is set to TRUE, indicating that MDS is enabled, disable MDS by
entering:
modify general 38 = FALSE write quit
Note: Following migration in this scenario, you must repeat this entire procedure; but
at this step, enable MDS by entering modify general 38 = TRUE write quit.
• If flag number 38 is set to FALSE, indicating that MDS is disabled, close the
DBSControl utility window and the Supervisor window.
Fixing User DBC Default Roles
If default roles are assigned to User DBC, the roles are automatically changed to NULL
during the upgrade. The report file dbc_roles.rpt lists those roles.
Note: DBC must have full access rights for DBC prior to archiving/restoring DBC. If DBC
does not have full access rights for DBC, then those rights need to be granted prior to
performing the archive. If the source system is not available, then once DBC has been
restored, full access rights have to be explicitly granted for each object, view, and macro
under DBC before the post_dbc_restore script and DIP will run successfully.
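When the grants must be issued object by object, generating a BTEQ input file from a list of objects avoids typing each statement. A sketch under stated assumptions: the object names below are placeholders, and the GRANT ALL form is illustrative rather than the exact privilege set your site requires:

```shell
# Hypothetical DBC objects needing explicit grants after the restore.
OBJECTS="DBC.Tables DBC.Columns DBC.Indices"

GRANTS=/tmp/grant_dbc.bteq
: > "$GRANTS"
for obj in $OBJECTS; do
    # One GRANT statement per object, to be run as user DBC.
    echo "GRANT ALL ON $obj TO DBC;" >> "$GRANTS"
done
cat "$GRANTS"   # review the generated statements before feeding the file to bteq
```

The generated file can then be submitted with bteq while logged on as DBC.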
Saving Security Configuration
If you have created a custom security configuration on the source system by editing the
TdgssUserConfigFile.xml file, the file must be saved in preparation for later
installation and activation of the custom configuration on the destination system. If the
default security configuration is instead being used, this task can be omitted from the
migration preparation process because the destination system automatically includes the
new default security configuration.
1 Locate the TdgssUserConfigFile.xml file on the Control Node in the
/opt/teradata/tdat/tdgss/site directory.
2 Make a copy of the file and save it on an appropriate form of removable media.
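The two steps above can be sketched as a small script. The backup destination below is a hypothetical staging path standing in for your removable media, and the checksum step is an added safeguard, not part of the documented procedure:

```shell
# Documented site directory for the custom security configuration.
SRC=/opt/teradata/tdat/tdgss/site/TdgssUserConfigFile.xml
# Hypothetical staging area standing in for removable media.
DEST=/tmp/tdgss_backup
mkdir -p "$DEST"

if [ -f "$SRC" ]; then
    cp -p "$SRC" "$DEST/"
    md5sum "$SRC" "$DEST/TdgssUserConfigFile.xml"   # the two checksums must match
    echo saved > "$DEST/status"
else
    echo "No TdgssUserConfigFile.xml found; default security configuration in use."
    echo default-config > "$DEST/status"
fi
cat "$DEST/status"
```

If the file is absent, the default security configuration is in use and nothing needs saving.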
Opening a Database Window
1 On your Workstation PC, start the X-Windows server program.
2 In the Command Prompt window, get the IP address of your Workstation PC with the
command:
> ipconfig
The response contains the IP address:
IP Address . . . . . . xxx.xxx.xxx.xxx
3 Open the Database Window (DBW) on your remote workstation using the following
command:
xdbw -display IP_address:0 &
Use the IP Address of your workstation. The Database Window opens in the X-Window
server.
About SCANDISK and CheckTable
SCANDISK and CheckTable are both file system utilities designed to check the integrity of
the database and report problems.
You must verify the database before the migration because:
1. Some problems may cause the migration to fail or cause database failures on the new
release.
2. Some problems can be easily corrected on the source system but impossible to correct on
the destination system.
3. If the database already has problems it may cause any new problems introduced by the
migration to go undetected or cause effort to be wasted looking for a migration problem
that doesn't exist.
If the system has other users logged on, execute these utilities at low priority.
Running SCANDISK (Pre-Upgrade)
Running SCANDISK is an optional step. However, if you haven't run SCANDISK in the past
couple of months, it is recommended that you do so now.
1 Coordinate with the system administrator to determine an appropriate setting
(priority=low/medium/high/rush) at which to run SCANDISK.
Running SCANDISK at a lower priority reduces the performance impact of the utility
running while users are on the system.
2 In the Teradata Database Window, open the Supervisor window.
3 In the Supervisor window, open the Ferret window:
start ferret
4 In the Ferret window, start a log file of SCANDISK output:
a Select File > Logging.
The Select Log File dialog box appears.
b Enter the path and file name for the logging file at Filter and Selection.
The logging file should be /home/support/ccnumber/pre_upg_scandisk.txt.
c Click OK.
5 Set SCANDISK to run at a specific priority on the Ferret command line:
set priority=low
6 Start SCANDISK with the following command on the Ferret command line:
scandisk
SCANDISK advises that the test will be started on all Vprocs.
7 When asked Do you wish to continue based on this scope? (Y/N), enter Y.
SCANDISK takes hours to check larger disk arrays.
• SCANDISK progress can be checked at any time using the inquire command.
• Stop SCANDISK at any time with the abort command.
8 Review the results and address any errors or issues.
Note: If necessary, open a support incident.
9 Run SCANDISK again, repeating this step until SCANDISK reports no errors.
Example:
8 of 8 vprocs responded with no messages or errors.
date time Scandisk has completed.
10 Exit logging by choosing File > Logging in the Ferret window and clearing the Logging
option.
11 Exit the Ferret application window:
exit
12 Leave the Supervisor window open.
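Step 8's review of the SCANDISK log can be partly automated by checking the log for the clean-completion message shown in step 9. This sketch works on a saved log excerpt; the log content below is the example from step 9 and the path follows the naming in step 4:

```shell
# Illustrative SCANDISK log excerpt (matches the clean-run example in step 9).
LOG=/tmp/pre_upg_scandisk.txt
cat > "$LOG" <<'EOF'
8 of 8 vprocs responded with no messages or errors.
date time Scandisk has completed.
EOF

# A clean run reports every vproc responding with no messages or errors.
if grep -q "responded with no messages or errors" "$LOG"; then
    echo "SCANDISK clean"
else
    echo "SCANDISK reported problems -- review $LOG and repeat step 9"
fi
```

A grep like this is only a first pass; read the full log before declaring the scan clean.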
Running CheckTable (Pre-Upgrade)
Running CheckTable is an optional step. However, if you haven't run CheckTable in the past
couple of months, it is recommended that you do so now.
1 Coordinate with the system administrator to determine the priority setting to use for
CheckTable, as follows:
• l = Low
• m = Medium (default)
• h = High
• r = Rush
Note: It is recommended that you run CheckTable level 2 on DBC and level 1 on the
user tables. This ensures that CheckTable will complete in a reasonable amount of time.
Additional information on this issue is located in Tech Alert NTA 2234 on the
https://tays.teradata.com Web site.
2 In the Teradata Database Window, open the Supervisor window.
3 In the Supervisor window, start CheckTable:
start checktable
The CheckTable window opens. Do not change the log file.
4 In the CheckTable window, start a log file of CheckTable output:
a Select File > Logging.
The Select Log File dialog box appears.
b Enter the path and file name for the logging file at Filter and Selection.
The logging file should be /home/support/ccnumber/preupg_checktable_out.txt.
c Click OK.
5 At the CheckTable command prompt, start a thorough check of DBC system tables:
check dbc at level two with no error limit skiplocks priority=m
error only;
CheckTable commands require a semi-colon at the end.
Note: Use the IN PARALLEL option if there are no users logged on.
6 Inspect the CheckTable log file for any problems.
If CheckTable finds problems with any tables, correct each problem. If necessary, contact
the Teradata Global Support Center for assistance. Then repeat the test to check each
table skipped due to lock contention. Continue fixing problems until there are no further
issues.
7 Check all tables:
check all tables exclude dbc at level one with no error limit
skiplocks priority=m error only;
8 Review the results and address any errors or issues.
Note: If necessary, open a support incident.
9 Repeat the test to check each table and correct problems until CheckTable completes with
no errors.
Example:
52 table(s) checked.
31 fallback table(s) checked.
21 non-fallback table(s) checked.
1 table(s) bypassed because they are Permanent Journals.
1 table(s) bypassed due to logical rows of a Join or Hash index.
0 table(s) failed the check
0 Dictionary error(s) were found
Check completed at time
10 Quit CheckTable:
quit;
11 Exit logging by selecting File > Logging in the CheckTable window to uncheck Logging.
12 Close the Database Window if desired.
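The pass/fail counts in the step 9 summary can be pulled out of the CheckTable log mechanically. This sketch parses a saved log excerpt; the excerpt below reuses the example output from step 9, and the path follows the naming in step 4:

```shell
# Illustrative CheckTable summary (from the example in step 9).
CT_LOG=/tmp/preupg_checktable_out.txt
cat > "$CT_LOG" <<'EOF'
52 table(s) checked.
0 table(s) failed the check
0 Dictionary error(s) were found
EOF

# Pull the failure and dictionary-error counts; both must be 0 before migrating.
FAILED=$(awk '/table\(s\) failed the check/ {print $1}' "$CT_LOG")
DICT_ERRS=$(awk '/Dictionary error\(s\) were found/ {print $1}' "$CT_LOG")
echo "failed=$FAILED dictionary_errors=$DICT_ERRS"
```

Non-zero counts mean the test must be repeated after the problems are corrected, as step 9 describes.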
Finishing Pending Operations
Due to changes in worktable definitions and other considerations, a load operation cannot
begin on one release and complete after the migration on a different release. Therefore, there
can be no pending data transfer jobs during the migration.
Many tables that are in a pending state are the result of aborted load operations and will
never complete loading. These tables must be deleted, with the agreement of the database
administrator. Any active load operations should be allowed to finish.
Coordinate with the site Database Administrator to arrange finishing all pending operations
and to prevent any new operations until after the migration.
1 Run CheckTable using the following command:
check all tables at level pendingop skiplocks priority=h;
2 Drop any pending tables using the BTEQ command:
DROP TABLE databasename.tablename;
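Once the DBA has confirmed which pending tables are abandoned, the DROP statements for step 2 can be generated from the list rather than typed individually. The table names below are placeholders for the tables reported by the pendingop check:

```shell
# Hypothetical pending tables reported by "check ... at level pendingop".
PENDING="sales.stg_load1 hr.stg_load2"

DROPS=/tmp/drop_pending.bteq
: > "$DROPS"
for t in $PENDING; do
    echo "DROP TABLE $t;" >> "$DROPS"
done
cat "$DROPS"   # submit via BTEQ only after the DBA approves each table
```

Keeping the statements in a file gives the DBA a reviewable record before anything is dropped.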
CHAPTER 3
Archiving Source Data
Data Archival Basics
Archival Mechanisms
For migrations from Linux systems running Teradata Database releases earlier than 14.10 or
from non-Linux systems, the ARC utility in conjunction with BAR application software
must be used to archive data; otherwise, the DSA or the ARC utility can be used.
Note: The ARC utility and BAR software can co-exist with DSA on the same BAR hardware.
However, BAR software restores only from BAR archives, and DSA restores only from DSA
archives.
Archival Process Overview
When migrating a full system, you must first archive database DBC followed by all user
databases and objects from the source system.
Some databases that are created by DIP scripts or replaced on a newer release should not be
archived or restored. When archiving user data, you must exclude databases DBC and
TD_SYSFNLIB. It is also recommended that you exclude CRASHDUMPS, SYSLIB, and
SYSSPATIAL.
SYSLIB database contains the GetTimeZoneDisplacement and algorithmic compression
UDFs that may be referenced during the restoration process. If user databases contain
algorithmic compressed data, you must restore SYSLIB database after restoring DBC
database and running the post_dbc_restore script and before restoring any other user
databases. You can create a separate archive containing only SYSLIB database for this
purpose. After all other user data is restored, the DIP scripts ensure that the SYSLIB
database is current.
Archive Size
If the destination system has insufficient space for a database, the restoration of that
database fails. Additional space must be given to the database on the destination system
before the database can be restored successfully. In addition, other databases included in the
same restoration job that were successfully restored prior to running out of space must be
restored again in order to migrate the statistics, join indexes and hash indexes. Statistics, join
indexes, and hash indexes are processed during the build phase of the restoration process,
and the information is kept in memory. If the restoration job fails to complete, that
information is lost unless the entire restoration job is restarted. For this reason, breaking
user data into multiple smaller archives in preparation for migration is recommended over
creating a single archive containing all databases.
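Splitting user data into several smaller archives can start from a simple grouping of the database list. This sketch partitions a list into fixed-size groups, one group per archive job; the database names and group size are placeholders:

```shell
# Hypothetical user databases, split into fixed-size groups so one failed
# restore does not force re-restoring everything.
DBS="finance hr sales marketing inventory ops"
GROUP_SIZE=2

i=0; group=1
for db in $DBS; do
    echo "archive group $group: $db"
    i=$((i + 1))
    if [ "$i" -eq "$GROUP_SIZE" ]; then
        i=0; group=$((group + 1))
    fi
done > /tmp/archive_groups.txt
cat /tmp/archive_groups.txt
```

Each resulting group would then get its own ARCHIVE job, so a restore failure in one group leaves the others untouched.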
Source Data Preservation
It is a good practice to keep the source system operational with the original data until you are
certain that the destination system and data are running correctly.
If you are upgrading your source system to become your destination system, you should
preserve the archived data media until you are certain that the destination system is running
correctly.
Data Archival Using DSA
The BAR Operations portlet in Teradata Viewpoint allows you to archive source data using
DSA.
Before using DSA for data archival and restoration, you must set up third-party components
and use the Viewpoint Monitored Systems portlet to add and enable the applicable systems,
making them available in the BAR Setup portlet. For information, see Teradata Viewpoint
User Guide. In addition, you must configure the systems, nodes, backup solutions, and target
groups in the BAR Setup portlet to make them available in the BAR Operations portlet. For
information, see Teradata Data Stream Architecture User Guide.
Backing Up DBC and User Data
You must create two backup jobs. One includes only the DBC database and one includes all
the databases under DBC and excludes the DBC database automatically.
1 From the BAR Operations Saved Jobs view, create a backup job that saves only the DBC
database:
a Click New Job.
b Select the Backup job type, then click OK.
c In the New Backup Job view, enter a job name, such as Backup-DBC-Only.
d Select a Source System from the list.
e Enter the user credentials.
f Select a Target Group from the list.
g [Optional] Add a Description.
h Select the DBC database in the Objects tab.
i [Optional] To verify the parent and objects selected, click the Selection Summary tab.
Note: Size information is not available for DBC backup jobs. N/A displays as the size
value for DBC backup jobs.
j [Optional] To adjust the job settings, click the Job Settings tab.
k Click Save.
l Click the menu icon on Backup-DBC-Only and select Run.
2 From the BAR Operations Saved Jobs view, create a backup job that saves the databases
under DBC:
a Click New Job.
b Select the Backup job type, then click OK.
c In the New Backup Job view, enter a job name, such as Backup-DBC-All.
d Select a Source System from the list.
e Enter the user credentials.
f Select a Target Group from the list.
g Select the DBC database in the Objects tab.
h Click the settings icon next to the DBC database.
The Settings dialog box appears.
i Check the Include all children databases and users box.
j Click OK.
k Click Save.
l Click the menu icon on Backup-DBC-All and select Run.
3 Save the names of both backup job save sets.
The backup jobs must complete with a COMPLETED_SUCCESSFULLY or WARNING
status before you can create a restore job.
Data Archival Using ARC
With the ARC utility, you use the command line to archive data.
Each site has established procedures for archiving data. Use the procedures established as
standard for your site.
To successfully run the required post-restoration scripts, you must create separate archive
files for the database DBC tables, journal tables, and user data tables when using ARC. If
journaling is enabled on any database, archive the journal files.
Note: You must archive database DBC in a single stream. Archives created using the
multi-stream feature cannot be restored. For more information, see Teradata
Archive/Recovery Utility Reference.
The time required for archival can vary widely depending on the method and media used
and the size of the database.
Archiving Source Data Using ARC Archive Command
The following script examples show archiving using the ARCHIVE command.
1 Archive database DBC by submitting a job with the following ARC commands:
LOGON tdp/dbc,dbc_password;
ARCHIVE DATA TABLES (DBC),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
2 Archive the journal tables by submitting a job with the following ARC commands:
LOGON tdp/dbc,dbc_password;
ARCHIVE JOURNAL TABLES (DBC)ALL,EXCLUDE(DBC),
NONEMPTYDATABASES,
RELEASE LOCK,
FILE=JNLARCH;
LOGOFF;
3 Archive user databases:
• If you are certain that the destination system has adequate space for all user data,
create a single archive containing all user databases by submitting a job with the
following ARC commands:
LOGON tdp/dbc,dbc_password;
ARCHIVE DATA TABLES (DBC) ALL, EXCLUDE(DBC), (TD_SYSFNLIB),
(CRASHDUMPS), (SYSLIB), (SYSSPATIAL),
RELEASE LOCK,
FILE=USRARCH;
LOGOFF;
• If you are not certain that the destination system has adequate space for all user data,
create multiple smaller archives instead of archiving all user databases in the same
archive.
Note: Because this option guards against having to restore all user data more than
once if the destination system has inadequate space, it is recommended over archiving
all user databases in the same archive. In any case, be sure to exclude databases DBC
and TD_SYSFNLIB when archiving user data. It is also recommended that you
exclude CRASHDUMPS, SYSLIB, and SYSSPATIAL.
4 Archive database SYSLIB:
LOGON tdp/dbc,dbc_password;
ARCHIVE DATA TABLES (SYSLIB) ALL,
RELEASE LOCK,
FILE=SYSLIBONLY;
LOGOFF;
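The ARC command sequences above are normally kept in script files and submitted to the archive utility. This sketch only writes the database DBC archive commands from step 1 to a file for review; the logon string and output path are placeholders, and actually running the job follows your site's standard BAR procedure:

```shell
# Write the DBC archive commands from step 1 to a script file.
# The logon string and FILE name are placeholders -- substitute site values.
ARC_SCRIPT=/tmp/arc_dbc.arc
cat > "$ARC_SCRIPT" <<'EOF'
LOGON tdp/dbc,dbc_password;
ARCHIVE DATA TABLES (DBC),
RELEASE LOCK,
FILE=ARCHIVE;
LOGOFF;
EOF
cat "$ARC_SCRIPT"   # review before submitting through your site's ARC procedure
```

Keeping each job (DBC, journals, user data, SYSLIB) in its own script file also documents exactly what each archive contains.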
CHAPTER 4
Preparing the Destination System
Initializing the Destination System
Before migrating data to the destination system and while no users are logged on, initialize
the system as follows.
1 On the destination system, do the following:
a Use the Vproc Manager utility to verify that all virtual processors (vprocs) are online.
b Install the correct Teradata Database release and Teradata Tools and Utilities release
according to the procedures in the corresponding installation guides.
2 On client platforms, install the correct Teradata Tools and Utilities release according to
the procedures in the corresponding installation guide.
3 On the destination system, do the following:
a Execute the sysinit command.
Note: Execution of this command is a critical step in the migration process. It is the
only time when row format and hash function can be changed, and is also the only
time when Japanese language support can be enabled. Consult with Teradata
Customer Services for assistance as necessary.
b Run the DIPMIG script (which runs the DIPERRS, DIPDB, DIPVIEWS, DIPVIEWSV,
and DIPBAR scripts in the required order).
Note: Do not run the DIPALL script after executing the sysinit command. Run only
the DIPMIG script. If this script is not run, or the DIPALL script is run, DBC database
restoration will fail and the sysinit process must be repeated on the destination
system.
4 On the destination system, enable only DBC logons from the Supervisor window in the
DBW console.
Note: Enabling only DBC logons on the destination system before restoring database
DBC reduces outside logon attempts during restoration.
Installing Site Security Configuration
If you are transferring custom security configurations from the source system, you must
replace the TdgssUserConfigFile.xml file on the destination system with the edited file
that you saved from the source system, and then update the configuration GDO to activate
the custom configuration on the destination system. If the default security configuration is
instead being used, these steps can be omitted from the migration preparation process
because the destination system automatically includes the new default security configuration.
1 Delete the existing default TdgssUserConfigFile.xml file on the destination system.
This file is located in the following directory on the Control Node:
/opt/teradata/tdat/tdgss/site
2 Transfer the saved file from the source system into the same directory on the destination
system.
3 Use the tdgssconfig utility to update the tdgssconfig.gdo file as follows:
/opt/teradata/tdgss/bin/run_tdgssconfig
The tdgssconfig utility generates a new tdgssconfig.gdo file that contains the
properties of the TdgssUserConfigFile.xml file.
4 Restart the Teradata Database to activate the GDO by typing:
tpareset -f "new tdgssconfig.gdo"
The security configuration that was on the source system is now on the destination
system.
The tdlocaledef Utility Output Format
The Locale Definition utility (tdlocaledef) command-line utility allows you to define or
change how Teradata Database formats numeric, date, time, and currency output.
The utility converts a specification for data formatting (SDF) text file into a Teradata
Database globally distributed object (GDO), an internal binary format file that stores
configuration information. The GDO is made available simultaneously to all nodes of an
MPP system. The utility can also convert the text file to a local non-distributed binary file
that can be converted back to text in order to validate the syntax and formatting.
Note: Format changes take effect only after Teradata Database is restarted, and do not affect
columns that were created prior to the restart.
The tdlocaledef utility uses the tdlocaledef.txt SDF file, which is located in the
/etc/opt/teradata/tdconfig directory.
Note: After the data has been migrated, verify that the tdlocaledef.txt file on the
destination system is the same as on the source system.
1 From a command prompt, start the Locale Definition utility:
tdlocaledef
Note: Options for this command can be found in the Utilities - Volume 2 L-Z Guide. An
example of a typical format change using the tdlocaledef utility can be found in
Knowledge Article KAP231E5E, How do you change the default date from 2Y to 4Y?.
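The note above says to verify that the destination system's tdlocaledef.txt matches the source system's copy after migration. A diff of staged copies is enough for that check; the staging paths and file contents below are illustrative assumptions standing in for the real files:

```shell
# Staged copies of tdlocaledef.txt from both systems (paths are hypothetical).
mkdir -p /tmp/sdf_check
printf 'TimeZoneString {"American Eastern"}\n' > /tmp/sdf_check/source_tdlocaledef.txt
printf 'TimeZoneString {"American Eastern"}\n' > /tmp/sdf_check/dest_tdlocaledef.txt

# Identical files mean the SDF settings carried over correctly.
if diff -q /tmp/sdf_check/source_tdlocaledef.txt /tmp/sdf_check/dest_tdlocaledef.txt >/dev/null; then
    echo "SDF files match"
else
    echo "SDF files differ -- reconcile before restarting Teradata Database"
fi
```

Remember that any SDF change takes effect only after a restart, per the note above.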
Recording DBC.Hosts Information
The system DBC.Hosts table contains the Logical Host Ids, Host Names and Default
Character Sets for each network and host connection. This information is specific to the
destination system environment and was configured during the installation of the Teradata
Database.
During the migration, the DBC.Hosts table of the source system is transferred to the
destination system, overwriting the destination system configuration data. The source
system settings may not be correct for the destination system environment and may need to
be updated.
Assuming that the destination system Hosts table is set up correctly, the information must be
saved before migrating the data from the other system which will overwrite it. The HostsInfo
view must be used to display the information in a readable format.
1 Log in to the destination system Control Node over the network.
2 Start BTEQ and log in.
3 Find the contents of the current DBC.HostsInfo:
SELECT * FROM dbc.hostsinfo;
The response is similar to:
LogicalHostId HostName DefaultCharSet
------------- -------- ------------------
          101 MVS1     KANJIEBCDIC5035_0I
         1025 COP      KANJIEUC_0U
This shows the Logical Host ID, Host Name, and Default Character Set for each host
system configured in the destination system environment. COP is the network
connection you logged in through for this session. In this example, the Default Character
Set used for the network sessions is KANJIEUC_0U. This example also shows there is a
channel-connected host, MVS1, connected to the destination system.
Notice: You must use the DBC.HostsInfo view for the information to display correctly.
4 Save the DBC.Hosts query results to a text file for later reference.
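Steps 2 through 4 can be captured in a BTEQ script that exports the HostsInfo view straight to a text file. This sketch only generates the script; the logon string and output path are placeholders to replace with site values:

```shell
# BTEQ script to save DBC.HostsInfo to a text file before migration.
# The logon string and export path are placeholders -- substitute site values.
BTQ=/tmp/save_hostsinfo.bteq
cat > "$BTQ" <<'EOF'
.LOGON tdp/dbc,dbc_password
.EXPORT REPORT FILE=/tmp/hostsinfo_before_migration.txt
SELECT * FROM dbc.hostsinfo;
.EXPORT RESET
.LOGOFF
.QUIT
EOF
cat "$BTQ"   # run with: bteq < /tmp/save_hostsinfo.bteq
```

Querying the HostsInfo view, not the underlying table, keeps the output readable, as the notice above requires.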
Copying Site-Defined Client Character Sets
If Teradata Database uses any Site-Defined Client Character Sets, you must copy some files
manually to a new location. Site-Defined Client Character Sets are described in International
Character Set Support, Knowledge Article WS401-0912200301054. Search for the Knowledge
Article number in the Search Repositories box on the main screen of Teradata @ Your
Service at https://tays.teradata.com.
1 Ask your system administrator or database administrator if there are any Site-Defined
Client Character Sets on the system and whether they need to be saved for future use.
2 Inspect the files in the /opt/teradata/tdat/tdbms/starting_version/etc
directory (where starting_version is the previous Teradata Database version) to see if
there are any files that begin with "map."
3 Now inspect the files in the /opt/teradata/tdat/tdbms/new_version/etc
directory (where new_version is the new Teradata Database version) to see if there are
any files that begin with map.
4 If any of the map files in the /opt/teradata/tdat/tdbms/starting_version/etc directory
are not present in the /opt/teradata/tdat/tdbms/new_version/etc directory, copy the
map files into the new directory.
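The compare-and-copy in steps 2 through 4 is a natural loop. The sketch below demonstrates it with temporary demo directories standing in for the old and new tdbms etc directories; point OLD and NEW at the real starting_version and new_version paths on your system:

```shell
# Demo directories standing in for the old and new tdbms etc directories.
OLD=/tmp/tdbms_old_etc
NEW=/tmp/tdbms_new_etc
mkdir -p "$OLD" "$NEW"
touch "$OLD/map_site1" "$OLD/map_site2" "$NEW/map_site1"   # sample map files

# Copy any "map" file present in the old directory but missing from the new one.
for f in "$OLD"/map*; do
    base=$(basename "$f")
    if [ ! -e "$NEW/$base" ]; then
        cp -p "$f" "$NEW/"
        echo "copied $base"
    fi
done
```

Files already present in the new directory are left untouched, so rerunning the loop is harmless.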
CHAPTER 5
Restoring Data
Data Restoration Basics
For migrations from Linux systems running Teradata Database releases earlier than 14.10 or
from non-Linux systems, the ARC utility in conjunction with BAR application software
must be used to restore data; otherwise, the DSA or the ARC utility can be used.
Note: The ARC utility and BAR software can co-exist with DSA on the same BAR hardware.
However, BAR software restores only from BAR archives, and DSA restores only from DSA
archives.
Each site has established procedures for restoring data. Use the procedures established as
standard for your site.
To properly run the post-restore scripts, you need to have stored the database DBC
tables, journal tables, and user data tables in separate archive files. In addition, you need to
have archived database DBC in a single stream. Any existing archives created using
the multi-stream feature cannot be restored.
Some databases and objects cannot be restored on the destination system unless the related
feature has been enabled on the destination system prior to restoration. Temporal and
Columnar Partitioning as well as any other feature that must be purchased separately fall
into this category.
Time Zone Setting Management
You must disable Time Zone settings on the destination system before restoring DBC
database. Otherwise, the restoration fails. After restoring DBC database and user data, and
then running the necessary DIP script, you can re-enable Time Zone settings on the
destination system, as preferred.
As a first step in the restoration process, check the Time Zone setting status on the
destination system.
Checking Time Zone Setting Status
Before restoring DBC database, use the dbscontrol utility to check whether the Time Zone is
set on the destination system.
1 From a command prompt on the destination system, run the dbscontrol utility to display
general settings:
#dbscontrol
> display general
18.System TimeZoneString = American Eastern
If the return value for System TimeZoneString is anything except Not Set, as in the
above example, disable the Time Zone setting before restoring DBC database.
Disabling the Time Zone Setting
If the Time Zone setting is enabled on the destination system, disable the setting before
restoring DBC database.
1 On the destination system, locate the tdlocaledef.txt file that was used to enable the
Time Zone setting, and access that directory. For example:
# locate tdlocaledef.txt
/opt/teradata/tdat/tdbms/15.00.00.00/etc/tdlocaledef.txt
# cd /opt/teradata/tdat/tdbms/15.00.00.00/etc/
2 Save a copy of the current tdlocaledef.txt file. For example:
# cp tdlocaledef.txt tdlocaledef.txt.orig
3 In tdlocaledef.txt, remove the value for the TimeZoneString property, leaving
only the quotation marks. For example:
# vi tdlocaledef.txt
// System Time Zone string
TimeZoneString {""}
:wq!
4 Run the tdlocaledef utility on tdlocaledef.txt:
# /usr/tdbms/bin/tdlocaledef -input tdlocaledef.txt -output new
5 Issue the tpareset command to disable the Time Zone setting:
# tpareset -f remove TimeZoneString
6 Run the dbscontrol utility to confirm that the Time Zone setting is disabled:
# dbscontrol
> display general
18. System TimeZone String = Not Set
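The edit in step 3, emptying the TimeZoneString value while keeping the quotation marks, can be done with sed instead of vi. The sketch below works on a sample file under /tmp; the real file lives under the versioned tdbms etc directory shown in step 1:

```shell
# Sample tdlocaledef.txt fragment; the real file is under .../tdbms/<version>/etc.
TDL=/tmp/tdlocaledef.txt
printf 'TimeZoneString {"American Eastern"}\n' > "$TDL"
cp "$TDL" "$TDL.orig"    # keep the original for re-enabling the setting later

# Empty the TimeZoneString value, leaving only the quotation marks (step 3).
sed -i 's/TimeZoneString {"[^"]*"}/TimeZoneString {""}/' "$TDL"
cat "$TDL"
```

After the edit, the tdlocaledef and tpareset steps above still have to be run for the change to take effect.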
Enabling the Time Zone Setting
If you disabled the Time Zone setting on the destination system before restoring DBC
database, enable the setting after running the necessary DIP script following DBC database
and user data restoration.
1 On the destination system, access the directory where you saved the copy of
tdlocaledef.txt. For example:
# cd /opt/teradata/tdat/tdbms/15.00.00.00/etc/
2 Run the tdlocaledef utility on the saved copy of tdlocaledef.txt. For example:
# /usr/tdbms/bin/tdlocaledef -input tdlocaledef.txt.orig -output new
3 Issue the tpareset command to enable the Time Zone setting:
# tpareset -f set TimeZoneString
4 Run the dbscontrol utility to confirm that the Time Zone setting is enabled. For example:
# dbscontrol
> display general
18. System TimeZone String = American Eastern
Data Restoration Using DSA
Before using DSA for data archival and restoration, you must set up third-party components
and use the Viewpoint Monitored Systems portlet to add and enable the applicable systems,
making them available in the BAR Setup portlet. For information, see Teradata Viewpoint
User Guide. In addition, you must configure the systems, nodes, backup solutions, and target
groups in the BAR Setup portlet to make them available in the BAR Operations portlet. For
information, see Teradata Data Stream Architecture User Guide.
Restoring DBC and User Data
You must create two restore jobs from the two backup job save sets. One includes only the
DBC database and one includes all the databases under DBC and excludes DBC.
The backup jobs must have successfully completed to create restore jobs from the save sets.
Note: Before restoring the DBC database, the target system must be reinitialized by running
SYSINIT.
1 On the target system, run the DBS Database Initializing Program (DIP) and execute the
following:
• For DBS 15.0 and higher, run DIPMIG script, which runs DIPERRS, DIPDB,
DIPVIEWS, DIPVIEWSV, and DIPBAR.
• For DBS 14.10, run DIPERRS, DIPVIEWS, and DIPBAR scripts (1, 4, and 21).
2 Check the Time Zone setting status on the target system, and disable the setting if it is
enabled.
3 From the BAR Setup portlet, check the activation status of the system that has been
reinitialized by SYSINIT and perform one of the following:
• If the target system is configured and enabled in the BAR Setup portlet, click
Deactivate System to deactivate the target system, and then click Update system
selector for JMS Messages.
• If the target system is not configured in the BAR Setup portlet, add the system and
click Activate System before the restore can proceed.
4 On the target system, start DSMain from the Database Window (DBW) console
supervisor screen by entering start bardsmain.
5 After DSMain starts, activate the system in the BAR Setup portlet.
6 On the target system, from the Database Window (DBW) console supervisor screen, type
enable dbc logons to enable logons for the DBC user only.
7 From the BAR Operations Saved Jobs view, create a restore job from the backup job save
set that saved only the DBC database:
a Click the menu icon on the backup job created for DBC only, and select Create Restore Job.
b Enter a Job Name, such as Restore-DBC-Only.
c Select a Destination System from the list.
d When prompted, enter login credentials for the current DBC user and password for
the target DBS.
e Select a Target Group from the list.
f Click Job Settings > Set Credentials to enter the credentials for the DBC user and
password of the source system at the time the backup save set was generated.
g Click Save.
h Click the menu icon on Restore-DBC-Only and select Run.
i If there are any errors, follow the instructions in the log file to correct the problem and
run the post dbc script again.
The post restore script output log files are saved in
/var/opt/teradata/tdtemp/post_restore_dbs version.
Note: After the DBC restore is complete, the DBC password is set to the source system's
DBC password.
8 From the BAR Operations Saved Jobs view, create a SYSLIB database restore job from
the backup job save set of the databases under DBC, excluding DBC:
a Click the menu icon on the backup job created for the databases under DBC, and
select Create Restore Job.
b Enter a Job Name, such as Restore-SYSLIB.
46
Migration Guide, Release 15.0
Chapter 5 Restoring Data
Data Restoration Using DSA
c Select a Destination System from the list.
d When prompted, enter login credentials for the current DBC user and password for
the target DBS.
e Select a Target Group from the list.
f On the Objects tab, clear the top check box, then expand the tree and select the check
box for only SYSLIB.
g Click Save.
h Click the menu icon on Restore-SYSLIB and select Run.
9 From the BAR Operations Saved Jobs view, create a restore job from the backup job save
set of the databases under DBC, excluding DBC:
a Click the menu icon on the backup job for the databases under DBC, and select
Create Restore Job.
b Enter a Job Name, such as Restore-DBC-All.
c Select a Destination System from the list.
d When prompted, enter login credentials for the current DBC user and password for
the target DBS.
e Select a Target Group from the list.
f For DBS 15.00 or later, clear the check box for TD_SERVER_DB in the Objects tab.
TD_SERVER_DB has some dependencies that must be met before it can be restored.
g Click Save.
h Click the menu icon on Restore-DBC-All and select Run.
i If there are any errors, follow the instructions in the log file to correct the problem
and run the post data script again.
The post restore script output log files are saved in
/var/opt/teradata/tdtemp/post_restore_dbs version.
10 For DBS 15.00 or later, from the BAR Operations Saved Jobs view, create a
TD_SERVER_DB restore job from the backup job save set of the databases under DBC,
excluding DBC:
a Click on the backup job for the databases under DBC, and select Create Restore Job.
b Enter a Job Name, such as Restore-TD_SERVER_DB.
c Select a Destination System from the list.
d When prompted, enter login credentials for the current DBC user and password for
the target DBS.
e Select a Target Group from the list.
f On the Objects tab, clear the top check box, then expand the tree and select the check box for only TD_SERVER_DB.
g Click Save.
h Click on Restore-TD_SERVER_DB and select Run.
11 On the target system, run the DBS Database Initialization Program (DIP) and execute the DIPALL script.
12 If you disabled the Time Zone setting on the target system, enable it.
13 On the target system, from the Database Window (DBW) console's Supervisor screen, enable logons for all users.
Data Restoration Using ARC
Working with the dbc Password
Because of security concerns related to password exposure during execution of post_dbc_restore, the DBA must change the dbc password to a temporary value before running post_dbc_restore, and reset the password to its desired value after the migration is complete.
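As a sketch of the final reset step (the password value below is a placeholder, not a value from this guide), the password can be changed from a BTEQ session logged on as dbc:

```sql
-- Placeholder value; substitute your site's desired dbc password.
MODIFY USER dbc AS PASSWORD = new_dbc_password;
```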
RESTORE Command
RESTORE and COPY are the two ARC commands that move data from an archive to a Teradata Database system. However, only the RESTORE command can be used for full-system migrations, as described in this document. For information on using the COPY command to perform a partial data migration without restoring database DBC, see Teradata Archive/Recovery Utility Reference.
Migration Rules
The following rules apply when restoring data as part of migrating a full system:
• The destination system must be empty before beginning the restoration process. Prepare
the destination system by executing only the sysinit command and the DIPMIG script, in
that order.
• Perform the restoration on the destination system by completing the following steps in the
order indicated:
1. Check the Time Zone setting status, and disable the setting if it is enabled.
2. Restore database DBC from the source system archive.
3. Restore journal tables from the source system archive.
4. Run the post_dbc_restore script.
5. Restore database SYSLIB from the source system archive.
6. Restore user data from the source system archive(s).
Note: When restoring user data, exclude DBC database and TD_SYSFNLIB. It is also
recommended that you exclude CRASHDUMPS, SYSLIB, and SYSSPATIAL. If
migrating from Teradata Database release 15.0 or later, also exclude
TD_SERVER_DB.
7. If you excluded TD_SERVER_DB when restoring user data, restore TD_SERVER_DB
from the source system archive individually.
8. Run the post_data_restore script.
Note: The post_data_restore script executes DIP scripts as appropriate. Some of
these DIP scripts may require a different logon. If the current password security
restrictions would prevent logon with the hard coded password, the
post_data_restore script temporarily modifies the password security restrictions
to enable the logon. After the DIP script completes, the post_data_restore script
issues a tpareset to reset the password security restrictions to the original user
settings.
9. If you disabled the Time Zone setting, enable it.
• Migrate user data with the RESTORE command only once, and only after the first restoration of database DBC and the first execution of the post_dbc_restore script. Using the COPY command, or restoring user data more than once, prevents automatic execution of the upgradestats.pl script, which migrates the statistics for user databases. In that case, you must run the upgradestats.pl script manually.
Logon Information for Scripts
All of the migration scripts use a similar command format. The command requires the following information:
• The system_name
• The DBC system password
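As an illustration of this format (the system name and password below are placeholders, not values from this guide), a script invocation can be assembled like this:

```shell
# Assemble the common migration-script command line.
# SYSTEM_NAME and DBC_PASSWORD are illustrative placeholders only.
SCRIPT_DIR=/opt/teradata/PUTTools/td15.00.xx.xx/IUMB_scripts
SYSTEM_NAME=system1
DBC_PASSWORD=temppass
CMD="$SCRIPT_DIR/post_dbc_restore $SYSTEM_NAME/dbc,$DBC_PASSWORD"
echo "$CMD"
```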
Restoring Database DBC
Restoring database DBC is the first step in making a copy of the complete archived source
system on the destination system. Restore database DBC only once, and run the post_dbc_restore script after doing so.
Note: Prior to archiving and restoring database DBC, user DBC must have full access rights
for database DBC. If the source system is not available, such as when restoring from tape,
then full access rights must be explicitly granted for each object, view, and macro in DBC
after database DBC restoration in order for the post_dbc_restore and post-restoration
DIP scripts to run cleanly.
If you have permanent journals, restoring database DBC converts the permanent journal
dictionary to the new format, but does not convert data in the permanent journal.
The following procedure and ARC-based scripts are common to many sites. If your site has
different restoration procedures or job scripts, use those instead to complete the process.
1 If not previously done as part of initializing the destination system, complete the
following steps:
a Execute the sysinit command.
Note: Execution of this command is a critical step in the migration process. It is the
only time when row format and hash function can be changed, and is also the only
time when Japanese language support can be enabled. Consult with Teradata
Customer Services for assistance as necessary.
b Open a Supervisor window, and then enter the following command:
start dip
c Run the DIPMIG script (which runs the DIPERRS, DIPDB, DIPVIEWS, DIPVIEWSV,
and DIPBAR scripts in the required order).
Note: Do not run the DIPALL script after executing the sysinit command. Run only
the DIPMIG script. If this script is not run, or the DIPALL script is run, DBC database
restoration will fail and the sysinit process must be repeated on the destination
system.
d In the Supervisor window of the DBC console, enable only DBC logons by entering the following command:
enable dbc logons
Note: Enabling only DBC logons on the destination system before restoring database
DBC reduces outside logon attempts during restoration.
2 Restore database DBC.
The following example illustrates code for restoring database DBC:
LOGON tdp/dbc,dbc_password;
DELETE DATABASE (DBC) ALL, EXCLUDE (DBC);
RESTORE DATA TABLES (DBC), RELEASE LOCK, FILE=ARCHIVE;
3 If you have permanent journals, restore the journal tables using the following command:
RESTORE DICTIONARY TABLES ALL FROM ARCHIVE,
RELEASE LOCK,
FILE=JNLARCH;
Note: Journal data is not restored.
4 Run the post_dbc_restore script.
Running the post_dbc_restore Script
The post_dbc_restore script updates any table IDs for access rights and compiles system
UDTs. If the script is not run prior to restoring user databases, the upgradestats.pl script
does not run automatically and must instead be run manually. This is also the case if user
databases are restored via the COPY command (which should not be used when performing
a full system migration).
1 Log on to the Control Node as user root.
2 Run the post_dbc_restore script.
/opt/teradata/PUTTools/td15.00.xx.xx/IUMB_scripts/post_dbc_restore host_name/dbc,dbcpassword
The script outputs logs in /var/opt/teradata/PUTTools/<script>_<td_release>.
3 Use the tail command to monitor the output of the post_dbc_restore script:
tail -f /var/opt/teradata/migfiles/post_dbc_restore_15.00.00.00/post_dbc_restore.<timestamp>
Upon script completion, a list of errors appears.
Note: Running the post_dbc_restore script on SLES11 after DBC restore of an archive
originated from SLES10 or earlier results in script failure and a message that PSF
conversion is necessary. Using the Migrate PSF Setting option of the Workload
Management Pre-Migration Tool prior to migration allows you to convert existing PD
sets to SLES11 workloads allocated the appropriate level of resources.
If the source system is no longer available (for example, due to a system re-staging with
backup, re-staging, sysinit, or restore), the required PSF configuration can no longer be
acquired using the psfmigration.pl script, so this information is lost.
4 Correct any reported errors, and then run the script again. Repeat this process until no
errors result.
Notice: Do not continue with the migration until the script runs without any errors.
Restoring SYSLIB Database
If user databases contain algorithmic compressed data, you must restore SYSLIB database
after restoring DBC database and running the post_dbc_restore script and before
restoring any other user databases. If you created a separate archive containing only SYSLIB
database, restore the database from that archive. Otherwise, restore SYSLIB database from
the archive containing all user data.
1 Submit a job with the following ARC command:
RESTORE DATA TABLES (SYSLIB),
RELEASE LOCK,
FILES=ARCHIVE;
Setting Session Mode
This task sets the dbscontrol Session Mode flag to Teradata mode for the rest of the
migration.
1 In the DBW Supervisor Window, open the dbscontrol utility by typing:
start dbscontrol
2 In the dbscontrol window, display the General settings by typing:
display general
3 Set the Session Mode to 0 (Teradata mode) by typing:
modify general 8=0
4 Exit the dbscontrol utility by typing:
quit
Migrating User Data Tables
About Migrating User Data Tables
The time required to restore user data is greater than the time required to archive the data for
several reasons, including the following:
• For all tables with fallback protection, two rows (primary and fallback) are written during
the restoration process, compared to only the primary row being written during the
archival process.
• If the destination system is configured differently from the source system, or the hash function has changed, rows may initially be sent to the wrong AMP and must then be rehashed and redistributed to the correct AMP.
• A change in row format between the source and destination systems requires rebuilding
each row field by field during restoration. Although restoring a system from an aligned
row format to a packed row format may require as much as 5 to 7 percent less space on
the destination system, the restoration process may take 2 or 3 times as long as the
archival process.
To more accurately estimate the time required for the restoration process, any changes
between the source and destination systems should be discussed with Teradata Customer
Services staff as part of the Change Control process prior to initiating migration.
Excluded Databases
When restoring user data, exclude DBC database and TD_SYSFNLIB. It is also
recommended that you exclude CRASHDUMPS, SYSLIB, and SYSSPATIAL. If migrating
from Teradata Database release 15.0 or later, also exclude TD_SERVER_DB.
If migrating from Teradata Database 15.0 or later, restore TD_SERVER_DB in a separate job
after restoring other user data.
User Data Restoration Commands
The following table provides some examples of the commands that might be included in your
restore job scripts. These are only the ARC commands and are not complete scripts.
Note: You must revalidate referential constraints.
Example: If you have permanent journals, re-create the journal dictionaries using these commands, which exclude database DBC. The journal data is not restored.
RESTORE DICTIONARY TABLES ALL FROM ARCHIVE,
RELEASE LOCK,
FILE=JNLARCH;

Example: Commands to restore all user databases except those specifically excluded. These commands restore user databases, secondary indexes, and fallback.
Note: If migrating from a Teradata Database release earlier than 15.0, do not include TD_SERVER_DB among the excluded databases.
RESTORE DATA TABLES (DBC) ALL,
EXCLUDE (DBC), (TD_SYSFNLIB), (CRASHDUMPS),
(SYSLIB), (SYSSPATIAL), (TD_SERVER_DB),
RELEASE LOCK,
FILES=ARCHIVE;

Example: Release the host utility locks with this command.
RELEASE LOCK (DBC) ALL;

Example: You must revalidate referential constraints with these commands.
REVALIDATE REFERENCES FOR (DBC) ALL,
EXCLUDE (DBC), (TD_SYSFNLIB),
RELEASE LOCK;
Restore Warning Messages
Using the RESTORE command may result in warning messages that you should review. Some of these messages may require subsequent action to resolve the issue. The following messages related to statistics may require recollecting statistics on the table and column(s) specified in the SQL information contained in the ARC log.
• ARC0727 CREATE Stat Collection failed with error %s: %s.
Explanation: An error was encountered while trying to recreate a Stat Collection. The
error will include the error number and text returned by the Teradata system.
For Whom: User
Notes: Non-Fatal Error
Remedy: Either resolve the Teradata error that was displayed in the ARC log and
reattempt the restore, or recollect statistics on the table and column(s) specified in the
SQL information provided in the ARC log.
• ARC0728 Stat Collection can not be recreated during a cluster
restore. Stat Collections should be restored from this archive
after all cluster restore and build steps have been completed.
Explanation: During the dictionary part of a cluster restore, ARC cannot recreate stat
collection information, since the data has not yet been restored to the tables. ARC must
defer recreating the stat collection until after the data for all tables has been restored.
For Whom: User
Notes: Non-Fatal Error
Remedy: After the cluster restore and build steps have been completed, the user needs to
manually recollect the statistics.
• ARC0729 SHOW Stat Collection query failed with error %s: %s.
Explanation: An error was encountered while trying to query the definition for a stat
collection. The error will include the error number and text returned by the Teradata
system.
For Whom: User
Notes: Non-Fatal Error
Remedy: Either resolve the Teradata error that was displayed in the ARC log and
reattempt the restore, or recollect statistics on the table and column(s) specified in the
SQL information provided in the ARC log.
• ARC0730 Stat Collection query failed with error %s: %s.
Explanation: An error was encountered while trying to query the list of SC indexes from
the StatsV view in database DBC. The error will include the error number and text
returned by the Teradata system.
For Whom: User
Notes: Non-Fatal Error
Remedy: Either resolve the Teradata error that was displayed in the ARC log, or recollect
statistics on the table and column(s) specified in the SQL information provided in the
ARC log.
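Where a remedy calls for recollecting statistics, the statement has the general form shown below. The database, table, and column names here are hypothetical; use the objects reported in the SQL information in the ARC log.

```sql
-- Hypothetical object names; substitute those reported in the ARC log.
COLLECT STATISTICS ON mydb.mytable COLUMN (mycolumn);
```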
Running the post_data_restore Script
The post_data_restore script must be executed after restoring all user data. This script
performs a variety of tasks necessary to prepare the system for use on the Teradata Database
release, including DIP script execution and stored procedure recompilation. You must review
the results of the post_data_restore script to verify that all tasks completed without error.
1 Log in on the Control Node as user root.
2 Run the post_data_restore script.
# /opt/teradata/PUTTools/td15.00.xx.xx/IUMB_scripts/post_data_restore DBSsystem_name/dbc,dbcpassword
Example:
# /opt/teradata/PUTTools/td15.00.xx.xx/IUMB_scripts/post_data_restore system1/dbc,mypass
3 When the script finishes, examine the error summary at the end of the output log file,
where each error detected during the script execution is detailed.
Output log files can be found in /var/opt/teradata/PUTTools/<script>_<td_release>.
4 After correcting the errors, run the script again until there are no errors that require
correction.
Notice: Do not continue with the migration until the script runs without any errors that require
correction.
Restoring a Single Table or Database
If you need to migrate a single table or database, run the post_data_restore script with
the -t option. The post_data_restore script performs required actions such as
revalidating PPI tables and compiling stored procedures and UDFs on all database objects in
the system. Running the script with the-t option limits the required actions to only the
single database or table being restored. In addition, the -t option suppresses production of
harmless errors that do not require resolution.
Do not run DIP scripts after migrating a single table or database.
1 Log in on the Control Node as user root.
2 Run the post_data_restore script with the -t option.
# /opt/teradata/PUTTools/td15.00.xx.xx/IUMB_scripts/post_data_restore -t tablename/databasename,dbcpassword
Example:
# /opt/teradata/PUTTools/td15.00.xx.xx/IUMB_scripts/post_data_restore -t mytable/mydatabase,mypass
Migrating LOB Tables to Systems with Different Hash Functions
If the destination system has a different hash function than the source system, all of the data
rows are redistributed during the migration. Prior to Teradata Database Release 14.0, LOB
tables were not restored if the destination system had a different hash function than the
source system. In Teradata Database Release 14.0 and higher, LOB tables are restored across different hash functions; however, a restored table cannot be used on the destination system until the rehashlobs.pl script has been executed, which rebuilds and redistributes the LOB table correctly.
The rehashlobs.pl script (located in the PUTTools directory) functions as follows:
1. Create a temporary NoPI table that has the same field listing.
2. Insert Select the data from the restored table (primary) into the temporary NoPI table.
3. Delete all data rows from the primary table. (This ensures duplicates are removed.)
4. Insert Select the data from the temporary table into the primary table.
5. Drop the temporary table.
For large tables with LOBs, additional changes are also made to the NoPI tables to change the hash bucket unique identifier.
Note: The script is not automatically executed because it is not required on systems with the
same hash functions or systems without LOB tables.
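The five steps above can be sketched in SQL roughly as follows. This is an illustration only: the database and table names are hypothetical, and the actual statements issued by rehashlobs.pl may differ.

```sql
-- Illustrative sketch of the rebuild sequence performed by rehashlobs.pl.
CREATE TABLE mydb.lob_tmp AS mydb.lob_table WITH NO DATA NO PRIMARY INDEX;  -- step 1
INSERT INTO mydb.lob_tmp SELECT * FROM mydb.lob_table;                      -- step 2
DELETE FROM mydb.lob_table ALL;                                             -- step 3
INSERT INTO mydb.lob_table SELECT * FROM mydb.lob_tmp;                      -- step 4 (rows rehashed on insert)
DROP TABLE mydb.lob_tmp;                                                    -- step 5
```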
CHAPTER 6
Setting Up Teradata Database
About Modifying Security Settings
Teradata Database 15.00 initially starts with security defaults that are compatible with the
security defaults in previous Teradata Database releases and with the newly installed
Teradata Tools and Utilities programs. Do not modify any of the security settings in either
Teradata Database or Teradata Tools and Utilities at this time. After the migration is
complete, you can review the new security capabilities available for Release 15.00 and revise
your security setup if required. See Security Administration for full details.
Inspecting Stored Procedures
If stored procedures exist on the system, the automatic recompile process generates several
output log files for inspection.
1 In /var/opt/teradata/PUTTools/<script>_<td_release>, inspect each of the
following output log files, and take the identified corrective action(s), if necessary.
Recompile Output File: spnorecomp.txt
Purpose: Lists all the stored procedures that failed to recompile for any reason.
Corrective Action: Review the spconvlog.out file to identify the cause of failure.

Recompile Output File: spconvlog.out
Purpose: Lists the specific cause of error for each stored procedure.
Corrective Action:
a. Correct any errors in the source program language (SPL) that may be preventing the recompile.
b. Manually recompile the stored procedure.

Recompile Output File: mig_spnorecomp.txt
Purpose: Lists all the stored procedures created on a different platform that failed to recompile for any reason.
Corrective Action: Review the mig_spconvlog.out file to identify the cause of failure.

Recompile Output File: mig_spconvlog.out
Purpose: Lists the specific cause of error for each stored procedure created on a different platform.
Corrective Action:
a. Correct any errors in the SPL that may be preventing the recompile.
b. Manually recompile the stored procedure.

Recompile Output File: sp_nospllist.txt
Purpose: Lists all the stored procedures that could not recompile because no SPL was stored with the procedure.
Corrective Action: Re-create the stored procedure and store the SPL text with the stored procedure.
Inspecting Java External Stored Procedures
The post_data_restore script recompiles Java external stored procedures (JXSPs) and
generates several output files for inspection.
1 In /var/opt/teradata/PUTTools/<script>_<td_release>, inspect each of the
following output log files, and take the identified corrective action(s), if necessary.
Recompile Output File: jxspconv.pl.<date>_<timestamp>.log
Purpose: Records the execution of the post_data_restore script, including creation of files related to errors encountered, if any. For cleanup operations, this file shows which JXSPs and JARs have been removed.
Note: If JXSPs do not exist on the system, this file reflects the following message: "No Teradata Java XSP/UDF or Jars...Returning from jxspconv.pl"
Corrective Action: Review generated error files, if any.

Recompile Output File: mig_jxspteradata.bteq
Purpose: Contains the SQL statements that actually recompile all of the JXSPs found.
Corrective Action: None.

Recompile Output File: mig_jxspteradata.log
Purpose: Lists the results of the bteq script execution.
Corrective Action: Review this file for causes of recompile failures.
Inspecting UDFs and UDTs
Recompilation of user-defined functions (UDFs) and user-defined types (UDTs) generates output files for inspection.
1 In /var/opt/teradata/PUTTools/<script>_<td_release>, inspect each of the
following output log files to verify that the UDFs and UDTs recompiled properly.
• udfalter.pl.<date>_<timestamp>.log
• udtalter.pl.<date>_<timestamp>.log
Note: If no UDFs or UDTs exist on the system, these files contain the following line:
Query completed. No rows found.
Setting Up DBQL Rules Table
This task is optional.
Database Query Logging (DBQL) data is not automatically migrated from the source to the
destination system. However, if DBQL logging data was manually copied from the DBQL
tables to user tables prior to archiving the source system, the data can be accessed after
migration by re-establishing DBC.DBQL Rules table on the destination system.
1 Re-establish the DBC.DBQL Rules table using one of the following methods:
• Run the bteq script that was created by the pre_upgrade_prep.pl script with the
BEGIN QUERY LOGGING statements.
• Issue the BEGIN QUERY LOGGING statements manually.
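A BEGIN QUERY LOGGING statement has the general form shown below. The options and scope here are illustrative only; use the rules captured for your site by the pre_upgrade_prep.pl script.

```sql
-- Illustrative rule only; restore the logging rules recorded for your site.
BEGIN QUERY LOGGING WITH SQL ON ALL;
```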
Verifying DBC.Hosts Configuration
If the new contents of the DBC.Hosts table are different from the original data, the data on
the destination system must be changed back to the original data.
1 Repeat the Recording DBC.Hosts Information task.
2 Compare the restored contents with the original contents recorded earlier.
3 If the contents differ, change them back to the original data.
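One way to capture the restored contents for comparison (assuming you recorded the original data with a similar query) is:

```sql
-- Capture the current contents of DBC.Hosts for comparison with the recorded data.
SELECT * FROM DBC.Hosts ORDER BY 1;
```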
Related Topics
Recording DBC.Hosts Information, on page 41
Running SCANDISK (Post-Upgrade)
Running SCANDISK is an optional step. However, if you have not run SCANDISK in the past couple of months, it is recommended that you do so now.
1 Coordinate with the system administrator to determine an appropriate setting
(priority=low/medium/high/rush) at which to run SCANDISK.
Running SCANDISK at a lower priority reduces the performance impact of the utility
running while users are on the system.
2 In the Teradata Database Window, open the Supervisor window.
3 In the Supervisor window, open the Ferret window:
start ferret
4 In the Ferret window, start a log file of SCANDISK output:
a Select File > Logging.
The Select Log File dialog box appears.
b Enter the path and file name for the logging file at Filter and Selection.
The logging file should be /home/support/ccnumber/post_upg_scandisk.txt.
c Click OK.
5 Set SCANDISK to run at a specific priority on the Ferret command line:
set priority=low
6 Start SCANDISK with the following command on the Ferret command line:
scandisk
SCANDISK advises that the test will be started on all Vprocs.
7 When asked Do you wish to continue based on this scope? (Y/N), enter Y.
SCANDISK can take hours to check larger disk arrays.
• SCANDISK progress can be checked at any time using the inquire command.
• Stop SCANDISK at any time with the abort command.
8 Review the results and address any errors or issues.
Note: If necessary, open a support incident.
9 Run SCANDISK again, repeating this step until SCANDISK gives a report similar to the
following:
8 of 8 vprocs responded with no messages or errors.
date time Scandisk has completed.
10 Exit logging by choosing File > Logging in the Ferret window and clearing the Logging
option.
11 Exit the Ferret application window:
exit
12 Leave the Supervisor window open.
Running CheckTable (Post Upgrade)
Running CheckTable is an optional step. However, if you have not run CheckTable in the past couple of months, it is recommended that you do so now.
1 Coordinate with the System Administrator to determine the priority setting to use for
CheckTable, as follows:
• l = Low
• m = Medium (default)
• h = High
• r = Rush
2 In the Teradata Database Window, open the Supervisor window.
3 In the Supervisor window, start CheckTable:
start checktable
The CheckTable window opens. Do not change the log file.
4 In the CheckTable window, start a log file of CheckTable output:
a Select File > Logging.
The Select Log File dialog box appears.
b Enter the path and file name for the logging file at Filter and Selection.
The logging file should be /home/support/ccnumber/postupg_checktable_out.txt.
c Click OK.
5 At the CheckTable command prompt, start a thorough check of all system tables with the following command:
check all tables at level one with no error limit in parallel error only;
CheckTable commands require a semicolon at the end.
Note: Use the IN PARALLEL option if no users are logged on.
6 Inspect the CheckTable log file for any problems.
If CheckTable finds problems with any tables, correct each problem. If necessary, contact
the Teradata Global Support Center for assistance. Then repeat the test to check each
table skipped due to lock contention. Continue fixing problems until there are no further
issues.
7 Review the results and address any errors or issues.
Note: If necessary, open a support incident.
8 Quit CheckTable:
quit;
9 Exit logging by selecting File > Logging in the CheckTable window to uncheck Logging.
10 Close the Database Window if desired.
Re-Synchronizing MDS and DBS
1 Re-synchronize the Teradata Meta Data Services (MDS) with the Teradata Database
system.
See the Teradata Meta Data Services documentation for instructions.
Enabling Logons
1 Open the Supervisor window.
2 Check the status line in the Database Window (DBW) to see if logons are enabled.
3 If not, then in the Supervisor window, type:
enable logons
4 Verify the status line in the DBW has changed to Logons are enabled.
5 Close the Supervisor window.