EM408
Backup and Recovery
of SQL Anywhere
Tips and Techniques
Reg Domaratzki
Sustaining Engineering
iAnywhere Solutions
[email protected]
EM408 - Backup and Recovery of
SQL Anywhere : Tips &
Techniques
Points of Persistent Data
Automatic Recovery
Backing Up Your Database
Backup Considerations in a Replicating
Environment
Recovery from Media Failure
Backup Strategies
Points of Persistent Data
Unlike some other database management systems, ASA relies on the operating system to manage the files associated with maintaining data:
Main Database File
DBSpaces
Transaction Log
Mirror Transaction Log
All of these files are read and written to
disk based on the page size specified
on the dbinit command line
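For example, a page size can be chosen when the database is created; the 4 KB value and file name below are only illustrative, not taken from this presentation:
dbinit -p 4096 c:\em408.db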
Main Database File
The name of the main database file is
specified on the dbinit command line
It may or may not contain the full path to
the database file
The main database file generally
contains all the table pages and index
pages associated with the database
DBSpaces
A DBSpace is another database file that also
keeps table pages and index pages
There are two main reasons that DBSpaces are used
Performance - Putting index pages and table pages on separate
devices can often speed up transactions that require index
balancing
2GB File Limit - On FAT file systems, ASA will only support a file
up to 2GB in size
The path name specified for the location of the
DBSpace must be consistent if you move the
database to another machine
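As a sketch, a dbspace is created with the CREATE DBSPACE statement; the dbspace name and path here are only an example:
CREATE DBSPACE em408_idx AS 'e:\\em408_idx.db';
-- new tables can then be placed in it with the IN clause of CREATE TABLE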
The Transaction Log
The transaction log is stored in a separate file
and is also specified on the dbinit command
line
All changes to the database are stored in the
transaction log in the order that they occur
The transaction log should reside on a different
device than the database file
By default, the transaction log is put on the same
device and in the same directory as the
database—this does not protect against media
failure
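As a sketch, the transaction log location can be specified when the database is created, or changed later with the dblog utility; treat the dblog switch below as an assumption to confirm against your version's documentation:
dbinit -t d:\em408.log c:\em408.db
dblog -t d:\em408.log c:\em408.db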
Using a Transaction Log Mirror
An identical copy of the transaction log
Maintained at the same time as the
transaction log
Every time a database change is written
to the transaction log, it is also written
to the transaction log mirror file
By default, a mirrored transaction log is
not used
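A minimal sketch of enabling a mirror, either at creation time or on an existing database with dblog; the -m switch and the paths are assumptions to verify in the utility documentation:
dbinit -t d:\em408.log -m e:\em408.mlg c:\em408.db
dblog -m e:\em408.mlg c:\em408.db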
Why Use a Transaction Log
Mirror?
Media failure on the device where the
transaction log resides can result in you losing
your current transaction log
Because changes are written to the transaction
log before they are written to the main
database file, you can lose your most recent
data if your transaction log is lost
By mirroring your transaction log to a different
device, you protect your most recent data in
the case of media failure
Because there are no strong I/O guarantees for network devices, we cannot guarantee recoverability of the database if a transaction log resides on a network device
Validating the Transaction Log on
Database Startup
The server checks that the transaction log and
its mirror are identical by carrying out a full
comparison of the two files on startup
If the server finds that the transaction log and
the mirror are identical up to the end of the
shorter of the two files, the remainder of the
longer file is copied into the shorter file
If the check finds that the log and the mirror are
different in the body of the shorter of the two,
one of the two files is corrupt and the engine
will shut down
Where are we?
Points of Persistent Data
Automatic Recovery
Backing Up Your Database
Backup Considerations in a Replicating
Environment
Recovery from Media Failure
Backup Strategies
Automatic Recovery
In order to ensure database recovery in
the case of a system failure, ASA keeps
two other logs internally in addition to
the transaction log
The Checkpoint Log
The Rollback Log
The checkpoint log
A checkpoint is an event that causes all dirty
pages in memory to be written to disk
A database file is composed of pages
Before a page is updated (made dirty), a copy of
the original is always made
The copied pages are placed in the checkpoint log
Following a checkpoint, the checkpoint log is
deleted
The checkpoint log is physically located within
the main database file
The rollback log
Also stored in the main database file
There is a separate rollback log for each
connection
While a transaction is in progress, its rollback log contains the statements needed to undo the transaction in case a ROLLBACK command is executed
The rollback log for a given connection is deleted
once a COMMIT or ROLLBACK is executed
Steps to recover from a system
failure
Adaptive Server Anywhere automatically takes three
steps to recover from a system failure:
Restore all pages to the most recent checkpoint, using the checkpoint
log
Apply any changes made between the checkpoint and the system
failure. These changes are in the transaction log
Roll back all uncommitted transactions, using the rollback logs
Step 3 may take a long time if there are long
uncommitted transactions that have already done a
great deal of work since the last checkpoint
Where are we?
Points of Persistent Data
Automatic Recovery
Backing Up Your Database
Backup Considerations in a Replicating
Environment
Recovery from Media Failure
Backup Strategies
Backing up your Database
Backup Terminology
Validating your Database
Full Backups
Incremental Backups
Backup Terminology
Full backup
Makes a copy of the database file and (optionally) a copy
of the transaction log
Incremental backup
Makes a copy of only the transaction log
Online backups
Backups can be made without stopping the server
Offline backups
Database files are copied while the database server is not
running
Backup Terminology
Client-side Backup
An external process connects to the database engine and copies the
files to an alternate location
Server-side Backups
The database engine itself copies the files to an alternate location
Image Backups
Copies the database and/or transaction log to another directory
Archive Backups
Creates a single file holding all required backup information
Can be a single file or a tape drive device name
Check the validity of the
database
Before doing a full backup, it is a good idea to
verify that the database file is not corrupt
using dbvalid
By default, all dbvalid does is to scan all the
rows in all the tables and ensure that an index
entry exists for each row
You can increase the amount of work done by
dbvalid by adding additional options
WITH INDEX CHECK (-fi) will make sure that each entry in every
index has a corresponding row in the table
WITH DATA CHECK (-fd) will make sure that data in a row that
may spill onto additional pages actually exists
WITH FULL CHECK (-f) does an index check and a data check
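A minimal sketch of a pre-backup validation run; the connection parameters are only an example:
dbvalid -c "eng=asa8;dbn=db1;uid=dba;pwd=sql"
dbvalid -fi -c "eng=asa8;dbn=db1;uid=dba;pwd=sql"
Check dbvalid's output (and return code) for errors before starting the full backup.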
Archive Backups
Instead of backing up the database file and
transaction log to another directory, all
database files can be backed up into a single
file that can be written directly to tape
Only one file can be stored on each tape
Meant for backing up very large databases
Performing an offline full backup
An offline backup can be made without using ASA by
physically copying the database and log file to an
alternate location
You can also do a client-side full backup using the
dbbackup command line utility and specifying the
database file name in the connection parameters
It is significantly faster to use the operating system
to copy the files if you are performing an offline
backup
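A sketch of an offline full backup with plain OS commands, assuming the file layout used later in this session (database on c:, log on d:, mirror on e:, backup directory f:\) and a cleanly shut down engine:
COPY c:\em408.db f:\ /Y
COPY d:\em408.log f:\ /Y
COPY e:\em408.mlg f:\ /Y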
Performing an online full backup
A client-side online full backup can be made
using the dbbackup command line utility and
specifying the server name and database name
in the connection parameters
dbbackup -c "eng=asa8;dbn=db1;uid=dba;pwd=sql" c:\backup
A server-side online full backup can be performed
using the BACKUP command when connected to
the database
BACKUP DATABASE DIRECTORY 'c:\backup';
Note that the directory ‘c:\backup’ in a client-side
backup is relative to where dbbackup is run from,
and in a server-side backup, the directory is
relative to the database engine
Performing an offline incremental
backup
If the database is not running, you can simply
copy the transaction log to your alternate
location
You can also do a client-side incremental backup
using the dbbackup command line utility and
specifying the database file name in the
connection parameters
Again, it is faster to use the operating system
copy commands to perform an offline backup
Performing an online incremental
backup
A client-side online incremental backup can be
made using the dbbackup command line utility
dbbackup -t -c "eng=asa8;dbn=db1;uid=dba;pwd=sql" c:\backup
A server-side online incremental backup
can be performed using the BACKUP
command
BACKUP DATABASE DIRECTORY 'c:\backup' TRANSACTION LOG ONLY;
Managing the transaction log
Using operating system commands
If the database engine was shut down cleanly, and the
database engine isn’t running, you can delete or rename
the current transaction log and transaction log mirror
with no side effects
The next time the database engine starts, since no log file currently exists, a new log file will be created with a starting log offset that matches the end of the recently deleted/renamed log file
Managing the transaction log
Using client-side backups
-t : Only back up the transaction log
-r : Rename and restart the current transaction log
• The current transaction log (after being backed up) is renamed to a file
based on the current date and a new empty current log file is created
-x : Delete and restart the current transaction log
• The current transaction log (after being backed up) is deleted and a new
empty current log file is created
-xo : Delete and restart the current transaction log without backup
• The current log file is simply deleted
• This limits your recoverability options
-n : rename local backup of transaction log to match server
• When backing up the log file to the backup directory, use a file name that
matches the renamed log file of the current transaction log
• This ensures that log files in your backup directory are not overwritten
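For example, a nightly client-side incremental backup that renames and restarts the current log and keeps uniquely named log copies in the backup directory (the connection parameters and directory are illustrative):
dbbackup -t -r -n -c "eng=asa8;dbn=db1;uid=dba;pwd=sql" f:\backup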
Managing the transaction log
Using server-side backups
TRANSACTION LOG ONLY
• Equivalent to the –t switch on dbbackup
TRANSACTION LOG RENAME
• Equivalent to the –r switch on dbbackup
TRANSACTION LOG RENAME MATCH
• Equivalent to using the –r –n switches on dbbackup
TRANSACTION LOG TRUNCATE
• Equivalent to the –x switch on dbbackup
When does dbbackup wait for all
transactions to finish?
There are two reasons why dbbackup may wait
for all transactions to finish
The current transaction log is being renamed
• A transaction cannot span two separate log files, so all transactions must be committed or rolled back for a log file rename to occur.
The WAIT BEFORE START clause is used on server-side backups
• Using this clause ensures that there is nothing in the database
file that will cause automatic recovery to occur when the
database is started next
• Specifically, all the rollback logs must be empty, meaning that there are no active transactions
Running dbvalid on backed up
database files
Do not run dbvalid on a database file that you have
just backed up unless you can guarantee that the
database engine will not need to go through
recovery on startup
The WAIT BEFORE START option was used
You did an offline backup
During recovery, the engine will roll back transactions
that were not committed, and will make changes to
the database and log file
This can prevent the backed-up database from being used in certain recovery situations (more on this later)
If you have used the WAIT BEFORE START or done an
offline backup, then you can start the database
engine in read-only mode and run dbvalid
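A sketch of validating a backed-up image this way; the -r (read-only) server switch and the connection parameters are assumptions to verify for your version:
dbeng7 -r f:\em408.db
dbvalid -c "eng=em408;dbn=em408;uid=dba;pwd=sql"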
Keep several full backups
If you back up on top of the previous backup, and
you get a media failure in the middle of the
backup, you are left with no backup at all
You should also keep some of your full backups
offsite to protect against fire, flood,
earthquake, theft, or vandalism
Live Backups
A live backup is an extra precaution you can
take to protect your current log file
A client-side connection is made using
dbbackup -l
dbbackup first takes a full copy of the current transaction log
dbbackup will stay connected to the database engine and every time a
new page is written to the current transaction log, the engine will
also send a copy of this page to dbbackup
Therefore, a duplicate copy of the current transaction log can be
maintained on a separate machine
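A minimal sketch of starting a live backup from a second machine; the output file name and connection parameters are illustrative:
dbbackup -l f:\live\em408.log -c "links=tcpip;eng=asa8;dbn=db1;uid=dba;pwd=sql"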
Scheduling Backups to Run
As of version 7, you can use event scheduling to automatically
back up your database
The next few slides show an implementation of a backup
strategy
Database file resides at c:\em408.db
Transaction log resides at d:\em408.log
Local backup directory is f:\
Network mapped drive “\\backup\loc” is an offsite machine used for backup
redundancy
A full backup is taken each Sunday
Before the full backup, all the current files are archived so we don’t overwrite
what might be the only good backup
An incremental backup is taken each night
The early-morning times (8 and 9 A.M. on Sunday, 3 A.M. on other nights) were chosen to minimize the chances that there are open transactions at the time
Scheduling a Full Backup
create event FullBackup
schedule
start time '9:00AM' on ('Sun')
handler
begin
backup database directory 'f:\\'
wait before start;
call xp_cmdshell('xcopy f:\*.* \\backup\loc /D /Q /Y');
end
Scheduling an Incremental Backup
create event IncrementalBackup
schedule
start time '3:00AM' every 24 hours
handler
begin
backup database directory 'f:\\'
transaction log only
transaction log rename match;
call xp_cmdshell('xcopy f:\*.* \\backup\loc /D /Q /Y');
end
Scheduling Clean Up
create event CleanUp
schedule
start time '8:00AM' on ('Sun')
handler
begin
call xp_cmdshell('del /Q /F f:\zip\backup.zip');
call xp_cmdshell('pkzip -a f:\zip\backup.zip f:\*.*');
call xp_cmdshell('del /Q /F f:\*.*');
call xp_cmdshell('del /Q /F \\backup\loc');
call xp_cmdshell('copy f:\zip\backup.zip \\backup\loc');
end
Overview
Points of Persistent Data
Automatic Recovery
Backing Up Your Database
Backup Considerations in a Replicating
Environment
Recovery from Media Failure
Backup Strategies
Backup Considerations in a
replicating environment
There are a few other things to consider
in a replicating environment
Protecting the current log files on the consolidated
database
Backing up old log files
Backing up the messaging system
dbremote -u
Protecting the Current Log File
When dbremote runs, it will always scan the
current transaction log
If your current log file is lost after dbremote has sent messages generated from it, you will have to re-extract all remote users that picked up those messages
It is especially important to make sure that the mirror log file is on a separate device from the devices where the database file and transaction log reside
This will protect you from media failure on a
single device
Backing up Old Log Files
In order to manage your log files, you will often
rename and re-start your transaction logs
Until every remote user has confirmed every
transaction in a log file, you cannot delete the old
logs, and dbremote will need to know where these
old log files are located
It is a good idea to back up these old log files to alternate media in case of media failure
Keeping old log files should be considered completely
separate from your backup and recovery procedures
The old renamed log files on their own cannot guarantee recoverability unless a valid backup strategy is also in place
Backing up the messaging
system
There is no need to back up the
messaging system
The guaranteed delivery system that is
implemented by dbremote will
guarantee that any messages lost as a
result of system failure or media failure
will be resent
DBREMOTE -u
Running dbremote with the -u option forces dbremote to send only transactions that occur in the offline (renamed) logs
Therefore, if the current transaction log (and mirror log) are both lost, you can guarantee that no remote users have received messages from the lost log and nobody will need to be re-extracted
This slows down how quickly your remote sites receive data from the consolidated database, but it guarantees that remotes need not be re-extracted should the current log be lost
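For example (the connection parameters are illustrative):
dbremote -u -c "eng=asa8;dbn=cons;uid=dba;pwd=sql"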
Where are we?
Points of Persistent Data
Automatic Recovery
Backing Up Your Database
Backup Considerations in a Replicating
Environment
Recovery from Media Failure
Backup Strategies
Recovery naming conventions
For the following recovery examples, these naming conventions will be used
The database file resides at c:\em408.db
The transaction log resides at d:\em408.log
The mirror transaction log resides at e:\em408.mlg
The backup directory is located at f:\
A temporary recovery directory will be located at
f:\recover
ALL these directories are local to the machine
Recovery from media failure
Recovery from media failure requires you to keep
the transaction log on a separate device from
the database file
Recovery depends on whether the media failure
is on the device holding your database file or
on the device holding your transaction log
The first step in recovering from a media failure
is to clean up, reformat, or replace the device
that failed
There are some special considerations if you are
recovering a consolidated database
Media failure on the database
file
If your transaction log is still usable,
but you have lost your database file, the
recovery process depends on the
number of incremental backups you have
taken since your last full backup
If you have a single transaction
log
If you have not deleted or restarted the transaction
log since the last full backup, the current
transaction log contains everything since the last
backup
Move the current logs to the recovery directory
• MOVE d:\em408.log f:\recover
• MOVE e:\em408.mlg f:\recover
Restore the most recent full backup to the production directories
• COPY f:\em408.db c:\ /Y
• COPY f:\em408.log d:\ /Y
• COPY f:\em408.log e:\em408.mlg /Y
Apply the TWO transaction logs to the backed up database
• DBENG7 c:\em408.db -a d:\em408.log
• DBENG7 c:\em408.db -a f:\recover\em408.log
Failure to apply the current
transaction log
DBENG7 c:\em408.db -a f:\recover\em408.log
Cannot open transaction log file -- Can't use log file "f:\recover\em408.log" since the offsets don't match the offsets in the database file
This error results from starting the backed-up copy of the database and letting automatic recovery run on it before trying to apply the subsequent log files
If you have multiple transaction
logs
If you have run incremental backups since the last full
backup, each transaction log since the full backup
needs to be applied in sequence to bring the database
up to date
Move the current logs to the recovery directory
Restore the most recent full backup to the production directories
Apply all the transaction logs to the backed up database in the order in which they were backed up
• DBENG7 c:\em408.db -a d:\em408.log
• DBENG7 c:\em408.db -a f:\01081100.log
• DBENG7 c:\em408.db -a f:\01081101.log
• DBENG7 c:\em408.db -a f:\01081200.log
• DBENG7 c:\em408.db -a f:\recover\em408.log
Media failure on the transaction
log
If your database file is still usable but you have lost
your current or mirror transaction log
Copy the good transaction log to the location of the lost transaction log
• COPY e:\em408.mlg d:\em408.log
If you have lost BOTH the current and mirror
transaction logs
Make a backup of the database file immediately
• COPY c:\em408.db f:\recover
Erase the current transaction logs if they still exist
• DEL d:\em408.log e:\em408.mlg
Restart the database with the -f switch
• DBENG7 -f c:\em408.db
The server will restore the database to the most recent checkpoint and then
roll back any transactions that were not committed at the time of the
checkpoint
WARNING!!!!
NEVER NEVER NEVER NEVER NEVER use the -f recovery switch on a replicating database
Engine vs. Server during
recovery
If you normally use the database server (dbsrv7) for day-to-day operations, you should also use dbsrv7 during recovery
The stand-alone engine (dbeng7) has a hard-coded ten-connection limit
If you had more than ten active connections to the database server, the stand-alone engine will fail when it attempts to apply the log file, since only ten connections are allowed
Start dbsrv7 with “-x none” during recovery to
prevent the communication links from starting
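A sketch of applying a log file with the network server instead of the personal engine, following the same -a usage shown earlier; -x none keeps the client network links from starting:
dbsrv7 -x none c:\em408.db -a f:\recover\em408.log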
Recovering uncommitted
database changes
As a result of the recovery, all transactions that were still uncommitted at the end of the last log file will be rolled back when the engine starts up
You can see which transactions were rolled back
using the dbtran utility with the following
switches
-a : include rollback transactions in output
-f : output from most recent checkpoint
-u : select transactions for listed users
You can translate the most recent log file and
manually determine if you would like to apply
any of the uncommitted transactions
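A sketch of translating the most recent log to inspect rolled-back work for a particular user; the user name and output file are illustrative:
dbtran -a -f -u dba d:\em408.log f:\recover\em408.sql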
Where are we?
Points of Persistent Data
Automatic Recovery
Backing Up Your Database
Backup Considerations in a Replicating
Environment
Recovery from Media Failure
Backup Strategies
Backup Strategies
There will be different levels of paranoia
based on how you are using ASA
Unconcerned
Concerned
Paranoid
Job depends on backup
Unconcerned Backup Strategy
Physical Database Setup
Database File and Log File reside on same physical
device, probably in the same directory
Backup Procedure
No need to do anything
You could possibly take a full system backup to tape
every now and then
Unconcerned Backup Strategy
Pros
Easy setup, no maintenance required
Protected from system failures
Cons
No protection from media failure
Recovery of the database from media failure depends on a system backup
Changes since the last system backup are lost even if a system backup exists
Concerned Backup Strategy
Physical Database Setup
Database file and transaction log reside on different physical devices
Backup Procedure
Take a full backup every week and rename and restart the transaction
log
Either have the backup placed on another device (network mapped
drive), or spin the backed up database and transaction log to
alternate media (tape, jaz drive, …) after the backup completes
Test your recovery procedure after having defined your procedure
Concerned Backup Strategy
Pros
Protection from both system and media failure on a single device
No data loss should a media failure occur on a single device
Cons
You might be overwriting your only good backup, so a backup failure
may result in you not having any backup at all
Media failure on device with transaction log could result in data loss
since the last checkpoint
Paranoid Backup Strategy
Physical Database Setup
Database file, transaction log and transaction log mirror all reside on
separate physical devices
Backup Procedure
Run dbvalid on database and check for errors
Take a full backup once a week and an incremental backup every day,
and rename and restart the transaction log
Copy backed up database file and transaction logs to another machine
or alternate media once backup completes
Test your recovery procedure once a month
Paranoid Backup Strategy
Pros
Protection from system and media failures
Running dbvalid helps prevent you from backing up a corrupt database
Placing backed up files on alternate media will give you a point to
recover to should the entire machine be destroyed
Cons
Running dbvalid on a large database is time consuming
A problem with the disk controller could destroy all hard drives
Lots of backed up files to manage
Database may become corrupt during backup
Job Security Backup Strategy
Physical Database Setup
Database file, transaction log and transaction log mirror
all reside on separate physical devices
Each physical device is controlled by a separate disk
controller from a different manufacturer
Job Security Backup Strategy
Backup Procedure
Run dbvalid on database and check for errors
Take a full backup once a week and an incremental backup every day, and
rename and restart the transaction log
Copy backed up database file and transaction logs to alternate media once
backup completes
Move alternate media off-site as soon as possible
Run dbvalid on the backed up database with the engine in read only mode to
ensure that backed up image is valid
Test your recovery procedure at least once a week
If running dbremote, use the -u switch
Also consider running dbbackup -l (live backup) to keep an up-to-date
version of your log file on a separate machine
Document your backup and recovery procedures so they can continue in case
you are out of the office for some reason
Job Security Backup Strategy
Pros
Protection from system and media failures
Running dbvalid twice ensures that the backed-up image is also valid
Taking alternate media off-site protects you from a site disaster
Running dbremote with the -u switch ensures that in the case of a site
disaster or system and media failure, no remote users will be
affected
A bug in a disk controller cannot destroy all your hard drives
You get to keep your job
Job Security Backup Strategy
Cons
In the case of a site disaster or system and media failure,
there will still be data loss since the last backup
Very time consuming
Lots of backed up files need to be managed
Summary
Points of Persistent Data
Automatic Recovery
Backing Up Your Database
Backup Considerations in a Replicating
Environment
Recovery from Media Failure
Backup Strategies