Databases | Microsoft Docs

Table of Contents
Overview
System Databases
master Database
model Database
msdb Database
Resource Database
tempdb Database
Rebuild System Databases
Contained Databases
Modified Features (Contained Database)
Contained Database Collations
Security Best Practices with Contained Databases
Migrate to a Partially Contained Database
SQL Server Data Files in Microsoft Azure
Database Files and Filegroups
Database States
File States
Database Identifiers
Estimate the Size of a Database
Estimate the Size of a Table
Estimate the Size of a Clustered Index
Estimate the Size of a Nonclustered Index
Estimate the Size of a Heap
Copy Databases to Other Servers
Use the Copy Database Wizard
Copy Databases with Backup and Restore
Publish a Database (SQL Server Management Studio)
Deploy a SQL Server Database to a Microsoft Azure Virtual Machine
Database Detach and Attach
Move a Database Using Detach and Attach (Transact-SQL)
Upgrade a Database Using Detach and Attach (Transact-SQL)
Detach a Database
Attach a Database
Add Data or Log Files to a Database
Change the Configuration Settings for a Database
Create a Database
Delete a Database
Delete Data or Log Files from a Database
Display Data and Log Space Information for a Database
Increase the Size of a Database
Manage Metadata When Making a Database Available on Another Server Instance
Move Database Files
Move User Databases
Move System Databases
Rename a Database
Set a Database to Single-user Mode
Shrink a Database
Shrink a File
View or Change the Properties of a Database
Value for Extended Property Dialog Box
Database Object (Extended Properties Page)
Database Properties (General Page)
Database Properties (Files Page)
Database Properties (Filegroups Page)
Database Properties (Options Page)
Database Properties (ChangeTracking Page)
Database Properties (Transaction Log Shipping Page)
Log Shipping Transaction Log Backup Settings
Secondary Database Settings
Log Shipping Monitor Settings
Database Properties (Mirroring Page)
Database Properties (Query Store Page)
View a List of Databases on an Instance of SQL Server
View or Change the Compatibility Level of a Database
Create a User-Defined Data Type Alias
Database Snapshots
View the Size of the Sparse File of a Database Snapshot (Transact-SQL)
Create a Database Snapshot (Transact-SQL)
View a Database Snapshot (SQL Server)
Revert a Database to a Database Snapshot
Drop a Database Snapshot (Transact-SQL)
Database Instant File Initialization
Databases
3/24/2017 • 2 min to read
A database in SQL Server is made up of a collection of tables that stores a specific set of structured data. A table
contains a collection of rows, also referred to as records or tuples, and columns, also referred to as attributes. Each
column in the table is designed to store a certain type of information, for example, dates, names, dollar amounts,
and numbers.
Basic Information about Databases
A computer can have one or more than one instance of SQL Server installed. Each instance of SQL Server can
contain one or many databases. Within a database, there are one or many object ownership groups called schemas.
Within each schema there are database objects such as tables, views, and stored procedures. Some objects such as
certificates and asymmetric keys are contained within the database, but are not contained within a schema. For
more information about creating tables, see Tables.
SQL Server databases are stored in the file system in files. Files can be grouped into filegroups. For more
information about files and filegroups, see Database Files and Filegroups.
When people gain access to an instance of SQL Server they are identified as a login. When people gain access to a
database they are identified as a database user. A database user can be based on a login. If contained databases are
enabled, a database user can be created that is not based on a login. For more information about users, see CREATE
USER (Transact-SQL).
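The following is a minimal sketch of the difference between logins and users; the database, login, and user names are hypothetical, and the contained-user statement assumes contained databases are enabled:

```sql
-- Instance-level login, stored in master
CREATE LOGIN AppLogin WITH PASSWORD = 'Str0ng!Passw0rd1';
GO
USE SalesDB;  -- hypothetical user database
GO
-- Database user mapped to the login
CREATE USER AppUser FOR LOGIN AppLogin;
GO
-- In a contained database, a user can be created without any login
CREATE USER ContainedAppUser WITH PASSWORD = 'An0ther!Passw0rd2';
```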
A user that has access to a database can be given permission to access the objects in the database. Though
permissions can be granted to individual users, we recommend creating database roles, adding the database users
to the roles, and then granting access permissions to the roles. Granting permissions to roles instead of users makes
it easier to keep permissions consistent and understandable as the number of users grows and continually changes.
For more information about role permissions, see CREATE ROLE (Transact-SQL) and Principals (Database Engine).
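As a hedged sketch of the role-based approach described above (the role, schema, and user names are hypothetical):

```sql
USE SalesDB;  -- hypothetical database
GO
CREATE ROLE SalesReaders;
-- Add database users to the role
ALTER ROLE SalesReaders ADD MEMBER AppUser;
-- Grant permissions once, at the role level, instead of per user
GRANT SELECT ON SCHEMA::Sales TO SalesReaders;
```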
Working with Databases
Most people who work with databases use the SQL Server Management Studio tool. The Management Studio tool
has a graphical user interface for creating databases and the objects in the databases. Management Studio also has
a query editor for interacting with databases by writing Transact-SQL statements. Management Studio can be
installed from the SQL Server installation disk, or downloaded from MSDN.
In This Section
System Databases
Delete Data or Log Files from a Database
Contained Databases
Display Data and Log Space Information for a Database
SQL Server Data Files in Microsoft Azure
Increase the Size of a Database
Database Files and Filegroups
Rename a Database
Database States
Set a Database to Single-user Mode
File States
Shrink a Database
Estimate the Size of a Database
Shrink a File
Copy Databases to Other Servers
View or Change the Properties of a Database
Database Detach and Attach (SQL Server)
View a List of Databases on an Instance of SQL Server
Add Data or Log Files to a Database
View or Change the Compatibility Level of a Database
Change the Configuration Settings for a Database
Use the Maintenance Plan Wizard
Create a Database
Create a User-Defined Data Type Alias
Delete a Database
Database Snapshots (SQL Server)
Related Content
Indexes
Views
Stored Procedures (Database Engine)
System Databases
3/24/2017 • 2 min to read
SQL Server includes the following system databases.
| SYSTEM DATABASE | DESCRIPTION |
|---|---|
| master Database | Records all the system-level information for an instance of SQL Server. |
| msdb Database | Is used by SQL Server Agent for scheduling alerts and jobs. |
| model Database | Is used as the template for all databases created on the instance of SQL Server. Modifications made to the model database, such as database size, collation, recovery model, and other database options, are applied to any databases created afterward. |
| Resource Database | Is a read-only database that contains system objects that are included with SQL Server. System objects are physically persisted in the Resource database, but they logically appear in the sys schema of every database. |
| tempdb Database | Is a workspace for holding temporary objects or intermediate result sets. |
Modifying System Data
SQL Server does not support users directly updating the information in system objects such as system tables,
system stored procedures, and catalog views. Instead, SQL Server provides a complete set of administrative tools
that let users fully administer their system and manage all users and objects in a database. These include the
following:
Administration utilities, such as SQL Server Management Studio.
SQL-SMO API. This lets programmers include complete functionality for administering SQL Server in their
applications.
Transact-SQL scripts and stored procedures. These can use system stored procedures and Transact-SQL
DDL statements.
These tools shield applications from changes in the system objects. For example, SQL Server sometimes has
to change the system tables in new versions of SQL Server to support new functionality that is being added
in that version. Applications issuing SELECT statements that directly reference system tables are frequently
dependent on the old format of the system tables. Sites may not be able to upgrade to a new version of
SQL Server until they have rewritten applications that are selecting from system tables. SQL Server
considers the system stored procedures, DDL, and SQL-SMO published interfaces, and works to maintain
the backward compatibility of these interfaces.
SQL Server does not support triggers defined on the system tables, because they might modify the
operation of the system.
NOTE
System databases cannot reside on UNC share directories.
Viewing System Database Data
You should not code Transact-SQL statements that directly query the system tables, unless that is the only way to
obtain the information that is required by the application. Instead, applications should obtain catalog and system
information by using the following:
System catalog views
SQL-SMO
Windows Management Instrumentation (WMI) interface
Catalog functions, methods, attributes, or properties of the data API used in the application, such as ADO,
OLE DB, or ODBC.
Transact-SQL system stored procedures and built-in functions.
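For example, a minimal sketch of querying catalog views rather than reading system tables directly:

```sql
-- Databases on the instance
SELECT name, database_id, create_date, compatibility_level
FROM sys.databases;

-- Objects and their schemas in the current database
SELECT s.name AS schema_name, o.name AS object_name, o.type_desc
FROM sys.objects AS o
JOIN sys.schemas AS s ON o.schema_id = s.schema_id;
```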
Related Tasks
Back Up and Restore of System Databases (SQL Server)
Hide System Objects in Object Explorer
Related Content
Catalog Views (Transact-SQL)
Databases
master Database
3/24/2017 • 3 min to read
The master database records all the system-level information for a SQL Server system. This includes instance-wide metadata such as logon accounts, endpoints, linked servers, and system configuration settings. In SQL Server,
system objects are no longer stored in the master database; instead, they are stored in the Resource database.
Also, master is the database that records the existence of all other databases and the location of those database
files and records the initialization information for SQL Server. Therefore, SQL Server cannot start if the master
database is unavailable.
Physical Properties of master
The following table lists the initial configuration values of the master data and log files. The sizes of these files may
vary slightly for different editions of SQL Server.
| FILE | LOGICAL NAME | PHYSICAL NAME | FILE GROWTH |
|---|---|---|---|
| Primary data | master | master.mdf | Autogrow by 10 percent until the disk is full. |
| Log | mastlog | mastlog.ldf | Autogrow by 10 percent to a maximum of 2 terabytes. |
For information about how to move the master data and log files, see Move System Databases.
Database Options
The following table lists the default value for each database option in the master database and whether the option
can be modified. To view the current settings for these options, use the sys.databases catalog view.
| DATABASE OPTION | DEFAULT VALUE | CAN BE MODIFIED |
|---|---|---|
| ALLOW_SNAPSHOT_ISOLATION | ON | No |
| ANSI_NULL_DEFAULT | OFF | Yes |
| ANSI_NULLS | OFF | Yes |
| ANSI_PADDING | OFF | Yes |
| ANSI_WARNINGS | OFF | Yes |
| ARITHABORT | OFF | Yes |
| AUTO_CLOSE | OFF | No |
| AUTO_CREATE_STATISTICS | ON | Yes |
| AUTO_SHRINK | OFF | No |
| AUTO_UPDATE_STATISTICS | ON | Yes |
| AUTO_UPDATE_STATISTICS_ASYNC | OFF | Yes |
| CHANGE_TRACKING | OFF | No |
| CONCAT_NULL_YIELDS_NULL | OFF | Yes |
| CURSOR_CLOSE_ON_COMMIT | OFF | Yes |
| CURSOR_DEFAULT | GLOBAL | Yes |
| Database Availability Options | ONLINE, MULTI_USER, READ_WRITE | No, No, No |
| DATE_CORRELATION_OPTIMIZATION | OFF | Yes |
| DB_CHAINING | ON | No |
| ENCRYPTION | OFF | No |
| MIXED_PAGE_ALLOCATION | ON | No |
| NUMERIC_ROUNDABORT | OFF | Yes |
| PAGE_VERIFY | CHECKSUM | Yes |
| PARAMETERIZATION | SIMPLE | Yes |
| QUOTED_IDENTIFIER | OFF | Yes |
| READ_COMMITTED_SNAPSHOT | OFF | No |
| RECOVERY | SIMPLE | Yes |
| RECURSIVE_TRIGGERS | OFF | Yes |
| Service Broker Options | DISABLE_BROKER | No |
| TRUSTWORTHY | OFF | Yes |
For a description of these database options, see ALTER DATABASE (Transact-SQL).
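For example, a hedged sketch that reads a few of these settings for master through the sys.databases catalog view:

```sql
SELECT name,
       snapshot_isolation_state_desc,
       is_auto_close_on,
       is_auto_shrink_on,
       recovery_model_desc,
       page_verify_option_desc,
       is_trustworthy_on
FROM sys.databases
WHERE name = 'master';
```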
Restrictions
The following operations cannot be performed on the master database:
Adding files or filegroups.
Changing collation. The default collation is the server collation.
Changing the database owner. master is owned by sa.
Creating a full-text catalog or full-text index.
Creating triggers on system tables in the database.
Dropping the database.
Dropping the guest user from the database.
Enabling change data capture.
Participating in database mirroring.
Removing the primary filegroup, primary data file, or log file.
Renaming the database or primary filegroup.
Setting the database to OFFLINE.
Setting the database or primary filegroup to READ_ONLY.
Recommendations
When you work with the master database, consider the following recommendations:
Always have a current backup of the master database available.
Back up the master database as soon as possible after the following operations:
Creating, modifying, or dropping any database
Changing server or database configuration values
Modifying or adding logon accounts
Do not create user objects in master. If you do, master must be backed up more frequently.
Do not set the TRUSTWORTHY option to ON for the master database.
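A minimal sketch of the backup recommendation above; the backup path is hypothetical:

```sql
-- Back up master after creating databases, changing configuration, or modifying logins
BACKUP DATABASE master
TO DISK = N'D:\Backups\master.bak'  -- hypothetical path
WITH INIT, CHECKSUM;
```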
What to Do If master Becomes Unusable
If master becomes unusable, you can return the database to a usable state in either of the following ways:
Restore master from a current database backup.
If you can start the server instance, you should be able to restore master from a full database backup. For
more information, see Restore the master Database (Transact-SQL).
Rebuild master completely.
If severe damage to master prevents you from starting SQL Server, you must rebuild master. For more
information, see Rebuild System Databases.
IMPORTANT
Rebuilding master rebuilds all of the system databases.
Related Content
Rebuild System Databases
System Databases
sys.databases (Transact-SQL)
sys.master_files (Transact-SQL)
Move Database Files
model Database
3/24/2017 • 3 min to read
The model database is used as the template for all databases created on an instance of SQL Server. The entire
contents of the model database, including database options, are copied to each new database. Some of the settings
of model are also used for creating a new tempdb during startup; because tempdb is re-created every time SQL
Server is started, the model database must always exist on a SQL Server system.
Newly created user databases use the same recovery model as the model database. The default is user
configurable. To learn the current recovery model of the model, see View or Change the Recovery Model of a
Database (SQL Server).
IMPORTANT
If you modify the model database with user-specific template information, we recommend that you back up model. For
more information, see Back Up and Restore of System Databases (SQL Server).
model Usage
When a CREATE DATABASE statement is issued, the first part of the database is created by copying in the contents
of the model database. The rest of the new database is then filled with empty pages.
If you modify the model database, all databases created afterward will inherit those changes. For example, you
could set permissions or database options, or add objects such as tables, functions, or stored procedures. File
properties of the model database are an exception, and are ignored except the initial size of the data file. The
default initial size of the model database data and log file is 8 MB.
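As a hedged illustration of this behavior (the table and database names are hypothetical):

```sql
USE model;
GO
-- Any object added to model is copied into databases created afterward
CREATE TABLE dbo.AuditDefaults (id int IDENTITY PRIMARY KEY, note nvarchar(200));
GO
CREATE DATABASE NewAppDB;  -- hypothetical new database
GO
SELECT name FROM NewAppDB.sys.tables;  -- includes AuditDefaults
```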
Physical Properties of model
The following table lists initial configuration values of the model data and log files.
| FILE | LOGICAL NAME | PHYSICAL NAME | FILE GROWTH |
|---|---|---|---|
| Primary data | modeldev | model.mdf | Autogrow by 64 MB until the disk is full. |
| Log | modellog | modellog.ldf | Autogrow by 64 MB to a maximum of 2 terabytes. |
For versions before SQL Server 2016, see model Database for default file growth values.
To move the model database or log files, see Move System Databases.
Database Options
The following table lists the default value for each database option in the model database and whether the option
can be modified. To view the current settings for these options, use the sys.databases catalog view.
| DATABASE OPTION | DEFAULT VALUE | CAN BE MODIFIED |
|---|---|---|
| ALLOW_SNAPSHOT_ISOLATION | OFF | Yes |
| ANSI_NULL_DEFAULT | OFF | Yes |
| ANSI_NULLS | OFF | Yes |
| ANSI_PADDING | OFF | Yes |
| ANSI_WARNINGS | OFF | Yes |
| ARITHABORT | OFF | Yes |
| AUTO_CLOSE | OFF | Yes |
| AUTO_CREATE_STATISTICS | ON | Yes |
| AUTO_SHRINK | OFF | Yes |
| AUTO_UPDATE_STATISTICS | ON | Yes |
| AUTO_UPDATE_STATISTICS_ASYNC | OFF | Yes |
| CHANGE_TRACKING | OFF | No |
| CONCAT_NULL_YIELDS_NULL | OFF | Yes |
| CURSOR_CLOSE_ON_COMMIT | OFF | Yes |
| CURSOR_DEFAULT | GLOBAL | Yes |
| Database Availability Options | ONLINE, MULTI_USER, READ_WRITE | No, Yes, Yes |
| DATE_CORRELATION_OPTIMIZATION | OFF | Yes |
| DB_CHAINING | OFF | No |
| ENCRYPTION | OFF | No |
| MIXED_PAGE_ALLOCATION | ON | No |
| NUMERIC_ROUNDABORT | OFF | Yes |
| PAGE_VERIFY | CHECKSUM | Yes |
| PARAMETERIZATION | SIMPLE | Yes |
| QUOTED_IDENTIFIER | OFF | Yes |
| READ_COMMITTED_SNAPSHOT | OFF | Yes |
| RECOVERY | Depends on SQL Server edition* | Yes |
| RECURSIVE_TRIGGERS | OFF | Yes |
| Service Broker Options | DISABLE_BROKER | No |
| TRUSTWORTHY | OFF | No |
*To verify the current recovery model of the database, see View or Change the Recovery Model of a Database (SQL
Server) or sys.databases (Transact-SQL).
For a description of these database options, see ALTER DATABASE (Transact-SQL).
Restrictions
The following operations cannot be performed on the model database:
Adding files or filegroups.
Changing collation. The default collation is the server collation.
Changing the database owner. model is owned by sa.
Dropping the database.
Dropping the guest user from the database.
Enabling change data capture.
Participating in database mirroring.
Removing the primary filegroup, primary data file, or log file.
Renaming the database or primary filegroup.
Setting the database to OFFLINE.
Setting the primary filegroup to READ_ONLY.
Creating procedures, views, or triggers using the WITH ENCRYPTION option. The encryption key is tied to
the database in which the object is created. Encrypted objects created in the model database can only be
used in model.
Related Content
System Databases
sys.databases (Transact-SQL)
sys.master_files (Transact-SQL)
Move Database Files
msdb Database
3/24/2017 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
The msdb database is used by SQL Server Agent for scheduling alerts and jobs and by other features such as SQL
Server Management Studio, Service Broker and Database Mail.
For example, SQL Server automatically maintains a complete online backup-and-restore history within tables in
msdb. This information includes the name of the party that performed the backup, the time of the backup, and the
devices or files where the backup is stored. SQL Server Management Studio uses this information to propose a
plan for restoring a database and applying any transaction log backups. Backup events for all databases are
recorded even if they were created with custom applications or third-party tools. For example, if you use a
Microsoft Visual Basic application that calls SQL Server Management Objects (SMO) objects to perform backup
operations, the event is logged in the msdb system tables, the Microsoft Windows application log, and the SQL
Server error log. To help you protect the information that is stored in msdb, we recommend that you consider
placing the msdb transaction log on fault-tolerant storage.
By default, msdb uses the simple recovery model. If you use the backup and restore history tables, we recommend
that you use the full recovery model for msdb. For more information, see Recovery Models (SQL Server). Notice
that when SQL Server is installed or upgraded and whenever Setup.exe is used to rebuild the system databases,
the recovery model of msdb is automatically set to simple.
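A hedged sketch of both points: switching msdb to the full recovery model and reading the backup history that msdb records:

```sql
-- Use the full recovery model if you rely on the backup and restore history tables
ALTER DATABASE msdb SET RECOVERY FULL;
GO
-- Most recent backups recorded in msdb
SELECT TOP (10) database_name, backup_start_date, type, backup_size
FROM msdb.dbo.backupset
ORDER BY backup_start_date DESC;
```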
IMPORTANT
After any operation that updates msdb, such as backing up or restoring any database, we recommend that you back up
msdb. For more information, see Back Up and Restore of System Databases (SQL Server).
Physical Properties of msdb
The following table lists the initial configuration values of the msdb data and log files. The sizes of these files may
vary slightly for different editions of SQL Server Database Engine.
| FILE | LOGICAL NAME | PHYSICAL NAME | FILE GROWTH |
|---|---|---|---|
| Primary data | MSDBData | MSDBData.mdf | Autogrow by 10 percent until the disk is full. |
| Log | MSDBLog | MSDBLog.ldf | Autogrow by 10 percent to a maximum of 2 terabytes. |
To move the msdb database or log files, see Move System Databases.
Database Options
The following table lists the default value for each database option in the msdb database and whether the option
can be modified. To view the current settings for these options, use the sys.databases catalog view.
| DATABASE OPTION | DEFAULT VALUE | CAN BE MODIFIED |
|---|---|---|
| ALLOW_SNAPSHOT_ISOLATION | ON | No |
| ANSI_NULL_DEFAULT | OFF | Yes |
| ANSI_NULLS | OFF | Yes |
| ANSI_PADDING | OFF | Yes |
| ANSI_WARNINGS | OFF | Yes |
| ARITHABORT | OFF | Yes |
| AUTO_CLOSE | OFF | Yes |
| AUTO_CREATE_STATISTICS | ON | Yes |
| AUTO_SHRINK | OFF | Yes |
| AUTO_UPDATE_STATISTICS | ON | Yes |
| AUTO_UPDATE_STATISTICS_ASYNC | OFF | Yes |
| CHANGE_TRACKING | OFF | No |
| CONCAT_NULL_YIELDS_NULL | OFF | Yes |
| CURSOR_CLOSE_ON_COMMIT | OFF | Yes |
| CURSOR_DEFAULT | GLOBAL | Yes |
| Database Availability Options | ONLINE, MULTI_USER, READ_WRITE | No, Yes, Yes |
| DATE_CORRELATION_OPTIMIZATION | OFF | Yes |
| DB_CHAINING | ON | Yes |
| ENCRYPTION | OFF | No |
| MIXED_PAGE_ALLOCATION | ON | No |
| NUMERIC_ROUNDABORT | OFF | Yes |
| PAGE_VERIFY | CHECKSUM | Yes |
| PARAMETERIZATION | SIMPLE | Yes |
| QUOTED_IDENTIFIER | OFF | Yes |
| READ_COMMITTED_SNAPSHOT | OFF | No |
| RECOVERY | SIMPLE | Yes |
| RECURSIVE_TRIGGERS | OFF | Yes |
| Service Broker Options | ENABLE_BROKER | Yes |
| TRUSTWORTHY | ON | Yes |
For a description of these database options, see ALTER DATABASE (Transact-SQL).
Restrictions
The following operations cannot be performed on the msdb database:
Changing collation. The default collation is the server collation.
Dropping the database.
Dropping the guest user from the database.
Enabling change data capture.
Participating in database mirroring.
Removing the primary filegroup, primary data file, or log file.
Renaming the database or primary filegroup.
Setting the database to OFFLINE.
Setting the primary filegroup to READ_ONLY.
Related Content
System Databases
sys.databases (Transact-SQL)
sys.master_files (Transact-SQL)
Move Database Files
Database Mail
SQL Server Service Broker
Resource Database
3/24/2017 • 2 min to read
The Resource database is a read-only database that contains all the system objects that are included with SQL
Server. SQL Server system objects, such as sys.objects, are physically persisted in the Resource database, but they
logically appear in the sys schema of every database. The Resource database does not contain user data or user
metadata.
The Resource database makes upgrading to a new version of SQL Server an easier and faster procedure. In earlier
versions of SQL Server, upgrading required dropping and creating system objects. Because the Resource database
file contains all system objects, an upgrade is now accomplished simply by copying the single Resource database
file to the local server.
Physical Properties of Resource
The physical file names of the Resource database are mssqlsystemresource.mdf and mssqlsystemresource.ldf.
These files are located in <drive>:\Program Files\Microsoft SQL Server\MSSQL<version>.
<instance_name>\MSSQL\Binn\ and should not be moved. Each instance of SQL Server has one and only one
associated mssqlsystemresource.mdf file, and instances do not share this file.
WARNING
Upgrades and service packs sometimes provide a new resource database which is installed to the BINN folder. Changing the
location of the resource database is not supported or recommended.
Backing Up and Restoring the Resource Database
SQL Server cannot back up the Resource database. You can perform your own file-based or a disk-based backup
by treating the mssqlsystemresource.mdf file as if it were a binary (.EXE) file, rather than a database file, but you
cannot use SQL Server to restore your backups. Restoring a backup copy of mssqlsystemresource.mdf can only be
done manually, and you must be careful not to overwrite the current Resource database with an out-of-date or
potentially insecure version.
IMPORTANT
After restoring a backup of mssqlsystemresource.mdf, you must reapply any subsequent updates.
Accessing the Resource Database
The Resource database should only be modified by or at the direction of a Microsoft Customer Support Services
(CSS) specialist. The ID of the Resource database is always 32767. Other important values associated with the
Resource database are the version number and the last time that the database was updated.
To determine the version number of the Resource database, use:
SELECT SERVERPROPERTY('ResourceVersion');
GO
To determine when the Resource database was last updated, use:
SELECT SERVERPROPERTY('ResourceLastUpdateDateTime');
GO
To access SQL definitions of system objects, use the OBJECT_DEFINITION function:
SELECT OBJECT_DEFINITION(OBJECT_ID('sys.objects'));
GO
Related Content
System Databases
Diagnostic Connection for Database Administrators
OBJECT_DEFINITION (Transact-SQL)
SERVERPROPERTY (Transact-SQL)
Start SQL Server in Single-User Mode
tempdb Database
3/24/2017 • 4 min to read
The tempdb system database is a global resource that is available to all users connected to the instance of SQL
Server and is used to hold the following:
Temporary user objects that are explicitly created, such as: global or local temporary tables, temporary
stored procedures, table variables, or cursors.
Internal objects that are created by the SQL Server Database Engine, for example, work tables to store
intermediate results for spools or sorting.
Row versions that are generated by data modification transactions in a database that uses read-committed
using row versioning isolation or snapshot isolation transactions.
Row versions that are generated by data modification transactions for features, such as: online index
operations, Multiple Active Result Sets (MARS), and AFTER triggers.
Operations within tempdb are minimally logged. This enables transactions to be rolled back. tempdb is recreated every time SQL Server is started so that the system always starts with a clean copy of the database.
Temporary tables and stored procedures are dropped automatically on disconnect, and no connections are
active when the system is shut down. Therefore, there is never anything in tempdb to be saved from one
session of SQL Server to another. Backup and restore operations are not allowed on tempdb.
Physical Properties of tempdb
The following table lists the initial configuration values of the tempdb data and log files. The sizes of these files
may vary slightly for different editions of SQL Server.
| FILE | LOGICAL NAME | PHYSICAL NAME | INITIAL SIZE | FILE GROWTH |
|---|---|---|---|---|
| Primary data | tempdev | tempdb.mdf | 8 megabytes | Autogrow by 64 MB until the disk is full |
| Secondary data files* | temp# | tempdb_mssql_#.ndf | 8 megabytes | Autogrow by 64 MB until the disk is full |
| Log | templog | templog.ldf | 8 megabytes | Autogrow by 64 megabytes to a maximum of 2 terabytes |
* The number of files depends on the number of (logical) cores on the machine. The value will be the number of
cores or 8, whichever is lower.
The default value for the number of data files is based on the general guidelines in KB 2154845.
Performance Improvements in tempdb
In SQL Server, tempdb performance is improved in the following ways:
Temporary tables and table variables may be cached. Caching allows operations that drop and create the
temporary objects to execute very quickly and reduces page allocation contention.
Allocation page latching protocol is improved. This reduces the number of UP (update) latches that are used.
Logging overhead for tempdb is reduced. This reduces disk I/O bandwidth consumption on the tempdb
log file.
Setup adds multiple tempdb data files during a new instance installation. This task can be accomplished with
the new UI input control on the Database Engine Configuration section and a command line parameter
/SQLTEMPDBFILECOUNT. By default, setup will add as many tempdb files as the CPU count or 8, whichever
is lower.
When there are multiple tempdb data files, all files will autogrow at same time and by the same amount
depending on growth settings. Trace flag 1117 is no longer required.
All allocations in tempdb use uniform extents. Trace flag 1118 is no longer required.
For the primary filegroup, the AUTOGROW_ALL_FILES property is turned on and the property cannot be
modified.
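To see how these settings look on a given instance, a minimal sketch that lists the tempdb files and their growth settings:

```sql
SELECT name, physical_name, type_desc,
       size * 8 / 1024 AS size_mb,      -- size is stored in 8-KB pages
       growth, is_percent_growth
FROM tempdb.sys.database_files;
```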
Moving the tempdb Data and Log Files
To move the tempdb data and log files, see Move System Databases.
Database Options
The following table lists the default value for each database option in the tempdb database and whether the
option can be modified. To view the current settings for these options, use the sys.databases catalog view.
| DATABASE OPTION | DEFAULT VALUE | CAN BE MODIFIED |
|---|---|---|
| ALLOW_SNAPSHOT_ISOLATION | OFF | Yes |
| ANSI_NULL_DEFAULT | OFF | Yes |
| ANSI_NULLS | OFF | Yes |
| ANSI_PADDING | OFF | Yes |
| ANSI_WARNINGS | OFF | Yes |
| ARITHABORT | OFF | Yes |
| AUTO_CLOSE | OFF | No |
| AUTO_CREATE_STATISTICS | ON | Yes |
| AUTO_SHRINK | OFF | No |
| AUTO_UPDATE_STATISTICS | ON | Yes |
| AUTO_UPDATE_STATISTICS_ASYNC | OFF | Yes |
| CHANGE_TRACKING | OFF | No |
| CONCAT_NULL_YIELDS_NULL | OFF | Yes |
| CURSOR_CLOSE_ON_COMMIT | OFF | Yes |
| CURSOR_DEFAULT | GLOBAL | Yes |
| Database Availability Options | ONLINE, MULTI_USER, READ_WRITE | No, No, No |
| DATE_CORRELATION_OPTIMIZATION | OFF | Yes |
| DB_CHAINING | ON | No |
| ENCRYPTION | OFF | No |
| MIXED_PAGE_ALLOCATION | OFF | No |
| NUMERIC_ROUNDABORT | OFF | Yes |
| PAGE_VERIFY | CHECKSUM for new installations of SQL Server. NONE for upgrades of SQL Server. | Yes |
| PARAMETERIZATION | SIMPLE | Yes |
| QUOTED_IDENTIFIER | OFF | Yes |
| READ_COMMITTED_SNAPSHOT | OFF | No |
| RECOVERY | SIMPLE | No |
| RECURSIVE_TRIGGERS | OFF | Yes |
| Service Broker Options | ENABLE_BROKER | Yes |
| TRUSTWORTHY | OFF | No |
For a description of these database options, see ALTER DATABASE SET Options (Transact-SQL).
Restrictions
The following operations cannot be performed on the tempdb database:
Adding filegroups.
Backing up or restoring the database.
Changing collation. The default collation is the server collation.
Changing the database owner. tempdb is owned by sa.
Creating a database snapshot.
Dropping the database.
Dropping the guest user from the database.
Enabling change data capture.
Participating in database mirroring.
Removing the primary filegroup, primary data file, or log file.
Renaming the database or primary filegroup.
Running DBCC CHECKALLOC.
Running DBCC CHECKCATALOG.
Setting the database to OFFLINE.
Setting the database or primary filegroup to READ_ONLY.
Permissions
Any user can create temporary objects in tempdb. Users can only access their own objects, unless they receive
additional permissions. It is possible to revoke the connect permission to tempdb to prevent a user from using
tempdb, but this is not recommended as some routine operations require the use of tempdb.
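A minimal sketch of temporary objects that any user can create (object names are hypothetical):

```sql
-- Local temporary table, stored in tempdb and dropped automatically on disconnect
CREATE TABLE #OrderStaging (OrderID int, Amount money);
INSERT INTO #OrderStaging VALUES (1, 19.99);

-- Table variable, also backed by tempdb
DECLARE @Totals TABLE (OrderID int, Amount money);
INSERT INTO @Totals SELECT OrderID, Amount FROM #OrderStaging;

DROP TABLE #OrderStaging;
```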
Related Content
SORT_IN_TEMPDB Option For Indexes
System Databases
sys.databases (Transact-SQL)
sys.master_files (Transact-SQL)
Move Database Files
See Also
Working with tempdb in SQL Server 2005
Rebuild System Databases
3/24/2017 • 9 min to read
System databases must be rebuilt to fix corruption problems in the master, model, msdb, or resource system
databases or to modify the default server-level collation. This topic provides step-by-step instructions to rebuild
system databases in SQL Server 2016.
In This Topic
Before you begin:
Limitations and Restrictions
Prerequisites
Procedures:
Rebuild System Databases
Rebuild the resource Database
Create a New msdb Database
Follow Up:
Troubleshoot Rebuild Errors
Before You Begin
Limitations and Restrictions
When the master, model, msdb, and tempdb system databases are rebuilt, the databases are dropped and recreated in their original location. If a new collation is specified in the rebuild statement, the system databases are
created using that collation setting. Any user modifications to these databases are lost. For example, you may have
user-defined objects in the master database, scheduled jobs in msdb, or changes to the default database settings in
the model database.
Prerequisites
Perform the following tasks before you rebuild the system databases to ensure that you can restore the system
databases to their current settings.
1. Record all server-wide configuration values.
SELECT * FROM sys.configurations;
2. Record all service packs and hotfixes applied to the instance of SQL Server and the current collation. You
must reapply these updates after rebuilding the system databases.
SELECT
SERVERPROPERTY('ProductVersion ') AS ProductVersion,
SERVERPROPERTY('ProductLevel') AS ProductLevel,
SERVERPROPERTY('ResourceVersion') AS ResourceVersion,
SERVERPROPERTY('ResourceLastUpdateDateTime') AS ResourceLastUpdateDateTime,
SERVERPROPERTY('Collation') AS Collation;
3. Record the current location of all data and log files for the system databases. Rebuilding the system
databases installs all system databases to their original location. If you have moved system database data or
log files to a different location, you must move the files again.
SELECT name, physical_name AS current_file_location
FROM sys.master_files
WHERE database_id IN (DB_ID('master'), DB_ID('model'), DB_ID('msdb'), DB_ID('tempdb'));
4. Locate the current backup of the master, model, and msdb databases.
5. If the instance of SQL Server is configured as a replication Distributor, locate the current backup of the
distribution database.
6. Ensure you have appropriate permissions to rebuild the system databases. To perform this operation, you
must be a member of the sysadmin fixed server role. For more information, see Server-Level Roles.
7. Verify that copies of the master, model, msdb data and log template files exist on the local server. The
default location for the template files is C:\Program Files\Microsoft SQL
Server\MSSQL13.MSSQLSERVER\MSSQL\Binn\Templates. These files are used during the rebuild process
and must be present for Setup to succeed. If they are missing, run the Repair feature of Setup, or manually
copy the files from your installation media. To locate the files on the installation media, navigate to the
appropriate platform directory (x86 or x64) and then navigate to
setup\sql_engine_core_inst_msi\Pfiles\SqlServr\MSSQL.X\MSSQL\Binn\Templates.
Rebuild System Databases
The following procedure rebuilds the master, model, msdb, and tempdb system databases. You cannot specify the
system databases to be rebuilt. For clustered instances, this procedure must be performed on the active node and
the SQL Server resource in the corresponding cluster application group must be taken offline before performing
the procedure.
This procedure does not rebuild the resource database. See the section, "Rebuild the resource Database Procedure"
later in this topic.
To rebuild system databases for an instance of SQL Server:
1. Insert the SQL Server 2016 installation media into the disk drive, or, from a command prompt, change
directories to the location of the setup.exe file on the local server. The default location on the server is
C:\Program Files\Microsoft SQL Server\130\Setup Bootstrap\SQLServer2016.
2. From a command prompt window, enter the following command. Square brackets are used to indicate
optional parameters. Do not enter the brackets. When using a Windows operating system that has User
Account Control (UAC) enabled, running Setup requires elevated privileges. The command prompt must be
run as Administrator.
Setup /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=InstanceName /SQLSYSADMINACCOUNTS=accounts [ /SAPWD=StrongPassword ] [ /SQLCOLLATION=CollationName ]
| PARAMETER NAME | DESCRIPTION |
|---|---|
| /QUIET or /Q | Specifies that Setup run without any user interface. |
| /ACTION=REBUILDDATABASE | Specifies that Setup re-create the system databases. |
| /INSTANCENAME=InstanceName | Is the name of the instance of SQL Server. For the default instance, enter MSSQLSERVER. |
| /SQLSYSADMINACCOUNTS=accounts | Specifies the Windows groups or individual accounts to add to the sysadmin fixed server role. When specifying more than one account, separate the accounts with a blank space. For example, enter BUILTIN\Administrators MyDomain\MyUser. When you are specifying an account that contains a blank space within the account name, enclose the account in double quotation marks. For example, enter NT AUTHORITY\SYSTEM. |
| [ /SAPWD=StrongPassword ] | Specifies the password for the SQL Server sa account. This parameter is required if the instance uses Mixed Authentication (SQL Server and Windows Authentication) mode. **Security Note:** The sa account is a well-known SQL Server account and it is often targeted by malicious users. It is very important that you use a strong password for the sa login. Do not specify this parameter for Windows Authentication mode. |
| [ /SQLCOLLATION=CollationName ] | Specifies a new server-level collation. This parameter is optional. When not specified, the current collation of the server is used. **Important:** Changing the server-level collation does not change the collation of existing user databases. All newly created user databases will use the new collation by default. For more information, see Set or Change the Server Collation. |
| [ /SQLTEMPDBFILECOUNT=NumberOfFiles ] | Specifies the number of tempdb data files. This value can be increased up to 8 or the number of cores, whichever is higher. Default value: 8 or the number of cores, whichever is lower. |
| [ /SQLTEMPDBFILESIZE=FileSizeInMB ] | Specifies the initial size of each tempdb data file in MB. Setup allows the size up to 1024 MB. Default value: 8 |
| [ /SQLTEMPDBFILEGROWTH=FileSizeInMB ] | Specifies the file growth increment of each tempdb data file in MB. A value of 0 indicates that automatic growth is off and no additional space is allowed. Setup allows the size up to 1024 MB. Default value: 64 |
| [ /SQLTEMPDBLOGFILESIZE=FileSizeInMB ] | Specifies the initial size of the tempdb log file in MB. Setup allows the size up to 1024 MB. Default value: 8. Allowed range: Min = 8, max = 1024. |
| [ /SQLTEMPDBLOGFILEGROWTH=FileSizeInMB ] | Specifies the file growth increment of the tempdb log file in MB. A value of 0 indicates that automatic growth is off and no additional space is allowed. Setup allows the size up to 1024 MB. Default value: 64. Allowed range: Min = 8, max = 1024. |
| [ /SQLTEMPDBDIR=Directories ] | Specifies the directories for tempdb data files. When specifying more than one directory, separate the directories with a blank space. If multiple directories are specified, the tempdb data files will be spread across the directories in a round-robin fashion. Default value: System Data Directory |
| [ /SQLTEMPDBLOGDIR=Directory ] | Specifies the directory for the tempdb log file. Default value: System Data Directory |
3. When Setup has completed rebuilding the system databases, it returns to the command prompt with no
messages. Examine the Summary.txt log file to verify that the process completed successfully. This file is
located at C:\Program Files\Microsoft SQL Server\130\Setup Bootstrap\Logs.
4. The RebuildDatabase scenario deletes the system databases and installs them again in a clean state. Because the
tempdb file count setting does not persist, the number of tempdb files is not known during setup, so the
RebuildDatabase scenario cannot know how many tempdb files to re-add. You can provide the number of tempdb
files again with the SQLTEMPDBFILECOUNT parameter. If the parameter is not provided, RebuildDatabase adds a
default number of tempdb files: as many tempdb files as the CPU count or 8, whichever is lower.
Post-Rebuild Tasks
After rebuilding the database you may need to perform the following additional tasks:
Restore your most recent full backups of the master, model, and msdb databases. For more information, see
Back Up and Restore of System Databases (SQL Server).
IMPORTANT
If you have changed the server collation, do not restore the system databases. Doing so will replace the new collation
with the previous collation setting.
If a backup is not available or if the restored backup is not current, re-create any missing entries. For
example, re-create all missing entries for your user databases, backup devices, SQL Server logins, end
points, and so on. The best way to re-create entries is to run the original scripts that created them.
IMPORTANT
We recommend that you secure your scripts to prevent them from being altered by unauthorized individuals.
If the instance of SQL Server is configured as a replication Distributor, you must restore the distribution
database. For more information, see Back Up and Restore Replicated Databases.
Move the system databases to the locations you recorded previously. For more information, see Move
System Databases.
Verify the server-wide configuration values match the values you recorded previously.
Rebuild the resource Database
The following procedure rebuilds the resource system database. When you rebuild the resource database, all
service packs and hot fixes are lost, and therefore must be reapplied.
To rebuild the resource system database:
1. Launch the SQL Server 2016 Setup program (setup.exe) from the distribution media.
2. In the left navigation area, click Maintenance, and then click Repair.
3. Setup support rules and file routines run to ensure that your system has prerequisites installed and that the
computer passes Setup validation rules. Click OK or Install to continue.
4. On the Select Instance page, select the instance to repair, and then click Next.
5. The repair rules will run to validate the operation. To continue, click Next.
6. From the Ready to Repair page, click Repair. The Complete page indicates that the operation is finished.
Create a New msdb Database
If the msdb database is damaged and you do not have a backup of the msdb database, you can create a new
msdb by using the instmsdb script.
WARNING
Rebuilding the msdb database using the instmsdb script will eliminate all the information stored in msdb, such as jobs,
alerts, operators, maintenance plans, backup history, Policy-Based Management settings, Database Mail, Performance
Data Warehouse, etc.
1. Stop all services connecting to the Database Engine, including SQL Server Agent, SSRS, SSIS, and all
applications using SQL Server as data store.
2. Start SQL Server from the command line using the command:
NET START MSSQLSERVER /T3608
For more information, see Start, Stop, Pause, Resume, Restart the Database Engine, SQL Server Agent, or
SQL Server Browser Service.
3. In another command line window, detach the msdb database by executing the following command,
replacing <servername> with the instance of SQL Server:
SQLCMD -E -S<servername> -dmaster -Q"EXEC sp_detach_db msdb"
4. Using the Windows Explorer, rename the msdb database files. By default these are in the DATA sub-folder
for the SQL Server instance.
5. Using SQL Server Configuration Manager, stop and restart the Database Engine service normally.
6. In a command line window, connect to SQL Server and execute the command:
SQLCMD -E -S<servername> -i"C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Install\instmsdb.sql" -o"C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Install\instmsdb.out"
Replace <servername> with the instance of the Database Engine. Use the file system path of the instance of
SQL Server.
7. Using the Windows Notepad, open the instmsdb.out file and check the output for any errors.
8. Re-apply any service packs or hotfixes installed on the instance.
9. Re-create the user content stored in the msdb database, such as jobs, alerts, etc.
10. Back up the msdb database.
Troubleshoot Rebuild Errors
Syntax and other run-time errors are displayed in the command prompt window. Examine the Setup statement for
the following syntax errors:
Missing slash mark (/) in front of each parameter name.
Missing equal sign (=) between the parameter name and the parameter value.
Presence of blank spaces between the parameter name and the equal sign.
Presence of commas (,) or other characters that are not specified in the syntax.
After the rebuild operation is complete, examine the SQL Server logs for any errors. The default log location
is C:\Program Files\Microsoft SQL Server\130\Setup Bootstrap\Logs. To locate the log file that contains the
results of the rebuild process, change directories to the Logs folder from a command prompt, and then run
findstr /s RebuildDatabase summary*.* . This search will point you to any log files that contain the results of
rebuilding system databases. Open the log files and examine them for relevant error messages.
See Also
System Databases
Contained Databases
3/24/2017 • 8 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2012), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
A contained database is a database that is isolated from other databases and from the instance of SQL Server that
hosts the database. SQL Server 2016 helps users isolate their databases from the instance in four ways.
Much of the metadata that describes a database is maintained in the database. (In addition to, or instead of,
maintaining metadata in the master database.)
All metadata are defined using the same collation.
User authentication can be performed by the database, reducing the database's dependency on the logins
of the instance of SQL Server.
The SQL Server environment (DMVs, XEvents, etc.) reports and can act upon containment information.
Some features of partially contained databases, such as storing metadata in the database, apply to all SQL
Server 2016 databases. Some benefits of partially contained databases, such as database level
authentication and catalog collation, must be enabled before they are available. Partial containment is
enabled using the CREATE DATABASE and ALTER DATABASE statements or by using SQL Server
Management Studio. For more information about how to enable partial database containment, see Migrate
to a Partially Contained Database.
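A hedged sketch of enabling partial containment, assuming a hypothetical database name:

```sql
-- Allow contained database authentication at the instance level
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO
-- Convert an existing database to partial containment
ALTER DATABASE SalesDB SET CONTAINMENT = PARTIAL;
```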
Partially Contained Database Concepts
A fully contained database includes all the settings and metadata required to define the database and has no
configuration dependencies on the instance of the SQL Server Database Engine where the database is installed. In
previous versions of SQL Server, separating a database from the instance of SQL Server could be time consuming
and required detailed knowledge of the relationship between the database and the instance of SQL Server.
Partially contained databases make it easier to separate a database from the instance of SQL Server and other
databases.
The contained database considers features with regard to containment. Any user-defined entity that relies only on
functions that reside in the database is considered fully contained. Any user-defined entity that relies on functions
that reside outside the database is considered uncontained. (For more information, see the Containment section
later in this topic.)
The following terms apply to the contained database model.
Database boundary
The boundary between a database and the instance of SQL Server. The boundary between a database and other
databases.
Contained
An element that exists entirely in the database boundary.
Uncontained
An element that crosses the database boundary.
Non-contained database
A database that has containment set to NONE. All databases in versions earlier than SQL Server 2012 are non-
contained. By default, all SQL Server 2012 and later databases have containment set to NONE.
Partially contained database
A partially contained database is a contained database that can allow some features that cross the database
boundary. SQL Server includes the ability to determine when the containment boundary is crossed.
Contained user
There are two types of users for contained databases.
Contained database user with password
Contained database users with passwords are authenticated by the database. For more information, see
Contained Database Users - Making Your Database Portable.
Windows principals
Authorized Windows users and members of authorized Windows groups can connect directly to the
database and do not need logins in the master database. The database trusts the authentication by
Windows.
Users based on logins in the master database can be granted access to a contained database, but that
would create a dependency on the SQL Server instance. Therefore, creating users based on logins is generally
not recommended for partially contained databases.
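A minimal sketch of both kinds of contained users (the database, user, and domain names are hypothetical, and the database is assumed to be partially contained):

```sql
USE SalesDB;
GO
-- Contained database user authenticated by the database itself
CREATE USER PortableUser WITH PASSWORD = 'Str0ng!Passw0rd3';
GO
-- Windows principal that connects directly to the database without a login
CREATE USER [MyDomain\MyUser];
```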
IMPORTANT
Enabling partially contained databases delegates control over access to the instance of SQL Server to the owners of the
database. For more information, see Security Best Practices with Contained Databases.
Database Boundary
Because partially contained databases separate the database functionality from those of the instance, there is a
clearly defined line between these two elements called the database boundary.
Inside of the database boundary is the database model, where the databases are developed and managed.
Examples of entities located inside of the database include, system tables like sys.tables, contained database users
with passwords, and user tables in the current database referenced by a two-part name.
Outside of the database boundary is the management model, which pertains to instance-level functions and
management. Examples of entities located outside of the database boundary include, system tables like
sys.endpoints, users mapped to logins, and user tables in another database referenced by a three-part-name.
Containment
User entities that reside entirely within the database are considered contained. Any entities that reside outside of
the database, or rely on interaction with functions outside of the database, are considered uncontained.
In general, user entities fall into the following categories of containment:
Fully contained user entities (those that never cross the database boundary), for example sys.indexes. Any
code that uses these features or any object that references only these entities is also fully contained.
Uncontained user entities (those that cross the database boundary), for example sys.server_principals or a
server principal (login) itself. Any code that uses these entities or any functions that references these
entities are uncontained.
Partially Contained Database
The contained database feature is currently available only in a partially contained state. A partially contained
database is a contained database that allows the use of uncontained features.
Use the sys.dm_db_uncontained_entities and sys.sql_modules (Transact-SQL) view to return information about
uncontained objects or features. By determining the containment status of the elements of your database, you can
discover what objects or features must be replaced or altered to promote containment.
IMPORTANT
Because certain objects have a default containment setting of NONE, this view can return false positives.
The behavior of partially contained databases differs most distinctly from that of non-contained databases with
regard to collation. For more information about collation issues, see Contained Database Collations.
Benefits of using Partially Contained Databases
There are issues and complications associated with the non-contained databases that can be resolved by using a
partially contained database.
Database Movement
One of the problems that occurs when moving databases, is that some important information can be unavailable
when a database is moved from one instance to another. For example, login information is stored within the
instance instead of in the database. When you move a non-contained database from one instance to another
instance of SQL Server, this information is left behind. You must identify the missing information and move it with
your database to the new instance of SQL Server. This process can be difficult and time-consuming.
The partially contained database can store important information in the database so the database still has the
information after it is moved.
NOTE
A partially contained database can provide documentation describing those features that are used by a database that
cannot be separated from the instance. This includes a list of other interrelated databases, system settings that the database
requires but cannot be contained, and so on.
Benefit of Contained Database Users with Always On
By reducing the ties to the instance of SQL Server, partially contained databases can be useful during failover
when you use Always On availability groups.
Creating contained users enables the user to connect directly to the contained database. This is a very significant
feature in high availability and disaster recovery scenarios such as in an Always On solution. If the users are
contained users, in case of failover, people would be able to connect to the secondary without creating logins on
the instance hosting the secondary. This provides an immediate benefit. For more information, see Overview of
Always On Availability Groups (SQL Server) and Prerequisites, Restrictions, and Recommendations for Always On
Availability Groups (SQL Server).
Initial Database Development
Because a developer may not know where a new database will be deployed, limiting the deployed environmental
impacts on the database lessens the work and concern for the developer. In the non-contained model, the
developer must consider possible environmental impacts on the new database and program accordingly.
However, by using partially contained databases, developers can detect instance-level impacts on the database
earlier, reducing instance-level concerns for the developer.
Database Administration
Maintaining database settings in the database, instead of in the master database, lets each database owner have
more control over their database, without giving the database owner sysadmin permission.
Limitations
Partially contained databases do not allow the following features:
Numbered procedures
Schema-bound objects that depend on built-in functions with collation changes
Binding change resulting from collation changes, including references to objects, columns, symbols, or types
Replication, change data capture, and change tracking
WARNING
Temporary stored procedures are currently permitted. Because temporary stored procedures breach containment, they are
not expected to be supported in future versions of contained database.
Identifying Database Containment
There are two tools to help identify the containment status of the database. The sys.dm_db_uncontained_entities
(Transact-SQL) is a view that shows all the potentially uncontained entities in the database. The
database_uncontained_usage event occurs when any actual uncontained entity is identified at run time.
sys.dm_db_uncontained_entities
This view shows any entities in the database that have the potential to be uncontained, such as those that cross the database boundary. This includes those user entities that may use objects outside the database model.
However, because the containment of some entities (for example, those using dynamic SQL) cannot be
determined until run time, the view may show some entities that are not actually uncontained. For more
information, see sys.dm_db_uncontained_entities (Transact-SQL).
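For example, a hedged sketch that lists potentially uncontained entities in the current database:

```sql
SELECT ue.class_desc,
       OBJECT_NAME(ue.major_id) AS referencing_object,
       ue.statement_type,
       ue.feature_name
FROM sys.dm_db_uncontained_entities AS ue;
```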
database_uncontained_usage event
This XEvent occurs whenever an uncontained entity is identified at run time. This includes entities originated in
client code. This XEvent will occur only for actual uncontained entities. However, the event only occurs at run time.
Therefore, any uncontained user entities you have not run will not be identified by this XEvent.
See Also
Modified Features (Contained Database)
Contained Database Collations
Security Best Practices with Contained Databases
Migrate to a Partially Contained Database
Contained Database Users - Making Your Database Portable
Modified Features (Contained Database)
3/24/2017 • 1 min to read
The following features have been modified to be supported by a partially contained database. Features are usually
modified so they do not cross the database boundary.
For more information, see Contained Databases.
ALTER DATABASE
Application Level
When using the ALTER DATABASE statement from inside of a contained database, the syntax differs from that used
for a non-contained database. This difference includes restrictions of elements of the statement that extend beyond
the database to the instance. For more information, see ALTER DATABASE (Transact-SQL).
Instance Level
The syntax for the ALTER DATABASE statement when used outside of a contained database differs from that used for non-contained databases. These changes prevent crossing the database boundary. For more information, see ALTER
DATABASE (Transact-SQL).
CREATE DATABASE
The CREATE DATABASE syntax for a contained database differs from that for a non-contained database. See
CREATE DATABASE (SQL Server Transact-SQL) for information about new syntax requirements and allowances.
Temporary Tables
Local temporary tables are permitted within a contained database, but their behavior differs from that in non-contained databases. In non-contained databases, temporary table data is collated in the collation of tempdb. In a
contained database, temporary table data is collated in the collation of the contained database.
All metadata associated with temporary tables (for example, table and column names, indexes, and so on) will be in
the catalog collation.
Named constraints may not be used in temporary tables.
Temporary tables may not refer to user-defined types, XML schema collections, or user-defined functions.
Collation
In the non-contained database model, there are three separate types of collation: Database collation, Instance
collation, and tempdb collation. Contained databases use only two collations, database collation and the new
catalog collation. See Contained Database Collations for more details on contained database collation.
User Options
When enabling contained databases, the user options server configuration option must be set to 0 for the instance of SQL Server.
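For example, the current value can be checked and reset with sp_configure; this is a minimal sketch and requires ALTER SETTINGS permission:
-- Check the current value of the user options configuration option.
EXEC sp_configure 'user options';
GO
-- Set it to 0 if it has been changed.
EXEC sp_configure 'user options', 0;
RECONFIGURE;
GO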
See Also
Contained Database Collations
Contained Databases
Contained Database Collations
3/24/2017 • 7 min to read • Edit Online
Various properties affect the sort order and equality semantics of textual data, including case sensitivity, accent
sensitivity, and the base language being used. These qualities are expressed to SQL Server through the choice of
collation for the data. For a more in-depth discussion of collations themselves, see Collation and Unicode Support.
Collations apply not only to data stored in user tables, but to all text handled by SQL Server, including metadata,
temporary objects, variable names, etc. The handling of these differs in contained and non-contained databases.
This change will not affect many users, but it helps provide instance independence and uniformity. However, it may also
cause some confusion, as well as problems for sessions that access both contained and non-contained databases.
This topic clarifies the content of the change, and examines areas where the change may cause problems.
Non-Contained Databases
All databases have a default collation (which can be set when creating or altering a database). This collation is used
for all metadata in the database, and serves as the default for all string columns within the database. Users can choose
a different collation for any particular column by using the COLLATE clause.
Example 1
For example, if we were working in Beijing, we might use a Chinese collation:
ALTER DATABASE MyDB COLLATE Chinese_Simplified_Pinyin_100_CI_AS;
Now if we create a column, its default collation will be this Chinese collation, but we can choose another one if we
want:
CREATE TABLE MyTable
(mycolumn1 nvarchar,
mycolumn2 nvarchar COLLATE Frisian_100_CS_AS);
GO
SELECT name, collation_name
FROM sys.columns
WHERE name LIKE 'mycolumn%' ;
GO
Here is the result set.
name            collation_name
--------------- -----------------------------------
mycolumn1       Chinese_Simplified_Pinyin_100_CI_AS
mycolumn2       Frisian_100_CS_AS
This appears relatively simple, but several problems arise. Because the collation for a column is dependent on the
database in which the table is created, problems arise with the use of temporary tables which are stored in
tempdb. The collation of tempdb usually matches the collation for the instance, which does not have to match
the database collation.
Example 2
For example, consider the (Chinese) database above when used on an instance with a Latin1_General collation:
CREATE TABLE T1 (T1_txt nvarchar(max)) ;
GO
CREATE TABLE #T2 (T2_txt nvarchar(max)) ;
GO
At first glance, these two tables look like they have the same schema, but since the collations of the databases
differ, the values are actually incompatible:
SELECT T1_txt, T2_txt
FROM T1
JOIN #T2
ON T1.T1_txt = #T2.T2_txt
Here is the result set.
Msg 468, Level 16, State 9, Line 2
Cannot resolve the collation conflict between "Latin1_General_100_CI_AS_KS_WS_SC" and
"Chinese_Simplified_Pinyin_100_CI_AS" in the equal to operation.
We can fix this by explicitly collating the temporary table. SQL Server makes this somewhat easier by providing
the DATABASE_DEFAULT keyword for the COLLATE clause.
CREATE TABLE T1 (T1_txt nvarchar(max)) ;
GO
CREATE TABLE #T2 (T2_txt nvarchar(max) COLLATE DATABASE_DEFAULT);
GO
SELECT T1_txt, T2_txt
FROM T1
JOIN #T2
ON T1.T1_txt = #T2.T2_txt ;
This now runs without error.
We can also see collation-dependent behavior with variables. Consider the following function:
CREATE FUNCTION f(@x INT) RETURNS INT
AS BEGIN
DECLARE @I INT = 1
DECLARE @İ INT = 2
RETURN @x * @i
END;
This is a rather peculiar function. In a case-sensitive collation, the @i in the RETURN clause cannot bind to either @I
or @İ. In a case-insensitive Latin1_General collation, @i binds to @I, and the function returns 1. But in a case-insensitive Turkish collation, @i binds to @İ, and the function returns 2. This can wreak havoc on a database that
moves between instances with different collations.
Contained Databases
Since a design objective of contained databases is to make them self-contained, the dependence on the instance
and tempdb collations must be severed. To do this, contained databases introduce the concept of the catalog
collation. The catalog collation is used for system metadata and transient objects. Details are provided below.
In a contained database, the catalog collation is Latin1_General_100_CI_AS_WS_KS_SC. This collation is the same
for all contained databases on all instances of SQL Server and cannot be changed.
The database collation is retained, but is only used as the default collation for user data. By default, the database
collation is equal to the model database collation, but can be changed by the user through a CREATE or ALTER
DATABASE command as with non-contained databases.
A new keyword, CATALOG_DEFAULT, is available in the COLLATE clause. This is used as a shortcut to the current
collation of metadata in both contained and non-contained databases. That is, in a non-contained database,
CATALOG_DEFAULT will return the current database collation, since metadata is collated in the database
collation. In a contained database, these two values may be different, since the user can change the database
collation so that it does not match the catalog collation.
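For example, the following sketch compares a variable to metadata using CATALOG_DEFAULT, so the comparison uses the metadata collation whether or not the current database is contained (the table name is illustrative):
DECLARE @name sysname = N'MyTable';
-- Compare against sys.tables.name using the metadata collation.
SELECT name, object_id
FROM sys.tables
WHERE name = @name COLLATE CATALOG_DEFAULT;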
The behavior of various objects in both non-contained and contained databases is summarized in this table:
Item                     Non-Contained Database                 Contained Database
User Data (default)      DATABASE_DEFAULT                       DATABASE_DEFAULT
Temp Data (default)      tempdb Collation                       DATABASE_DEFAULT
Metadata                 DATABASE_DEFAULT / CATALOG_DEFAULT     CATALOG_DEFAULT
Temporary Metadata       tempdb Collation                       CATALOG_DEFAULT
Variables                Instance Collation                     CATALOG_DEFAULT
Goto Labels              Instance Collation                     CATALOG_DEFAULT
Cursor Names             Instance Collation                     CATALOG_DEFAULT
If we revisit the temp table example described previously, we can see that this collation behavior eliminates the need for an
explicit COLLATE clause in most temp table uses. In a contained database, this code now runs without error, even
if the database and instance collations differ:
CREATE TABLE T1 (T1_txt nvarchar(max)) ;
GO
CREATE TABLE #T2 (T2_txt nvarchar(max));
GO
SELECT T1_txt, T2_txt
FROM T1
JOIN #T2
ON T1.T1_txt = #T2.T2_txt ;
This works because both T1_txt and T2_txt are collated in the database collation of the contained database.
Crossing Between Contained and Uncontained Contexts
As long as a session in a contained database remains contained, it must remain within the database to which it
connected. In this case the behavior is straightforward. But if a session crosses between contained and non-contained
contexts, the behavior becomes more complex, since the two sets of rules must be bridged. This can
happen in a partially contained database, since a user may issue a USE statement to change to another database. In this case, the difference in
collation rules is handled by the following principle.
The collation behavior for a batch is determined by the database in which the batch begins.
Note that this decision is made before any commands are issued, including an initial USE. That is, if a batch
begins in a contained database, but the first command is a USE to a non-contained database, the contained
collation behavior will still be used for the batch. Given this, a reference to a variable, for example, may have
multiple possible outcomes:
The reference may find exactly one match. In this case, the reference will work without error.
The reference may not find a match in the current collation where there was one before. This will raise an
error indicating that the variable does not exist, even though it was apparently created.
The reference may find multiple matches that were originally distinct. This will also raise an error.
We’ll illustrate this with a few examples. For these we assume there is a partially-contained database named
MyCDB with its database collation set to the default collation, Latin1_General_100_CI_AS_WS_KS_SC. We
assume that the instance collation is Latin1_General_100_CS_AS_WS_KS_SC. The two collations differ
only in case sensitivity.
Example 1
The following example illustrates the case where the reference finds exactly one match.
USE MyCDB;
GO
CREATE TABLE #a(x int);
INSERT INTO #a VALUES(1);
GO
USE master;
GO
SELECT * FROM #a;
GO
Results:
Here is the result set.
x
-----------
1
In this case, the identifier #a binds in both the case-insensitive catalog collation and the case-sensitive instance
collation, and the code works.
Example 2
The following example illustrates the case where the reference does not find a match in the current collation where
there was one before.
USE MyCDB;
GO
CREATE TABLE #a(x int);
INSERT INTO #A VALUES(1);
GO
Here, the #A binds to #a in the case-insensitive default collation, and the insert works.
Here is the result set.
(1 row(s) affected)
But if we continue the script...
USE master;
GO
SELECT * FROM #A;
GO
We get an error trying to bind to #A in the case-sensitive instance collation:
Here is the result set.
Msg 208, Level 16, State 0, Line 2
Invalid object name '#A'.
Example 3
The following example illustrates the case where the reference finds multiple matches that were originally distinct.
First, we start in tempdb (which has the same case-sensitive collation as our instance) and execute the following
statements.
USE tempdb;
GO
CREATE TABLE #a(x int);
GO
CREATE TABLE #A(x int);
GO
INSERT INTO #a VALUES(1);
GO
INSERT INTO #A VALUES(2);
GO
This succeeds, since the tables are distinct in this collation:
Here is the result set.
(1 row(s) affected)
(1 row(s) affected)
If we move into our contained database, however, we find that we can no longer bind to these tables.
USE MyCDB;
GO
SELECT * FROM #a;
GO
Here is the result set.
Msg 12800, Level 16, State 1, Line 2
The reference to temp table name '#a' is ambiguous and cannot be resolved. Possible candidates are '#a' and '#A'.
Conclusion
The collation behavior of contained databases differs subtly from that in non-contained databases. This behavior is
generally beneficial, providing instance-independence and simplicity. Some users may have issues, particularly
when a session accesses both contained and non-contained databases.
See Also
Contained Databases
Security Best Practices with Contained Databases
3/24/2017 • 5 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2012), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Contained databases have some unique threats that should be understood and mitigated by SQL Server Database
Engine administrators. Most of the threats are related to the USER WITH PASSWORD authentication process,
which moves the authentication boundary from the Database Engine level to the database level.
Threats Related to Users
Users in a contained database that have the ALTER ANY USER permission, such as members of the db_owner
and db_securityadmin fixed database roles, can grant access to the database without the knowledge or
permission of the SQL Server administrator. Granting users access to a contained database increases the potential
attack surface area against the whole SQL Server instance. Administrators should understand this delegation of
access control, and be very careful about granting users in the contained database the ALTER ANY USER
permission. All database owners have the ALTER ANY USER permission. SQL Server administrators should
periodically audit the users in a contained database.
Accessing Other Databases Using the guest Account
Database owners and database users with the ALTER ANY USER permission can create contained database users.
After connecting to a contained database on an instance of SQL Server, a contained database user can access other
databases on the Database Engine, if the other databases have enabled the guest account.
Creating a Duplicate User in Another Database
Some applications might require a user to have access to more than one database. This can be done by
creating identical contained database users in each database. Use the SID option when creating the second user
with password. The following example creates two identical users in two databases.
USE DB1;
GO
CREATE USER Carlo WITH PASSWORD = '<strong password>';
-- Return the SID of the user
SELECT SID FROM sys.database_principals WHERE name = 'Carlo';
-- Change to the second database
USE DB2;
GO
CREATE USER Carlo WITH PASSWORD = '<same password>', SID = <SID from DB1>;
GO
To execute a cross-database query, you must set the TRUSTWORTHY option on the calling database. For example,
if the user (Carlo) defined above is in DB1 and needs to execute SELECT * FROM db2.dbo.Table1, then the TRUSTWORTHY
setting must be on for database DB1. Execute the following code to set the TRUSTWORTHY setting on.
ALTER DATABASE DB1 SET TRUSTWORTHY ON;
Creating a User that Duplicates a Login
If a contained database user with password is created, using the same name as a SQL Server login, and if the SQL
Server login connects specifying the contained database as the initial catalog, then the SQL Server login will be
unable to connect. The connection will be evaluated as the contained database user with password principal on the
contained database instead of as a user based on the SQL Server login. This could cause an intentional or
accidental denial of service for the SQL Server login.
As a best practice, members of the sysadmin fixed server role should consider always connecting without
using the initial catalog option. This connects the login to the master database and avoids any attempts by a
database owner to misuse the login attempt. Then the administrator can change to the contained database
by using the USE <database> statement. You can also set the default database of the login to the contained
database, which completes the login to master, and then transfers the login to the contained database.
As a best practice, do not create contained database users with passwords who have the same name as SQL
Server logins.
If the duplicate login exists, connect to the master database without specifying an initial catalog, and then
execute the USE command to change to the contained database.
When contained databases are present, users of databases that are not contained databases should connect
to the Database Engine without using an initial catalog or by specifying the database name of a non-contained database as the initial catalog. This avoids connecting to the contained database which is under
less direct control by the Database Engine administrators.
Increasing Access by Changing the Containment Status of a Database
Logins that have the ALTER ANY DATABASE permission, such as members of the dbcreator fixed server role,
and users in a non-contained database that have the CONTROL DATABASE permission, such as members of the
db_owner fixed database role, can change the containment setting of a database. If the containment setting of a
database is changed from NONE to either PARTIAL or FULL, then user access can be granted by creating
contained database users with passwords. This could provide access without the knowledge or consent of the SQL
Server administrators. To prevent any databases from being contained, set the Database Engine contained
database authentication option to 0. To prevent connections by contained database users with passwords on
selected contained databases, use login triggers to cancel login attempts by contained database users with
passwords.
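The following is a minimal sketch of such a logon trigger. It assumes that the authenticating_database_id column of sys.dm_exec_sessions identifies the database that authenticated the session, and it uses a hypothetical contained database named MyContainedDB. Test a logon trigger carefully before deploying it, because a faulty logon trigger can block all connections.
CREATE TRIGGER deny_contained_db_users
ON ALL SERVER
FOR LOGON
AS
BEGIN
    -- For contained database users with passwords, the authenticating database
    -- is the contained database; for server logins it is master (database ID 1).
    IF EXISTS (SELECT 1
               FROM sys.dm_exec_sessions
               WHERE session_id = @@SPID
                 AND authenticating_database_id = DB_ID('MyContainedDB'))
        ROLLBACK;
END;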
Attaching a Contained Database
By attaching a contained database, an administrator could give unwanted users access to the instance of the
Database Engine. An administrator concerned about this risk can bring the database online in RESTRICTED_USER
mode, which prevents authentication for contained database users with passwords. Only principals authorized
through logins will be able to access the Database Engine.
Users are created using the password requirements in effect at the time that they are created and passwords are
not rechecked when a database is attached. By attaching a contained database which allowed weak passwords to a
system with a stricter password policy, an administrator could permit passwords that do not meet the current
password policy on the attaching Database Engine. Administrators can avoid retaining the weak passwords by
requiring that all passwords be reset for the attached database.
Password Policies
Passwords in a database can be required to be strong passwords, but cannot be protected by robust password
policies. Use Windows Authentication whenever possible to take advantage of the more extensive password
policies available from Windows.
Kerberos Authentication
Contained database users with passwords cannot use Kerberos Authentication. When possible, use Windows
Authentication to take advantage of Windows features such as Kerberos.
Offline Dictionary Attack
The password hashes for contained database users with passwords are stored in the contained database. Anyone
with access to the database files could perform a dictionary attack against the contained database users with
passwords on an unaudited system. To mitigate this threat, restrict access to the database files, or only permit
connections to contained databases by using Windows Authentication.
Escaping a Contained Database
If a database is partially contained, SQL Server administrators should periodically audit the capabilities of the users
and modules in contained databases.
Denial of Service Through AUTO_CLOSE
Do not configure contained databases to auto close. If closed, opening the database to authenticate a user
consumes additional resources and could contribute to a denial of service attack.
See Also
Contained Databases
Migrate to a Partially Contained Database
Migrate to a Partially Contained Database
3/24/2017 • 2 min to read • Edit Online
This topic discusses how to prepare to change to the partially contained database model and then provides the
migration steps.
In this topic:
Preparing to Migrate a Database
Enable Contained Databases
Converting a Database to Partially Contained
Migrating Users to Contained Database Users
Preparing to Migrate a Database
Review the following items when considering migrating a database to the partially contained database model.
You should understand the partially contained database model. For more information, see Contained
Databases.
You should understand risks that are unique to partially contained databases. For more information, see
Security Best Practices with Contained Databases.
Contained databases do not support replication, change data capture, or change tracking. Confirm the
database does not use these features.
Review the list of database features that are modified for partially contained databases. For more
information, see Modified Features (Contained Database).
Query sys.dm_db_uncontained_entities (Transact-SQL) to find uncontained objects or features in the
database. For more information, see sys.dm_db_uncontained_entities (Transact-SQL).
Monitor the database_uncontained_usage XEvent to see when uncontained features are used.
Enable Contained Databases
Contained databases must be enabled on the instance of SQL Server Database Engine, before contained databases
can be created.
Enabling Contained Databases Using Transact-SQL
The following example enables contained databases on the instance of the SQL Server Database Engine.
sp_configure 'contained database authentication', 1;
GO
RECONFIGURE ;
GO
Enabling Contained Databases Using Management Studio
The following example enables contained databases on the instance of the SQL Server Database Engine.
1. In Object Explorer, right-click the server name, and then click Properties.
2. On the Advanced page, in the Containment section, set the Enable Contained Databases option to
True.
3. Click OK.
Converting a Database to Partially Contained
A database is converted to a contained database by changing the CONTAINMENT option.
Converting a Database to Partially Contained Using Transact-SQL
The following example converts a database named Accounting to a partially contained database.
USE [master]
GO
ALTER DATABASE [Accounting] SET CONTAINMENT = PARTIAL
GO
Converting a Database to Partially contained Using Management Studio
The following example converts a database to a partially contained database.
1. In Object Explorer, expand Databases, right-click the database to be converted, and then click Properties.
2. On the Options page, change the Containment type option to Partial.
3. Click OK.
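With either method, the change can be verified afterward by querying sys.databases; this sketch uses the Accounting database from the example above:
SELECT name, containment, containment_desc
FROM sys.databases
WHERE name = N'Accounting';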
Migrating Users to Contained Database Users
The following example migrates all users that are based on SQL Server logins to contained database users with
passwords. The example excludes disabled logins. The example must be executed in the contained
database.
DECLARE @username sysname ;
DECLARE user_cursor CURSOR
FOR
SELECT dp.name
FROM sys.database_principals AS dp
JOIN sys.server_principals AS sp
ON dp.sid = sp.sid
WHERE dp.authentication_type = 1 AND sp.is_disabled = 0;
OPEN user_cursor
FETCH NEXT FROM user_cursor INTO @username
WHILE @@FETCH_STATUS = 0
BEGIN
EXECUTE sp_migrate_user_to_contained
@username = @username,
@rename = N'keep_name',
@disablelogin = N'disable_login';
FETCH NEXT FROM user_cursor INTO @username
END
CLOSE user_cursor ;
DEALLOCATE user_cursor ;
See Also
Contained Databases
sp_migrate_user_to_contained (Transact-SQL)
sys.dm_db_uncontained_entities (Transact-SQL)
SQL Server Data Files in Microsoft Azure
4/14/2017 • 14 min to read • Edit Online
SQL Server Data Files in Microsoft Azure enables native support for SQL Server database files stored as Microsoft
Azure Blobs. It allows you to create a database in SQL Server running on-premises or in a virtual machine in
Microsoft Azure with a dedicated storage location for your data in Microsoft Azure Blob Storage. This enhancement
especially simplifies moving databases between machines by using detach and attach operations. In addition, it
provides an alternative storage location for your database backup files by allowing you to restore from or to
Microsoft Azure Storage. Therefore, it enables several hybrid solutions by providing several benefits for data
virtualization, data movement, security and availability, and easy, low-cost maintenance for high availability and elastic scaling.
IMPORTANT
Storing system databases in Azure blob storage is not recommended and is not supported.
This topic introduces concepts and considerations that are central to storing SQL Server data files in Microsoft
Azure Storage Service.
For a practical hands-on experience on how to use this new feature, see Tutorial: Using the Microsoft Azure Blob
storage service with SQL Server 2016 databases .
Why use SQL Server Data Files in Microsoft Azure?
Easy and fast migration benefits: This feature simplifies the migration process by moving one database
at a time between on-premises machines as well as between on-premises and cloud environments
without any application changes. Therefore, it supports an incremental migration while maintaining your
existing on-premises infrastructure in place. In addition, having access to a centralized data storage
simplifies the application logic when an application needs to run in multiple locations in an on-premises
environment. In some cases, you may need to rapidly set up computer centers in geographically dispersed
locations, which gather data from many different sources. By using this new enhancement, instead of
moving data from one location to another, you can store many databases as Microsoft Azure blobs, and
then run Transact-SQL scripts to create databases on the local machines or virtual machines.
Cost and limitless storage benefits: This feature enables you to have limitless off-site storage in
Microsoft Azure while leveraging on-premises compute resources. When you use Microsoft Azure as a
storage location, you can easily focus on the application logic without the overhead of hardware
management. If you lose a computation node on-premises, you can set up a new one without any data
movement.
High availability and disaster recovery benefits: Using SQL Server Data Files in Microsoft Azure feature
might simplify the high availability and disaster recovery solutions. For example, if a virtual machine in
Microsoft Azure or an instance of SQL Server crashes, you can re-create your databases in a new SQL Server
instance by just re-establishing links to Microsoft Azure Blobs.
Security benefits: This new enhancement allows you to separate a compute instance from a storage
instance. You can have a fully encrypted database with decryption only occurring on compute instance but
not in a storage instance. In other words, using this new enhancement, you can encrypt all data in public
cloud using Transparent Data Encryption (TDE) certificates, which are physically separated from the data. The
TDE keys can be stored in the master database, which is stored locally in your physically secure on-premises
computer and backed up locally. You can use these local keys to encrypt the data, which resides in Microsoft
Azure Storage. If your cloud storage account credentials are stolen, your data still stays secure as the TDE
certificates always reside in on-premises.
Snapshot backup: This feature enables you to use Azure snapshots to provide nearly instantaneous
backups and quicker restores for database files stored using the Azure Blob storage service. This capability
enables you to simplify your backup and restore policies. For more information, see File-Snapshot Backups
for Database Files in Azure.
Concepts and Requirements
Azure Storage Concepts
When using the SQL Server Data Files in Windows Azure feature, you need to create a storage account and a container
in Windows Azure. Then, you need to create a SQL Server credential, which includes information on the policy of
the container as well as a shared access signature that is necessary to access the container.
In Microsoft Azure, an Azure storage account represents the highest level of the namespace for accessing Blobs. A
storage account can contain an unlimited number of containers, as long as their total size is under 500 TB. For the
latest information on storage limits, see Azure Subscription and Service Limits, Quotas, and Constraints. A
container provides a grouping of a set of Blobs. All Blobs must be in a container. An account can contain an
unlimited number of containers. Similarly, a container can store an unlimited number of Blobs as well. There are
two types of blobs that can be stored in Azure Storage: block and page blobs. This new feature uses Page blobs,
which can be up to 1TB in size, and are more efficient when ranges of bytes in a file are modified frequently. You
can access Blobs using the following URL format: http://storageaccount.blob.core.windows.net/<container>/<blob> .
Azure Billing Considerations
Estimating the cost of using Azure Services is an important matter in the decision making and planning process.
When storing SQL Server data files in Azure Storage, you need to pay costs associated with storage and
transactions. In addition, the implementation of SQL Server Data Files in Azure Storage feature requires a renewal
of Blob lease every 45 to 60 seconds implicitly. This also results in transaction costs per database file, such as .mdf
or .ldf. Based on our estimations, the cost of renewing leases for two database files (.mdf and .ldf) would be about 2
cents per month according to the current pricing model. We recommend that you use the information on the Azure
Pricing page to help estimate the monthly costs associated with the use of Azure Storage and Azure Virtual
Machines.
SQL Server Concepts
When using this new enhancement, you are required to do the following:
You must create a policy on a container and also generate a shared access signature (SAS) key.
For each container used by a data or a log file, you must create a SQL Server Credential whose name
matches the container path.
You must store the information regarding Azure Storage container, its associated policy name, and SAS key
in the SQL Server credential store.
The following example assumes that an Azure storage container has been created, and a policy has been
created with read, write, and list rights. Creating a policy on a container generates a SAS key, which is safe to
keep unencrypted in memory and is needed by SQL Server to access the blob files in the container. In the
following code snippet, replace 'your SAS key' with an entry similar to the following:
'sr=c&si=<MYPOLICYNAME>&sig=<THESHAREDACCESSSIGNATURE>' . For more information, see Manage Access to
Azure Storage Resources
-- Create a credential
CREATE CREDENTIAL [https://testdb.blob.core.windows.net/data]
WITH IDENTITY='SHARED ACCESS SIGNATURE',
SECRET = 'your SAS key'
-- Create database with data and log files in Windows Azure container.
CREATE DATABASE testdb
ON
( NAME = testdb_dat,
FILENAME = 'https://testdb.blob.core.windows.net/data/TestData.mdf' )
LOG ON
( NAME = testdb_log,
FILENAME = 'https://testdb.blob.core.windows.net/data/TestLog.ldf')
Important note: If there are any active references to data files in a container, attempts to delete the corresponding
SQL Server credential fail.
Security
The following are security considerations and requirements when storing SQL Server Data Files in Azure Storage.
When creating a container for the Azure Blob storage service, we recommend that you set the access to
private. When you set the access to private, container and blob data can be read by the Azure account owner
only.
When storing SQL Server database files in Azure Storage, you need to use a shared access signature, a URI
that grants restricted access rights to containers, blobs, queues, and tables. By using a shared access
signature, you can enable SQL Server to access resources in your storage account without sharing your
Azure storage account key.
In addition, we recommend that you continue implementing the traditional on-premises security practices
for your databases.
Installation Prerequisites
The following are installation prerequisites when storing SQL Server Data Files in Azure.
SQL Server on-premises: SQL Server 2016 version includes this feature. To learn how to download SQL
Server 2016, see SQL Server 2016.
SQL Server running in an Azure virtual machine: If you are installing SQL Server on an Azure Virtual
Machine, install SQL Server 2016, or update your existing instance. Similarly, you can also create a new
virtual machine in Azure using SQL Server 2016 platform image.
Limitations
In the current release of this feature, storing FileStream data in Azure Storage is not supported. You can
store Filestream data in an Azure storage integrated local database but you cannot move Filestream data
between machines using Azure Storage. For FileStream data, we recommend that you continue using the
traditional techniques to move the files (.mdf, .ldf) associated with Filestream between different machines.
Currently, this new enhancement does not support more than one SQL Server instance accessing the same
database files in Azure Storage at the same time. If ServerA is online with an active database file and if
ServerB is accidentally started, and it also has a database that points to the same data file, the second server
will fail to start the database with an error code 5120 Unable to open the physical file "%.*ls".
Operating system error %d: "%ls".
Only .mdf, .ldf, and .ndf files can be stored in Azure Storage by using the SQL Server Data Files in Azure
feature.
When using the SQL Server Data Files in Azure feature, geo-replication for your storage account is not
supported. If a storage account is geo-replicated and a geo-failover happened, database corruption could
occur.
Each Blob can be up to maximum 1 TB in size. This creates an upper limit on individual database data and
log files that can be stored in Azure Storage.
It is not possible to store In-Memory OLTP data in Azure Blob using the SQL Server Data Files in Azure
Storage feature. This is because In-Memory OLTP has a dependency on FileStream and, in the current
release of this feature, storing FileStream data in Azure Storage is not supported.
When using SQL Server Data Files in Azure feature, SQL Server performs all URL or file path comparisons
using the Collation set in the master database.
Always On Availability Groups are supported as long as you do not add new database files to the primary
database. If a database operation requires a new file to be created in the primary database, first disable
Always On Availability Groups in the secondary node. Then, perform the database operation on the primary
database and backup the database in the primary node. Next, restore the database to the secondary node,
and enable Always On Availability Groups in the secondary node. Note that Always On Failover Cluster
Instances is not supported when using the SQL Server Data Files in Azure feature.
During normal operation, SQL Server uses temporary leases to reserve Blobs for storage with a renewal of
each Blob lease every 45 to 60 seconds. If a server crashes and another instance of SQL Server configured to
use the same blobs is started, the new instance will wait up to 60 seconds for the existing lease on the Blob
to expire. If you want to attach the database to another instance and you cannot wait for the lease to expire
within 60 seconds, you can explicitly break the lease on the Blob to avoid any failures in attach operations.
Tools and programming reference support
This section describes which tools and programming reference libraries can be used when storing SQL Server data
files in Azure Storage.
PowerShell support
Use PowerShell cmdlets to store SQL Server data files in Azure Blob Storage service by referencing a Blob Storage
URL path instead of a file path. Access Blobs using the following URL format:
http://storageaccount.blob.core.windows.net/<container>/<blob> .
SQL Server Object and performance counters support
Starting with SQL Server 2014, a new SQL Server object has been added to be used with the SQL Server Data Files in
Azure Storage feature. The new SQL Server object is called SQL Server, HTTP_STORAGE_OBJECT, and it can be
used by System Monitor to monitor activity when running SQL Server with Windows Azure Storage.
SQL Server Management Studio support
SQL Server Management Studio allows you to use this feature via several dialog windows. For example, you can
type the URL path of the storage container, such as https://teststorageaccnt.blob.core.windows.net/testcontainer/,
as a Path in several dialog windows, such as New Database, Attach Database, and Restore Database. For more
information, see Tutorial: Using the Microsoft Azure Blob storage service with SQL Server 2016 databases.
SQL Server Management Objects support
When using the SQL Server Data Files in Azure feature, all SQL Server Management Objects (SMO) are supported.
If an SMO object requires a file path, use the BLOB URL format instead of a local file path, such as
https://teststorageaccnt.blob.core.windows.net/testcontainer/ . For more information about SQL Server
Management Objects (SMO), see SQL Server Management Objects (SMO) Programming Guide in SQL Server
Books Online.
Transact-SQL support
This new feature has introduced the following change in the Transact-SQL surface area:
A new int column, credential_id, in the sys.master_files system view. The credential_id column is used to
enable Azure Storage enabled data files to be cross-referenced back to sys.credentials for the credentials
created for them. You can use it for troubleshooting, for example, when a credential cannot be deleted because a
database file still uses it.
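For example, the following sketch lists the database files that still reference a credential, which helps explain why a particular credential cannot be dropped:
SELECT DB_NAME(mf.database_id) AS database_name,
       mf.name                 AS logical_file_name,
       mf.physical_name,
       c.name                  AS credential_name
FROM sys.master_files AS mf
JOIN sys.credentials AS c
    ON mf.credential_id = c.credential_id;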
Troubleshooting for SQL Server Data Files in Microsoft Azure
To avoid errors due to unsupported features or limitations, first review Limitations.
The list of errors that you might get when using the SQL Server Data Files in Azure Storage feature are as follows.
Authentication errors
Cannot drop the credential '%.*ls' because it is used by an active database file.
Resolution: You may see this error when you try to drop a credential that is still being used by an active
database file in Azure Storage. To drop the credential, first you must delete the associated blob that has this
database file. To delete a blob that has an active lease, you must first break the lease.
Shared Access Signature has not been created on the container correctly.
Resolution: Make sure that you have created a Shared Access Signature on the container correctly. Review
the instructions given in Lesson 2 in Tutorial: Using the Microsoft Azure Blob storage service with SQL
Server 2016 databases .
SQL Server credential has not been created correctly.
Resolution: Make sure that you have used 'Shared Access Signature' for the Identity field and created a
secret correctly. Review the instructions given in Lesson 3 in Tutorial: Using the Microsoft Azure Blob
storage service with SQL Server 2016 databases.
Lease blob errors:
Error when trying to start SQL Server after another instance using the same blob files has crashed.
Resolution: During normal operation, SQL Server uses temporary leases to reserve Blobs for storage with a
renewal of each Blob lease every 45 to 60 seconds. If a server crashes and another instance of SQL Server
configured to use the same blobs is started, the new instance will wait up to 60 seconds for the existing
lease on the Blob to expire. If you want to attach the database to another instance and you cannot wait for
the lease to expire within 60 seconds, you can explicitly break the lease on the Blob to avoid any failures in
attach operations.
Database errors
1. Errors when creating a database
Resolution: Review the instructions given in Lesson 4 in Tutorial: Using the Microsoft Azure Blob storage
service with SQL Server 2016 databases.
2. Errors when running the ALTER DATABASE statement
Resolution: Make sure to execute the ALTER DATABASE statement when the database is online. When copying
the data files to Azure Storage, always create a page blob not a block blob. Otherwise, ALTER DATABASE will
fail. Review the instructions given in Lesson 7 in Tutorial: Using the Microsoft Azure Blob storage service
with SQL Server 2016 databases.
3. Error code 5120 Unable to open the physical file "%.*ls". Operating system error %d: "%ls"
Resolution: Currently, this new enhancement does not support more than one SQL Server instance
accessing the same database files in Azure Storage at the same time. If ServerA is online with an active
database file and if ServerB is accidentally started, and it also has a database that points to the same data
file, the second server will fail to start the database with an error code 5120 Unable to open the physical file
"%.\ls". Operating system error %d: "%ls"*.
To resolve this issue, first determine if you need ServerA to access the database file in Azure Storage or not.
If not, simply remove any connection between ServerA and the database files in Azure Storage. To do this,
follow these steps:
a. Set the file path of Server A to a local folder by using the ALTER Database statement.
b. Set the database offline in Server A.
c. Then, copy database files from Azure Storage to the local folder in Server A. This ensures that ServerA
still has a copy of the database locally.
d. Set the database online.
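The following Transact-SQL is a minimal sketch of steps a through d, reusing the testdb file names from the earlier example and an illustrative local folder; the copy of the files from Azure Storage to the local folder is performed outside SQL Server with a storage tool of your choice:
-- a. Point the files at a local folder; the new paths take effect when the database is next started.
ALTER DATABASE testdb MODIFY FILE (NAME = testdb_dat, FILENAME = 'C:\SQLData\TestData.mdf');
ALTER DATABASE testdb MODIFY FILE (NAME = testdb_log, FILENAME = 'C:\SQLData\TestLog.ldf');
GO
-- b. Take the database offline.
ALTER DATABASE testdb SET OFFLINE;
GO
-- c. Copy TestData.mdf and TestLog.ldf from the Azure container to C:\SQLData outside of SQL Server.
-- d. Bring the database back online.
ALTER DATABASE testdb SET ONLINE;
GO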
Database Files and Filegroups
3/24/2017 • 7 min to read • Edit Online
At a minimum, every SQL Server database has two operating system files: a data file and a log file. Data files
contain data and objects such as tables, indexes, stored procedures, and views. Log files contain the information
that is required to recover all transactions in the database. Data files can be grouped together in filegroups for
allocation and administration purposes.
Database Files
SQL Server databases have three types of files, as shown in the following table.
FILE
DESCRIPTION
Primary
The primary data file contains the startup information for the
database and points to the other files in the database. User
data and objects can be stored in this file or in secondary
data files. Every database has one primary data file. The
recommended file name extension for primary data files is
.mdf.
Secondary
Secondary data files are optional, are user-defined, and store
user data. Secondary files can be used to spread data across
multiple disks by putting each file on a different disk drive.
Additionally, if a database exceeds the maximum size for a
single Windows file, you can use secondary data files so the
database can continue to grow.
The recommended file name extension for secondary data
files is .ndf.
Transaction Log
The transaction log files hold the log information that is used
to recover the database. There must be at least one log file
for each database. The recommended file name extension for
transaction logs is .ldf.
For example, a simple database named Sales can be created that includes one primary file that contains all data
and objects and a log file that contains the transaction log information. Alternatively, a more complex database
named Orders can be created that includes one primary file, five secondary files, and four log files. The data and
objects within the database are spread across all six data files, and the four log files contain the transaction log information.
By default, the data and transaction logs are put on the same drive and path. This is done to handle single-disk
systems. However, this may not be optimal for production environments. We recommend that you put data and
log files on separate disks.
Logical and Physical File Names
SQL Server files have two names:
logical_file_name: The logical_file_name is the name used to refer to the physical file in all Transact-SQL
statements. The logical file name must comply with the rules for SQL Server identifiers and must be unique
among logical file names in the database.
os_file_name: The os_file_name is the name of the physical file including the directory path. It must follow the
rules for the operating system file names.
SQL Server data and log files can be put on either FAT or NTFS file systems. We recommend using the NTFS file
system because of the security aspects of NTFS. Read/write data filegroups and log files cannot be placed on an
NTFS compressed file system. Only read-only databases and read-only secondary filegroups can be put on an
NTFS compressed file system.
When multiple instances of SQL Server are run on a single computer, each instance receives a different default
directory to hold the files for the databases created in the instance. For more information, see File Locations for
Default and Named Instances of SQL Server.
Data File Pages
Pages in a SQL Server data file are numbered sequentially, starting with zero (0) for the first page in the file. Each
file in a database has a unique file ID number. To uniquely identify a page in a database, both the file ID and the
page number are required. The following example shows the page numbers in a database that has a 4-MB
primary data file and a 1-MB secondary data file.
The first page in each file is a file header page that contains information about the attributes of the file. Several of
the other pages at the start of the file also contain system information, such as allocation maps. One of the system
pages stored in both the primary data file and the first log file is a database boot page that contains information
about the attributes of the database. For more information about pages and page types, see Understanding Pages
and Extents.
File Size
SQL Server files can grow automatically from their originally specified size. When you define a file, you can
specify a specific growth increment. Every time the file is filled, it increases its size by the growth increment. If
there are multiple files in a filegroup, they will not autogrow until all the files are full. Growth then occurs in a
round-robin fashion.
Each file can also have a maximum size specified. If a maximum size is not specified, the file can continue to grow
until it has used all available space on the disk. This feature is especially useful when SQL Server is used as a
database embedded in an application where the user does not have convenient access to a system administrator.
The user can let the files autogrow as required to reduce the administrative burden of monitoring free space in
the database and manually allocating additional space.
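For example, the growth increment and maximum size of an existing file can be adjusted with ALTER DATABASE; this is a sketch with illustrative database and logical file names:
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Primary, FILEGROWTH = 64MB, MAXSIZE = 10GB);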
Database Snapshot Files
The form of file that is used by a database snapshot to store its copy-on-write data depends on whether the
snapshot is created by a user or used internally:
A database snapshot that is created by a user stores its data in one or more sparse files. Sparse file technology
is a feature of the NTFS file system. At first, a sparse file contains no user data, and disk space for user data has
not been allocated to the sparse file. For general information about the use of sparse files in database
snapshots and how database snapshots grow, see View the Size of the Sparse File of a Database Snapshot.
Database snapshots are used internally by certain DBCC commands. These commands include DBCC
CHECKDB, DBCC CHECKTABLE, DBCC CHECKALLOC, and DBCC CHECKFILEGROUP. An internal database
snapshot uses sparse alternate data streams of the original database files. Like sparse files, alternate data
streams are a feature of the NTFS file system. The use of sparse alternate data streams allows for multiple data
allocations to be associated with a single file or folder without affecting the file size or volume statistics.
Filegroups
Every database has a primary filegroup. This filegroup contains the primary data file and any secondary files that
are not put into other filegroups. User-defined filegroups can be created to group data files together for
administrative, data allocation, and placement purposes.
For example, three files, Data1.ndf, Data2.ndf, and Data3.ndf, can be created on three disk drives, respectively, and
assigned to the filegroup fgroup1. A table can then be created specifically on the filegroup fgroup1. Queries for
data from the table will be spread across the three disks; this will improve performance. The same performance
improvement can be accomplished by using a single file created on a RAID (redundant array of independent
disks) stripe set. However, files and filegroups let you easily add new files to new disks.
All data files are stored in the filegroups listed in the following table.
FILEGROUP
DESCRIPTION
Primary
The filegroup that contains the primary file. All system tables
are allocated to the primary filegroup.
User-defined
Any filegroup that is specifically created by the user when the
user first creates or later modifies the database.
Default Filegroup
When objects are created in the database without specifying which filegroup they belong to, they are assigned to
the default filegroup. At any time, exactly one filegroup is designated as the default filegroup. The files in the
default filegroup must be large enough to hold any new objects not allocated to other filegroups.
The PRIMARY filegroup is the default filegroup unless it is changed by using the ALTER DATABASE statement.
Allocation for the system objects and tables remains within the PRIMARY filegroup, not the new default filegroup.
File and Filegroup Example
The following example creates a database on an instance of SQL Server. The database has a primary data file, a
user-defined filegroup, and a log file. The primary data file is in the primary filegroup and the user-defined
filegroup has two secondary data files. An ALTER DATABASE statement makes the user-defined filegroup the
default. A table is then created specifying the user-defined filegroup. (This example uses a generic path
c:\Program Files\Microsoft SQL Server\MSSQL.1 to avoid specifying a version of SQL Server.)
USE master;
GO
-- Create the database with the default data
-- filegroup and a log file. Specify the
-- growth increment and the max size for the
-- primary data file.
CREATE DATABASE MyDB
ON PRIMARY
( NAME='MyDB_Primary',
FILENAME=
'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\data\MyDB_Prm.mdf',
SIZE=4MB,
MAXSIZE=10MB,
FILEGROWTH=1MB),
FILEGROUP MyDB_FG1
( NAME = 'MyDB_FG1_Dat1',
FILENAME =
'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\data\MyDB_FG1_1.ndf',
SIZE = 1MB,
MAXSIZE=10MB,
FILEGROWTH=1MB),
( NAME = 'MyDB_FG1_Dat2',
FILENAME =
'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\data\MyDB_FG1_2.ndf',
SIZE = 1MB,
MAXSIZE=10MB,
FILEGROWTH=1MB)
LOG ON
( NAME='MyDB_log',
FILENAME =
'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\data\MyDB.ldf',
SIZE=1MB,
MAXSIZE=10MB,
FILEGROWTH=1MB);
GO
ALTER DATABASE MyDB
MODIFY FILEGROUP MyDB_FG1 DEFAULT;
GO
-- Create a table in the user-defined filegroup.
USE MyDB;
CREATE TABLE MyTable
( cola int PRIMARY KEY,
colb char(8) )
ON MyDB_FG1;
GO
The following illustration summarizes the results of the previous example.
Related Content
CREATE DATABASE (SQL Server Transact-SQL)
ALTER DATABASE File and Filegroup Options (Transact-SQL)
Database Detach and Attach (SQL Server)
Database States
3/24/2017 • 1 min to read • Edit Online
A database is always in one specific state. For example, these states include ONLINE, OFFLINE, or SUSPECT. To
verify the current state of a database, select the state_desc column in the sys.databases catalog view or the Status
property in the DATABASEPROPERTYEX function.
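For example, the state can be checked as follows (the database name is illustrative):
-- Current state of every database on the instance.
SELECT name, state_desc FROM sys.databases;
-- Current state of a single database.
SELECT DATABASEPROPERTYEX(N'MyDB', 'Status') AS database_status;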
Database State Definitions
The following table defines the database states.
STATE
DEFINITION
ONLINE
Database is available for access. The primary filegroup is
online, although the undo phase of recovery may not have
been completed.
OFFLINE
Database is unavailable. A database becomes offline by explicit
user action and remains offline until additional user action is
taken. For example, the database may be taken offline in order
to move a file to a new disk. The database is then brought
back online after the move has been completed.
RESTORING
One or more files of the primary filegroup are being restored,
or one or more secondary files are being restored offline. The
database is unavailable.
RECOVERING
Database is being recovered. The recovering process is a
transient state; the database will automatically become online
if the recovery succeeds. If the recovery fails, the database will
become suspect. The database is unavailable.
RECOVERY PENDING
SQL Server has encountered a resource-related error during
recovery. The database is not damaged, but files may be
missing or system resource limitations may be preventing it
from starting. The database is unavailable. Additional action
by the user is required to resolve the error and let the
recovery process be completed.
SUSPECT
At least the primary filegroup is suspect and may be
damaged. The database cannot be recovered during startup
of SQL Server. The database is unavailable. Additional action
by the user is required to resolve the problem.
EMERGENCY
User has changed the database and set the status to
EMERGENCY. The database is in single-user mode and may be
repaired or restored. The database is marked READ_ONLY,
logging is disabled, and access is limited to members of the
sysadmin fixed server role. EMERGENCY is primarily used for
troubleshooting purposes. For example, a database marked as
suspect can be set to the EMERGENCY state. This could
permit the system administrator read-only access to the
database. Only members of the sysadmin fixed server role
can set a database to the EMERGENCY state.
Related Content
ALTER DATABASE (Transact-SQL)
Mirroring States (SQL Server)
File States
File States
3/24/2017 • 2 min to read • Edit Online
In SQL Server, the state of a database file is maintained independently from the state of the database. A file is
always in one specific state, such as ONLINE or OFFLINE. To view the current state of a file, use the sys.master_files
or sys.database_files catalog view. If the database is offline, the state of the files can be viewed from the
sys.master_files catalog view.
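For example, the following sketch returns the state of each file of the current database, and of the files of all databases from the server-wide view:
-- Files of the current database.
SELECT name, physical_name, state_desc
FROM sys.database_files;
-- Files of all databases, including offline databases.
SELECT DB_NAME(database_id) AS database_name, name, state_desc
FROM sys.master_files;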
The state of the files in a filegroup determines the availability of the whole filegroup. For a filegroup to be available,
all files within the filegroup must be online. To view the current state of a filegroup, use the sys.filegroups catalog
view. If a filegroup is offline and you try to access the filegroup by a Transact-SQL statement, it will fail with an
error. When the query optimizer builds query plans for SELECT statements, it avoids nonclustered indexes and
indexed views that reside in offline filegroups, letting these statements succeed. However, if the offline filegroup
contains the heap or clustered index of the target table, the SELECT statements fail. Additionally, any INSERT,
UPDATE, or DELETE statement that modifies a table with any index in an offline filegroup will fail.
File State Definitions
The following table defines the file states.
STATE
DEFINITION
ONLINE
The file is available for all operations. Files in the primary
filegroup are always online if the database itself is online. If a
file in the primary filegroup is not online, the database is not
online and the states of the secondary files are undefined.
OFFLINE
The file is not available for access and may not be present on
the disk. Files become offline by explicit user action and
remain offline until additional user action is taken.
Caution: A file should be set offline only when the file is
corrupted, but it can be restored. A file set to offline can only
be set online by restoring the file from backup. For more
information about restoring a single file, see RESTORE
(Transact-SQL).
RESTORING
The file is being restored. Files enter the restoring state
because of a restore command affecting the whole file, not
just a page restore, and remain in this state until the restore is
completed and the file is recovered.
RECOVERY PENDING
The recovery of the file has been postponed. A file enters this
state automatically because of a piecemeal restore process in
which the file is not restored and recovered. Additional action
by the user is required to resolve the error and allow for the
recovery process to be completed. For more information, see
Piecemeal Restores (SQL Server).
SUSPECT
Recovery of the file failed during an online restore process. If
the file is in the primary filegroup, the database is also marked
as suspect. Otherwise, only the file is suspect and the
database is still online.
The file will remain in the suspect state until it is made
available by one of the following methods:
Restore and recovery
DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS
DEFUNCT
The file was dropped when it was not online. All files in a
filegroup become defunct when an offline filegroup is
removed.
Related Content
ALTER DATABASE (Transact-SQL)
Database States
Mirroring States (SQL Server)
DBCC CHECKDB (Transact-SQL)
Database Files and Filegroups
Database Identifiers
3/24/2017 • 3 min to read • Edit Online
The database object name is referred to as its identifier. Everything in Microsoft SQL Server can have an identifier.
Servers, databases, and database objects, such as tables, views, columns, indexes, triggers, procedures, constraints,
and rules, can have identifiers. Identifiers are required for most objects, but are optional for some objects such as
constraints.
An object identifier is created when the object is defined. The identifier is then used to reference the object. For
example, the following statement creates a table with the identifier TableX, and two columns with the identifiers
KeyCol and Description:
CREATE TABLE TableX
(KeyCol INT PRIMARY KEY, Description nvarchar(80))
This table also has an unnamed constraint. The PRIMARY KEY constraint has no identifier.
The collation of an identifier depends on the level at which it is defined. Identifiers of instance-level objects, such as
logins and database names, are assigned the default collation of the instance. Identifiers of objects in a database,
such as tables, views, and column names, are assigned the default collation of the database. For example, two
tables with names that differ only in case can be created in a database that has case-sensitive collation, but cannot
be created in a database that has case-insensitive collation.
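As a sketch of this behavior, the following statements assume a database created with a case-sensitive collation (the collation and table names are illustrative only); both CREATE TABLE statements succeed there, whereas the second would fail with a name conflict in a case-insensitive database:

CREATE DATABASE Sales_CS COLLATE Latin1_General_CS_AS;
GO
USE Sales_CS;
GO
-- The names differ only in case, so both tables can be created under a case-sensitive collation.
CREATE TABLE Customers (CustomerID int PRIMARY KEY);
CREATE TABLE CUSTOMERS (CustomerID int PRIMARY KEY);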
NOTE
The names of variables and the parameters of functions and stored procedures must comply with the rules for Transact-SQL
identifiers.
Classes of Identifiers
There are two classes of identifiers:
Regular identifiers
Comply with the rules for the format of identifiers. Regular identifiers are not delimited when they are used in
Transact-SQL statements.
SELECT *
FROM TableX
WHERE KeyCol = 124
Delimited identifiers
Are enclosed in double quotation marks (") or brackets ([ ]). Identifiers that comply with the rules for the format of
identifiers might or might not be delimited. For example:
SELECT *
FROM [TableX]          --Delimiter is optional.
WHERE [KeyCol] = 124   --Delimiter is optional.
Identifiers that do not comply with all the rules for identifiers must be delimited in a Transact-SQL statement. For
example:
SELECT *
FROM [My Table]        --Identifier contains a space and uses a reserved keyword.
WHERE [order] = 10     --Identifier is a reserved keyword.
Both regular and delimited identifiers must contain from 1 through 128 characters. For local temporary tables, the
identifier can have a maximum of 116 characters.
Rules for Regular Identifiers
The names of variables, functions, and stored procedures must comply with the following rules for Transact-SQL
identifiers.
1. The first character must be one of the following:
A letter as defined by the Unicode Standard 3.2. The Unicode definition of letters includes Latin
characters from a through z, from A through Z, and also letter characters from other languages.
The underscore (_), at sign (@), or number sign (#).
Certain symbols at the beginning of an identifier have special meaning in SQL Server. A regular
identifier that starts with the at sign always denotes a local variable or parameter and cannot be used
as the name of any other type of object. An identifier that starts with a number sign denotes a
temporary table or procedure. An identifier that starts with double number signs (##) denotes a
global temporary object. Although the number sign or double number sign characters can be used to
begin the names of other types of objects, we do not recommend this practice.
Some Transact-SQL functions have names that start with double at signs (@@). To avoid confusion
with these functions, you should not use names that start with @@.
2. Subsequent characters can include the following:
Letters as defined in the Unicode Standard 3.2.
Decimal numbers from either Basic Latin or other national scripts.
The at sign, dollar sign ($), number sign, or underscore.
3. The identifier must not be a Transact-SQL reserved word. SQL Server reserves both the uppercase and
lowercase versions of reserved words. When identifiers are used in Transact-SQL statements, the identifiers
that do not comply with these rules must be delimited by double quotation marks or brackets. The words
that are reserved depend on the database compatibility level. This level can be set by using the ALTER
DATABASE statement.
4. Embedded spaces or special characters are not allowed.
5. Supplementary characters are not allowed.
When identifiers are used in Transact-SQL statements, the identifiers that do not comply with these rules
must be delimited by double quotation marks or brackets.
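The following batch, included only as an illustration, shows how these prefix and delimiting rules apply in practice:

DECLARE @MyCounter int = 1;                 -- @ prefix: local variable
CREATE TABLE #WorkTable (Col1 int);         -- # prefix: local temporary table
CREATE TABLE ##GlobalWorkTable (Col1 int);  -- ## prefix: global temporary table
-- Identifiers that contain spaces or reserved keywords must be delimited.
CREATE TABLE [Order Details] ([Key] int);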
NOTE
Some rules for the format of regular identifiers depend on the database compatibility level. This level can be set by using
ALTER DATABASE.
See Also
ALTER TABLE (Transact-SQL)
CREATE DATABASE (SQL Server Transact-SQL)
CREATE DEFAULT (Transact-SQL)
CREATE PROCEDURE (Transact-SQL)
CREATE RULE (Transact-SQL)
CREATE TABLE (Transact-SQL)
CREATE TRIGGER (Transact-SQL)
CREATE VIEW (Transact-SQL)
DECLARE @local_variable (Transact-SQL)
DELETE (Transact-SQL)
INSERT (Transact-SQL)
Reserved Keywords (Transact-SQL)
SELECT (Transact-SQL)
UPDATE (Transact-SQL)
Estimate the Size of a Database
3/24/2017 • 1 min to read • Edit Online
When you design a database, you may have to estimate how large the database will be when filled with data.
Estimating the size of the database can help you determine the hardware configuration you will require to do the
following:
Achieve the performance required by your applications.
Guarantee the appropriate physical amount of disk space required to store the data and indexes.
Estimating the size of a database can also help you determine whether the database design needs refining.
For example, you may determine that the estimated size of the database is too large to implement in your
organization and that more normalization is required. Conversely, the estimated size may be smaller than
expected. This would allow you to denormalize the database to improve query performance.
To estimate the size of a database, estimate the size of each table individually and then add the values
obtained. The size of a table depends on whether the table has indexes and, if so, what type of indexes it has.
In This Section
Estimate the Size of a Table
Defines the steps and calculations needed to estimate the amount of space required to store the data in a table and associated indexes.

Estimate the Size of a Heap
Defines the steps and calculations needed to estimate the amount of space required to store the data in a heap. A heap is a table that does not have a clustered index.

Estimate the Size of a Clustered Index
Defines the steps and calculations needed to estimate the amount of space required to store the data in a clustered index.

Estimate the Size of a Nonclustered Index
Defines the steps and calculations needed to estimate the amount of space required to store the data in a nonclustered index.
Estimate the Size of a Table
3/24/2017 • 1 min to read • Edit Online
You can use the following steps to estimate the amount of space required to store data in a table:
1. Calculate the space required for the heap or clustered index following the instructions in Estimate the Size
of a Heap or Estimate the Size of a Clustered Index.
2. For each nonclustered index, calculate the space required for it by following the instructions in Estimate the
Size of a Nonclustered Index.
3. Add the values calculated in steps 1 and 2.
See Also
Estimate the Size of a Database
Estimate the Size of a Heap
Estimate the Size of a Clustered Index
Estimate the Size of a Nonclustered Index
Estimate the Size of a Clustered Index
3/24/2017 • 8 min to read • Edit Online
You can use the following steps to estimate the amount of space that is required to store data in a clustered index:
1. Calculate the space used to store data in the leaf level of the clustered index.
2. Calculate the space used to store index information for the clustered index.
3. Total the calculated values.
Step 1. Calculate the Space Used to Store Data in the Leaf Level
1. Specify the number of rows that will be present in the table:
Num_Rows = number of rows in the table
2. Specify the number of fixed-length and variable-length columns and calculate the space that is required for
their storage:
Calculate the space that each of these groups of columns occupies within the data row. The size of a column
depends on the data type and length specification.
Num_Cols = total number of columns (fixed-length and variable-length)
Fixed_Data_Size = total byte size of all fixed-length columns
Num_Variable_Cols = number of variable-length columns
Max_Var_Size = maximum byte size of all variable-length columns
3. If the clustered index is nonunique, account for the uniqueifier column:
The uniqueifier is a nullable, variable-length column. It will be nonnull and 4 bytes in size in rows that have
nonunique key values. This value is part of the index key and is required to make sure that every row has a
unique key value.
Num_Cols = Num_Cols + 1
Num_Variable_Cols = Num_Variable_Cols + 1
Max_Var_Size = Max_Var_Size + 4
These modifications assume that all values will be nonunique.
4. Part of the row, known as the null bitmap, is reserved to manage column nullability. Calculate its size:
Null_Bitmap = 2 + ((Num_Cols + 7) / 8)
Only the integer part of the previous expression should be used; discard any remainder.
5. Calculate the variable-length data size:
If there are variable-length columns in the table, determine how much space is used to store the columns
within the row:
Variable_Data_Size = 2 + (Num_Variable_Cols x 2) + Max_Var_Size
The bytes added to Max_Var_Size are for tracking each variable column. This formula assumes that all
variable-length columns are 100 percent full. If you anticipate that a smaller percentage of the variable-length column storage space will be used, you can adjust the Max_Var_Size value by that percentage to
yield a more accurate estimate of the overall table size.
NOTE
You can combine varchar, nvarchar, varbinary, or sql_variant columns that cause the total defined table width to
exceed 8,060 bytes. The length of each one of these columns must still fall within the limit of 8,000 bytes for a
varchar, varbinary, or sql_variant column, and 4,000 bytes for nvarchar columns. However, their combined
widths may exceed the 8,060 byte limit in a table.
If there are no variable-length columns, set Variable_Data_Size to 0.
6. Calculate the total row size:
Row_Size = Fixed_Data_Size + Variable_Data_Size + Null_Bitmap + 4
The value 4 is the row header overhead of a data row.
7. Calculate the number of rows per page (8096 free bytes per page):
Rows_Per_Page = 8096 / (Row_Size + 2)
Because rows do not span pages, the number of rows per page should be rounded down to the nearest
whole row. The value 2 in the formula is for the row's entry in the slot array of the page.
8. Calculate the number of reserved free rows per page, based on the fill factor specified:
Free_Rows_Per_Page = 8096 x ((100 - Fill_Factor) / 100) / (Row_Size + 2)
The fill factor used in the calculation is an integer value instead of a percentage. Because rows do not span
pages, the number of rows per page should be rounded down to the nearest whole row. As the fill factor
grows, more data will be stored on each page and there will be fewer pages. The value 2 in the formula is
for the row's entry in the slot array of the page.
9. Calculate the number of pages required to store all the rows:
Num_Leaf_Pages = Num_Rows / (Rows_Per_Page - Free_Rows_Per_Page)
The number of pages estimated should be rounded up to the nearest whole page.
10. Calculate the amount of space that is required to store the data in the leaf level (8192 total bytes per page):
Leaf_space_used = 8192 x Num_Leaf_Pages
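The following Transact-SQL sketch strings the Step 1 formulas together for a hypothetical table; the row count, column sizes, and fill factor are assumptions chosen only to illustrate the arithmetic.

-- Hypothetical inputs for Step 1.
DECLARE @Num_Rows          bigint = 1000000;
DECLARE @Num_Cols          int    = 6;
DECLARE @Fixed_Data_Size   int    = 54;   -- total bytes of the fixed-length columns
DECLARE @Num_Variable_Cols int    = 2;
DECLARE @Max_Var_Size      int    = 100;
DECLARE @Fill_Factor       int    = 90;

DECLARE @Null_Bitmap        int = 2 + ((@Num_Cols + 7) / 8);                    -- integer division discards the remainder
DECLARE @Variable_Data_Size int = 2 + (@Num_Variable_Cols * 2) + @Max_Var_Size;
DECLARE @Row_Size           int = @Fixed_Data_Size + @Variable_Data_Size + @Null_Bitmap + 4;
DECLARE @Rows_Per_Page      int = 8096 / (@Row_Size + 2);                       -- rounded down
DECLARE @Free_Rows_Per_Page int = 8096 * (100 - @Fill_Factor) / 100 / (@Row_Size + 2);
DECLARE @Num_Leaf_Pages  bigint = CEILING(@Num_Rows / (1.0 * (@Rows_Per_Page - @Free_Rows_Per_Page)));

SELECT Leaf_Space_Used_Bytes = 8192 * @Num_Leaf_Pages;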
Step 2. Calculate the Space Used to Store Index Information
You can use the following steps to estimate the amount of space that is required to store the upper levels of the
index:
1. Specify the number of fixed-length and variable-length columns in the index key and calculate the space
that is required for their storage:
The key columns of an index can include fixed-length and variable-length columns. To estimate the interior
level index row size, calculate the space that each of these groups of columns occupies within the index row.
The size of a column depends on the data type and length specification.
Num_Key_Cols = total number of key columns (fixed-length and variable-length)
Fixed_Key_Size = total byte size of all fixed-length key columns
Num_Variable_Key_Cols = number of variable-length key columns
Max_Var_Key_Size = maximum byte size of all variable-length key columns
2. Account for any uniqueifier needed if the index is nonunique:
The uniqueifier is a nullable, variable-length column. It will be nonnull and 4 bytes in size in rows that have
nonunique index key values. This value is part of the index key and is required to make sure that every row
has a unique key value.
Num_Key_Cols = Num_Key_Cols + 1
Num_Variable_Key_Cols = Num_Variable_Key_Cols + 1
Max_Var_Key_Size = Max_Var_Key_Size + 4
These modifications assume that all values will be nonunique.
3. Calculate the null bitmap size:
If there are nullable columns in the index key, part of the index row is reserved for the null bitmap. Calculate
its size:
Index_Null_Bitmap = 2 + ((number of columns in the index row + 7) / 8)
Only the integer part of the previous expression should be used. Discard any remainder.
If there are no nullable key columns, set Index_Null_Bitmap to 0.
4. Calculate the variable-length data size:
If there are variable-length columns in the index, determine how much space is used to store the columns
within the index row:
Variable_Key_Size = 2 + (Num_Variable_Key_Cols x 2) + Max_Var_Key_Size
The bytes added to Max_Var_Key_Size are for tracking each variable-length column. This formula assumes
that all variable-length columns are 100 percent full. If you anticipate that a smaller percentage of the
variable-length column storage space will be used, you can adjust the Max_Var_Key_Size value by that
percentage to yield a more accurate estimate of the overall table size.
If there are no variable-length columns, set Variable_Key_Size to 0.
5. Calculate the index row size:
Index_Row_Size = Fixed_Key_Size + Variable_Key_Size + Index_Null_Bitmap + 1 (for row header
overhead of an index row) + 6 (for the child page ID pointer)
6. Calculate the number of index rows per page (8096 free bytes per page):
Index_Rows_Per_Page = 8096 / (Index_Row_Size + 2)
Because index rows do not span pages, the number of index rows per page should be rounded down to the
nearest whole row. The 2 in the formula is for the row's entry in the page's slot array.
7. Calculate the number of levels in the index:
Non-leaf_Levels = 1 + log (Index_Rows_Per_Page) (Num_Leaf_Pages / Index_Rows_Per_Page)
Round this value up to the nearest whole number. This value does not include the leaf level of the clustered
index.
8. Calculate the number of non-leaf pages in the index:
Num_Index_Pages = ∑Level (Num_Leaf_Pages / (Index_Rows_Per_Page^Level))
where 1 <= Level <= Non-leaf_Levels
Round each summand up to the nearest whole number. As a simple example, consider an index where
Num_Leaf_Pages = 1000 and Index_Rows_Per_Page = 25. The first index level above the leaf level stores
1000 index rows, which is one index row per leaf page, and 25 index rows can fit per page. This means that
40 pages are required to store those 1000 index rows. The next level of the index has to store 40 rows. This
means it requires 2 pages. The final level of the index has to store 2 rows. This means it requires 1 page.
This gives 43 non-leaf index pages. When these numbers are used in the previous formulas, the outcome is
as follows:
Non-leaf_Levels = 1 + log(25) (1000 / 25) = 3
Num_Index_Pages = 1000/(25^3)+ 1000/(25^2) + 1000/(25^1) = 1 + 2 + 40 = 43, which is the number
of pages described in the example.
9. Calculate the size of the index (8192 total bytes per page):
Index_Space_Used = 8192 x Num_Index_Pages
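Continuing the sketch with assumed values, Step 2 can be computed the same way; the WHILE loop simply evaluates the summation given above.

-- Hypothetical inputs for Step 2 (the leaf page count is carried over from the Step 1 sketch).
DECLARE @Num_Leaf_Pages      bigint = 23256;
DECLARE @Index_Rows_Per_Page int    = 200;

DECLARE @NonLeaf_Levels int =
    CEILING(1 + LOG(@Num_Leaf_Pages / (1.0 * @Index_Rows_Per_Page)) / LOG(@Index_Rows_Per_Page));

DECLARE @Num_Index_Pages bigint = 0, @Level int = 1;
WHILE @Level <= @NonLeaf_Levels
BEGIN
    -- Each summand is rounded up to the nearest whole page.
    SET @Num_Index_Pages = @Num_Index_Pages
        + CEILING(@Num_Leaf_Pages / POWER(1.0 * @Index_Rows_Per_Page, @Level));
    SET @Level = @Level + 1;
END;

SELECT Index_Space_Used_Bytes = 8192 * @Num_Index_Pages;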
Step 3. Total the Calculated Values
Total the values obtained from the previous two steps:
Clustered index size (bytes) = Leaf_Space_Used + Index_Space_used
This calculation does not consider the following:
Partitioning
The space overhead from partitioning is minimal, but complex to calculate. It is not important to include.
Allocation pages
There is at least one IAM page used to track the pages allocated to a heap, but the space overhead is
minimal and there is no algorithm to deterministically calculate exactly how many IAM pages will be used.
Large object (LOB) values
The algorithm to determine exactly how much space will be used to store the LOB data types
varchar(max), varbinary(max), nvarchar(max), text, ntext, xml, and image values is complex. It is
sufficient to just add the average size of the LOB values that are expected, multiply by Num_Rows, and add
that to the total clustered index size.
Compression
You cannot pre-calculate the size of a compressed index.
Sparse columns
For information about the space requirements of sparse columns, see Use Sparse Columns.
See Also
Clustered and Nonclustered Indexes Described
Estimate the Size of a Table
Create Clustered Indexes
Create Nonclustered Indexes
Estimate the Size of a Nonclustered Index
Estimate the Size of a Heap
Estimate the Size of a Database
Estimate the Size of a Nonclustered Index
3/24/2017 • 10 min to read • Edit Online
Follow these steps to estimate the amount of space that is required to store a nonclustered index:
1. Calculate variables for use in steps 2 and 3.
2. Calculate the space used to store index information in the leaf level of the nonclustered index.
3. Calculate the space used to store index information in the non-leaf levels of the nonclustered index.
4. Total the calculated values.
Step 1. Calculate Variables for Use in Steps 2 and 3
You can use the following steps to calculate variables that are used to estimate the amount of space that is
required to store the upper levels of the index.
1. Specify the number of rows that will be present in the table:
Num_Rows = number of rows in the table
2. Specify the number of fixed-length and variable-length columns in the index key and calculate the space
that is required for their storage:
The key columns of an index can include fixed-length and variable-length columns. To estimate the interior
level index row size, calculate the space that each of these groups of columns occupies within the index row.
The size of a column depends on the data type and length specification.
Num_Key_Cols = total number of key columns (fixed-length and variable-length)
Fixed_Key_Size = total byte size of all fixed-length key columns
Num_Variable_Key_Cols = number of variable-length key columns
Max_Var_Key_Size = maximum byte size of all variable-length key columns
3. Account for the data row locator that is required if the index is nonunique:
If the nonclustered index is nonunique, the data row locator is combined with the nonclustered index key to
produce a unique key value for every row.
If the nonclustered index is over a heap, the data row locator is the heap RID, which is 8 bytes in size.
Num_Key_Cols = Num_Key_Cols + 1
Num_Variable_Key_Cols = Num_Variable_Key_Cols + 1
Max_Var_Key_Size = Max_Var_Key_Size + 8
If the nonclustered index is over a clustered index, the data row locator is the clustering key. The columns
that must be combined with the nonclustered index key are those columns in the clustering key that are not
already present in the set of nonclustered index key columns.
Num_Key_Cols = Num_Key_Cols + number of clustering key columns not in the set of nonclustered index
key columns (+ 1 if the clustered index is nonunique)
Fixed_Key_Size = Fixed_Key_Size + total byte size of fixed-length clustering key columns not in the set of
nonclustered index key columns
Num_Variable_Key_Cols = Num_Variable_Key_Cols + number of variable-length clustering key columns
not in the set of nonclustered index key columns (+ 1 if the clustered index is nonunique)
Max_Var_Key_Size = Max_Var_Key_Size + maximum byte size of variable-length clustering key columns
not in the set of nonclustered index key columns (+ 4 if the clustered index is nonunique)
4. Part of the row, known as the null bitmap, may be reserved to manage column nullability. Calculate its size:
If there are nullable columns in the index key, including any necessary clustering key columns as described
in Step 1.3, part of the index row is reserved for the null bitmap.
Index_Null_Bitmap = 2 + ((number of columns in the index row + 7) / 8)
Only the integer part of the previous expression should be used. Discard any remainder.
If there are no nullable key columns, set Index_Null_Bitmap to 0.
5. Calculate the variable length data size:
If there are variable-length columns in the index key, including any necessary clustered index key columns,
determine how much space is used to store the columns within the index row:
Variable_Key_Size = 2 + (Num_Variable_Key_Cols x 2) + Max_Var_Key_Size
The bytes added to Max_Var_Key_Size are for tracking each variable-length column. This formula assumes that all variable-length columns are 100 percent full. If you anticipate that a smaller percentage of the variable-length column storage space will be used, you can adjust the Max_Var_Key_Size value by that percentage
to yield a more accurate estimate of the overall table size.
If there are no variable-length columns, set Variable_Key_Size to 0.
6. Calculate the index row size:
Index_Row_Size = Fixed_Key_Size + Variable_Key_Size + Index_Null_Bitmap + 1 (for row header
overhead of an index row) + 6 (for the child page ID pointer)
7. Calculate the number of index rows per page (8096 free bytes per page):
Index_Rows_Per_Page = 8096 / (Index_Row_Size + 2)
Because index rows do not span pages, the number of index rows per page should be rounded down to the
nearest whole row. The 2 in the formula is for the row's entry in the page's slot array.
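A compact sketch of Step 1 for a hypothetical nonunique nonclustered index over a heap (all input values are assumptions) might look like the following.

-- Hypothetical key columns: one 4-byte fixed-length column plus one variable-length column of up to 30 bytes.
DECLARE @Num_Key_Cols          int = 2;
DECLARE @Fixed_Key_Size        int = 4;
DECLARE @Num_Variable_Key_Cols int = 1;
DECLARE @Max_Var_Key_Size      int = 30;

-- Step 1.3: nonunique index over a heap, so add the 8-byte RID as a variable-length column.
SET @Num_Key_Cols          = @Num_Key_Cols + 1;
SET @Num_Variable_Key_Cols = @Num_Variable_Key_Cols + 1;
SET @Max_Var_Key_Size      = @Max_Var_Key_Size + 8;

-- Assumes at least one key column is nullable, so the null bitmap is included.
DECLARE @Index_Null_Bitmap   int = 2 + ((@Num_Key_Cols + 7) / 8);
DECLARE @Variable_Key_Size   int = 2 + (@Num_Variable_Key_Cols * 2) + @Max_Var_Key_Size;
DECLARE @Index_Row_Size      int = @Fixed_Key_Size + @Variable_Key_Size + @Index_Null_Bitmap + 1 + 6;
DECLARE @Index_Rows_Per_Page int = 8096 / (@Index_Row_Size + 2);   -- rounded down

SELECT Index_Row_Size = @Index_Row_Size, Index_Rows_Per_Page = @Index_Rows_Per_Page;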
Step 2. Calculate the Space Used to Store Index Information in the
Leaf Level
You can use the following steps to estimate the amount of space that is required to store the leaf level of the index.
You will need the values preserved from Step 1 to complete this step.
1. Specify the number of fixed-length and variable-length columns at the leaf level and calculate the space
that is required for their storage:
NOTE
You can extend a nonclustered index by including nonkey columns in addition to the index key columns. These
additional columns are only stored at the leaf level of the nonclustered index. For more information, see Create
Indexes with Included Columns.
NOTE
You can combine varchar, nvarchar, varbinary, or sql_variant columns that cause the total defined table width to
exceed 8,060 bytes. The length of each one of these columns must still fall within the limit of 8,000 bytes for a
varchar, varbinary, or sql_variant column, and 4,000 bytes for nvarchar columns. However, their combined
widths may exceed the 8,060 byte limit in a table. This also applies to nonclustered index leaf rows that have
included columns.
If the nonclustered index does not have any included columns, use the values from Step 1, including any
modifications determined in Step 1.3:
Num_Leaf_Cols = Num_Key_Cols
Fixed_Leaf_Size = Fixed_Key_Size
Num_Variable_Leaf_Cols = Num_Variable_Key_Cols
Max_Var_Leaf_Size = Max_Var_Key_Size
If the nonclustered index does have included columns, add the appropriate values to the values from Step
1, including any modifications in Step 1.3. The size of a column depends on the data type and length
specification. For more information, see Data Types (Transact-SQL).
Num_Leaf_Cols = Num_Key_Cols + number of included columns
Fixed_Leaf_Size = Fixed_Key_Size + total byte size of fixed-length included columns
Num_Variable_Leaf_Cols = Num_Variable_Key_Cols + number of variable-length included columns
Max_Var_Leaf_Size = Max_Var_Key_Size + maximum byte size of variable-length included columns
2. Account for the data row locator:
If the nonclustered index is nonunique, the overhead for the data row locator has already been considered
in Step 1.3 and no additional modifications are required. Go to the next step.
If the nonclustered index is unique, the data row locator must be accounted for in all rows at the leaf level.
If the nonclustered index is over a heap, the data row locator is the heap RID (size 8 bytes).
Num_Leaf_Cols = Num_Leaf_Cols + 1
Num_Variable_Leaf_Cols = Num_Variable_Leaf_Cols + 1
Max_Var_Leaf_Size = Max_Var_Leaf_Size + 8
If the nonclustered index is over a clustered index, the data row locator is the clustering key. The columns
that must be combined with the nonclustered index key are those columns in the clustering key that are not
already present in the set of nonclustered index key columns.
Num_Leaf_Cols = Num_Leaf_Cols + number of clustering key columns not in the set of nonclustered
index key columns (+ 1 if the clustered index is nonunique)
Fixed_Leaf_Size = Fixed_Leaf_Size + total byte size of fixed-length clustering key columns not in the set of
nonclustered index key columns
Num_Variable_Leaf_Cols = Num_Variable_Leaf_Cols + number of variable-length clustering key
columns not in the set of nonclustered index key columns (+ 1 if the clustered index is nonunique)
Max_Var_Leaf_Size = Max_Var_Leaf_Size + size in bytes of the variable-length clustering key columns
not in the set of nonclustered index key columns (+ 4 if the clustered index is nonunique)
3. Calculate the null bitmap size:
Leaf_Null_Bitmap = 2 + ((Num_Leaf_Cols + 7) / 8)
Only the integer part of the previous expression should be used. Discard any remainder.
4. Calculate the variable length data size:
If there are variable-length columns in the index key, including any necessary clustering key columns as
described previously in Step 2.2, determine how much space is used to store the columns within the index
row:
Variable_Leaf_Size = 2 + (Num_Variable_Leaf_Cols x 2) + Max_Var_Leaf_Size
The bytes added to Max_Var_Leaf_Size are for tracking each variable-length column. This formula assumes that all variable-length columns are 100 percent full. If you anticipate that a smaller percentage of the variable-length column storage space will be used, you can adjust the Max_Var_Leaf_Size value by that percentage
to yield a more accurate estimate of the overall table size.
If there are no variable-length columns, set Variable_Leaf_Size to 0.
5. Calculate the index row size:
Leaf_Row_Size = Fixed_Leaf_Size + Variable_Leaf_Size + Leaf_Null_Bitmap + 1 (for row header
overhead of an index row) + 6 (for the child page ID pointer)
6. Calculate the number of index rows per page (8096 free bytes per page):
Leaf_Rows_Per_Page = 8096 / (Leaf_Row_Size + 2)
Because index rows do not span pages, the number of index rows per page should be rounded down to the
nearest whole row. The 2 in the formula is for the row's entry in the page's slot array.
7. Calculate the number of reserved free rows per page, based on the fill factor specified:
Free_Rows_Per_Page = 8096 x ((100 - Fill_Factor) / 100) / (Leaf_Row_Size + 2)
The fill factor used in the calculation is an integer value instead of a percentage. Because rows do not span
pages, the number of rows per page should be rounded down to the nearest whole row. As the fill factor
grows, more data will be stored on each page and there will be fewer pages. The 2 in the formula is for the
row's entry in the page's slot array.
8. Calculate the number of pages required to store all the rows:
Num_Leaf_Pages = Num_Rows / (Leaf_Rows_Per_Page - Free_Rows_Per_Page)
The number of pages estimated should be rounded up to the nearest whole page.
9. Calculate the size of the index (8192 total bytes per page):
Leaf_Space_Used = 8192 x Num_Leaf_Pages
Step 3. Calculate the Space Used to Store Index Information in the
Non-leaf Levels
Follow these steps to estimate the amount of space that is required to store the intermediate and root levels of the
index. You will need the values preserved from Steps 1 and 2 to complete this step.
1. Calculate the number of non-leaf levels in the index:
Non-leaf_Levels = 1 + log (Index_Rows_Per_Page) (Num_Leaf_Pages / Index_Rows_Per_Page)
Round this value up to the nearest whole number. This value does not include the leaf level of the
nonclustered index.
2. Calculate the number of non-leaf pages in the index:
Num_Index_Pages = ∑Level (Num_Leaf_Pages / (Index_Rows_Per_Page^Level)), where 1 <= Level <= Non-leaf_Levels
Round each summand up to the nearest whole number. As a simple example, consider an index where
Num_Leaf_Pages = 1000 and Index_Rows_Per_Page = 25. The first index level above the leaf level stores
1000 index rows, which is one index row per leaf page, and 25 index rows can fit per page. This means that
40 pages are required to store those 1000 index rows. The next level of the index has to store 40 rows. This
means that it requires 2 pages. The final level of the index has to store 2 rows. This means that it requires 1
page. This yields 43 non-leaf index pages. When these numbers are used in the previous formulas, the
result is as follows:
Non-leaf_Levels = 1 + log(25) (1000 / 25) = 3
Num_Index_Pages = 1000/(25^3)+ 1000/(25^2) + 1000/(25^1) = 1 + 2 + 40 = 43, which is the number
of pages described in the example.
3. Calculate the size of the index (8192 total bytes per page):
Index_Space_Used = 8192 x Num_Index_Pages
Step 4. Total the Calculated Values
Total the values obtained from the previous two steps:
Nonclustered index size (bytes) = Leaf_Space_Used + Index_Space_used
This calculation does not consider the following:
Partitioning
The space overhead from partitioning is minimal, but complex to calculate. It is not important to include.
Allocation pages
There is at least one IAM page used to track the pages allocated to a heap, but the space overhead is
minimal and there is no algorithm to deterministically calculate exactly how many IAM pages will be used.
Large object (LOB) values
The algorithm to determine exactly how much space will be used to store the LOB data types
varchar(max), varbinary(max), nvarchar(max), text, ntext, xml, and image values is complex. It is
sufficient to just add the average size of the LOB values expected, multiply by Num_Rows, and add that to
the total nonclustered index size.
Compression
You cannot pre-calculate the size of a compressed index.
Sparse columns
For information about the space requirements of sparse columns, see Use Sparse Columns.
See Also
Clustered and Nonclustered Indexes Described
Create Nonclustered Indexes
Create Clustered Indexes
Estimate the Size of a Table
Estimate the Size of a Clustered Index
Estimate the Size of a Heap
Estimate the Size of a Database
Estimate the Size of a Heap
3/24/2017 • 3 min to read • Edit Online
You can use the following steps to estimate the amount of space that is required to store data in a heap:
1. Specify the number of rows that will be present in the table:
Num_Rows = number of rows in the table
2. Specify the number of fixed-length and variable-length columns and calculate the space that is required for
their storage:
Calculate the space that each of these groups of columns occupies within the data row. The size of a
column depends on the data type and length specification.
Num_Cols = total number of columns (fixed-length and variable-length)
Fixed_Data_Size = total byte size of all fixed-length columns
Num_Variable_Cols = number of variable-length columns
Max_Var_Size = maximum total byte size of all variable-length columns
3. Part of the row, known as the null bitmap, is reserved to manage column nullability. Calculate its size:
Null_Bitmap = 2 + ((Num_Cols + 7) / 8)
Only the integer part of this expression should be used. Discard any remainder.
4. Calculate the variable-length data size:
If there are variable-length columns in the table, determine how much space is used to store the columns
within the row:
Variable_Data_Size = 2 + (Num_Variable_Cols x 2) + Max_Var_Size
The bytes added to Max_Var_Size are for tracking each variable-length column. This formula assumes that
all variable-length columns are 100 percent full. If you anticipate that a smaller percentage of the variable-length column storage space will be used, you can adjust the Max_Var_Size value by that percentage to
yield a more accurate estimate of the overall table size.
NOTE
You can combine varchar, nvarchar, varbinary, or sql_variant columns that cause the total defined table width to
exceed 8,060 bytes. The length of each one of these columns must still fall within the limit of 8,000 bytes for a
varchar, nvarchar, varbinary, or sql_variant column. However, their combined widths may exceed the 8,060 byte
limit in a table.
If there are no variable-length columns, set Variable_Data_Size to 0.
5. Calculate the total row size:
Row_Size = Fixed_Data_Size + Variable_Data_Size + Null_Bitmap + 4
The value 4 in the formula is the row header overhead of the data row.
6. Calculate the number of rows per page (8096 free bytes per page):
Rows_Per_Page = 8096 / (Row_Size + 2)
Because rows do not span pages, the number of rows per page should be rounded down to the nearest
whole row. The value 2 in the formula is for the row's entry in the slot array of the page.
7. Calculate the number of pages required to store all the rows:
Num_Pages = Num_Rows / Rows_Per_Page
The number of pages estimated should be rounded up to the nearest whole page.
8. Calculate the amount of space that is required to store the data in the heap (8192 total bytes per page):
Heap size (bytes) = 8192 x Num_Pages
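A minimal sketch of these heap calculations, using assumed values, follows.

-- Hypothetical inputs.
DECLARE @Num_Rows          bigint = 500000;
DECLARE @Num_Cols          int    = 5;
DECLARE @Fixed_Data_Size   int    = 30;
DECLARE @Num_Variable_Cols int    = 2;
DECLARE @Max_Var_Size      int    = 60;

DECLARE @Null_Bitmap        int = 2 + ((@Num_Cols + 7) / 8);                      -- integer division discards the remainder
DECLARE @Variable_Data_Size int = 2 + (@Num_Variable_Cols * 2) + @Max_Var_Size;
DECLARE @Row_Size           int = @Fixed_Data_Size + @Variable_Data_Size + @Null_Bitmap + 4;
DECLARE @Rows_Per_Page      int = 8096 / (@Row_Size + 2);                         -- rounded down
DECLARE @Num_Pages       bigint = CEILING(@Num_Rows / (1.0 * @Rows_Per_Page));    -- rounded up

SELECT Heap_Size_Bytes = 8192 * @Num_Pages;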
This calculation does not consider the following:
Partitioning
The space overhead from partitioning is minimal, but complex to calculate. It is not important to include.
Allocation pages
There is at least one IAM page used to track the pages allocated to a heap, but the space overhead is
minimal and there is no algorithm to deterministically calculate exactly how many IAM pages will be used.
Large object (LOB) values
The algorithm to determine exactly how much space will be used to store the LOB data types
varchar(max), varbinary(max), nvarchar(max), text, ntext, xml, and image values is complex. It is
sufficient to just add the average size of the LOB values that are expected, multiply by Num_Rows, and add that to the total heap
size.
Compression
You cannot pre-calculate the size of a compressed heap.
Sparse columns
For information about the space requirements of sparse columns, see Use Sparse Columns.
See Also
Heaps (Tables without Clustered Indexes)
Clustered and Nonclustered Indexes Described
Create Clustered Indexes
Create Nonclustered Indexes
Estimate the Size of a Table
Estimate the Size of a Clustered Index
Estimate the Size of a Nonclustered Index
Estimate the Size of a Database
Copy Databases to Other Servers
3/24/2017 • 1 min to read • Edit Online
It is sometimes useful to copy a database from one computer to another, whether for testing, checking consistency,
developing software, running reports, creating a mirror database, or making the database available to
remote-branch operations.
There are several ways to copy a database:
Using the Copy Database Wizard
You can use the Copy Database Wizard to copy or move databases between servers or to upgrade a SQL
Server database to a later version. For more information, see Use the Copy Database Wizard.
Restoring a database backup
To copy an entire database, you can use the BACKUP and RESTORE Transact-SQL statements. Typically,
restoring a full backup of a database is used to copy the database from one computer to another for a
variety of reasons. For information on using backup and restore to copy a database, see Copy Databases
with Backup and Restore.
NOTE
To set up a mirror database for database mirroring, you must restore the database onto the mirror server by using
RESTORE DATABASE WITH NORECOVERY. For more information, see Prepare a Mirror Database for Mirroring (SQL
Server).
Using the Generate Scripts Wizard to publish databases
You can use the Generate Scripts Wizard to transfer a database from a local computer to a Web hosting
provider. For more information, see Generate and Publish Scripts Wizard.
Use the Copy Database Wizard
3/24/2017 • 17 min to read • Edit Online
The Copy Database Wizard moves or copies databases and certain server objects easily from one instance of SQL
Server to another instance, with no server downtime. By using this wizard, you can do the following:
Pick a source and destination server.
Select database(s) to move or copy.
Specify the file location for the database(s).
Copy logins to the destination server.
Copy additional supporting objects, jobs, user-defined stored procedures, and error messages.
Schedule when to move or copy the database(s).
Limitations and restrictions
The Copy Database Wizard is not available in the Express edition.
The Copy Database Wizard cannot be used to copy or move databases that:
Are system databases.
Are marked for replication.
Are marked Inaccessible, Loading, Offline, Recovering, Suspect, or in Emergency Mode.
Have data or log files stored in Microsoft Azure storage.
A database cannot be moved or copied to an earlier version of SQL Server.
If you select the Move option, the wizard deletes the source database automatically after moving the
database. The Copy Database Wizard does not delete a source database if you select the Copy option. In
addition, selected server objects are copied rather than moved to the destination; the database is the only
object that is actually moved.
If you use the SQL Server Management Object method to move the full-text catalog, you must repopulate
the index after the move.
The detach and attach method detaches the database, moves or copies the database .mdf, .ndf, .ldf files
and reattaches the database in the new location. For the detach and attach method, to avoid data loss or
inconsistency, active sessions cannot be attached to the database being moved or copied. For the SQL
Server Management Object method, active sessions are allowed because the database is never taken offline.
Transferring SQL Server Agent jobs which reference databases that do not already exist on the destination
server will cause the entire operation to fail. The Wizard attempts to create a SQL Server Agent job prior to
creating the database. As a workaround:
1. Create a shell database on the destination server with the same name as the database to be copied or
moved. See Create a Database.
2. From the Configure Destination Database page select Drop any database on the destination
server with the same name, then continue with the database transfer, overwriting existing
database files.
IMPORTANT!! The detach and attach method will cause the source and destination database ownership to
become set to the login executing the Copy Database Wizard. See ALTER AUTHORIZATION (Transact-SQL) to
change the ownership of a database.
Prerequisites
Ensure that SQL Server Agent is started on the destination server.
Ensure the data and log file directories on the source server can be reached from the destination server.
Under the detach and attach method, a SQL Server Agent Proxy for the SSIS subsystem must exist on the
destination server with a credential that can access the file system of both the source and destination
servers. For more information on proxies, see Create a SQL Server Agent Proxy.
IMPORTANT!! Under the detach and attach method, the copy or move process will fail if an Integration
Services Proxy account is not used. Under certain situations the source database will not become re-attached to
the source server and all NTFS security permissions will be stripped from the data and log files. If this happens,
navigate to your files, re-apply the relevant permissions, and then re-attach the database to your instance of
SQL Server.
Recommendations
To ensure optimal performance of an upgraded database, run sp_updatestats (Transact-SQL) (update
statistics) against the upgraded database.
When you move or copy a database to another server instance, to provide a consistent experience to users
and applications, you might have to re-create some or all of the metadata for the database, such as logins
and jobs, on the other server instance. For more information, see Manage Metadata When Making a
Database Available on Another Server Instance (SQL Server).
Permissions
You must be a member of the sysadmin fixed server role on both the source and destination servers.
The Copy Database wizard pages
Launch the Copy Database Wizard in SQL Server Management Studio from Object Explorer and expand
Databases. Then right-click a database, point to Tasks, and then click Copy Database. If the Welcome to the
Copy Database Wizard splash page appears, click Next.
Select a source server
Used to specify the server with the database to move or copy, and to enter login information. After you select the
authentication method and enter login information, click Next to establish the connection to the source server. This
connection remains open throughout the session.
Source server
Used to identify the name of the server on which the database(s) you want to move or copy is located.
Manually enter, or click the ellipsis to navigate to the desired server. The server must be at least SQL Server
2005.
Use Windows Authentication
Allows a user to connect through a Microsoft Windows user account.
Use SQL Server Authentication
Allows a user to connect by providing a SQL Server Authentication user name and password.
User name
Used to enter the user name to connect with. This option is only available if you have selected to
connect using SQL Server Authentication.
Password
Used to enter the password for the login. This option is only available if you have selected to connect
using SQL Server Authentication.
Select a destination server
Used to specify the server where the database will be moved or copied to. If you set the source and destination
servers to the same server instance, you will make a copy of the database. In this case you must rename the
database at a later point in the wizard. The source database name can be used for the copied or moved database
only if name conflicts do not exist on the destination server. If name conflicts exist, you must resolve them
manually on the destination server before you can use the source database name there.
Destination server
Used to identify the name of the server to which the database(s) will be moved or copied.
Manually enter, or click the ellipsis to navigate to the desired server. The server must be at least SQL Server
2005.
NOTE You can use a destination that is a clustered server; the Copy Database Wizard will make sure you
select only shared drives on a clustered destination server.
Use Windows Authentication
Allows a user to connect through a Microsoft Windows user account.
Use SQL Server Authentication
Allows a user to connect by providing a SQL Server Authentication user name and password.
User name
Used to enter the user name to connect with. This option is only available if you have selected to
connect using SQL Server Authentication.
Password
Used to enter the password for the login. This option is only available if you have selected to connect
using SQL Server Authentication.
Select the transfer method
Use the detach and attach method
Detach the database from the source server, copy the database files (.mdf, .ndf, and .ldf) to the destination
server, and attach the database at the destination server. This method is usually the faster method because
the principal work is reading the source disk and writing the destination disk. No SQL Server logic is
required to create objects within the database, or create data storage structures. This method can be slower,
however, if the database contains a large amount of allocated but unused space. For instance, if a new and
practically empty database is created with 100 MB allocated, the entire 100 MB is copied, even if only 5 MB of it is
used.
NOTE This method makes the database unavailable to users during the transfer.
If a failure occurs, reattach the source database
When a database is copied, the original database files are always reattached to the source server. Use this
box to reattach original files to the source database if a database move cannot be completed.
Use the SQL Management Object method
This method reads the definition of each database object on the source database and creates each object in
the destination database. Then it transfers the data from the source tables to the destination tables,
recreating indexes and metadata.
NOTE
Database users can continue to access the database during the transfer.
Select database
Select the database(s) you want to move or copy from the source server to the destination server. See Limitations
and Restrictions at the top of topic.
Move
Move the database to the destination server.
Copy
Copy the database to the destination server.
Source
Displays the databases that exist on the source server.
Status
Displays status information about the source database.
Refresh
Refresh the list of databases.
Configure destination database
Change the database name if appropriate and specify the location and names of the database files. This page
appears once for each database being moved or copied.
Source Database
The name of the source database. The text box is not editable.
Destination Database
The name of the destination database to be created, modify as desired.
Destination database files:
Filename
The name of the destination database file to be created, modify as desired.
Size (MB)
Size of the destination database file in megabytes.
Destination Folder
The folder on the destination server to host the destination database file, modify as desired.
Status
If the destination database already exists:
Decide what action to take if the destination database already exists.
Stop the transfer if a database or file with the same name exists at the destination.
Drop any database on the destination server with the same name, then continue with the
database transfer, overwriting existing database files.
Select Server Objects
This page is only available when the source and destination are different servers.
Available related objects
Lists objects available to transfer to the destination server. To include an object, click the object name in the
Available related objects box, and then click the >> button to move the object to the Selected related
objects box.
Selected related objects
Lists objects that will be transferred to the destination server. To exclude an object, click the object name in
the Selected related objects box, and then click the << button to move the object to the Available
related objects box. By default all objects of each selected type are transferred. To choose individual objects
of any type, click the ellipsis button next to any object type in the Selected related objects box. This opens
a dialog box where you can select individual objects.
List of Server Objects
Logins (Selected by default.)
SQL Server Agent jobs
User-defined error messages
Endpoints
Full-text catalog
SSIS Package
Stored procedures from master database
NOTE Extended stored procedures and their associated DLLs are not eligible for automated copy.
Location of source database files
This page is only available when the source and destination are different servers. Specify a file system share that
contains the database files on the source server.
Database
Displays the name of each database being moved.
Folder location
The folder location of the database files on the source server. For example:
C:\Program Files\Microsoft SQL Server\MSSQL110.MSSQLSERVER\MSSQL\DATA .
File share on source server
The file share containing the database files on the source server. Manually enter the share, or click the
ellipsis to navigate to the share. For example:
\\server_name\C$\Program Files\Microsoft SQL Server\MSSQL110.MSSQLSERVER\MSSQL\Data .
Configure the package
The Copy Database Wizard creates an SSIS package to transfer the database.
Package location
Displays to where the SSIS package will be written.
Package name
A default name for the SSIS package will be created, modify as desired.
Logging options
Select whether to store the logging information in the Windows event log, or in a text file.
Error log file path
This option is only available if the text file logging option is selected. Provide a path for the location of the
log file.
Schedule the package
Specify when you want the move or copy operation to start. If you are not a system administrator, you must specify
a SQL Server Agent Proxy account that has access to the Integration Services (SSIS) Package execution subsystem.
IMPORTANT!! An Integration Services Proxy account must be used under the detach and attach method.
Run immediately
SSIS Package will execute after completing the wizard.
Schedule
SSIS Package will execute according to a schedule.
Change Schedule
Opens the New Job Schedule dialog box. Configure as desired. Click OK when finished.
Integration Services Proxy account
Select an available proxy account from the drop-down list. To schedule the transfer, there must be at least one proxy account available to the user, configured with permission to the SSIS package execution subsystem.
To create a proxy account for SSIS package execution, in Object Explorer, expand SQL Server Agent,
expand Proxies, right-click SSIS Package Execution, and then click New Proxy.
Complete the wizard
Displays summary of the selected options. Click Back to change an option. Click Finish to create the SSIS package.
The Performing operation page monitors status information about the execution of the Copy Database Wizard.
Action
Lists each action being performed.
Status
Indicates whether the action as a whole succeeded or failed.
Message
Provides any messages returned from each step.
Examples
Common Steps
Regardless of whether you choose Move or Copy, Detach and Attach or SMO, the five steps listed below will be
the same. For brevity, they are listed here once, and each example continues after these common steps.
1. In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that
instance.
2. Expand Databases, right-click the desired database, point to Tasks, and then click Copy Database...
3. If the Welcome to the Copy Database Wizard splash page appears, click Next.
4. Select a Source Server page: Specify the server with the database to move or copy. Select the
authentication method. If Use SQL Server Authentication is chosen you will need to enter your login
credentials. Click Next to establish the connection to the source server. This connection remains open
throughout the session.
5. Select a Destination Server page: Specify the server where the database will be moved or copied to. Select
the authentication method. If Use SQL Server Authentication is chosen you will need to enter your login
credentials. Click Next to establish the connection to the source server. This connection remains open
throughout the session.
NOTE You can launch the Copy Database Wizard from any database. You can use the Copy Database
Wizard from either the source or destination server.
A. Move database using detach and attach method to an instance on a different physical server. A login and SQL
Server Agent job will be moved as well.
The following example will move the Sales database, a Windows login named contoso\Jennie and a SQL Server
Agent job named Jennie’s Report from a 2008 instance of SQL Server on Server1 to a 2016 instance of SQL
Server on Server2 . Jennie’s Report uses the Sales database. Sales does not already exist on the destination
server, Server2 . Server1 will be re-assigned to a different team after the database move.
1. As noted in Limitations and Restrictions, above, a shell database will need to be created on the destination
server when transferring a SQL Server Agent job that references a database that does not already exist on
the destination server. Create a shell database called Sales on the destination server.
2. Back to the Wizard, Select the Transfer Method page: Review and maintain the default values. Click Next.
3. Select Databases page: Select the Move checkbox for the desired database, Sales. Click Next.
4. Configure Destination Database page: The Wizard has identified that Sales already exists on the
destination server, as created in Step 1 above, and has appended _new to the Destination database
name. Delete _new from the Destination database text box. If desired, change the Filename, and
Destination Folder. Select Drop any database on the destination server with the same name, then
continue with the database transfer, overwriting existing database files. Click Next.
5. Select Server Objects page: In the Selected related objects: panel, click the ellipsis button for Object
name Logins. Under Copy Options select Copy only the selected logins:. Check the box for Show all
server logins. Check the Login box for contoso\Jennie . Click OK. In the Available related objects: panel
select SQL Server Agent jobs and then click the > button. In the Selected related objects: panel, click the
ellipsis button for SQL Server Agent jobs. Under Copy Options select Copy only the selected jobs.
Check the box for Jennie’s Report . Click OK. Click Next.
6. Location of Source Database Files page: Click the ellipsis button for File share on source server and
navigate to the location for the given Folder location. For example, for Folder location
D:\MSSQL13.MSSQLSERVER\MSSQL\DATA use \\Server1\D$\MSSQL13.MSSQLSERVER\MSSQL\DATA for File share on
source server. Click Next.
7. Configure the Package page: In the Package name: text box enter SalesFromServer1toServer2_Move .
Check the Save transfer logs? box. In the Logging options drop-down list select Text file. Note the Error
log file path; revise as desired. Click Next.
NOTE The Error log file path is the path on the destination server.
8. Schedule the Package page: Select the relevant proxy from the Integration Services Proxy account
drop-down list. Click Next.
9. Complete the Wizard page: Review the summary of the selected options. Click Back to change an option.
Click Finish to execute the task. During the transfer, the Performing operation page monitors status
information about the execution of the Wizard.
10. Performing Operation page: If operation is successful, click Close. If operation is unsuccessful, review
error log, and possibly Back for further review. Otherwise, click Close.
11. Post Move Steps: Consider executing the following T-SQL statements on the new host, Server2:
ALTER AUTHORIZATION ON DATABASE::Sales TO sa;
ALTER DATABASE Sales
SET COMPATIBILITY_LEVEL = 130;
USE Sales
GO
EXEC sp_updatestats;
12. Post Move Steps Cleanup
Since Server1 will be moved to a different team and the Move operation will not be repeated, consider
executing the following steps:
Deleting SSIS package SalesFromServer1toServer2_Move on Server2 .
Deleting SQL Server Agent job SalesFromServer1toServer2_Move on Server2 .
Deleting SQL Server Agent job Jennie’s Report on Server1 .
Dropping login contoso\Jennie on Server1 .
B. Copy database using detach and attach method to the same instance and set recurring schedule.
In this example the Sales database will be copied and created as SalesCopy on the same instance. Thereafter,
SalesCopy , will be re-created on a weekly basis.
1. Select a Transfer Method page: Review and maintain the default values. Click Next.
2. Select Databases page: Select the Copy checkbox for the Sales database. Click Next.
3. Configure Destination Database page: Change the Destination database name to SalesCopy . If desired,
change the Filename, and Destination Folder. Select Drop any database on the destination server
with the same name, then continue with the database transfer, overwriting existing database files.
Click Next.
4. Configure the Package page: In the Package name: text box enter SalesCopy Weekly Refresh. Check the Save transfer logs? box. Click Next.
5. Schedule the Package page: Click the Schedule: radio button and then click the Change Schedule
button.
a. New Job Schedule page: In the Name text box enter Weekly on Sunday.
b. Click OK.
6. Select the relevant proxy from the Integration Services Proxy account drop-down list. Click Next.
7. Complete the Wizard page: Review the summary of the selected options. Click Back to change an option.
Click Finish to execute the task. During the package creation, the Performing operation page monitors
status information about the execution of the Wizard.
8. Performing Operation page: If operation is successful, click Close. If operation is unsuccessful, review
error log, and possibly Back for further review. Otherwise, click Close.
9. Manually start the newly created SQL Server Agent job SalesCopy Weekly Refresh. Review job history and ensure SalesCopy now exists on the instance.
Follow up: After upgrading a database
After you use the Copy Database Wizard to upgrade a database from an earlier version of SQL Server to SQL
Server 2016, the database becomes available immediately and is automatically upgraded. If the database has full-text indexes, the upgrade process either imports, resets, or rebuilds them, depending on the setting of the Full-Text Upgrade Option server property. If the upgrade option is set to Import or Rebuild, the full-text indexes will
be unavailable during the upgrade. Depending on the amount of data being indexed, importing can take several hours,
and rebuilding can take up to ten times longer. Note also that when the upgrade option is set to Import, if a full-text catalog is not available, the associated full-text indexes are rebuilt. For information about viewing or changing
the setting of the Full-Text Upgrade Option property, see Manage and Monitor Full-Text Search for a Server
Instance.
If the compatibility level of a user database was 100 or higher before upgrade, it remains the same after upgrade. If
the compatibility level was 90 before upgrade, the compatibility level of the upgraded database is set to 100, which is the lowest
supported compatibility level in SQL Server 2016. For more information, see ALTER DATABASE Compatibility Level
(Transact-SQL).
Post copy or move considerations
Consider whether to perform the following steps after a Copy or Move:
Changing the ownership of the database(s) when the detach and attach method is used.
Dropping server objects on the source server after a Move.
Dropping the SSIS package created by the Wizard on the destination server.
Dropping the SQL Server Agent job created by the Wizard on the destination server.
More information!
Upgrade a Database Using Detach and Attach (Transact-SQL)
Create a SQL Server Agent Proxy
Copy Databases with Backup and Restore
3/24/2017 • 5 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse,
Parallel Data Warehouse
In SQL Server 2016, you can create a new database by restoring a backup of a user database created by using SQL
Server 2005 or a later version. However, backups of master, model and msdb that were created by using an
earlier version of SQL Server cannot be restored by SQL Server 2016. Also, SQL Server 2016 backups cannot be
restored by any earlier version of SQL Server.
IMPORTANT! SQL Server 2016 uses a different default path than earlier versions. Therefore, to restore
backups of a database created in the default location of earlier versions you must use the MOVE option. For
information about the new default path see File Locations for Default and Named Instances of SQL Server. For
more information about moving database files, see "Moving the Database Files," later in this topic.
General steps for using Backup and Restore to copy a database
When you use backup and restore to copy a database to another instance of SQL Server, the source and
destination computers can be any platform on which SQL Server runs.
The general steps are (a minimal T-SQL sketch follows this list):
1. Back up the source database, which can reside on an instance of SQL Server 2005 or later. The computer on
which this instance of SQL Server is running is the source computer.
2. On the computer to which you want to copy the database (the destination computer), connect to the
instance of SQL Server on which you plan to restore the database. If needed, on the destination server
instance, create the same backup devices as were used for the backup of the source database.
3. Restore the backup of the source database on the destination computer. Restoring the database
automatically creates all of the database files.
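A minimal T-SQL sketch of these steps, assuming a database named Sales and a backup path reachable from both servers (both names are placeholders):
-- On the source instance: back up the database.
BACKUP DATABASE Sales
TO DISK = N'\\FileShare\Backups\Sales.bak'
WITH INIT;
GO
-- On the destination instance: restore the backup; the database files are re-created automatically.
RESTORE DATABASE Sales
FROM DISK = N'\\FileShare\Backups\Sales.bak'
WITH RECOVERY;
GO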
Some additional considerations that may affect this process:
Before you restore database files
Restoring a database automatically creates the database files needed by the restoring database. By default, the files
created by SQL Server during the restoration process use the same names and paths as the backup files from the
original database on the source computer.
Optionally, when restoring the database, you can specify the device mapping, file names, or path for the restoring
database.
This might be necessary in the following situations:
The directory structure or drive mapping used by the database on the original computer does not exist on the
other computer. For example, perhaps the backup contains a file that would be restored to drive E by default,
but the destination computer lacks a drive E.
The target location might have insufficient space.
If you are reusing a database name that exists on the restore destination and any of its files is named the same
as a database file in the backup set, one of the following occurs:
If the existing database file can be overwritten, it is overwritten (this would not affect a file that
belongs to a different database name).
If the existing file cannot be overwritten, a restore error occurs.
To avoid errors and unpleasant consequences, before the restore operation, you can use the backupfile
history table to find out the database and log files in the backup you plan to restore (see the query sketch after this list).
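For example, a query sketch against the msdb history tables (the database name is a placeholder):
SELECT bs.database_name, bf.logical_name, bf.physical_name, bf.file_type
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupfile AS bf
    ON bf.backup_set_id = bs.backup_set_id
WHERE bs.database_name = N'Sales'
ORDER BY bs.backup_finish_date DESC;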
Moving the database files
If the files within the database backup cannot be restored onto the destination computer, it is necessary to move
the files to a new location while they are being restored; a RESTORE ... WITH MOVE sketch follows this list. For example:
You want to restore a database from backups created in the default location of the earlier version.
It may be necessary to restore some of the database files in the backup to a different drive because of
capacity considerations. This is a common occurrence because most computers within an organization do
not have the same number and size of disk drives or identical software configurations.
It may be necessary to create a copy of an existing database on the same computer for testing purposes. In
this case, the database files for the original database already exist, so different file names must be specified
when the database copy is created during the restore operation.
For more information, see "To restore files and filegroups to a new location," later in this topic.
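A sketch of a restore that relocates the files, assuming the logical file names Sales_Data and Sales_Log and the target paths shown (all placeholders; use RESTORE FILELISTONLY to find the actual logical names):
RESTORE DATABASE Sales
FROM DISK = N'\\FileShare\Backups\Sales.bak'
WITH MOVE N'Sales_Data' TO N'E:\SQLData\Sales_Data.mdf',
     MOVE N'Sales_Log' TO N'F:\SQLLogs\Sales_Log.ldf',
     RECOVERY;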
Changing the database name
The name of the database can be changed as it is restored to the destination computer, without having to restore
the database first and then change the name manually. For example, it may be necessary to change the database
name from Sales to SalesCopy to indicate that this is a copy of a database.
The database name explicitly supplied when you restore a database is used automatically as the new database
name. Because the database name does not already exist, a new one is created by using the files in the backup.
When upgrading a database by using Restore
When restoring backups from an earlier version, it is helpful to know in advance whether the path (drive and
directory) of each of the full-text catalogs in a backup exists on the destination computer. To list the logical names
and physical names (path and file name) of every file in a backup, including the catalog files, use a RESTORE
FILELISTONLY statement. For more information, see RESTORE FILELISTONLY (Transact-SQL).
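For example, assuming the backup file path shown here:
RESTORE FILELISTONLY
FROM DISK = N'\\FileShare\Backups\Sales.bak';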
If the same path does not exist on the destination computer, you have two alternatives:
Create the equivalent drive/directory mapping on the destination computer.
Move the catalog files to a new location during the restore operation, by using the WITH MOVE clause in
your RESTORE DATABASE statement. For more information, see RESTORE (Transact-SQL).
For information about alternative options for upgrading full-text indexes, see Upgrade Full-Text Search.
Database ownership
When a database is restored on another computer, the SQL Server login or Microsoft Windows user who initiates
the restore operation becomes the owner of the new database automatically. When the database is restored, the
system administrator or the new database owner can change database ownership. To prevent unauthorized
restoration of a database, use media or backup set passwords.
Managing metadata when restoring to another server instance
When you restore a database onto another server instance, to provide a consistent experience to users and
applications, you might have to re-create some or all of the metadata for the database, such as logins and jobs, on
the other server instance. For more information, see Manage Metadata When Making a Database Available on
Another Server Instance (SQL Server).
View the data and log files in a backup set
RESTORE FILELISTONLY (Transact-SQL)
Restore files and filegroups to a new location
Restore Files to a New Location (SQL Server)
Restore a Database Backup Using SSMS
Restore files and filegroups over existing files
Restore Files and Filegroups over Existing Files (SQL Server)
Restore a database with a new name
Restore a Database Backup Using SSMS
Restart an interrupted restore operation
Restart an Interrupted Restore Operation (Transact-SQL)
Change database owner
sp_changedbowner (Transact-SQL)
Copy a database by using SQL Server Management Objects (SMO)
ReadFileList
RelocateFiles
ReplaceDatabase
Restore
See also
Copy Databases to Other Servers
File Locations for Default and Named Instances of SQL Server
RESTORE FILELISTONLY (Transact-SQL)
RESTORE (Transact-SQL)
Publish a Database (SQL Server Management Studio)
3/24/2017 • 1 min to read • Edit Online
You can use the Generate and Publish Scripts Wizard to publish an entire database or individual database
objects to a Web hosting provider.
NOTE
The functionality described in this topic used to be provided by the Publish Database Wizard. The publishing functionality has
been added to the Generate and Publish Scripts Wizard, and the Publish Database Wizard has been discontinued.
Generate and Publish Scripts Wizard
The Generate and Publish Scripts Wizard can be used to publish a database or selected database objects to a Web
hosting provider. A SQL Server Web hosting provider is a connectivity interface to a Web service. The Web service
is created by using the Database Publishing Services project from the SQL Server Hosting Toolkit on CodePlex. The
Web service makes it easy for the Web hoster's customers to publish their databases to the service by using the
Generate and Publish Scripts Wizard. For more information about downloading the SQL Server Hosting Toolkit, see
SQL Server Database Publishing Services.
The Generate and Publish Scripts Wizard can also be used to create a script for transferring a database.
To publish a database to a Web service
1. In Object Explorer, expand Databases, right-click a database, point to Tasks, and then click Generate and
Publish Scripts. Follow the steps in the wizard to script the database objects for publishing.
2. On the Choose Objects page, select the objects to be published to the Web hosting service.
3. On the Set Scripting Options page, select Publish to Web Service.
a. In the Provider box, specify the provider for your Web service. If you have not configured a Web
hosting provider, select Manage Providers and use the Manage Providers dialog box to configure
a provider for your Web service.
b. To specify advanced publishing options, select the Advanced button in the Publish to Web Service
section.
4. On the Summary page, review your selections. Click Previous to change your selections. Click Next to
publish the objects you selected.
5. On the Save or Publish Scripts page, monitor the progress of the publication.
See Also
Generate Scripts (SQL Server Management Studio)
Copy Databases to Other Servers
Deploy a SQL Server Database to a Microsoft Azure
Virtual Machine
3/24/2017 • 9 min to read • Edit Online
Use the Deploy a Database to a Windows Azure VM wizard to deploy a database from an instance of the
Database Engine to SQL Server in a Windows Azure Virtual Machine (VM). The wizard uses a full database backup
operation, so it always copies the complete database schema and the data from a SQL Server user database. The
wizard also does all of the Azure VM configuration for you, so no pre-configuration of the VM is required.
You cannot use the wizard for differential backups. The wizard will not overwrite an existing database that has the
same database name. To replace an existing database on the VM, you must first drop the existing database or
change the database name. If there is a naming conflict between the database name for an in-flight deploy
operation and an existing database on the VM, the wizard will suggest an appended database name for the in-flight
database to enable you to complete the operation.
Before You Begin
Limitations and Restrictions
Considerations for Deploying a FILESTREAM-enabled Database
Considerations for Geographic Distribution of Assets
Wizard Configuration Settings
Required Permissions
Launching the Wizard
Wizard Pages
NOTE
For a detailed step-by-step walkthrough of this wizard, see Migrate a SQL Server database to SQL Server in an Azure VM
Before You Begin
To complete this wizard, you must be able to provide the following information and have these configuration
settings in place:
The Microsoft account details associated with your Windows Azure subscription.
Your Windows Azure publishing profile.
Caution
SQL Server currently supports publishing profile version 2.0. To download the supported version of the
publishing profile, see Download Publishing Profile 2.0.
The management certificate uploaded to your Microsoft Azure subscription. Create the management
certificate with the Powershell Cmdlet New-SelfSignedCertificate. Then, upload the management certificate
to your Microsoft Azure subscription. For more information on uploading a management certificate, see
Upload an Azure Management API Management Certificate. Sample syntax for creating a management
certificate from Certificates overview for Azure Cloud Services:
$cert = New-SelfSignedCertificate -DnsName yourdomain.cloudapp.net -CertStoreLocation "cert:\LocalMachine\My"
$password = ConvertTo-SecureString -String "your-password" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath ".\my-cert-file.pfx" -Password $password
NOTE
The MakeCert tool can also be used to create a management certificate; however, MakeCert is now deprecated. For
additional information, see MakeCert.
The management certificate saved into the personal certificate store on the computer where the wizard is
running.
You must have a temporary storage location that is available to the computer where the SQL Server
database is hosted. The temporary storage location must also be available to the computer where the wizard
is running.
If you are deploying the database to an existing VM, the instance of SQL Server must be configured to listen
on a TCP/IP port.
Either a Windows Azure VM or Gallery image you plan to use for creation of the VM must have the Cloud
Adapter for SQL Server configured and running.
You must configure an open endpoint for your Cloud Adapter for SQL Server on the Windows Azure
gateway with private port 11435.
In addition, if you plan to deploy your database into an existing Windows Azure VM, you must also be able
to provide:
The DNS name of the cloud service that hosts your VM.
Administrator credentials for the VM.
Credentials with Backup operator privileges on the database you plan to deploy, from the source instance of
SQL Server.
For more information about running SQL Server in Windows Azure virtual machines, see Provision a SQL
Server virtual machine in the Azure Portal and Migrate a SQL Server database to SQL Server in an Azure VM.
On computers running Windows Server operating systems, you must use the following configuration
settings to run this wizard:
Turn off Enhanced Security Configuration: Use Server Manager > Local Server to set Internet Explorer
Enhanced Security Configuration (ESC) to OFF.
Enable JavaScript: Internet Explorer > Internet Options > Security > Custom Level > Scripting > Active
Scripting: Enable.
Limitations and Restrictions
This deployment feature is for use only with an Azure Storage Account created through the Service Management
(Classic) deployment model. For more information regarding Azure deployment models, see Azure Resource
Manager vs. classic deployment.
The database size limitation for this operation is 1 TB.
This deployment feature is available in SQL Server Management Studio for SQL Server 2016 and SQL Server 2014.
This deployment feature is for use only with user databases; deploying system databases is not supported.
The deployment feature does not support hosted services that are associated with an Affinity Group. For example,
storage accounts associated with an Affinity Group cannot be selected for use on the Deployment Settings page
of this wizard.
SQL Server database versions that can be deployed to a Windows Azure VM using this wizard:
SQL Server 2008
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016
SQL Server database versions running in a Windows Azure VM database can be deployed to:
SQL Server 2012
SQL Server 2014
SQL Server 2016
If there is a naming conflict between the database name for an in-flight deploy operation and an existing
database on the VM, the wizard will suggest an appended database name for the in-flight database to enable
you to complete the operation.
Considerations for Deploying a FILESTREAM -enabled Database to an Azure VM
Note the following guidelines and restrictions when deploying databases that have BLOBS stored in FILESTREAM
objects:
The deployment feature cannot deploy a FILESTREAM-enabled database into a new VM. If FILESTREAM is not
enabled in the VM before you run the wizard, the database restore operation will fail and the wizard
operation will not be able to complete successfully. To successfully deploy a database that uses FILESTREAM,
enable FILESTREAM in the instance of SQL Server on the host VM before launching the wizard. For more
information, see FILESTREAM (SQL Server).
If your database utilizes In-Memory OLTP, you can deploy the database to an Azure VM without any
modifications to the database. For more information, see In-Memory OLTP (In-Memory Optimization).
Considerations for Geographic Distribution of Assets
Note that the following assets must be located in the same geographic region:
Cloud Service
VM Location
Data Disk Storage Service
If the assets listed above are not co-located, the wizard will not be able to complete successfully.
Wizard Configuration Settings
Use the following configuration details to modify settings for a SQL Server database deployment to an Azure VM.
Default path for the configuration file - %LOCALAPPDATA%\SQL Server\Deploy to SQL in WA
VM\DeploymentSettings.xml
Configuration file structure
<DeploymentSettings>
    BackupPath="\\[server name]\[volume]\" <!-- The last used path for backup. Used as default in the wizard. -->
    CleanupDisabled = False /> <!-- Wizard will not delete intermediate files and Windows Azure objects (VM, CS, SA). -->
    Certificate="12A34B567890123ABCD4EF567A8" <!-- The certificate for use in the wizard. -->
    Subscription="1a2b34c5-67d8-90ef-ab12-xxxxxxxxxxxxx" <!-- The subscription for use in the wizard. -->
    Name="My Subscription" <!-- The name of the subscription. -->
    Publisher="" />
</DeploymentSettings>
Configuration file values
Permissions
The database being deployed must be in a normal state, the database must be accessible to the user account
running the wizard, and the user account must have permissions to perform a backup operation.
Using the Deploy Database to Microsoft Azure VM Wizard
To launch the wizard, use the following steps:
1. Use SQL Server Management Studio to connect to the instance of SQL Server with the database you want to
deploy.
2. In Object Explorer, expand the instance name, then expand the Databases node.
3. Right-click the database you want to deploy, select Tasks, and then select Deploy Database to a Microsoft
Azure VM…
Wizard Pages
The following sections provide additional information about deployment settings and configuration details for this
operation.
Introduction
Source Settings
Windows Azure Sign-in
Deployment Settings
Summary
Results
Introduction
This page describes the Deploy Database to a Microsoft Azure VM wizard.
Do not show this page again.
Click this check box to stop the Introduction page from being displayed in the future.
Next
Proceeds to the Source Settings page.
Cancel
Cancels the operation and closes the wizard.
Help
Launches the MSDN Help topic for the wizard.
Source Settings
Use this page to connect to the instance of SQL Server that hosts the database you want to deploy to the Microsoft
Azure VM. You will also specify a temporary location for files to be saved from the local machine before they are
transferred to Microsoft Azure. This can be a shared, network location.
SQL Server
Click Connect and then specify connection details for the instance of SQL Server that hosts the database to
deploy.
Select Database
Use the drop-down list to specify the database to deploy.
Other Settings
In the field, specify a shared folder that will be accessible to the Microsoft Azure VM service.
Microsoft Azure Sign-in
Sign in to Microsoft Azure with your Microsoft account or your organizational account. Your Microsoft or
organizational account is in the format of an email address. For more information about Azure credentials, see
Microsoft Account for Organizations FAQ and Troubleshooting Problems.
Deployment Settings
Use this page to specify the destination server and to provide details about your new database.
Microsoft Azure Virtual Machine
Cloud Service name
Specify the name of the service that hosts the VM. To create a new Cloud Service, specify a name for the new
Cloud Service.
Virtual Machine name Specify the name of the VM that will host the SQL Server database. To create a new
Microsoft Azure VM, specify a name for the new VM.
Storage account
Select the storage account from the drop-down list. To create a new storage account, specify a name for the
new account. Note that storage accounts associated with an Affinity Group will not be available in the drop-down list.
Settings
Use the Settings button to create a new VM to host the SQL Server database. If you are using an existing VM,
the information you provide will be used to authenticate your credentials.
Target Database
SQL instance name
Connection details for the server.
Database name
Specify or confirm the name of a new database. If the database name already exists on the destination SQL
Server instance, we suggest that you specify a modified database name.
Summary
Use this page to review the specified settings for the operation. To complete the deploy operation using the
specified settings, click Finish. To cancel the deploy operation and exit the wizard, click Cancel. Clicking Finish will
launch the Deployment Progress page. You can also view progress from the log file located at
"%LOCALAPPDATA%\SQL Server\Deploy to SQL in WA VM" .
There may be manual steps required to deploy database details to the SQL Server database on the Windows Azure
VM. These steps will be outlined in detail for you.
Results
This page reports the success or failure of the deploy operation, showing the results of each action. Any action that
encountered an error will have an indication in the Result column. Click the link to view a report of the error for
that action.
Click Finish to close the wizard.
See Also
Cloud Adapter for SQL Server
Database Lifecycle Management
Export a Data-tier Application
Import a BACPAC File to Create a New User Database
Azure SQL Database Backup and Restore
SQL Server Deployment in Windows Azure Virtual Machines
Getting Ready to Migrate to SQL Server in Windows Azure Virtual Machines
Database Detach and Attach (SQL Server)
3/24/2017 • 6 min to read • Edit Online
The data and transaction log files of a database can be detached and then reattached to the same or another
instance of SQL Server. Detaching and attaching a database is useful if you want to change the database to a
different instance of SQL Server on the same computer or to move the database.
Security
File access permissions are set during a number of database operations, including detaching or attaching a
database.
IMPORTANT
We recommend that you do not attach or restore databases from unknown or untrusted sources. Such databases could
contain malicious code that might execute unintended Transact-SQL code or cause errors by modifying the schema or the
physical database structure. Before you use a database from an unknown or untrusted source, run DBCC CHECKDB on the
database on a nonproduction server and also examine the code, such as stored procedures or other user-defined code, in
the database.
Detaching a Database
Detaching a database removes it from the instance of SQL Server but leaves the database intact within its data
files and transaction log files. These files can then be used to attach the database to any instance of SQL Server,
including the server from which the database was detached.
You cannot detach a database if any of the following are true:
The database is replicated and published. If replicated, the database must be unpublished. Before you can
detach it, you must disable publishing by running sp_replicationdboption.
NOTE
If you cannot use sp_replicationdboption, you can remove replication by running sp_removedbreplication.
A database snapshot exists on the database.
Before you can detach the database, you must drop all of its snapshots. For more information, see Drop a
Database Snapshot (Transact-SQL).
NOTE
A database snapshot cannot be detached or attached.
The database is being mirrored in a database mirroring session.
The database cannot be detached unless the session is terminated. For more information, see Removing
Database Mirroring (SQL Server).
The database is suspect. A suspect database cannot be detached; before you can detach it, you must put it
into emergency mode. For more information about how to put a database into emergency mode, see
ALTER DATABASE (Transact-SQL).
The database is a system database.
Backup and Restore and Detach
Detaching a read-only database loses information about the differential bases of differential backups. For more
information, see Differential Backups (SQL Server).
Responding to Detach Errors
Errors produced while detaching a database can prevent the database from closing cleanly and the transaction log
from being rebuilt. If you receive an error message, perform the following corrective actions:
1. Reattach all files associated with the database, not just the primary file.
2. Resolve the problem that caused the error message.
3. Detach the database again.
Attaching a Database
You can attach a copied or detached SQL Server database. When you attach a SQL Server 2005 database that
contains full-text catalog files onto a SQL Server 2016 server instance, the catalog files are attached from their
previous location along with the other database files, the same as in SQL Server 2005. For more information, see
Upgrade Full-Text Search.
When you attach a database, all data files (MDF and NDF files) must be available. If any data file has a different
path from when the database was first created or last attached, you must specify the current path of the file.
NOTE
If the primary data file being attached is read-only, the Database Engine assumes that the database is read-only.
When an encrypted database is first attached to an instance of SQL Server, the database owner must open the
master key of the database by executing the following statement: OPEN MASTER KEY DECRYPTION BY
PASSWORD = 'password'. We recommend that you enable automatic decryption of the master key by executing
the following statement: ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY. For more information,
see CREATE MASTER KEY (Transact-SQL) and ALTER MASTER KEY (Transact-SQL).
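A minimal sketch of these two statements, assuming the database master key password is known (the password is a placeholder):
-- Run in the context of the newly attached database.
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<database master key password>';
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
CLOSE MASTER KEY;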
The requirement for attaching log files depends partly on whether the database is read-write or read-only, as
follows:
For a read-write database, you can usually attach a log file in a new location. However, in some cases,
reattaching a database requires its existing log files. Therefore, it is important to always keep all the
detached log files until the database has been successfully attached without them.
If a read-write database has a single log file and you do not specify a new location for the log file, the
attach operation looks in the old location for the file. If it is found, the old log file is used, regardless of
whether the database was shut down cleanly. However, if the old log file is not found and if the database
was shut down cleanly and has no active log chain, the attach operation attempts to build a new log file for
the database.
If the primary data file being attached is read-only, the Database Engine assumes that the database is read-only.
For a read-only database, the log file or files must be available at the location specified in the primary
file of the database. A new log file cannot be built because SQL Server cannot update the log location
stored in the primary file.
Metadata Changes on Attaching a Database
When a read-only database is detached and then reattached, the backup information about the current differential
base is lost. The differential base is the most recent full backup of all the data in the database or in a subset of the
files or filegroups of the database. Without the base-backup information, the master database becomes
unsynchronized with the read-only database, so differential backups taken thereafter may provide unexpected
results. Therefore, if you are using differential backups with a read-only database, you should establish a new
differential base by taking a full backup after you reattach the database. For information about differential
backups, see Differential Backups (SQL Server).
On attach, database startup occurs. Generally, attaching a database places it in the same state that it was in when
it was detached or copied. However, attach-and-detach operations both disable cross-database ownership
chaining for the database. For information about how to enable chaining, see cross db ownership chaining Server
Configuration Option. Also, TRUSTWORTHY is set to OFF whenever the database is attached. For information
about how to set TRUSTWORTHY to ON, see ALTER DATABASE (Transact-SQL).
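If an application relies on these settings, they can be re-enabled after the attach. A sketch, assuming a database named MyDatabase and that re-enabling has been judged safe for this database:
ALTER DATABASE MyDatabase SET DB_CHAINING ON;
ALTER DATABASE MyDatabase SET TRUSTWORTHY ON;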
Backup and Restore and Attach
Like any database that is fully or partially offline, a database with restoring files cannot be attached. If you stop the
restore sequence, you can attach the database. Then, you can restart the restore sequence.
Attaching a Database to Another Server Instance
IMPORTANT
A database created by a more recent version of SQL Server cannot be attached in earlier versions.
When you attach a database onto another server instance, to provide a consistent experience to users and
applications, you might have to re-create some or all of the metadata for the database, such as logins and jobs, on
the other server instance. For more information, see Manage Metadata When Making a Database Available on
Another Server Instance (SQL Server).
Related Tasks
To detach a database
sp_detach_db (Transact-SQL)
Detach a Database
To attach a database
CREATE DATABASE (SQL Server Transact-SQL)
Attach a Database
sp_attach_db (Transact-SQL)
sp_attach_single_file_db (Transact-SQL)
To upgrade a database using detach and attach operations
Upgrade a Database Using Detach and Attach (Transact-SQL)
To move a database using detach and attach operations
Move a Database Using Detach and Attach (Transact-SQL)
To delete a database snapshot
Drop a Database Snapshot (Transact-SQL)
See Also
Database Files and Filegroups
Move a Database Using Detach and Attach (Transact-SQL)
3/24/2017 • 2 min to read • Edit Online
This topic describes how to move a detached database to another location and re-attach it to the same or a
different server instance in SQL Server 2016. However, we recommend that you move databases by using the
ALTER DATABASE planned relocation procedure, instead of using detach and attach. For more information, see
Move User Databases.
IMPORTANT
We recommend that you do not attach or restore databases from unknown or untrusted sources. Such databases could
contain malicious code that might execute unintended Transact-SQL code or cause errors by modifying the schema or the
physical database structure. Before you use a database from an unknown or untrusted source, run DBCC CHECKDB on the
database on a nonproduction server and also examine the code, such as stored procedures or other user-defined code, in
the database.
Procedure
To move a database by using detach and attach
1. Detach the database. For more information, see Detach a Database.
2. In a Windows Explorer or Windows Command Prompt window, move the detached database file or files and
log file or files to the new location.
NOTE
To move a single-file database, you can use email if the file size is small enough to be sent as an attachment.
You should move the log files even if you intend to create new log files. In some cases, reattaching a
database requires its existing log files. Therefore, always keep all the detached log files until the database
has been successfully attached without them.
NOTE
If you try to attach the database without specifying the log file, the attach operation will look for the log file in its
original location. If a copy of the log still exists in the original location, that copy is attached. To avoid using the
original log file, either specify the path of the new log file or remove the original copy of the log file (after copying it
to the new location).
3. Attach the copied files. For more information, see Attach a Database.
Example
The following example creates a copy of the AdventureWorks2012 database named MyAdventureWorks . The
Transact-SQL statements are executed in a Query Editor window that is connected to the server instance to which
the database will be attached.
1. Detach the AdventureWorks2012 database by executing the following Transact-SQL statements:
USE master;
GO
EXEC sp_detach_db @dbname = N'AdventureWorks2012';
GO
2. Using the method of your choice, copy the database files (AdventureWorks2012_Data.mdf and
AdventureWorks2012_Log.ldf) to: C:\MySQLServer\AdventureWorks2012_Data.mdf and
C:\MySQLServer\AdventureWorks2012_Log.ldf, respectively.
IMPORTANT
For a production database, place the database and transaction log on separate disks.
To copy files over the network to a disk on a remote computer, use the universal naming convention (UNC)
name of the remote location. A UNC name takes the form \\Servername\Sharename\Path\Filename. As
with writing files to the local hard disk, the appropriate permissions that are required to read or write to a
file on the remote disk must be granted to the user account used by the instance of SQL Server.
3. Attach the moved database and, optionally, its log by executing the following Transact-SQL statements:
USE master;
GO
CREATE DATABASE MyAdventureWorks
ON (FILENAME = 'C:\MySQLServer\AdventureWorks2012_Data.mdf'),
(FILENAME = 'C:\MySQLServer\AdventureWorks2012_Log.ldf')
FOR ATTACH;
GO
In SQL Server Management Studio, a newly attached database is not immediately visible in Object Explorer.
To view the database, in Object Explorer, click View, and then Refresh. When the Databases node is
expanded in Object Explorer, the newly attached database now appears in the list of databases.
See Also
Database Detach and Attach (SQL Server)
Upgrade a Database Using Detach and Attach
(Transact-SQL)
3/24/2017 • 5 min to read • Edit Online
This topic describes how to use detach and attach operations to upgrade a database in SQL Server 2016. After
being attached to SQL Server 2016, the database is available immediately and is automatically upgraded.
In This Topic
Before you begin:
Limitations and Restrictions
Recommendations
To Upgrade a SQL Server Database:
Using detach and attach operations
Follow Up: After Upgrading a SQL Server Database
Before You Begin
Limitations and Restrictions
The system databases cannot be attached.
Attach and detach disable cross-database ownership chaining for the database by setting its cross db
ownership chaining option to 0. For information about enabling chaining, see cross db ownership
chaining Server Configuration Option.
When attaching a replicated database that was copied instead of detached:
If you attach the database to an upgraded version of the same server instance, you must execute
sp_vupgrade_replication to upgrade replication after the attach operation finishes. For more
information, see sp_vupgrade_replication (Transact-SQL).
If you attach the database to a different server instance (regardless of version), you must execute
sp_removedbreplication to remove replication after the attach operation finishes. For more
information, see sp_removedbreplication (Transact-SQL).
Recommendations
We recommend that you do not attach or restore databases from unknown or untrusted sources. Such databases
could contain malicious code that might execute unintended Transact-SQL code or cause errors by modifying the
schema or the physical database structure. Before you use a database from an unknown or untrusted source, run
DBCC CHECKDB on the database on a nonproduction server and also examine the code, such as stored procedures
or other user-defined code, in the database.
To Upgrade a Database by Using Detach and Attach
1. Detach the database. For more information, see Detach a Database.
2. Optionally, move the detached database file or files and the log file or files.
You should move the log files along with the data files, even if you intend to create new log files. In some
cases, reattaching a database requires its existing log files. Therefore, always keep all the detached log files
until the database has been successfully attached without them.
NOTE
If you try to attach the database without specifying the log file, the attach operation will look for the log file in its
original location. If the original copy of the log still exists in that location, that copy is attached. To avoid using the
original log file, either specify the path of the new log file or remove the original copy of the log file (after copying it
to the new location).
3. Attach the copied files to the instance of SQL Server 2016. For more information, see Attach a Database.
Example
The following example upgrades a copy of a database from an earlier version of SQL Server. The Transact-SQL
statements are executed in a Query Editor window that is connected to the server instance to which
the database will be attached.
1. Detach the database by executing the following Transact-SQL statements:
USE master;
GO
EXEC sp_detach_db @dbname = N'MyDatabase';
GO
2. Using the method of your choice, copy the data and log files to the new location.
IMPORTANT
For a production database, place the database and transaction log on separate disks.
To copy files over the network to a disk on a remote computer, use the universal naming convention (UNC)
name of the remote location. A UNC name takes the form \\Servername\Sharename\Path\Filename. As
with writing files to the local hard disk, the appropriate permissions that are required to read or write to a
file on the remote disk must be granted to the user account used by the instance of SQL Server.
3. Attach the moved database and, optionally, its log by executing the following Transact-SQL statement:
USE master;
GO
CREATE DATABASE MyDatabase
ON (FILENAME = 'C:\MySQLServer\MyDatabase.mdf'),
(FILENAME = 'C:\MySQLServer\Database.ldf')
FOR ATTACH;
GO
In SQL Server Management Studio, a newly attached database is not immediately visible in Object Explorer.
To view the database, in Object Explorer, click View, and then Refresh. When the Databases node is
expanded in Object Explorer, the newly attached database now appears in the list of databases.
Follow Up: After Upgrading a SQL Server Database
If the database has full-text indexes, the upgrade process either imports, resets, or rebuilds them, depending on the
setting of the upgrade_option server property. If the upgrade option is set to import (upgrade_option = 2) or
rebuild (upgrade_option = 0), the full-text indexes will be unavailable during the upgrade. Depending on the amount
of data being indexed, importing can take several hours, and rebuilding can take up to ten times longer. Note also
that when the upgrade option is set to import, the associated full-text indexes are rebuilt if a full-text catalog is not
available. To change the setting of the upgrade_option server property, use sp_fulltext_service.
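For example, a sketch that sets the option to Import before the attach, using the values noted above (0 = rebuild, 2 = import):
EXEC sp_fulltext_service @action = 'upgrade_option', @value = 2;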
Database Compatibility Level After Upgrade
If the compatibility level of a user database is 100 or higher before upgrade, it remains the same after upgrade. If
the compatibility level is 90 before upgrade in the upgraded database, the compatibility level is set to 100, which is
the lowest supported compatibility level in SQL Server 2016. For more information, see ALTER DATABASE
Compatibility Level (Transact-SQL).
Managing Metadata on the Upgraded Server Instance
When you attach a database onto another server instance, to provide a consistent experience to users and
applications, you might have to re-create some or all of the metadata for the database, such as logins, jobs, and
permissions, on the other server instance. For more information, see Manage Metadata When Making a Database
Available on Another Server Instance (SQL Server).
Service Master Key and Database Master Key Encryption changes from 3DES to AES
SQL Server 2012 and higher versions use the AES encryption algorithm to protect the service master key (SMK)
and the database master key (DMK). AES is a newer encryption algorithm than the 3DES algorithm used in earlier versions. When
a database is first attached or restored to a new instance of SQL Server, a copy of the database master key
(encrypted by the service master key) is not yet stored in the server. You must use the OPEN MASTER KEY
statement to decrypt the database master key (DMK). Once the DMK has been decrypted, you have the option of
enabling automatic decryption in the future by using the ALTER MASTER KEY REGENERATE statement to
provision the server with a copy of the DMK, encrypted with the service master key (SMK). When a database has
been upgraded from an earlier version, the DMK should be regenerated to use the newer AES algorithm. For more
information about regenerating the DMK, see ALTER MASTER KEY (Transact-SQL). The time required to regenerate
the DMK key to upgrade to AES depends upon the number of objects protected by the DMK. Regenerating the
DMK key to upgrade to AES is only necessary once, and has no impact on future regenerations as part of a key
rotation strategy.
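A minimal sketch of the regeneration, assuming the existing DMK password is known (the database name and passwords are placeholders):
USE MyDatabase;
GO
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<current DMK password>';
ALTER MASTER KEY REGENERATE WITH ENCRYPTION BY PASSWORD = '<new strong password>';
GO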
Detach a Database
3/24/2017 • 2 min to read • Edit Online
This topic describes how to detach a database in SQL Server 2016 by using SQL Server Management Studio or
Transact-SQL. The detached files remain and can be reattached by using CREATE DATABASE with the FOR ATTACH
or FOR ATTACH_REBUILD_LOG option. The files can be moved to another server and attached there.
In This Topic
Before you begin:
Limitations and Restrictions
Security
To detach a database, using:
SQL Server Management Studio
Transact-SQL
Before You Begin
Limitations and Restrictions
For a list of limitations and restrictions, see Database Detach and Attach (SQL Server).
Security
Permissions
Requires membership in the db_owner fixed database role.
Using SQL Server Management Studio
To detach a database
1. In SQL Server Management Studio Object Explorer, connect to the instance of the SQL Server Database
Engine and then expand the instance.
2. Expand Databases, and select the name of the user database you want to detach.
3. Right-click the database name, point to Tasks, and then click Detach. The Detach Database dialog box
appears.
Databases to detach
Lists the databases to detach.
Database Name
Displays the name of the database to be detached.
Drop Connections
Disconnect connections to the specified database.
NOTE
You cannot detach a database with active connections.
Update Statistics
By default, the detach operation retains any out-of-date optimization statistics when detaching the
database; to update the existing optimization statistics, click this check box.
Keep Full-Text Catalogs
By default, the detach operation keeps any full-text catalogs that are associated with the database. To
remove them, clear the Keep Full-Text Catalogs check box. This option appears only when you are
upgrading a database from SQL Server 2005.
Status
Displays one of the following states: Ready or Not ready.
Message
The Message column may display information about the database, as follows:
When a database is involved with replication, the Status is Not ready and the Message column
displays Database replicated.
When a database has one or more active connections, the Status is Not ready and the Message
column displays Active connection(s) — for example: 1 Active connection(s). Before you can
detach the database, you need to disconnect any active connections by selecting Drop Connections.
To obtain more information about a message, click the hyperlinked text to open Activity Monitor.
4. When you are ready to detach the database, click OK.
NOTE
The newly detached database will remain visible in the Databases node of Object Explorer until the view is refreshed. You
can refresh the view at any time: Click in the Object Explorer pane, and from the menu bar select View and then Refresh.
Using Transact-SQL
To detach a database
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example detaches the
AdventureWorks2012 database with skipchecks set to true.
EXEC sp_detach_db 'AdventureWorks2012', 'true';
See Also
Database Detach and Attach (SQL Server)
sp_detach_db (Transact-SQL)
Attach a Database
3/24/2017 • 6 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse,
Parallel Data Warehouse
This topic describes how to attach a database in SQL Server 2016 by using SQL Server Management Studio or
Transact-SQL. You can use this feature to copy, move, or upgrade a SQL Server database.
Prerequisites
The database must first be detached. Attempting to attach a database that has not been detached will return
an error. For more information, see Detach a Database.
When you attach a database, all data files (MDF and NDF files) must be available. If any data file has a
different path from when the database was first created or last attached, you must specify the current path
of the file.
When you attach a database, if MDF and LDF files are located in different directories and one of the paths
includes \\?\GlobalRoot, the operation will fail.
Is Attach the best choice?
We recommend that you move databases by using the ALTER DATABASE planned relocation procedure instead of
using detach and attach, when moving database files within the same instance. For more information, see Move
User Databases.
We don't recommend using detach and attach for backup and recovery. There are no transaction log backups, and
it is possible to accidentally delete files.
Security
File access permissions are set during a number of database operations, including detaching or attaching a
database. For information about file permissions that are set whenever a database is detached and attached, see
Securing Data and Log Files from SQL Server 2008 R2 Books Online (Still a valid read!)
We recommend that you do not attach or restore databases from unknown or untrusted sources. Such databases
could contain malicious code that might execute unintended Transact-SQL code or cause errors by modifying the
schema or the physical database structure. Before you use a database from an unknown or untrusted source, run
DBCC CHECKDB on the database on a nonproduction server and also examine the code, such as stored procedures
or other user-defined code, in the database. For more information about attaching databases and information
about changes that are made to metadata when you attach a database, see Database Detach and Attach (SQL
Server).
Permissions
Requires CREATE DATABASE, CREATE ANY DATABASE, or ALTER ANY DATABASE permission.
Using SQL Server Management Studio
To Attach a Database
1. In SQL Server Management Studio Object Explorer, connect to an instance of the SQL Server Database
Engine, and then click to expand that instance view in SSMS.
2. Right-click Databases and click Attach.
3. In the Attach Databases dialog box, to specify the database to be attached, click Add; and in the Locate
Database Files dialog box, select the disk drive where the database resides and expand the directory tree
to find and select the .mdf file of the database; for example:
C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\AdventureWorks2012_Data.mdf
IMPORTANT
Trying to select a database that is already attached generates an error.
Databases to attach
Displays information about the selected databases.
<no column header>
Displays an icon indicating the status of the attach operation. The possible icons are described in the Status
description, below).
MDF File Location
Displays the path and file name of the selected MDF file.
Database Name
Displays the name of the database.
Attach As
Optionally, specifies a different name for the database to attach as.
Owner
Provides a drop-down list of possible database owners from which you can optionally select a different
owner.
Status
Displays the status of the database according to the following table.
ICON | STATUS TEXT | DESCRIPTION
(No icon) | (No text) | Attach operation has not been started or may be pending for this object. This is the default when the dialog is opened.
Green, right-pointing triangle | In progress | Attach operation has been started but it is not complete.
Green check mark | Success | The object has been attached successfully.
Red circle containing a white cross | Error | Attach operation encountered an error and did not complete successfully.
Circle containing two black quadrants (on left and right) and two white quadrants (on top and bottom) | Stopped | Attach operation was not completed successfully because the user stopped the operation.
Circle containing a curved arrow pointing counter-clockwise | Rolled Back | Attach operation was successful but it has been rolled back due to an error during attachment of another object.
Message
Displays either a blank message or a "File not found" hyperlink.
Add
Find the necessary main database files. When the user selects an .mdf file, applicable information is
automatically filled in the respective fields of the Databases to attach grid.
Remove
Removes the selected file from the Databases to attach grid.
" " database details
Displays the names of the files to be attached. To verify or change the pathname of a file, click the Browse
button (…).
NOTE
If a file does not exist, the Message column displays "Not found." If a log file is not found, it exists in another
directory or has been deleted. You need to either update the file path in the database details grid to point to the
correct location or remove the log file from the grid. If an .ndf data file is not found, you need to update its path in
the grid to point to the correct location.
Original File Name
Displays the name of the attached file belonging to the database.
File Type
Indicates the type of file, Data or Log.
Current File Path
Displays the path to the selected database file. The path can be edited manually.
Message
Displays either a blank message or a "File not found" hyperlink.
Using Transact-SQL
To attach a database
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Use the CREATE DATABASE statement with the FOR ATTACH clause.
Copy and paste the following example into the query window and click Execute. This example attaches the
files of the AdventureWorks2012 database and renames the database to MyAdventureWorks .
CREATE DATABASE MyAdventureWorks
ON (FILENAME = 'C:\MySQLServer\AdventureWorks_Data.mdf'),
(FILENAME = 'C:\MySQLServer\AdventureWorks_Log.ldf')
FOR ATTACH;
NOTE
Alternatively, you can use the sp_attach_db or sp_attach_single_file_db stored procedure. However, these procedures
will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and
plan to modify applications that currently use this feature. We recommend that you use CREATE DATABASE … FOR
ATTACH instead.
Follow Up: After Upgrading a SQL Server Database
After you upgrade a database by using the attach method, the database becomes available immediately and is
automatically upgraded. If the database has full-text indexes, the upgrade process either imports, resets, or
rebuilds them, depending on the setting of the Full-Text Upgrade Option server property. If the upgrade option
is set to Import or Rebuild, the full-text indexes will be unavailable during the upgrade. Depending the amount of
data being indexed, importing can take several hours, and rebuilding can take up to ten times longer. Note also
that when the upgrade option is set to Import, if a full-text catalog is not available, the associated full-text indexes
are rebuilt.
If the compatibility level of a user database is 100 or higher before upgrade, it remains the same after upgrade. If
the compatibility level is 90 before upgrade, in the upgraded database, the compatibility level is set to 100, which
is the lowest supported compatibility level in SQL Server 2016. For more information, see ALTER DATABASE
Compatibility Level (Transact-SQL).
NOTE
If you are attaching a database from an instance running SQL Server 2014 or below which had Change Data Capture (CDC)
enabled, you will also need to execute the command below to upgrade the Change Data Capture (CDC) metadata.
USE <database name>
EXEC sys.sp_cdc_vupgrade
See Also
CREATE DATABASE (SQL Server Transact-SQL)
Detach a Database
Add Data or Log Files to a Database
3/24/2017 • 3 min to read • Edit Online
This topic describes how to add data or log files to a database in SQL Server 2016 by using SQL Server
Management Studio or Transact-SQL.
In This Topic
Before you begin:
Limitations and Restrictions
Security
To add data or log files to a database, using:
SQL Server Management Studio
Transact-SQL
Before You Begin
Limitations and Restrictions
You cannot add or remove a file while a BACKUP statement is running.
A maximum of 32,767 files and 32,767 filegroups can be specified for each database.
Security
Permissions
Requires ALTER permission on the database.
Using SQL Server Management Studio
To add data or log files to a database
1. In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that
instance.
2. Expand Databases, right-click the database from which to add the files, and then click Properties.
3. In the Database Properties dialog box, select the Files page.
4. To add a data or transaction log file, click Add.
5. In the Database files grid, enter a logical name for the file. The file name must be unique within the
database.
6. Select the file type, data or log.
7. For a data file, select the filegroup in which the file should be included from the list, or select <new
filegroup> to create a new filegroup. Transaction logs cannot be put in filegroups.
8. Specify the initial size of the file. Make the data file as large as possible, based on the maximum amount of
data you expect in the database.
9. To specify how the file should grow, click (…) in the Autogrowth column. Select from the following
options:
a. To allow for the currently selected file to grow as more data space is required, select the Enable
Autogrowth check box and then select from the following options:
b. To specify that the file should grow by fixed increments, select In Megabytes and specify a value.
c. To specify that the file should grow by a percentage of the current file size, select In Percent and
specify a value.
10. To specify the maximum file size limit, select from the following options:
a. To specify the maximum size the file should be able to grow to, select Restricted File Growth (MB)
and specify a value.
b. To allow for the file to grow as much as needed, select Unrestricted File Growth.
c. To prevent the file from growing, clear the Enable Autogrowth check box. The size of the file will
not grow beyond the value specified in the Initial Size (MB) column.
NOTE
The maximum database size is determined by the amount of disk space available and the licensing limits determined
by the version of SQL Server that you are using.
11. Specify the path for the file location. The specified path must exist before adding the file.
NOTE
By default, the data and transaction logs are put on the same drive and path to accommodate single-disk systems,
but may not be optimal for production environments. For more information, see Database Files and Filegroups.
12. Click OK.
Using Transact-SQL
To add data or log files to a database
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. The example adds a
filegroup with two files to a database. The example creates the filegroup Test1FG1 in the
AdventureWorks2012 database and adds two 5-MB files to the filegroup.
USE master
GO
ALTER DATABASE AdventureWorks2012
ADD FILEGROUP Test1FG1;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
NAME = test1dat3,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat3.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
),
(
NAME = test1dat4,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat4.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
)
TO FILEGROUP Test1FG1;
GO
For more examples, see ALTER DATABASE File and Filegroup Options (Transact-SQL).
See Also
Database Files and Filegroups
Delete Data or Log Files from a Database
Increase the Size of a Database
Change the Configuration Settings for a Database
3/24/2017 • 1 min to read • Edit Online
This topic describes how to change database-level options in SQL Server 2016 by using SQL Server Management
Studio or Transact-SQL. These options are unique to each database and do not affect other databases.
In This Topic
Before you begin:
Limitations and Restrictions
Security
To change the option settings for a database, using:
SQL Server Management Studio
Transact-SQL
Before You Begin
Limitations and Restrictions
Only the system administrator, the database owner, and members of the sysadmin and dbcreator fixed server roles
and the db_owner fixed database role can modify these options.
Security
Permissions
Requires ALTER permission on the database.
Using SQL Server Management Studio
To change the option settings for a database
1. In Object Explorer, connect to a Database Engine instance, expand the server, expand Databases, right-click
a database, and then click Properties.
2. In the Database Properties dialog box, click Options to access most of the configuration settings. File and
filegroup configurations, mirroring and log shipping are on their respective pages.
Using Transact-SQL
To change the option settings for a database
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example sets the
recovery model and data page verification options for the AdventureWorks2012 sample database.
USE master;
GO
ALTER DATABASE AdventureWorks2012
SET RECOVERY FULL, PAGE_VERIFY CHECKSUM;
GO
For more examples, see ALTER DATABASE SET Options (Transact-SQL).
See Also
ALTER DATABASE Compatibility Level (Transact-SQL)
ALTER DATABASE Database Mirroring (Transact-SQL)
ALTER DATABASE SET HADR (Transact-SQL)
Rename a Database
Shrink a Database
Create a Database
3/24/2017 • 3 min to read • Edit Online
This topic describes how to create a database in SQL Server 2016 by using SQL Server Management Studio or
Transact-SQL.
In This Topic
Before you begin:
Limitations and Restrictions
Prerequisites
Recommendations
Security
To create a database, using:
SQL Server Management Studio
Transact-SQL
Before You Begin
Limitations and Restrictions
A maximum of 32,767 databases can be specified on an instance of SQL Server.
Prerequisites
The CREATE DATABASE statement must run in autocommit mode (the default transaction management mode)
and is not allowed in an explicit or implicit transaction.
Recommendations
The master database should be backed up whenever a user database is created, modified, or dropped.
When you create a database, make the data files as large as possible based on the maximum amount of
data you expect in the database.
Security
Permissions
Requires CREATE DATABASE permission in the master database, or requires CREATE ANY DATABASE, or ALTER
ANY DATABASE permission.
To maintain control over disk use on an instance of SQL Server, permission to create databases is typically limited
to a few login accounts.
Using SQL Server Management Studio
To create a database
1. In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that
instance.
2. Right-click Databases, and then click New Database.
3. In New Database, enter a database name.
4. To create the database by accepting all default values, click OK; otherwise, continue with the following
optional steps.
5. To change the owner name, click (…) to select another owner.
NOTE
The Use full-text indexing option is always checked and dimmed because, beginning in SQL Server 2008, all user
databases are full-text enabled.
6. To change the default values of the primary data and transaction log files, in the Database files grid, click
the appropriate cell and enter the new value. For more information, see Add Data or Log Files to a Database.
7. To change the collation of the database, select the Options page, and then select a collation from the list.
8. To change the recovery model, select the Options page and select a recovery model from the list.
9. To change database options, select the Options page, and then modify the database options. For a
description of each option, see ALTER DATABASE SET Options (Transact-SQL).
10. To add a new filegroup, click the Filegroups page. Click Add and then enter the values for the filegroup.
11. To add an extended property to the database, select the Extended Properties page.
a. In the Name column, enter a name for the extended property.
b. In the Value column, enter the extended property text. For example, enter one or more statements
that describe the database.
12. To create the database, click OK.
Using Transact-SQL
To create a database
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example creates the
database Sales. Because the keyword PRIMARY is not used, the first file (Sales_dat) becomes the primary
file. Because neither MB nor KB is specified in the SIZE parameter for the Sales_dat file, it uses MB and is
allocated in megabytes. The Sales_log file is allocated in megabytes because the MB suffix is explicitly
stated in the SIZE parameter.
USE master ;
GO
CREATE DATABASE Sales
ON
( NAME = Sales_dat,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\saledat.mdf',
SIZE = 10,
MAXSIZE = 50,
FILEGROWTH = 5 )
LOG ON
( NAME = Sales_log,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\salelog.ldf',
SIZE = 5MB,
MAXSIZE = 25MB,
FILEGROWTH = 5MB ) ;
GO
For more examples, see CREATE DATABASE (SQL Server Transact-SQL).
See Also
Database Files and Filegroups
Database Detach and Attach (SQL Server)
ALTER DATABASE (Transact-SQL)
Add Data or Log Files to a Database
Delete a Database
3/24/2017 • 1 min to read • Edit Online
This topic describes how to delete a user-defined database in SQL Server Management Studio in SQL Server 2016
by using SQL Server Management Studio or Transact-SQL.
In This Topic
Before you begin:
Limitations and Restrictions
Prerequisites
Recommendations
Security
To delete a database, using:
SQL Server Management Studio
Transact-SQL
Follow Up: After deleting a database
Before You Begin
Limitations and Restrictions
System databases cannot be deleted.
Prerequisites
Delete any database snapshots that exist on the database. For more information, see Drop a Database
Snapshot (Transact-SQL).
If the database is involved in log shipping, remove log shipping.
If the database is published for transactional replication, or published or subscribed to merge replication,
remove replication from the database.
Recommendations
Consider taking a full backup of the database. A deleted database can be re-created only by restoring a backup.
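For example, the following statement is a minimal sketch of taking a full backup before the database is deleted; the backup path and backup set name are hypothetical and should be adjusted for your environment.
BACKUP DATABASE Sales
TO DISK = N'C:\Backups\Sales_BeforeDrop.bak'   -- hypothetical backup location
WITH INIT, NAME = N'Sales full backup before drop';
GO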
Security
Permissions
To execute DROP DATABASE, at a minimum, a user must have CONTROL permission on the database.
Using SQL Server Management Studio
To delete a database
1. In Object Explorer, connect to an instance of the SQL Server Database Engine, and then expand that
instance.
2. Expand Databases, right-click the database to delete, and then click Delete.
3. Confirm the correct database is selected, and then click OK.
Using Transact-SQL
To delete a database
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. The example removes the
Sales and NewSales databases.
USE master ;
GO
DROP DATABASE Sales, NewSales ;
GO
Follow Up: After deleting a database
Back up the master database. If master must be restored, any database that has been deleted since the last backup
of master will still have references in the system catalog views and may cause error messages to be raised.
See Also
CREATE DATABASE (SQL Server Transact-SQL)
ALTER DATABASE (Transact-SQL)
Delete Data or Log Files from a Database
3/24/2017 • 1 min to read • Edit Online
This topic describes how to delete data or log files in SQL Server 2016 by using SQL Server Management Studio
or Transact-SQL.
In This Topic
Before you begin:
Prerequisites
Security
To delete data or logs files from a database, using:
SQL Server Management Studio
Transact-SQL
Before You Begin
Prerequisites
A file must be empty before it can be deleted. For more information, see Shrink a File.
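For example, the following statements are a minimal sketch of emptying a data file by migrating its data to the other files in the same filegroup before the file is removed; it assumes the test1dat4 file used later in this topic.
USE AdventureWorks2012;
GO
-- Move all data out of the file so that it can be removed.
DBCC SHRINKFILE (test1dat4, EMPTYFILE);
GO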
Security
Permissions
Requires ALTER permission on the database.
Using SQL Server Management Studio
To delete data or log files from a database
1. In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that
instance.
2. Expand Databases, right-click the database from which to delete the file, and then click Properties.
3. Select the Files page.
4. In the Database files grid, select the file to delete and then click Remove.
5. Click OK.
Using Transact-SQL
To delete data or log files from a database
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example removes the
file test1dat4 .
USE master;
GO
ALTER DATABASE AdventureWorks2012
REMOVE FILE test1dat4;
GO
For more examples, see ALTER DATABASE File and Filegroup Options (Transact-SQL).
See Also
Shrink a Database
Add Data or Log Files to a Database
Display Data and Log Space Information for a
Database
3/24/2017 • 1 min to read • Edit Online
This topic describes how to display the data and log space information for a database in SQL Server 2016 by using
SQL Server Management Studio or Transact-SQL.
Before You Begin
Security
Permissions
Permission to execute sp_spaceused is granted to the public role. Only members of the db_owner fixed database
role can specify the @updateusage parameter.
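For example, the following call is a minimal sketch that forces the space usage information to be updated before it is reported; run it in the context of the database that you want to report on.
EXEC sp_spaceused @updateusage = N'TRUE';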
Using SQL Server Management Studio
To display data and log space information for a database
1. In Object Explorer, connect to an instance of SQL Server and then expand that instance.
2. Expand Databases.
3. Right-click a database, point to Reports, point to Standard Reports, and then click Disk Usage.
Using Transact-SQL
To display data and log space information for a database by using sp_spaceused
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example uses the
sp_spaceused system stored procedure to report disk space information for the Vendor table and its
indexes.
USE AdventureWorks2012;
GO
EXEC sp_spaceused N'Purchasing.Vendor';
GO
To display data and log space information for a database by querying sys.database_files
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example queries the
sys.database_files catalog view to return specific information about the data and log files in the
AdventureWorks2012 database.
USE AdventureWorks2012;
GO
SELECT file_id, name, type_desc, physical_name, size, max_size
FROM sys.database_files ;
GO
See Also
SELECT (Transact-SQL)
sys.database_files (Transact-SQL)
sp_spaceused (Transact-SQL)
Add Data or Log Files to a Database
Delete Data or Log Files from a Database
Increase the Size of a Database
3/24/2017 • 1 min to read • Edit Online
This topic describes how to increase the size of a database in SQL Server 2016 by using SQL Server Management
Studio or Transact-SQL. The database is expanded by either increasing the size of an existing data or log file or by
adding a new file to the database.
In This Topic
Before you begin:
Limitations and Restrictions
Security
To increase the size of a database, using:
SQL Server Management Studio
Transact-SQL
Before You Begin
Limitations and Restrictions
You cannot add or remove a file while a BACKUP statement is running.
Security
Permissions
Requires ALTER permission on the database.
Using SQL Server Management Studio
To increase the size of a database
1. In Object Explorer, connect to an instance of the SQL Server Database Engine, and then expand that
instance.
2. Expand Databases, right-click the database to increase, and then click Properties.
3. In Database Properties, select the Files page.
4. To increase the size of an existing file, increase the value in the Initial Size (MB) column for the file. You
must increase the size of the database by at least 1 megabyte.
5. To increase the size of the database by adding a new file, click Add and then enter the values for the new
file. For more information, see Add Data or Log Files to a Database.
6. Click OK.
Using Transact-SQL
To increase the size of a database
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example increases the
size of the file test1dat3 .
USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILE
(NAME = test1dat3,
SIZE = 20MB);
GO
For more examples, see ALTER DATABASE File and Filegroup Options (Transact-SQL).
See Also
Add Data or Log Files to a Database
Shrink a Database
Manage Metadata When Making a Database
Available on Another Server
3/29/2017 • 18 min to read • Edit Online
This topic is relevant in the following situations:
Configuring the availability replicas of an Always On availability group.
Setting up database mirroring for a database.
When preparing to change roles between primary and secondary servers in a log shipping configuration.
Restoring a database to another server instance.
Attaching a copy of a database on another server instance.
Some applications depend on information, entities, and/or objects that are outside of the scope of a single
user database. Typically, an application has dependencies on the master and msdb databases, and also on
the user database. Anything stored outside of a user database that is required for the correct functioning of
that database must be made available on the destination server instance. For example, the logins for an
application are stored as metadata in the master database, and they must be re-created on the destination
server. If an application or database maintenance plan depends on SQL Server Agent jobs, whose metadata
is stored in the msdb database, you must re-create those jobs on the destination server instance. Similarly,
the metadata for a server-level trigger is stored in master.
When you move the database for an application to another server instance, you must re-create all the
metadata of the dependent entities and objects in master and msdb on the destination server instance. For
example, if a database application uses server-level triggers, just attaching or restoring the database on the
new system is not enough. The database will not work as expected unless you manually re-create the
metadata for those triggers in the master database.
Information, Entities, and Objects That Are Stored Outside of User
Databases
The remainder of this topic summarizes the potential issues that might affect a database that is being made
available on another server instance. You might have to re-create one or more of the types of information, entities,
or objects listed in the following list. To see a summary, click the link for the item.
Server configuration settings
Credentials
Cross-database queries
Database ownership
Distributed queries/linked servers
Encrypted data
User-defined error messages
Event notifications and Windows Management Instrumentation (WMI) events (at server level)
Extended stored procedures
Full-text engine for SQL Server properties
Jobs
Logins
Permissions
Replication settings
Service Broker applications
Startup procedures
Triggers (at server level)
Server Configuration Settings
SQL Server 2005 and later versions selectively install and start key services and features. This helps reduce the
attackable surface area of a system. In the default configuration of new installations, many features are not
enabled. If the database relies on any service or feature that is off by default, this service or feature must be
enabled on the destination server instance.
For more information about these settings and enabling or disabling them, see Server Configuration Options (SQL
Server).
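For example, the following statements are a minimal sketch of enabling a feature that is off by default; Database Mail is used only as an illustration, and the feature you need to enable depends on your application.
-- Enable a server feature that is off by default (Database Mail is used as an illustrative example).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;
GO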
Credentials
A credential is a record that contains the authentication information that is required to connect to a resource
outside SQL Server. Most credentials consist of a Windows login and password.
For more information about this feature, see Credentials (Database Engine).
NOTE: SQL Server Agent Proxy accounts use credentials. To learn the credential ID of a proxy account, use the
sysproxies system table.
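For example, the following statement is a minimal sketch of re-creating a credential on the destination server instance; the credential name, Windows account, and secret are hypothetical.
CREATE CREDENTIAL AppFileShareCredential
    WITH IDENTITY = N'CONTOSO\ServiceAccount',   -- hypothetical Windows account
    SECRET = N'<password>';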
Cross-Database Queries
The DB_CHAINING and TRUSTWORTHY database options are OFF by default. If either of these is set to ON for the
original database, you may have to enable them on the database on the destination server instance. For more
information, see ALTER DATABASE (Transact-SQL).
Attach-and-detach operations disable cross-database ownership chaining for the database. For information about
how to enable chaining, see cross db ownership chaining Server Configuration Option.
For more information, see also Set Up a Mirror Database to Use the Trustworthy Property (Transact-SQL)
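For example, the following statements are a minimal sketch of re-enabling the DB_CHAINING and TRUSTWORTHY options on the destination server instance, assuming they were ON for the original database; Sales is a hypothetical database name.
ALTER DATABASE Sales SET DB_CHAINING ON;
ALTER DATABASE Sales SET TRUSTWORTHY ON;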
Database Ownership
When a database is restored on another computer, the SQL Server login or Windows user who initiated the
restore operation becomes the owner of the new database automatically. When the database is restored, the
system administrator or the new database owner can change database ownership.
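For example, the following statement is a minimal sketch of changing the database owner after a restore; Sales and NewOwnerLogin are hypothetical names.
ALTER AUTHORIZATION ON DATABASE::Sales TO NewOwnerLogin;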
Distributed Queries and Linked Servers
Distributed queries and linked servers are supported for OLE DB applications. Distributed queries access data from
multiple heterogeneous data sources on either the same or different computers. A linked server configuration
enables SQL Server to execute commands against OLE DB data sources on remote servers. For more information
about these features, see Linked Servers (Database Engine).
Encrypted Data
If the database you are making available on another server instance contains encrypted data and if the database
master key is protected by the service master key on the original server, it might be necessary to re-create the
service master key encryption. The database master key is a symmetric key that is used to protect the private keys
of certificates and asymmetric keys in an encrypted database. When created, the database master key is encrypted
by using the Triple DES algorithm and a user-supplied password.
To enable the automatic decryption of the database master key on a server instance, a copy of this key is
encrypted by using the service master key. This encrypted copy is stored in both the database and in master.
Typically, the copy stored in master is silently updated whenever the master key is changed. SQL Server first tries
to decrypt the database master key with the service master key of the instance. If that decryption fails, SQL Server
searches the credential store for master key credentials that have the same family GUID as the database for which
it requires the master key. SQL Server then tries to decrypt the database master key with each matching credential
until the decryption succeeds or there are no more credentials. A master key that is not encrypted by the service
master key must be opened by using the OPEN MASTER KEY statement and a password.
When an encrypted database is copied, restored, or attached to a new instance of SQL Server, a copy of the
database master key encrypted by the service master key is not stored in master on the destination server
instance. On the destination server instance, you must open the master key of the database. To open the master
key, execute the following statement: OPEN MASTER KEY DECRYPTION BY PASSWORD ='password'. We
recommend that you then enable automatic decryption of the database master key by executing the following
statement: ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY. This ALTER MASTER KEY statement
provisions the server instance with a copy of the database master key that is encrypted with the service master
key. For more information, see OPEN MASTER KEY (Transact-SQL) and ALTER MASTER KEY (Transact-SQL).
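For example, the following statements are a minimal sketch of the sequence described above; run them in the context of the attached or restored database, and replace the password with the one that protects the database master key.
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<password>';
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
CLOSE MASTER KEY;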
For information about how to enable automatic decryption of the database master key of a mirror database, see
Set Up an Encrypted Mirror Database.
For more information, see also:
Encryption Hierarchy
Set Up an Encrypted Mirror Database
Create Identical Symmetric Keys on Two Servers
User-defined Error Messages
User-defined error messages reside in the sys.messages catalog view. This catalog view is stored in master. If a
database application depends on user-defined error messages and the database is made available on another
server instance, use sp_addmessage to add those user-defined messages on the destination server instance.
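For example, the following call is a minimal sketch of re-creating a user-defined message on the destination server instance; the message number and text are hypothetical.
EXEC sp_addmessage @msgnum = 50010, @severity = 16,
    @msgtext = N'Customer record %s was not found.',
    @lang = 'us_english';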
Event Notifications and Windows Management Instrumentation (WMI)
Events (at Server Level)
Server-Level Event Notifications
Server-level event notifications are stored in msdb. Therefore, if a database application relies on a server-level
event notifications, that event notification must be re-created on the destination server instance. To view the event
notifications on a server instance, use the sys.server_event_notifications catalog view. For more information, see
Event Notifications.
Additionally, event notifications are delivered by using Service Broker. Routes for incoming messages are not
included in the database that contains a service. Instead, explicit routes are stored in msdb. If your service uses an
explicit route in the msdb database to route incoming messages to the service, when you attach a database in a
different instance, you must re-create this route.
Windows Management Instrumentation (WMI ) Events
The WMI Provider for Server Events lets you use the Windows Management Instrumentation (WMI) to monitor
events in SQL Server. Any server-level events exposed through the WMI provider on which an application or
database relies must be defined on the computer of the destination server instance. The WMI event provider
creates event notifications with a target service that is defined in msdb.
NOTE: For more information, see WMI Provider for Server Events Concepts.
To create a WMI alert using SQL Server Management Studio
Create a WMI Event Alert
How Event Notifications Work for a Mirrored Database
Cross-database delivery of event notifications that involves a mirrored database is remote, by definition, because
the mirrored database can fail over. Service Broker provides special support for mirrored databases, in the form of
mirrored routes. A mirrored route has two addresses: one for the principal server instance and one for the mirror
server instance.
By setting up mirrored routes, you make Service Broker routing aware of database mirroring. The mirrored routes
enable Service Broker to transparently redirect conversations to the current principal server instance. For example,
consider a service, Service_A, which is hosted by a mirrored database, Database_A. Assume that you need another
service, Service_B, which is hosted by Database_B, to have a dialog with Service_A. For this dialog to be possible,
Database_B must contain a mirrored route for Service_A. In addition, Database_A must contain a nonmirrored TCP
transport route to Service_B, which, unlike a local route, remains valid after failover. These routes enable ACKs to
come back after a failover. Because the service of the sender is always named in the same manner, the route must
specify the broker instance.
The requirement for mirrored routes applies regardless of whether the service in the mirrored database is the
initiator service or the target service:
If the target service is in the mirrored database, the initiator service must have a mirrored route back to the
target. However, the target can have a regular route back to the initiator.
If the initiator service is in the mirrored database, the target service must have a mirrored route back to the
initiator to deliver acknowledgements and replies. However, the initiator can have a regular route to the target.
Extended Stored Procedures
IMPORTANT! This feature will be removed in a future version of Microsoft SQL Server. Avoid using this
feature in new development work, and plan to modify applications that currently use this feature. Use CLR
Integration instead.
Extended stored procedures are programmed by using the SQL Server Extended Stored Procedure API. A member
of the sysadmin fixed server role can register an extended stored procedure with an instance of SQL Server and
grant permission to users to execute the procedure. Extended stored procedures can be added only to the master
database.
Extended stored procedures run directly in the address space of an instance of SQL Server, and they may produce
memory leaks or other problems that reduce the performance and reliability of the server. You should consider
storing extended stored procedures in an instance of SQL Server that is separate from the instance that contains
the referenced data. You should also consider using distributed queries to access the database.
IMPORTANT!! Before adding extended stored procedures to the server and granting EXECUTE permissions to
other users, the system administrator should thoroughly review each extended stored procedure to make sure
that it does not contain harmful or malicious code.
For more information, see GRANT Object Permissions (Transact-SQL), DENY Object Permissions (Transact-SQL),
and REVOKE Object Permissions (Transact-SQL).
Full-Text Engine for SQL Server Properties
Properties are set on the Full-Text Engine by sp_fulltext_service. Make sure that the destination server instance has
the required settings for these properties. For more information about these properties, see
FULLTEXTSERVICEPROPERTY (Transact-SQL).
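For example, the following query is a minimal sketch of comparing two of these properties between the original and destination server instances.
SELECT FULLTEXTSERVICEPROPERTY('LoadOSResources') AS load_os_resources,
       FULLTEXTSERVICEPROPERTY('VerifySignature') AS verify_signature;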
Additionally, if the word breakers and stemmers component or full-text search filters component have different
versions on the original and destination server instances, full-text index and queries may behave differently. Also,
the thesaurus is stored in instance-specific files. You must either transfer a copy of those files to an equivalent
location on the destination server instance or re-create them on new instance.
NOTE: When you attach a SQL Server 2005 database that contains full-text catalog files onto a SQL Server
2016 server instance, the catalog files are attached from their previous location along with the other database
files, the same as in SQL Server 2005. For more information, see Upgrade Full-Text Search.
For more information, see also:
Back Up and Restore Full-Text Catalogs and Indexes
Database Mirroring and Full-Text Catalogs (SQL Server)
Jobs
If the database relies on SQL Server Agent jobs, you will have to re-create them on the destination server instance.
Jobs depend on their environments. If you plan to re-create an existing job on the destination server instance, the
destination server instance might have to be modified to match the environment of that job on the original server
instance. The following environmental factors are significant:
The login used by the job
To create or execute SQL Server Agent jobs, you must first add any SQL Server logins required by the job to
the destination server instance. For more information, see Configure a User to Create and Manage SQL
Server Agent Jobs.
SQL Server Agent service startup account
The service startup account defines the Microsoft Windows account in which SQL Server Agent runs and its
network permissions. SQL Server Agent runs as a specified user account. The context of the Agent service
affects the settings for the job and its run environment. The account must have access to the resources,
such as network shares, required by the job. For information about how to select and modify the service
startup account, see Select an Account for the SQL Server Agent Service.
To operate correctly, the service startup account must be configured to have the correct domain, file
system, and registry permissions. Also, a job might require a shared network resource that must be
configured for the service account. For information, see Configure Windows Service Accounts and
Permissions.
The SQL Server Agent service, which is associated with a specific instance of SQL Server, has its own registry
hive, and its jobs typically have dependencies on one or more of the settings in this registry hive. To behave
as intended, a job requires those registry settings. If you use a script to re-create a job in another SQL
Server Agent service, its registry might not have the correct settings for that job. For re-created jobs to
behave correctly on a destination server instance, the original and destination SQL Server Agent services
should have the same registry settings.
Caution
Changing registry settings on the destination SQL Server Agent service to handle a re-created job could be
problematic if the current settings are required by other jobs. Furthermore, incorrectly editing the registry
can severely damage your system. Before you make changes to the registry, we recommend that you back
up any valued data on the computer.
SQL Server Agent Proxies
A SQL Server Agent proxy defines the security context for a specified job step. For a job to run on the
destination server instance, all the proxies it requires must be manually re-created on that instance. For
more information, see Create a SQL Server Agent Proxy and Troubleshoot Multiserver Jobs That Use
Proxies.
For more information, see also:
Implement Jobs
Management of Logins and Jobs After Role Switching (SQL Server) (for database mirroring)
Configure Windows Service Accounts and Permissions (when you install an instance of SQL Server)
Configure SQL Server Agent (when you install an instance of SQL Server)
Implement SQL Server Agent Security
To view existing jobs and their properties
Monitor Job Activity
sp_help_job (Transact-SQL)
View Job Step Information
dbo.sysjobs (Transact-SQL)
To create a job
Create a Job
Best Practices for Using a Script to Re-create a Job
We recommend that you start by scripting a simple job, re-creating the job on the other SQL Server Agent service,
and running the job to see whether it works as intended. This will let you identify incompatibilities and try to
resolve them. If a scripted job does not work as intended in its new environment, we recommend that you create
an equivalent job that works correctly in that environment.
Logins
Logging into an instance of SQL Server requires a valid SQL Server login. This login is used in the authentication
process that verifies whether the principal can connect to the instance of SQL Server. A database user for which
the corresponding SQL Server login is undefined or is incorrectly defined on a server instance cannot log in to the
instance. Such a user is said to be an orphaned user of the database on that server instance. A database user can
become orphaned after a database is restored, attached, or copied to a different instance of SQL Server.
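For example, the following statements are a minimal sketch of detecting and re-mapping an orphaned user on the destination server instance; AppUser and AppLogin are hypothetical names.
-- Report database users that have no matching server login.
EXEC sp_change_users_login @Action = 'Report';
GO
-- Re-map a hypothetical orphaned database user to an existing login.
ALTER USER AppUser WITH LOGIN = AppLogin;
GO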
To generate a script for some or all the objects in the original copy of the database, you can use the Generate
Scripts Wizard, and in the Choose Script Options dialog box, set the Script Logins option to True.
NOTE: For information about how to set up logins for a mirrored database, see Set Up Login Accounts for
Database Mirroring or Always On Availability Groups (SQL Server) and Management of Logins and Jobs After
Role Switching (SQL Server).
Permissions
The following types of permissions might be affected when a database is made available on another server
instance.
GRANT, REVOKE, or DENY permissions on system objects
GRANT, REVOKE, or DENY permissions on server instance (server-level permissions)
GRANT, REVOKE, and DENY Permissions on System Objects
Permissions on system objects such as stored procedures, extended stored procedures, functions, and views, are
stored in the master database and must be configured on the destination server instance.
To generate a script for some or all the objects in the original copy of the database, you can use the Generate
Scripts Wizard, and in the Choose Script Options dialog box, set the Script Object-Level Permissions option to
True.
IMPORTANT!! If you script logins, the passwords are not scripted. If you have logins that use SQL Server
Authentication, you have to modify the script on the destination.
System objects are visible in the sys.system_objects catalog view. The permissions on system objects are visible in
the sys.database_permissions catalog view in the master database. For information about querying these catalog
views and granting system-object permissions, see GRANT System Object Permissions (Transact-SQL). For more
information, see REVOKE System Object Permissions (Transact-SQL) and DENY System Object Permissions
(Transact-SQL).
GRANT, REVOKE, and DENY Permissions on a Server Instance
Permissions at the server scope are stored in the master database and must be configured on the destination
server instance. For information about the server permissions of a server instance, query the
sys.server_permissions catalog view; for information about server principals, query the sys.server_principals
catalog view; and for information about membership of server roles, query the sys.server_role_members catalog
view.
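For example, the following query is a minimal sketch that lists server-level permissions together with the principals that hold them.
SELECT pr.name, pr.type_desc, pe.permission_name, pe.state_desc
FROM sys.server_permissions AS pe
JOIN sys.server_principals AS pr
    ON pe.grantee_principal_id = pr.principal_id;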
For more information, see GRANT Server Permissions (Transact-SQL), REVOKE Server Permissions (Transact-SQL), and DENY Server Permissions (Transact-SQL).
Server-Level Permissions for a Certificate or Asymmetric Key
Server-level permissions cannot be granted directly to a certificate or asymmetric key. Instead, server-level
permissions are granted to a mapped login that is created exclusively for a specific certificate or asymmetric key.
Therefore, each certificate or asymmetric key that requires server-level permissions requires its own
certificate-mapped login or asymmetric key-mapped login. To grant server-level permissions for a certificate or
asymmetric key, grant the permissions to its mapped login.
NOTE: A mapped login is used only for authorization of code signed with the corresponding certificate or
asymmetric key. Mapped logins cannot be used for authentication.
The mapped login and its permissions both reside in master. If a certificate or asymmetric key resides in a
database other than master, you must re-create it in master and map it to a login. If you move, copy, or restore
the database to another server instance, you must re-create its certificate or asymmetric key in the master
database of the destination server instance, map to a login, and grant the required server-level permissions to the
login.
To create a certificate or asymmetric key
CREATE CERTIFICATE (Transact-SQL)
CREATE ASYMMETRIC KEY (Transact-SQL)
To map a certificate or asymmetric key to a login
CREATE LOGIN (Transact-SQL)
To assign permissions to the mapped login
GRANT Server Permissions (Transact-SQL)
For more information about certificates and asymmetric keys, see Encryption Hierarchy.
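For example, the following statements are a minimal sketch of re-creating a certificate in master on the destination server instance, mapping it to a login, and granting a server-level permission; the certificate name, file path, login name, and permission are hypothetical.
USE master;
GO
CREATE CERTIFICATE AppSigningCert
    FROM FILE = 'C:\Certs\AppSigningCert.cer';   -- hypothetical certificate file
CREATE LOGIN AppSigningLogin FROM CERTIFICATE AppSigningCert;
GRANT VIEW SERVER STATE TO AppSigningLogin;
GO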
Replication Settings
If you restore a backup of a replicated database to another server or database, replication settings cannot be
preserved. In this case, you must re-create all publications and subscriptions after backups are restored. To make
this process easier, create scripts for your current replication settings and, also, for the enabling and disabling of
replication. To help re-create your replication settings, copy these scripts and change the server name references
to work for the destination server instance.
For more information, see Back Up and Restore Replicated Databases, Database Mirroring and Replication (SQL
Server), and Log Shipping and Replication (SQL Server).
Service Broker Applications
Many aspects of a Service Broker application move with the database. However, some aspects of the application
must be re-created or reconfigured in the new location.
Startup Procedures
A startup procedure is a stored procedure that is marked for automatic execution and is executed every time SQL
Server starts. If the database depends on any startup procedures, they must be defined on the destination server
instance and be configured to be automatically executed at startup.
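For example, the following call is a minimal sketch of marking a stored procedure for automatic execution on the destination server instance; the procedure name is hypothetical, and the procedure must exist in the master database.
EXEC sp_procoption @ProcName = N'dbo.usp_InitializeApp',   -- hypothetical startup procedure
    @OptionName = 'startup', @OptionValue = 'on';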
Triggers (at Server Level)
DDL triggers fire stored procedures in response to a variety of Data Definition Language (DDL) events. These
events primarily correspond to Transact-SQL statements that start with the keywords CREATE, ALTER, and DROP.
Certain system stored procedures that perform DDL-like operations can also fire DDL triggers.
For more information about this feature, see DDL Triggers.
See Also
Contained Databases
Copy Databases to Other Servers
Database Detach and Attach (SQL Server)
Fail Over to a Log Shipping Secondary (SQL Server)
Role Switching During a Database Mirroring Session (SQL Server)
Set Up an Encrypted Mirror Database
SQL Server Configuration Manager
Troubleshoot Orphaned Users (SQL Server)
Move Database Files
3/24/2017 • 1 min to read • Edit Online
In SQL Server, you can move system and user databases by specifying the new file location in the FILENAME
clause of the ALTER DATABASE statement. Data, log, and full-text catalog files can be moved in this way. This may
be useful in the following situations:
Failure recovery. For example, the database is in suspect mode or has shut down, because of a hardware
failure.
Planned relocation.
Relocation for scheduled disk maintenance.
In This Section
TOPIC
DESCRIPTION
Move User Databases
Describes the procedures for moving user database files and
full-text catalog files to a new location.
Move System Databases
Describes the procedures for moving system database files to
a new location.
See Also
ALTER DATABASE (Transact-SQL)
CREATE DATABASE (SQL Server Transact-SQL)
Database Detach and Attach (SQL Server)
Move User Databases
3/24/2017 • 4 min to read • Edit Online
In SQL Server, you can move the data, log, and full-text catalog files of a user database to a new location by
specifying the new file location in the FILENAME clause of the ALTER DATABASE statement. This method applies to
moving database files within the same instance of SQL Server. To move a database to another instance of SQL Server
or to another server, use backup and restore or detach and attach operations.
Considerations
When you move a database onto another server instance, to provide a consistent experience to users and
applications, you might have to re-create some or all the metadata for the database. For more information, see
Manage Metadata When Making a Database Available on Another Server Instance (SQL Server).
Some features of the SQL Server Database Engine change the way that the Database Engine stores information in
the database files. These features are restricted to specific editions of SQL Server. A database that contains these
features cannot be moved to an edition of SQL Server that does not support them. Use the
sys.dm_db_persisted_sku_features dynamic management view to list all edition-specific features that are enabled
in the current database.
The procedures in this topic require the logical name of the database files. To obtain the name, query the name
column in the sys.master_files catalog view.
Starting with SQL Server 2008 R2, full-text catalogs are integrated into the database rather than being stored in
the file system. The full-text catalogs now move automatically when you move a database.
Planned Relocation Procedure
To move a data or log file as part of a planned relocation, follow these steps:
1. Run the following statement.
ALTER DATABASE database_name SET OFFLINE;
2. Move the file or files to the new location.
3. For each file moved, run the following statement.
ALTER DATABASE database_name MODIFY FILE ( NAME = logical_name, FILENAME = 'new_path\os_file_name' );
4. Run the following statement.
ALTER DATABASE database_name SET ONLINE;
5. Verify the file change by running the following query.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'<database_name>');
Relocation for Scheduled Disk Maintenance
To relocate a file as part of a scheduled disk maintenance process, follow these steps:
1. For each file to be moved, run the following statement.
ALTER DATABASE database_name MODIFY FILE ( NAME = logical_name , FILENAME = 'new_path\os_file_name' );
2. Stop the instance of SQL Server or shut down the system to perform maintenance. For more information,
see Start, Stop, Pause, Resume, Restart the Database Engine, SQL Server Agent, or SQL Server Browser
Service.
3. Move the file or files to the new location.
4. Restart the instance of SQL Server or the server. For more information, see Start, Stop, Pause, Resume,
Restart the Database Engine, SQL Server Agent, or SQL Server Browser Service
5. Verify the file change by running the following query.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'<database_name>');
Failure Recovery Procedure
If a file must be moved because of a hardware failure, use the following steps to relocate the file to a new location.
IMPORTANT
If the database cannot be started, that is, it is in suspect mode or in an unrecovered state, only members of the sysadmin
fixed server role can move the file.
1. Stop the instance of SQL Server if it is started.
2. Start the instance of SQL Server in master-only recovery mode by entering one of the following commands
at the command prompt.
For the default (MSSQLSERVER) instance, run the following command.
NET START MSSQLSERVER /f /T3608
For a named instance, run the following command.
NET START MSSQL$instancename /f /T3608
For more information, see Start, Stop, Pause, Resume, Restart the Database Engine, SQL Server
Agent, or SQL Server Browser Service.
3. For each file to be moved, use sqlcmd commands or SQL Server Management Studio to run the following
statement.
ALTER DATABASE database_name MODIFY FILE( NAME = logical_name , FILENAME = 'new_path\os_file_name' );
For more information about how to use the sqlcmd utility, see Use the sqlcmd Utility.
4. Exit the sqlcmd utility or SQL Server Management Studio.
5. Stop the instance of SQL Server.
6. Move the file or files to the new location.
7. Start the instance of SQL Server. For example, run:
NET START MSSQLSERVER
8. Verify the file change by running the following query.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'<database_name>');
Examples
The following example moves the AdventureWorks2012 log file to a new location as part of a planned relocation.
USE master;
GO
-- Return the logical file name.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'AdventureWorks2012')
AND type_desc = N'LOG';
GO
ALTER DATABASE AdventureWorks2012 SET OFFLINE;
GO
-- Physically move the file to a new location.
-- In the following statement, modify the path specified in FILENAME to
-- the new location of the file on your server.
ALTER DATABASE AdventureWorks2012
MODIFY FILE ( NAME = AdventureWorks2012_Log,
FILENAME = 'C:\NewLoc\AdventureWorks2012_Log.ldf');
GO
ALTER DATABASE AdventureWorks2012 SET ONLINE;
GO
--Verify the new location.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'AdventureWorks2012')
AND type_desc = N'LOG';
See Also
ALTER DATABASE (Transact-SQL)
CREATE DATABASE (SQL Server Transact-SQL)
Database Detach and Attach (SQL Server)
Move System Databases
Move Database Files
BACKUP (Transact-SQL)
RESTORE (Transact-SQL)
Start, Stop, Pause, Resume, Restart the Database Engine, SQL Server Agent, or SQL Server Browser Service
Move System Databases
3/24/2017 • 7 min to read • Edit Online
This topic describes how to move system databases in SQL Server. Moving system databases may be useful in
the following situations:
Failure recovery. For example, the database is in suspect mode or has shut down because of a hardware
failure.
Planned relocation.
Relocation for scheduled disk maintenance.
The following procedures apply to moving database files within the same instance of SQL Server. To move
a database to another instance of SQL Server or to another server, use the backup and restore operation.
The procedures in this topic require the logical name of the database files. To obtain the name, query the
name column in the sys.master_files catalog view.
IMPORTANT
If you move a system database and later rebuild the master database, you must move the system database again because
the rebuild operation installs all system databases to their default location.
IMPORTANT
After moving files, the SQL Server service account must have permission to access the files in new file folder location.
Planned Relocation and Scheduled Disk Maintenance Procedure
To move a system database data or log file as part of a planned relocation or scheduled maintenance operation,
follow these steps. This procedure applies to all system databases except the master and Resource databases.
1. For each file to be moved, run the following statement.
ALTER DATABASE database_name MODIFY FILE ( NAME = logical_name , FILENAME = 'new_path\os_file_name' )
2. Stop the instance of SQL Server or shut down the system to perform maintenance. For more information,
see Start, Stop, Pause, Resume, Restart the Database Engine, SQL Server Agent, or SQL Server Browser
Service.
3. Move the file or files to the new location.
4. Restart the instance of SQL Server or the server. For more information, see Start, Stop, Pause, Resume,
Restart the Database Engine, SQL Server Agent, or SQL Server Browser Service.
5. Verify the file change by running the following query.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'<database_name>');
If the msdb database is moved and the instance of SQL Server is configured for Database Mail, complete
these additional steps.
6. Verify that Service Broker is enabled for the msdb database by running the following query.
SELECT is_broker_enabled
FROM sys.databases
WHERE name = N'msdb';
For more information about enabling Service Broker, see ALTER DATABASE (Transact-SQL).
7. Verify that Database Mail is working by sending a test mail.
Failure Recovery Procedure
If a file must be moved because of a hardware failure, follow these steps to relocate the file to a new location. This
procedure applies to all system databases except the master and Resource databases.
IMPORTANT
If the database cannot be started, that is, it is in suspect mode or in an unrecovered state, only members of the sysadmin
fixed server role can move the file.
1. Stop the instance of SQL Server if it is started.
2. Start the instance of SQL Server in master-only recovery mode by entering one of the following
commands at the command prompt. The parameters specified in these commands are case sensitive. The
commands fail when the parameters are not specified as shown.
For the default (MSSQLSERVER) instance, run the following command:
NET START MSSQLSERVER /f /T3608
For a named instance, run the following command:
NET START MSSQL$instancename /f /T3608
For more information, see Start, Stop, Pause, Resume, Restart the Database Engine, SQL Server
Agent, or SQL Server Browser Service.
3. For each file to be moved, use sqlcmd commands or SQL Server Management Studio to run the following
statement.
ALTER DATABASE database_name MODIFY FILE( NAME = logical_name , FILENAME = 'new_path\os_file_name' )
For more information about using the sqlcmd utility, see Use the sqlcmd Utility.
4. Exit the sqlcmd utility or SQL Server Management Studio.
5. Stop the instance of SQL Server. For example, run:
NET STOP MSSQLSERVER
6. Move the file or files to the new location.
7. Restart the instance of SQL Server. For example, run:
NET START MSSQLSERVER
8. Verify the file change by running the following query.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'<database_name>');
Moving the master Database
To move the master database, follow these steps.
1. From the Start menu, point to All Programs, point to Microsoft SQL Server, point to Configuration
Tools, and then click SQL Server Configuration Manager.
2. In the SQL Server Services node, right-click the instance of SQL Server (for example, SQL Server
(MSSQLSERVER)) and choose Properties.
3. In the SQL Server (instance_name) Properties dialog box, click the Startup Parameters tab.
4. In the Existing parameters box, select the –d parameter to move the master data file. In the Specify a
startup parameter box, change the parameter to the new path of the master data file, and then click Update
to save the change.
5. In the Existing parameters box, select the –l parameter to move the master log file. In the Specify a
startup parameter box, change the parameter to the new path of the master log file, and then click Update
to save the change.
The parameter value for the data file must follow the -d parameter and the value for the log file must
follow the -l parameter. The following example shows the parameter values for the default location of
the master data file.
-dC:\Program Files\Microsoft SQL Server\MSSQL<version>.MSSQLSERVER\MSSQL\DATA\master.mdf
-lC:\Program Files\Microsoft SQL Server\MSSQL<version>.MSSQLSERVER\MSSQL\DATA\mastlog.ldf
If the planned relocation for the master data file is E:\SQLData, the parameter values would be changed as
follows:
-dE:\SQLData\master.mdf
-lE:\SQLData\mastlog.ldf
6. Stop the instance of SQL Server by right-clicking the instance name and choosing Stop.
7. Move the master.mdf and mastlog.ldf files to the new location.
8. Restart the instance of SQL Server.
9. Verify the file change for the master database by running the following query.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('master');
GO
10. At this point SQL Server should run normally. However, Microsoft recommends also adjusting the registry
entry at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\instance_ID\Setup , where instance_ID
is like MSSQL13.MSSQLSERVER . In that hive, change the SQLDataRoot value to the new path. Failure to update
the registry can cause patching and upgrading to fail.
Moving the Resource Database
The location of the Resource database is <drive>:\Program Files\Microsoft SQL Server\MSSQL<version>.<instance_name>\MSSQL\Binn\. The database cannot be moved.
Follow-up: After Moving All System Databases
If you have moved all of the system databases to a new drive or volume or to another server with a different
drive letter, make the following updates.
Change the SQL Server Agent log path. If you do not update this path, SQL Server Agent will fail to start.
Change the database default location. Creating a new database may fail if the drive letter and path
specified as the default location do not exist.
Change the SQL Server Agent Log Path
1. From SQL Server Management Studio, in Object Explorer, expand SQL Server Agent.
2. Right-click Error Logs and click Configure.
3. In the Configure SQL Server Agent Error Logs dialog box, specify the new location of the
SQLAGENT.OUT file. The default location is C:\Program Files\Microsoft SQL Server\MSSQL<version>.<instance_name>\MSSQL\Log\.
Change the database default location
1. From SQL Server Management Studio, in Object Explorer, right-click the SQL Server server and click
Properties.
2. In the Server Properties dialog box, select Database Settings.
3. Under Database Default Locations, browse to the new location for both the data and log files.
4. Stop and start the SQL Server service to complete the change.
Examples
A. Moving the tempdb database
The following example moves the tempdb data and log files to a new location as part of a planned relocation.
NOTE
Because tempdb is re-created each time the instance of SQL Server is started, you do not have to physically move the data
and log files. The files are created in the new location when the service is restarted in step 3. Until the service is restarted,
tempdb continues to use the data and log files in the existing location.
1. Determine the logical file names of the tempdb database and their current location on the disk.
SELECT name, physical_name AS CurrentLocation
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');
GO
2. Change the location of each file by using ALTER DATABASE.
USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'F:\SQLLog\templog.ldf');
GO
3. Stop and restart the instance of SQL Server.
4. Verify the file change.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');
5. Delete the tempdb.mdf and templog.ldf files from the original location.
See Also
Resource Database
tempdb Database
master Database
msdb Database
model Database
Move User Databases
Move Database Files
Start, Stop, Pause, Resume, Restart the Database Engine, SQL Server Agent, or SQL Server Browser Service
ALTER DATABASE (Transact-SQL)
Rebuild System Databases
Rename a Database
3/24/2017 • 1 min to read • Edit Online
This topic describes how to rename a user-defined database in SQL Server 2016 by using SQL Server
Management Studio or Transact-SQL. The name of the database can include any characters that follow the rules
for identifiers.
In This Topic
Before you begin:
Limitations and Restrictions
Security
To rename a database, using:
SQL Server Management Studio
Transact-SQL
Follow Up: After renaming a database
Before You Begin
Limitations and Restrictions
System databases cannot be renamed.
Security
Permissions
Requires ALTER permission on the database.
Using SQL Server Management Studio
To rename a database
1. In Object Explorer, connect to an instance of the SQL Server Database Engine, and then expand that
instance.
2. Make sure that no one is using the database, and then set the database to single-user mode.
3. Expand Databases, right-click the database to rename, and then click Rename.
4. Enter the new database name, and then click OK.
Using Transact-SQL
To rename a database
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example changes the
name of the AdventureWorks2012 database to Northwind .
USE master;
GO
ALTER DATABASE AdventureWorks2012
Modify Name = Northwind ;
GO
Follow Up: After renaming a database
Back up the master database after you rename any database.
See Also
ALTER DATABASE (Transact-SQL)
Database Identifiers
Set a Database to Single-user Mode
3/24/2017 • 2 min to read • Edit Online
This topic describes how to set a user-defined database to single-user mode in SQL Server 2016 by using SQL
Server Management Studio or Transact-SQL. Single-user mode specifies that only one user at a time can access
the database and is generally used for maintenance actions.
In This Topic
Before you begin:
Limitations and Restrictions
Prerequisites
Security
To set a database to single-user mode, using:
SQL Server Management Studio
Transact-SQL
Before You Begin
Limitations and Restrictions
If other users are connected to the database at the time that you set the database to single-user mode, their
connections to the database will be closed without warning.
The database remains in single-user mode even if the user that set the option logs off. At that point, a
different user, but only one, can connect to the database.
Prerequisites
Before you set the database to SINGLE_USER, verify that the AUTO_UPDATE_STATISTICS_ASYNC option is set to
OFF. When this option is set to ON, the background thread that is used to update statistics takes a connection
against the database, and you will be unable to access the database in single-user mode. For more information,
see ALTER DATABASE SET Options (Transact-SQL).
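For example, the following sketch (assuming the AdventureWorks2012 sample database) checks the current setting in sys.databases and then turns the option off:
-- 1 in is_auto_update_stats_async_on means the option is ON.
SELECT name, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
ALTER DATABASE AdventureWorks2012
SET AUTO_UPDATE_STATISTICS_ASYNC OFF;
GO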
Security
Permissions
Requires ALTER permission on the database.
Using SQL Server Management Studio
To set a database to single-user mode
1. In Object Explorer, connect to an instance of the SQL Server Database Engine, and then expand that
instance.
2. Right-click the database to change, and then click Properties.
3. In the Database Properties dialog box, click the Options page.
4. From the Restrict Access option, select Single.
5. If other users are connected to the database, an Open Connections message will appear. To change the
property and close all other connections, click Yes.
You can also set the database to Multiple or Restricted access by using this procedure. For more information
about the Restrict Access options, see Database Properties (Options Page).
Using Transact-SQL
To set a database to single-user mode
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example sets the
database to SINGLE_USER mode to obtain exclusive access. The example then sets the state of the
AdventureWorks2012 database to READ_ONLY and returns access to the database to all users. The termination option WITH ROLLBACK IMMEDIATE is specified in the first ALTER DATABASE statement. This will
cause all incomplete transactions to be rolled back and any other connections to the AdventureWorks2012
database to be immediately disconnected.
USE master;
GO
ALTER DATABASE AdventureWorks2012
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE AdventureWorks2012
SET READ_ONLY;
GO
ALTER DATABASE AdventureWorks2012
SET MULTI_USER;
GO
See Also
ALTER DATABASE (Transact-SQL)
Shrink a Database
3/24/2017 • 3 min to read • Edit Online
This topic describes how to shrink a database in SQL Server 2016 by using SQL Server Management Studio or Transact-SQL.
Shrinking data files recovers space by moving pages of data from the end of the file to unoccupied space closer to
the front of the file. When enough free space is created at the end of the file, data pages at the end of the file can be deallocated and returned to the file system.
In This Topic
Before you begin:
Limitations and Restrictions
Recommendations
Security
To shrink a database, using:
SQL Server Management Studio
Transact-SQL
Follow Up: After you shrink a database
Before You Begin
Limitations and Restrictions
The database cannot be made smaller than the minimum size of the database. The minimum size is the size
specified when the database was originally created, or the last explicit size set by using a file-size-changing
operation, such as DBCC SHRINKFILE. For example, if a database was originally created with a size of 10 MB
and grew to 100 MB, the smallest size the database could be reduced to is 10 MB, even if all the data in the
database has been deleted.
You cannot shrink a database while the database is being backed up. Conversely, you cannot back up a
database while a shrink operation on the database is in process.
DBCC SHRINKDATABASE will fail when it encounters an xVelocity memory optimized columnstore index.
Work completed before encountering the columnstore index will succeed so the database might be smaller.
To complete DBCC SHRINKDATABASE, disable all columnstore indexes before executing DBCC
SHRINKDATABASE, and then rebuild the columnstore indexes.
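The following sketch illustrates that workaround; the index name IX_Sales_CCI, the table dbo.Sales, and the database UserDB are hypothetical placeholders:
USE UserDB;
GO
-- Disable the columnstore index so the shrink can complete.
ALTER INDEX IX_Sales_CCI ON dbo.Sales DISABLE;
GO
DBCC SHRINKDATABASE (UserDB, 10);
GO
-- Rebuild the columnstore index after the shrink finishes.
ALTER INDEX IX_Sales_CCI ON dbo.Sales REBUILD;
GO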
Recommendations
Before you shrink the database, view the current amount of free (unallocated) space in it. For more information, see Display Data and Log Space Information for a Database.
Consider the following information when you plan to shrink a database:
A shrink operation is most effective after an operation that creates lots of unused space, such as a
truncate table or a drop table operation.
Most databases require some free space to be available for regular day-to-day operations. If you
shrink a database repeatedly and notice that the database size grows again, this indicates that the
space that was shrunk is required for regular operations. In these cases, repeatedly shrinking the
database is a wasted operation.
A shrink operation does not preserve the fragmentation state of indexes in the database, and
generally increases fragmentation to a degree. This is another reason not to repeatedly shrink the
database.
Unless you have a specific requirement, do not set the AUTO_SHRINK database option to ON.
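To confirm the setting with Transact-SQL, a sketch along these lines can be used; the AdventureWorks2012 database name is an assumption:
-- 1 in is_auto_shrink_on means the option is ON.
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
ALTER DATABASE AdventureWorks2012
SET AUTO_SHRINK OFF;
GO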
Security
Permissions
Requires membership in the sysadmin fixed server role or the db_owner fixed database role.
Using SQL Server Management Studio
To shrink a database
1. In Object Explorer, connect to an instance of the SQL Server Database Engine, and then expand that
instance.
2. Expand Databases, and then right-click the database that you want to shrink.
3. Point to Tasks, point to Shrink, and then click Database.
Database
Displays the name of the selected database.
Current allocated space
Displays the total used and unused space for the selected database.
Available free space
Displays the sum of free space in the log and data files of the selected database.
Reorganize files before releasing unused space
Selecting this option is equivalent to executing DBCC SHRINKDATABASE specifying a target percent option.
Clearing this option is equivalent to executing DBCC SHRINKDATABASE with the TRUNCATEONLY option. By
default, this option is not selected when the dialog is opened. If this option is selected, the user must specify
a target percent option.
Maximum free space in files after shrinking
Enter the maximum percentage of free space to be left in the database files after the database has been
shrunk. Permissible values are between 0 and 99.
4. Click OK.
Using Transact-SQL
To shrink a database
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example uses DBCC
SHRINKDATABASE to decrease the size of the data and log files in the UserDB database and to allow for
10 percent free space in the database.
DBCC SHRINKDATABASE (UserDB, 10);
GO
Follow Up: After you shrink a database
Data that is moved to shrink a file can be scattered to any available location in the file. This causes index
fragmentation and can slow the performance of queries that search a range of the index. To eliminate the
fragmentation, consider rebuilding the indexes on the file after shrinking.
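For example, a minimal sketch that rebuilds the indexes of one table after the shrink completes; the table name dbo.Sales is a hypothetical placeholder:
USE UserDB;
GO
-- Rebuild all indexes on the table to remove fragmentation introduced by the shrink.
ALTER INDEX ALL ON dbo.Sales REBUILD;
GO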
See Also
Shrink a File
sys.databases (Transact-SQL)
sys.database_files (Transact-SQL)
DBCC (Transact-SQL)
DBCC SHRINKFILE (Transact-SQL)
Database Files and Filegroups
Shrink a File
3/24/2017 • 4 min to read • Edit Online
This topic describes how to shrink a data or log file in SQL Server 2016 by using SQL Server Management Studio
or Transact-SQL.
Shrinking data files recovers space by moving pages of data from the end of the file to unoccupied space closer to
the front of the file. When enough free space is created at the end of the file, data pages at the end of the file can be deallocated and returned to the file system.
In This Topic
Before you begin:
Limitations and Restrictions
Recommendations
Security
To shrink a data or log file, using:
SQL Server Management Studio
Transact-SQL
Before You Begin
Limitations and Restrictions
The primary data file cannot be made smaller than the size of the primary file in the model database.
Recommendations
Data that is moved to shrink a file can be scattered to any available location in the file. This causes index
fragmentation and can slow the performance of queries that search a range of the index. To eliminate the
fragmentation, consider rebuilding the indexes on the file after shrinking.
Security
Permissions
Requires membership in the sysadmin fixed server role or the db_owner fixed database role.
Using SQL Server Management Studio
To shrink a data or log file
1. In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that instance.
2. Expand Databases and then right-click the database that you want to shrink.
3. Point to Tasks, point to Shrink, and then click Files.
Database
Displays the name of the selected database.
File type
Select the file type for the file. The available choices are Data and Log files. The default selection is Data.
Selecting a different file type changes the selections in the other fields accordingly.
Filegroup
Select a filegroup from the list of Filegroups associated with the selected File type above. Selecting a
different filegroup changes the selections in the other fields accordingly.
File name
Select a file from the list of available files of the selected filegroup and file type.
Location
Displays the full path to the currently selected file. The path is not editable, but it can be copied to the
clipboard.
Currently allocated space
For data files, displays the current allocated space. For log files, displays the current allocated space
computed from the output of DBCC SQLPERF(LOGSPACE).
Available free space
For data files, displays the current available free space computed from the output of DBCC
SHOWFILESTATS(fileid). For log files, displays the current available free space computed from the output of
DBCC SQLPERF(LOGSPACE).
Release unused space
Causes any unused space in the files to be released to the operating system and shrinks the file to the last allocated extent, reducing the file size without moving any data. No attempt is made to relocate rows to
unallocated pages.
Reorganize pages before releasing unused space
Equivalent to executing DBCC SHRINKFILE specifying the target file size. When this option is selected, the
user must specify a target file size in the Shrink file to box.
Shrink file to
Specifies the target file size for the shrink operation. The size cannot be less than the current allocated space
or more than the total extents allocated to the file. Entering a value beyond the minimum or the maximum
will revert to the min or the max once the focus is changed or when any of the buttons on the toolbar are
clicked.
Empty file by migrating the data to other files in the same filegroup
Migrate all data from the specified file. This option allows the file to be dropped using the ALTER DATABASE
statement. This option is equivalent to executing DBCC SHRINKFILE with the EMPTYFILE option.
4. Select the file type and file name.
5. Optionally, select the Release unused space check box.
Selecting this option causes any unused space in the file to be released to the operating system and shrinks
the file to the last allocated extent. This reduces the file size without moving any data.
6. Optionally, select the Reorganize pages before releasing unused space check box. If this is selected, the
Shrink file to value must be specified. By default, the option is cleared.
Selecting this option causes any unused space in the file to be released to the operating system and tries to
relocate rows to unallocated pages.
7. Optionally, enter the maximum percentage of free space to be left in the database file after the database has
been shrunk. Permissible values are between 0 and 99. This option is only available when Reorganize pages before releasing unused space is enabled.
8. Optionally, select the Empty file by migrating the data to other files in the same filegroup check box.
Selecting this option moves all data from the specified file to other files in the filegroup. The empty file can
then be deleted. This option is the same as executing DBCC SHRINKFILE with the EMPTYFILE option.
9. Click OK.
Using Transact-SQL
To shrink a data or log file
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example uses DBCC
SHRINKFILE to shrink the size of a data file named DataFile1 in the UserDB database to 7 MB.
USE UserDB;
GO
DBCC SHRINKFILE (DataFile1, 7);
GO
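The dialog options described earlier map to DBCC SHRINKFILE arguments. The following sketch shows the equivalents for the same UserDB database and DataFile1 file; it is illustrative only:
USE UserDB;
GO
-- Equivalent to "Release unused space": free space at the end of the file is released
-- without moving any data.
DBCC SHRINKFILE (DataFile1, TRUNCATEONLY);
GO
-- Equivalent to "Empty file by migrating the data to other files in the same filegroup".
DBCC SHRINKFILE (DataFile1, EMPTYFILE);
GO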
See Also
DBCC SHRINKDATABASE (Transact-SQL)
Shrink a Database
Delete Data or Log Files from a Database
sys.databases (Transact-SQL)
sys.database_files (Transact-SQL)
View or Change the Properties of a Database
3/24/2017 • 4 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
This topic describes how to view or change the properties of a database in SQL Server 2016 by using SQL Server
Management Studio or Transact-SQL. After you change a database property, the modification takes effect
immediately.
In This Topic
Before you begin:
Recommendations
Security
To view or change the properties of a database, using:
SQL Server Management Studio
Transact-SQL
Before You Begin
Recommendations
When AUTO_CLOSE is ON, some columns in the sys.databases catalog view and DATABASEPROPERTYEX
function will return NULL because the database is unavailable to retrieve the data. To resolve this, open the
database.
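For example, a sketch that checks whether AUTO_CLOSE is on for a database; the AdventureWorks2012 sample database name is an assumption:
SELECT name, is_auto_close_on, state_desc
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO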
Security
Permissions
Requires ALTER permission on the database to change the properties of a database. Requires at least membership
in the public database role to view the properties of a database.
Using SQL Server Management Studio
To view or change the properties of a database
1. In Object Explorer, connect to an instance of the SQL Server Database Engine, and then expand that
instance.
2. Expand Databases, right-click the database to view, and then click Properties.
3. In the Database Properties dialog box, select a page to view the corresponding information. For example,
select the Files page to view data and log file information.
Using Transact-SQL
Transact-SQL provides a number of different methods for viewing the properties of a database and for changing
the properties of a database. To view the properties of a database, you can use the DATABASEPROPERTYEX
(Transact-SQL) function and the sys.databases (Transact-SQL) catalog view. To change the properties of a database,
you can use the version of the ALTER DATABASE statement for your environment: ALTER DATABASE (Transact-SQL) or ALTER DATABASE (Azure SQL Database). To view database scoped properties, use the
sys.database_scoped_configurations (Transact-SQL) catalog view and to alter database scoped properties, use the
ALTER DATABASE SCOPED CONFIGURATION (Transact-SQL) statement.
To view a property of a database by using the DATABASEPROPERTYEX function
1. Connect to the Database Engine and then connect to the database for which you wish to view its properties.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example uses the
DATABASEPROPERTYEX system function to return the status of the AUTO_SHRINK database option in the
AdventureWorks2012 database. A return value of 1 means that the option is set to ON, and a return value
of 0 means that the option is set to OFF.
SELECT DATABASEPROPERTYEX('AdventureWorks2012', 'IsAutoShrink');
To view the properties of a database by querying sys.databases
1. Connect to the Database Engine and then connect to the database for which you wish to view its properties.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example queries the
sys.databases catalog view to view several properties of the AdventureWorks2012 database. This example
returns the database ID number ( database_id ), whether the database is read-only or read-write (
is_read_only ), the collation for the database ( collation_name ), and the database compatibility level (
compatibility_level ).
SELECT database_id, is_read_only, collation_name, compatibility_level
FROM sys.databases WHERE name = 'AdventureWorks2012';
To view the properties of a database-scoped configuration by querying sys.database_scoped_configurations
1. Connect to the Database Engine and then connect to the database for which you wish to view its properties.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example queries the
sys.database_scoped_configurations (Transact-SQL) catalog view to view several properties of the current
database.
SELECT configuration_id, name, value, value_for_secondary
FROM sys.database_scoped_configurations;
For more examples, see sys.database_scoped_configurations (Transact-SQL)
To change the properties of a SQL Server 2016 database using ALTER DATABASE
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window. The example determines the state of snapshot
isolation on the AdventureWorks2012 database, changes the state of the property, and then verifies the
change.
To determine the state of snapshot isolation, select the first SELECT statement and click Execute.
To change the state of snapshot isolation, select the ALTER DATABASE statement and click Execute.
To verify the change, select the second SELECT statement and click Execute.
USE AdventureWorks2012;
GO
-- Check the state of the snapshot_isolation_framework
-- in the database.
SELECT name, snapshot_isolation_state,
snapshot_isolation_state_desc AS description
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
USE master;
GO
ALTER DATABASE AdventureWorks2012
SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
-- Check again.
SELECT name, snapshot_isolation_state,
snapshot_isolation_state_desc AS description
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
To change the database-scoped properties using ALTER DATABASE SCOPED CONFIGURATION
1. Connect to a database in your SQL Server instance.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window. The following example sets MAXDOP for a
secondary database to the value for the primary database.
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP=PRIMARY
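As additional sketches, the following statements set MAXDOP for the current (primary) database and enable the legacy cardinality estimator; the MAXDOP value of 4 is illustrative only, not a recommendation:
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
GO
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
GO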
See Also
sys.databases (Transact-SQL)
DATABASEPROPERTYEX (Transact-SQL)
ALTER DATABASE (Transact-SQL)
ALTER DATABASE (Azure SQL Database)
ALTER DATABASE SCOPED CONFIGURATION (Transact-SQL)
sys.database_scoped_configurations (Transact-SQL)
Value for Extended Property Dialog Box
3/24/2017 • 1 min to read • Edit Online
Use the Value for <property name> dialog box to enter or view a value. This is a common dialog box that can be
opened from several locations.
UIElement List
Extended property name
The name of the extended property being viewed or set.
Extended Property Value
Type or change the value of the extended property.
Database Object (Extended Properties Page)
3/24/2017 • 1 min to read • Edit Online
Extended properties allow you to add custom properties to database objects. Use this page to view or modify
extended properties for the selected object. The Extended Properties page is the same for all types of database
objects.
UIElement List
Database
Displays the name of the selected database. This field is read-only.
Collation
Displays the collation used for the selected database. This field is read-only.
Properties
View or specify the extended properties for the object. Each extended property consists of a name/value pair of
metadata associated with the object.
Browse button
Click the browse (…) button after Value to open the Value for Extended Property dialog box. Type
or view the value of the extended property in this larger location.
Delete
Removes the selected extended property.
See Also
Extended Properties Catalog Views (Transact-SQL)
Database Properties (General Page)
3/24/2017 • 1 min to read • Edit Online
Use this page to view or modify properties for the selected database.
Options
Last Database Backup
Displays the date that the database was last backed up.
Last Database Log Backup
Displays the date that the database transaction log was last backed up.
Name
Displays the name of the database.
Status
Displays the database state. For more information, see Database States.
Owner
Displays the name of the database owner. The owner can be changed on the Files page.
Date Created
Displays the date and time that the database was created.
Size
Displays the size of the database in megabytes.
Space Available
Displays the amount of available space in the database in megabytes.
Number of Users
Displays the number of users configured for the database.
Collation Name
Displays the collation used for the database. The collation can be changed on the Options page.
See Also
ALTER DATABASE (Transact-SQL)
ALTER DATABASE (Azure SQL Database)
sys.databases (Transact-SQL)
DATABASEPROPERTYEX (Transact-SQL)
Database Properties (Files Page)
3/24/2017 • 2 min to read • Edit Online
Use this page to create a new database, or view or modify properties for the selected database. This topic applies to
the Database Properties (Files Page) for existing databases, and to the New Database (General Page).
UIElement List
Database name
Add or display the name of the database.
Owner
Specify the owner of the database by selecting from the list.
Use full-text indexing
This check box is checked and disabled because full-text indexing is always enabled in SQL Server 2016. For more
information, see Full-Text Search.
Database Files
Add, view, modify, or remove database files for the associated database. Database files have the following
properties:
Logical Name
Enter or modify the name of the file.
File Type
Select the file type from the list. The file type can be Data, Log, or Filestream Data. You cannot modify the file type
of an existing file.
Select Filestream Data if you are adding files (containers) to a memory-optimized filegroup.
To add files (containers) to a Filestream data filegroup, FILESTREAM must be enabled. You can enable FILESTREAM
by using the Server Properties (Advanced Page) dialog box.
Filegroup
Select the filegroup for the file from the list. By default, the filegroup is PRIMARY. You can create a new filegroup by
selecting <new filegroup> and entering information about the filegroup in the New Filegroup dialog box. A new
filegroup can also be created on the Filegroup page. You cannot modify the filegroup of an existing file.
When adding files (containers) to a memory-optimized filegroup, the Filegroup field will be populated with the
name of the database's memory-optimized filegroup.
Initial Size
Enter or modify the initial size for the file in megabytes. By default, this is the value of the model database.
This field is not valid for FILESTREAM files.
For files in memory-optimized filegroups, this field cannot be modified.
Autogrowth
Select or display the autogrowth properties for the file. These properties control how the file expands when its
maximum file size is reached. To edit autogrowth values, click the edit button next to the autogrowth properties for
the file that you want, and change the values in the Change Autogrowth dialog box. By default, these are the
values of the model database.
This field is not valid for FILESTREAM files.
For files in memory-optimized filegroups, this field should be Unlimited.
Path
Displays the path of the selected file. To specify a path for a new file, click the edit button next to the path for the
file, and navigate to the destination folder. You cannot modify the path of an existing file.
For FILESTREAM files, the path is a folder. The SQL Server Database Engine will create the underlying files in this
folder.
File Name
Displays the name of the file.
This field is not valid for FILESTREAM files, including files in memory-optimized filegroups.
Add
Add a new file to the database.
Remove
Delete the selected file from the database. A file cannot be removed unless it is empty. The primary data file and log
file cannot be removed.
For information about files, see Database Files and Filegroups.
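The Add and Remove buttons correspond to the ADD FILE and REMOVE FILE clauses of ALTER DATABASE. A minimal sketch with hypothetical database, file, and path names:
ALTER DATABASE UserDB
ADD FILE
(
    NAME = UserDB_Data2,
    FILENAME = 'E:\SQLData\UserDB_Data2.ndf',
    SIZE = 64MB,
    FILEGROWTH = 64MB
);
GO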
See Also
ALTER DATABASE (Transact-SQL)
sys.databases (Transact-SQL)
Database Properties (Filegroups Page)
3/24/2017 • 1 min to read • Edit Online
Use this page to view the filegroups or add a new filegroup to the selected database. Filegroup types are separated
into row filegroups, FILESTREAM data, and memory-optimized filegroups.
Row filegroups contain regular data and log files. FILESTREAM data filegroups contain FILESTREAM data files. These
data files store information about how binary large object (BLOB) data is stored on the file system when you are
using FILESTREAM storage. The options are the same for both types of filegroups.
If FILESTREAM is not enabled, the Filestream section will not be available. You can enable FILESTREAM storage by
using Server Properties (Advanced Page).
For information about how SQL Server uses row filegroups, see Database Files and Filegroups. For more
information about FILESTREAM data and filegroups, see FILESTREAM (SQL Server).
Memory-optimized filegroups are required for a database to contain one or more memory-optimized tables.
Row and FILESTREAM Data Filegroup Options
Name
Enter the name of the filegroup.
Files
Displays the count of files in the filegroup.
Read-only
Select to set the filegroup to a read-only status.
Default
Select to make this filegroup the default filegroup. You can have one default filegroup for rows and one default
filegroup for FILESTREAM data.
Add
Adds a new blank row to the grid listing filegroups for the database.
Remove
Removes the selected filegroup row from the grid.
Memory-Optimized Data Filegroup Options
Name
Enter the name of the memory-optimized filegroup.
Filestream Files
Displays the number of files (containers) in the memory-optimized data filegroup. You can add containers in the
Files page.
Add
Adds a new blank row to the grid listing filegroups for the database.
Remove
Removes the selected filegroup row from the grid.
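The Add buttons correspond to the ADD FILEGROUP clause of ALTER DATABASE. A minimal sketch with hypothetical database and filegroup names:
ALTER DATABASE UserDB
ADD FILEGROUP SecondaryFG;
GO
-- A memory-optimized filegroup requires the CONTAINS MEMORY_OPTIMIZED_DATA clause.
ALTER DATABASE UserDB
ADD FILEGROUP InMemoryFG CONTAINS MEMORY_OPTIMIZED_DATA;
GO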
See Also
ALTER DATABASE (Transact-SQL)
sys.databases (Transact-SQL)
Database Properties (Options Page)
3/24/2017 • 11 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Use this page to view or modify options for the selected database. For more information about the options
available on this page, see ALTER DATABASE SET Options (Transact-SQL) and ALTER DATABASE SCOPED
CONFIGURATION (Transact-SQL).
Page Header
Collation
Specify the collation of the database by selecting from the list. For more information, see Set or Change the
Database Collation.
Recovery model
Specify one of the following models for recovering the database: Full, Bulk-Logged, or Simple. For more
information about recovery models, see Recovery Models (SQL Server).
Compatibility level
Specify the latest version of SQL Server that the database supports. Possible values are SQL Server 2016 (130), SQL Server 2014 (120), SQL Server 2012 (110), and SQL Server 2008 (100). When a SQL Server 2005 database is upgraded to SQL Server 2016, the compatibility level for that database is changed from 90 to 100. The 90 compatibility level is not supported in SQL Server 2016. For more information, see ALTER DATABASE Compatibility Level (Transact-SQL).
Containment type
Specify none or partial to designate if this is a contained database. For more information about contained
databases, see Contained Databases. The server property Enable Contained Databases must be set to TRUE
before a database can be configured as contained.
IMPORTANT
Enabling partially contained databases delegates control over access to the instance of SQL Server to the owners of the
database. For more information, see Security Best Practices with Contained Databases.
Automatic
Auto Close
Specify whether the database shuts down cleanly and frees resources after the last user exits. Possible values are
True and False. When True, the database is shut down cleanly and its resources are freed after the last user logs
off.
Auto Create Incremental Statistics
Specify whether to use the incremental option when per partition statistics are created. For information about
incremental statistics, see CREATE STATISTICS (Transact-SQL).
Auto Create Statistics
Specify whether the database automatically creates missing optimization statistics. Possible values are True and
False. When True, any missing statistics needed by a query for optimization are automatically built during
optimization. For more information, see CREATE STATISTICS (Transact-SQL).
Auto Shrink
Specify whether the database files are available for periodic shrinking. Possible values are True and False. For
more information, see Shrink a Database.
Auto Update Statistics
Specify whether the database automatically updates out-of-date optimization statistics. Possible values are True
and False. When True, any out-of-date statistics needed by a query for optimization are automatically built during
optimization. For more information, see CREATE STATISTICS (Transact-SQL).
Auto Update Statistics Asynchronously
When True, queries that initiate an automatic update of out-of-date statistics will not wait for the statistics to be
updated before compiling. Subsequent queries will use the updated statistics when they are available.
When False, queries that initiate an automatic update of out-of-date statistics, wait until the updated statistics can
be used in the query optimization plan.
Setting this option to True has no effect unless Auto Update Statistics is also set to True.
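The options in this section can also be set with the ALTER DATABASE SET options. A minimal sketch, assuming the AdventureWorks2012 sample database and illustrative values:
ALTER DATABASE AdventureWorks2012
SET AUTO_CLOSE OFF,
    AUTO_CREATE_STATISTICS ON,
    AUTO_UPDATE_STATISTICS ON,
    AUTO_UPDATE_STATISTICS_ASYNC ON;
GO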
Containment
In a contained database, some settings usually configured at the server level can be configured at the database
level.
Default Fulltext Language LCID
Specifies a default language for full-text indexed columns. Linguistic analysis of full-text indexed data is dependent
on the language of the data. The default value of this option is the language of the server. For the language that
corresponds to the displayed setting, see sys.fulltext_languages (Transact-SQL).
Default Language
The default language for all new contained database users, unless otherwise specified.
Nested Triggers Enabled
Allows triggers to fire other triggers. Triggers can be nested to a maximum of 32 levels. For more information, see
the "Nested Triggers" section in CREATE TRIGGER (Transact-SQL).
Transform Noise Words
Suppresses an error message if noise words (that is, stopwords) cause a Boolean operation on a full-text query to
return zero rows. For more information, see transform noise words Server Configuration Option.
Two Digit Year Cutoff
Indicates the highest year number that can be entered as a two-digit year. The year listed and the previous 99 years
can be entered as a two-digit year. All other years must be entered as a four-digit year.
For example, the default setting of 2049 indicates that a date entered as '3/14/49' will be interpreted as March 14,
2049, and a date entered as '3/14/50' will be interpreted as March 14, 1950. For more information, see Configure
the two digit year cutoff Server Configuration Option.
Cursor
Close Cursor on Commit Enabled
Specify whether cursors close after the transaction opening the cursor has committed. Possible values are True
and False. When True, any cursors that are open when a transaction is committed or rolled back are closed. When
False, such cursors remain open when a transaction is committed. When False, rolling back a transaction closes
any cursors except those defined as INSENSITIVE or STATIC. For more information, see SET
CURSOR_CLOSE_ON_COMMIT (Transact-SQL).
Default Cursor
Specify default cursor behavior. When True, cursor declarations default to LOCAL. When False, Transact-SQL
cursors default to GLOBAL.
Database Scoped Configurations
In SQL Server 2016 and in Azure SQL Database, there are a number of configuration properties that can be scoped
to the database level. For more information for all of these settings, see ALTER DATABASE SCOPED
CONFIGURATION (Transact-SQL).
Legacy Cardinality Estimation
Specify the query optimizer cardinality estimation model for the primary independent of the compatibility level of
the database. This is equivalent to Trace Flag 9481.
Legacy Cardinality Estimation for Secondary
Specify the query optimizer cardinality estimation model for secondaries, if any, independent of the compatibility
level of the database. This is equivalent to Trace Flag 9481.
Max DOP
Specify the default MAXDOP setting for the primary that should be used for statements.
Max DOP for Secondary
Specify the default MAXDOP setting for secondaries, if any, that should be used for statements.
Parameter Sniffing
Enables or disables parameter sniffing on the primary. This is equivalent to Trace Flag 4136.
Parameter Sniffing for Secondary
Enables or disables parameter sniffing on secondaries, if any. This is equivalent to Trace Flag 4136.
Query Optimizer Fixes
Enables or disables query optimization hotfixes on the primary regardless of the compatibility level of the database.
This is equivalent to Trace Flag 4199.
Query Optimizer Fixes for Secondary
Enables or disables query optimization hotfixes on secondaries, if any, regardless of the compatibility level of the
database. This is equivalent to Trace Flag 4199.
FILESTREAM
FILESTREAM Directory Name
Specify the directory name for the FILESTREAM data associated with the selected database.
FILESTREAM Non-transacted Access
Specify one of the following options for non-transactional access through the file system to FILESTREAM data
stored in FileTables: OFF, READ_ONLY, or FULL. If FILESTREAM is not enabled on the server, this value is set to OFF
and is disabled. For more information, see FileTables (SQL Server).
Miscellaneous
ANSI NULL Default
Allow null values for all user-defined data types or columns that are not explicitly defined as NOT NULL during a
CREATE TABLE or ALTER TABLE statement (the default state). For more information, see SET
ANSI_NULL_DFLT_ON (Transact-SQL) and SET ANSI_NULL_DFLT_OFF (Transact-SQL).
ANSI NULLS Enabled
Specify the behavior of the Equals ( = ) and Not Equal To ( <> ) comparison operators when used with null values.
Possible values are True (on) and False (off). When True, all comparisons to a null value evaluate to UNKNOWN.
When False, comparisons of non-UNICODE values to a null value evaluate to True if both values are NULL. For
more information, see SET ANSI_NULLS (Transact-SQL).
ANSI Padding Enabled
Specify whether ANSI padding is on or off. Permissible values are True (on) and False (off). For more information,
see SET ANSI_PADDING (Transact-SQL).
ANSI Warnings Enabled
Specify ISO standard behavior for several error conditions. When True, a warning message is generated if null
values appear in aggregate functions (such as SUM, AVG, MAX, MIN, STDEV, STDEVP, VAR, VARP, or COUNT).
When False, no warning is issued. For more information, see SET ANSI_WARNINGS (Transact-SQL).
Arithmetic Abort Enabled
Specify whether the database option for arithmetic abort is enabled or not. Possible values are True and False.
When True, an overflow or divide-by-zero error causes the query or batch to terminate. If the error occurs in a
transaction, the transaction is rolled back. When False, a warning message is displayed, but the query, batch, or
transaction continues as if no error occurred. For more information, see SET ARITHABORT (Transact-SQL).
Concatenate Null Yields Null
Specify the behavior when null values are concatenated. When the property value is True, string + NULL returns
NULL. When False, the result is string. For more information, see SET CONCAT_NULL_YIELDS_NULL (Transact-SQL).
Cross-database Ownership Chaining Enabled
This read-only value indicates if cross-database ownership chaining has been enabled. When True, the database
can be the source or target of a cross-database ownership chain. Use the ALTER DATABASE statement to set this
property.
Date Correlation Optimization Enabled
When True, SQL Server maintains correlation statistics between any two tables in the database that are linked by a
FOREIGN KEY constraint and have datetime columns.
When False, correlation statistics are not maintained.
Numeric Round-Abort
Specify how the database handles rounding errors. Possible values are True and False. When True, an error is
generated when loss of precision occurs in an expression. When False, losses of precision do not generate error
messages, and the result is rounded to the precision of the column or variable storing the result. For more
information, see SET NUMERIC_ROUNDABORT (Transact-SQL).
Parameterization
When SIMPLE, queries are parameterized based on the default behavior of the database. When FORCED, SQL
Server parameterizes all queries in the database.
Quoted Identifiers Enabled
Specify whether SQL Server keywords can be used as identifiers (an object or variable name) if enclosed in
quotation marks. Possible values are True and False. For more information, see SET QUOTED_IDENTIFIER
(Transact-SQL).
Recursive Triggers Enabled
Specify whether triggers can be fired by other triggers. Possible values are True and False. When set to True, this
enables recursive firing of triggers. When set to False, only direct recursion is prevented. To disable indirect
recursion, set the nested triggers server option to 0 using sp_configure. For more information, see Create Nested
Triggers.
Trustworthy
When displaying True, this read-only option indicates that SQL Server allows access to resources outside the
database under an impersonation context established within the database. Impersonation contexts can be
established within the database using the EXECUTE AS user statement or the EXECUTE AS clause on database
modules.
To have access, the owner of the database also needs to have the AUTHENTICATE SERVER permission at the server
level.
This property also allows the creation and execution of unsafe and external access assemblies within the database.
In addition to setting this property to True, the owner of the database must have the EXTERNAL ACCESS
ASSEMBLY or UNSAFE ASSEMBLY permission at the server level.
By default, all user databases and all system databases (with the exception of MSDB) have this property set to
False. The value cannot be changed for the model and tempdb databases.
TRUSTWORTHY is set to False whenever a database is attached to the server.
The recommended approach for accessing resources outside the database under an impersonation context is to
use certificates and signatures as opposed to the Trustworthy option.
To set this property, use the ALTER DATABASE statement.
VarDecimal Storage Format Enabled
This option is read-only starting with SQL Server 2008. When True, this database is enabled for the vardecimal
storage format. Vardecimal storage format cannot be disabled while any tables in the database are using it. In SQL
Server 2008 and later versions, all databases are enabled for the vardecimal storage format. This option uses
sp_db_vardecimal_storage_format.
Recovery
Page Verify
Specify the option used to discover and report incomplete I/O transactions caused by disk I/O errors. Possible
values are None, TornPageDetection, and Checksum. For more information, see Manage the suspect_pages
Table (SQL Server).
Target Recovery Time (Seconds)
Specifies the maximum bound on the time, expressed in seconds, to recover the specified database in the event of a
crash. For more information, see Database Checkpoints (SQL Server).
State
Database Read Only
Specify whether the database is read only. Possible values are True and False. When True, users can only read
data in the database. Users cannot modify the data or database objects; however, the database itself can be deleted
using the DROP DATABASE statement. The database cannot be in use when a new value for the Database Read
Only option is specified. The master database is the exception, and only the system administrator can use master
while the option is being set.
Database State
View the current state of the database. It is not editable. For more information about Database State, see Database
States.
Restrict Access
Specify which users may access the database. Possible values are:
Multiple
The normal state for a production database, allows multiple users to access the database at once.
Single
Used for maintenance actions, only one user is allowed to access the database at once.
Restricted
Only members of the db_owner, dbcreator, or sysadmin roles can use the database.
Encryption Enabled
When True, this database is enabled for database encryption. A Database Encryption Key is required for
encryption. For more information, see Transparent Data Encryption (TDE).
See Also
ALTER DATABASE (Transact-SQL)
CREATE DATABASE (SQL Server Transact-SQL)
Database Properties (ChangeTracking Page)
3/24/2017 • 1 min to read • Edit Online
Use this page to view or modify change tracking settings for the selected database. For more information about the
options available on this page, see Enable and Disable Change Tracking (SQL Server).
Options
Change Tracking
Use to enable or disable change tracking for the database.
To enable change tracking, you must have permission to modify the database.
Setting the value to True sets a database option that allows change tracking to be enabled on individual tables.
You can also configure change tracking by using ALTER DATABASE, as shown in the sketch after these options.
Retention Period
Specifies the minimum period for keeping change tracking information in the database. Data is removed only if the Auto Clean-Up value is True.
The default value is 2.
Retention Period Units
Specifies the units for the Retention Period value. You can select Days, Hours, or Minutes. The default value is
Days.
The minimum retention period is 1 minute. There is no maximum retention period.
Auto Clean-Up
Indicates whether change tracking information is automatically removed after the specified retention period.
Enabling Auto Clean-Up resets any previous custom retention period to the default retention period of 2 days.
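A minimal Transact-SQL sketch that matches the settings described above; the AdventureWorks2012 database name is an assumption:
ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
GO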
See Also
ALTER DATABASE (Transact-SQL)
CREATE DATABASE (SQL Server Transact-SQL)
Database Properties (Transaction Log Shipping Page)
3/24/2017 • 1 min to read • Edit Online
Use this page to configure and modify the properties of log shipping for a database.
For an explanation of log shipping concepts, see About Log Shipping (SQL Server).
Options
Enable this as a primary database in a log shipping configuration
Enables this database as a log shipping primary database. Select it and then configure the remaining options on
this page. If you clear this check box, the log shipping configuration will be dropped for this database.
Backup Settings
Click Backup Settings to configure backup schedule, location, alert, and archiving parameters.
Backup schedule
Shows the currently selected backup schedule for the primary database. Click Backup Settings to modify these
settings.
Last backup created
Indicates the time and date of the last transaction log backup of the primary database.
Secondary server instances and databases
Lists the currently configured secondary servers and databases for this primary database. Highlight a database, and
then click ... to modify the parameters associated with that secondary database.
Add
Click Add to add a secondary database to the log shipping configuration for this primary database.
Remove
Removes a selected database from this log shipping configuration. Select the database first and then click Remove.
Use a monitor server instance
Sets up a monitor server instance for this log shipping configuration. Select the Use a monitor server instance
check box and then click Settings to specify the monitor server instance.
Monitor server instance
Indicates the currently configured monitor server instance for this log shipping configuration.
Settings
Configures the monitor server instance for a log shipping configuration. Click Settings to configure this monitor
server instance.
Script Configuration
Generates a script for configuring log shipping with the parameters you have selected. Click Script Configuration,
and then choose an output destination for the script.
IMPORTANT
Before scripting settings for a secondary database, you must invoke the Secondary Database Settings dialog box. Invoking
this dialog box connects you to the secondary server and retrieves the current settings of the secondary database that are
needed to generate the script.
See Also
Log Shipping Stored Procedures (Transact-SQL)
Log Shipping Tables (Transact-SQL)
Log Shipping Transaction Log Backup Settings
3/24/2017 • 3 min to read • Edit Online
Use this dialog box to configure and modify the transaction log backup settings for a log shipping configuration.
For an explanation of log shipping concepts, see About Log Shipping (SQL Server).
Options
Network path to the backup folder
Type the network share to your backup folder in this box. The local folder where your transaction log backups are
saved must be shared so that the log shipping copy jobs can copy these files to the secondary server. You must
grant read permissions on this network share to the proxy account under which the copy job will run at the
secondary server instance. By default, this is the SQLServerAgent service account of the secondary server instance,
but an administrator can choose another proxy account for the job.
If the backup folder is located on the primary server, type the local path to the folder
Type the local drive letter and path to the backup folder if the backup folder is located on the primary server. If the
backup folder is not located on the primary server, you can leave this blank.
If you specify a local path here, the BACKUP command will use this path to create the transaction log backups;
otherwise, if no local path is specified, the BACKUP command will use the network path specified in the Network
path to the backup folder box.
NOTE
If the SQL Server service account is running under the local system account on the primary server, you must create the
backup folder on the primary server and specify the local path to that folder here. The SQL Server service account of the
primary server instance must have Read and Write permissions on this folder.
Delete files older than
Specify the length of time you want transaction log backups to remain in your backup directory before being
deleted.
Alert if no backup occurs within
Specify the amount of time you want log shipping to wait before raising an alert that no transaction log backups
have occurred.
Job name
Displays the name of the SQL Server Agent job that is used to create the transaction log backups for log shipping.
When first creating the job, you can modify the name by typing in the box.
Schedule
Displays the current schedule for backing up the transaction logs of the primary database. Before the backup job
has been created, you can modify this schedule by clicking Schedule.... After the job has been created, you can
modify this schedule by clicking Edit Job....
Backup Job
Schedule...
Modify the schedule that is created when the SQL Server Agent job is created.
Edit Job...
Modify the SQL Server Agent job parameters for the job that performs transaction log backups on the primary
database.
Disable this job
Disable the SQL Server Agent job from creating transaction log backups.
Compression
SQL Server 2008 Enterprise (or a later version) supports backup compression.
Set backup compression
In SQL Server 2008 Enterprise (or a later version), select one of the following backup compression values for the log
backups of this log shipping configuration:
Use the default server setting
Click to use the server-level default.
This default is set by the backup compression default
server-configuration option. For information about how to
view the current setting of this option, see View or Configure
the backup compression default Server Configuration Option.
Compress backup
Click to compress the backup, regardless of the server-level
default.
IMPORTANT: By default, compression significantly
increases CPU usage, and the additional CPU consumed by
the compression process might adversely impact concurrent
operations. Therefore, you might want to create low-priority
compressed backups in a session whose CPU usage is limited
by the Resource Governor. For more information, see Use
Resource Governor to Limit CPU Usage by Backup
Compression (Transact-SQL).
Do not compress backup
Click to create an uncompressed backup, regardless of the
server-level default.
See Also
Configure a User to Create and Manage SQL Server Agent Jobs
About Log Shipping (SQL Server)
Secondary Database Settings
3/24/2017 • 5 min to read • Edit Online
Use this dialog box to configure and to modify the properties of a secondary database in the log shipping
configuration.
For an explanation of log shipping concepts, see About Log Shipping (SQL Server).
Options
Secondary server instance
Displays the name of the instance of SQL Server currently configured to be a secondary server in the log shipping
configuration.
Secondary database
Displays the name of the secondary database for the log shipping configuration. When adding a new secondary
database to a log shipping configuration, you can choose a database from the list or type the name of a new
database into the box. If you enter the name of a new database, you must select an option on the Initialization tab
that restores a full database backup of the primary database into the secondary database. The new database is
created as part of the restore operation.
Connect
Connect to an instance of SQL Server for use as a secondary server in the log shipping configuration. The account
used to connect must be a member of the sysadmin fixed server role on the secondary server instance.
Initialize tab
The options are as follows:
Yes, generate a full backup of the primary database and restore it to the secondary database
Have SQL Server Management Studio configure your secondary database by backing up the primary database and
restoring it on the secondary server. If you entered a new database name in the Secondary database box, the
database will be created as part of the restore operation.
Restore Options
Click if you want to restore the data and log files for the secondary database into nondefault locations on the
secondary server.
This button opens the Restore Options dialog box. There, you can specify paths to nondefault folders where you
want the secondary database and its log to reside. If you specify either folder, you must specify both.
The paths must refer to drives that are local to the secondary server. Also, the paths must begin with a local drive
letter and colon (for example, C: ). Mapped drive letters or network paths are invalid.
If you click the Restore Options button and then decide that you want to use the default folders, we recommend
that you cancel the Restore Options dialog box. If you have already specified nondefault locations and now want
to use the default locations instead, click Restore Options again, clear the text boxes, and click OK.
Yes, restore an existing backup of the primary database to the secondary database
Have Management Studio use an existing backup of your primary database to initialize the secondary database.
Type the location of that backup in the Backup file box. If you entered a new database name in the Secondary
database box, the database will be created as part of the restore operation.
Backup file
Type the path and filename of the full database backup you want to use to initialize the secondary database if you
chose the Yes, restore an existing backup of the primary database to the secondary database option.
Restore Options
See the description of this button earlier in this topic.
No, the secondary database is initialized
Specify that the secondary database is already initialized and ready to accept transaction log backups from the
primary database. This option is not available if you typed a new database name in the Secondary database box.
Copy Files tab
The options are as follows:
Destination folder for copied files
Type the path to which transaction log backups should be copied for restoration to the secondary database. This is
usually a local path to a folder located on the secondary server. If the folder is located on another server, however,
you must specify a UNC path to the folder. The SQL Server service account of the secondary server instance must
have Read permissions on this folder. You must also grant read and write permissions on this network share to the
proxy accounts under which the copy and restore jobs will run at the secondary server instances. By default, this is
the SQL Server Agent service account of the secondary server instance, but a sysadmin can choose other proxy
accounts for the jobs.
Delete copied files after
Choose the length of time you want the copied transaction log backup files to remain in the destination folder
before being deleted.
Job name
Displays the name of the SQL Server Agent job used to copy transaction log backup files from the primary server to
the secondary server. When first creating this job, you can change the name by typing in the box.
Schedule
Displays the current schedule for the SQL Server Agent copy job to copy transaction log backups from the primary
server to the secondary server. You can modify this schedule by clicking Schedule....
Schedule...
Modify the parameters of the SQL Server Agent job that copies transaction log backups from the primary server to
the secondary server.
Disable this job
Suspend the SQL Server Agent copy job.
Restore Transaction Log tab
The options are as follows:
Disconnect users in the database when restoring backups
Automatically disconnect users from the secondary database while transaction log backups are being restored.
No recovery mode
Leave the secondary database in NORECOVERY mode.
Standby mode
Leave the secondary database in STANDBY mode. This mode will allow read-only operations to be performed on
the database.
IMPORTANT
If you change the recovery mode of an existing secondary database, for example, from No recovery mode to Standby
mode, the change takes effect only after the next log backup is restored to the database.
Delay restoring backups at least
Choose the delay before transaction log backups are restored to the secondary database, if any.
Alert if no restore occurs within
Choose the amount of time you want log shipping to wait before raising an alert that no transaction log restores
have occurred.
Job name
Displays the name of the SQL Server Agent job used to restore transaction log backups to the secondary database.
When first creating this job, you can change the name by typing in the box.
Schedule
Displays the current schedule for the SQL Server Agent job used for restoring transaction log backups to the
secondary database. You can modify this option by clicking Schedule....
Schedule...
Modify the parameters associated with the SQL Server Agent restore job.
Disable this job
Suspend restore operations to the secondary database.
See Also
Back Up and Restore of SQL Server Databases
About Log Shipping (SQL Server)
Log Shipping Monitor Settings
3/24/2017 • 1 min to read • Edit Online
Use this page to configure and to modify the properties of the log shipping monitor server.
For an explanation of log shipping concepts, see About Log Shipping (SQL Server).
Options
Monitor server instance
Displays the name of the server instance that is currently configured as the monitor server for the log shipping
configuration.
Connect
Choose and connect to an instance of SQL Server to be used as the monitor server. The account used to connect
must be a member of the sysadmin fixed server role on the monitor server instance.
By impersonating the proxy account of the job
Have log shipping impersonate the SQL Server Agent proxy account when connecting to the monitor server
instance. The backup, copy, and restore processes must be able to connect to the monitor server to update the
status of log shipping operations.
Using the following SQL Server login
Allow log shipping to use a specific SQL Server login when connecting to the monitor server instance. The backup,
copy, and restore processes must be able to connect to the monitor server to update the status of log shipping
operations. Choose this option if you want log shipping to use a specific SQL Server login and then specify the
login and password.
Delete history after
Specify the amount of time to retain log shipping history information on the monitor server before it is deleted.
Job name
Indicates the name of the SQL Server Agent alert job used by log shipping to raise alerts when backup or restore
thresholds have been exceeded. When first creating this job, you can change the name by typing in the box.
Schedule
Indicates the current schedule of the SQL Server Agent alert job.
Edit
Modify the SQL Server Agent alert job parameters.
Disable this job
Suspend the SQL Server Agent alert job.
Database Properties (Mirroring Page)
3/24/2017 • 10 min to read • Edit Online
Access this page from the principal database, and use it to configure and to modify the properties of database
mirroring for a database. Also use it to launch the Configure Database Mirroring Security Wizard, to view the status
of a mirroring session, and to pause or remove the database mirroring session.
IMPORTANT!!! Security must be configured before you can start mirroring. If mirroring has not been started,
you must begin by using the wizard. The text boxes on the Mirroring page are disabled until the wizard has
finished.
Configure database mirroring by using SQL Server Management Studio
Establish a Database Mirroring Session Using Windows Authentication (SQL Server Management Studio)
Options
Configure Security
Click this button to launch the Configure Database Mirroring Security Wizard.
If the wizard completes successfully, the action taken depends on whether mirroring has already begun, as follows:
If mirroring has not begun.
The property page caches the connection information and also caches a value that indicates whether the
mirror database has the partner property set.
At the end of the wizard, you are prompted to start database
mirroring using the default server network addresses and
operating mode. If you need to change the addresses or
operating mode, click Do Not Start Mirroring.
If mirroring has begun.
If the witness server was changed in the wizard, it is set
accordingly.
Server network addresses
An equivalent option exists for each of the server instances: Principal, Mirror, and Witness.
The server network addresses of the server instances are specified automatically when you complete the Configure
Database Mirroring Security Wizard. After completing the wizard, you can modify the network addresses manually,
if necessary.
The server network address has the following basic syntax:
TCP://fully_qualified_domain_name:port
where
fully_qualified_domain_name is the server on which the server instance exists.
port is the port assigned to the database mirroring endpoint of the server instance.
To participate in database mirroring, a server requires a database mirroring endpoint. When you use the
Configure Database Mirroring Security Wizard to establish the first mirroring session for a server instance,
the wizard automatically creates the endpoint and configures it to use Windows Authentication. For
information about how to use the wizard with certificate-based authentication, see Establish a Database
Mirroring Session Using Windows Authentication (SQL Server Management Studio).
IMPORTANT!! Each server instance requires one and only one database mirroring endpoint, regardless
of the number of mirroring sessions to be supported.
For example, for a server instance on a computer system named DBSERVER9 whose endpoint uses port 7022, the
network address might be:
TCP://DBSERVER9.COMPANYINFO.ADVENTURE-WORKS.COM:7022
For more information, see Specify a Server Network Address (Database Mirroring).
NOTE: During a database mirroring session the principal and mirror server instances cannot be changed; the
witness server instance, however, can be changed during a session. For more information, see "Remarks," later
in this topic.
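If you are unsure which endpoint and port a server instance uses, the following query (a minimal sketch using the endpoint catalog views) returns the database mirroring endpoint and its TCP port:
SELECT e.name, e.role_desc, e.state_desc, t.port
FROM sys.database_mirroring_endpoints AS e
JOIN sys.tcp_endpoints AS t
    ON e.endpoint_id = t.endpoint_id;
GO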
Start Mirroring
Click to begin mirroring, when all of the following conditions exist:
The mirror database must exist.
Before you can start mirroring, the mirror database must have been created by restoring WITH
NORECOVERY a recent full backup and, perhaps, log backups of the principal database onto the mirror
server. For more information, see Prepare a Mirror Database for Mirroring (SQL Server).
The TCP addresses of the principal and mirror server instances are already specified (in the Server network
addresses section).
If the operating mode is set to high safety with automatic failover (synchronous), the TCP address of the
witness server instance is also specified.
Security has been configured correctly.
Click Start Mirroring to initiate the session. The Database Engine attempts to automatically connect to the
mirroring partner to verify that the mirror server is correctly configured and begin the mirroring session. If
mirroring can be started, a job is created to monitor the database.
Pause or Resume
During a database mirroring session, click Pause to pause the session. A prompt asks for confirmation; if
you click Yes, the session is paused, and the button changes to Resume. To resume the session, click
Resume.
For information about the impact of pausing a session, see Pausing and Resuming Database Mirroring (SQL
Server).
IMPORTANT!! Following a forced service, when the original principal server reconnects, mirroring is
suspended. Resuming mirroring in this situation could possibly cause data loss on the original principal server.
For information about how to manage the potential data loss, see Role Switching During a Database Mirroring
Session (SQL Server).
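Pausing and resuming can also be done in Transact-SQL. A minimal sketch, assuming a mirrored database named AdventureWorks and run on the principal server instance:
ALTER DATABASE AdventureWorks SET PARTNER SUSPEND;   -- pause the session
GO
ALTER DATABASE AdventureWorks SET PARTNER RESUME;    -- resume the session
GO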
Remove Mirroring
On the principal server instance, click to stop the session and remove the mirroring configuration from the
databases. A prompt asks for confirmation; if you click Yes, the session is stopped and mirroring is removed. For
information about the impact of removing database mirroring, see Removing Database Mirroring (SQL Server).
NOTE: If this is the only mirrored database on the server instance, the monitor job is removed.
Failover
Click to fail over the principal database to the mirror database manually.
NOTE: If the mirroring session is running in high-performance mode, manual failover is not supported. To fail
over manually, you must first change the operating mode to High safety without automatic failover
(synchronous). After failover completes, you can change the mode back to High performance
(asynchronous) on the new principal server instance.
A prompt asks for confirmation. If you click Yes, failover is attempted. The principal server begins by trying to
connect to the mirror server by using Windows Authentication. If Windows Authentication does not work, the
principal server displays the Connect to Server dialog box. If the mirror server uses SQL Server Authentication,
select SQL Server Authentication in the Authentication box. In the Login text box, specify the login account to
connect with on the mirror server, and in the Password text box, specify the password for that account.
If failover succeeds, the Database Properties dialog box closes. The principal and mirror server roles are switched:
the former mirror database becomes the principal database, and vice versa. Note that the Database Properties
dialog box becomes unavailable on the old principal database immediately because it has become the mirror
database; this dialog box will become available on the new principal database after failover.
If failover fails, an error message is displayed, and the dialog box remains open.
IMPORTANT!! If you click Failover after modifying properties in the Database Properties dialog box, those
changes are lost. To save your current changes, click No in the confirmation prompt and click OK to save the
changes. Then reopen the Database Properties dialog box and click Failover.
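The equivalent Transact-SQL, shown here as a sketch that assumes a synchronized high-safety session on a database named AdventureWorks, is issued on the principal server instance:
USE master;
GO
ALTER DATABASE AdventureWorks SET PARTNER FAILOVER;
GO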
Operating mode
Optionally, change the operating mode. The availability of certain operating modes depends on whether you have
specified a TCP address for a witness. The options are as follows:
OPTION: High performance (asynchronous)
WITNESS: Not used (if a witness exists, it is not used, but the session requires a quorum).
EXPLANATION: To maximize performance, the mirror database always lags somewhat behind the principal database,
never quite catching up. However, the gap between the databases is typically small. The loss of a partner has the
following effect:
If the mirror server instance becomes unavailable, the principal continues.
If the principal server instance becomes unavailable, the mirror stops. But if the session has no witness (as
recommended) or the witness is connected to the mirror server, the mirror server remains accessible as a warm
standby; the database owner can force service to the mirror server instance (with possible data loss).
OPTION: High safety without automatic failover (synchronous)
WITNESS: No.
EXPLANATION: All committed transactions are guaranteed to be written to disk on the mirror server.
Manual failover is possible if the partners are connected to each other. The loss of a partner has the following
effect:
If the mirror server instance becomes unavailable, the principal continues.
If the principal server instance becomes unavailable, the mirror stops but is available as a warm standby; the
database owner can force service to the mirror server instance (with possible data loss).
OPTION: High safety with automatic failover (synchronous)
WITNESS: Yes (required).
EXPLANATION: Maximizes availability by including a witness server instance to support automatic failover. Note
that you can select the High safety with automatic failover (synchronous) option only if you have first
specified a witness server address.
Manual failover is possible whenever the partners are connected to each other.
IMPORTANT!! If the witness becomes disconnected, the partners must be connected to each other for the
database to be available. For more information, see Quorum: How a Witness Affects Database Availability
(Database Mirroring).
In the synchronous operating modes, all committed transactions are guaranteed to be written to disk on the
mirror server. In the presence of a witness, the loss of a partner has the following effect:
If the principal server instance becomes unavailable, automatic failover occurs. The mirror server instance
switches to the role of principal, and it offers its database as the principal database.
If the mirror server instance becomes unavailable, the principal continues.
For more information, see Database Mirroring Operating Modes.
After mirroring begins, you can change the operating mode and save the change by clicking OK.
For more information about operating modes, see Database Mirroring Operating Modes.
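The same change can be made in Transact-SQL by adjusting the transaction safety level. A hedged sketch, assuming a mirrored database named AdventureWorks and run on the principal server instance:
ALTER DATABASE AdventureWorks SET PARTNER SAFETY FULL;  -- high safety (synchronous)
GO
ALTER DATABASE AdventureWorks SET PARTNER SAFETY OFF;   -- high performance (asynchronous)
GO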
Status
After mirroring begins, the Status panel displays the status of the database mirroring session as of when you
selected the Mirroring page. To update the Status panel, click the Refresh button. The possible states are as
follows:
This database has not been configured for mirroring
No database mirroring session exists and there is no activity to
report on the Mirroring page.
Paused
The principal database is available but is not sending any logs
to the mirror server.
No connection
The principal server instance cannot connect to its partner.
Synchronizing
The contents of the mirror database are lagging behind the
contents of the principal database. The principal server
instance is sending log records to the mirror server instance,
which is applying the changes to the mirror database to roll it
forward.
At the start of a database mirroring session, the mirror and
principal databases are in this state.
Failover
On the principal server instance, a manual failover (role
switching) has begun, and the server is currently transitioning
into the mirror role. In this state, user connections to the
principal database are terminated quickly, and the database
takes over the mirror role soon thereafter.
Synchronized
When the mirror server becomes sufficiently caught up to the
principal server, the database state changes to Synchronized.
The database remains in this state as long as the principal
server continues to send changes to the mirror server and the
mirror server continues to apply changes to the mirror
database.
For high-safety mode, failover is possible, without any data
loss.
For high-performance mode, some data loss is always
possible, even in the Synchronized state.
For more information, see Mirroring States (SQL Server).
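Outside this dialog box, the same state information can be read from the sys.database_mirroring catalog view. A minimal sketch:
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc,
       mirroring_partner_name
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;
GO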
Refresh
Click to update the Status box.
Remarks
If you are unfamiliar with database mirroring, see Database Mirroring (SQL Server).
Adding a Witness to an Existing Session
You can add a witness to an existing session or replace an existing witness. If you know the server network address
of the witness, you can enter it into the Witness field manually. If you do not know the server network address of
the witness, use Configure Database Mirroring Security Wizard to configure the witness. After the address is in the
field, make sure that the High safety with automatic failover (synchronous) option is selected.
After you configure a new witness, you must click OK to add it to the mirroring session.
To add a witness when using Windows Authentication
Add or Replace a Database Mirroring Witness (SQL Server Management Studio)
Removing a Witness
To remove a witness, delete its server network address from the Witness field. If you switch from high-safety mode
with automatic failover to high-performance mode, the Witness field is automatically cleared.
After deleting the witness, you must click OK to remove it from the mirroring session.
Monitoring Database Mirroring
To monitor the mirrored databases on a server instance, you can use either the Database Mirroring Monitor or the
sp_dbmmonitorresults system stored procedure.
To monitor mirrored databases
Start Database Mirroring Monitor (SQL Server Management Studio)
sp_dbmmonitorresults (Transact-SQL)
For more information, see Monitoring Database Mirroring (SQL Server).
Related Tasks
Specify a Server Network Address (Database Mirroring)
Establish a Database Mirroring Session Using Windows Authentication (SQL Server Management Studio)
Start Database Mirroring Monitor (SQL Server Management Studio)
See Also
Transport Security for Database Mirroring and Always On Availability Groups (SQL Server)
Role Switching During a Database Mirroring Session (SQL Server)
Monitoring Database Mirroring (SQL Server)
Database Mirroring (SQL Server)
Pausing and Resuming Database Mirroring (SQL Server)
Removing Database Mirroring (SQL Server)
Database Mirroring Witness
Database Properties (Query Store Page)
3/24/2017 • 2 min to read • Edit Online
Access this page from the Database Properties dialog box of a database, and use it to configure and to modify the
properties of the database query store. These options can also be configured by using the ALTER DATABASE SET
options. For information about the query store, see Monitoring Performance By Using the Query Store.
Applies to: SQL Server (SQL Server 2016 through current version), SQL Database.
Options
Operation Mode
Valid values are OFF, READ_ONLY, and READ_WRITE. OFF disables the Query Store. In READ_WRITE mode, the
query store collects and persists query plan and runtime execution statistics information. In READ_ONLY mode,
information can be read from the query store, but new information is not added. If the maximum allocated space of
the query store has been exhausted, the query store will change its operation mode to READ_ONLY.
Operation Mode (Actual)
Gets the actual operation mode of the query store.
Operation Mode (Requested)
Gets and sets the desired operation mode of the query store.
Data Flush Interval (Minutes)
Determines the frequency at which data written to the query store is persisted to disk. To optimize for performance,
data collected by the query store is asynchronously written to disk. This setting (the DATA_FLUSH_INTERVAL_SECONDS
option of ALTER DATABASE) configures the frequency at which that asynchronous transfer occurs.
Statistics Collection Interval
Gets and sets the statistics collection interval value.
Max Size (MB)
Gets and sets the total space allocated to the query store.
Query Store Capture Mode
None does not capture new queries.
All captures all queries.
Auto captures queries based on resource consumption.
Stale Query Threshold (Days)
Gets and sets the stale query threshold. Configure the STALE_QUERY_THRESHOLD_DAYS argument to
specify the number of days to retain data in the query store.
Purge Query Data
Removes the contents of the query store.
Pie Charts
The left chart shows the total database file consumption on the disk, and the portion of the file which is filled
with the query store data.
The right chart shows the portion of the query store quota which is currently used up. Note that the quota is
not shown in the left chart. The quota may exceed the current size of the database.
Remarks
The query store feature provides DBAs with insight on query plan choice and performance. It simplifies
performance troubleshooting by enabling you to quickly find performance differences caused by changes in query
plans. The feature automatically captures a history of queries, plans, and runtime statistics, and retains these for
your review. It separates data by time windows, allowing you to see database usage patterns and understand when
query plan changes happened on the server. The query store can be configured by using this query store database
property page, or by using the ALTER DATABASE SET option. The query store presents information by using a
Management Studio dialog box. For more information about query store, see Monitoring Performance By Using the
Query Store.
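As a rough illustration of the ALTER DATABASE equivalents of the options on this page (the database name and values are placeholders):
ALTER DATABASE AdventureWorks2012
SET QUERY_STORE = ON
    (
      OPERATION_MODE = READ_WRITE,                         -- Operation Mode (Requested)
      DATA_FLUSH_INTERVAL_SECONDS = 900,                   -- Data Flush Interval (15 minutes)
      INTERVAL_LENGTH_MINUTES = 60,                        -- Statistics Collection Interval
      MAX_STORAGE_SIZE_MB = 100,                           -- Max Size (MB)
      QUERY_CAPTURE_MODE = AUTO,                           -- Query Store Capture Mode
      CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30)   -- Stale Query Threshold (Days)
    );
GO
-- Equivalent of Purge Query Data:
ALTER DATABASE AdventureWorks2012 SET QUERY_STORE CLEAR;
GO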
See Also
Query Store Stored Procedures (Transact-SQL)
Query Store Catalog Views (Transact-SQL)
View a List of Databases on an Instance of SQL
Server
3/24/2017 • 1 min to read • Edit Online
This topic describes how to view a list of databases on an instance of SQL Server by using SQL Server Management
Studio or Transact-SQL.
In This Topic
Before you begin:
Security
To view a list of databases on an instance of SQL Server, using:
SQL Server Management Studio
Transact-SQL
Before You Begin
Security
Permissions
If the caller of sys.databases is not the owner of the database and the database is not master or tempdb, the
minimum permissions required to see the corresponding row are ALTER ANY DATABASE or VIEW ANY DATABASE
server-level permission, or CREATE DATABASE permission in the master database. The database to which the caller
is connected can always be viewed in sys.databases.
Using SQL Server Management Studio
To view a list of databases on an instance of SQL Server
1. In Object Explorer, connect to an instance of the SQL Server Database Engine, and then expand that
instance.
2. To see a list of all databases on the instance, expand Databases.
Using Transact-SQL
To view a list of databases on an instance of SQL Server
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example returns a list
of databases on the instance of SQL Server. The list includes the names of the databases, their database IDs,
and the dates when the databases were created.
USE AdventureWorks2012;
GO
SELECT name, database_id, create_date
FROM sys.databases ;
GO
See Also
Databases and Files Catalog Views (Transact-SQL)
sys.databases (Transact-SQL)
View or Change the Compatibility Level of a
Database
3/24/2017 • 1 min to read • Edit Online
This topic describes how to view or change the compatibility level of a database in SQL Server 2016 by using SQL
Server Management Studio or Transact-SQL. Before you change the compatibility level of a database, you should
understand the impact of the change on your applications. For more information, see ALTER DATABASE
Compatibility Level (Transact-SQL).
In This Topic
Before you begin:
Security
To view or change the compatibility level of a database, using:
SQL Server Management Studio
Transact-SQL
Before You Begin
Security
Permissions
Requires ALTER permission on the database.
Using SQL Server Management Studio
To view or change the compatibility level of a database
1. After connecting to the appropriate instance of the SQL Server Database Engine, in Object Explorer, click the
server name.
2. Expand Databases, and, depending on the database, either select a user database or expand System
Databases and select a system database.
3. Right-click the database, and then click Properties.
The Database Properties dialog box opens.
4. In the Select a page pane, click Options.
The current compatibility level is displayed in the Compatibility level list box.
5. To change the compatibility level, select a different option from the list. The choices are SQL Server 2008
(100), SQL Server 2012 (110), or SQL Server 2014 (120).
Using Transact-SQL
To view the compatibility level of a database
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example returns the
compatibility level of the AdventureWorks2012 database.
USE AdventureWorks2012;
GO
SELECT compatibility_level
FROM sys.databases WHERE name = 'AdventureWorks2012';
GO
To change the compatibility level of a database
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example changes the
compatibility level of the AdventureWorks2012 database to 120, which is the compatibility level for SQL
Server 2014.
ALTER DATABASE AdventureWorks2012
SET COMPATIBILITY_LEVEL = 120;
GO
Create a User-Defined Data Type Alias
3/24/2017 • 3 min to read • Edit Online
This topic describes how to create a new user-defined data type alias in SQL Server 2016 by using SQL Server
Management Studio or Transact-SQL.
In This Topic
Before you begin:
Limitations and Restrictions
Security
To create a user-defined data type alias, using:
SQL Server Management Studio
Transact-SQL
Before You Begin
Limitations and Restrictions
The name of a user-defined data type alias must comply with the rules for identifiers.
Security
Permissions
Requires CREATE TYPE permission in the current database and ALTER permission on schema_name. If
schema_name is not specified, the default name resolution rules for determining the schema for the current user
apply.
Using SQL Server Management Studio
To create a user-defined data type
1. In Object Explorer, expand Databases, expand a database, expand Programmability, expand Types, right-click User-Defined Data Types, and then click New User-Defined Data Type.
Allow NULLs
Specify whether the user-defined data type can accept NULL values. The nullability of an existing user-defined data type is not editable.
Data type
Select the base data type from the list box. The list box displays all data types except for the geography,
geometry, hierarchyid, sysname, timestamp, and xml data types. The data type of an existing user-defined data type is not editable.
Default
Optionally select a rule or a default to bind to the user-defined data type alias.
Length/Precision
Displays the length or precision of the data type as applicable. Length applies to character-based user-defined data types; Precision applies only to numeric-based user-defined data types. The label changes
depending on the data type selected earlier. This box is not editable if the length or precision of the selected
data type is fixed.
Length is not displayed for nvarchar(max), varchar(max), or varbinary(max) data types.
Name
If you are creating a new user-defined data type alias, type a unique name to be used across the database to
represent the user-defined data type. The maximum length of the name is the same as that of the system sysname
data type. The name of an existing user-defined data type alias is not editable.
Rule
Optionally select a rule to bind to the user-defined data type alias.
Scale
Specify the maximum number of decimal digits that can be stored to the right of the decimal point.
Schema
Select a schema from a list of all schemas available to the current user. The default selection is the default
schema for the current user.
Storage
Displays the maximum storage size for the user-defined data type alias. Maximum storage sizes vary, based
on precision.
| Precision | Storage (bytes) |
|-|-|
| 1–9 | 5 |
| 10–19 | 9 |
| 20–28 | 13 |
| 29–38 | 17 |
For nchar and nvarchar data types, the storage value is always two times the value in Length.
Storage is not displayed for nvarchar(max), varchar(max), or varbinary(max) data types.
2. In the New User-defined Data Type dialog box, in the Schema box, type the schema to own this data type
alias, or use the browse button to select the schema.
3. In the Name box, type a name for the new data type alias.
4. In the Data type box, select the data type that the new data type alias will be based on.
5. Complete the Length, Precision, and Scale boxes if appropriate for that data type.
6. Check Allow NULLs if the new data type alias can permit NULL values.
7. In the Binding area, complete the Default or Rule boxes if you want to bind a default or rule to the new
data type alias. Defaults and rules cannot be created in SQL Server Management Studio. Use Transact-SQL.
Example code for creating defaults and rules is available in Template Explorer.
Using Transact-SQL
To create a user-defined data type alias
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This example creates a data
type alias based on the system-supplied varchar data type. The ssn data type alias is used for columns
holding 11-character social security numbers (999-99-9999). The column cannot be NULL.
CREATE TYPE ssn
FROM varchar(11) NOT NULL ;
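Once created, the alias can be used anywhere a system data type is allowed. A short, hypothetical usage example (the table and column names are illustrative only):
CREATE TABLE dbo.Employee
(
    EmployeeID int NOT NULL PRIMARY KEY,
    EmployeeSSN ssn   -- inherits varchar(11) NOT NULL from the alias
);
GO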
See Also
Database Identifiers
CREATE TYPE (Transact-SQL)
Database Snapshots (SQL Server)
3/24/2017 • 11 min to read • Edit Online
A database snapshot is a read-only, static view of a SQL Server database (the source database). The database
snapshot is transactionally consistent with the source database as of the moment of the snapshot's creation. A
database snapshot always resides on the same server instance as its source database. As the source database is
updated, the database snapshot is updated. Therefore, the longer a database snapshot exists, the more likely it is
to use up its available disk space.
Multiple snapshots can exist on a given source database. Each database snapshot persists until it is explicitly
dropped by the database owner.
NOTE
Database snapshots are unrelated to snapshot backups, snapshot isolation of transactions, or snapshot replication.
In this Topic:
Feature Overview
Benefits of Database Snapshots
Terms and Definitions
Prerequisites for and Limitations on Database Snapshots
Related Tasks
Feature Overview
Database snapshots operate at the data-page level. Before a page of the source database is modified for the first
time, the original page is copied from the source database to the snapshot. The snapshot stores the original page,
preserving the data records as they existed when the snapshot was created. The same process is repeated for
every page that is being modified for the first time. To the user, a database snapshot appears never to change,
because read operations on a database snapshot always access the original data pages, regardless of where they
reside.
To store the copied original pages, the snapshot uses one or more sparse files. Initially, a sparse file is an
essentially empty file that contains no user data and has not yet been allocated disk space for user data. As more
and more pages are updated in the source database, the size of the file grows. The following figure illustrates the
effects of two contrasting update patterns on the size of a snapshot. Update pattern A reflects an environment in
which only 30 percent of the original pages are updated during the life of the snapshot. Update pattern B reflects
an environment in which 80 percent of the original pages are updated during the life of the snapshot.
Benefits of Database Snapshots
Snapshots can be used for reporting purposes.
Clients can query a database snapshot, which makes it useful for writing reports based on the data at the
time of snapshot creation.
Maintaining historical data for report generation.
A snapshot can extend user access to data from a particular point in time. For example, you can create a
database snapshot at the end of a given time period (such as a financial quarter) for later reporting. You can
then run end-of-period reports on the snapshot. If disk space permits, you can also maintain end-of-period
snapshots indefinitely, allowing queries against the results from these periods; for example, to investigate
organizational performance.
Using a mirror database that you are maintaining for availability purposes to offload reporting.
Using database snapshots with database mirroring permits you to make the data on the mirror server
accessible for reporting. Additionally, running queries on the mirror database can free up resources on the
principal. For more information, see Database Mirroring and Database Snapshots (SQL Server).
Safeguarding data against administrative error.
In the event of a user error on a source database, you can revert the source database to the state it was in
when a given database snapshot was created. Data loss is confined to updates to the database since the
snapshot's creation.
For example, before doing major updates, such as a bulk update or a schema change, creating a database
snapshot on the database protects the data. If you make a mistake, you can use the snapshot to recover by
reverting the database to the snapshot. Reverting is potentially much faster for this purpose than restoring
from a backup; however, you cannot roll forward afterward.
IMPORTANT
Reverting does not work in an offline or corrupted database. Therefore, taking regular backups and testing your
restore plan are necessary to protect a database.
NOTE
Database snapshots are dependent on the source database. Therefore, using database snapshots for reverting a
database is not a substitute for your backup and restore strategy. Performing all your scheduled backups remains
essential. If you must restore the source database to the point in time at which you created a database snapshot,
implement a backup policy that enables you to do that.
Safeguarding data against user error.
By creating database snapshots on a regular basis, you can mitigate the impact of a major user error, such
as a dropped table. For a high level of protection, you can create a series of database snapshots spanning
enough time to recognize and respond to most user errors. For instance, you might maintain 6 to 12 rolling
snapshots spanning a 24-hour interval, depending on your disk resources. Then, each time a new snapshot
is created, the earliest snapshot can be deleted.
To recover from a user error, you can revert the database to the snapshot immediately before the
error. Reverting is potentially much faster for this purpose than restoring from a backup; however,
you cannot roll forward afterward.
Alternatively, you may be able to manually reconstruct a dropped table or other lost data from the
information in a snapshot. For instance, you could bulk copy the data out of the snapshot into the
database and manually merge the data back into the database.
NOTE
Your reasons for using database snapshots determine how many concurrent snapshots you need on a database,
how frequently to create a new snapshot, and how long to keep it.
Managing a test database
In a testing environment, when a test protocol is run repeatedly, it can be useful for the database to
contain identical data at the start of each round of testing. Before running the first round, an application
developer or tester can create a database snapshot on the test database. After each test run, the database
can be quickly returned to its prior state by reverting the database snapshot.
Terms and Definitions
database snapshot
A transactionally consistent, read-only, static view of a database (the source database).
source database
For a database snapshot, the database on which the snapshot was created. Database snapshots are dependent on
the source database. The snapshots of a database must be on the same server instance as the database.
Furthermore, if that database becomes unavailable for any reason, all of its database snapshots also become
unavailable.
sparse file
A file provided by the NTFS file system that requires much less disk space than would otherwise be needed. A
sparse file is used to store pages copied to a database snapshot. When first created, a sparse file takes up little disk
space. As data is written to a database snapshot, NTFS allocates disk space gradually to the corresponding sparse
file.
Prerequisites for and Limitations on Database Snapshots
In This Section:
Prerequisites
Limitations on the Source Database
Limitations on Database Snapshots
Disk Space Requirements
Database Snapshots with Offline Filegroups
Prerequisites
The source database, which can use any recovery model, must meet the following prerequisites:
The server instance must be running on an edition of SQL Server that supports database snapshots. For
more information, see Features Supported by the Editions of SQL Server 2016.
The source database must be online, unless the database is a mirror database within a database mirroring
session.
You can create a database snapshot on any primary or secondary database in an availability group. The
replica role must be either PRIMARY or SECONDARY, not in the RESOLVING state.
We recommend that the database synchronization state be SYNCHRONIZING or SYNCHRONIZED when
you create a database snapshot. However, database snapshots can be created when the database
synchronization state is NOT SYNCHRONIZING.
For more information, see Database Snapshots with Always On Availability Groups (SQL Server).
To create a database snapshot on a mirror database, the database must be in the SYNCHRONIZED
mirroring state.
The source database cannot be configured as a scalable shared database.
The source database must not contain a MEMORY_OPTIMIZED_DATA filegroup. For more information, see
Unsupported SQL Server Features for In-Memory OLTP.
NOTE
All recovery models support database snapshots.
Limitations on the Source Database
As long as a database snapshot exists, the following limitations exist on the snapshot's source database:
The database cannot be dropped, detached, or restored.
NOTE
Backing up the source database works normally; it is unaffected by database snapshots.
Performance is reduced, due to increased I/O on the source database resulting from a copy-on-write
operation to the snapshot every time a page is updated.
Files cannot be dropped from the source database or from any snapshots.
Limitations on Database Snapshots
The following limitations apply to database snapshots:
A database snapshot must be created and remain on the same server instance as the source database.
Database snapshots always work on an entire database.
Database snapshots are dependent on the source database and are not redundant storage. They do not
protect against disk errors or other types of corruption. Therefore, using database snapshots for reverting a
database is not a substitute for your backup and restore strategy. Performing all your scheduled backups
remains essential. If you must restore the source database to the point in time at which you created a
database snapshot, implement a backup policy that enables you to do that.
When a page that is being updated on the source database is pushed to a snapshot, if the snapshot runs out of
disk space or encounters some other error, the snapshot becomes suspect and must be deleted.
Snapshots are read-only. Since they are read only, they cannot be upgraded. Therefore, database snapshots
are not expected to be viable after an upgrade.
Snapshots of the model, master, and tempdb databases are prohibited.
You cannot change any of the specifications of the database snapshot files.
You cannot drop files from a database snapshot.
You cannot back up or restore database snapshots.
You cannot attach or detach database snapshots.
You cannot create database snapshots on FAT32 file system or RAW partitions. The sparse files used by
database snapshots are provided by the NTFS file system.
Full-text indexing is not supported on database snapshots. Full-text catalogs are not propagated from the
source database.
A database snapshot inherits the security constraints of its source database at the time of snapshot
creation. Because snapshots are read-only, inherited permissions cannot be changed and permission
changes made to the source will not be reflected in existing snapshots.
A snapshot always reflects the state of filegroups at the time of snapshot creation: online filegroups remain
online, and offline filegroups remain offline. For more information, see "Database Snapshots with Offline
Filegroups" later in this topic.
If a source database becomes RECOVERY_PENDING, its database snapshots may become inaccessible. After
the issue on the source database is resolved, however, its snapshots should become available again.
Reverting is unsupported for any NTFS read-only or NTFS compressed files in the database. Attempts to
revert a database containing either of these types of filegroups will fail.
In a log shipping configuration, database snapshots can be created only on the primary database, not on a
secondary database. If you switch roles between the primary server instance and a secondary server
instance, you must drop all the database snapshots before you can set the primary database up as a
secondary database.
A database snapshot cannot be configured as a scalable shared database.
FILESTREAM filegroups are not supported by database snapshots. If FILESTREAM filegroups exist in a
source database, they are marked as offline in its database snapshots, and the database snapshots cannot
be used for reverting the database.
NOTE
A SELECT statement that is executed on a database snapshot must not specify a FILESTREAM column; otherwise, the
following error message will be returned: Could not continue scan with NOLOCK due to data movement.
When statistics on a read-only snapshot are missing or stale, the Database Engine creates and maintains
temporary statistics in tempdb. For more information, see Statistics.
Disk Space Requirements
Database snapshots consume disk space. If a database snapshot runs out of disk space, it is marked as suspect and
must be dropped. (The source database, however, is not affected; actions on it continue normally.) Compared to a
full copy of a database, however, snapshots are highly space efficient. A snapshot requires only enough storage
for the pages that change during its lifetime. Generally, snapshots are kept for a limited time, so their size is not a
major concern.
The longer you keep a snapshot, however, the more likely it is to use up available space. The maximum size to
which a sparse file can grow is the size of the corresponding source database file at the time of the snapshot
creation. If a database snapshot runs out of disk space, it must be deleted (dropped).
NOTE
Except for file space, a database snapshot consumes roughly as many resources as a database.
Database Snapshots with Offline Filegroups
Offline filegroups in the source database affect database snapshots when you try to do any of the following:
Create a snapshot
When a source database has one or more offline filegroups, snapshot creation succeeds with the filegroups
offline. Sparse files are not created for the offline filegroups.
Take a filegroup offline
You can take a file offline in the source database. However, the filegroup remains online in database
snapshots if it was online when the snapshot was created. If the queried data has changed since snapshot
creation, the original data page will be accessible in the snapshot. However, queries that use the snapshot
to access unmodified data in the filegroup are likely to fail with input/output (I/O) errors.
Bring a filegroup online
You cannot bring a filegroup online in a database that has any database snapshots. If a filegroup is offline
at the time of snapshot creation or is taken offline while a database snapshot exists, the filegroup remains
offline. This is because bringing a file back online involves restoring it, which is not possible if a database
snapshot exists on the database.
Revert the source database to the snapshot
Reverting a source database to a database snapshot requires that all of the filegroups are online except for
filegroups that were offline when the snapshot was created.
Related Tasks
Create a Database Snapshot (Transact-SQL)
View a Database Snapshot (SQL Server)
View the Size of the Sparse File of a Database Snapshot (Transact-SQL)
Revert a Database to a Database Snapshot
Drop a Database Snapshot (Transact-SQL)
See Also
Database Mirroring and Database Snapshots (SQL Server)
View the Size of the Sparse File of a Database
Snapshot (Transact-SQL)
3/24/2017 • 2 min to read • Edit Online
This topic describes how to use Transact-SQL to verify that a SQL Server database file is a sparse file and to find
out its actual and maximum sizes. Sparse files, which are a feature of the NTFS file system, are used by SQL Server
database snapshots.
NOTE
During database snapshot creation, sparse files are created by using the file names in the CREATE DATABASE statement.
These file names are stored in sys.master_files in the physical_name column. In sys.database_files (whether in the source
database or in a snapshot), the physical_name column always contains the names of the source database files.
Verify that a Database File is a Sparse File
1. On the instance of SQL Server:
Select the is_sparse column from either sys.database_files in the database snapshot or from
sys.master_files. The value indicates whether the file is a sparse file, as follows:
1 = File is a sparse file.
0 = File is not a sparse file.
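For example, the following query (a minimal sketch, run in the context of the database snapshot) lists each file and whether it is sparse:
SELECT name, physical_name, is_sparse
FROM sys.database_files;
GO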
Find Out the Actual Size of a Sparse File
NOTE
Sparse files grow in 64-kilobyte (KB) increments; thus, the size of a sparse file on disk is always a multiple of 64 KB.
To view the number of bytes that each sparse file of a snapshot is currently using on disk, query the
size_on_disk_bytes column of the SQL Server sys.dm_io_virtual_file_stats dynamic management view.
To view the disk space used by a sparse file, right-click the file in Microsoft Windows, click Properties, and look at
the Size on disk value.
Find Out the Maximum Size of a Sparse File
The maximum size to which a sparse file can grow is the size of the corresponding source database file at the time of
the snapshot creation. To learn this size, you can use one of the following alternatives:
Using Windows Command Prompt:
1. Use Windows dir commands.
2. Select the sparse file, open the file Properties dialog box in Windows, and look at the Size value.
On the instance of SQL Server:
Select the size column from either sys.database_files in the database snapshot or from sys.master_files.
The value of the size column reflects the maximum space, in SQL pages, that the snapshot can ever use; this
value is equivalent to the Windows Size field, except that it is represented in terms of the number of SQL
pages in the file; the size in bytes is:
( number_of_pages * 8192)
Example
The following script will show the size on disk in kilobytes for each sparse file. The script will also show the
maximum size in megabytes to which a sparse file can grow. Execute the Transact-SQL script in SQL Server
Management Studio.
SELECT DB_NAME(sd.source_database_id) AS [SourceDatabase],
sd.name AS [Snapshot],
mf.name AS [Filename],
size_on_disk_bytes/1024 AS [size_on_disk (KB)],
mf2.size/128 AS [MaximumSize (MB)]
FROM sys.master_files mf
JOIN sys.databases sd
ON mf.database_id = sd.database_id
JOIN sys.master_files mf2
ON sd.source_database_id = mf2.database_id
AND mf.file_id = mf2.file_id
CROSS APPLY sys.dm_io_virtual_file_stats(sd.database_id, mf.file_id)
WHERE mf.is_sparse = 1
AND mf2.is_sparse = 0
ORDER BY 1;
See Also
Database Snapshots (SQL Server)
sys.fn_virtualfilestats (Transact-SQL)
sys.database_files (Transact-SQL)
sys.master_files (Transact-SQL)
Create a Database Snapshot (Transact-SQL)
3/24/2017 • 5 min to read • Edit Online
The only way to create a SQL Server database snapshot is to use Transact-SQL. SQL Server Management Studio
does not support the creation of database snapshots.
Before You Begin
Prerequisites
The source database, which can use any recovery model, must meet the following prerequisites:
The server instance must be running an edition of SQL Server that supports database snapshots. For
information about support for database snapshots in SQL Server 2016, see Features Supported by the
Editions of SQL Server 2016.
The source database must be online, unless the database is a mirror database within a database mirroring
session.
To create a database snapshot on a mirror database, the database must be in the synchronized mirroring
state.
The source database cannot be configured as a scalable shared database.
The source database must not contain a MEMORY_OPTIMIZED_DATA filegroup. For more information, see
Unsupported SQL Server Features for In-Memory OLTP.
IMPORTANT
For information about other significant considerations, see Database Snapshots (SQL Server).
Recommendations
This section discusses the following best practices:
Best Practice: Naming Database Snapshots
Best Practice: Limiting the Number of Database Snapshots
Best Practice: Client Connections to a Database Snapshot
Best Practice: Naming Database Snapshots
Before creating snapshots, it is important to consider how to name them. Each database snapshot requires a
unique database name. For administrative ease, the name of a snapshot can incorporate information that identifies
the database, such as:
The name of the source database.
An indication that the new name is for a snapshot.
The creation date and time of the snapshot, a sequence number, or some other information, such as time of
day, to distinguish sequential snapshots on a given database.
For example, consider a series of snapshots for the AdventureWorks2012 database. Three daily snapshots
are created at 6-hour intervals between 6 A.M. and 6 P.M., based on a 24-hour clock. Each daily snapshot is
kept for 24 hours before being dropped and replaced by a new snapshot of the same name. Note that each
snapshot name indicates the hour, but not the day:
AdventureWorks_snapshot_0600
AdventureWorks_snapshot_1200
AdventureWorks_snapshot_1800
Alternatively, if the creation time of these daily snapshots varies from day to day, a less precise naming convention
might be preferable, for example:
AdventureWorks_snapshot_morning
AdventureWorks_snapshot_noon
AdventureWorks_snapshot_evening
Best Practice: Limiting the Number of Database Snapshots
Creating a series of snapshots over time captures sequential snapshots of the source database. Each snapshot
persists until it is explicitly dropped. Because each snapshot will continue to grow as original pages are updated,
you may want to conserve disk space by deleting an older snapshot after creating a new snapshot.
Note! To revert to a database snapshot, you need to delete any other snapshots from that database.
Best Practice: Client Connections to a Database Snapshot
To use a database snapshot, clients need to know where to find it. Users can read from one database snapshot
while another is being created or deleted. However, when you substitute a new snapshot for an existing one, you
need to redirect clients to the new snapshot. Users can manually connect to a database snapshot by means of SQL
Server Management Studio. However, to support a production environment, you should create a programmatic
solution that transparently directs report-writing clients to the latest database snapshot of the database.
Permissions
Any user who can create a database can create a database snapshot; however, to create a snapshot of a mirror
database, you must be a member of the sysadmin fixed server role.
How to Create a Database Snapshot (Using Transact-SQL)
To create a database snapshot
For an example of this procedure, see Examples (Transact-SQL), later in this section.
1. Based on the current size of the source database, ensure that you have sufficient disk space to hold the
database snapshot. The maximum size of a database snapshot is the size of the source database at snapshot
creation. For more information, see View the Size of the Sparse File of a Database Snapshot (Transact-SQL).
2. Issue a CREATE DATABASE statement on the files using the AS SNAPSHOT OF clause. Creating a snapshot
requires specifying the logical name of every database file of the source database. The syntax is as follows:
CREATE DATABASE database_snapshot_name
ON
(
NAME =logical_file_name,
FILENAME ='os_file_name'
) [ ,...n ]
AS SNAPSHOT OF source_database_name
[;]
Where source_database_name is the source database, logical_file_name is the logical name used in SQL
Server when referencing the file, os_file_name is the path and file name used by the operating system when
you create the file, and database_snapshot_name is the name of the database snapshot that you are
creating. For a full description of this syntax, see CREATE DATABASE (SQL Server Transact-SQL).
NOTE
When you create a database snapshot, log files, offline files, restoring files, and defunct files are not allowed in the
CREATE DATABASE statement.
Examples (Transact-SQL )
NOTE
The .ss extension used in the examples is arbitrary.
This section contains the following examples:
A. Creating a snapshot on the AdventureWorks database
B. Creating a snapshot on the Sales database
A. Creating a snapshot on the AdventureWorks database
This example creates a database snapshot on the AdventureWorks database. The snapshot name,
AdventureWorks_dbss1800 , and the file name of its sparse file, AdventureWorks_data_1800.ss , indicate the creation
time, 6 P.M. (1800 hours).
CREATE DATABASE AdventureWorks_dbss1800 ON
( NAME = AdventureWorks_Data, FILENAME =
'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Data\AdventureWorks_data_1800.ss' )
AS SNAPSHOT OF AdventureWorks;
GO
B. Creating a snapshot on the Sales database
This example creates a database snapshot, sales_snapshot1200 , on the Sales database. This database was created
in the example, "Creating a database that has filegroups," in CREATE DATABASE (SQL Server Transact-SQL).
--Creating sales_snapshot1200 as snapshot of the
--Sales database:
CREATE DATABASE sales_snapshot1200 ON
( NAME = SPri1_dat, FILENAME =
'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\data\SPri1dat_1200.ss'),
( NAME = SPri2_dat, FILENAME =
'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\data\SPri2dt_1200.ss'),
( NAME = SGrp1Fi1_dat, FILENAME =
'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\mssql\data\SG1Fi1dt_1200.ss'),
( NAME = SGrp1Fi2_dat, FILENAME =
'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\data\SG1Fi2dt_1200.ss'),
( NAME = SGrp2Fi1_dat, FILENAME =
'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\data\SG2Fi1dt_1200.ss'),
( NAME = SGrp2Fi2_dat, FILENAME =
'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\data\SG2Fi2dt_1200.ss')
AS SNAPSHOT OF Sales;
GO
Related Tasks
View a Database Snapshot (SQL Server)
Revert a Database to a Database Snapshot
Drop a Database Snapshot (Transact-SQL)
See Also
CREATE DATABASE (SQL Server Transact-SQL)
Database Snapshots (SQL Server)
View a Database Snapshot (SQL Server)
3/24/2017 • 1 min to read • Edit Online
This topic explains how to view a SQL Server database snapshot using SQL Server Management Studio.
NOTE
To create, revert to, or delete a database snapshot, you must use Transact-SQL.
In This Topic
To view a database snapshot, using:
SQL Server Management Studio
Transact-SQL
Using SQL Server Management Studio
To view a database snapshot
1. In Object Explorer, connect to the instance of the SQL Server Database Engine and then expand that
instance.
2. Expand Databases.
3. Expand Database Snapshots, and select the snapshot you want to view.
Using Transact-SQL
To view a database snapshot
1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. To list the database snapshots of the instance of SQL Server, query the source_database_id column of the
sys.databases catalog view for non-NULL values.
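For example, the following query (a minimal sketch) lists each database snapshot on the instance together with its source database:
SELECT name AS snapshot_name,
       DB_NAME(source_database_id) AS source_database,
       create_date
FROM sys.databases
WHERE source_database_id IS NOT NULL;
GO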
Related Tasks
Create a Database Snapshot (Transact-SQL)
Revert a Database to a Database Snapshot
Drop a Database Snapshot (Transact-SQL)
See Also
Database Snapshots (SQL Server)
Revert a Database to a Database Snapshot
3/24/2017 • 5 min to read • Edit Online
If data in an online database becomes damaged, in some cases, reverting the database to a database snapshot that
predates the damage might be an appropriate alternative to restoring the database from a backup. For example,
reverting a database might be useful for reversing a recent serious user error, such as a dropped table. However, all
changes made after the snapshot was created are lost.
Before you begin:
Limitations and Restrictions
Prerequisites
Security
To Revert a Database to a Database Snapshot, using: Transact-SQL
Before You Begin
Limitations and Restrictions
Reverting is unsupported under the following conditions:
The database has more than one database snapshot; before reverting, the snapshot to which you plan to revert must be the only snapshot of the database.
Any read-only or compressed filegroups exist in the database.
Any files that were online when the snapshot was created are now offline.
Before reverting a database, consider the following limitations:
Reverting is not intended for media recovery. A database snapshot is an incomplete copy of the database
files, so if either the database or the database snapshot is corrupted, reverting from a snapshot is likely to
be impossible. Furthermore, even when it is possible, reverting in the event of corruption is unlikely to
correct the problem. Therefore, taking regular backups and testing your restore plan are essential to protect
a database. For more information, see Back Up and Restore of SQL Server Databases.
NOTE
If you need to be able to restore the source database to the point in time at which you created a database snapshot,
use the full recovery model and implement a backup policy that enables you to do that.
The original source database is overwritten by the reverted database, so any updates to the database since
the snapshot's creation are lost.
The revert operation also overwrites the old log file and rebuilds the log. Consequently, you cannot roll the
reverted database forward to the point of user error. Therefore, we recommend that you back up the log
before reverting a database.
NOTE
Although you cannot restore the original log to roll forward the database, the information in the original log file can
be useful for reconstructing lost data.
Reverting breaks the log backup chain. Therefore, before you can take log backups of the reverted database,
you must first take a full database backup or file backup. We recommend a full database backup.
During a revert operation, both the snapshot and the source database are unavailable. The source database
and snapshot are both marked "In restore." If an error occurs during the revert operation, when the
database starts up again, the revert operation will try to finish reverting.
The metadata of a reverted database is the same as the metadata at the time of the snapshot.
Reverting drops all the full-text catalogs.
Prerequisites
Ensure that the source database and database snapshot meet the following prerequisites:
Verify that the database has not become corrupted.
NOTE
If the database has been corrupted, you will need to restore it from backups. For more information, see Complete
Database Restores (Simple Recovery Model) or Complete Database Restores (Full Recovery Model).
Identify a recent snapshot that was created before the error. For more information, see View a Database
Snapshot (SQL Server).
Drop any other snapshots that currently exist on the database. For more information, see Drop a Database
Snapshot (Transact-SQL).
Security
Permissions
Any user who has RESTORE DATABASE permissions on the source database can revert it to its state when a
database snapshot was created.
How to Revert a Database to a Database Snapshot (Using Transact-SQL)
To revert a database to a database snapshot
NOTE
For an example of this procedure, see Examples (Transact-SQL), later in this section.
1. Identify the database snapshot to which you want to revert the database. You can view the snapshots on a
database in SQL Server Management Studio (see View a Database Snapshot (SQL Server)). Also, you can
identify the source database of a snapshot from the source_database_id column of the sys.databases
(Transact-SQL) catalog view.
2. Drop any other database snapshots.
For information on dropping snapshots, see Drop a Database Snapshot (Transact-SQL). If the database uses
the full recovery model, before reverting, you should back up the log. For more information, see Back Up a
Transaction Log (SQL Server) or Back Up the Transaction Log When the Database Is Damaged (SQL Server).
3. Perform the revert operation.
A revert operation requires RESTORE DATABASE permissions on the source database. To revert the
database, use the following Transact-SQL statement:
RESTORE DATABASE database_name FROM DATABASE_SNAPSHOT =database_snapshot_name
Where database_name is the source database and database_snapshot_name is the name of the snapshot
to which you want to revert the database. Notice that in this statement, you must specify a snapshot name
rather than a backup device.
For more information, see RESTORE (Transact-SQL).
NOTE
During the revert operation, both the snapshot and the source database are unavailable. The source database and
snapshot are both marked as "In restore." If an error occurs during the revert operation, it will try to finish reverting
when the database starts up again.
4. If the database owner changed since creation of the database snapshot, you may want to update the
database owner of the reverted database.
NOTE
The reverted database retains the permissions and configuration (such as database owner and recovery model) of
the database snapshot.
5. Start the database.
6. Optionally, back up the reverted database, especially if it uses the full (or bulk-logged) recovery model. To
back up a database, see Create a Full Database Backup (SQL Server).
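Because reverting breaks the log backup chain, a full database backup at this point also re-establishes the chain. A minimal sketch, assuming a hypothetical backup path:
-- Hypothetical database name and backup path, for illustration only.
BACKUP DATABASE AdventureWorks
TO DISK = 'E:\Backups\AdventureWorks_PostRevert.bak'
WITH INIT;
GO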
Examples (Transact-SQL)
This section contains the following examples of reverting a database to a database snapshot:
A. Reverting a snapshot on the AdventureWorks database
B. Reverting a snapshot on the Sales database
A. Reverting a snapshot on the AdventureWorks database
This example assumes that only one snapshot currently exists on the AdventureWorks database. For the
example that creates the snapshot to which the database is reverted here, see Create a Database Snapshot
(Transact-SQL).
USE master;
-- Reverting AdventureWorks to AdventureWorks_dbss1800
RESTORE DATABASE AdventureWorks
FROM DATABASE_SNAPSHOT = 'AdventureWorks_dbss1800';
GO
B. Reverting a snapshot on the Sales database
This example assumes that two snapshots currently exist on the Sales database: sales_snapshot0600 and
sales_snapshot1200. The example deletes the older of the snapshots and reverts the database to the more recent
snapshot.
For the code for creating the sample database and snapshots on which this example depends, see:
For the Sales database and the sales_snapshot0600 snapshot, see "Creating a database with filegroups"
and "Creating a database snapshot" in CREATE DATABASE (SQL Server Transact-SQL).
For the sales_snapshot1200 snapshot, see "Creating a snapshot on the Sales database" in Create a
Database Snapshot (Transact-SQL).
-- Test to see if sales_snapshot0600 exists and, if it does, delete it.
IF EXISTS (SELECT name FROM sys.databases
    WHERE name = 'sales_snapshot0600')
    DROP DATABASE sales_snapshot0600;
GO
-- Reverting Sales to sales_snapshot1200
USE master;
RESTORE DATABASE Sales FROM DATABASE_SNAPSHOT = 'sales_snapshot1200';
GO
Related Tasks
Create a Database Snapshot (Transact-SQL)
View a Database Snapshot (SQL Server)
Drop a Database Snapshot (Transact-SQL)
See Also
Database Snapshots (SQL Server)
RESTORE (Transact-SQL)
sys.databases (Transact-SQL)
Database Mirroring and Database Snapshots (SQL Server)
Drop a Database Snapshot (Transact-SQL)
3/24/2017 • 1 min to read • Edit Online
Dropping a database snapshot deletes the database snapshot from SQL Server and deletes the sparse files that
are used by the snapshot. When you drop a database snapshot, all user connections to it are terminated.
Security
Permissions
Any user with DROP DATABASE permissions can drop a database snapshot.
How to Drop a Database Snapshot (Using Transact-SQL)
To drop a database snapshot
1. Identify the database snapshot that you want to drop. You can view the snapshots on a database in SQL
Server Management Studio. For more information, see View a Database Snapshot (SQL Server).
2. Issue a DROP DATABASE statement, specifying the name of the database snapshot to be dropped. The
syntax is as follows:
DROP DATABASE database_snapshot_name [ ,...n ]
Where database_snapshot_name is the name of the database snapshot to be dropped.
Example (Transact-SQL)
This example drops a database snapshot named SalesSnapshot0600, without affecting the source database.
DROP DATABASE SalesSnapshot0600;
Any user connections to SalesSnapshot0600 are terminated, and all of the NTFS file system sparse files used by
the snapshot are deleted.
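Because the DROP DATABASE syntax accepts a comma-separated list of names, more than one snapshot can be dropped in a single statement; for example, with hypothetical snapshot names:
DROP DATABASE SalesSnapshot0600, SalesSnapshot1200;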
NOTE
For information about the use of sparse files by database snapshots, see Database Snapshots (SQL Server).
Related Tasks
Create a Database Snapshot (Transact-SQL)
View a Database Snapshot (SQL Server)
Revert a Database to a Database Snapshot
See Also
DROP DATABASE (Transact-SQL)
Database Snapshots (SQL Server)
Database Instant File Initialization
3/24/2017 • 2 min to read • Edit Online
Data and log files are initialized to overwrite any existing data left on the disk from previously deleted files. Data
and log files are first initialized by filling the files with zeros when you perform one of the following operations:
Create a database.
Add files, log or data, to an existing database.
Increase the size of an existing file (including autogrow operations).
Restore a database or filegroup.
File initialization causes these operations to take longer. However, when data is written to the files for the
first time, the operating system does not have to fill the files with zeros.
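For example, a CREATE DATABASE statement such as the following (hypothetical names, paths, and sizes) zero-initializes both the data file and the log file before it completes, unless instant file initialization applies to the data file as described below:
-- Hypothetical file names, paths, and sizes, for illustration only.
CREATE DATABASE SalesArchive
ON ( NAME = SalesArchive_data,
     FILENAME = 'D:\Data\SalesArchive.mdf',
     SIZE = 10GB )
LOG ON ( NAME = SalesArchive_log,
     FILENAME = 'E:\Logs\SalesArchive.ldf',
     SIZE = 1GB );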
Instant File Initialization
In SQL Server, data files can be initialized instantaneously. This allows for fast execution of the previously
mentioned file operations. Instant file initialization reclaims used disk space without filling that space with zeros.
Instead, disk content is overwritten as new data is written to the files. Log files cannot be initialized instantaneously.
NOTE
Instant file initialization is available only on Microsoft Windows XP Professional or Windows Server 2003 or later versions.
Instant file initialization is available only if the SQL Server (MSSQLSERVER) service account has been granted
SE_MANAGE_VOLUME_NAME. Members of the Windows Administrators group have this right and can grant it to
other users by adding them to the Perform Volume Maintenance Tasks security policy. For more information
about assigning user rights, see the Windows documentation.
Instant file initialization is not available when transparent data encryption (TDE) is enabled.
To grant an account the Perform volume maintenance tasks permission:
1. On the computer where the backup file will be created, open the Local Security Policy application (
secpol.msc ).
2. In the left pane, expand Local Policies, and then click User Rights Assignment.
3. In the right pane, double-click Perform volume maintenance tasks.
4. Click Add User or Group and add any user accounts that are used for backups.
5. Click Apply, and then close all Local Security Policy dialog boxes.
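The SQL Server service must be restarted before the new right takes effect. On versions of SQL Server that expose the instant_file_initialization_enabled column in sys.dm_server_services (an assumption; older builds may not include this column), you can then verify whether instant file initialization is in use:
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;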
Security Considerations
Because the deleted disk content is overwritten only as new data is written to the files, the deleted content might be
accessed by an unauthorized principal. While the database file is attached to the instance of SQL Server, this
information disclosure threat is reduced by the discretionary access control list (DACL) on the file. This DACL allows
file access only to the SQL Server service account and the local administrator. However, when the file is detached, it
may be accessed by a user or service that does not have SE_MANAGE_VOLUME_NAME. A similar threat exists when
the database is backed up. The deleted content can become available to an unauthorized user or service if the
backup file is not protected with an appropriate DACL.
If the potential for disclosing deleted content is a concern, you should do one or both of the following:
Always make sure that any detached data files and backup files have restrictive DACLs.
Disable instant file initialization for the instance of SQL Server by revoking SE_MANAGE_VOLUME_NAME
from the SQL Server service account.
NOTE
Disabling instant file initialization only affects files that are created or increased in size after the user right is revoked.
See Also
CREATE DATABASE (SQL Server Transact-SQL)