Ramblings of a SQL DBA mind... by Nitin Garg
http://blogs.lostincreativity.com/sqldba
Managing and Optimizing Tempdb in SQL Server
Author : admin
We often come across situations where there is contention in tempdb, tempdb is getting oversized, and so on. Better planning of tempdb can avoid performance issues when it is used heavily and also reduce administration effort. The article below focuses on the planning stage of tempdb, with the goal of reducing administration effort and avoiding performance issues.
Source: http://support.microsoft.com/kb/328551, http://www.sqllion.com/2009/05/optimizing-tempdb-in-sql-server-2005/
Tempdb is one of the system databases in SQL Server. All user data in tempdb is lost when SQL Server shuts down, because tempdb is created afresh each time SQL Server restarts: it is recreated from the model database, and its data and log files are resized to their last configured settings.
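As a side note, because tempdb is rebuilt at every startup, its creation date reveals when the instance last restarted. A minimal check (SQL Server 2005 and later) might look like this:

-- tempdb is re-created at every startup, so its create_date equals
-- the last time the SQL Server service was started.
SELECT name, create_date
FROM sys.databases
WHERE name = 'tempdb'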
Like other databases, tempdb has two types of files: data files and a log file. The default tempdb data file size is 8 MB when the service starts, and if the autogrow option is enabled its size may grow until the disk volume is full; at every restart of the service, tempdb shrinks back to its configured size. You should set the initial size of tempdb according to the needs of the workload, because presizing tempdb with a decent amount of space improves performance: it avoids the overhead of autogrowing tempdb during normal operation, so autogrow kicks in only in the most extreme circumstances. It is often better to keep the tempdb database on a separate disk.
To change the tempdb path to a different location, follow the steps given at http://support.microsoft.com/kb/224071.
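As a rough sketch (not a substitute for the KB article), moving the tempdb files with ALTER DATABASE might look like the following. The E:\SQLData path is only an example, and the change takes effect after SQL Server is restarted:

USE master
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf')
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLData\templog.ldf')
GO
-- Restart the SQL Server service; the files are created in the new location at startup.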
If there is no space left in tempdb, the server can become unusable. This usually happens when tempdb reaches its maximum size limit or when there is no space left on the physical drive that stores its data and log files. Because tempdb is a global resource, it is mandatory for a DBA to use its space effectively and efficiently, especially when tempdb approaches its critical limit, so it is better to monitor tempdb's size and some related performance counters.
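One way to watch tempdb space from T-SQL (SQL Server 2005 and later) is the sys.dm_db_file_space_usage DMV; a minimal sketch:

-- Page counts are in 8-KB pages; multiply by 8 to get KB.
SELECT
    SUM(unallocated_extent_page_count) * 8 AS free_space_kb,
    SUM(user_object_reserved_page_count) * 8 AS user_objects_kb,
    SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
    SUM(version_store_reserved_page_count) * 8 AS version_store_kb
FROM tempdb.sys.dm_db_file_space_usage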
Another issue users commonly face is that, when the tempdb database is heavily used, SQL Server may experience contention when it tries to allocate pages.
One great way to improve SQL Server performance is to configure tempdb effectively. To increase its efficiency, check the physical disk configuration, the file configuration, and some settings within the database.
To identify data file sizing or contention issues, three types of tempdb space consumers should be taken into account, along with the operations that populate them:
User objects: all the objects that are created by user applications, such as:
user-defined tables
global and local temporary tables
indexes
table variables
restructuring a clustered index (creating or altering a clustered index, mapping index, etc.)
Internal objects: below is the list of internal uses of tempdb for storing intermediate data:
intermediate runs for index sorts and for GROUP BY and ORDER BY operations
intermediate results for hash joins and hash aggregates
storage for XML variables and other large object (LOB) data type variables; the LOB data types include text, image, ntext, varchar(max), varbinary(max), and all others
spools used by queries to store intermediate results
keys stored by keyset cursors
query results stored by static cursors
messages in transit stored by Service Broker
data stored by INSTEAD OF triggers for internal processing
Version store: row versions generated by transactions have to be tracked, and they are kept in version stores in tempdb. Version stores hold the row versions generated for snapshot isolation, triggers, MARS (multiple active result sets), and online index builds. There are two types of version store in tempdb:
online index build version store (row versions from tables that have online index build operations on them)
common version store (for row versions from all other tables in all databases)
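As a hedged sketch (SQL Server 2005 and later), the following query lists active snapshot transactions; long-running ones keep row versions alive and can make the version store grow:

-- Long-running snapshot transactions prevent the version store from being cleaned up.
SELECT session_id, transaction_id, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC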
These operations heavily use tempdb:
Repeated create and drop of temporary tables (local or global).
Table variables that use tempdb for storage purposes.
Work tables associated with CURSORS.
Work tables associated with an ORDER BY clause.
Work tables associated with a GROUP BY clause.
Work files associated with HASH PLANS.
Heavy and significant use of these activities may lead to contention problems.
Resolution:
To reduce the allocation resource contention for tempdb that is experiencing heavy usage, follow these steps:
1. Apply SP4: For servers that are running SQL Server 2000 Service Pack 3 (SP3) or earlier, apply SP4. With the fix, the starting file is now different for each consecutive mixed-page allocation (if more than one file exists). This avoids the contention problem by breaking up the "train" of allocations that previously scanned the SGAMs in the same order, from the same starting point, every time. The new SGAM allocation algorithm is pure round robin and, to maintain speed, does not honor proportional fill.
2. Implement trace flag -T1118.
-T1118 is a server-wide setting.
Include the -T1118 trace flag in the Startup parameters for SQL Server so that the trace flag remains in effect even after
SQL Server is recycled.
-T1118 removes almost all single page allocations on the server.
By disabling most of the single page allocations, you reduce the contention on the SGAM page.
With -T1118 turned ON, almost all new allocations are performed from a GAM page (for example, 2:1:2) that allocates eight (8) pages (one extent) at a time to an object, as opposed to a single page from a mixed extent for the first eight (8) pages of an object without the trace flag.
The IAM pages still use the single page allocations from the SGAM page, even with -T1118 turned ON. However, when
combined with hotfix 8.00.0702 and increased tempdb data files, the net effect is a reduction in contention on the SGAM
page. For space concerns, see the “Disadvantages” section of this article.
Note Trace flag -T1118 is also available and supported in Microsoft SQL Server 2005 and SQL Server 2008. However, if you
are running SQL Server 2005 or SQL Server 2008, you do not have to apply any hotfix.
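A quick way to verify and enable the trace flag from T-SQL is sketched below; note that DBCC TRACEON lasts only until the next restart, so the startup parameter remains the recommended approach:

-- Check whether trace flag 1118 is enabled globally.
DBCC TRACESTATUS (1118, -1)
-- Enable it globally until the next restart (keep -T1118 in the startup parameters to persist it).
DBCC TRACEON (1118, -1)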
For both the space issue and the contention problem, the effective solution is to properly define the physical file parameters, as follows:
1. Configuring No. of Physical Files:
Increase the number of tempdb data files to be at least equal to the number of processors, and create the files with equal sizes. For example, if the tempdb data file size is 5 GB and the log file size is 5 GB, the recommendation is to split the single data file into 10 data files (each of 500 MB, to maintain equal sizing) and leave the log file as is. Placing the data files on separate disks would be good; however, this is not required, and they can coexist on the same disk.
The optimal number of tempdb data files depends on the degree of contention seen in tempdb. As a starting point, you can configure the number of tempdb data files to be at least equal to the number of processors assigned to SQL Server. For higher-end systems (for example, 16 or 32 processors), the starting number could be 10. If the contention is not reduced, you may have to increase the number of data files further.
Note A dual-core processor is considered to be two processors.
The equal sizing of data files is critical because the proportional fill algorithm is based on the size of the files. If data files
are created with unequal sizes, the proportional fill algorithm tries to use the largest file more for GAM allocations instead
of spreading the allocations between all the files, thereby defeating the purpose of creating multiple data files.
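A minimal sketch of adding extra, equally sized data files follows; the logical names, path, and sizes are illustrative only and should match your own layout and workload:

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'E:\SQLData\tempdb2.ndf', SIZE = 500MB)
GO
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'E:\SQLData\tempdb3.ndf', SIZE = 500MB)
GO
-- Repeat until the number of data files matches the recommendation above,
-- keeping every file the same size.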
2. Tempdb auto grow setting:
The auto-grow of tempdb data files can also interfere with the proportional fill algorithm. Therefore, it may be a good idea to turn off the auto-grow feature for the tempdb data files. If the auto-grow option is turned off, you must make sure to create the data files large enough that tempdb does not run out of space.
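As a sketch, auto-grow can be turned off per file with FILEGROWTH = 0, assuming the initial SIZE has already been set large enough for the workload:

-- Disable auto-grow on a tempdb data file (repeat for each data file).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILEGROWTH = 0)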
Disadvantages of multiple physical files
The only downside to the recommendations mentioned earlier is that you may see the size of the databases increase
when the following conditions are true:
New objects are created in a user database.
Each of the new objects occupies less than 64 KB of storage.
Appendix:
Contention arises for the following reasons:
When the tempdb database is heavily used, SQL Server may experience contention when it tries to allocate pages.
In the sysprocesses system table output, the waitresource may show up as "2:1:1" (PFS page) or "2:1:3" (SGAM page); a query sketch for checking these waits appears after this list. Depending on the degree of contention, this may also lead to SQL Server appearing unresponsive for short periods.
During object creation, two (2) pages must be allocated from a mixed extent and assigned to the new object. One page
is for the Index Allocation Map (IAM), and the second is for the first page for the object. SQL Server tracks mixed extents
by using the Shared Global Allocation Map (SGAM) page. Each SGAM page tracks about 4 gigabytes of data.
As part of allocating a page from the mixed extent, SQL Server must scan the Page Free Space (PFS) page to find out
which mixed page is free to be allocated. The PFS page keeps track of free space available on every page, and each
PFS page tracks about 8000 pages. Appropriate synchronization is maintained to make changes to the PFS and SGAM
pages; and that can stall other modifiers for short periods.
When SQL Server searches for a mixed page to allocate, it always starts the scan on the same file and SGAM page.
This results in intense contention on the SGAM page when several mixed page allocations are underway, which can
cause the problems stated above.
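The query below is a hedged sketch for spotting this contention; it simply looks for sessions whose waitresource points at the tempdb PFS or SGAM pages:

-- Sessions waiting on tempdb allocation pages: 2:1:1 is the PFS page, 2:1:3 the SGAM page.
SELECT spid, lastwaittype, waitresource
FROM master..sysprocesses
WHERE waitresource IN ('2:1:1', '2:1:3')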
How increasing the number of tempdb data files with equal sizing reduces contention
Increasing the number of equally sized tempdb data files reduces contention in the following ways:
With one data file for the tempdb, you only have one GAM page, and one SGAM page for each 4 GB of space.
Increasing the number of data files with the same sizes for tempdb effectively creates one or more GAM and
SGAM pages for each data file.
The allocation algorithm for GAM gives out one extent at a time (eight contiguous pages) from the number of files
in a round robin fashion while honoring the proportional fill. Therefore, if you have 10 equal sized files, the first
allocation is from File1, the second from File2, the third from File3, and so on.
Contention on the PFS page is also reduced, because eight pages at a time are marked as FULL when the GAM allocates them.
Panic mode (when tempdb eats all your hard drive space)
Use Performance Monitor counters or monitoring queries to check tempdb space usage (a query sketch follows this list).
Set up an alerting system to notify you whenever free space in tempdb drops below a threshold.
After getting the alert, try to redesign queries to work on smaller sets of data at a time.
Break one large transaction into several smaller transactions if possible.
Expand tempdb by adding files or by moving it to another hard drive or volume.
Make each data file the same size to allow for optimal proportional-fill performance.
Put the tempdb database on a fast I/O device.
Put the tempdb database on disks that are not used by user databases.
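The query below is a minimal sketch (SQL Server 2005 and later) for finding the sessions that are allocating the most tempdb space, which helps identify the queries to redesign or break up:

-- Top tempdb consumers by session; page counts are in 8-KB pages.
SELECT TOP (10)
    session_id,
    user_objects_alloc_page_count * 8 AS user_objects_kb,
    internal_objects_alloc_page_count * 8 AS internal_objects_kb
FROM sys.dm_db_session_space_usage
ORDER BY user_objects_alloc_page_count + internal_objects_alloc_page_count DESC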
Shrinking tempdb
http://support.microsoft.com/kb/307487
Find the current size of the tempdb files using:
use tempdb
select name, (size*8) as FileSizeKB from sys.database_files -- size is stored in 8-KB pages
Method 1 to Shrink Tempdb
This method requires you to restart SQL Server.
1. Stop SQL Server. Open a command prompt, and then start SQL Server by typing the following command:
sqlservr -c -f
The -c and -f parameters cause SQL Server to start in a minimum configuration mode with a tempdb size of 1 MB for the
data file and 0.5 MB for the log file.
NOTE: If you use a SQL Server named instance, you must change to the appropriate folder (Program Files\Microsoft
SQL Server\MSSQL$instance name\Binn) and use the -s switch (-s%instance_name%).
2. Connect to SQL Server with Query Analyzer, and then run the following Transact-SQL commands:
ALTER DATABASE tempdb MODIFY FILE
(NAME = 'tempdev', SIZE = target_size_in_MB)
-- Desired target size for the data file
ALTER DATABASE tempdb MODIFY FILE
(NAME = 'templog', SIZE = target_size_in_MB)
-- Desired target size for the log file
3. Stop SQL Server by pressing Ctrl+C in the command prompt window, restart SQL Server as a service, and then verify the size of the Tempdb.mdf and Templog.ldf files.
A limitation of this method is that it only operates on the default tempdb logical files, tempdev and templog. If additional files were added to tempdb, you can shrink them after you restart SQL Server as a service. All tempdb files are re-created during startup; therefore, they are empty and can be removed. To remove additional files in tempdb, use the
ALTER DATABASE command with the REMOVE FILE option.
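As a sketch, removing such an extra file after a restart might look like this; 'tempdev2' is a hypothetical logical file name:

-- The file must be empty (which it is right after a restart) for REMOVE FILE to succeed.
ALTER DATABASE tempdb REMOVE FILE tempdev2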
Method 2 to Shrink Tempdb
Use the DBCC SHRINKDATABASE command to shrink the tempdb database as a whole. DBCC SHRINKDATABASE
receives the parameter target_percent, which is the desired percentage of free space left in the database file after the
database is shrunk. If you use DBCC SHRINKDATABASE, you may have to restart SQL Server.
IMPORTANT: If you run DBCC SHRINKDATABASE, no other activity can be occurring with the tempdb database. To
make sure that other processes cannot use tempdb while DBCC SHRINKDATABASE is run, you must start SQL Server
in single-user mode. For more information, refer to the Effects of Execution of DBCC SHRINKDATABASE or DBCC SHRINKFILE While Tempdb Is In Use section of this article.
1. Determine the space currently used in tempdb by using the sp_spaceused stored procedure. Then, calculate the
percentage of free space left for use as a parameter to DBCC SHRINKDATABASE; this calculation is based on the
desired database size.
Note In some cases you may have to execute sp_spaceused @updateusage=true to recalculate the space used and to
obtain an updated report. Refer to SQL Server Books Online for more information about the sp_spaceused stored
procedure.
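A minimal invocation for this step might look like the following:

use tempdb
go
-- Recalculate and report the space used in tempdb.
exec sp_spaceused @updateusage = 'true'
go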
Consider this example:
Assume that tempdb has two files, the primary data file (Tempdb.mdf), which is 100 MB in size and the log file
(Templog.ldf), which is 30 MB. Assume that sp_spaceused reports that the primary data file contains 60 MB of data. Also
assume that you want to shrink the primary data file to 80 MB. Calculate the desired percentage of free space left after
the shrink, 80 MB – 60 MB = 20 MB. Now, divide 20 MB by 80 MB = 25% and that is your target_percent. The
transaction log file is shrunk accordingly, leaving 25% or 20 MB of space free after the database is shrunk.
2. Connect to SQL Server with Query Analyzer, and then run the following Transact-SQL commands:
dbcc shrinkdatabase (tempdb, target_percent)
-- This command shrinks the tempdb database as a whole;
-- replace target_percent with the desired percentage of free space.
There are limitations for use of the DBCC SHRINKDATABASE command on the tempdb database. The target size for
data and log files cannot be smaller than the size specified when the database was created or the last size explicitly set
with a file-size changing operation such as ALTER DATABASE with the MODIFY FILE option or the DBCC SHRINKFILE
command. Another limitation of DBCC SHRINKDATABASE is the calculation of the target_percent parameter and its
dependency on the current space used.
Method 3 to Shrink Tempdb
Use the command DBCC SHRINKFILE to shrink the individual tempdb files. DBCC SHRINKFILE provides more
flexibility than DBCC SHRINKDATABASE because you can use it on a single database file without affecting other files
that belong to the same database. DBCC SHRINKFILE receives the target size parameter, which is the desired final size
for the database file.
IMPORTANT: You must run the DBCC SHRINKFILE command while no other activity occurs in the tempdb database. To make sure that other processes cannot use tempdb while DBCC SHRINKFILE executes, you must restart SQL Server in single-user mode. For more information about DBCC SHRINKFILE, see the Effects of Execution of DBCC SHRINKDATABASE or DBCC SHRINKFILE While Tempdb Is In Use section of this article.
1. Determine the desired size for the primary data file (tempdb.mdf), the log file (templog.ldf), and/or additional files
added to tempdb. Make sure that the space used in the files is less than or equal to the desired target size.
2. Connect to SQL Server with Query Analyzer, and then run the following Transact-SQL commands for the specific
database files that you need to shrink:
use tempdb
go
dbcc shrinkfile (tempdev, target_size_in_MB)
go
-- this command shrinks the primary data file
dbcc shrinkfile (templog, target_size_in_MB)
go
-- this command shrinks the log file; see the last paragraph of this section.
An advantage of DBCC SHRINKFILE is that it can reduce the size of a file to a size smaller than its original size. You
can issue DBCC SHRINKFILE on any of the data or log files. A limitation of DBCC SHRINKFILE is that you cannot make
the database smaller than the size of the model database.
Effects of Execution of DBCC SHRINKDATABASE or DBCCSHRINKFILE While Tempdb Is In Use
If tempdb is in use and you attempt to shrink it by using the DBCC SHRINKDATABASE or DBCC SHRINKFILE
commands, you may receive multiple consistency errors similar to the following type and the shrink operation may fail:
Server: Msg 2501, Level 16, State 1, Line 1 Could not find table named '1525580473'. Check sysobjects.
-or-
Server: Msg 8909, Level 16, State 1, Line 0 Table Corrupt: Object ID 1, index ID 0, page ID %S_PGID. The PageId in the page header = %S_PGID.
Although error 2501 may not be indicative of any corruption in tempdb, it causes the shrink operation to fail. On the other
hand, error 8909 could indicate corruption in the tempdb database. Restart SQL Server to re-create tempdb and clean
up the consistency errors. However, keep in mind that there could be other reasons for physical data corruption errors
such as error 8909, including input/output subsystem problems.
REFERENCES:
http://technet.microsoft.com/en-au/library/cc966545.aspx
http://support.microsoft.com/default.aspx?scid=KB;EN-US;307487
http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1307255,00.html
http://msdn.microsoft.com/en-us/library/ms187104.aspx
http://msdn.microsoft.com/en-us/library/ms175527.aspx
http://support.microsoft.com/kb/307487