Table of Contents
Overview
Quick Start 1: In-Memory OLTP Technologies for Faster Transact-SQL Performance
Overview and Usage Scenarios
Plan your adoption of In-Memory OLTP Features in SQL Server
Requirements for Using Memory-Optimized Tables
In-Memory OLTP Code Samples
Demonstration: Performance Improvement of In-Memory OLTP
Creating a Memory-Optimized Table and a Natively Compiled Stored Procedure
Application-Level Partitioning
Sample Database for In-Memory OLTP
Memory-Optimized Tables
Introduction to Memory-Optimized Tables
Native Compilation of Tables and Stored Procedures
Altering Memory-Optimized Tables
Transactions with Memory-Optimized Tables
Application Pattern for Partitioning Memory-Optimized Tables
Statistics for Memory-Optimized Tables
Table and Row Size in Memory-Optimized Tables
A Guide to Query Processing for Memory-Optimized Tables
Faster temp table and table variable by using memory optimization
Scalar User-Defined Functions for In-Memory OLTP
Indexes for Memory-Optimized Tables
Hash Indexes for Memory-Optimized Tables
Natively Compiled Stored Procedures
Creating Natively Compiled Stored Procedures
Altering Natively Compiled T-SQL Modules
Atomic Blocks
Natively Compiled Stored Procedures and Execution Set Options
Best Practices for Calling Natively Compiled Stored Procedures
Monitoring Performance of Natively Compiled Stored Procedures
Calling Natively Compiled Stored Procedures from Data Access Applications
Estimate Memory Requirements for Memory-Optimized Tables
Bind a Database with Memory-Optimized Tables to a Resource Pool
Monitor and Troubleshoot Memory Usage
Resolve Out Of Memory Issues
Restore a Database and Bind it to a Resource Pool
In-Memory OLTP Garbage Collection
Creating and Managing Storage for Memory-Optimized Objects
Configuring Storage for Memory-Optimized Tables
The Memory Optimized Filegroup
Durability for Memory-Optimized Tables
Checkpoint Operation for Memory-Optimized Tables
Defining Durability for Memory-Optimized Objects
Comparing Disk-Based Table Storage to Memory-Optimized Table Storage
Scalability
Backing Up a Database with Memory-Optimized Tables
Piecemeal Restore of Databases With Memory-Optimized Tables
Restore and Recovery of Memory-Optimized Tables
SQL Server Support for In-Memory OLTP
Unsupported SQL Server Features for In-Memory OLTP
SQL Server Management Objects Support for In-Memory OLTP
SQL Server Integration Services Support for In-Memory OLTP
SQL Server Management Studio Support for In-Memory OLTP
High Availability Support for In-Memory OLTP databases
Transact-SQL Support for In-Memory OLTP
Supported Data Types for In-Memory OLTP
Accessing Memory-Optimized Tables Using Interpreted Transact-SQL
Supported Features for Natively Compiled T-SQL Modules
Supported DDL for Natively Compiled T-SQL modules
Transact-SQL Constructs Not Supported by In-Memory OLTP
Migrating to In-Memory OLTP
Determining if a Table or Stored Procedure Should Be Ported to In-Memory OLTP
Memory Optimization Advisor
Native Compilation Advisor
PowerShell Cmdlet for Migration Evaluation
Implementing SQL_VARIANT in a Memory-Optimized Table
Migration Issues for Natively Compiled Stored Procedures
Creating and Accessing Tables in TempDB from Natively Compiled Stored Procedures
Simulating an IF-WHILE EXISTS Statement in a Natively Compiled Module
Implementing MERGE Functionality in a Natively Compiled Stored Procedure
Implementing a CASE Expression in a Natively Compiled Stored Procedure
Implementing UPDATE with FROM or Subqueries
Implementing an Outer Join
Migrating Computed Columns
Migrating Triggers
Cross-Database Queries
Implementing IDENTITY in a Memory-Optimized Table
In-Memory OLTP (In-Memory Optimization)
3/24/2017 • 2 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) • Azure SQL Database • Azure SQL Data Warehouse • Parallel Data Warehouse
In-Memory OLTP can significantly improve the performance of transaction processing, data ingestion and data
load, and transient data scenarios. To jump into the basic code and knowledge you need to quickly test your own
memory-optimized table and natively compiled stored procedure, see
Quick Start 1: In-Memory OLTP Technologies for Faster Transact-SQL Performance.
A 17-minute video explaining In-Memory OLTP and demonstrating performance benefits:
In-Memory OLTP in SQL Server 2016.
To download the performance demo for In-Memory OLTP used in the video:
In-Memory OLTP Performance Demo v1.0
For a more detailed overview of In-Memory OLTP and a review of scenarios that see performance benefits from
the technology:
Overview and Usage Scenarios
Note that In-Memory OLTP is the SQL Server technology for improving performance of transaction
processing. For the SQL Server technology that improves reporting and analytical query performance see
Columnstore Indexes Guide.
Several improvements have been made to In-Memory OLTP in SQL Server 2016 as well as in Azure SQL
Database. The Transact-SQL surface area has been increased to make it easier to migrate database
applications. Support for performing ALTER operations for memory-optimized tables and natively
compiled stored procedures has been added, to make it easier to maintain applications. For information
about the new features in In-Memory OLTP, see Columnstore indexes - what's new.
NOTE
Try it out
In-Memory OLTP is available in Premium Azure SQL databases. To get started with In-Memory OLTP, as well as
Columnstore in Azure SQL Database, see Optimize Performance using In-Memory Technologies in SQL Database.
In this section
This section includes the following topics:
TOPIC
DESCRIPTION
Quick Start 1: In-Memory OLTP Technologies for Faster
Transact-SQL Performance
Delve right into In-Memory OLTP
Overview and Usage Scenarios
Overview of what In-Memory OLTP is, and which scenarios see performance benefits.
Requirements for Using Memory-Optimized Tables
Discusses hardware and software requirements and
guidelines for using memory-optimized tables.
In-Memory OLTP Code Samples
Contains code samples that show how to create and use a
memory-optimized table.
Memory-Optimized Tables
Introduces memory-optimized tables.
Memory-Optimized Table Variables
Code example showing how to use a memory-optimized
table variable instead of a traditional table variable to reduce
tempdb use.
Indexes on Memory-Optimized Tables
Introduces memory-optimized indexes.
Natively Compiled Stored Procedures
Introduces natively compiled stored procedures.
Managing Memory for In-Memory OLTP
Understanding and managing memory usage on your
system.
Creating and Managing Storage for Memory-Optimized
Objects
Discusses data and delta files, which store information about
transactions in memory-optimized tables.
Backup, Restore, and Recovery of Memory-Optimized Tables
Discusses backup, restore, and recovery for memory-optimized tables.
Transact-SQL Support for In-Memory OLTP
Discusses Transact-SQL support for In-Memory OLTP.
High Availability Support for In-Memory OLTP databases
Discusses availability groups and failover clustering in In-Memory OLTP.
SQL Server Support for In-Memory OLTP
Lists new and updated syntax and features supporting
memory-optimized tables.
Migrating to In-Memory OLTP
Discusses how to migrate disk-based tables to memory-optimized tables.
More information about In-Memory OLTP is available on:
Video explaining In-Memory OLTP and demonstrating performance benefits.
In-Memory OLTP Performance Demo v1.0
SQL Server In-Memory OLTP Internals Technical Whitepaper
SQL Server In-Memory OLTP and Columnstore Feature Comparison
What's new for In-Memory OLTP in SQL Server 2016 Part 1 and Part 2
In-Memory OLTP – Common Workload Patterns and Migration Considerations
In-Memory OLTP Blog
See Also
Database Features
Survey of Initial Areas in In-Memory OLTP
4/12/2017 • 14 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) • Azure SQL Database • Azure SQL Data Warehouse • Parallel Data Warehouse
This article is for the developer who is in a hurry to learn the basics of the In-Memory OLTP performance features
of Microsoft SQL Server and Azure SQL Database.
For In-Memory OLTP, this article provides the following:
Quick explanations of the features.
Core code samples that implement the features.
SQL Server and SQL Database have only minor variations in their support of In-Memory technologies.
In the wild, some bloggers refer to In-Memory OLTP as Hekaton.
Benefits of In-Memory Features
SQL Server provides In-Memory features that can greatly improve the performance of many application systems.
The most straightforward considerations are described in this section.
Features for OLTP (Online Transactional Processing)
Systems that must process large numbers of SQL INSERTs concurrently are excellent candidates for the OLTP
features.
Our benchmarks show that adopting the In-Memory features can make these workloads 5 to 20 times faster.
Systems which process heavy calculations in Transact-SQL are excellent candidates.
A stored procedure that is dedicated to heavy calculations can run up to 99 times faster.
Later you might visit the following articles which offer demonstrations of performance gains from In-Memory
OLTP:
Demonstration: Performance Improvement of In-Memory OLTP offers a small-scale demonstration of the larger
potential performance gains.
Sample Database for In-Memory OLTP offers a larger scale demonstration.
Features for Operational Analytics
In-Memory Analytics refers to SQL SELECTs which aggregate transactional data, typically by inclusion of a GROUP
BY clause. The index type called columnstore is central to operational analytics.
There are two major scenarios:
Batch Operational Analytics refers to aggregation processes that run either after business hours or on
secondary hardware which has copies of the transactional data.
Azure SQL Data Warehouse also relates to batch operational analytics.
Real-time Operational Analytics refers to aggregation processes that run during business hours and on the
primary hardware which is used for transactional workloads.
The present article focuses on OLTP, and not on Analytics. For information on how columnstore indexes bring
Analytics to SQL, see:
Get started with Columnstore for real time operational analytics
Columnstore Indexes Guide
NOTE
A two-minute video about the In-Memory features is available at Azure SQL Database - In-Memory Technologies. The video
is dated December 2015.
Columnstore
A sequence of excellent blog posts elegantly explains columnstore indexes from several perspectives. The majority
of the posts describe further the concept of real-time operational analytics, which columnstore supports. These
posts were authored by Sunil Agarwal, a Program Manager at Microsoft, in March 2016.
Real-time Operational Analytics
1. Real-Time Operational Analytics Using In-Memory Technology
2. Real-Time Operational Analytics – Overview nonclustered columnstore index (NCCI)
3. Real-Time Operational Analytics: Simple example using nonclustered clustered columnstore index (NCCI) in
SQL Server 2016
4. Real-Time Operational Analytics: DML operations and nonclustered columnstore index (NCCI) in SQL Server
2016
5. Real-Time Operational Analytics: Filtered nonclustered columnstore index (NCCI)
6. Real-Time Operational Analytics: Compression Delay Option for Nonclustered Columnstore Index (NCCI)
7. Real-Time Operational Analytics: Compression Delay option with NCCI and the performance
8. Real-Time Operational Analytics: Memory-Optimized Tables and Columnstore Index
Defragment a columnstore index
1. Columnstore Index Defragmentation using REORGANIZE Command
2. Columnstore Index Merge Policy for REORGANIZE
Bulk importation of data
1. Clustered Column Store: Bulk Load
2. Clustered Columnstore Index: Data Load Optimizations – Minimal Logging
3. Clustered columnstore Index: Data Load Optimization – Parallel Bulk Import
Features of In-Memory OLTP
Let's look at the main features of In-Memory OLTP.
Memory-optimized tables
The T-SQL keyword MEMORY_OPTIMIZED, on the CREATE TABLE statement, is how a table is created to exist in
active memory, instead of on disk.
A memory-optimized table has one representation of itself in active memory, and a secondary copy on disk.
The disk copy is for routine recovery after a shutdown-then-restart of the server or database. This memory-plus-disk duality is completely hidden from you and your code.
Natively compiled modules
The T-SQL keyword NATIVE_COMPILATION, on the CREATE PROCEDURE statement, is how a native proc is
created. The T-SQL statements are compiled to machine code on first use of the native proc each time the database
is cycled online. The T-SQL instructions no longer endure slow interpretation of every instruction.
We have seen native compilation result in durations that are 1/100th of the interpreted duration.
A native module can reference memory-optimized tables only, and it cannot reference disk-based tables.
There are three types of natively compiled modules:
Natively compiled stored procedures.
Natively compiled user-defined functions (UDFs), which are scalar.
Natively compiled triggers.
Availability in Azure SQL Database
In-Memory OLTP and Columnstore are available in Azure SQL Database. For details see Optimize Performance
using In-Memory Technologies in SQL Database.
1. Ensure compatibility level >= 130
This section begins a sequence of numbered sections that together demonstrate the Transact-SQL syntax you can
use to implement In-Memory OLTP features.
First, it is important that your database be set to a compatibility level of at least 130. The following T-SQL returns the compatibility level of the current database.
SELECT d.compatibility_level
FROM sys.databases as d
WHERE d.name = Db_Name();
Next is the T-SQL code to update the level, if necessary.
ALTER DATABASE CURRENT
SET COMPATIBILITY_LEVEL = 130;
2. Elevate to SNAPSHOT
When a transaction involves both a disk-based table and a memory-optimized table, we call that a cross-container
transaction. In such a transaction it is essential that the memory-optimized portion of the transaction operate at
the transaction isolation level named SNAPSHOT.
To reliably enforce this level for memory-optimized tables in a cross-container transaction, alter your database
setting by executing the following T-SQL.
ALTER DATABASE CURRENT
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
3. Create an optimized FILEGROUP
On Microsoft SQL Server, before you can create a memory-optimized table you must first create a FILEGROUP that
you declare CONTAINS MEMORY_OPTIMIZED_DATA. The FILEGROUP is assigned to your database. For details see:
The Memory Optimized FILEGROUP
On Azure SQL Database, you need not and cannot create such a FILEGROUP.
The following sample T-SQL script enables a database for In-Memory OLTP and configures all recommended
settings. It works with both SQL Server and Azure SQL Database: enable-in-memory-oltp.sql.
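For reference, a minimal sketch of the FILEGROUP DDL on SQL Server follows. The database name and container path are placeholders; the linked script above is the more complete option.
-- Minimal sketch (SQL Server only). MyDatabase and the container path are placeholders.
ALTER DATABASE MyDatabase
    ADD FILEGROUP MyDatabase_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE MyDatabase
    ADD FILE (NAME = 'MyDatabase_mod_container',
              FILENAME = 'C:\Data\MyDatabase_mod_container')
    TO FILEGROUP MyDatabase_mod;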
4. Create a memory-optimized table
The crucial Transact-SQL keyword is the keyword MEMORY_OPTIMIZED.
CREATE TABLE dbo.SalesOrder
(
    SalesOrderId  integer   not null  IDENTITY  PRIMARY KEY NONCLUSTERED,
    CustomerId    integer   not null,
    OrderDate     datetime  not null
)
WITH
    (MEMORY_OPTIMIZED = ON,
     DURABILITY = SCHEMA_AND_DATA);
Transact-SQL INSERT and SELECT statements against a memory-optimized table are the same as for a regular
table.
ALTER TABLE for Memory-Optimized tables
ALTER TABLE ... ADD/DROP can add or remove a column or an index on a memory-optimized table.
CREATE INDEX and DROP INDEX cannot be run against a memory-optimized table; use ALTER TABLE ... ADD/DROP INDEX instead, as in the sketch below.
For details see Altering Memory-Optimized Tables.
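A minimal sketch of ALTER TABLE ... ADD/DROP INDEX follows, using the dbo.SalesOrder table created above; the index name is a placeholder.
-- Add a nonclustered index to the memory-optimized table, then remove it again.
-- The index name ix_SalesOrder_CustomerId is a placeholder.
ALTER TABLE dbo.SalesOrder
    ADD INDEX ix_SalesOrder_CustomerId NONCLUSTERED (CustomerId);

ALTER TABLE dbo.SalesOrder
    DROP INDEX ix_SalesOrder_CustomerId;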
Plan your memory-optimized tables and indexes
Indexes for Memory-Optimized Tables
Transact-SQL Constructs Not Supported by In-Memory OLTP
5. Create a natively compiled stored procedure (native proc)
The crucial keyword is NATIVE_COMPILATION.
CREATE PROCEDURE ncspRetrieveLatestSalesOrderIdForCustomerId
    @_CustomerId INT
    WITH
        NATIVE_COMPILATION,
        SCHEMABINDING
AS
BEGIN ATOMIC
    WITH
        (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
         LANGUAGE = N'us_english')

    DECLARE @SalesOrderId int, @OrderDate datetime;

    SELECT TOP 1
            @SalesOrderId = s.SalesOrderId,
            @OrderDate    = s.OrderDate
        FROM dbo.SalesOrder AS s
        WHERE s.CustomerId = @_CustomerId
        ORDER BY s.OrderDate DESC;

    RETURN @SalesOrderId;
END;
The keyword SCHEMABINDING means the tables referenced in the native proc cannot be dropped unless the
native proc is dropped first. For details see Creating Natively Compiled Stored Procedures.
6. Execute the native proc
Populate the table with two rows of data.
INSERT into dbo.SalesOrder
( CustomerId, OrderDate )
VALUES
( 42, '2013-01-13 03:35:59' ),
( 42, '2015-01-15 15:35:59' );
An EXECUTE call to the natively compiled stored procedure follows.
DECLARE @LatestSalesOrderId int, @mesg nvarchar(128);
EXECUTE @LatestSalesOrderId =
ncspRetrieveLatestSalesOrderIdForCustomerId 42;
SET @mesg = CONCAT(@LatestSalesOrderId,
' = Latest SalesOrderId, for CustomerId = ', 42);
PRINT @mesg;
-- Here is the actual PRINT output:
-- 2 = Latest SalesOrderId, for CustomerId = 42
Guide to the documentation and next steps
The preceding plain examples give you a foundation for learning the more advanced features of In-Memory OLTP.
The following sections are a guide to the special considerations you might need to know, and to where you can see
the details about each.
How In-Memory OLTP features work so much faster
The following subsections briefly describe how the In-Memory OLTP features work internally to provide improved
performance.
How memory-optimized tables perform faster
Dual nature: A memory-optimized table has a dual nature: one representation in active memory, and the other on
the hard disk. Each transaction is committed to both representations of the table. Transactions operate against the
much faster active memory representation. Memory-optimized tables benefit from the greater speed of active
memory versus the disk. Further, the greater nimbleness of active memory makes practical a more advanced table
structure that is optimized for speed. The advanced structure is also pageless, so it avoids the overhead and
contention of latches and spinlocks.
No locks: The memory-optimized table relies on an optimistic approach to the competing goals of data integrity
versus concurrency and high throughput. During the transaction, the table does not place locks on any version of
the updated rows of data. This can greatly reduce contention in some high volume systems.
Row versions: Instead of locks, the memory-optimized table adds a new version of an updated row in the table
itself, not in tempdb. The original row is kept until after the transaction is committed. During the transaction, other
processes can read the original version of the row.
When multiple versions of a row are created for a disk-based table, row versions are stored temporarily in
tempdb.
Less logging: The before and after versions of the updated rows are held in the memory-optimized table. The pair
of rows provides much of the information that is traditionally written to the log file. This enables the system to
write less information, and less often, to the log. Yet transactional integrity is ensured.
How native procs perform faster
Converting a regular interpreted stored procedure into a natively compiled stored procedure greatly reduces the
number of instructions to execute during run time.
Trade-offs of In-Memory features
As is common in computer science, the performance gains provided by the In-Memory features come with trade-offs. The features bring benefits that are typically more valuable than their extra costs. You can find comprehensive guidance about the trade-offs at:
Plan your adoption of In-Memory OLTP Features in SQL Server
The rest of this section lists some of the major planning and trade-off considerations.
Trade-offs of memory-optimized tables
Estimate memory: You must estimate the amount of active memory that your memory-optimized table will
consume. Your computer system must have adequate memory capacity to host a memory-optimized table. For
details see:
Monitor and Troubleshoot Memory Usage
Estimate Memory Requirements for Memory-Optimized Tables
Table and Row Size in Memory-Optimized Tables
Partition your large table: One way to meet the demand for lots of active memory is to partition your large table, keeping hot, recent data rows in a memory-optimized part and cold, legacy rows (such as sales orders that have been fully shipped and completed) in disk-based parts. This partitioning is a manual process of design
and implementation. See:
Application-Level Partitioning
Application Pattern for Partitioning Memory-Optimized Tables
Trade-offs of native procs
A natively compiled stored procedure cannot access a disk-based table. A native proc can access only memory-optimized tables.
When a native proc runs for the first time after the server or database was most recently brought back online,
the native proc must be recompiled one time. This causes a delay before the native proc starts to run.
Advanced considerations for memory-optimized tables
Indexes for Memory-Optimized Tables are different in some ways from indexes on traditional on-disk tables.
Hash Indexes are available only on memory-optimized tables.
You must plan to ensure there will be sufficient active memory for your planned memory-optimized table and its
indexes. See:
Creating and Managing Storage for Memory-Optimized Objects
A memory-optimized table can be declared with DURABILITY = SCHEMA_ONLY:
This syntax tells the system to discard all data from the memory-optimized table when the database is taken
offline. Only the table definition is persisted.
When the database is brought back online, the memory-optimized table is loaded back into active memory,
empty of data.
SCHEMA_ONLY tables can be a superior alternative to #temporary tables in tempdb, when many thousands of
rows are involved.
Table variables can also be declared as memory-optimized. See:
Faster temp table and table variable by using memory optimization
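A minimal sketch of declaring and using a memory-optimized table variable follows; it goes through a memory-optimized table type, and the type and column names are placeholders.
-- A memory-optimized table type; such a type must declare at least one index.
CREATE TYPE dbo.type_OrderIdList AS TABLE
(
    OrderId  int  NOT NULL  PRIMARY KEY NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- A table variable of that type lives in active memory rather than in tempdb.
DECLARE @OrderIds dbo.type_OrderIdList;
INSERT INTO @OrderIds (OrderId) VALUES (1), (2), (3);
SELECT OrderId FROM @OrderIds;
GO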
Advanced considerations for natively compiled modules
The types of natively compiled modules available through Transact-SQL are:
Natively compiled stored procedures (native procs).
Natively compiled scalar user-defined functions.
Natively compiled triggers (native triggers).
Only triggers that are natively compiled are allowed on memory-optimized tables.
Natively compiled table-valued functions.
Improving temp table and table variable performance using memory optimization
A natively compiled user-defined function (UDF) runs faster than an interpreted UDF. Here are some things to
consider with UDFs:
When a T-SQL SELECT uses a UDF, the UDF is always called once per returned row.
UDFs never run inline, and instead are always called.
The compiled-versus-interpreted distinction is less significant than the overhead of repeated calls, which is inherent to all UDFs.
Still, the overhead of UDF calls is often acceptable at the practical level.
For test data and explanation about the performance of native UDFs, see:
Soften the RBAR impact with Native Compiled UDFs in SQL Server 2016
The fine blog post written by Gail Shaw, dated January 2016.
Documentation guide for memory-optimized tables
Here are links to other articles that discuss special considerations for memory-optimized tables:
Migrating to In-Memory OLTP
Determining if a Table or Stored Procedure Should Be Ported to In-Memory OLTP
The Transaction Performance Analysis report in SQL Server Management Studio helps you evaluate if In-Memory OLTP will improve your database application's performance.
Use the Memory Optimization Advisor to help you migrate the disk-based database table to In-Memory
OLTP.
Backup, Restore, and Recovery of Memory-Optimized Tables
The storage used by memory-optimized tables can be much larger than their size in memory, and it affects
the size of the database backup.
Transactions with Memory-Optimized Tables
Includes information about retry logic in T-SQL, for transactions on memory-optimized tables.
Transact-SQL Support for In-Memory OLTP
Supported and unsupported T-SQL and data types, for memory-optimized tables and native procs.
Bind a Database with Memory-Optimized Tables to a Resource Pool, which discusses an optional advanced
consideration.
Documentation guide for native procs
Related links
Initial article: In-Memory OLTP (In-Memory Optimization)
Here are articles that offer code to demonstrate the performance gains you can achieve by using In-Memory OLTP:
Demonstration: Performance Improvement of In-Memory OLTP offers a small-scale demonstration of the larger
potential performance gains.
Sample Database for In-Memory OLTP offers a larger scale demonstration.
Overview and Usage Scenarios
4/10/2017 • 10 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) • Azure SQL Database • Azure SQL Data Warehouse • Parallel Data Warehouse
In-Memory OLTP is the premier technology available in SQL Server and Azure SQL Database for optimizing
performance of transaction processing, data ingestion, data load, and transient data scenarios. This topic includes
an overview of the technology and outlines usage scenarios for In-Memory OLTP. Use this information to
determine whether In-Memory OLTP is right for your application. The topic concludes with an example that shows
In-Memory OLTP objects, reference to a perf demo, and references to resources you can use for next steps.
This article covers the In-Memory OLTP technology in both SQL Server and Azure SQL Database. The following
blog post contains a deep dive into the performance and resource utilization benefits in Azure SQL Database:
In-Memory OLTP in Azure SQL Database
In-Memory OLTP Overview
In-Memory OLTP can provide great performance gains, for the right workloads. One customer, bwin, managed to
achieve 1.2 Million requests per second with a single machine running SQL Server 2016, leveraging In-Memory
OLTP. Another customer, Quorum, managed to double their workload while reducing their resource utilization by
70%, by leveraging In-Memory OLTP in Azure SQL Database. While customers have seen up to 30X performance
gain in some cases, how much gain you will see depends on the workload.
Now, where does this performance gain come from? In essence, In-Memory OLTP improves performance of
transaction processing by making data access and transaction execution more efficient, and by removing lock and
latch contention between concurrently executing transactions: it is not fast because it is in-memory; it is fast
because it is optimized around the data being in-memory. Data storage, access, and processing algorithms were
redesigned from the ground up to take advantage of the latest enhancements in in-memory and high concurrency
computing.
Now, just because data lives in-memory does not mean you lose it when there is a failure. By default, all
transactions are fully durable, meaning that you have the same durability guarantees you get for any other table in
SQL Server: as part of transaction commit, all changes are written to the transaction log on disk. If there is a failure
at any time after the transaction commits, your data is there when the database comes back online. In addition, In-Memory OLTP works with all high availability and disaster recovery capabilities of SQL Server, like AlwaysOn,
backup/restore, etc.
To leverage In-Memory OLTP in your database, you use one or more of the following types of objects:
Memory-optimized tables are used for storing user data. You declare a table to be memory-optimized at create
time.
Non-durable tables are used for transient data, either for caching or for intermediate result sets (replacing
traditional temp tables). A non-durable table is a memory-optimized table that is declared with
DURABILITY=SCHEMA_ONLY, meaning that changes to these tables do not incur any IO. This avoids consuming
log IO resources for cases where durability is not a concern.
Memory-optimized table types are used for table-valued parameters (TVPs), as well as intermediate result sets
in stored procedures. These can be used instead of traditional table types. Table variables and TVPs that are
declared using a memory-optimized table type inherit the benefits of non-durable memory-optimized tables:
efficient data access, and no IO.
Natively compiled T-SQL modules are used to further reduce the time taken for an individual transaction by
reducing CPU cycles required to process the operations. You declare a Transact-SQL module to be natively
compiled at create time. At this time, the following T-SQL modules can be natively compiled: stored procedures,
triggers and scalar user-defined functions.
In-Memory OLTP is built into SQL Server and Azure SQL Database. And because these objects behave very similarly
to their traditional counterparts, you can often gain performance benefits while making only minimal changes to
the database and the application. Plus, you can have both memory-optimized and traditional disk-based tables in
the same database, and run queries across the two. You will find a Transact-SQL script showing an example for
each of these types of objects towards the bottom of this topic.
Usage Scenarios for In-Memory OLTP
In-Memory OLTP is not a magic go-fast button, and is not suitable for all workloads. For example, memory-optimized tables will not really bring down your CPU utilization if most of the queries are performing aggregation
over large ranges of data – Columnstore indexes help with that scenario.
Here is a list of scenarios and application patterns where we have seen customers be successful with In-Memory
OLTP.
High-throughput and low-latency transaction processing
This is really the core scenario for which we built In-Memory OLTP: support large volumes of transactions, with
consistent low latency for individual transactions.
Common workload scenarios are: trading of financial instruments, sports betting, mobile gaming, and ad delivery.
Another common pattern we’ve seen is a “catalog” that is frequently read and/or updated. One example is where
you have large files, each distributed over a number of nodes in a cluster, and you catalog the location of each
shard of each file in a memory-optimized table.
Implementation considerations
Use memory-optimized tables for your core transaction tables, i.e., the tables with the most performance-critical
transactions. Use natively compiled stored procedures to optimize execution of the logic associated with the
business transaction. The more of the logic you can push down into stored procedures in the database, the more
benefit you will see from In-Memory OLTP.
To get started in an existing application:
1. use the transaction performance analysis report to identify the objects you want to migrate,
2. and use the memory-optimization and native compilation advisors to help with migration.
Customer Case Studies
CMC Markets leverages In-Memory OLTP in SQL Server 2016 to achieve consistent low latency: Because a
second is too long to wait, this financial services firm is updating its trading software now.
Derivco leverages In-Memory OLTP in SQL Server 2016 to support increased throughput and handle spikes in
the workload: When an online gaming company doesn’t want to risk its future, it bets on SQL Server 2016.
Data ingestion, including IoT (Internet-of-Things)
In-Memory OLTP is really good at ingesting large volumes of data from many different sources at the same time.
And it is often beneficial to ingest data into a SQL Server database compared with other destinations, because SQL
makes running queries against the data really fast, and allows you to get real-time insights.
Common application patterns are: Ingesting sensor readings and events, to allow notification, as well as historical
analysis. Managing batch updates, even from multiple sources, while minimizing the impact on the concurrent
read workload.
Implementation considerations
Use a memory-optimized table for the data ingestion. If the ingestion consists mostly of inserts (rather than updates) and the In-Memory OLTP storage footprint of the data is a concern, use either of the following approaches (a sketch of the first option follows this list):
Use a job that regularly batch-offloads data to a disk-based table with a Clustered Columnstore index, by running INSERT INTO <disk-based table> SELECT FROM <memory-optimized table> ; or
Use a temporal memory-optimized table to manage historical data – in this mode, historical data lives on disk,
and data movement is managed by the system.
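A hedged sketch of the batch-offload option follows; the table and column names are placeholders, and the one-hour cutoff is only an example.
-- Move readings older than one hour from the memory-optimized ingestion table to a
-- disk-based table with a clustered columnstore index, then delete them from the hot
-- table. All object names here are placeholders.
DECLARE @cutoff datetime2 = DATEADD(hour, -1, SYSUTCDATETIME());

INSERT INTO dbo.SensorReadings_History (SensorId, ReadingTime, ReadingValue)
    SELECT SensorId, ReadingTime, ReadingValue
    FROM dbo.SensorReadings_Hot
    WHERE ReadingTime < @cutoff;

DELETE FROM dbo.SensorReadings_Hot
    WHERE ReadingTime < @cutoff;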
The SQL Server samples repository contains a smart grid application that uses a temporal memory-optimized
table, a memory-optimized table type, and a natively compiled stored procedure, to speed up data ingestion, while
managing the In-Memory OLTP storage footprint of the sensor data:
smart-grid-release
smart-grid-source-code
Customer Case Studies
Quorum doubles key database’s workload while lowering utilization by 70% by leveraging In-Memory OLTP in
Azure SQL Database
EdgeNet improved the performance of batch data load and removed the need to maintain a mid-tier cache,
with In-Memory OLTP in SQL Server 2014: Data Services Firm Gains Real-Time Access to Product Data with In-Memory Technology
Beth Israel Deaconess Medical Center was able to dramatically improve data ingestion rate from domain
controllers, and handle spikes in the workload, with In-Memory OLTP in SQL Server 2014:
[https://customers.microsoft.com/en-us/story/strengthening-data-security-and-creating-more-time-for]
Caching and session state
The In-Memory OLTP technology makes SQL really attractive for maintaining session state (e.g., for an ASP.NET
application) and for caching.
ASP.NET session state is a very successful use case for In-Memory OLTP. With SQL Server, one customer was able to achieve 1.2 million requests per second. In the meantime, they have started using In-Memory OLTP for the caching needs of all mid-tier applications in the enterprise. Details: How bwin is using SQL Server 2016 In-Memory OLTP to achieve unprecedented performance and scale
Implementation considerations
You can use non-durable memory-optimized tables as a simple key-value store by storing a BLOB in a varbinary(max) column. Alternatively, you can implement a semi-structured cache with JSON support in SQL
Server and Azure SQL Database. Finally, you can create a full relational cache through non-durable tables with a
full relational schema, including various data types and constraints.
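A minimal sketch of a non-durable key-value cache table follows; the table and column names are placeholders.
-- A non-durable (SCHEMA_ONLY) memory-optimized table used as a simple key-value cache.
-- Object names are placeholders.
CREATE TABLE dbo.SessionCache
(
    SessionId    nvarchar(200)   NOT NULL  PRIMARY KEY NONCLUSTERED,
    Payload      varbinary(max),
    LastRefresh  datetime2       NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);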
Get started with memory-optimizing ASP.NET session state by leveraging the scripts published on GitHub to
replace the objects created by the built-in SQL Server session state provider:
aspnet-session-state
Customer case studies
bwin was able to dramatically increase throughput and reduce hardware footprint for ASP.NET session state,
with In-Memory OLTP in SQL Server 2014: Gaming Site Can Scale to 250,000 Requests Per Second and
Improve Player Experience
bwin increased throughput with ASP.NET session state even further and implemented an enterprise-wide mid-tier caching system, with In-Memory OLTP in SQL Server 2016: How bwin is using SQL Server 2016 In-Memory
OLTP to achieve unprecedented performance and scale
Tempdb object replacement
Leverage non-durable tables and memory-optimized table types to replace your traditional tempdb-based #temp
tables, table variables, and table-valued parameters (TVPs).
Memory-optimized table variables and non-durable tables typically reduce CPU and completely remove log IO,
when compared with traditional table variables and #temp tables.
Implementation considerations
To get started see: Improving temp table and table variable performance using memory optimization.
Customer Case Studies
One customer was able to improve performance by 40%, just by replacing traditional TVPs with memory-optimized TVPs: High Speed IoT Data Ingestion Using In-Memory OLTP in Azure
ETL (Extract Transform Load)
ETL workflows often include load of data into a staging table, transformations of the data, and load into the final
tables.
Implementation considerations
Use non-durable memory-optimized tables for the data staging. They completely remove all IO, and make data
access more efficient.
If you perform transformations on the staging table as part of the workflow, you can use natively compiled stored
procedures to speed up these transformations. If you can do these transformations in parallel you get additional
scaling benefits from the memory-optimization.
Sample Script
Before you can start using In-Memory OLTP, you need to create a MEMORY_OPTIMIZED_DATA filegroup. In
addition, we recommend using database compatibility level 130 (or higher) and setting the database option
MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT to ON.
You can use the script at the following location to create the filegroup in the default data folder, and configure the
recommended settings:
enable-in-memory-oltp.sql
The following script illustrates In-Memory OLTP objects you can create in your database:
-- configure recommended DB option
ALTER DATABASE CURRENT SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=ON
GO
-- memory-optimized table
CREATE TABLE dbo.table1
( c1 INT IDENTITY PRIMARY KEY NONCLUSTERED,
c2 NVARCHAR(MAX))
WITH (MEMORY_OPTIMIZED=ON)
GO
-- non-durable table
CREATE TABLE dbo.temp_table1
( c1 INT IDENTITY PRIMARY KEY NONCLUSTERED,
c2 NVARCHAR(MAX))
WITH (MEMORY_OPTIMIZED=ON,
DURABILITY=SCHEMA_ONLY)
GO
-- memory-optimized table type
CREATE TYPE dbo.tt_table1 AS TABLE
( c1 INT IDENTITY,
c2 NVARCHAR(MAX),
is_transient BIT NOT NULL DEFAULT (0),
INDEX ix_c1 HASH (c1) WITH (BUCKET_COUNT=1024))
WITH (MEMORY_OPTIMIZED=ON)
GO
-- natively compiled stored procedure
CREATE PROCEDURE dbo.usp_ingest_table1
@table1 dbo.tt_table1 READONLY
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT,
LANGUAGE=N'us_english')
DECLARE @i INT = 1
WHILE @i > 0
BEGIN
INSERT dbo.table1
SELECT c2
FROM @table1
WHERE c1 = @i AND is_transient=0
IF @@ROWCOUNT > 0
SET @i += 1
ELSE
BEGIN
INSERT dbo.temp_table1
SELECT c2
FROM @table1
WHERE c1 = @i AND is_transient=1
IF @@ROWCOUNT > 0
SET @i += 1
ELSE
SET @i = 0
END
END
END
GO
-- sample execution of the proc
DECLARE @table1 dbo.tt_table1
INSERT @table1 (c2, is_transient) VALUES (N'sample durable', 0)
INSERT @table1 (c2, is_transient) VALUES (N'sample non-durable', 1)
EXECUTE dbo.usp_ingest_table1 @table1=@table1
SELECT c1, c2 from dbo.table1
SELECT c1, c2 from dbo.temp_table1
GO
Resources to learn more:
Quick Start 1: In-Memory OLTP Technologies for Faster T-SQL Performance
Perf demo using In-Memory OLTP can be found at: in-memory-oltp-perf-demo-v1.0
17-minute video explaining In-Memory OLTP and showing the demo (demo is at 8:25)
Script to enable In-Memory OLTP and set recommended options
Main In-Memory OLTP documentation
Performance and resource utilization benefits of In-Memory OLTP in Azure SQL Database
Improving temp table and table variable performance using memory optimization
Optimize Performance using In-Memory Technologies in SQL Database
System-Versioned Temporal Tables with Memory-Optimized Tables
In-Memory OLTP – Common Workload Patterns and Migration Considerations.
Plan your adoption of In-Memory OLTP Features in SQL Server
5/9/2017 • 10 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) • Azure SQL Database • Azure SQL Data Warehouse • Parallel Data Warehouse
This article describes the ways in which the adoption of In-Memory features affects other aspects of your business
system.
A. Adoption of In-Memory OLTP features
The following subsections discuss factors you must consider when you plan to adopt and implement In-Memory
features. A lot of explanatory information is available at:
Use In-Memory OLTP to improve your application performance in Azure SQL Database
A.1 Prerequisites
One prerequisite for using the In-Memory features can involve the edition or service tier of the SQL product. For
this and other prerequisites, see:
Requirements for Using Memory-Optimized Tables
Editions and Components of SQL Server 2016
SQL Database pricing tier recommendations
A.2 Forecast the amount of active memory
Does your system have enough active memory to support a new memory-optimized table?
Microsoft SQL Server
A memory-optimized table that contains 200 GB of data requires more than 200 GB of active memory to be dedicated to its support. Before you implement a memory-optimized table containing a large amount of data, you
must forecast the amount of additional active memory you might need to add to your server computer. For
estimation guidance, see:
Estimate Memory Requirements for Memory-Optimized Tables
Azure SQL Database
For a database hosted in the Azure SQL Database cloud service, your chosen service tier affects the amount of
active memory your database is allowed to consume. You should plan to monitor the memory usage of your
database by using an alert. For details, see:
Review the In-Memory OLTP Storage limits for your Pricing Tier
Monitor In-Memory OLTP Storage
Memory-optimized table variables
A table variable that is declared as memory-optimized is sometimes preferable to a traditional #TempTable that resides in the tempdb database. Such table variables can provide significant performance gains without using
significant amounts of active memory.
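To see how much active memory your existing memory-optimized tables and their indexes consume, a query against the sys.dm_db_xtp_table_memory_stats DMV can help. The following is a minimal sketch, run in the database that holds the tables; the column list is assumed from the DMV's documented shape.
-- Per-table memory consumption of memory-optimized tables in the current database.
SELECT OBJECT_SCHEMA_NAME(t.object_id) AS schema_name,
       OBJECT_NAME(t.object_id)        AS table_name,
       t.memory_allocated_for_table_kb,
       t.memory_used_by_table_kb,
       t.memory_allocated_for_indexes_kb,
       t.memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats AS t
ORDER BY t.memory_allocated_for_table_kb DESC;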
A.3 Table must be offline to convert to memory-optimized
Some ALTER TABLE functionality is available for memory-optimized tables. But you cannot issue an ALTER TABLE
statement to convert a disk-based table into a memory-optimized table. Instead you must use a more manual set
of steps. What follows are various ways you can convert your disk-based table to be memory-optimized.
Manual scripting
One way to convert your disk-based table to a memory-optimized table is to code the necessary Transact-SQL steps yourself; a hedged sketch follows the numbered steps.
1. Suspend application activity.
2. Take a full backup.
3. Rename your disk-based table.
4. Issue a CREATE TABLE statement to create your new memory-optimized table.
5. INSERT INTO your memory-optimized table with a sub-SELECT from the disk-based table.
6. DROP your disk-based table.
7. Take another full backup.
8. Resume application activity.
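A hedged Transact-SQL sketch of steps 3 through 6 follows. The table and column names are placeholders; your real table needs its actual column list and indexes.
-- Step 3: rename the disk-based table out of the way.
EXEC sp_rename 'dbo.MyTable', 'MyTable_disk';
GO

-- Step 4: re-create the table as memory-optimized (placeholder columns).
CREATE TABLE dbo.MyTable
(
    MyTableId  int            NOT NULL  PRIMARY KEY NONCLUSTERED,
    Payload    nvarchar(200)  NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Step 5: copy the data across.
INSERT INTO dbo.MyTable (MyTableId, Payload)
    SELECT MyTableId, Payload
    FROM dbo.MyTable_disk;
GO

-- Step 6: drop the old disk-based table.
DROP TABLE dbo.MyTable_disk;
GO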
Memory Optimization Advisor
The Memory Optimization Advisor tool can generate a script to help implement the conversion of a disk-based
table to a memory-optimized table. The tool is installed as part of SQL Server Data Tools (SSDT).
Memory Optimization Advisor
Download SQL Server Data Tools (SSDT)
.dacpac file
You can update your database in-place by using a .dacpac file, managed by SSDT. In SSDT you can specify changes
to the schema that is encoded in the .dacpac file.
You work with .dacpac files in the context of a Visual Studio project of type Database.
Data-tier Applications and .dacpac files
A.4 Guidance for whether In-Memory OLTP features are right for your application
For guidance on whether In-Memory OLTP features can improve the performance of your particular application,
see:
In-Memory OLTP (In-Memory Optimization)
B. Unsupported features
Features which are not supported in certain In-Memory OLTP scenarios are described at:
Unsupported SQL Server Features for In-Memory OLTP
The following subsections highlight some of the more important unsupported features.
B.1 SNAPSHOT of a database
After the first time that any memory-optimized table or module is created in a given database, no SNAPSHOT of
the database can ever be taken. The specific reason is that:
The first memory-optimized item makes it impossible to ever drop the last file from the memory-optimized
FILEGROUP; and
No database that has a file in a memory-optimized FILEGROUP can support a SNAPSHOT.
Normally a SNAPSHOT can be handy for quick testing iterations.
B.2 Cross-database queries
Memory-optimized tables do not support cross-database transactions. You cannot access another database from
the same transaction or the same query that also accesses a memory-optimized table.
Table variables are not transactional. Therefore, memory-optimized table variables can be used in cross-database
queries.
B.3 READPAST table hint
No query can apply the READPAST table hint to any memory-optimized table.
The READPAST hint is helpful in scenarios where several sessions are each accessing and modifying the same small
set of rows, such as in processing a queue.
B.4 RowVersion, Sequence
No column can be tagged for RowVersion on a memory-optimized table.
A SEQUENCE cannot be used with a constraint in a memory-optimized table. For example, you cannot create
a DEFAULT constraint with a NEXT VALUE FOR clause. SEQUENCEs can be used with INSERT and UPDATE
statements.
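As a sketch of that last point, the sequence value can be supplied directly in the DML statement instead of through a DEFAULT constraint. The object names below (including dbo.SalesOrder_mo and its columns) are hypothetical, and the statements are interpreted Transact-SQL.
-- A sequence used with a memory-optimized table via the INSERT statement,
-- rather than via a DEFAULT constraint. Object names are placeholders.
CREATE SEQUENCE dbo.seq_OrderNumber AS int START WITH 1;
GO

INSERT INTO dbo.SalesOrder_mo (OrderNumber, CustomerId)
    VALUES (NEXT VALUE FOR dbo.seq_OrderNumber, 42);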
C. Administrative maintenance
This section describes differences in database administration where memory-optimized tables are used.
C.1 Identity seed reset, increment > 1
DBCC CHECKIDENT, to reseed an IDENTITY column, cannot be used on a memory-optimized table.
The increment value is restricted to exactly 1 for an IDENTITY column on a memory-optimized table.
C.2 DBCC CHECKDB cannot validate memory-optimized tables
The DBCC CHECKDB command does nothing when its target is a memory-optimized table. The following steps are a work-around (a hedged sketch of step 2 follows the list):
1. Back up the transaction log.
2. Back up the files in the memory-optimized FILEGROUP to a null device. The backup process invokes a
checksum validation.
If corruption is found, proceed with the next steps.
3. Copy data from your memory-optimized tables into disk-based tables, for temporary storage.
4. Restore the files of the memory-optimized FILEGROUP.
5. INSERT INTO the memory-optimized tables the data you temporarily stored in the disk-based tables.
6. DROP the disk-based tables which temporarily held the data.
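A hedged sketch of step 2 follows; the database and filegroup names are placeholders.
-- Back up the memory-optimized filegroup to a null device so that the backup
-- performs checksum validation without persisting the backup anywhere.
-- MyDatabase and MyDatabase_mod are placeholders.
BACKUP DATABASE MyDatabase
    FILEGROUP = 'MyDatabase_mod'
    TO DISK = 'NUL'
    WITH CHECKSUM;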
D. Performance
This section describes situations where the excellent performance of memory-optimized tables can be held below
full potential.
D.1 Index considerations
All indexes on a memory-optimized table are created and managed by the table-related statements CREATE TABLE
and ALTER TABLE. You cannot target a memory-optimized table with a CREATE INDEX statement.
The traditional b-tree nonclustered index is often the sensible and simple choice when you first implement a
memory-optimized table. Later, after you see how your application performs, you can consider swapping another
index type.
Two special types of indexes need discussion in the context of a memory-optimized table: Hash indexes, and
Columnstore indexes.
For an overview of indexes on memory-optimized tables, see:
Indexes for Memory-Optimized Tables
Hash indexes
Hash indexes can be the fastest format for accessing one specific row by its exact primary key value by using the
'=' operator.
Inexact operators such as '!=', '>', or 'BETWEEN' would harm performance if used with a hash index.
A hash index might not be the best choice if the rate of key value duplication becomes too high.
Guard against underestimating how many buckets your hash index might need, to avoid long chains within
individual buckets. For details, see:
Hash Indexes for Memory-Optimized Tables
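A hedged sketch of declaring a hash index with an explicit BUCKET_COUNT follows; the table, columns, and bucket count are placeholders, sized here for roughly one million distinct key values.
-- A hash primary key sized with an explicit BUCKET_COUNT.
-- Object names and the bucket count are placeholders.
CREATE TABLE dbo.TicketReservation
(
    ReservationId  int  NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    FlightId       int  NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);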
Nonclustered columnstore indexes
Memory-optimized tables deliver high throughput of typical business transactional data, in the paradigm we call
online transaction processing or OLTP. Columnstore indexes deliver high throughput of aggregations and similar
processing we call Analytics. In years past the best approach available for satisfying the needs of both OLTP and
Analytics was to have separate tables with heavy movement of data, and with some degree of data duplication.
Today a simpler hybrid solution is available: have a columnstore index on a memory-optimized table.
A columnstore index can be built on a disk-based table, even as the clustered index. But on a memory-optimized table a columnstore index cannot be clustered.
LOB or off-row columns for a memory-optimized table prevent the creation of a columnstore index on the
table.
No ALTER TABLE statement can be executed against a memory-optimized table while a columnstore index
exists on the table.
As of August 2016, Microsoft has near-term plans to improve the performance of re-creating the
columnstore index.
D.2 LOB and off-row columns
Large objects (LOBs) are columns of such types as varchar(max). Having a couple of LOB columns on a memory-optimized table probably does not harm performance enough to matter. But do avoid having more LOB columns
than your data needs. The same advice applies to off-row columns. Do not define a column as nvarchar(3072) if
varchar(512) will suffice.
A bit more about LOB and off-row columns is available at:
Table and Row Size in Memory-Optimized Tables
Supported Data Types for In-Memory OLTP
E. Limitations of native procs
Particular elements of Transact-SQL are not supported in natively compiled stored procedures.
For considerations when migrating a Transact-SQL script to a native proc, see:
Migration Issues for Natively Compiled Stored Procedures
E.1 No CASE in a native proc
The CASE expression in Transact-SQL cannot be used inside a native proc. You can fashion a work-around:
Implementing a CASE Expression in a Natively Compiled Stored Procedure
E.2 No MERGE in a native proc
The Transact-SQL MERGE statement has similarities to what is often called upsert functionality. A native proc
cannot use the MERGE statement. However, you can achieve the same functionality as MERGE by using a
combination of SELECT plus UPDATE plus INSERT statements. A code example is at:
Implementing MERGE Functionality in a Natively Compiled Stored Procedure
E.3 No joins in UPDATE or DELETE statements, in a native proc
Transact-SQL statements in a native proc can access memory-optimized tables only. In UPDATE and DELETE
statements, you cannot join any tables. Attempts in a native proc fail with a message such as Msg 12319 which
explains that you:
Cannot use the FROM clause in an UPDATE statement.
Cannot specify a table source in a DELETE statement.
No type of subquery provides a work-around. However, you can use a memory-optimized table variable to achieve
a join outcome over multiple statements. Two code samples follow:
DELETE...JOIN... we want to run in a native proc, but cannot.
A work-around set of Transact-SQL statements that achieves the delete join.
Scenario: The table TabProjectEmployee has a unique key of two columns: ProjectId and EmployeeId. Each row
indicates the assignment of an employee to an active project. When an Employee leaves the company, the
employee must be deleted from the TabProjectEmployee table.
Invalid T-SQL, DELETE...JOIN
A native proc cannot have a DELETE...JOIN such as the following.
DELETE pe
FROM TabProjectEmployee AS pe
JOIN TabEmployee        AS e  ON pe.EmployeeId = e.EmployeeId
WHERE e.EmployeeStatus = 'Left-the-Company';
Valid work-around, manual delete...join
Next is the work-around code sample, in two parts:
1. The CREATE TYPE is executed one time, in advance, before the type is first used by any actual table variable.
2. The business process uses the created type. It begins by declaring a table variable of the created table type.
CREATE TYPE dbo.type_TableVar_EmployeeId
    AS TABLE
    (
        EmployeeId  bigint  NOT NULL
            PRIMARY KEY NONCLUSTERED   -- a memory-optimized table type requires at least one index
    )
    WITH (MEMORY_OPTIMIZED = ON);
Next, use the created table type.
DECLARE @MyTableVarMo dbo.type_TableVar_EmployeeId
INSERT INTO @MyTableVarMo (EmployeeId)
SELECT
e.EmployeeId
FROM
TabProjectEmployee AS pe
JOIN TabEmployee
AS e ON e.EmployeeId = pe.EmployeeId
WHERE
e.EmployeeStatus = 'Left-the-Company'
;
DECLARE @EmployeeId bigint;
WHILE (1=1)
BEGIN
SET @EmployeeId = NULL;
SELECT TOP 1 @EmployeeId = v.EmployeeId
FROM @MyTableVarMo AS v;
IF (@EmployeeId IS NULL) BREAK;
DELETE TabProjectEmployee
WHERE EmployeeId = @EmployeeId;
DELETE @MyTableVarMo
WHERE EmployeeId = @EmployeeId;
END;
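The same staging pattern covers the UPDATE case: collect the keys (and any new values) in a memory-optimized table variable, then apply them one key at a time. A minimal hedged sketch follows, reusing the type created above; the column ProjectAssignmentStatus is hypothetical and used only for illustration.

-- Hedged sketch: UPDATE...JOIN work-around using the staging type above.
DECLARE @LeftIds dbo.type_TableVar_EmployeeId;

INSERT INTO @LeftIds (EmployeeId)
SELECT e.EmployeeId
FROM TabEmployee AS e
WHERE e.EmployeeStatus = 'Left-the-Company';

DECLARE @EmployeeId bigint;

WHILE (1=1)
BEGIN
    SET @EmployeeId = NULL;

    SELECT TOP 1 @EmployeeId = v.EmployeeId
    FROM @LeftIds AS v;

    IF (@EmployeeId IS NULL) BREAK;

    -- ProjectAssignmentStatus is a hypothetical column for this illustration.
    UPDATE TabProjectEmployee
    SET ProjectAssignmentStatus = 'Inactive'
    WHERE EmployeeId = @EmployeeId;

    DELETE @LeftIds
    WHERE EmployeeId = @EmployeeId;
END;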
E.4 Query plan limitations for native procs
Some types of query plans are not available for native procs. Many details are discussed in:
A Guide to Query Processing for Memory-Optimized Tables
No parallel processing in a native proc
Parallel processing cannot be a part of any query plan for a native proc. Native procs are always single-threaded.
Join types
Neither a hash join nor a merge join can be a part of any query plan for a native proc. Nested loop joins are used.
No hash aggregation
When the query plan for a native proc requires an aggregation phase, only stream aggregation is available. Hash
aggregation is not supported in a query plan for a native proc.
Hash aggregation is better when data from a large number of rows must be aggregated.
F. Application design: Transactions and retry logic
A transaction involving a memory-optimized table can become dependent on another transaction which involves
the same table. If the count of dependent transactions exceeds the allowed maximum, all the dependent
transactions fail.
In SQL Server 2016:
The allowed maximum is 8 dependent transactions. 8 is also the limit of transactions that any given transaction
can be dependent on.
The error number is 41839. (In SQL Server 2014 the error number is 41301.)
You can make your Transact-SQL scripts more robust against a possible transaction error by adding retry logic to
your scripts. Retry logic is more likely to help when UPDATE and DELETE calls are frequent, or if the memory-
optimized table is referenced by a foreign key in another table. For details, see:
Transactions with Memory-Optimized Tables
Transaction dependency limits with memory optimized tables – Error 41839
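A minimal retry-logic sketch in interpreted Transact-SQL follows; usp_DoWork is a hypothetical procedure name standing in for your own write against the memory-optimized table, and the retriable error numbers listed are the conflict, validation, and dependency errors discussed in the topics above.

-- Hedged sketch: retry wrapper for a write against a memory-optimized table.
-- dbo.usp_DoWork is a hypothetical placeholder for your own procedure.
DECLARE @retry int = 10;

WHILE (@retry > 0)
BEGIN
    BEGIN TRY
        EXEC dbo.usp_DoWork;
        SET @retry = 0;   -- success; leave the loop
    END TRY
    BEGIN CATCH
        SET @retry -= 1;

        -- 41302/41305/41325 are write-conflict and validation errors,
        -- 41301 and 41839 are the dependency errors mentioned above.
        IF @retry > 0 AND ERROR_NUMBER() IN (41301, 41302, 41305, 41325, 41839)
            WAITFOR DELAY '00:00:00.050';   -- brief back-off before the next attempt
        ELSE
            THROW;   -- not retriable, or retries exhausted
    END CATCH;
END;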
Related links
In-Memory OLTP (In-Memory Optimization)
Requirements for Using Memory-Optimized Tables
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database
For information about using In-Memory OLTP in Azure SQL Database, see Get started with In-Memory in SQL Database.
In addition to the Hardware and Software Requirements for Installing SQL Server 2016, the following are
requirements to use In-Memory OLTP:
SQL Server 2016 SP1 (or later), any edition. For SQL Server 2014 and SQL Server 2016 RTM (pre-SP1) you
need Enterprise, Developer, or Evaluation edition.
Note: In-Memory OLTP requires the 64-bit version of SQL Server.
SQL Server needs enough memory to hold the data in memory-optimized tables and indexes, as well as
additional memory to support the online workload. See Estimate Memory Requirements for Memory-Optimized Tables for more information.
When running SQL Server in a Virtual Machine (VM), ensure there is enough memory allocated to the VM
to support the memory needed for memory-optimized tables and indexes. Depending on the VM host
application, the configuration option to guarantee memory allocation for the VM could be called Memory
Reservation or, when using Dynamic Memory, Minimum RAM. Make sure these settings are sufficient for
the needs of the databases in SQL Server.
Free disk space that is two times the size of your durable memory-optimized tables.
A processor needs to support the instruction cmpxchg16b to use In-Memory OLTP. All modern 64-bit
processors support cmpxchg16b.
If you are using a Virtual Machine and SQL Server displays an error caused by an older processor, see if the
VM host application has a configuration option to allow cmpxchg16b. If not, you could use Hyper-V, which
supports cmpxchg16b without needing to modify a configuration option.
In-Memory OLTP is installed as part of Database Engine Services.
To install report generation (Determining if a Table or Stored Procedure Should Be Ported to In-Memory
OLTP) and SQL Server Management Studio (to manage In-Memory OLTP via SQL Server Management
Studio Object Explorer), Download SQL Server Management Studio (SSMS).
Important Notes on Using In-Memory OLTP
Starting with SQL Server 2016, there is no limit on the size of memory-optimized tables, other than available
memory. In SQL Server 2014, the total in-memory size of all durable tables in a database should not exceed
250 GB. For more information, see Estimate Memory Requirements for Memory-Optimized Tables.
Note: Starting with SQL Server 2016 SP1, Standard and Express editions support In-Memory OLTP, but they
impose quotas on the amount of memory you can use for memory-optimized tables in a given database.
In Standard edition this is 32GB per database; in Express edition this is 352MB per database.
If you create one or more databases with memory-optimized tables, you should enable Instant File
Initialization (grant the SQL Server service startup account the SE_MANAGE_VOLUME_NAME user right) for
the SQL Server instance. Without Instant File Initialization, memory-optimized storage files (data and delta
files) will be initialized upon creation, which can have a negative impact on the performance of your
workload. For more information about Instant File Initialization, see Database File Initialization. For
information on how to enable Instant File Initialization, see How and Why to Enable Instant File Initialization.
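To check whether Instant File Initialization is already in effect for the Database Engine service, a query such as the following can be used; the instant_file_initialization_enabled column is an assumption here, as it is only exposed in later servicing releases (around SQL Server 2016 SP1 and later).

-- Hedged sketch: check Instant File Initialization status for the engine service.
-- The instant_file_initialization_enabled column may not exist on older builds.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';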
See Also
In-Memory OLTP (In-Memory Optimization)
In-Memory OLTP Code Samples
This section contains code samples that demonstrate In-Memory OLTP:
Demonstration: Performance Improvement of In-Memory OLTP
Creating a Memory-Optimized Table and a Natively Compiled Stored Procedure
Application-Level Partitioning
See Also
In-Memory OLTP (In-Memory Optimization)
Demonstration: Performance Improvement of In-Memory OLTP
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database
The code sample in this topic demonstrates the fast performance of memory-optimized tables. The performance
improvement is evident when data in a memory-optimized table is accessed from traditional, interpreted Transact-SQL. This performance improvement is even greater when data in a memory-optimized table is accessed from a
natively compiled stored procedure (NCSProc).
To see a more comprehensive demonstration of the potential performance improvements of In-Memory OLTP see
In-Memory OLTP Performance Demo v1.0.
The code example in the present article is single-threaded, and it does not take advantage of the concurrency
benefits of In-Memory OLTP. A workload that uses concurrency will see a greater performance gain. The code
example shows only one aspect of performance improvement, namely data access efficiency for INSERT.
The performance improvement offered by memory-optimized tables is fully realized when data in a memory-optimized table is accessed from an NCSProc.
Code Example
The following subsections describe each step.
Step 1a: Prerequisite If Using SQL Server
The steps in this first subsection apply only if you are running SQL Server, and do not apply if you are
running Azure SQL Database. Do the following:
1. Use SQL Server Management Studio (SSMS.exe), or any similar tool, to connect to your SQL Server instance.
2. Manually create a directory named C:\data\. The sample Transact-SQL code expects the directory to already exist.
3. Run the following short T-SQL script to create the database and its memory-optimized filegroup.
go
CREATE DATABASE imoltp;
go
-- Transact-SQL
ALTER DATABASE imoltp ADD FILEGROUP [imoltp_mod]
CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE imoltp ADD FILE
(name = [imoltp_dir], filename= 'c:\data\imoltp_dir')
TO FILEGROUP imoltp_mod;
go
USE imoltp;
go
Step 1b: Prerequisite If Using Azure SQL Database
This subsection applies only if you are using Azure SQL Database. Do the following:
1. Decide which existing test database you will use for the code example.
2. If you decide to create a new test database, use the Azure portal to create a database named imoltp.
If you would like instructions for using the Azure portal for this, see Get Started with Azure SQL Database.
Step 2: Create Memory-Optimized Tables, and NCSProc
This step creates memory-optimized tables, and a natively compiled stored procedure (NCSProc). Do the
following:
1. Use SSMS.exe to connect to your new database.
2. Run the following T-SQL in your database.
go
DROP PROCEDURE IF EXISTS ncsp;
DROP TABLE IF EXISTS sql;
DROP TABLE IF EXISTS hash_i;
DROP TABLE IF EXISTS hash_c;
go
CREATE TABLE [dbo].[sql] (
c1 INT NOT NULL PRIMARY KEY,
c2 NCHAR(48) NOT NULL
);
go
CREATE TABLE [dbo].[hash_i] (
c1 INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
c2 NCHAR(48) NOT NULL
) WITH (MEMORY_OPTIMIZED=ON, DURABILITY = SCHEMA_AND_DATA);
go
CREATE TABLE [dbo].[hash_c] (
c1 INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
c2 NCHAR(48) NOT NULL
) WITH (MEMORY_OPTIMIZED=ON, DURABILITY = SCHEMA_AND_DATA);
go
CREATE PROCEDURE ncsp
@rowcount INT,
@c NCHAR(48)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
DECLARE @i INT = 1;
WHILE @i <= @rowcount
BEGIN;
INSERT INTO [dbo].[hash_c] VALUES (@i, @c);
SET @i += 1;
END;
END;
go
Step 3: Run the Code
Now you can execute the queries that will demonstrate the performance of memory-optimized tables. Do the
following:
1. Use SSMS.exe to run the following T-SQL in your database.
Ignore any speed or other performance data this first run generates. The first run ensures that several
one-time-only operations are performed, such as initial allocations of memory.
2. Again, use SSMS.exe to rerun the following T-SQL in your database.
go
SET STATISTICS TIME OFF;
SET NOCOUNT ON;
-- Inserts, one at a time.
DECLARE @starttime DATETIME2 = sysdatetime();
DECLARE @timems INT;
DECLARE @i INT = 1;
DECLARE @rowcount INT = 100000;
DECLARE @c NCHAR(48) = N'12345678901234567890123456789012345678';
-- Harddrive-based table and interpreted Transact-SQL.
BEGIN TRAN;
WHILE @i <= @rowcount
BEGIN;
INSERT INTO [dbo].[sql] VALUES (@i, @c);
SET @i += 1;
END;
COMMIT;
SET @timems = datediff(ms, @starttime, sysdatetime());
SELECT 'A: Disk-based table and interpreted Transact-SQL: '
+ cast(@timems AS VARCHAR(10)) + ' ms';
-- Interop Hash.
SET @i = 1;
SET @starttime = sysdatetime();
BEGIN TRAN;
WHILE @i <= @rowcount
BEGIN;
INSERT INTO [dbo].[hash_i] VALUES (@i, @c);
SET @i += 1;
END;
COMMIT;
SET @timems = datediff(ms, @starttime, sysdatetime());
SELECT 'B: memory-optimized table with hash index and interpreted Transact-SQL: '
+ cast(@timems as VARCHAR(10)) + ' ms';
-- Compiled Hash.
SET @starttime = sysdatetime();
EXECUTE ncsp @rowcount, @c;
SET @timems = datediff(ms, @starttime, sysdatetime());
SELECT 'C: memory-optimized table with hash index and native SP:'
+ cast(@timems as varchar(10)) + ' ms';
go
DELETE sql;
DELETE hash_i;
DELETE hash_c;
go
Next are the output time statistics generated by our second test run.
10453 ms , A: Disk-based table and interpreted Transact-SQL.
5626 ms , B: memory-optimized table with hash index and interpreted Transact-SQL.
3937 ms , C: memory-optimized table with hash index and native SP.
See Also
In-Memory OLTP (In-Memory Optimization)
Creating a Memory-Optimized Table and a Natively
Compiled Stored Procedure
This topic contains a sample that introduces you to the syntax for In-Memory OLTP.
To enable an application to use In-Memory OLTP, you need to complete the following tasks:
Create a memory-optimized data filegroup and add a container to the filegroup.
Create memory-optimized tables and indexes. For more information, see CREATE TABLE (Transact-SQL).
Load data into the memory-optimized table and update statistics after loading the data and before creating
the compiled stored procedures. For more information, see Statistics for Memory-Optimized Tables.
Create natively compiled stored procedures to access data in memory-optimized tables. For more
information, see CREATE PROCEDURE (Transact-SQL). You can also use traditional, interpreted Transact-SQL to access data in memory-optimized tables.
As needed, migrate data from existing tables to memory-optimized tables.
For information on how to use SQL Server Management Studio to create memory-optimized tables, see
SQL Server Management Studio Support for In-Memory OLTP.
The following code sample requires a directory called c:\Data.
CREATE DATABASE imoltp
GO
--------------------------------------- create database with a memory-optimized filegroup and a container.
ALTER DATABASE imoltp ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA
ALTER DATABASE imoltp ADD FILE (name='imoltp_mod1', filename='c:\data\imoltp_mod1') TO FILEGROUP imoltp_mod
ALTER DATABASE imoltp SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=ON
GO
USE imoltp
GO
-- create a durable (data will be persisted) memory-optimized table
-- two of the columns are indexed
CREATE TABLE dbo.ShoppingCart (
ShoppingCartId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED,
UserId INT NOT NULL INDEX ix_UserId NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
CreatedDate DATETIME2 NOT NULL,
TotalPrice MONEY
) WITH (MEMORY_OPTIMIZED=ON)
GO
-- create a non-durable table. Data will not be persisted; data will be lost if the server turns off unexpectedly
CREATE TABLE dbo.UserSession (
SessionId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=400000),
UserId int NOT NULL,
CreatedDate DATETIME2 NOT NULL,
ShoppingCartId INT,
INDEX ix_UserId NONCLUSTERED HASH (UserId) WITH (BUCKET_COUNT=400000)
)
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY)
GO
-- insert data into the tables
INSERT dbo.UserSession VALUES (342, SYSDATETIME(), 4)
INSERT dbo.UserSession VALUES (65, SYSDATETIME(), NULL)
INSERT dbo.UserSession VALUES (8798, SYSDATETIME(), 1)
INSERT dbo.UserSession VALUES (80, SYSDATETIME(), NULL)
INSERT dbo.UserSession VALUES (4321, SYSDATETIME(), NULL)
INSERT dbo.UserSession VALUES (8578, SYSDATETIME(), NULL)
INSERT dbo.ShoppingCart VALUES (8798, SYSDATETIME(), NULL)
INSERT dbo.ShoppingCart VALUES (23, SYSDATETIME(), 45.4)
INSERT dbo.ShoppingCart VALUES (80, SYSDATETIME(), NULL)
INSERT dbo.ShoppingCart VALUES (342, SYSDATETIME(), 65.4)
GO
-- verify table contents
SELECT * FROM dbo.UserSession
SELECT * FROM dbo.ShoppingCart
GO
-- update statistics on memory-optimized tables
UPDATE STATISTICS dbo.UserSession WITH FULLSCAN, NORECOMPUTE
UPDATE STATISTICS dbo.ShoppingCart WITH FULLSCAN, NORECOMPUTE
GO
-- in an explicit transaction, assign a cart to a session and update the total price.
-- SELECT/UPDATE/DELETE statements in explicit transactions
BEGIN TRAN
UPDATE dbo.UserSession SET ShoppingCartId=3 WHERE SessionId=4
UPDATE dbo.ShoppingCart SET TotalPrice=65.84 WHERE ShoppingCartId=3
COMMIT
GO
-- verify table contents
SELECT *
FROM dbo.UserSession u JOIN dbo.ShoppingCart s on u.ShoppingCartId=s.ShoppingCartId
WHERE u.SessionId=4
GO
-- natively compiled stored procedure for assigning a shopping cart to a session
CREATE PROCEDURE dbo.usp_AssignCart @SessionId int
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
DECLARE @UserId INT,
@ShoppingCartId INT
SELECT @UserId=UserId, @ShoppingCartId=ShoppingCartId
FROM dbo.UserSession WHERE SessionId=@SessionId
IF @UserId IS NULL
THROW 51000, N'The session or shopping cart does not exist.', 1
UPDATE dbo.UserSession SET ShoppingCartId=@ShoppingCartId WHERE SessionId=@SessionId
END
GO
EXEC usp_AssignCart 1
GO
-- natively compiled stored procedure for inserting a large number of rows
-- this demonstrates the performance of native procs
CREATE PROCEDURE dbo.usp_InsertSampleCarts @InsertCount int
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
DECLARE @i int = 0
WHILE @i < @InsertCount
BEGIN
INSERT INTO dbo.ShoppingCart VALUES (1, SYSDATETIME() , NULL)
SET @i += 1
END
END
GO
-- insert 1,000,000 rows
EXEC usp_InsertSampleCarts 1000000
GO
---- verify the rows have been inserted
SELECT COUNT(*) FROM dbo.ShoppingCart
GO
-- sample memory-optimized tables for sales orders and sales order details
CREATE TABLE dbo.SalesOrders
(
so_id INT NOT NULL PRIMARY KEY NONCLUSTERED,
cust_id INT NOT NULL,
so_date DATE NOT NULL INDEX ix_date NONCLUSTERED,
so_total MONEY NOT NULL,
INDEX ix_date_total NONCLUSTERED (so_date DESC, so_total DESC)
) WITH (MEMORY_OPTIMIZED=ON)
GO
CREATE TABLE dbo.SalesOrderDetails
(
so_id INT NOT NULL,
lineitem_id INT NOT NULL,
product_id INT NOT NULL,
unitprice MONEY NOT NULL,
CONSTRAINT PK_SOD PRIMARY KEY NONCLUSTERED (so_id,lineitem_id)
) WITH (MEMORY_OPTIMIZED=ON)
GO
-- memory-optimized table type for collecting sales order details
CREATE TYPE dbo.SalesOrderDetailsType AS TABLE
(
so_id INT NOT NULL,
lineitem_id INT NOT NULL,
product_id INT NOT NULL,
unitprice MONEY NOT NULL,
PRIMARY KEY NONCLUSTERED (so_id,lineitem_id)
) WITH (MEMORY_OPTIMIZED=ON)
GO
-- stored procedure that inserts a sales order, along with its details
CREATE PROCEDURE dbo.InsertSalesOrder @so_id INT, @cust_id INT, @items dbo.SalesOrderDetailsType READONLY
WITH NATIVE_COMPILATION, SCHEMABINDING
AS BEGIN ATOMIC WITH
(
TRANSACTION ISOLATION LEVEL = SNAPSHOT,
LANGUAGE = N'us_english'
)
DECLARE @total MONEY
SELECT @total = SUM(unitprice) FROM @items
INSERT dbo.SalesOrders VALUES (@so_id, @cust_id, getdate(), @total)
INSERT dbo.SalesOrderDetails SELECT so_id, lineitem_id, product_id, unitprice FROM @items
END
GO
-- insert a sample sales order
DECLARE @so_id INT = 18,
@cust_id INT = 8,
@items dbo.SalesOrderDetailsType
INSERT @items VALUES
(@so_id, 1, 4, 43),
(@so_id, 2, 3, 3),
(@so_id, 3, 8, 453),
(@so_id, 4, 5, 76),
(@so_id, 5, 4, 43)
EXEC dbo.InsertSalesOrder @so_id, @cust_id, @items
GO
-- verify the content of the tables
SELECT
so.so_id,
so.so_date,
sod.lineitem_id,
sod.product_id,
sod.unitprice
FROM dbo.SalesOrders so JOIN dbo.SalesOrderDetails sod on so.so_id=sod.so_id
ORDER BY so.so_id, sod.lineitem_id
See Also
In-Memory OLTP Code Samples
Application-Level Partitioning
This application processes orders. There is a lot of processing on recent orders. There is not a lot of processing on
older orders. Recent orders are in a memory-optimized table. Older orders are in a disk-based table. All orders
after the hotDate are in the memory-optimized table. All orders before the hotDate are in the disk-based table.
Assume an extreme OLTP workload with a lot of concurrent transactions. This business rule (recent orders in a
memory-optimized table) must be enforced even if several concurrent transactions are attempting to change the
hotDate.
This sample does not use a partitioned table for the disk-based table but does track an explicit split point between
the two tables, using a third table. The split point can be used to ensure that newly inserted data is always inserted
into the appropriate table based on the date. It could also be used to determine where to look for data. Late
arriving data still goes into the appropriate table.
For a related sample, see Application Pattern for Partitioning Memory-Optimized Tables.
Code Listing
USE MASTER
GO
IF NOT EXISTS(SELECT name FROM sys.databases WHERE name = 'hkTest')
CREATE DATABASE hkTest
-- enable for In-Memory OLTP - change file path as needed
ALTER DATABASE hkTest ADD FILEGROUP hkTest_mod CONTAINS MEMORY_OPTIMIZED_DATA
ALTER DATABASE hkTest ADD FILE( NAME = 'hkTest_mod' , FILENAME = 'c:\data\hkTest_mod') TO FILEGROUP
hkTest_mod;
GO
use hkTest
go
-- create memory-optimized table
if OBJECT_ID(N'hot',N'U') IS NOT NULL
drop table [hot]
create table hot
(id int not null primary key nonclustered,
orderDate datetime not null,
custName nvarchar(10) not null
) with (memory_optimized=on)
go
-- create disk-based table for older order data
if OBJECT_ID(N'cold',N'U') IS NOT NULL
drop table [cold]
create table cold (
id int not null primary key,
orderDate datetime not null,
custName nvarchar(10) not null
)
go
-- the hotDate is maintained in this memory-optimized table. The current hotDate is always the single date in
-- this table
if OBJECT_ID(N'hotDataSplit') IS NOT NULL
drop table [hotDataSplit]
create table hotDataSplit (
hotDate datetime not null primary key nonclustered hash with (bucket_count = 1)
) with (memory_optimized=on)
go
-- Stored Procedures
-- set the hotDate
-- snapshot: if any other transaction tries to update the hotDate, it will fail immediately due to a
-- write/write conflict
if OBJECT_ID(N'usp_hkSetHotDate') IS NOT NULL
drop procedure usp_hkSetHotDate
go
create procedure usp_hkSetHotDate @newDate datetime
with native_compilation, schemabinding, execute as owner
as begin atomic with
(
transaction isolation level = snapshot,
language = N'english'
)
delete from dbo.hotDataSplit
insert dbo.hotDataSplit values (@newDate)
end
go
-- extract data up to a certain date [presumably the new hotDate]
-- must be serializable, because you don't want to delete rows that are not returned
if OBJECT_ID(N'usp_hkExtractHotData') IS NOT NULL
drop procedure usp_hkExtractHotData
go
create procedure usp_hkExtractHotData @hotDate datetime
with native_compilation, schemabinding, execute as owner
as begin atomic with
(
transaction isolation level = serializable,
language = N'english'
)
select id, orderDate, custName from dbo.hot where orderDate < @hotDate
delete from dbo.hot where orderDate < @hotDate
end
go
-- insert order
-- inserts an order either in recent or older table, depending on the current hotDate
-- it is important that the SP for retrieving the hotDate is repeatableread, in order to ensure that
-- the hotDate is not changed before the decision is made where to insert the order
-- note that insert operations [in both disk-based and memory-optimized tables] are always fully isolated,
-- so the transaction isolation level has no impact on the insert operations; this whole transaction is
-- effectively repeatableread
if OBJECT_ID(N'usp_InsertOrder') IS NOT NULL
drop procedure usp_InsertOrder
go
create procedure usp_InsertOrder(@id int, @orderDate date, @custName nvarchar(10))
as begin
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
begin tran
-- get hot date under repeatableread isolation; this is to guarantee it does not change before the
-- insert is executed
declare @hotDate datetime
set @hotDate = (select hotDate from hotDataSplit with (repeatableread))
if (@orderDate >= @hotDate) begin
insert into hot values (@id, @orderDate, @custName)
end
else begin
insert into cold values (@id, @orderDate, @custName)
end
commit tran
end
go
-- change hot date
-- changes the hotDate and moves the rows between the recent and older order tables as appropriate
-- the hotDate is updated in this transaction; this means that if the hotDate is changed by another
transaction
-- the update will fail due to a write/write conflict and the transaction is rolled back
-- therefore, the initial (snapshot) access of the hotDate is effectively repeatable read
if OBJECT_ID(N'usp_ChangeHotDate') IS NOT NULL
drop procedure usp_ChangeHotDate
go
create procedure usp_ChangeHotDate(@newHotDate datetime)
as
begin
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
begin tran
declare @oldHotDate datetime
set @oldHotDate = (select hotDate from hotDataSplit with (snapshot))
-- get hot date under repeatableread isolation; this is to guarantee it does not change before the
-- insert is executed
if (@oldHotDate < @newHotDate) begin
insert into cold exec usp_hkExtractHotData @newHotDate
end
else begin
insert into hot select * from cold with (serializable) where orderDate >= @newHotDate
delete from cold with (serializable) where orderDate >= @newHotDate
end
exec usp_hkSetHotDate @newHotDate
commit tran
end
go
-- Deploy and populate tables
-- cleanup
delete from cold
go
-- init hotDataSplit
exec usp_hkSetHotDate '2012-1-1'
go
-- verify hotDate
select * from hotDataSplit
go
EXEC usp_InsertOrder 1, '2011-11-14', 'cust1'
EXEC usp_InsertOrder 2, '2012-3-4', 'cust1'
EXEC usp_InsertOrder 3, '2011-1-23', 'cust1'
EXEC usp_InsertOrder 4, '2011-8-6', 'cust1'
EXEC usp_InsertOrder 5, '2010-11-1', 'cust1'
EXEC usp_InsertOrder 6, '2012-1-9', 'cust1'
EXEC usp_InsertOrder 7, '2012-2-14', 'cust1'
EXEC usp_InsertOrder 8, '2010-1-17', 'cust1'
EXEC usp_InsertOrder 9, '2012-3-8', 'cust1'
EXEC usp_InsertOrder 10, '2011-9-24', 'cust1'
go
-- Demo Portion
-- verify contents of the tables
-- hotDate is 2012-1-1
-- all orders from 2012 are in the recent table
-- all orders before 2012 are in the older order table
-- query hot data
select * from hot order by orderDate desc
-- query cold data
select * from cold order by orderDate desc
-- move hot date to Mar 2012
EXEC usp_ChangeHotDate '2012-03-01'
-- Verify that all orders before Mar 2012 were moved to older order table
-- query hot data
select * from hot order by orderDate desc
-- query old data
select * from cold order by orderDate desc
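As noted earlier, the split point can also tell a query where to look for an order. The following is a minimal hedged sketch, not part of the original listing; the variable names are illustration only.

-- Hedged sketch: use the hotDate to decide which table to read for a given order date.
declare @lookupId int = 6, @lookupDate datetime = '2012-1-9';
declare @currentHotDate datetime = (select hotDate from hotDataSplit);

if (@lookupDate >= @currentHotDate)
    select id, orderDate, custName from hot where id = @lookupId;
else
    select id, orderDate, custName from cold where id = @lookupId;
go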
See Also
In-Memory OLTP Code Samples
Sample Database for In-Memory OLTP
Overview
This sample showcases the In-Memory OLTP feature. It shows memory-optimized tables and natively-compiled
stored procedures, and can be used to demonstrate performance benefits of In-Memory OLTP.
NOTE
To view this topic for SQL Server 2014, see Extensions to AdventureWorks to Demonstrate In-Memory OLTP.
The sample migrates 5 tables in the AdventureWorks database to memory-optimized, and it includes a demo
workload for sales order processing. You can use this demo workload to see the performance benefit of using In-Memory OLTP on your server.
In the description of the sample we discuss the tradeoffs that were made in migrating the tables to In-Memory
OLTP to account for the features that are not (yet) supported for memory-optimized tables.
The documentation of this sample is structured as follows:
Prerequisites for installing the sample and running the demo workload
Instructions for Installing the In-Memory OLTP sample based on AdventureWorks
Description of the sample tables and procedures – this includes descriptions of the tables and procedures
added to AdventureWorks by the In-Memory OLTP sample, as well as considerations for migrating some of
the original AdventureWorks tables to memory-optimized
Instructions for performing Performance Measurements using the Demo Workload – this includes
instructions for installing and running ostress, a tool used for driving the workload, as well as running the
demo workload itself
Memory and Disk Space Utilization in the Sample
Prerequisites
SQL Server 2016
For performance testing, a server with specifications similar to your production environment. For this
particular sample you should have at least 16GB of memory available to SQL Server. For general guidelines
on hardware for In-Memory OLTP, see the following blog post:
http://blogs.technet.com/b/dataplatforminsider/archive/2013/08/01/hardware-considerations-for-in-memory-oltp-in-sql-server-2014.aspx
Installing the In-Memory OLTP sample based on AdventureWorks
Follow these steps to install the sample:
1. Download AdventureWorks2016CTP3.bak and SQLServer2016CTP3Samples.zip from:
https://www.microsoft.com/download/details.aspx?id=49502 to a local folder, for example 'c:\temp'.
2. Restore the database backup using Transact-SQL or SQL Server Management Studio:
a. Identify the target folder and filename for the data file, for example
'h:\DATA\AdventureWorks2016CTP3_Data.mdf'
b. Identify the target folder and filename for the log file, for example
'i:\DATA\AdventureWorks2016CTP3_log.ldf'
Note: The log file should be placed on a different drive than the data file, ideally a low-latency drive such
as an SSD or PCIe storage, for maximum performance.
Example T-SQL script:
RESTORE DATABASE [AdventureWorks2016CTP3]
FROM DISK = N'C:\temp\AdventureWorks2016CTP3.bak'
WITH FILE = 1,
MOVE N'AdventureWorks2016_Data' TO N'h:\DATA\AdventureWorks2016CTP3_Data.mdf',
MOVE N'AdventureWorks2016_Log' TO N'i:\DATA\AdventureWorks2016CTP3_log.ldf',
MOVE N'AdventureWorks2016CTP3_mod' TO N'h:\data\AdventureWorks2016CTP3_mod'
GO
3. To view the sample scripts and workload, unpack the file SQLServer2016CTP3Samples.zip to a local folder.
Consult the file In-Memory OLTP\readme.txt for instructions on running the workload.
Description of the sample tables and procedures
The sample creates new tables for products and sales orders, based on existing tables in AdventureWorks. The
schema of the new tables is similar to the existing tables, with a few differences, as explained below.
The new memory-optimized tables carry the suffix ‘_inmem’. The sample also includes corresponding tables
carrying the suffix ‘_ondisk’ – these tables can be used to make a one-to-one comparison between the
performance of memory-optimized tables and disk-based tables on your system.
Note that the memory-optimized tables used in the workload for performance comparison are fully durable and
fully logged. They do not sacrifice durability or reliability to attain the performance gain.
The target workload for this sample is sales order processing, where we also consider information about products
and discounts. To this end, the sample uses the tables SalesOrderHeader, SalesOrderDetail, Product, SpecialOffer,
and SpecialOfferProduct.
Two new stored procedures, Sales.usp_InsertSalesOrder_inmem and Sales.usp_UpdateSalesOrderShipInfo_inmem,
are used to insert sales orders and to update the shipping information of a given sales order.
The new schema 'Demo' contains helper tables and stored procedures to execute a demo workload.
Concretely, the In-Memory OLTP sample adds the following objects to AdventureWorks:
Tables added by the sample
The New Tables
Sales.SalesOrderHeader_inmem
Header information about sales orders. Each sales order has one row in this table.
Sales.SalesOrderDetail_inmem
Details of sales orders. Each line item of a sales order has one row in this table.
Sales.SpecialOffer_inmem
Information about special offers, including the discount percentage associated with each special offer.
Sales.SpecialOfferProduct_inmem
Reference table between special offers and products. Each special offer can feature zero or more products,
and each product can be featured in zero or more special offers.
Production.Product_inmem
Information about products, including their list price.
Demo.DemoSalesOrderDetailSeed
Used in the demo workload to construct sample sales orders.
Disk-based variations of the tables:
Sales.SalesOrderHeader_ondisk
Sales.SalesOrderDetail_ondisk
Sales.SpecialOffer_ondisk
Sales.SpecialOfferProduct_ondisk
Production.Product_ondisk
Differences between the original disk-based tables and the new memory-optimized tables
For the most part, the new tables introduced by this sample use the same columns and the same data types as the
original tables. However, there are a few differences. We list the differences below, along with a rationale for the
changes.
Sales.SalesOrderHeader_inmem
Default constraints are supported for memory-optimized tables, and most default constraints were migrated
as is. However, the original table Sales.SalesOrderHeader contains two default constraints that retrieve the
current date, for the columns OrderDate and ModifiedDate. In a high throughput order processing workload
with a lot of concurrency, any global resource can become a point of contention. System time is such a
global resource, and we have observed that it can become a bottleneck when running an In-Memory OLTP
workload that inserts sales orders, in particular if the system time needs to be retrieved for multiple
columns in the sales order header, as well as the sales order details. The problem is addressed in this
sample by retrieving the system time only once for each sales order that is inserted, and using that value for
the datetime columns in SalesOrderHeader_inmem and SalesOrderDetail_inmem, in the stored procedure
Sales.usp_InsertSalesOrder_inmem.
Alias UDTs - The original table uses two alias user-defined data types (UDTs) dbo.OrderNumber and
dbo.AccountNumber, for the columns PurchaseOrderNumber and AccountNumber, respectively. SQL
Server 2016 does not support alias UDT for memory-optimized tables, thus the new tables use system data
types nvarchar(25) and nvarchar(15), respectively.
Nullable columns in index keys - In the original table, the column SalesPersonID is nullable, while in the new
tables the column is not nullable and has a default constraint with value (-1). This is because indexes on
memory-optimized tables cannot have nullable columns in the index key; -1 is a surrogate for NULL in this
case.
Computed columns - The computed columns SalesOrderNumber and TotalDue are omitted, as SQL Server
2016 does not support computed columns in memory-optimized tables. The new view
Sales.vSalesOrderHeader_extended_inmem reflects the columns SalesOrderNumber and TotalDue.
Therefore, you can use this view if these columns are needed.
Applies to: SQL Server 2017 CTP 1.1.
Beginning with SQL Server 2017 CTP 1.1, computed columns are supported in memory-optimized tables
and indexes.
Foreign key constraints are supported for memory-optimized tables in SQL Server 2016, but only if the
referenced tables are also memory-optimized. Foreign keys that reference tables that are also migrated to
memory-optimized are kept in the migrated tables, while other foreign keys are omitted. In addition,
SalesOrderHeader_inmem is a hot table in the example workload, and foreign key constraints require
additional processing for all DML operations, as they require lookups in all the other tables referenced in
these constraints. Therefore, the assumption is that the app ensures referential integrity for the
Sales.SalesOrderHeader_inmem table, and referential integrity is not validated when rows are inserted.
Rowguid - The rowguid column is omitted. While uniqueidentifier is supported for memory-optimized tables,
the option ROWGUIDCOL is not supported in SQL Server 2016. Columns of this kind are typically used for
either merge replication or tables that have filestream columns. This sample includes neither.
Sales.SalesOrderDetail
Default constraints – similar to SalesOrderHeader, the default constraint retrieving the system date/time is
not migrated; instead, the stored procedure inserting sales orders takes care of inserting the current system
date/time on first insert.
Computed columns – the computed column LineTotal was not migrated as computed columns are not
supported with memory-optimized tables in SQL Server 2016. To access this column use the view
Sales.vSalesOrderDetail_extended_inmem.
Rowguid - The rowguid column is omitted. For details see the description for the table SalesOrderHeader.
Production.Product
Alias UDTs – the original table uses the user-defined data type dbo.Flag, which is equivalent to the system
data type bit. The migrated table uses the bit data type instead.
Rowguid - The rowguid column is omitted. For details see the description for the table SalesOrderHeader.
Sales.SpecialOffer
Rowguid - The rowguid column is omitted. For details see the description for the table SalesOrderHeader.
Sales.SpecialOfferProduct
Rowguid - The rowguid column is omitted. For details see the description for the table SalesOrderHeader.
Considerations for indexes on memory-optimized tables
The baseline index for memory-optimized tables is the NONCLUSTERED index, which supports point lookups
(index seek on equality predicates), range scans (index seek on inequality predicates), full index scans, and ordered
scans. In addition, NONCLUSTERED indexes support searching on leading columns of the index key. In fact
memory-optimized NONCLUSTERED indexes support all the operations supported by disk-based NONCLUSTERED
indexes, with the only exception being backward scans. Therefore, using NONCLUSTERED indexes is a safe choice
for your indexes.
HASH indexes can be used to further optimize the workload. They are particularly optimized for point lookups
and row inserts. However, one must consider that they do not support range scans, ordered scans, or search on
leading index key columns. Therefore, care needs to be taken when using these indexes. In addition, it is necessary
to specify the bucket_count at create time. It should usually be set at between one and two times the number of
index key values, but overestimating is usually not a problem.
See Books Online for more details about index guidelines and guidelines for choosing the right bucket_count.
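One way to sanity-check a chosen bucket_count after data has been loaded is to look at the hash index statistics DMV. The following is a hedged sketch added for illustration; interpret long average chains as a sign the bucket_count is too low, and a very large share of empty buckets as a sign it is oversized (which mainly costs memory).

-- Hedged sketch: review hash index bucket usage for memory-optimized tables.
SELECT object_name(hs.object_id) AS [table],
       i.name                    AS [index],
       hs.total_bucket_count,
       hs.empty_bucket_count,
       hs.avg_chain_length,
       hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
    ON i.object_id = hs.object_id AND i.index_id = hs.index_id;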
The indexes on the migrated tables have been tuned for the demo sales order processing workload. The workload
relies on inserts and point lookups in the tables Sales.SalesOrderHeader_inmem and
Sales.SalesOrderDetail_inmem, and it also relies on point lookups on the primary key columns in the tables
Production.Product_inmem and Sales.SpecialOffer_inmem.
Sales.SalesOrderHeader_inmem has three indexes, which are all HASH indexes for performance reasons, and
because no ordered or range scans are needed for the workload.
HASH index on (SalesOrderID): bucket_count is sized at 10 million (rounded up to 16 million), because the
expected number of sales orders is 10 million
HASH index on (SalesPersonID): bucket_count is 1 million. The data set provided does not have a lot of sales
persons, but this allows for future growth, plus you don’t pay a performance penalty for point lookups if the
bucket_count is oversized.
HASH index on (CustomerID): bucket_count is 1 million. The data set provided does not have a lot of
customers, but this allows for future growth.
Sales.SalesOrderDetail_inmem has three indexes, which are all HASH indexes for performance reasons, and
because no ordered or range scans are needed for the workload.
HASH index on (SalesOrderID, SalesOrderDetailID): this is the primary key index, and even though lookups
on (SalesOrderID, SalesOrderDetailID) will be infrequent, using a hash index for the key speeds up row
inserts. The bucket_count is sized at 50 million (rounded up to 67 million): the expected number of sales
orders is 10 million, and this is sized to have an average of 5 items per order
HASH index on (SalesOrderID): lookups by sales order are frequent: you will want to find all the line items
corresponding to a single order. bucket_count is sized at 10 million (rounded up to 16 million), because the
expected number of sales orders is 10 million
HASH index on (ProductID): bucket_count is 1 million. The data set provided does not have a lot of products,
but this allows for future growth.
Production.Product_inmem has three indexes
HASH index on (ProductID): lookups on ProductID are in the critical path for the demo workload, therefore
this is a hash index
NONCLUSTERED index on (Name): this will allow ordered scans of product names
NONCLUSTERED index on (ProductNumber): this will allow ordered scans of product numbers
Sales.SpecialOffer_inmem has one HASH index on (SpecialOfferID): point lookups of special offers are in the
critical part of the demo workload. The bucket_count is sized at 1 million to allow for future growth.
Sales.SpecialOfferProduct_inmem is not referenced in the demo workload, and thus there is no apparent
need to use hash indexes on this table to optimize the workload – the indexes on (SpecialOfferID, ProductID)
and (ProductID) are NONCLUSTERED.
Notice that in the above some of the bucket_counts are over-sized, but not the bucket_counts for the
indexes on SalesOrderHeader_inmem and SalesOrderDetail_inmem: they are sized for just 10 million sales
orders. This was done to allow installing the sample on systems with low memory availability, although in
those cases the demo workload will fail with out-of-memory. If you do want to scale well beyond 10 million
sales orders, feel free to increase the bucket counts accordingly.
Considerations for memory utilization
Memory utilization in the sample database, both before and after running the demo workload, is discussed in the
Section Memory utilization for the memory-optimized tables.
Stored Procedures added by the sample
The two key stored procedures for inserting sales order and updating shipping details are as follows:
Sales.usp_InsertSalesOrder_inmem
Inserts a new sales order in the database and outputs the SalesOrderID for that sales order. As input
parameters it takes details for the sales order header, as well as the line items in the order.
Output parameter:
@SalesOrderID int – the SalesOrderID for the sales order that was just inserted
Input parameters (required):
@DueDate datetime2
@CustomerID int
@BillToAddressID [int]
@ShipToAddressID [int]
@ShipMethodID [int]
@SalesOrderDetails Sales.SalesOrderDetailType_inmem – TVP that contains the line items of
the order
Input parameters (optional):
@Status [tinyint]
@OnlineOrderFlag [bit]
@PurchaseOrderNumber [nvarchar](25)
@AccountNumber [nvarchar](15)
@SalesPersonID [int]
@TerritoryID [int]
@CreditCardID [int]
@CreditCardApprovalCode [varchar](15)
@CurrencyRateID [int]
@Comment nvarchar(128)
Sales.usp_UpdateSalesOrderShipInfo_inmem
Update the shipping information for a given sales order. This will also update the shipping
information for all line items of the sales order.
This is a wrapper procedure for the natively compiled stored procedures
Sales.usp_UpdateSalesOrderShipInfo_native with retry logic to deal with (unexpected) potential
conflicts with concurrent transactions updating the same order. For more information about retry
logic see the Books Online topic here.
Sales.usp_UpdateSalesOrderShipInfo_native
This is the natively compiled stored procedure that actually processes the update to the shipping
information. It is meant to be called from the wrapper stored procedure
Sales.usp_UpdateSalesOrderShipInfo_inmem. If the client can deal with failures and implements retry
logic, you can call this procedure directly, rather than using the wrapper stored procedure.
The following stored procedure is used for the demo workload.
Demo.usp_DemoReset
Resets the demo by emptying and reseeding the SalesOrderHeader and SalesOrderDetail tables.
The following stored procedures are used for inserting in and deleting from memory-optimized tables while
guaranteeing domain and referential integrity.
Production.usp_InsertProduct_inmem
Production.usp_DeleteProduct_inmem
Sales.usp_InsertSpecialOffer_inmem
Sales.usp_DeleteSpecialOffer_inmem
Sales.usp_InsertSpecialOfferProduct_inmem
Finally the following stored procedure is used to verify domain and referential integrity.
1. dbo.usp_ValidateIntegrity
Optional parameter: @object_id – ID of the object to validate integrity for
This procedure relies on the tables dbo.DomainIntegrity, dbo.ReferentialIntegrity, and
dbo.UniqueIntegrity for the integrity rules that need to be verified – the sample populates these
tables based on the check, foreign key, and unique constraints that exist for the original tables in the
AdventureWorks database.
It relies on the helper procedures dbo.usp_GenerateCKCheck, dbo.usp_GenerateFKCheck, and
dbo.GenerateUQCheck to generate the T-SQL needed for performing the integrity checks.
Performance Measurements using the Demo Workload
Ostress is a command-line tool that was developed by the Microsoft CSS SQL Server support team. This tool can
be used to execute queries or run stored procedures in parallel. You can configure the number of threads to run a
given T-SQL statement in parallel, and you can specify how many times the statement should be executed on this
thread; ostress will spin up the threads and execute the statement on all threads in parallel. After execution finishes
for all threads, ostress will report the time taken for all threads to finish execution.
Installing ostress
Ostress is installed as part of the RML Utilities; there is no standalone installation for ostress.
Installation steps:
1. Download and run the x64 installation package for the RML utilities from the following page:
http://blogs.msdn.com/b/psssql/archive/2013/10/29/cumulative-update-2-to-the-rml-utilities-for-microsoft-sql-server-released.aspx
2. If there is a dialog box saying certain files are in use, click ‘Continue’
Running ostress
Ostress is run from the command-line prompt. It is most convenient to run the tool from the "RML Cmd Prompt",
which is installed as part of the RML Utilities.
To open the RML Cmd Prompt follow these instructions:
In Windows Server 2012 [R2] and in Windows 8 and 8.1, open the start menu by clicking the Windows key, and
type ‘rml’. Click on “RML Cmd Prompt”, which will be in the list of search results.
Ensure that the command prompt is located in the RML Utilities installation folder.
The command-line options for ostress can be seen when simply running ostress.exe without any command-line
options. The main options to consider for running ostress with this sample are:
-S name of Microsoft SQL Server instance to connect to
-E use Windows authentication to connect (default); if you use SQL Server authentication, use the options –
U and –P to specify the username and password, respectively
-d name of the database, for this example AdventureWorks2016CTP3
-Q the T-SQL statement to be executed
-n number of connections processing each input file/query
-r the number of iterations for each connection to execute each input file/query
Demo Workload
The main stored procedure used in the demo workload is Sales.usp_InsertSalesOrder_inmem/ondisk. The script
below constructs a table-valued parameter (TVP) with sample data, and calls the procedure to insert a sales
order with 5 line items.
The ostress tool is used to execute the stored procedure calls in parallel, to simulate clients inserting sales orders
concurrently.
Reset the demo after each stress run by executing Demo.usp_DemoReset. This procedure deletes the rows in the
memory-optimized tables, truncates the disk-based tables, and executes a database checkpoint.
The following script is executed concurrently to simulate a sales order processing workload:
DECLARE
@i int = 0,
@od Sales.SalesOrderDetailType_inmem,
@SalesOrderID int,
@DueDate datetime2 = sysdatetime(),
@CustomerID int = rand() * 8000,
@BillToAddressID int = rand() * 10000,
@ShipToAddressID int = rand() * 10000,
@ShipMethodID int = (rand() * 5) + 1;
INSERT INTO @od
SELECT OrderQty, ProductID, SpecialOfferID
FROM Demo.DemoSalesOrderDetailSeed
WHERE OrderID= cast((rand()*106) + 1 as int);
WHILE (@i < 20)
BEGIN;
EXEC Sales.usp_InsertSalesOrder_inmem @SalesOrderID OUTPUT, @DueDate, @CustomerID, @BillToAddressID,
@ShipToAddressID, @ShipMethodID, @od;
SET @i += 1
END
With this script, each sample order that is constructed is inserted 20 times, through 20 stored procedure calls executed
in a WHILE loop. The loop is used to account for the fact that the database is used to construct the sample order. In
typical production environments, the mid-tier application will construct the sales order to be inserted.
The above script inserts sales orders into memory-optimized tables. The script to insert sales orders into disk-based tables is derived by replacing the two occurrences of ‘_inmem’ with ‘_ondisk’.
We will use the ostress tool to execute the scripts using several concurrent connections. We will use the parameter
‘-n’ to control the number of connections, and the parameter ‘-r’ to control how many times the script is executed
on each connection.
Running the Workload
To test at scale we insert 10 million sales orders, using 100 connections. This test performs reasonably on a
modest server (e.g., 8 physical, 16 logical cores), and basic SSD storage for the log. If the test does not perform well
on your hardware, take a look at the section Troubleshooting slow-running tests. If you want to reduce the level of
stress for this test, lower the number of connections by changing the parameter ‘-n’. For example to lower the
connection count to 40, change the parameter ‘-n100’ to ‘-n40’.
As a performance measure for the workload we use the elapsed time as reported by ostress.exe after running the
workload.
The below instructions and measurements use a workload that inserts 10 million sales orders. For instructions to
run a scaled-down workload inserting 1 million sales orders, see the instructions in 'In-Memory OLTP\readme.txt'
that is part of the SQLServer2016CTP3Samples.zip archive.
Memory-optimized tables
We will start by running the workload on memory-optimized tables. The following command opens 100 threads,
each running for 5,000 iterations. Each iteration inserts 20 sales orders in separate transactions. There are 20
inserts per iteration to compensate for the fact that the database is used to generate the data to be inserted. This
yields a total of 20 * 5,000 * 100 = 10,000,000 sales order inserts.
Open the RML Cmd Prompt, and execute the following command:
Click the Copy button to copy the command, and paste it into the RML Utilities command prompt.
ostress.exe –n100 –r5000 -S. -E -dAdventureWorks2016CTP3 -q -Q"DECLARE @i int = 0, @od
Sales.SalesOrderDetailType_inmem, @SalesOrderID int, @DueDate datetime2 = sysdatetime(), @CustomerID int =
rand() * 8000, @BillToAddressID int = rand() * 10000, @ShipToAddressID int = rand() * 10000, @ShipMethodID int
= (rand() * 5) + 1; INSERT INTO @od SELECT OrderQty, ProductID, SpecialOfferID FROM
Demo.DemoSalesOrderDetailSeed WHERE OrderID= cast((rand()*106) + 1 as int); while (@i < 20) begin; EXEC
Sales.usp_InsertSalesOrder_inmem @SalesOrderID OUTPUT, @DueDate, @CustomerID, @BillToAddressID,
@ShipToAddressID, @ShipMethodID, @od; set @i += 1 end"
On one test server with a total number of 8 physical (16 logical) cores, this took 2 minutes and 5 seconds. On a
second test server with 24 physical (48 logical) cores, this took 1 minute and 0 seconds.
Observe the CPU utilization while the workload is running, for example using task manager. You will see that CPU
utilization is close to 100%. If this is not the case, you likely have a log IO bottleneck; see also Troubleshooting slow-running tests.
Disk-based tables
The following command will run the workload on disk-based tables. Note that this workload may take a while to
execute, which is largely due to latch contention in the system. Memory-optimized tables are latch-free and thus do
not suffer from this problem.
Open the RML Cmd Prompt, and execute the following command:
Click the Copy button to copy the command, and paste it into the RML Utilities command prompt.
ostress.exe –n100 –r5000 -S. -E -dAdventureWorks2016CTP3 -q -Q"DECLARE @i int = 0, @od
Sales.SalesOrderDetailType_ondisk, @SalesOrderID int, @DueDate datetime2 = sysdatetime(), @CustomerID int =
rand() * 8000, @BillToAddressID int = rand() * 10000, @ShipToAddressID int = rand() * 10000, @ShipMethodID int
= (rand() * 5) + 1; INSERT INTO @od SELECT OrderQty, ProductID, SpecialOfferID FROM
Demo.DemoSalesOrderDetailSeed WHERE OrderID= cast((rand()*106) + 1 as int); while (@i < 20) begin; EXEC
Sales.usp_InsertSalesOrder_ondisk @SalesOrderID OUTPUT, @DueDate, @CustomerID, @BillToAddressID,
@ShipToAddressID, @ShipMethodID, @od; set @i += 1 end"
On one test server with a total number of 8 physical (16 logical) cores, this took 41 minutes and 25 seconds. On a
second test server with 24 physical (48 logical) cores, this took 52 minutes and 16 seconds.
The main factor in the performance difference between memory-optimized tables and disk-based tables in this test
is the fact that when using disk-based tables, SQL Server cannot fully utilize the CPU. The reason is latch
contention: concurrent transactions are attempting to write to the same data page; latches are used to ensure only
one transaction at a time can write to a page. The In-Memory OLTP engine is latch-free, and data rows are not
organized in pages. Thus, concurrent transactions do not block each other’s inserts, thus enabling SQL Server to
fully utilize the CPU.
You can observe the CPU utilization while the workload is running, for example using task manager. You will see
with disk-based tables the CPU utilization is far from 100%. On a test configuration with 16 logical processors, the
utilization would hover around 24%.
Optionally, you can view the number of latch waits per second using Performance Monitor, with the performance
counter ‘\SQL Server:Latches\Latch Waits/sec’.
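If Performance Monitor is not convenient, roughly the same information can be read from Transact-SQL. The following is a hedged sketch; the value is cumulative, so sample it twice while the workload runs and compare the difference, and on named instances the object_name prefix differs (for example 'MSSQL$<instance>:Latches').

-- Hedged sketch: read the latch waits counter from a DMV instead of PerfMon.
-- cntr_value is cumulative; take two samples and compare them for a rate.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Latch Waits/sec'
  AND object_name LIKE N'%Latches%';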
Resetting the demo
To reset the demo, open the RML Cmd Prompt, and execute the following command:
ostress.exe -S. -E -dAdventureWorks2016CTP3 -Q"EXEC Demo.usp_DemoReset"
Depending on the hardware this may take a few minutes to run.
We recommend a reset after every demo run. Because this workload is insert-only, each run will consume more
memory, and thus a reset is required to prevent running out of memory. The amount of memory consumed after a
run is discussed in Section Memory utilization after running the workload.
Troubleshooting slow-running tests
Test results will typically vary with hardware, and also the level of concurrency used in the test run. A couple of
things to look for if the results are not as expected:
Number of concurrent transactions: When running the workload on a single thread, performance gain with
In-Memory OLTP will likely be less than 2X. Latch contention is only a big problem if there is a high level of
concurrency.
Low number of cores available to SQL Server: This means there will be a low level of concurrency in the
system, as there can only be as many concurrently executing transactions as there are cores available to
SQL.
Symptom: if the CPU utilization is high when running the workload on disk-based tables, this means
there is not a lot of contention, pointing to a lack of concurrency.
Speed of the log drive: If the log drive cannot keep up with the level of transaction throughput in the system,
the workload becomes bottlenecked on log IO. Although logging is more efficient with In-Memory OLTP, if
log IO is a bottleneck, the potential performance gain is limited.
Symptom: if the CPU utilization is not close to 100% or is very spiky when running the workload on
memory-optimized tables, it is possible there is a log IO bottleneck. This can be confirmed by opening
Resource Monitor and looking at the queue length for the log drive.
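A quick Transact-SQL check for a log bottleneck is to look at the WRITELOG wait type; a minimal hedged sketch follows (these counters are also cumulative, so compare two samples taken while the workload is running).

-- Hedged sketch: check for log-write waits, which indicate a log IO bottleneck.
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'WRITELOG';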
Memory and Disk Space Utilization in the Sample
Below we describe what to expect in terms of memory and disk space utilization for the sample database.
We also show the results we have seen on a test server with 16 logical cores.
Memory utilization for the memory-optimized tables
Overall utilization of the database
The following query can be used to obtain the total memory utilization for In-Memory OLTP in the system.
SELECT type
, name
, pages_kb/1024 AS pages_MB
FROM sys.dm_os_memory_clerks WHERE type LIKE '%xtp%'
Snapshot after the database has just been created:
type               name       pages_MB
MEMORYCLERK_XTP    Default    94
MEMORYCLERK_XTP    DB_ID_5    877
MEMORYCLERK_XTP    Default    0
MEMORYCLERK_XTP    Default    0
The default memory clerks contain system-wide memory structures and are relatively small. The memory clerk for
the user database, in this case database with ID 5, is about 900MB.
Memory utilization per table
The following query can be used to drill down into the memory utilization of the individual tables and their
indexes:
SELECT object_name(t.object_id) AS [Table Name]
, memory_allocated_for_table_kb
, memory_allocated_for_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats dms JOIN sys.tables t
ON dms.object_id=t.object_id
WHERE t.type='U'
The following shows the results of this query for a fresh installation of the sample:
Table Name                     memory_allocated_for_table_kb    memory_allocated_for_indexes_kb
SpecialOfferProduct_inmem      64                               3840
DemoSalesOrderHeaderSeed       1984                             5504
SalesOrderDetail_inmem         15316                            663552
DemoSalesOrderDetailSeed       64                               10432
SpecialOffer_inmem             3                                8192
SalesOrderHeader_inmem         7168                             147456
Product_inmem                  124                              12352
As you can see the tables are fairly small: SalesOrderHeader_inmem is about 7MB, and SalesOrderDetail_inmem is
about 15MB in size.
What is striking here is the size of the memory allocated for indexes, compared to the size of the table data. That is
because the hash indexes in the sample are pre-sized for a larger data size. Note that hash indexes have a fixed
size, and thus their size will not grow with the size of data in the table.
Memory utilization after running the workload
After inserting 10 million sales orders, the all-up memory utilization looks similar to the following:
SELECT type
, name
, pages_kb/1024 AS pages_MB
FROM sys.dm_os_memory_clerks WHERE type LIKE '%xtp%'
|type|name|pages_MB|
|-|-|-|
|MEMORYCLERK_XTP|Default|146|
|MEMORYCLERK_XTP|DB_ID_5|7374|
|MEMORYCLERK_XTP|Default|0|
|MEMORYCLERK_XTP|Default|0|
As you can see, SQL Server is using a bit under 8GB for the memory-optimized tables and indexes in the sample
database.
Looking at the detailed memory usage per table after one example run:
SELECT object_name(t.object_id) AS [Table Name]
, memory_allocated_for_table_kb
, memory_allocated_for_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats dms JOIN sys.tables t
ON dms.object_id=t.object_id
WHERE t.type='U'
|Table Name|memory_allocated_for_table_kb|memory_allocated_for_indexes_kb|
|-|-|-|
|SalesOrderDetail_inmem|5113761|663552|
|DemoSalesOrderDetailSeed|64|10368|
|SpecialOffer_inmem|2|8192|
|SalesOrderHeader_inmem|1575679|147456|
|Product_inmem|111|12032|
|SpecialOfferProduct_inmem|64|3712|
|DemoSalesOrderHeaderSeed|1984|5504|
We can see a total of about 6.5GB of data. Notice that the size of the indexes on the tables SalesOrderHeader_inmem and SalesOrderDetail_inmem is the same as the size of the indexes before inserting the sales orders. The index size did not change because both tables are using hash indexes, and hash indexes are static.
After demo reset
The stored procedure Demo.usp_DemoReset can be used to reset the demo. It deletes the data in the tables
SalesOrderHeader_inmem and SalesOrderDetail_inmem, and re-seeds the data from the original tables
SalesOrderHeader and SalesOrderDetail.
Now, even though the rows in the tables have been deleted, this does not mean that memory is reclaimed
immediately. SQL Server reclaims memory from deleted rows in memory-optimized tables in the background, as
needed. You will see that immediately after demo reset, with no transactional workload on the system, memory
from deleted rows is not yet reclaimed:
SELECT type
, name
, pages_kb/1024 AS pages_MB
FROM sys.dm_os_memory_clerks WHERE type LIKE '%xtp%'
|type|name|pages_MB|
|-|-|-|
|MEMORYCLERK_XTP|Default|2261|
|MEMORYCLERK_XTP|DB_ID_5|7396|
|MEMORYCLERK_XTP|Default|0|
|MEMORYCLERK_XTP|Default|0|
This is expected: memory will be reclaimed when the transactional workload is running.
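If you want to watch the garbage collector catch up, you can compare allocated and used memory per table heap. The following query is a sketch; a large gap between allocated and used bytes that shrinks over time indicates that memory from deleted rows is being reclaimed:
SELECT object_name(mc.object_id) AS [Table Name]
, mc.memory_consumer_type_desc
, mc.allocated_bytes/1024/1024 AS allocated_MB
, mc.used_bytes/1024/1024 AS used_MB
FROM sys.dm_db_xtp_memory_consumers mc
WHERE mc.object_id > 0
ORDER BY allocated_MB DESC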
If you start a second run of the demo workload you will see the memory utilization decrease initially, as the
previously deleted rows are cleaned up. At some point the memory size will increase again, until the workload
finishes. After inserting 10 million rows after demo reset, the memory utilization will be very similar to the
utilization after the first run. For example:
SELECT type
, name
, pages_kb/1024 AS pages_MB
FROM sys.dm_os_memory_clerks WHERE type LIKE '%xtp%'
|type|name|pages_MB|
|-|-|-|
|MEMORYCLERK_XTP|Default|1863|
|MEMORYCLERK_XTP|DB_ID_5|7390|
|MEMORYCLERK_XTP|Default|0|
|MEMORYCLERK_XTP|Default|0|
Disk utilization for memory-optimized tables
The overall on-disk size for the checkpoint files of a database at a given time can be found using the query:
SELECT SUM(df.size) * 8 / 1024 AS [On-disk size in MB]
FROM sys.filegroups f JOIN sys.database_files df
ON f.data_space_id=df.data_space_id
WHERE f.type=N'FX'
Initial state
When the sample filegroup and sample memory-optimized tables are created initially, a number of checkpoint files
are pre-created and the system starts filling the files – the number of checkpoint files pre-created depends on the
number of logical processors in the system. As the sample is initially very small, the pre-created files will be mostly
empty after initial create.
The following shows the initial on-disk size for the sample on a machine with 16 logical processors:
SELECT SUM(df.size) * 8 / 1024 AS [On-disk size in MB]
FROM sys.filegroups f JOIN sys.database_files df
ON f.data_space_id=df.data_space_id
WHERE f.type=N'FX'
|On-disk size in MB|
|-|
|2312|
As you can see, there is a big discrepancy between the on-disk size of the checkpoint files, which is 2.3GB, and the
actual data size, which is closer to 30MB.
Looking closer at where the disk-space utilization comes from, you can use the following query. The size on disk
returned by this query is approximate for files with state in 5 (REQUIRED FOR BACKUP/HA), 6 (IN TRANSITION TO
TOMBSTONE), or 7 (TOMBSTONE).
SELECT state_desc
, file_type_desc
, COUNT(*) AS [count]
, SUM(CASE
WHEN state = 5 AND file_type=0 THEN 128*1024*1024
WHEN state = 5 AND file_type=1 THEN 8*1024*1024
WHEN state IN (6,7) THEN 68*1024*1024
ELSE file_size_in_bytes
END) / 1024 / 1024 AS [on-disk size MB]
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY state, state_desc, file_type, file_type_desc
ORDER BY state, file_type
For the initial state of the sample, the result will look something like the following for a server with 16 logical processors:
|state_desc|file_type_desc|count|on-disk size MB|
|-|-|-|-|
|PRECREATED|DATA|16|2048|
|PRECREATED|DELTA|16|128|
|UNDER CONSTRUCTION|DATA|1|128|
|UNDER CONSTRUCTION|DELTA|1|8|
As you can see, most of the space is used by precreated data and delta files. SQL Server pre-created one pair of
(data, delta) files per logical processor. In addition, data files are pre-sized at 128MB, and delta files at 8MB, in
order to make inserting data into these files more efficient.
The actual data in the memory-optimized tables is in the single data file that is under construction.
After running the workload
After a single test run that inserts 10 million sales orders, the overall on-disk size looks something like this (for a
16-core test server):
SELECT SUM(df.size) * 8 / 1024 AS [On-disk size in MB]
FROM sys.filegroups f JOIN sys.database_files df
ON f.data_space_id=df.data_space_id
WHERE f.type=N'FX'
|On-disk size in MB|
|-|
|8828|
The on-disk size is close to 9GB, which comes close to the in-memory size of the data.
Looking more closely at the sizes of the checkpoint files across the various states:
SELECT state_desc
, file_type_desc
, COUNT(*) AS [count]
, SUM(CASE
WHEN state = 5 AND file_type=0 THEN 128*1024*1024
WHEN state = 5 AND file_type=1 THEN 8*1024*1024
WHEN state IN (6,7) THEN 68*1024*1024
ELSE file_size_in_bytes
END) / 1024 / 1024 AS [on-disk size MB]
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY state, state_desc, file_type, file_type_desc
ORDER BY state, file_type
|state_desc|file_type_desc|count|on-disk size MB|
|-|-|-|-|
|PRECREATED|DATA|16|2048|
|PRECREATED|DELTA|16|128|
|UNDER CONSTRUCTION|DATA|1|128|
|UNDER CONSTRUCTION|DELTA|1|8|
We still have 16 pairs of pre-created files, ready to go as checkpoints are closed.
There is one pair under construction, which is used until the current checkpoint is closed. Along with the active
checkpoint files this gives about 6.5GB of disk utilization for 6.5GB of data in memory. Recall that indexes are not
persisted on disk, and thus the overall size on disk is smaller than the size in memory in this case.
After demo reset
After demo reset, disk space is not reclaimed immediately if there is no transactional workload on the system, and
there are not database checkpoints. For checkpoint files to be moved through their various stages and eventually
be discarded, a number of checkpoints and log truncation events need to happen, to initiate merge of checkpoint
files, as well as to initiate garbage collection. These will happen automatically if you have a transactional workload
in the system [and take regular log backups, in case you are using the FULL recovery model], but not when the
system is idle, as in a demo scenario.
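If you want to nudge this cleanup along in an idle demo system, you can issue manual checkpoints and log backups. The following is only a sketch: it assumes the database uses the FULL recovery model, and the database name and backup path are hypothetical placeholders.
-- Force a checkpoint, then back up (and thereby truncate) the log.
-- Repeating this a few times, with some transactional activity in between,
-- lets checkpoint file merge and garbage collection make progress.
CHECKPOINT;
GO
BACKUP LOG AdventureWorks2016 TO DISK = N'C:\Backup\AdventureWorks2016_log.bak'; -- hypothetical name and path
GO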
In the example, after demo reset, you may see something like
SELECT SUM(df.size) * 8 / 1024 AS [On-disk size in MB]
FROM sys.filegroups f JOIN sys.database_files df
ON f.data_space_id=df.data_space_id
WHERE f.type=N'FX'
|On-disk size in MB|
|-|
|11839|
At nearly 12GB, this is significantly more than the 9GB we had before the demo reset. This is because some
checkpoint file merges have been started, but some of the merge targets have not yet been installed, and some of
the merge source files have not yet been cleaned up, as can be seen from the following:
SELECT state_desc
, file_type_desc
, COUNT(*) AS [count]
, SUM(CASE
WHEN state = 5 AND file_type=0 THEN 128*1024*1024
WHEN state = 5 AND file_type=1 THEN 8*1024*1024
WHEN state IN (6,7) THEN 68*1024*1024
ELSE file_size_in_bytes
END) / 1024 / 1024 AS [on-disk size MB]
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY state, state_desc, file_type, file_type_desc
ORDER BY state, file_type
|state_desc|file_type_desc|count|on-disk size MB|
|-|-|-|-|
|PRECREATED|DATA|16|2048|
|PRECREATED|DELTA|16|128|
|ACTIVE|DATA|38|5152|
|ACTIVE|DELTA|38|1331|
|MERGE TARGET|DATA|7|896|
|MERGE TARGET|DELTA|7|56|
|MERGED SOURCE|DATA|13|1772|
|MERGED SOURCE|DELTA|13|455|
Merge targets are installed and merged source files are cleaned up as transactional activity happens in the system.
After a second run of the demo workload, inserting 10 million sales orders after the demo reset, you will see that
the files constructed during the first run of the workload have been cleaned up. If you run the above query several
times while the workload is running, you can see the checkpoint files make their way through the various stages.
After the second run of the workload, inserting 10 million sales orders, you will see disk utilization very similar to, though not necessarily the same as, after the first run, as the system is dynamic in nature. For example:
SELECT state_desc
, file_type_desc
, COUNT(*) AS [count]
, SUM(CASE
WHEN state = 5 AND file_type=0 THEN 128*1024*1024
WHEN state = 5 AND file_type=1 THEN 8*1024*1024
WHEN state IN (6,7) THEN 68*1024*1024
ELSE file_size_in_bytes
END) / 1024 / 1024 AS [on-disk size MB]
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY state, state_desc, file_type, file_type_desc
ORDER BY state, file_type
|state_desc|file_type_desc|count|on-disk size MB|
|-|-|-|-|
|PRECREATED|DATA|16|2048|
|PRECREATED|DELTA|16|128|
|UNDER CONSTRUCTION|DATA|2|268|
|UNDER CONSTRUCTION|DELTA|2|16|
|ACTIVE|DATA|41|5608|
|ACTIVE|DELTA|41|328|
In this case, there are two checkpoint file pairs in the ‘under construction’ state, which means multiple file pairs
were moved to the ‘under construction’ state, likely due to the high level of concurrency in the workload. Multiple
concurrent threads required a new file pair at the same time, and thus moved a pair from ‘precreated’ to ‘under
construction’.
See Also
In-Memory OLTP (In-Memory Optimization)
Memory-Optimized Tables
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) Azure SQL Database Azure SQL Data Warehouse Parallel Data Warehouse
SQL Server In-Memory OLTP helps improve performance of OLTP applications through efficient, memory-optimized data access, native compilation of business logic, and lock- and latch-free algorithms. The In-Memory
OLTP feature includes memory-optimized tables and table types, as well as native compilation of Transact-SQL
stored procedures for efficient access to these tables.
For more information about memory-optimized tables, see:
Introduction to Memory-Optimized Tables
Native Compilation of Tables and Stored Procedures
Altering Memory-Optimized Tables
Transactions with Memory-Optimized Tables
Application Pattern for Partitioning Memory-Optimized Tables
Statistics for Memory-Optimized Tables
Table and Row Size in Memory-Optimized Tables
A Guide to Query Processing for Memory-Optimized Tables
See Also
In-Memory OLTP (In-Memory Optimization)
Introduction to Memory-Optimized Tables
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data Warehouse Parallel Data Warehouse
Memory-optimized tables are created using CREATE TABLE (Transact-SQL).
Memory-optimized tables are fully durable by default, and, like transactions on (traditional) disk-based tables, fully
durable transactions on memory-optimized tables are fully atomic, consistent, isolated, and durable (ACID).
Memory-optimized tables and natively compiled stored procedures support a subset of Transact-SQL.
Starting with SQL Server 2016, and in Azure SQL Database, there are no limitations for collations or code pages
that are specific to In-Memory OLTP.
The primary store for memory-optimized tables is main memory: rows are read from and written to memory, and the entire table resides in memory. A second copy of the table data is maintained on disk, but only for durability purposes. See Creating and Managing Storage for Memory-Optimized Objects for more information about durable tables. Data in memory-optimized tables is only read from disk during database recovery, for example after a server restart.
For even greater performance gains, In-Memory OLTP supports durable tables with transaction durability delayed.
Delayed durable transactions are saved to disk soon after the transaction has committed and returned control to
the client. In exchange for the increased performance, committed transactions that have not saved to disk are lost
in a server crash or fail over.
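Delayed durability is controlled at the database level and can then be requested per transaction. The following is a minimal sketch, assuming a hypothetical memory-optimized table dbo.t1 with columns c1 and c2:
-- Allow delayed durability in the database, then opt in for a single transaction.
ALTER DATABASE CURRENT SET DELAYED_DURABILITY = ALLOWED;
GO
BEGIN TRANSACTION;
    INSERT INTO dbo.t1 WITH (SNAPSHOT) (c1, c2) VALUES (1, 2);
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
GO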
Besides the default durable memory-optimized tables, SQL Server also supports non-durable memory-optimized
tables, which are not logged and their data is not persisted on disk. This means that transactions on these tables do
not require any disk IO, but the data will not be recovered if there is a server crash or failover.
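A non-durable table is declared with DURABILITY = SCHEMA_ONLY. A minimal sketch, using a hypothetical session cache table:
-- The schema is durable, the data is not: the contents are lost on restart or failover,
-- but DML on this table generates no log or checkpoint IO.
CREATE TABLE dbo.SessionCache
(
    SessionId int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload varbinary(2000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO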
In-Memory OLTP is integrated with SQL Server to provide a seamless experience in all areas such as development,
deployment, manageability, and supportability. A database can contain in-memory as well as disk-based objects.
Rows in memory-optimized tables are versioned. This means that each row in the table potentially has multiple
versions. All row versions are maintained in the same table data structure. Row versioning is used to allow
concurrent reads and writes on the same row. For more information about concurrent reads and writes on the
same row, see Transactions with Memory-Optimized Tables.
The following figure illustrates multi-versioning. The figure shows a table with three rows and each row has
different versions.
The table has three rows: r1, r2, and r3. r1 has three versions, r2 has two versions, and r3 has four versions. Note
that different versions of the same row do not necessarily occupy consecutive memory locations. The different row
versions can be dispersed throughout the table data structure.
The memory-optimized table data structure can be seen as a collection of row versions. Rows in disk-based tables are organized in pages and extents, and individual rows are addressed using page number and page offset; row versions in memory-optimized tables are addressed using 8-byte memory pointers.
Data in memory-optimized tables is accessed in two ways:
Through natively compiled stored procedures.
Through interpreted Transact-SQL, outside of a natively-compiled stored procedure. These Transact-SQL
statements may be either inside interpreted stored procedures or they may be ad-hoc Transact-SQL
statements.
Accessing Data in Memory-Optimized Tables
Memory-optimized tables can be accessed most efficiently from natively compiled stored procedures (Natively
Compiled Stored Procedures). Memory-optimized tables can also be accessed with (traditional) interpreted
Transact-SQL. Interpreted Transact-SQL refers to accessing memory-optimized tables without a natively compiled
stored procedure. Some examples of interpreted Transact-SQL access include accessing a memory-optimized table
from a DML trigger, ad hoc Transact-SQL batch, view, and table-valued function.
The following table summarizes native and interpreted Transact-SQL access for various objects.
|FEATURE|ACCESS USING A NATIVELY COMPILED STORED PROCEDURE|INTERPRETED TRANSACT-SQL ACCESS|CLR ACCESS|
|-|-|-|-|
|Memory-optimized table|Yes|Yes|No*|
|Memory-optimized table type|Yes|Yes|No|
|Natively compiled stored procedure|Nesting of natively compiled stored procedures is now supported. You can use the EXECUTE syntax inside the stored procedures, as long as the referenced procedure is also natively compiled.|Yes|No*|
*You cannot access a memory-optimized table or natively compiled stored procedure from the context connection
(the connection from SQL Server when executing a CLR module). You can, however, create and open another
connection from which you can access memory-optimized tables and natively compiled stored procedures.
Performance and Scalability
The following factors will affect the performance gains that can be achieved with In-Memory OLTP:
Communication: An application with many calls to short stored procedures may see a smaller performance gain
compared to an application with fewer calls and more functionality implemented in each stored procedure.
Transact-SQL Execution: In-Memory OLTP achieves the best performance when using natively compiled stored
procedures rather than interpreted stored procedures or query execution. There can be a benefit to accessing
memory-optimized tables from such stored procedures.
Range Scan vs Point Lookup: Memory-optimized nonclustered indexes support range scans and ordered scans. For point lookups, memory-optimized hash indexes have better performance than memory-optimized nonclustered indexes. Memory-optimized nonclustered indexes have better performance than disk-based indexes. (See the sketch after this list.)
Starting in SQL Server 2016, the query plan for a memory-optimized table can scan the table in parallel. This
improves the performance of analytical queries.
Hash indexes also became scannable in parallel in SQL Server 2016.
Nonclustered indexes also became scannable in parallel in SQL Server 2016.
Columnstore indexes have been scannable in parallel since their inception in SQL Server 2014.
Index operations: Index operations are not logged, and they exist only in memory.
Concurrency: Applications whose performance is affected by engine-level concurrency, such as latch contention or blocking, improve significantly when the application moves to In-Memory OLTP.
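To illustrate the index choices mentioned above for range scans versus point lookups, the following sketch defines a hypothetical memory-optimized table that carries both index types:
-- Hash index on OrderId for point lookups; nonclustered index on OrderDate
-- for range and ordered scans.
CREATE TABLE dbo.OrderLookup_demo
(
    OrderId int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    OrderDate datetime2 NOT NULL INDEX ix_OrderDate NONCLUSTERED,
    Total money NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO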
The following table lists the performance and scalability issues that are commonly found in relational databases
and how In-Memory OLTP can improve performance.
|ISSUE|IN-MEMORY OLTP IMPACT|
|-|-|
|Performance: high resource (CPU, I/O, network or memory) usage|In-Memory OLTP can help reduce the hardware investment in scaled-out workloads because one server can potentially deliver the throughput of five to ten servers.|
|CPU|Natively compiled stored procedures can lower CPU usage significantly because they require significantly fewer instructions to execute a Transact-SQL statement compared to interpreted stored procedures.|
|I/O|If you encounter an I/O bottleneck from processing to data or index pages, In-Memory OLTP may reduce the bottleneck. Additionally, the checkpointing of In-Memory OLTP objects is continuous and does not lead to sudden increases in I/O operations. However, if the working set of the performance-critical tables does not fit in memory, In-Memory OLTP will not improve performance because it requires data to be memory resident. If you encounter an I/O bottleneck in logging, In-Memory OLTP can reduce the bottleneck because it does less logging. If one or more memory-optimized tables are configured as non-durable tables, you can eliminate logging for data.|
|Memory|In-Memory OLTP does not offer any performance benefit. In-Memory OLTP can put extra pressure on memory as the objects need to be memory resident.|
|Network|In-Memory OLTP does not offer any performance benefit. The data needs to be communicated from data tier to application tier.|
Scalability: most scaling issues in SQL Server applications are caused by concurrency issues such as contention in locks, latches, and spinlocks.
|ISSUE|IN-MEMORY OLTP IMPACT|
|-|-|
|Latch Contention|A typical scenario is contention on the last page of an index when inserting rows concurrently in key order. Because In-Memory OLTP does not take latches when accessing data, the scalability issues related to latch contention are fully removed.|
|Spinlock Contention|Because In-Memory OLTP does not take latches when accessing data, the scalability issues related to spinlock contention are fully removed.|
|Locking Related Contention|If your database application encounters blocking issues between read and write operations, In-Memory OLTP removes the blocking issues because it uses a new form of optimistic concurrency control to implement all transaction isolation levels. In-Memory OLTP does not use TempDB to store row versions. If the scaling issue is caused by conflict between two write operations, such as two concurrent transactions trying to update the same row, In-Memory OLTP lets one transaction succeed and fails the other transaction. The failed transaction must be re-submitted either explicitly or implicitly, re-trying the transaction. In either case, you need to make changes to the application. If your application experiences frequent conflicts between two write operations, the value of optimistic locking is diminished. The application is not suitable for In-Memory OLTP. Most OLTP applications don't have write conflicts unless the conflict is induced by lock escalation.|
Row-Level Security in Memory-Optimized Tables
Row-Level Security is supported in memory-optimized tables. Applying Row-Level Security policies to memory-optimized tables is essentially the same as for disk-based tables, except that the inline table-valued functions used as security predicates must be natively compiled (created using the WITH NATIVE_COMPILATION option). For details, see the Cross-Feature Compatibility section in the Row-Level Security topic.
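As an illustration, a security predicate for a memory-optimized table might look like the following sketch. The table dbo.SalesOrder_mo and its SalesRepName column are hypothetical; the key point is that the predicate function is created with NATIVE_COMPILATION and SCHEMABINDING:
-- Natively compiled inline table-valued function used as a filter predicate.
CREATE FUNCTION dbo.fn_SalesRepPredicate (@SalesRepName AS sysname)
    RETURNS TABLE
    WITH NATIVE_COMPILATION, SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_result
           WHERE @SalesRepName = USER_NAME();
GO
CREATE SECURITY POLICY dbo.SalesOrderFilter
    ADD FILTER PREDICATE dbo.fn_SalesRepPredicate(SalesRepName)
    ON dbo.SalesOrder_mo
    WITH (STATE = ON);
GO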
Various built-in security functions that are essential to row-level security have been enabled for in-memory tables.
For details, see Built-in Functions in Natively Compiled Modules.
EXECUTE AS CALLER - All native modules now support and use EXECUTE AS CALLER by default, even if the hint is
not specified. This is because it is expected that all row-level security predicate functions will use EXECUTE AS
CALLER so that the function (and any built-in functions used within it) will be evaluated in the context of the calling
user.
EXECUTE AS CALLER has a small (approximately 10%) performance hit caused by permission checks on the caller. If
the module specifies EXECUTE AS OWNER or EXECUTE AS SELF explicitly, these permission checks and their
associated performance cost will be avoided. However, using either of these options together with the built-in
functions above will incur a significantly higher performance hit due to the necessary context-switching.
Scenarios
For a brief discussion of typical scenarios where SQL Server In-Memory OLTP can improve performance, see In-Memory OLTP.
See Also
In-Memory OLTP (In-Memory Optimization)
Native Compilation of Tables and Stored Procedures
In-Memory OLTP introduces the concept of native compilation. SQL Server can natively compile stored procedures that access memory-optimized tables. SQL Server is also able to natively compile memory-optimized tables. Native compilation allows faster data access and more efficient query execution than interpreted (traditional) Transact-SQL. Native compilation of tables and stored procedures produces DLLs.
Native compilation of memory optimized table types is also supported. For more information, see Faster temp
table and table variable by using memory optimization.
Native compilation refers to the process of converting programming constructs to native code, consisting of
processor instructions without the need for further compilation or interpretation.
In-Memory OLTP compiles memory-optimized tables when they are created, and natively compiled stored procedures when they are loaded, to native DLLs. In addition, the DLLs are recompiled after a database or server
restart. The information necessary to recreate the DLLs is stored in the database metadata. The DLLs are not part of
the database, though they are associated with the database. For example, the DLLs are not included in database
backups.
NOTE
Memory-optimized tables are recompiled during a server restart. To speed up database recovery, natively compiled stored procedures are not recompiled during a server restart; they are compiled at the time of first execution. As a result of this deferred compilation, natively compiled stored procedures only appear when calling sys.dm_os_loaded_modules (Transact-SQL) after first execution.
Maintenance of In-Memory OLTP DLLs
The following query shows all table and stored procedure DLLs currently loaded in memory on the server:
SELECT
mod1.name,
mod1.description
from
sys.dm_os_loaded_modules as mod1
where
mod1.description = 'XTP Native DLL';
Database administrators do not need to maintain the files that are generated by native compilation. SQL Server automatically removes generated files that are no longer needed. For example, generated files will be deleted when a table or stored procedure is deleted, or if a database is dropped.
NOTE
If compilation fails or is interrupted, some generated files are not removed. These files are intentionally left behind for
supportability and are removed when the database is dropped.
NOTE
SQL Server compiles DLLs for all tables needed for database recovery. If a table was dropped just prior to a database restart
there can still be remnants of the table in the checkpoint files or the transaction log so the DLL for the table might be
recompiled during database startup. After restart the DLL will be unloaded and the files will be removed by the normal
cleanup process.
Native Compilation of Tables
Creating a memory-optimized table using the CREATE TABLE statement results in the table information being
written to the database metadata and the table and index structures created in memory. The table will also be
compiled to a DLL.
Consider the following sample script, which creates a database and a memory-optimized table:
USE master;
GO
CREATE DATABASE DbMemopt3;
GO
ALTER DATABASE DbMemopt3
add filegroup DbMemopt3_mod_memopt_1_fg
contains memory_optimized_data
;
GO
-- You must edit the front portion of filename= path, to where your DATA\ subdirectory is,
-- keeping only the trailing portion '\DATA\DbMemopt3_mod_memopt_1_fn'!
ALTER DATABASE DbMemopt3
add file
(
name     = 'DbMemopt3_mod_memopt_1_name',
filename = 'C:\DATA\DbMemopt3_mod_memopt_1_fn'
--filename = 'C:\Program Files\Microsoft SQL Server\MSSQL13.SQLSVR2016ID\MSSQL\DATA\DbMemopt3_mod_memopt_1_fn'
)
to filegroup DbMemopt3_mod_memopt_1_fg
;
GO
USE DbMemopt3;
GO
CREATE TABLE dbo.t1
(
c1 int not null primary key nonclustered,
c2 int
)
with (memory_optimized = on)
;
GO
-- You can safely rerun from here to the end.
-- Retrieve the path of the DLL for table t1.
DECLARE @moduleName nvarchar(256);
SET @moduleName =
(
'%xtp_t_' +
cast(db_id() as nvarchar(16)) +
'_' +
cast(object_id('dbo.t1') as nvarchar(16)) +
'%.dll'
)
;
-- SEARCHED FOR NAME EXAMPLE: mod1.name LIKE '%xtp_t_8_565577053%.dll'
PRINT @moduleName;
SELECT
mod1.name,
mod1.description
from
sys.dm_os_loaded_modules as mod1
where
mod1.name LIKE @moduleName
order by
mod1.name
;
-- ACTUAL NAME EXAMPLE: mod1.name = 'C:\Program Files\Microsoft SQL Server\MSSQL13.SQLSVR2016ID\MSSQL\DATA\xtp\8\xtp_t_8_565577053_184009305855461.dll'
GO
DROP DATABASE DbMemopt3; -- Clean up.
Creating the table also creates the table DLL and loads the DLL in memory. The DMV query immediately after the
CREATE TABLE statement retrieves the path of the table DLL.
The table DLL understands the index structures and row format of the table. SQL Server uses the DLL for traversing
indexes, retrieving rows, as well as storing the contents of the rows.
Native Compilation of Stored Procedures
Stored procedures that are marked with NATIVE_COMPILATION are natively compiled. This means the Transact-SQL statements in the procedure are all compiled to native code for efficient execution of performance-critical business logic.
For more information about natively compiled stored procedures, see Natively Compiled Stored Procedures.
Consider the following sample stored procedure, which inserts rows in the table t1 from the previous example:
CREATE PROCEDURE dbo.native_sp
with native_compilation,
schemabinding,
execute as owner
as
begin atomic
with (transaction isolation level = snapshot,
language = N'us_english')
DECLARE @i int = 1000000;
WHILE @i > 0
begin
INSERT dbo.t1 values (@i, @i+1);
SET @i -= 1;
end
end;
GO
EXECUTE dbo.native_sp;
GO
-- Reset.
DELETE from dbo.t1;
GO
The DLL for native_sp can interact directly with the DLL for t1, as well as the In-Memory OLTP storage engine, to
insert the rows as fast as possible.
The In-Memory OLTP compiler leverages the query optimizer to create an efficient execution plan for each of the queries in the stored procedure. Note that natively compiled stored procedures are not automatically recompiled if the data in the table changes. For more information on maintaining statistics and stored procedures with In-Memory OLTP, see Statistics for Memory-Optimized Tables.
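For example, after a large data load you might refresh statistics and then force a fresh compilation of the procedure on its next execution. A sketch using the objects from the sample above:
-- Refresh statistics on the memory-optimized table with a full scan,
-- then mark the natively compiled procedure for recompilation at next execution.
UPDATE STATISTICS dbo.t1 WITH FULLSCAN, NORECOMPUTE;
GO
EXEC sys.sp_recompile N'dbo.native_sp';
GO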
Security Considerations for Native Compilation
Native compilation of tables and stored procedures uses the In-Memory OLTP compiler. This compiler produces
files that are written to disk and loaded into memory. SQL Server uses the following mechanisms to limit access to
these files.
Native Compiler
The compiler executable, as well as the binaries and header files required for native compilation, are installed as part of the SQL Server instance under the folder MSSQL\Binn\Xtp. So, if the default instance is installed under C:\Program Files, the compiler files are installed in C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Binn\Xtp.
To limit access to the compiler, SQL Server uses access control lists (ACLs) to restrict access to binary files. All SQL
Server binaries are protected against modification or tampering through ACLs. The native compiler's ACLs also
limit use of the compiler; only the SQL Server service account and system administrators have read and execute
permissions for native compiler files.
Files Generated by a Native Compilation
The files produced when a table or stored procedure is compiled include the DLL and intermediate files including
files with the following extensions: .c, .obj, .xml, and .pdb. The generated files are saved in a subfolder of the default
data folder. The subfolder is called Xtp. When installing the default instance with the default data folder, the
generated files are placed in C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\Xtp.
SQL Server prevents tampering with the generated DLLs in three ways:
When a table or stored procedure is compiled to a DLL, this DLL is immediately loaded into memory and
linked to the sqlserver.exe process. A DLL cannot be modified while it is linked to a process.
When a database is restarted, all tables and stored procedures are recompiled (removed and recreated)
based on the database metadata. This will remove any changes made to a generated file by a malicious
agent.
The generated files are considered part of user data, and have the same security restrictions, via ACLs, as
database files: only the SQL Server service account and system administrators can access these files.
No user interaction is needed to manage these files. SQL Server will create and remove the files as necessary.
See Also
Memory-Optimized Tables
Natively Compiled Stored Procedures
Altering Memory-Optimized Tables
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data Warehouse Parallel Data Warehouse
Schema and index changes on memory-optimized tables can be performed by using the ALTER TABLE statement.
The database application can continue to run, and any operation that is accessing the table is blocked until the
alteration process is completed.
ALTER TABLE
The ALTER TABLE syntax is used for making changes to the table schema, as well as for adding, deleting, and
rebuilding indexes. Indexes are considered part of the table definition:
The syntax ALTER TABLE … ADD/DROP/ALTER INDEX is supported only for memory-optimized tables.
Without using an ALTER TABLE statement, the statements CREATE INDEX and DROP INDEX and ALTER
INDEX are not supported for indexes on memory-optimized tables.
The following is the syntax for the ADD and DROP and ALTER INDEX clauses on the ALTER TABLE statement.
| ADD
{
<column_definition>
| <table_constraint>
| <table_index>
} [ ,...n ]
| DROP
{
[ CONSTRAINT ]
{
constraint_name
} [ ,...n ]
| COLUMN
{
column_name
} [ ,...n ]
| INDEX
{
index_name
} [ ,...n ]
} [ ,...n ]
| ALTER INDEX index_name
{
REBUILD WITH ( <rebuild_index_option> )
}
}
The following types of alterations are supported.
Changing the bucket count
Adding and removing an index
Changing, adding and removing a column
Adding and removing a constraint
For more information on ALTER TABLE functionality and the complete syntax, see ALTER TABLE (Transact-SQL).
Schema-bound Dependency
Natively compiled stored procedures are required to be schema-bound, meaning they have a schema-bound
dependency on the memory optimized tables they access, and the columns they reference. A schema-bound
dependency is a relationship between two entities that prevents the referenced entity from being dropped or
incompatibly modified as long as the referencing entity exists.
For example, if a schema-bound natively compiled stored procedure references a column c1 from table mytable,
column c1 cannot be dropped. Similarly, if there is such a procedure with an INSERT statement without column list
(e.g., INSERT INTO dbo.mytable VALUES (...) ), then no column in the table can be dropped.
Examples
The following example alters the bucket count of an existing hash index. This rebuilds the hash index with the new
bucket count while other properties of the hash index remain the same.
ALTER TABLE Sales.SalesOrderDetail_inmem
ALTER INDEX imPK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID
REBUILD WITH (BUCKET_COUNT=67108864);
GO
The following example adds a column with a NOT NULL constraint and with a DEFAULT definition, and uses WITH
VALUES to provide values for each existing row in the table. If WITH VALUES is not used, each row has the value
NULL in the new column.
ALTER TABLE Sales.SalesOrderDetail_inmem
ADD Comment NVARCHAR(100) NOT NULL DEFAULT N'' WITH VALUES;
GO
The following example adds a primary key constraint to an existing column.
CREATE TABLE dbo.UserSession (
SessionId int not null,
UserId int not null,
CreatedDate datetime2 not null,
ShoppingCartId int,
index ix_UserId nonclustered hash (UserId) with (bucket_count=400000)
)
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY) ;
GO
ALTER TABLE dbo.UserSession
ADD CONSTRAINT PK_UserSession PRIMARY KEY NONCLUSTERED (SessionId);
GO
The following example removes an index.
ALTER TABLE Sales.SalesOrderDetail_inmem
DROP INDEX ix_ModifiedDate;
GO
The following example adds an index.
ALTER TABLE Sales.SalesOrderDetail_inmem
ADD INDEX ix_ModifiedDate (ModifiedDate);
GO
The following example adds multiple columns, with an index and constraints.
ALTER TABLE Sales.SalesOrderDetail_inmem
ADD
CustomerID int NOT NULL DEFAULT -1 WITH VALUES,
ShipMethodID int NOT NULL DEFAULT -1 WITH VALUES,
INDEX ix_Customer (CustomerID);
GO
Logging of ALTER TABLE on memory-optimized tables
On a memory-optimized table, most ALTER TABLE scenarios now run in parallel and result in an optimization of
writes to the transaction log. The optimization is that only the metadata changes are written to the transaction log.
However, the following ALTER TABLE operations run single-threaded and are not log-optimized.
The single-threaded operations require that the entire contents of the altered table be written to the log. A list of
single-threaded operations follows:
Alter or add a column to use a large object (LOB) type: nvarchar(max), varchar(max), or varbinary(max).
Add or drop a COLUMNSTORE index.
Almost anything that affects an off-row column.
Cause an on-row column to move off-row.
Cause an off-row column to move on-row.
Create a new off-row column.
Exception: Lengthening an already off-row column is logged in the optimized way.
See Also
Memory-Optimized Tables
Transactions with Memory-Optimized Tables
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data Warehouse Parallel Data Warehouse
This article describes all the aspects of transactions that are specific to memory-optimized tables and natively
compiled stored procedures.
The transaction isolation levels in SQL Server apply differently to memory-optimized tables versus disk-based
tables, and the underlying mechanisms are different. An understanding of the differences helps the programmer
design a high throughput system. The goal of transaction integrity is shared in all cases.
For error conditions specific to transactions on memory-optimized tables, jump to the section Conflict Detection
and Retry Logic.
For general information, see SET TRANSACTION ISOLATION LEVEL (Transact-SQL).
Sections in this article:
Pessimistic versus Optimistic
Transaction Initiation Modes
Code Example with Explicit Mode
Row Versioning
Transaction Isolation Levels
Transaction Phases and Lifetime
Conflict Detection and Retry Logic
Retry T-SQL Code Example
Cross-Container Transaction
Limitations
Natively Compiled Stored Procedures
Other Transaction Links
Pessimistic versus Optimistic
The functional differences are due to pessimistic versus optimistic approaches to transaction integrity. Memory-optimized tables use the optimistic approach:
The pessimistic approach uses locks to block potential conflicts before they occur. Locks are taken when the statement is executed, and released when the transaction is committed.
The optimistic approach detects conflicts as they occur, and performs validation checks at commit time.
Error 1205, a deadlock, cannot occur for a memory-optimized table.
The optimistic approach has less overhead and is usually more efficient, partly because transaction conflicts are uncommon in most applications. The main functional difference between the pessimistic and optimistic approaches is that if a conflict occurs, in the pessimistic approach you wait, while in the optimistic approach one of the transactions fails and needs to be retried by the client. The functional differences are bigger when the REPEATABLE READ isolation level is in force, and are biggest for the SERIALIZABLE level.
Transaction Initiation Modes
SQL Server has the following modes for transaction initiation:
Autocommit - The start of a simple query or DML statement implicitly opens a transaction, and the end of
the statement implicitly commits the transaction. This is the default.
In autocommit mode, usually you are not required to code a table hint about the transaction isolation
level on the memory-optimized table in the FROM clause.
Explicit - Your Transact-SQL contains the code BEGIN TRANSACTION, along with an eventual COMMIT TRANSACTION. Two or more statements can be corralled into the same transaction.
In explicit mode, you must either use the database option
MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT or code a table hint about the transaction isolation
level on the memory-optimized table in the FROM clause.
Implicit - When SET IMPLICIT_TRANSACTIONS ON is in force. Perhaps a better name would have been IMPLICIT_BEGIN_TRANSACTION, because all this option does is implicitly perform the equivalent of an explicit BEGIN TRANSACTION before each UPDATE statement if 0 = @@trancount. Therefore it is up to your T-SQL code to eventually issue an explicit COMMIT TRANSACTION. (See the sketch after this list.)
ATOMIC BLOCK - All statements in ATOMIC blocks, which are required with natively compiled stored
procedures, always run as part of a single transaction - either the actions of the atomic block as a whole are
committed, or they are all rolled back, in case of a failure.
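The following sketch shows the implicit mode against the memory-optimized table dbo.Order_mo that is also used in the next example; the column names are hypothetical, and the table hint is still required because an implicit transaction is open:
SET IMPLICIT_TRANSACTIONS ON;
GO
-- The UPDATE implicitly opens a transaction, because @@trancount was 0.
UPDATE dbo.Order_mo WITH (SNAPSHOT)
    SET OrderStatus = 5
    WHERE OrderId = 1;
-- The transaction stays open until it is explicitly committed.
COMMIT TRANSACTION;
GO
SET IMPLICIT_TRANSACTIONS OFF;
GO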
Code Example with Explicit Mode
The following interpreted Transact-SQL script uses:
An explicit transaction.
A memory-optimized table, named dbo.Order_mo.
The READ COMMITTED transaction isolation level context.
Therefore it is necessary to have a table hint on the memory-optimized table. The hint must be for SNAPSHOT or
an even more isolating level. In the case of the code example, the hint is WITH (SNAPSHOT). If this hint is
removed, the script would suffer an error 41368, for which an automated retry would be inappropriate:
41368: Accessing memory optimized tables using the READ COMMITTED isolation level is supported only for
autocommit transactions. It is not supported for explicit or implicit transactions. Provide a supported isolation
level for the memory-optimized table using a table hint, such as WITH (SNAPSHOT).
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
GO
BEGIN TRANSACTION; -- Explicit transaction.
-- Order_mo is a memory-optimized table.
SELECT *
FROM
dbo.Order_mo as o WITH (SNAPSHOT) -- Table hint.
JOIN dbo.Customer as c on c.CustomerId = o.CustomerId;
COMMIT TRANSACTION;
Note that the need for the WITH (SNAPSHOT) hint can be avoided through the use of the database option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT. When this option is set to ON, access to a memory-optimized table under a lower isolation level is automatically elevated to SNAPSHOT isolation.
ALTER DATABASE CURRENT SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=ON
Row Versioning
Memory-optimized tables use a highly sophisticated row versioning system that makes the optimistic approach
efficient, even at the most strict isolation level of SERIALIZABLE. For details see Introduction to Memory-Optimized
Tables.
Disk-based tables indirectly have a row versioning system when READ_COMMITTED_SNAPSHOT or the
SNAPSHOT isolation level is in effect. This system is based on tempdb, while memory-optimized data structures
have row versioning built-in, for maximum efficiency.
Isolation Levels
The following table lists the possible levels of transaction isolation, in sequence from least isolation to most. For
details about conflicts that can occur and retry logic to deal with these conflicts, see Conflict Detection and Retry
Logic.
|ISOLATION LEVEL|DESCRIPTION|
|-|-|
|READ UNCOMMITTED|Not available: memory-optimized tables cannot be accessed under READ UNCOMMITTED isolation. It is still possible to access memory-optimized tables under SNAPSHOT isolation if the session-level TRANSACTION ISOLATION LEVEL is set to READ UNCOMMITTED, by using the WITH (SNAPSHOT) table hint or setting the database setting MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT to ON.|
|READ COMMITTED|Supported for memory-optimized tables only when the autocommit mode is in effect. It is still possible to access memory-optimized tables under SNAPSHOT isolation if the session-level TRANSACTION ISOLATION LEVEL is set to READ COMMITTED, by using the WITH (SNAPSHOT) table hint or setting the database setting MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT to ON. Note that if the database option READ_COMMITTED_SNAPSHOT is set to ON, it is not allowed to access both a memory-optimized and a disk-based table under READ COMMITTED isolation in the same statement.|
|SNAPSHOT|Supported for memory-optimized tables. Internally, SNAPSHOT is the least demanding transaction isolation level for memory-optimized tables. SNAPSHOT uses fewer system resources than REPEATABLE READ or SERIALIZABLE.|
|REPEATABLE READ|Supported for memory-optimized tables. The guarantee provided by REPEATABLE READ isolation is that, at commit time, no concurrent transaction has updated any of the rows read by this transaction. Because of the optimistic model, concurrent transactions are not prevented from updating rows read by this transaction. Instead, at commit time this transaction validates that REPEATABLE READ isolation has not been violated. If it has, this transaction is rolled back and must be retried.|
|SERIALIZABLE|Supported for memory-optimized tables. Named SERIALIZABLE because the isolation is so strict that it is almost a bit like having the transactions run in series rather than concurrently.|
Transaction Phases and Lifetime
When a memory-optimized table is involved, the lifetime of a transaction progresses through the phases as
displayed in the following image.
Descriptions of the phases follow.
Regular Processing: Phase 1 (of 3)
This phase is comprised of the execution of all queries and DML statements in the query.
During this phase, the statements see the version of the memory-optimized tables as of the logical start time
of the transaction.
Validation: Phase 2 (of 3)
The validation phase begins by assigning the end time, thereby marking the transaction as logically complete.
This makes all changes of the transaction visible to other transactions, which will take a dependency on this
transaction, and will not be allowed to commit until this transaction has successfully committed. In addition,
transactions which hold such dependencies are not allowed to return result sets to the client to ensure the
client only sees data that has been successfully committed to the database.
This phase comprises the repeatable read and serializable validation. For repeatable read validation it checks whether any of the rows read by the transaction have since been updated. For serializable validation it checks
whether any row has been inserted into any data range scanned by this transaction. Note that, per the table in
Isolation Levels and Conflicts, both repeatable read and serializable validation can happen when using
snapshot isolation, to validate consistency of unique and foreign key constraints.
Commit Processing: Phase 3 (of 3)
During the commit phase, the changes to durable tables are written to the log, and the log is written to disk.
Then control is returned to the client.
After commit processing completes, all dependent transactions are notified that they can commit.
As always, you should try to keep your transactional units of work as minimal and brief as is valid for your data
needs.
Conflict Detection and Retry Logic
There are two kinds of transaction-related error conditions that cause a transaction to fail and roll back. In most
cases, once such a failure occurs, the transaction needs to be retried, similar to when a deadlock occurs.
Conflicts between concurrent transactions. These are update conflicts and validation failures, and can be due to
transaction isolation level violations or constraint violations.
Dependency failures. These result from a transaction you depend on failing to commit, or from the number of dependencies growing too large.
Below are the error conditions that can cause transactions accessing memory-optimized tables to fail.
|ERROR CODE|DESCRIPTION|CAUSE|
|-|-|-|
|41302|Attempted to update a row that was updated in a different transaction since the start of the present transaction.|This error condition occurs if two concurrent transactions attempt to update or delete the same row at the same time. One of the two transactions receives this error message and will need to be retried.|
|41305|Repeatable read validation failure. A row read from a memory-optimized table by this transaction has been updated by another transaction that has committed before the commit of this transaction.|This error can occur when using REPEATABLE READ or SERIALIZABLE isolation, and also if the actions of a concurrent transaction cause violation of a FOREIGN KEY constraint. Such concurrent violation of foreign key constraints is usually rare, and typically indicates an issue with the application logic or with data entry. However, the error can also occur if there is no index on the columns involved with the FOREIGN KEY constraint. Therefore, the guidance is to always create an index on foreign key columns in a memory-optimized table. For more detailed considerations about validation failures caused by foreign key violations, see this blog post by the SQL Server Customer Advisory Team.|
|41325|Serializable validation failure. A new row was inserted into a range that was scanned earlier by the present transaction. We call this a phantom row.|This error can occur when using SERIALIZABLE isolation, and also if the actions of a concurrent transaction cause violation of a PRIMARY KEY, UNIQUE, or FOREIGN KEY constraint. Such concurrent constraint violation is usually rare, and typically indicates an issue with the application logic or data entry. However, similar to repeatable read validation failures, this error can also occur if there is a FOREIGN KEY constraint with no index on the columns involved.|
|41301|Dependency failure: a dependency was taken on another transaction that later failed to commit.|This transaction (Tx1) took a dependency on another transaction (Tx2) while that transaction (Tx2) was in its validation or commit processing phase, by reading data that was written by Tx2. Tx2 subsequently failed to commit. The most common causes for Tx2 to fail to commit are repeatable read (41305) and serializable (41325) validation failures; a less common cause is log IO failure.|
|41839|Transaction exceeded the maximum number of commit dependencies.|There is a limit on the number of transactions a given transaction (Tx1) can depend on - those are the outgoing dependencies. In addition, there is a limit on the number of transactions that can depend on a given transaction (Tx1) - these are the incoming dependencies. The limit for both is 8. The most common case for this failure is where there is a large number of read transactions accessing data written by a single write transaction. The likelihood of hitting this condition increases if the read transactions are all performing large scans of the same data and if validation or commit processing of the write transaction takes long, for example when the write transaction performs large scans under serializable isolation (which increases the length of the validation phase) or the transaction log is placed on a slow log IO device (which increases the length of commit processing). If the read transactions are performing large scans and they are expected to access only a few rows, this could be an indication of a missing index. Similarly, if the write transaction uses serializable isolation and is performing large scans but is expected to access only a few rows, this is also an indication of a missing index. The limit on the number of commit dependencies can be lifted by using Trace Flag 9926. Use this trace flag only if you are still hitting this error condition after confirming that there are no missing indexes, as it could mask these issues in the above-mentioned cases. Another caution is that complex dependency graphs, where each transaction has a large number of incoming as well as outgoing dependencies, and individual transactions have many layers of dependencies, can lead to inefficiencies in the system.|
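Following the guidance for error 41305 above, make sure foreign key columns in memory-optimized tables are indexed. A sketch, using the hypothetical table from the retry example below:
-- Add a nonclustered index on a foreign key column of a memory-optimized table.
ALTER TABLE dbo.SalesOrder_mo
    ADD INDEX ix_CustomerId NONCLUSTERED (CustomerId);
GO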
Retry Logic
When a transaction fails due to any of the above-mentioned conditions, the transaction should be retried.
Retry logic can be implemented at the client or server side. The general recommendation is to implement retry
logic on the client side, as it is more efficient, and allows you to deal with result sets returned by the transaction
before the failure occurs.
Retry T-SQL Code Example
Server-side retry logic using T-SQL should only be used for transactions that do not return result sets to the client, since retries can potentially result in additional result sets being returned to the client that may not be anticipated.
The following interpreted T-SQL script illustrates what retry logic can look like for the errors associated with
transaction conflicts involving memory-optimized tables.
-- Retry logic, in Transact-SQL.
DROP PROCEDURE If Exists usp_update_salesorder_dates;
GO
CREATE PROCEDURE usp_update_salesorder_dates
AS
BEGIN
DECLARE @retry INT = 10;
WHILE (@retry > 0)
BEGIN
BEGIN TRY
BEGIN TRANSACTION;
UPDATE dbo.SalesOrder_mo WITH (SNAPSHOT)
set OrderDate = GetUtcDate()
where CustomerId = 42;
UPDATE dbo.SalesOrder_mo WITH (SNAPSHOT)
set OrderDate = GetUtcDate()
where CustomerId = 43;
COMMIT TRANSACTION;
SET @retry = 0; -- //Stops the loop.
END TRY
BEGIN CATCH
SET @retry -= 1;
IF (@retry > 0 AND
ERROR_NUMBER() in (41302, 41305, 41325, 41301, 41839, 1205)
)
BEGIN
IF XACT_STATE() = -1
ROLLBACK TRANSACTION;
WAITFOR DELAY '00:00:00.001';
END
ELSE
BEGIN
PRINT 'Suffered an error for which Retry is inappropriate.';
THROW;
END
END CATCH
END -- //While loop
END;
GO
-- EXECUTE usp_update_salesorder_dates;
Cross-Container Transaction
A transaction is called a cross-container transaction if it:
Accesses a memory-optimized table from interpreted Transact-SQL; or
Executes a native proc when a transaction is already open (XACT_STATE() = 1).
The term "cross-container" derives from the fact that the transaction runs across the two transaction management
containers, one for disk-based tables and one for memory-optimized tables.
Within a single cross-container transaction, different isolation levels can be used for accessing disk-based and
memory-optimized tables. This difference is expressed through explicit table hints such as WITH (SERIALIZABLE)
or through the database option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT, which implicitly elevates the
isolation level for memory-optimized table to snapshot if the TRANSACTION ISOLATION LEVEL is configured as
READ COMMITTED or READ UNCOMMITTED.
In the following Transact-SQL code example:
The disk-based table, Table_D1, is accessed using the READ COMMITTED isolation level.
The memory-optimized table Table_MO7 is accessed using the SERIALIZABLE isolation level. Table_MO6 does
not have a specific associated isolation level, since inserts are always consistent and executed essentially under
serializable isolation.
-- Different isolation levels for
-- disk-based tables versus memory-optimized tables,
-- within one explicit transaction.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
go
BEGIN TRANSACTION;
-- Table_D1 is a traditional disk-based table, accessed using READ COMMITTED isolation.
--
SELECT * FROM Table_D1;
-- Table_MO6 and Table_MO7 are memory-optimized tables. Table_MO7 is accessed using SERIALIZABLE isolation,
-- while Table_MO6 does not have a specific isolation level.
--
INSERT Table_MO6
SELECT * FROM Table_MO7 WITH (SERIALIZABLE);
COMMIT TRANSACTION;
go
Limitations
Cross-database transactions are not supported for memory-optimized tables. If a transaction accesses a
memory-optimized table, the transaction cannot access any other database, except for:
tempdb database.
Read-only from the master database.
Distributed transactions are not supported: When BEGIN DISTRIBUTED TRANSACTION is used, the
transaction cannot access a memory-optimized table.
Natively Compiled Stored Procedures
In a native proc, the ATOMIC block must declare the transaction isolation level for the whole block, such as:
... BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, ...) ...
No explicit transaction control statements are allowed within the body of a native proc. BEGIN
TRANSACTION, ROLLBACK TRANSACTION and so on are all disallowed.
For more information about transaction control with ATOMIC blocks see Atomic Blocks
Other Transaction Links
SET IMPLICIT_TRANSACTIONS
sp_getapplock (Transact-SQL)
Row Versioning-based Isolation Levels in the Database Engine
Control Transaction Durability
Application Pattern for Partitioning Memory-Optimized Tables
THIS TOPIC APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data Warehouse Parallel Data Warehouse
In-Memory OLTP supports a pattern where a limited amount of active data is kept in a memory-optimized table,
while less-frequently accessed data is processed on disk. Typically, this would be a scenario where data is stored
based on a datetime key.
You can emulate partitioned tables with memory-optimized tables by maintaining a partitioned table and a memory-optimized table with a common schema. Current data would be inserted and updated in the memory-optimized table, while less-frequently accessed data would be maintained in the traditional partitioned table.
An application that knows that the active data is in a memory-optimized table can use natively compiled stored
procedures to access the data. Operations that need to access the entire span of data, or which may not know
which table holds relevant data, use interpreted Transact-SQL to join the memory-optimized table with the
partitioned table.
This partition switch is described as follows:
1. Insert data from the In-Memory OLTP table into a staging table, possibly using a cutoff date.
2. Delete the same data from the memory-optimized table.
3. Swap in the staging table.
4. Add the active partition.
Active Data Maintenance
The actions starting with Deleting ActiveOrders need to be done during a maintenance window to avoid
queries missing data during the time between deleting data and switching in the staging table.
For a related sample, see Application-Level Partitioning.
Code Sample
The following sample shows how to use a memory-optimized table with a partitioned disk-based table.
Frequently-used data is stored in memory. To save the data to disk, create a new partition and copy the data to the
partitioned table.
The first part of this sample creates the database and necessary objects. The second part of the sample shows how
to move data from a memory-optimized table into a partitioned table.
CREATE DATABASE partitionsample;
GO

-- enable the database for In-Memory OLTP - change the file path as needed
ALTER DATABASE partitionsample ADD FILEGROUP partitionsample_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE partitionsample ADD FILE( NAME = 'partitionsample_mod' , FILENAME = 'c:\data\partitionsample_mod') TO FILEGROUP partitionsample_mod;
GO
USE partitionsample;
GO
-- frequently used portion of the SalesOrders - memory-optimized
CREATE TABLE dbo.SalesOrders_hot (
so_id INT IDENTITY PRIMARY KEY NONCLUSTERED,
cust_id INT NOT NULL,
so_date DATETIME2 NOT NULL INDEX ix_date NONCLUSTERED,
so_total MONEY NOT NULL,
INDEX ix_date_total NONCLUSTERED (so_date desc, so_total desc)
) WITH (MEMORY_OPTIMIZED=ON)
GO
-- cold portion of the SalesOrders - partitioned disk-based table
CREATE PARTITION FUNCTION [ByDatePF](datetime2) AS RANGE RIGHT
FOR VALUES();
GO
CREATE PARTITION SCHEME [ByDateRange]
AS PARTITION [ByDatePF]
ALL TO ([PRIMARY]);
GO
CREATE TABLE dbo.SalesOrders_cold (
so_id INT NOT NULL,
cust_id INT NOT NULL,
so_date DATETIME2 NOT NULL,
so_total MONEY NOT NULL,
CONSTRAINT PK_SalesOrders_cold PRIMARY KEY (so_id, so_date),
INDEX ix_date_total NONCLUSTERED (so_date desc, so_total desc)
) ON [ByDateRange](so_date)
GO
-- table for temporary partitions
CREATE TABLE dbo.SalesOrders_cold_staging (
so_id INT NOT NULL,
cust_id INT NOT NULL,
so_date datetime2 NOT NULL,
so_total MONEY NOT NULL,
CONSTRAINT PK_SalesOrders_cold_staging PRIMARY KEY (so_id, so_date),
INDEX ix_date_total NONCLUSTERED (so_date desc, so_total desc),
CONSTRAINT CHK_SalesOrders_cold_staging CHECK (so_date >= '1900-01-01')
)
GO
-- aggregate view of the hot and cold data
CREATE VIEW dbo.SalesOrders
AS SELECT so_id,
cust_id,
so_date,
so_total,
1 AS 'is_hot'
FROM dbo.SalesOrders_hot
UNION ALL
SELECT so_id,
cust_id,
so_date,
so_total,
0 AS 'is_hot'
FROM dbo.SalesOrders_cold;
GO
-- move all sales orders up to the split date to cold storage
CREATE PROCEDURE dbo.usp_SalesOrdersOffloadToCold @splitdate datetime2
AS
BEGIN
BEGIN TRANSACTION;
-- create new heap based on the hot data to be moved to cold storage
INSERT INTO dbo.SalesOrders_cold_staging WITH( TABLOCKX)
SELECT so_id , cust_id , so_date , so_total
FROM dbo.SalesOrders_hot WITH ( serializable)
WHERE so_date <= @splitdate;
-- remove moved data
DELETE FROM dbo.SalesOrders_hot WITH( SERIALIZABLE)
WHERE so_date <= @splitdate;
-- update partition function, and switch in new partition
ALTER PARTITION SCHEME [ByDateRange] NEXT USED [PRIMARY];
DECLARE @p INT = ( SELECT MAX( partition_number) FROM sys.partitions WHERE object_id = OBJECT_ID(
'dbo.SalesOrders_cold'));
EXEC sp_executesql N'alter table dbo.SalesOrders_cold_staging
SWITCH TO dbo.SalesOrders_cold partition @i' , N'@i int' , @i = @p;
ALTER PARTITION FUNCTION [ByDatePF]()
SPLIT RANGE( @splitdate);
-- modify constraint on staging table to align with new partition
ALTER TABLE dbo.SalesOrders_cold_staging DROP CONSTRAINT CHK_SalesOrders_cold_staging;
DECLARE @s nvarchar( 100) = CONVERT( nvarchar( 100) , @splitdate , 121);
DECLARE @sql nvarchar( 1000) = N'alter table dbo.SalesOrders_cold_staging
add constraint CHK_SalesOrders_cold_staging check (so_date > ''' + @s + ''')';
PRINT @sql;
EXEC sp_executesql @sql;
COMMIT;
END;
GO
-- insert sample values in the hot table
INSERT INTO dbo.SalesOrders_hot VALUES(1,SYSDATETIME(), 1);
GO
INSERT INTO dbo.SalesOrders_hot VALUES(1, SYSDATETIME(), 1);
GO
INSERT INTO dbo.SalesOrders_hot VALUES(1, SYSDATETIME(), 1);
GO
-- verify contents of the table
SELECT * FROM dbo.SalesOrders;
GO
-- offload all sales orders to date to cold storage
DECLARE @t datetime2 = SYSDATETIME();
EXEC dbo.usp_SalesOrdersOffloadToCold @t;
-- verify contents of the tables
SELECT * FROM dbo.SalesOrders;
GO
-- verify partitions
SELECT OBJECT_NAME( object_id) , * FROM sys.dm_db_partition_stats ps
WHERE object_id = OBJECT_ID( 'dbo.SalesOrders_cold');
-- insert more rows in the hot table
INSERT INTO dbo.SalesOrders_hot VALUES(2, SYSDATETIME(), 1);
GO
INSERT INTO dbo.SalesOrders_hot VALUES(2, SYSDATETIME(), 1);
GO
INSERT INTO dbo.SalesOrders_hot VALUES(2, SYSDATETIME(), 1);
INSERT INTO dbo.SalesOrders_hot VALUES(2, SYSDATETIME(), 1);
GO
-- verify contents of the tables
SELECT * FROM dbo.SalesOrders;
GO
-- offload all sales orders to date to cold storage
DECLARE @t datetime2 = SYSDATETIME();
EXEC dbo.usp_SalesOrdersOffloadToCold @t;
-- verify contents of the tables
SELECT * FROM dbo.SalesOrders;
GO
-- verify partitions
SELECT OBJECT_NAME( object_id) , partition_number , row_count FROM sys.dm_db_partition_stats ps
WHERE object_id = OBJECT_ID( 'dbo.SalesOrders_cold') AND index_id = 1;
See Also
Memory-Optimized Tables
Statistics for Memory-Optimized Tables
3/24/2017 • 3 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
The query optimizer uses statistics about columns to create query plans that improve query performance. Statistics
are collected from the tables in the database and stored in the database metadata.
Statistics are created automatically, but can also be created manually. For example, statistics are created
automatically for index key columns when the index is created. For more information about creating statistics see
Statistics.
Table data typically changes over time as rows are inserted, updated, and deleted. This means statistics need to be
updated periodically. By default, statistics on tables are updated automatically when the query optimizer
determines they might be out of date.
Considerations for statistics on memory-optimized tables:
Starting in SQL Server 2016 and in Azure SQL Database, automatic update of statistics is supported for
memory-optimized tables, when using database compatibility level of at least 130. See ALTER DATABASE
Compatibility Level (Transact-SQL). If a database has tables that were previously created using a lower
compatibility level, the statistics need to be updated manually once, to enable automatic update of statistics
going forward.
For natively compiled stored procedures, execution plans for queries in the procedure are optimized when
the procedure is compiled, which happens at create time. They are not automatically recompiled when
statistics are updated. Therefore, the tables should contain a representative set of data before the
procedures are created.
Natively compiled stored procedures can be manually recompiled using sp_recompile (Transact-SQL), and
they are automatically recompiled if the database is taken offline and brought back online, or if there is a
database failover or server restart.
Enabling Automatic Update of Statistics in Existing Tables
When tables are created in a database that has compatibility level of at least 130, automatic update of statistics is
enabled for all statistics on that table, and no further action is needed.
If a database has memory-optimized tables that were created in an earlier version of SQL Server or under a lower
compatibility level than 130, the statistics need to be updated manually once to enable automatic update going
forward.
To enable automatic update of statistics for memory-optimized tables that were created under an older
compatibility level, follow these steps:
1. Update the database compatibility level:
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL=130
2. Manually update the statistics of the memory-optimized tables. A sample script that does this appears below.
3. Manually recompile the natively compiled stored procedures to benefit from the updated statistics.
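For step 3, one sp_recompile call per natively compiled procedure is sufficient; a brief sketch, using a hypothetical procedure name:
EXEC sys.sp_recompile N'dbo.usp_myNativeProc';  -- recompiles the procedure using the definition in metadata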
One-time script for statistics: For memory-optimized tables that were created under a lower compatibility level, you
can run the following Transact-SQL script one time to update the statistics of all memory-optimized tables, and
enable automatic update of statistics from then onward (assuming AUTO_UPDATE_STATISTICS is enabled for the
database):
-- Assuming AUTO_UPDATE_STATISTICS is already ON for your database:
-- ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 130;
GO
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT
@sql += N'UPDATE STATISTICS '
+ quotename(schema_name(t.schema_id))
+ N'.'
+ quotename(t.name)
+ ';' + CHAR(13) + CHAR(10)
FROM sys.tables AS t
WHERE t.is_memory_optimized = 1 AND
t.object_id IN (SELECT object_id FROM sys.stats WHERE no_recompute=1)
;
EXECUTE sp_executesql @sql;
GO
-- Each row appended to @sql looks roughly like:
-- UPDATE STATISTICS [dbo].[MyMemoryOptimizedTable];
Verify auto-update is enabled: The following script verifies whether automatic update is enabled for statistics on
memory-optimized tables. After running the previous script, it will return 1 in the column auto-update enabled
for all statistics objects.
SELECT
quotename(schema_name(o.schema_id)) + N'.' + quotename(o.name) AS [table],
s.name AS [statistics object],
1-s.no_recompute AS [auto-update enabled]
FROM sys.stats s JOIN sys.tables o ON s.object_id=o.object_id
WHERE o.is_memory_optimized=1
Guidelines for Deploying Tables and Procedures
To ensure that the query optimizer has up-to-date statistics when creating query plans, deploy memory-optimized
tables and natively compiled stored procedures that access these tables using these four steps:
1. Ensure the database has compatibility level of at least 130. See ALTER DATABASE Compatibility Level
(Transact-SQL).
2. Create tables and indexes. Indexes should be specified inline in the CREATE TABLE statements.
3. Load data into the tables.
4. Create stored procedures that access the tables.
Creating natively compiled stored procedures after you load the data ensures that the optimizer has
statistics available for the memory-optimized tables. This will ensure efficient query plans when the
procedure is compiled.
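A compact sketch of this four-step order, using hypothetical object names, might look as follows:
-- 1. Set the compatibility level.
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 130;
GO
-- 2. Create the table with its index specified inline.
CREATE TABLE dbo.DemoStats
(
    id  INT NOT NULL PRIMARY KEY NONCLUSTERED,
    val INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
-- 3. Load a representative set of data.
INSERT dbo.DemoStats (id, val) VALUES (1, 10), (2, 20), (3, 30);
GO
-- 4. Create the natively compiled procedure last, so it compiles with statistics in place.
CREATE PROCEDURE dbo.usp_DemoStats
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT id, val FROM dbo.DemoStats WHERE val > 15;
END;
GO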
See Also
Memory-Optimized Tables
Table and Row Size in Memory-Optimized Tables
3/24/2017 • 9 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
A memory-optimized table consists of a collection of rows and indexes that contain pointers to rows. In a
memory-optimized table, in-row data cannot be longer than 8,060 bytes. However, starting with SQL Server 2016 it is
possible to create a table with multiple large columns (e.g., multiple varbinary(8000) columns) and LOB columns
(i.e., varbinary(max), varchar(max), and nvarchar(max)). Columns that exceed the max size for in-row data are
placed off-row, in special internal tables. For details about these internal tables see
sys.memory_optimized_tables_internal_attributes (Transact-SQL).
There are two reasons for computing table and row size:
How much memory does a table use?
The amount of memory used by the table cannot be calculated exactly. Many factors affect the
amount of memory used, such as page-based memory allocation, locality, caching, and padding,
as well as multiple row versions that either have active transactions associated with them or are
waiting for garbage collection.
The minimum size required for the data and indexes in the table is given by the calculation for [table
size], discussed below.
Calculating memory use is at best an approximation and you are advised to include capacity
planning in your deployment plans.
The data size of a row, and does it fit in the 8,060 byte row size limitation? To answer these questions, use
the computation for [row body size], discussed below.
Columns that do not fit in the 8060 byte row size limit are placed off-row, in a separate internal table. Each off-row
column has a corresponding internal table, which in turn has a single nonclustered index. For details about internal
tables used for off-row columns see sys.memory_optimized_tables_internal_attributes (Transact-SQL).
The following figure illustrates a table with indexes and rows, which in turn have row headers and bodies:
Memory-optimized table, consisting of indexes and rows.
The in-memory size of a table, in bytes, is computed as follows:
[table size] = [size of index 1] + … + [size of index n] + ([row size] * [row count])
The size of a hash index is fixed at table creation time and depends on the actual bucket count. The bucket_count
specified with the index specification is rounded up to the nearest power of 2 to obtain the [actual bucket count].
For example, if the specified bucket_count is 100000, the [actual bucket count] for the index is 131072.
[hash index size] = 8 * [actual bucket count]
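As an illustration of the rounding rule, the following helper calculation simply mirrors the arithmetic above (it is not part of any system view, and the requested bucket count is only an example):
DECLARE @bucket_count_requested int = 100000;
DECLARE @actual_bucket_count bigint =
    POWER(CAST(2 AS bigint), CEILING(LOG(@bucket_count_requested) / LOG(2)));
SELECT @actual_bucket_count     AS actual_bucket_count,    -- 131072
       8 * @actual_bucket_count AS hash_index_size_bytes;  -- 1048576 (8 bytes per bucket)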
The size of a nonclustered index is in the order of [row count] * [index key size].
The row size is computed by adding the header and the body:
[row size] = [row header size] + [actual row body size]
[row header size] = 24 + 8 * [number of indices]
Row body size
The calculation of [row body size] is discussed in the following table.
There are two different computations for row body size: computed size and the actual size:
The computed size, denoted with [computed row body size], is used to determine if the row size limitation
of 8,060 bytes is exceeded.
The actual size, denoted with [actual row body size], is the actual storage size of the row body in memory
and in the checkpoint files.
Both [computed row body size] and [actual row body size] are calculated similarly. The only difference is the
calculation of the size of (n)varchar(i) and varbinary(i) columns, as reflected at the bottom of the following
table. The computed row body size uses the declared size i as the size of the column, while the actual row
body size uses the actual size of the data.
The following table describes the calculation of the row body size.
Shallow type columns
  Size: SUM([size of shallow types]). The size in bytes of the individual types is as follows:
    Bit: 1
    Tinyint: 1
    Smallint: 2
    Int: 4
    Real: 4
    Smalldatetime: 4
    Smallmoney: 4
    Bigint: 8
    Datetime: 8
    Datetime2: 8
    Float: 8
    Money: 8
    Numeric (precision <= 18): 8
    Time: 8
    Numeric (precision > 18): 16
    Uniqueidentifier: 16

Shallow column padding
  Size: 1 if there are deep type columns and the total data size of the shallow columns is an odd number; 0 otherwise.
  Comments: Deep types are the types (var)binary and (n)(var)char.

Offset array for deep type columns
  Size: 0 if there are no deep type columns; 2 + 2 * [number of deep type columns] otherwise.
  Comments: Deep types are the types (var)binary and (n)(var)char.

NULL array
  Size: [number of nullable columns] / 8, rounded up to full bytes.
  Comments: The array has one bit per nullable column.

NULL array padding
  Size: 1 if there are deep type columns and the size of the NULL array is an odd number of bytes; 0 otherwise.
  Comments: Deep types are the types (var)binary and (n)(var)char.

Padding
  Size: If there are no deep type columns: 0. If there are deep type columns, 0-7 bytes of padding is added, based on the largest alignment required by a shallow column. Each shallow column requires alignment equal to its size as documented above, except that GUID columns need alignment of 1 byte (not 16) and numeric columns always need alignment of 8 bytes (never 16). The largest alignment requirement among all shallow columns is used, and 0-7 bytes of padding is added in such a way that the total size so far (without the deep type columns) is a multiple of the required alignment.
  Comments: Deep types are the types (var)binary and (n)(var)char.

Fixed-length deep type columns
  Size: SUM([size of fixed-length deep type columns]). The size of each column is: i for char(i) and binary(i); 2 * i for nchar(i).
  Comments: Fixed-length deep type columns are columns of type char(i), nchar(i), or binary(i).

Variable-length deep type columns [computed size]
  Size: SUM([computed size of variable-length deep type columns]). The computed size of each column is: i for varchar(i) and varbinary(i); 2 * i for nvarchar(i).
  Comments: This row applies only to [computed row body size]. Variable-length deep type columns are columns of type varchar(i), nvarchar(i), or varbinary(i). The computed size is determined by the max length (i) of the column.

Variable-length deep type columns [actual size]
  Size: SUM([actual size of variable-length deep type columns]). The actual size of each column is: n for varchar(i), where n is the number of characters stored in the column; 2 * n for nvarchar(i), where n is the number of characters stored in the column; n for varbinary(i), where n is the number of bytes stored in the column.
  Comments: This row applies only to [actual row body size]. The actual size is determined by the data stored in the columns in the row.
Row Structure
The rows in a memory-optimized table have the following components:
The row header contains the timestamp necessary to implement row versioning. The row header also
contains the index pointer to implement the row chaining in the hash buckets (described above).
The row body contains the actual column data, which includes some auxiliary information like the null array
for nullable columns and the offset array for variable-length data types.
The following figure illustrates the row structure for a table that has two indexes:
The begin and end timestamps indicate the period in which a particular row version is valid. Transactions
that start in this interval can see this row version. For more details see Transactions with Memory-Optimized Tables.
The index pointers point to the next row in the chain belonging to the hash bucket. The following figure
illustrates the structure of a table with two columns (name, city), and with two indexes, one on the column
name, and one on the column city.
In this figure, the names John and Jane are hashed to the first bucket. Susan is hashed to the second bucket.
The cities Beijing and Bogota are hashed to the first bucket. Paris and Prague are hashed to the second
bucket.
Thus, the chains for the hash index on name are as follows:
First bucket: (John, Beijing); (John, Paris); (Jane, Prague)
Second bucket: (Susan, Bogota)
The chains for the index on city are as follows:
First bucket: (John, Beijing), (Susan, Bogota)
Second bucket: (John, Paris), (Jane, Prague)
An end timestamp ∞ (infinity) indicates that this is the currently valid version of the row. The row has not
been updated or deleted since this row version was written.
For a time greater than 200, the table contains the following rows:
NAME    CITY
John    Beijing
Jane    Prague
However, any active transaction with begin time 100 will see the following version of the table:
NAME    CITY
John    Paris
Jane    Prague
Susan   Bogota
Example: Table and Row Size Computation
For hash indexes, the actual bucket count is rounded up to the nearest power of 2. For example, if the specified
bucket_count is 100000, the actual bucket count for the index is 131072.
Consider an Orders table with the following definition:
CREATE TABLE dbo.Orders (
OrderID int NOT NULL
PRIMARY KEY NONCLUSTERED,
CustomerID int NOT NULL
INDEX IX_CustomerID HASH WITH (BUCKET_COUNT=10000),
OrderDate datetime NOT NULL,
OrderDescription nvarchar(1000)
) WITH (MEMORY_OPTIMIZED=ON)
GO
Notice that this table has one hash index and a nonclustered index (the primary key). It also has three fixed-length
columns and one variable-length column, with one of the columns being NULLable (OrderDescription). Let’s
assume the Orders table has 8379 rows, and the average length of the values in the OrderDescription column is
78 characters.
To determine the table size, first determine the size of the indexes. The bucket_count for both indexes is specified
as 10000. This is rounded up to the nearest power of 2: 16384. Therefore, the total size of the indexes for the
Orders table is:
8 * 16384 = 131072 bytes
What remains is the table data size, which is,
[row size] * [row count] = [row size] * 8379
(The example table has 8379 rows.) Now, we have:
[row size] = [row header size] + [actual row body size]
[row header size] = 24 + 8 * [number of indices] = 24 + 8 * 1 = 32 bytes
Next, let’s calculate [actual row body size]:
Shallow type columns:
SUM([size of shallow types]) = 4 [int] + 4 [int] + 8 [datetime] = 16
Shallow column padding is 0, as the total shallow column size is even.
Offset array for deep type columns:
2 + 2 * [number of deep type columns] = 2 + 2 * 1 = 4
NULL array = 1
NULL array padding = 1, as the NULL array size is odd and there is a deep type column.
Padding
8 is the largest alignment requirement.
Size so far is 16 + 0 + 4 + 1 + 1 = 22.
Nearest multiple of 8 is 24.
Total padding is 24 – 22 = 2 bytes.
There are no fixed-length deep type columns (Fixed-length deep type columns: 0.).
The actual size of deep type column is 2 * 78 = 156. The single deep type column OrderDescription has
type nvarchar.
[actual row body size] = 24 + 156 = 180 bytes
To complete the calculation:
[row size] = 32 + 180 = 212 bytes
[table size] = 8 * 16384 + 212 * 8379 = 131072 + 1776348 = 1907420
Total table size in memory is thus approximately 2 megabytes. This does not account for potential overhead
incurred by memory allocation as well as any row versioning required for the transactions accessing this table.
The actual memory allocated for and used by this table and its indexes can be obtained through the following
query:
select * from sys.dm_db_xtp_table_memory_stats
where object_id = object_id('dbo.Orders')
See Also
Memory-Optimized Tables
A Guide to Query Processing for Memory-Optimized
Tables
3/24/2017 • 12 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
In-Memory OLTP introduces memory-optimized tables and natively compiled stored procedures in SQL Server.
This article gives an overview of query processing for both memory-optimized tables and natively compiled stored
procedures.
The document explains how queries on memory-optimized tables are compiled and executed, including:
The query processing pipeline in SQL Server for disk-based tables.
Query optimization; the role of statistics on memory-optimized tables as well as guidelines for
troubleshooting bad query plans.
The use of interpreted Transact-SQL to access memory-optimized tables.
Considerations about query optimization for memory-optimized table access.
Natively compiled stored procedure compilation and processing.
Statistics that are used for cost estimation by the optimizer.
Ways to fix bad query plans.
Example Query
The following example will be used to illustrate the query processing concepts discussed in this article.
We consider two tables, Customer and Order. The following Transact-SQL script contains the definitions for these
two tables and associated indexes, in their (traditional) disk-based form:
CREATE TABLE dbo.[Customer] (
CustomerID nchar (5) NOT NULL PRIMARY KEY,
ContactName nvarchar (30) NOT NULL
)
GO
CREATE TABLE dbo.[Order] (
OrderID int NOT NULL PRIMARY KEY,
CustomerID nchar (5) NOT NULL,
OrderDate date NOT NULL
)
GO
CREATE INDEX IX_CustomerID ON dbo.[Order](CustomerID)
GO
CREATE INDEX IX_OrderDate ON dbo.[Order](OrderDate)
GO
For constructing the query plans shown in this article, the two tables were populated with sample data from the
Northwind sample database, which you can download from Northwind and pubs Sample Databases for SQL
Server 2000.
Consider the following query, which joins the tables Customer and Order and returns the ID of the order and the
associated customer information:
SELECT o.OrderID, c.* FROM dbo.[Customer] c INNER JOIN dbo.[Order] o ON c.CustomerID = o.CustomerID
The estimated execution plan as displayed by SQL Server Management Studio is as follows
Query plan for join of disk-based tables.
About this query plan:
The rows from the Customer table are retrieved from the clustered index, which is the primary data
structure and has the full table data.
Data from the Order table is retrieved using the non-clustered index on the CustomerID column. This index
contains both the CustomerID column, which is used for the join, and the primary key column OrderID,
which is returned to the user. Returning additional columns from the Order table would require lookups in
the clustered index for the Order table.
The logical operator Inner Join is implemented by the physical operator Merge Join. The other physical
join types are Nested Loops and Hash Join. The Merge Join operator takes advantage of the fact that
both indexes are sorted on the join column CustomerID.
Consider a slight variation on this query, which returns all rows from the Order table, not only OrderID:
SELECT o.*, c.* FROM dbo.[Customer] c INNER JOIN dbo.[Order] o ON c.CustomerID = o.CustomerID
The estimated plan for this query is:
Query plan for a hash join of disk-based tables.
In this query, rows from the Order table are retrieved using the clustered index. The Hash Match physical operator
is now used for the Inner Join. The clustered index on Order is not sorted on CustomerID, and so a Merge Join
would require a sort operator, which would affect performance. Note the relative cost of the Hash Match operator
(75%) compared with the cost of the Merge Join operator in the previous example (46%). The optimizer would
have considered the Hash Match operator also in the previous example, but concluded that the Merge Join
operator gave better performance.
SQL Server Query Processing for Disk-Based Tables
The following diagram outlines the query processing flow in SQL Server for ad hoc queries:
SQL Server query processing pipeline.
In this scenario:
1. The user issues a query.
2. The parser and algebrizer construct a query tree with logical operators based on the Transact-SQL text
submitted by the user.
3. The optimizer creates an optimized query plan containing physical operators (for example, nested-loops
join). After optimization, the plan may be stored in the plan cache. This step is bypassed if the plan cache
already contains a plan for this query.
4. The query execution engine processes an interpretation of the query plan.
5. For each index seek, index scan, and table scan operator, the execution engine requests rows from the
respective index and table structures from Access Methods.
6. Access Methods retrieves the rows from the index and data pages in the buffer pool and loads pages from
disk into the buffer pool as needed.
For the first example query, the execution engine requests rows in the clustered index on Customer and the
non-clustered index on Order from Access Methods. Access Methods traverses the B-tree index structures to
retrieve the requested rows. In this case all rows are retrieved as the plan calls for full index scans.
Interpreted Transact-SQL Access to Memory-Optimized Tables
Transact-SQL ad hoc batches and stored procedures are also referred to as interpreted Transact-SQL. Interpreted
refers to the fact that the query plan is interpreted by the query execution engine for each operator in the query
plan. The execution engine reads the operator and its parameters and performs the operation.
Interpreted Transact-SQL can be used to access both memory-optimized and disk-based tables. The following
figure illustrates query processing for interpreted Transact-SQL access to memory-optimized tables:
Query processing pipeline for interpreted Transact-SQL access to memory-optimized tables.
As illustrated by the figure, the query processing pipeline remains mostly unchanged:
The parser and algebrizer construct the query tree.
The optimizer creates the execution plan.
The query execution engine interprets the execution plan.
The main difference with the traditional query processing pipeline (figure 2) is that rows for memory-optimized tables are not retrieved from the buffer pool using Access Methods. Instead, rows are retrieved
from the in-memory data structures through the In-Memory OLTP engine. Differences in data structures
cause the optimizer to pick different plans in some cases, as illustrated by the following example.
The following Transact-SQL script contains memory-optimized versions of the Order and Customer tables,
using hash indexes:
CREATE TABLE dbo.[Customer] (
CustomerID nchar (5) NOT NULL PRIMARY KEY NONCLUSTERED,
ContactName nvarchar (30) NOT NULL
) WITH (MEMORY_OPTIMIZED=ON)
GO
CREATE TABLE dbo.[Order] (
OrderID int NOT NULL PRIMARY KEY NONCLUSTERED,
CustomerID nchar (5) NOT NULL INDEX IX_CustomerID HASH(CustomerID) WITH (BUCKET_COUNT=100000),
OrderDate date NOT NULL INDEX IX_OrderDate HASH(OrderDate) WITH (BUCKET_COUNT=100000)
) WITH (MEMORY_OPTIMIZED=ON)
GO
Consider the same query executed on memory-optimized tables:
SELECT o.OrderID, c.* FROM dbo.[Customer] c INNER JOIN dbo.[Order] o ON c.CustomerID = o.CustomerID
The estimated plan is as follows:
Query plan for join of memory-optimized tables.
Observe the following differences with the plan for the same query on disk-based tables (figure 1):
This plan contains a table scan rather than a clustered index scan for the table Customer:
The definition of the table does not contain a clustered index.
Clustered indexes are not supported with memory-optimized tables. Instead, every memory-optimized table must have at least one nonclustered index, and all indexes on memory-optimized
tables can efficiently access all columns in the table without having to store them in the index or refer
to a clustered index.
This plan contains a Hash Match rather than a Merge Join. The indexes on both the Order and the
Customer table are hash indexes, and are thus not ordered. A Merge Join would require sort operators that
would decrease performance.
Natively Compiled Stored Procedures
Natively compiled stored procedures are Transact-SQL stored procedures compiled to machine code, rather than
interpreted by the query execution engine. The following script creates a natively compiled stored procedure that
runs the example query (from the Example Query section).
CREATE PROCEDURE usp_SampleJoin
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
( TRANSACTION ISOLATION LEVEL = SNAPSHOT,
LANGUAGE = 'english')
SELECT o.OrderID, c.CustomerID, c.ContactName
FROM dbo.[Order] o INNER JOIN dbo.[Customer] c
ON c.CustomerID = o.CustomerID
END
Natively compiled stored procedures are compiled at create time, whereas interpreted stored procedures are
compiled at first execution time. (A portion of the compilation, particularly parsing and algebrization, takes place at
create time. However, for interpreted stored procedures, optimization of the query plans takes place at first execution.)
The recompilation logic is similar. Natively compiled stored procedures are recompiled on first execution of the
procedure if the server is restarted. Interpreted stored procedures are recompiled if the plan is no longer in the
plan cache. The following table summarizes compilation and recompilation cases for both natively compiled and
interpreted stored procedures:
Initial compilation
  Natively compiled: At create time.
  Interpreted: At first execution.

Automatic recompilation
  Natively compiled: Upon first execution of the procedure after a database or server restart.
  Interpreted: On server restart, or on eviction from the plan cache, usually based on schema or statistics changes, or memory pressure.

Manual recompilation
  Natively compiled: Use sp_recompile.
  Interpreted: Use sp_recompile. You can manually evict the plan from the cache, for example through DBCC FREEPROCCACHE. You can also create the stored procedure WITH RECOMPILE, and the stored procedure will be recompiled at every execution.
Compilation and Query Processing
The following diagram illustrates the compilation process for natively compiled stored procedures:
Native compilation of stored procedures.
The process is described as,
1. The user issues a CREATE PROCEDURE statement to SQL Server.
2. The parser and algebrizer create the processing flow for the procedure, as well as query trees for the
Transact-SQL queries in the stored procedure.
3. The optimizer creates optimized query execution plans for all the queries in the stored procedure.
4. The In-Memory OLTP compiler takes the processing flow with the embedded optimized query plans and
generates a DLL that contains the machine code for executing the stored procedure.
5. The generated DLL is loaded into memory.
Invocation of a natively compiled stored procedure translates to calling a function in the DLL.
Execution of natively compiled stored procedures.
Invocation of a natively compiled stored procedure is described as follows:
6. The user issues an EXEC usp_myproc statement.
7. The parser extracts the name and stored procedure parameters.
If the statement was prepared, for example using sp_prep_exec, the parser does not need to extract the
procedure name and parameters at execution time.
8. The In-Memory OLTP runtime locates the DLL entry point for the stored procedure.
9. The machine code in the DLL is executed, and the results are returned to the client.
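For example, invoking the natively compiled procedure created earlier in this article is an ordinary EXEC call:
EXEC dbo.usp_SampleJoin;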
Parameter sniffing
Interpreted Transact-SQL stored procedures are compiled at first execution, in contrast to natively compiled
stored procedures, which are compiled at create time. When interpreted stored procedures are compiled at
invocation, the values of the parameters supplied for this invocation are used by the optimizer when
generating the execution plan. This use of parameters during compilation is called parameter sniffing.
Parameter sniffing is not used for compiling natively compiled stored procedures. All parameters to the
stored procedure are considered to have UNKNOWN values. Like interpreted stored procedures, natively
compiled stored procedures also support the OPTIMIZE FOR hint. For more information, see Query Hints
(Transact-SQL).
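As a sketch only (the procedure name and the literal value N'ALFKI' are hypothetical and purely illustrative), the OPTIMIZE FOR hint can be applied to a query inside a natively compiled procedure against the memory-optimized dbo.[Order] table defined above:
CREATE PROCEDURE dbo.usp_OrdersByCustomer @CustomerID nchar(5)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT o.OrderID, o.OrderDate
    FROM dbo.[Order] o
    WHERE o.CustomerID = @CustomerID
    OPTION (OPTIMIZE FOR (@CustomerID = N'ALFKI'));  -- the plan is optimized as if called with this value
END;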
Retrieving a Query Execution Plan for Natively Compiled Stored Procedures
The query execution plan for a natively compiled stored procedure can be retrieved using Estimated Execution
Plan in Management Studio, or using the SHOWPLAN_XML option in Transact-SQL. For example:
SET SHOWPLAN_XML ON
GO
EXEC dbo.usp_myproc
GO
SET SHOWPLAN_XML OFF
GO
The execution plan generated by the query optimizer consists of a tree with query operators on the nodes and
leaves of the tree. The structure of the tree determines the interaction (the flow of rows from one operator to
another) between the operators. In the graphical view of SQL Server Management Studio, the flow is from right to
left. For example, the query plan in figure 1 contains two index scan operators, which supply rows to a merge join
operator. The merge join operator supplies rows to a select operator. The select operator, finally, returns the rows
to the client.
Query Operators in Natively Compiled Stored Procedures
The following table summarizes the query operators supported inside natively compiled stored procedures:
SELECT
  Sample query: SELECT OrderID FROM dbo.[Order]

INSERT
  Sample query: INSERT dbo.Customer VALUES ('abc', 'def')

UPDATE
  Sample query: UPDATE dbo.Customer SET ContactName='ghi' WHERE CustomerID='abc'

DELETE
  Sample query: DELETE dbo.Customer WHERE CustomerID='abc'

Compute Scalar
  Sample query: SELECT OrderID+1 FROM dbo.[Order]
  Notes: This operator is used both for intrinsic functions and type conversions. Not all functions and type conversions are supported inside natively compiled stored procedures.

Nested Loops Join
  Sample query: SELECT o.OrderID, c.CustomerID FROM dbo.[Order] o INNER JOIN dbo.[Customer] c
  Notes: Nested Loops is the only join operator supported in natively compiled stored procedures. All plans that contain joins use the Nested Loops operator, even if the plan for the same query executed as interpreted Transact-SQL contains a hash or merge join.

Sort
  Sample query: SELECT ContactName FROM dbo.Customer ORDER BY ContactName

Top
  Sample query: SELECT TOP 10 ContactName FROM dbo.Customer

Top-sort
  Sample query: SELECT TOP 10 ContactName FROM dbo.Customer ORDER BY ContactName
  Notes: The TOP expression (the number of rows to be returned) cannot exceed 8,000 rows; fewer if there are also join and aggregation operators in the query. Joins and aggregation typically reduce the number of rows to be sorted, compared with the row count of the base tables.

Stream Aggregate
  Sample query: SELECT count(CustomerID) FROM dbo.Customer
  Notes: The Hash Match operator is not supported for aggregation. Therefore, all aggregation in natively compiled stored procedures uses the Stream Aggregate operator, even if the plan for the same query in interpreted Transact-SQL uses the Hash Match operator.
Column Statistics and Joins
SQL Server maintains statistics on values in index key columns to help estimate the cost of certain operations, such
as index scan and index seeks. ( SQL Server also creates statistics on non-index key columns if you explicitly create
them or if the query optimizer creates them in response to a query with a predicate.) The main metric in cost
estimation is the number of rows processed by a single operator. Note that for disk-based tables, the number of
pages accessed by a particular operator is significant in cost estimation. However, as page count is not important
for memory-optimized tables (it is always zero), this discussion focuses on row count. The estimation starts with
the index seek and scan operators in the plan, and is then extended to include the other operators, like the join
operator. The estimated number of rows to be processed by a join operator is based on the estimation for the
underlying index, seek, and scan operators. For interpreted Transact-SQL access to memory-optimized tables, you
can observe the actual execution plan to see the difference between the estimated and actual row counts for the
operators in the plan.
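One way to capture the actual plan, including estimated versus actual row counts, is to wrap the query in SET STATISTICS XML:
SET STATISTICS XML ON;
GO
SELECT o.OrderID, c.* FROM dbo.[Customer] c INNER JOIN dbo.[Order] o ON c.CustomerID = o.CustomerID;
GO
SET STATISTICS XML OFF;
GO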
For the example in figure 1,
The clustered index scan on Customer has estimated 91; actual 91.
The nonclustered index scan on CustomerID has estimated 830; actual 830.
The Merge Join operator has estimated 815; actual 830.
The estimates for the index scans are accurate. SQL Server maintains the row count for disk-based tables.
Estimates for full table and index scans are always accurate. The estimate for the join is fairly accurate, too.
If these estimates change, the cost considerations for different plan alternatives change as well. For example,
if one of the sides of the join has an estimated row count of 1 or just a few rows, using a nested loops join
is less expensive.
The following is the plan for the query:
SELECT o.OrderID, c.* FROM dbo.[Customer] c INNER JOIN dbo.[Order] o ON c.CustomerID = o.CustomerID
After deleting all rows but one in the table Customer:
Regarding this query plan:
The Hash Match has been replaced with a Nested Loops physical join operator.
The full index scan on IX_CustomerID has been replaced with an index seek. This resulted in scanning 5
rows, instead of the 830 rows required for the full index scan.
See Also
Memory-Optimized Tables
Faster temp table and table variable by using
memory optimization
3/24/2017 • 8 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
If you use temporary tables, table variables, or table-valued parameters, consider converting them to use
memory-optimized tables and table variables to improve performance. The code changes are usually minimal.
This article describes:
Scenarios which argue in favor of conversion to In-Memory.
Technical steps for implementing the conversions to In-Memory.
Prerequisites before conversion to In-Memory.
A code sample that highlights the performance benefits of memory-optimization.
A. Basics of memory-optimized table variables
A memory-optimized table variable provides great efficiency by using the same memory-optimized algorithm and
data structures that are used by memory-optimized tables. The efficiency is maximized when the table variable is
accessed from within a natively compiled module.
A memory-optimized table variable:
Is stored only in memory, and has no component on disk.
Involves no IO activity.
Involves no tempdb utilization or contention.
Can be passed into a stored proc as a table-valued parameter (TVP).
Must have at least one index, either hash or nonclustered.
For a hash index, the bucket count should ideally be 1-2 times the number of expected unique index
keys, but overestimating bucket count is usually fine (up to 10X). For details see Indexes for Memory-Optimized Tables.
Object types
In-Memory OLTP provides the following objects that can be used for memory-optimizing temp tables and table
variables:
Memory-optimized tables
Durability = SCHEMA_ONLY
Memory-optimized table variables
Must be declared in two steps (rather than inline):
CREATE TYPE my_type AS TABLE ...; , then
DECLARE @mytablevariable my_type; .
B. Scenario: Replace global tempdb ##table
Suppose you have the following global temporary table.
CREATE TABLE ##tempGlobalB
(
Column1 INT NOT NULL ,
Column2 NVARCHAR(4000)
);
Consider replacing the global temporary table with the following memory-optimized table that has DURABILITY =
SCHEMA_ONLY.
CREATE TABLE dbo.soGlobalB
(
    Column1 INT NOT NULL INDEX ix1 NONCLUSTERED,
    Column2 NVARCHAR(4000)
)
WITH
    (MEMORY_OPTIMIZED = ON,
     DURABILITY = SCHEMA_ONLY);
B.1 Steps
The conversion from global temporary to SCHEMA_ONLY is the following steps:
1. Create the dbo.soGlobalB table, one time, just as you would any traditional on-disk table.
2. From your Transact-SQL, remove the create of the ##tempGlobalB table.
3. In your T-SQL, replace all mentions of ##tempGlobalB with dbo.soGlobalB.
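After these steps, code that used to reference ##tempGlobalB simply reads and writes dbo.soGlobalB; for instance:
INSERT INTO dbo.soGlobalB (Column1, Column2) VALUES (1, N'sample row');
SELECT Column1, Column2 FROM dbo.soGlobalB;
DELETE FROM dbo.soGlobalB;  -- optional cleanup; SCHEMA_ONLY data is never persisted to disk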
C. Scenario: Replace session tempdb #table
The preparations for replacing a session temporary table involve more T-SQL than for the earlier global
temporary table scenario. Happily the extra T-SQL does not mean any more effort is needed to accomplish the
conversion.
Suppose you have the following session temporary table.
CREATE TABLE #tempSessionC
(
Column1 INT NOT NULL ,
Column2 NVARCHAR(4000)
);
First, create the following table-value function to filter on @@spid. The function will be usable by all
SCHEMA_ONLY tables that you convert from session temporary tables.
CREATE FUNCTION dbo.fn_SpidFilter(@SpidFilter smallint)
RETURNS TABLE
WITH SCHEMABINDING , NATIVE_COMPILATION
AS
RETURN
SELECT 1 AS fn_SpidFilter
WHERE @SpidFilter = @@spid;
Second, create the SCHEMA_ONLY table, plus a security policy on the table.
Note that each memory-optimized table must have at least one index.
For table dbo.soSessionC a HASH index might be better, if we calculate the appropriate BUCKET_COUNT. But
for this sample we simplify to a NONCLUSTERED index.
CREATE TABLE dbo.soSessionC
(
    Column1     INT             NOT NULL,
    Column2     NVARCHAR(4000)  NULL,
    SpidFilter  SMALLINT        NOT NULL   DEFAULT (@@spid),

    INDEX ix_SpidFiler NONCLUSTERED (SpidFilter),
    --INDEX ix_SpidFilter HASH (SpidFilter) WITH (BUCKET_COUNT = 64),

    CONSTRAINT CHK_soSessionC_SpidFilter
        CHECK ( SpidFilter = @@spid )
)
WITH
    (MEMORY_OPTIMIZED = ON,
     DURABILITY = SCHEMA_ONLY);
go
CREATE SECURITY POLICY dbo.soSessionC_SpidFilter_Policy
ADD FILTER PREDICATE dbo.fn_SpidFilter(SpidFilter)
ON dbo.soSessionC
WITH (STATE = ON);
go
Third, in your general T-SQL code:
1. Change all references to the temp table in your Transact-SQL statements to the new memory-optimized table:
Old: #tempSessionC
New: dbo.soSessionC
2. Replace the CREATE TABLE #tempSessionC statements in your code with DELETE FROM dbo.soSessionC , to ensure a
session is not exposed to table contents inserted by a previous session with the same session_id.
3. Remove the DROP TABLE #tempSessionC statements from your code – optionally you can insert a
DELETE FROM dbo.soSessionC statement, in case memory size is a potential concern
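A brief usage sketch (not part of the original sample): with the security policy in place, each session sees only the rows it inserted, because SpidFilter defaults to @@spid and the filter predicate matches on it.
INSERT INTO dbo.soSessionC (Column1, Column2) VALUES (1, N'from this session');
SELECT Column1, Column2 FROM dbo.soSessionC;  -- returns only rows inserted by the current session
DELETE FROM dbo.soSessionC;                   -- cleanup, replacing the old DROP TABLE #tempSessionC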
D. Scenario: Table variable can be MEMORY_OPTIMIZED=ON
A traditional table variable represents a table in the tempdb database. For much faster performance you can
memory-optimize your table variable.
Here is the T-SQL for a traditional table variable. Its scope ends when either the batch or the session ends.
DECLARE @tvTableD TABLE
( Column1 INT NOT NULL ,
Column2 CHAR(10) );
D.1 Convert inline to explicit
The preceding syntax is said to create the table variable inline. The inline syntax does not support memory-optimization. So let us convert the inline syntax to the explicit syntax for the TYPE.
Scope: The TYPE definition created by the first go-delimited batch persists even after the server is shut down and
restarted. But after the first go delimiter, the declared table variable @tvTableD persists only until the next go is reached
and the batch ends.
CREATE TYPE dbo.typeTableD
AS TABLE
(
Column1 INT NOT NULL ,
Column2 CHAR(10)
);
go
SET NoCount ON;
DECLARE @tvTableD dbo.typeTableD
;
INSERT INTO @tvTableD (Column1) values (1), (2)
;
SELECT * from @tvTableD;
go
D.2 Convert explicit on-disk to memory-optimized
A memory-optimized table variable does not reside in tempdb. Memory-optimization results in speed increases
that are often 10 times faster or more.
The conversion to memory-optimized is achieved in only one step. Enhance the explicit TYPE creation to be the
following, which adds:
An index. Again, each memory-optimized table must have at least one index.
MEMORY_OPTIMIZED = ON.
CREATE TYPE dbo.typeTableD
AS TABLE
(
Column1 INT NOT NULL INDEX ix1,
Column2 CHAR(10)
)
WITH
(MEMORY_OPTIMIZED = ON);
Done.
E. Prerequisite FILEGROUP for SQL Server
On Microsoft SQL Server, to use memory-optimized features, your database must have a FILEGROUP that is
declared with MEMORY_OPTIMIZED_DATA.
Azure SQL Database does not require creating this FILEGROUP.
Prerequisite: The following Transact-SQL code for a FILEGROUP is a prerequisite for the long T-SQL code samples
in later sections of this article.
1. You must use SSMS.exe or another tool that can submit T-SQL.
2. Paste the sample FILEGROUP T-SQL code into SSMS.
3. Edit the T-SQL to change its specific names and directory paths to your liking.
All directories in the FILENAME value must preexist, except the final directory must not preexist.
4. Run your edited T-SQL.
There is no need to run the FILEGROUP T-SQL more than one time, even if you repeatedly adjust and
rerun the speed comparison T-SQL in the next subsection.
ALTER DATABASE InMemTest2
ADD FILEGROUP FgMemOptim3
CONTAINS MEMORY_OPTIMIZED_DATA;
go
ALTER DATABASE InMemTest2
    ADD FILE
    (
        NAME = N'FileMemOptim3a',
        FILENAME = N'C:\DATA\FileMemOptim3a'   -- C:\DATA\ preexisted.
    )
    TO FILEGROUP FgMemOptim3;
go
The following script creates the filegroup for you and configures recommended database settings: enable-in-memory-oltp.sql
For more information about ALTER DATABASE ... ADD for FILE and FILEGROUP, see:
ALTER DATABASE File and Filegroup Options (Transact-SQL)
The Memory Optimized Filegroup
F. Quick test to prove speed improvement
This section provides Transact-SQL code that you can run to test and compare the speed gain for INSERT-DELETE
from using a memory-optimized table variable. The code is composed of two halves that are nearly the same,
except in the first half the table type is memory-optimized.
The comparison test lasts about 7 seconds. To run the sample:
1. Prerequisite: You must already have run the FILEGROUP T-SQL from the previous section.
2. Run the following T-SQL INSERT-DELETE script.
Notice the 'GO 5001' statement, which resubmits the T-SQL 5001 times. You can adjust the number and
rerun.
When running the script in an Azure SQL Database, make sure to run from a VM in the same region.
PRINT ' ';
PRINT '---- Next, memory-optimized, faster. ----';
DROP TYPE IF EXISTS dbo.typeTableC_mem;
go
CREATE TYPE dbo.typeTableC_mem -- !! Memory-optimized.
AS TABLE
(
Column1 INT NOT NULL INDEX ix1,
Column2 CHAR(10)
)
WITH
(MEMORY_OPTIMIZED = ON);
go
DECLARE @dateString_Begin nvarchar(64) =
Convert(nvarchar(64), GetUtcDate(), 121);
PRINT Concat(@dateString_Begin, ' = Begin time, _mem.');
go
SET NoCount ON;
DECLARE @tvTableC dbo.typeTableC_mem; -- !!
INSERT INTO @tvTableC (Column1) values (1), (2);
INSERT INTO @tvTableC (Column1) values (3), (4);
DELETE @tvTableC;
GO 5001
DECLARE @dateString_End nvarchar(64) =
Convert(nvarchar(64), GetUtcDate(), 121);
PRINT Concat(@dateString_End, ' = End time, _mem.');
go
DROP TYPE IF EXISTS dbo.typeTableC_mem;
go
---- End memory-optimized.
-----------------------------------------
---- Start traditional on-disk.
PRINT ' ';
PRINT '---- Next, tempdb based, slower. ----';
DROP TYPE IF EXISTS dbo.typeTableC_tempdb;
go
CREATE TYPE dbo.typeTableC_tempdb -- !! Traditional tempdb.
AS TABLE
(
Column1 INT NOT NULL ,
Column2 CHAR(10)
);
go
DECLARE @dateString_Begin nvarchar(64) =
Convert(nvarchar(64), GetUtcDate(), 121);
PRINT Concat(@dateString_Begin, ' = Begin time, _tempdb.');
go
SET NoCount ON;
DECLARE @tvTableC dbo.typeTableC_tempdb; -- !!
INSERT INTO @tvTableC (Column1) values (1), (2);
INSERT INTO @tvTableC (Column1) values (3), (4);
DELETE @tvTableC;
GO 5001
DECLARE @dateString_End nvarchar(64) =
Convert(nvarchar(64), GetUtcDate(), 121);
PRINT Concat(@dateString_End, ' = End time, _tempdb.');
go
DROP TYPE IF EXISTS dbo.typeTableC_tempdb;
go
----
PRINT '---- Tests done. ----';
go
/*** Actual output, SQL Server 2016:

---- Next, memory-optimized, faster. ----
2016-04-20 00:26:58.033 = Begin time, _mem.
Beginning execution loop
Batch execution completed 5001 times.
2016-04-20 00:26:58.733 = End time, _mem.

---- Next, tempdb based, slower. ----
2016-04-20 00:26:58.750 = Begin time, _tempdb.
Beginning execution loop
Batch execution completed 5001 times.
2016-04-20 00:27:05.440 = End time, _tempdb.

---- Tests done. ----
***/
G. Predict active memory consumption
You can learn to predict the active memory needs of your memory-optimized tables with the following resources:
Estimate Memory Requirements for Memory-Optimized Tables
Table and Row Size in Memory-Optimized Tables: Example Calculation
For larger table variables, nonclustered indexes use more memory than they do for memory-optimized tables. The
larger the row count and the index key, the more the difference increases.
If the memory-optimized table variable is accessed only with one exact key value per access, a hash index might
be a better choice than a nonclustered index. However, if you cannot estimate the appropriate BUCKET_COUNT, a
NONCLUSTERED index is a good second choice.
H. See also
Memory-Optimized Tables
Defining Durability for Memory-Optimized Objects
Scalar User-Defined Functions for In-Memory OLTP
3/24/2017 • 4 min to read
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
In SQL Server 2016, you can create and drop natively compiled, scalar user-defined functions. You can also alter
these user-defined functions. Native compilation improves performance of the evaluation of user-defined
functions in Transact-SQL.
When you alter a natively compiled, scalar user-defined function, the application remains available while the
operation is being run and the new version of the function is being compiled.
For supported T-SQL constructs, see Supported Features for Natively Compiled T-SQL Modules.
Creating, Dropping, and Altering User-Defined Functions
You use the CREATE FUNCTION to create the natively compiled, scalar user-defined function, the DROP FUNCTION
to remove the user-defined function, and the ALTER FUNCTION to change the function. BEGIN ATOMIC WITH is
required for the user-defined functions.
For information about the supported syntax and any restrictions, see the following topics.
CREATE FUNCTION (Transact-SQL)
ALTER FUNCTION (Transact-SQL)
DROP FUNCTION (Transact-SQL)
The DROP FUNCTION syntax for natively compiled, scalar user-defined functions is the same as for
interpreted user-defined functions.
EXECUTE (Transact-SQL)
The sp_recompile (Transact-SQL) stored procedure can be used with the natively compiled, scalar user-defined function. It will result in the function being recompiled using the definition that exists in metadata.
The following sample shows a scalar UDF from the AdventureWorks2016CTP3 sample database.
CREATE FUNCTION [dbo].[ufnLeadingZeros_native](@Value int)
RETURNS varchar(8)
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
DECLARE @ReturnValue varchar(8);
SET @ReturnValue = CONVERT(varchar(8), @Value);
DECLARE @i int = 0, @count int = 8 - LEN(@ReturnValue)
WHILE @i < @count
BEGIN
SET @ReturnValue = '0' + @ReturnValue;
SET @i += 1
END
RETURN (@ReturnValue);
END
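For instance, calling this function from interpreted Transact-SQL returns the zero-padded value (output shown as a comment):
SELECT dbo.ufnLeadingZeros_native(42) AS PaddedValue;  -- returns '00000042'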
Calling User-Defined Functions
Natively compiled, scalar user-defined functions can be used in expressions, in the same place as built-in scalar
functions and interpreted scalar user-defined functions. Natively compiled, scalar user-defined functions can also
be used with the EXECUTE statement, in a Transact-SQL statement and in a natively compiled stored procedure.
You can use these scalar user-defined functions in natively compiled stored procedures and natively compiled user-defined functions, and wherever built-in functions are permitted. You can also use natively compiled, scalar user-defined functions in traditional Transact-SQL modules.
You can use these scalar user-defined functions in interop mode, wherever an interpreted scalar user-defined
function can be used. This use is subject to cross-container transaction limitations, as described in Supported
Isolation Levels for Cross-Container Transactions section in Transactions with Memory-Optimized Tables. For
more information about interop mode, see Accessing Memory-Optimized Tables Using Interpreted Transact-SQL.
Natively compiled, scalar user-defined functions do require an explicit execution context. For more information, see
EXECUTE AS Clause (Transact-SQL). EXECUTE AS CALLER is not supported. For more information, see EXECUTE
(Transact-SQL).
For the supported syntax for Transact-SQL Execute statements, for natively compiled, scalar user-defined functions,
see EXECUTE (Transact-SQL). For the supported syntax for executing the user-defined functions in a natively
compiled stored procedure, see Supported Features for Natively Compiled T-SQL Modules.
Hints and Parameters
Support for table, join, and query hints inside natively compiled, scalar user-defined functions is equal to support
for these hints for natively compiled stored procedures. As with interpreted scalar user-defined functions, the
query hints included with a Transact-SQL query that reference a natively compiled, scalar user-defined function do
not impact the query plan for this user-defined function.
The parameters supported for the natively compiled, scalar user-defined functions are all the parameters
supported for natively compiled stored procedures, as long as the parameters are allowed for scalar user-defined
functions. An example of a supported parameter is the table-valued parameter.
Schema-Bound
The following apply to natively compiled, scalar user-defined functions.
Must be schema-bound, by using the WITH SCHEMABINDING argument in the CREATE FUNCTION and
ALTER FUNCTION.
Cannot be dropped or altered when referenced by a schema-bound stored procedure or user-defined
function.
SHOWPLAN_XML
Natively compiled, scalar user-defined functions support SHOWPLAN_XML. It conforms to the general
SHOWPLAN_XML schema, as with natively compiled stored procedures. The base element for the user-defined
functions is <UDF> .
STATISTICS XML is not supported for natively compiled, scalar user-defined functions. When you run a query
referencing the user-defined function, with STATISTICS XML enabled, the XML content is returned without the part
for the user-defined function.
Permissions
As with natively compiled stored procedures, the permissions for objects referenced from a natively compiled,
scalar user-defined function are checked when the function is created. The CREATE FUNCTION fails if the
impersonated user does not have the correct permissions. If permission changes result in the impersonated user
no longer having the correct permissions, subsequent executions of the user-defined function fail.
When you use a natively compiled, scalar user-defined function inside a natively compiled stored procedure, the
permissions for executing the user-defined function are checked when the outer procedure is created. If the user
impersonated by the outer procedure does not have EXEC permissions for the user-defined function, the creation
of the stored procedure fails. If permission changes result in the user no longer having the EXEC permissions, the
execution of the outer procedure fails.
See Also
Built-in Functions (Transact-SQL)
Save an Execution Plan in XML Format
Indexes for Memory-Optimized Tables
3/24/2017 • 5 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
This article describes the types of indexes that are available for a memory-optimized table. The article:
Provides short code examples to demonstrate the Transact-SQL syntax.
Describes how memory-optimized indexes differ from traditional disk-based indexes.
Explains the circumstances when each type of memory-optimized index is best.
Hash indexes are discussed in more detail in a closely related article.
Columnstore indexes are discussed in another article.
A. Syntax for memory-optimized indexes
Each CREATE TABLE statement for a memory-optimized table must include between 1 and 8 clauses to declare
indexes. The index must be one of the following:
Hash index.
Nonclustered index (meaning the default internal structure of a B-tree).
To be declared with the default DURABILITY = SCHEMA_AND_DATA, the memory-optimized table must have a
primary key. The PRIMARY KEY NONCLUSTERED clause in the following CREATE TABLE statement satisfies two
requirements:
Provides an index to meet the minimum requirement of one index in the CREATE TABLE statement.
Provides the primary key that is required for the SCHEMA_AND_DATA clause.
CREATE TABLE SupportEvent
(
SupportEventId int NOT NULL
PRIMARY KEY NONCLUSTERED,
...
)
WITH (
MEMORY_OPTIMIZED = ON,
DURABILITY = SCHEMA_AND_DATA);
A.1 Code sample for syntax
This subsection contains a Transact-SQL code block that demonstrates the syntax to create various indexes on a
memory-optimized table. The code demonstrates the following:
1. Create a memory-optimized table.
2. Use ALTER TABLE statements to add two indexes.
3. INSERT a few rows of data.
DROP TABLE IF EXISTS SupportEvent;
go
CREATE TABLE SupportEvent
(
    SupportEventId int not null identity(1,1)
        PRIMARY KEY NONCLUSTERED,

    StartDateTime        datetime2     not null,
    CustomerName         nvarchar(16)  not null,
    SupportEngineerName  nvarchar(16)      null,
    Priority             int               null,
    Description          nvarchar(64)      null
)
WITH (
MEMORY_OPTIMIZED = ON,
DURABILITY = SCHEMA_AND_DATA);
go
--------------------
ALTER TABLE SupportEvent
ADD CONSTRAINT constraintUnique_SDT_CN
UNIQUE NONCLUSTERED (StartDateTime DESC, CustomerName);
go
ALTER TABLE SupportEvent
ADD INDEX idx_hash_SupportEngineerName
HASH (SupportEngineerName) WITH (BUCKET_COUNT = 64); -- Nonunique.
go
--------------------
INSERT INTO SupportEvent
(StartDateTime, CustomerName, SupportEngineerName, Priority, Description)
VALUES
('2016-02-25 13:40:41.123', 'Abby', 'Zeke', 2, 'Display problem.'),
('2016-02-25 13:40:41.323', 'Ben' , null , 1, 'Cannot find help.'),
('2016-02-25 13:40:41.523', 'Carl', 'Liz' , 2, 'Button is gray.'),
('2016-02-25 13:40:41.723', 'Dave', 'Zeke', 2, 'Cannot unhide column.');
go
B. Nature of memory-optimized indexes
On a memory-optimized table, every index is also memory-optimized. There are several ways in which an index
on a memory-optimized table differs from a traditional index on a disk-based table.
Each memory-optimized index exists only in active memory. The index has no representation on the disk.
Memory-optimized indexes are rebuilt when the database is brought back online.
When an SQL UPDATE statement modifies data in a memory-optimized table, corresponding changes to its
indexes are not written to the log.
The entries in a memory-optimized index contain a direct memory address to the row in the table.
In contrast, entries in a traditional B-tree index on disk contain a key value that the system must first use to find
the memory address to the associated table row.
Memory-optimized indexes have no fixed pages, unlike disk-based indexes.
They do not accrue the traditional type of fragmentation within a page, so they have no fill factor.
C. Duplicate index key values
Duplicate index key values can impact the performance of operations on memory-optimized tables. Large
numbers of duplicates (e.g., 100+) make the job of maintaining an index inefficient because duplicate chains must
be traversed for most index operations. The impact can be seen in INSERT, UPDATE, and DELETE operations on
memory-optimized tables. This problem is more visible in the case of hash indices, due both to the lower cost per
operation for hash indices and the interference of large duplicate chains with the hash collision chain. To reduce
duplication in an index, use a nonclustered index and add additional columns (for example from the primary key)
to the end of the index key to reduce the number of duplicates.
Consider, as an example, a Customers table with a primary key on CustomerId and an index on column
CustomerCategoryID. There will typically be many customers in a given category, and thus many duplicate values
for a given key in the index on CustomerCategoryID. In this scenario, best practice is to use a nonclustered index
on (CustomerCategoryID, CustomerId). This index can be used for queries that use a predicate involving
CustomerCategoryID, and does not contain duplication, and thus does not cause inefficiency in index maintenance.
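A sketch of that recommendation, assuming a hypothetical memory-optimized dbo.Customers table with a CustomerId primary key:
ALTER TABLE dbo.Customers
    ADD INDEX ix_CustomerCategoryID
    NONCLUSTERED (CustomerCategoryID, CustomerId);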
The following query shows the average number of duplicate index key values for the index on
CustomerCategoryID in table Sales.Customers, in the sample database WideWorldImporters.
SELECT AVG(row_count) FROM
(SELECT COUNT(*) AS row_count
FROM Sales.Customers
GROUP BY CustomerCategoryID) a
To evaluate the average number of index key duplicates for your own table and index, replace Sales.Customers
with your table name, and replace CustomerCategoryID with the list of index key columns.
D. Comparing when to use each index type
The nature of your particular queries determines which type of index is the best choice.
When implementing memory-optimized tables in an existing application, the general recommendation is to start
with nonclustered indexes, as their capabilities more closely resemble the capabilities of traditional clustered and
nonclustered indexes on disk-based tables.
D.1 Strengths of nonclustered indexes
A nonclustered index is preferable over a hash index when:
Queries have an ORDER BY clause on the indexed column.
Queries where only the leading column(s) of a multi-column index is tested.
Queries test the indexed column by use of a WHERE clause with:
An inequality: WHERE StatusCode != 'Done'
A value range: WHERE Quantity >= 100
In all the following SELECTs, a nonclustered index is preferable over a hash index:
SELECT col2 FROM TableA
WHERE StartDate > DateAdd(day, -7, GetUtcDate());
SELECT col3 FROM TableB
WHERE ActivityCode != 5;
SELECT StartDate, LastName
FROM TableC
ORDER BY StartDate;
SELECT IndexKeyColumn2
FROM TableD
WHERE IndexKeyColumn1 = 42;
D.2 Strengths of hash indexes
A hash index is preferable over a nonclustered index when:
Queries test the indexed columns by use of a WHERE clause with an exact equality on all index key columns, as
in the following:
SELECT col9 FROM TableZ
WHERE Z_Id = 2174;
D.3 Summary table to compare index strengths
The following table lists all operations that are supported by the different index types.
| OPERATION | MEMORY-OPTIMIZED, HASH | MEMORY-OPTIMIZED, NONCLUSTERED | DISK-BASED, (NON)CLUSTERED |
|---|---|---|---|
| Index Scan, retrieve all table rows. | Yes | Yes | Yes |
| Index seek on equality predicates (=). | Yes (Full key is required.) | Yes | Yes |
| Index seek on inequality and range predicates (>, <, <=, >=, BETWEEN). | No (Results in an index scan.) | Yes | Yes |
| Retrieve rows in a sort order that matches the index definition. | No | Yes | Yes |
| Retrieve rows in a sort order that matches the reverse of the index definition. | No | No | Yes |
In the table, Yes means that the index can efficiently service the request, and No means that the index cannot
efficiently satisfy the request.
Hash Indexes for Memory-Optimized Tables
4/12/2017 • 11 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
This article describes the hash type of index that is available for a memory-optimized table. The article:
Provides short code examples to demonstrate the Transact-SQL syntax.
Describes the fundamentals of hash indexes.
Describes how to estimate an appropriate bucket count.
Describes how to design and manage your hash indexes.
Prerequisite
Important context information for understanding this article is available at:
Indexes for Memory-Optimized Tables
A. Syntax for memory-optimized indexes
A.1 Code sample for syntax
This subsection contains a Transact-SQL code block that demonstrates the available syntaxes to create a hash
index on a memory-optimized table:
The sample shows the hash index is declared inside the CREATE TABLE statement.
You can instead declare the hash index in a separate ALTER TABLE...ADD INDEX statement.
DROP TABLE IF EXISTS SupportIncidentRating_Hash;
go

CREATE TABLE SupportIncidentRating_Hash
(
    SupportIncidentRatingId int not null identity(1,1)
        PRIMARY KEY NONCLUSTERED,

    RatingLevel          int          not null,
    SupportEngineerName  nvarchar(16) not null,
    Description          nvarchar(64)     null,

    INDEX ix_hash_SupportEngineerName
        HASH (SupportEngineerName) WITH (BUCKET_COUNT = 100000)
)
    WITH (
        MEMORY_OPTIMIZED = ON,
        DURABILITY = SCHEMA_ONLY);
go
To determine the right BUCKET_COUNT for your data, see Configuring the hash index bucket count.
B. Hash indexes
B.1 Performance basics
The performance of a hash index is:
Excellent when the WHERE clause specifies an exact value for each column in the hash index key.
Poor when the WHERE clause looks for a range of values in the index key.
Poor when the WHERE clause specifies one specific value for the first column of a two column hash index key,
but does not specify a value for the second column of the key.
B.2 Declaration limitations
A hash index can exist only on a memory-optimized table. It cannot exist on a disk-based table.
A hash index can be declared as:
UNIQUE, or can default to Nonunique.
NONCLUSTERED, which is the default.
Here is an example of the syntax to create a hash index outside of the CREATE TABLE statement:
ALTER TABLE MyTable_memop
ADD INDEX ix_hash_Column2 UNIQUE
HASH (Column2) WITH (BUCKET_COUNT = 64);
B.3 Buckets and hash function
A hash index anchors its key values in what we call a bucket array:
Each bucket is 8 bytes, which are used to store the memory address of a linked list of key entries.
Each entry is a value for an index key, plus the address of its corresponding row in the underlying memory-optimized table.
Each entry points to the next entry in a linked list of entries, all chained to the current bucket.
The hashing algorithm tries to spread all the unique or distinct key values evenly among its buckets, but total
evenness is an unreached ideal. All instances of any given key value are chained to the same bucket. A bucket
can also contain all the instances of one or more different key values.
This mixture is called a hash collision. Collisions are common but are not ideal.
A realistic goal is for about 30% of the buckets to contain two different key values.
You declare how many buckets a hash index shall have.
The lower the ratio of buckets to table rows or to distinct values, the longer the average bucket linked list will be.
Short linked lists perform faster than long linked lists.
SQL Server has one hash function it uses for all hash indexes:
The hash function is deterministic: given the same input key value, it consistently outputs the same bucket slot.
With repeated calls, the outputs of the hash function tend to form a Poisson or bell curve distribution, not a flat
linear distribution.
The interplay of the hash index and the buckets is summarized in the following image.
B.4 Row versions and garbage collection
In a memory-optimized table, when a row is affected by an SQL UPDATE, the table creates an updated version of
the row. During the update transaction, other sessions might be able to read the older version of the row and
thereby avoid the performance slowdown associated with a row lock.
The hash index might also have different versions of its entries to accommodate the update.
Later, when the older versions are no longer needed, a garbage collection (GC) thread traverses the buckets and
their linked lists to clean away old entries. The GC thread performs better if the linked list chain lengths are short.
C. Configuring the hash index bucket count
The hash index bucket count is specified at index create time, and can be changed using the ALTER TABLE...ALTER
INDEX REBUILD syntax.
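For example, the bucket count of the hash index created in the earlier sample could be changed as follows (the new value is illustrative):
ALTER TABLE SupportIncidentRating_Hash
    ALTER INDEX ix_hash_SupportEngineerName
    REBUILD WITH (BUCKET_COUNT = 200000);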
In most cases the bucket count would ideally be between 1 and 2 times the number of distinct values in the index
key.
You may not always be able to predict how many values a particular index key may have or will have. Performance
is usually still good if the BUCKET_COUNT value is within 10 times of the actual number of key values, and
overestimating is generally better than underestimating.
Too few buckets has the following drawbacks:
More hash collisions of distinct key values.
Each distinct value is forced to share the same bucket with a different distinct value.
The average chain length per bucket grows.
The longer the bucket chain, the slower the speed of equality lookups in the index.
Too many buckets has the following drawbacks:
Too high a bucket count might result in more empty buckets.
Empty buckets impact the performance of full index scans. If those are performed regularly, consider
picking a bucket count close to the number of distinct index key values.
Empty buckets use memory, though each bucket uses only 8 bytes.
NOTE
Adding more buckets does nothing to reduce the chaining together of entries that share a duplicate value. The rate of value
duplication is used to decide whether a hash is the appropriate index type, not to calculate the bucket count.
C.1 Practical numbers
Even if the BUCKET_COUNT is moderately below or above the preferred range, the performance of your hash
index is likely to be tolerable or acceptable. No crisis is created.
Give your hash index a BUCKET_COUNT roughly equal to the number of rows you predict your memory-optimized table will grow to have.
Suppose your growing table has 2,000,000 rows, but you predict the quantity will grow 10 times to 20,000,000
rows. Start with a bucket count that is 10 times the number of rows in the table. This gives you room for an
increased quantity of rows.
Ideally you would increase the bucket count when the quantity of rows reaches the initial bucket count.
Even if the quantity of rows grows to 5 times larger than the bucket count, the performance is still good in
most situations.
Suppose a hash index has 10,000,000 distinct key values.
A bucket count of 2,000,000 would be about as low as you could accept. The degree of performance
degradation could be tolerable.
C.2 Too many duplicate values in the index?
If the hash indexed values have a high rate of duplicates, the hash buckets suffer longer chains.
Assume you have the same SupportEvent table from the earlier T-SQL syntax code block. The following T-SQL
code demonstrates how you can find and display the ratio of all values to unique values:
-- Calculate ratio of: Rows / Unique_Values.
DECLARE @allValues float(8) = 0.0, @uniqueVals float(8) = 0.0;
SELECT @allValues = Count(*) FROM SupportEvent;
SELECT @uniqueVals = Count(*) FROM
(SELECT DISTINCT SupportEngineerName
FROM SupportEvent) as d;
-- If (All / Unique) >= 10.0, use a nonclustered index, not a hash.
SELECT Cast((@allValues / @uniqueVals) as float) as [All_divby_Unique];
go
A ratio of 10.0 or higher means a hash would be a poor type of index. Consider using a nonclustered index
instead.
D. Troubleshooting hash index bucket count
This section discusses how to troubleshoot the bucket count for your hash index.
D.1 Monitor statistics for chains and empty buckets
You can monitor the statistical health of your hash indexes by running the following T-SQL SELECT. The SELECT
uses the data management view (DMV) named sys.dm_db_xtp_hash_index_stats.
SELECT
QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(OBJECT_NAME(h.object_id)) as [table],
i.name
as [index],
h.total_bucket_count,
h.empty_bucket_count,
FLOOR((
CAST(h.empty_bucket_count as float) /
h.total_bucket_count) * 100)
as [empty_bucket_percent],
h.avg_chain_length,
h.max_chain_length
FROM
sys.dm_db_xtp_hash_index_stats as h
JOIN sys.indexes
as i
ON h.object_id = i.object_id
AND h.index_id = i.index_id
JOIN sys.memory_optimized_tables_internal_attributes ia ON h.xtp_object_id=ia.xtp_object_id
JOIN sys.tables t on h.object_id=t.object_id
WHERE ia.type=1
ORDER BY [table], [index];
Compare the SELECT results to the following statistical guidelines:
Empty buckets:
33% is a good target value, but a larger percentage (even 90%) is usually fine.
When the bucket count equals the number of distinct key values, approximately 33% of the buckets are
empty.
A value below 10% is too low.
Chains within buckets:
An average chain length of 1 is ideal in case there are no duplicate index key values. Chain lengths up to
10 are usually acceptable.
If the average chain length is greater than 10, and the empty bucket percent is greater than 10%, the
data has so many duplicates that a hash index might not be the most appropriate type.
D.2 Demonstration of chains and empty buckets
The following T-SQL code block gives you an easy way to exercise SELECT * FROM sys.dm_db_xtp_hash_index_stats; .
The code block completes in about 1 minute. Here are the phases of the following code block:
1. Creates a memory-optimized table that has a few hash indexes.
2. Populates the table with thousands of rows.
a. A modulo operator is used to configure the rate of duplicate values in the StatusCode column.
b. The loop INSERTs 262144 rows in approximately 1 minute.
3. PRINTs a message asking you to run the earlier SELECT from sys.dm_db_xtp_hash_index_stats.
DROP TABLE IF EXISTS SalesOrder_Mem;
go
CREATE TABLE SalesOrder_Mem
(
    SalesOrderId   uniqueidentifier  NOT NULL  DEFAULT newid(),
    OrderSequence  int               NOT NULL,
    OrderDate      datetime2(3)      NOT NULL,
    StatusCode     tinyint           NOT NULL,

    PRIMARY KEY NONCLUSTERED
        HASH (SalesOrderId) WITH (BUCKET_COUNT = 262144),

    INDEX ix_OrderSequence
        HASH (OrderSequence) WITH (BUCKET_COUNT = 20000),

    INDEX ix_StatusCode
        HASH (StatusCode) WITH (BUCKET_COUNT = 8),

    INDEX ix_OrderDate
        NONCLUSTERED (OrderDate DESC)
)
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
go

--------------------
SET NoCount ON;
-- Same as PK bucket_count. 68 seconds to complete.
DECLARE @i int = 262144;
BEGIN TRANSACTION;
WHILE @i > 0
Begin
INSERT SalesOrder_Mem
(OrderSequence, OrderDate, StatusCode)
Values
(@i, GetUtcDate(), @i % 8); -- Modulo technique.
SET @i -= 1;
End
COMMIT TRANSACTION;
PRINT 'Next, you should query: sys.dm_db_xtp_hash_index_stats .';
go
The preceding INSERT loop does the following:
Inserts unique values for the primary key index, and for ix_OrderSequence.
Inserts a couple hundred thousand rows, which represent only 8 distinct values for StatusCode. Therefore
there is a high rate of value duplication in index ix_StatusCode.
For troubleshooting when the bucket count is not optimal, examine the following output of the SELECT from
sys.dm_db_xtp_hash_index_stats. For these results we added
WHERE Object_Name(h.object_id) = 'SalesOrder_Mem' to the SELECT copied from section D.1.
Our SELECT results are displayed after the code, artificially split into two narrower results tables for better display.
Here are the results for bucket count.

| INDEXNAME | TOTAL_BUCKET_COUNT | EMPTY_BUCKET_COUNT | EMPTYBUCKETPERCENT |
|---|---|---|---|
| ix_OrderSequence | 32768 | 13 | 0 |
| ix_StatusCode | 8 | 4 | 50 |
| PK_SalesOrd_B14003... | 262144 | 96525 | 36 |

Next are the results for chain length.

| INDEXNAME | AVG_CHAIN_LENGTH | MAX_CHAIN_LENGTH |
|---|---|---|
| ix_OrderSequence | 8 | 26 |
| ix_StatusCode | 65536 | 65536 |
| PK_SalesOrd_B14003... | 1 | 8 |
Let us interpret the preceding results tables for the three hash indexes:
ix_StatusCode:
50% of the buckets are empty, which is good.
However, the average chain length is very high at 65536.
This indicates a high rate of duplicate values.
Therefore, using a hash index is not appropriate in this case. A nonclustered index should be used
instead.
ix_OrderSequence:
0% of the buckets are empty, which is too low.
The average chain length is 8, even though all values in this index are unique.
Therefore the bucket count should be increased, to reduce the average chain length closer to 2 or 3.
Because the index key has 262144 unique values, the bucket count should be at least 262144.
If future growth is expected, the bucket count should be higher.
Primary key index (PK_SalesOrd_...):
36% of the buckets are empty, which is good.
The average chain length is 1, which is also good. No change is needed.
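A possible follow-up for this sample, sketched under the interpretations above: rebuild ix_OrderSequence with a bucket count that covers its 262,144 unique values (plus room for growth), and replace the heavily duplicated hash index ix_StatusCode with a nonclustered index.
ALTER TABLE SalesOrder_Mem
    ALTER INDEX ix_OrderSequence
    REBUILD WITH (BUCKET_COUNT = 524288);
go
ALTER TABLE SalesOrder_Mem
    DROP INDEX ix_StatusCode;
go
ALTER TABLE SalesOrder_Mem
    ADD INDEX ix_StatusCode
    NONCLUSTERED (StatusCode);
go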
D.3 Balancing the trade-off
OLTP workloads focus on individual rows. Full table scans are not usually in the performance critical path for OLTP
workloads. Therefore, the trade-off you must balance is between:
Quantity of memory utilization; versus
Performance of equality tests, and of insert operations.
If memory utilization is the bigger concern:
Choose a bucket count close to the number of index key records.
The bucket count should not be significantly lower than the number of index key values, as this impacts most
DML operations as well as the time it takes to recover the database after server restart.
If performance of equality tests is the bigger concern:
A higher bucket count, of two or three times the number of unique index values, is appropriate. A higher count
means:
Faster retrievals when looking for one specific value.
An increased memory utilization.
An increase in the time required for a full scan of the hash index.
E. Strengths of hash indexes
A hash index is preferable over a nonclustered index when:
Queries test the indexed column by use of a WHERE clause with an equality, as in the following:
SELECT col9 FROM TableZ
WHERE Z_Id = 2174;
E.1 Multi-column hash index keys
Your two column index could be a nonclustered index or a hash index. Suppose the index columns are col1 and
col2. Given the following SQL SELECT statement, only the nonclustered index would be useful to the query
optimizer:
SELECT col1, col3
FROM MyTable_memop
WHERE col1 = 'dn';
The hash index needs the WHERE clause to specify an equality test for each of the columns in its key; otherwise
the hash index is not useful to the query optimizer.
Neither index type is useful if the WHERE clause specifies only the second column in the index key.
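A minimal sketch of the two alternatives, assuming the hypothetical memory-optimized table MyTable_memop with columns col1 and col2 (the bucket count is illustrative):
-- Nonclustered index: usable when only the leading column col1 is tested.
ALTER TABLE MyTable_memop
    ADD INDEX ix_nc_col1_col2 NONCLUSTERED (col1, col2);
-- Hash index: useful only when the WHERE clause has an equality test on both col1 and col2.
ALTER TABLE MyTable_memop
    ADD INDEX ix_hash_col1_col2 HASH (col1, col2) WITH (BUCKET_COUNT = 131072);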
Natively Compiled Stored Procedures
3/24/2017 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Natively compiled stored procedures are Transact-SQL stored procedures compiled to native code that access
memory-optimized tables. Natively compiled stored procedures allow for efficient execution of the queries and
business logic in the stored procedure. For more details about the native compilation process, see Native
Compilation of Tables and Stored Procedures. For more information about migrating disk-based stored
procedures to natively compiled stored procedures, see Migration Issues for Natively Compiled Stored
Procedures.
NOTE
One difference between interpreted (disk-based) stored procedures and natively compiled stored procedures is that an
interpreted stored procedure is compiled at first execution, whereas a natively compiled stored procedure is compiled
when it is created. With natively compiled stored procedures, many error conditions can be detected at create time and
will cause creation of the natively compiled stored procedure to fail (such as arithmetic overflow, type conversion, and
some divide-by-zero conditions). With interpreted stored procedures, these error conditions typically do not cause a
failure when the stored procedure is created, but all executions will fail.
Topics in this section:
Creating Natively Compiled Stored Procedures
Atomic Blocks
Supported Features for Natively Compiled T-SQL Modules
Supported DDL for Natively Compiled T-SQL modules
Natively Compiled Stored Procedures and Execution Set Options
Best Practices for Calling Natively Compiled Stored Procedures
Monitoring Performance of Natively Compiled Stored Procedures
Calling Natively Compiled Stored Procedures from Data Access Applications
See Also
Memory-Optimized Tables
Creating Natively Compiled Stored Procedures
3/24/2017 • 2 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Natively compiled stored procedures do not implement the full Transact-SQL programmability and query surface
area. There are certain Transact-SQL constructs that cannot be used inside natively compiled stored procedures.
For more information, see Supported Features for Natively Compiled T-SQL Modules.
There are, however, several Transact-SQL features that are only supported for natively compiled stored
procedures:
Atomic blocks. For more information, see Atomic Blocks.
NOT NULL constraints on parameters of and variables in natively compiled stored procedures. You cannot
assign NULL values to parameters or variables declared as NOT NULL. For more information, see DECLARE
@local_variable (Transact-SQL).
CREATE PROCEDURE dbo.myproc (@myVarchar varchar(32) not null) ...
DECLARE @myVarchar varchar(32) not null = 'Hello'; -- (Must initialize to a value.)
SET @myVarchar = null; -- (Compiles, but fails during run time.)
Schema binding of natively compiled stored procedures.
Natively compiled stored procedures are created using CREATE PROCEDURE (Transact-SQL). The following
example shows a memory-optimized table and a natively compiled stored procedure used for inserting
rows into the table.
create table dbo.Ord
(OrdNo integer not null primary key nonclustered,
OrdDate datetime not null,
CustCode nvarchar(5) not null)
with (memory_optimized=on)
go
create procedure dbo.OrderInsert(@OrdNo integer, @CustCode nvarchar(5))
with native_compilation, schemabinding
as
begin atomic with
(transaction isolation level = snapshot,
language = N'English')
declare @OrdDate datetime = getdate();
insert into dbo.Ord (OrdNo, CustCode, OrdDate) values (@OrdNo, @CustCode, @OrdDate);
end
go
In the code sample, NATIVE_COMPILATION indicates that this Transact-SQL stored procedure is a natively
compiled stored procedure. The following options are required:
| OPTION | DESCRIPTION |
|---|---|
| SCHEMABINDING | A natively compiled stored procedure must be bound to the schema of the objects it references. This means that tables referenced by the procedure cannot be dropped. Tables referenced in the procedure must include their schema name, and wildcards (*) are not allowed in queries (meaning no SELECT * from... ). SCHEMABINDING is only supported for natively compiled stored procedures in this version of SQL Server. |
| BEGIN ATOMIC | The natively compiled stored procedure body must consist of exactly one atomic block. Atomic blocks guarantee atomic execution of the stored procedure. If the procedure is invoked outside the context of an active transaction, it will start a new transaction, which commits at the end of the atomic block. Atomic blocks in natively compiled stored procedures have two required options: TRANSACTION ISOLATION LEVEL (see Transaction Isolation Levels for Memory-Optimized Tables for supported isolation levels) and LANGUAGE (the language for the stored procedure must be set to one of the available languages or language aliases). |
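As a quick check (not part of the original sample), the procedure can be executed through interpreted Transact-SQL like any other stored procedure:
-- Insert one row through the natively compiled procedure, then read it back.
EXEC dbo.OrderInsert 1, N'A0001';
SELECT OrdNo, OrdDate, CustCode FROM dbo.Ord;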
See Also
Natively Compiled Stored Procedures
Altering Natively Compiled T-SQL Modules
3/24/2017 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
In SQL Server 2016 (and later) and SQL Database you can perform ALTER operations on natively compiled stored
procedures and other natively compiled T-SQL modules such as scalar UDFs and triggers using the ALTER
statement.
When executing ALTER on a natively compiled T-SQL module, the module is recompiled using a new definition.
While recompilation is in progress, the old version of the module continues to be available for execution. Once
compilation completes, module executions are drained, and the new version of the module is installed. When you
alter a natively compiled T-SQL module, you can modify the following options.
Parameters
EXECUTE AS
TRANSACTION ISOLATION LEVEL
LANGUAGE
DATEFIRST
DATEFORMAT
DELAYED_DURABILITY
NOTE
Natively compiled T-SQL modules cannot be converted to non-natively compiled modules. Non-natively compiled T-SQL
modules cannot be converted to natively compiled modules.
For more information on ALTER PROCEDURE functionality and syntax, see ALTER PROCEDURE (Transact-SQL)
You can execute sp_recompile on a natively compiled T-SQL module, which causes the module to recompile on
its next execution.
Example
The following example creates a memory-optimized table (T1), and a natively compiled stored procedure (SP1) that
selects all the T1 columns. Then, SP1 is altered to remove the EXECUTE AS clause, change the LANGUAGE, and
select only one column (C1) from T1.
CREATE TABLE [dbo].[T1]
(
[c1] [int] NOT NULL,
[c2] [float] NOT NULL,
CONSTRAINT [PK_T1] PRIMARY KEY NONCLUSTERED ([c1])
)WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_AND_DATA )
GO
CREATE PROCEDURE [dbo].[usp_1]
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
(
TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english'
)
SELECT c1, c2 from dbo.T1
END
GO
ALTER PROCEDURE [dbo].[usp_1]
WITH NATIVE_COMPILATION, SCHEMABINDING
AS BEGIN ATOMIC WITH
(
TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'Dutch'
)
SELECT c1 from dbo.T1
END
GO
Atomic Blocks in Native Procedures
3/24/2017 • 4 min to read • Edit Online
BEGIN ATOMIC is part of the ANSI SQL standard. SQL Server supports atomic blocks at the top-level of natively
compiled stored procedures, as well as for natively compiled, scalar user-defined functions. For more information
about these functions, see Scalar User-Defined Functions for In-Memory OLTP.
Every natively compiled stored procedure contains exactly one block of Transact-SQL statements. This is an
ATOMIC block.
Non-native, interpreted Transact-SQL stored procedures and ad hoc batches do not support atomic blocks.
Atomic blocks are executed (atomically) within the transaction. Either all statements in the block succeed or
the entire block will be rolled back to the savepoint that was created at the start of the block. In addition, the
session settings are fixed for the atomic block. Executing the same atomic block in sessions with different
settings will result in the same behavior, independent of the settings of the current session.
Transactions and Error Handling
If a transaction already exists on a session (because a batch executed a BEGIN TRANSACTION statement and the
transaction remains active), then starting an atomic block will create a savepoint in the transaction. If the block
exits without an exception, the savepoint that was created for the block commits, but the transaction will not
commit until the transaction at the session level commits. If the block throws an exception, the effects of the block
are rolled back, but the transaction at the session level will proceed, unless the exception is transaction-dooming.
For example, a write conflict is transaction-dooming, but a type casting error is not.
If there is no active transaction on a session, BEGIN ATOMIC will start a new transaction. If no exception is thrown
outside the scope of the block, the transaction will be committed at the end of the block. If the block throws an
exception (that is, the exception is not caught and handled within the block), the transaction will be rolled back. For
transactions that span a single atomic block (a single natively compiled stored procedure), you do not need to
write explicit BEGIN TRANSACTION and COMMIT or ROLLBACK statements.
Natively compiled stored procedures support the TRY, CATCH, and THROW constructs for error handling.
RAISERROR is not supported.
The following example illustrates the error handling behavior with atomic blocks and natively compiled stored
procedures:
-- sample table
CREATE TABLE dbo.t1 (
c1 int not null primary key nonclustered
)
WITH (MEMORY_OPTIMIZED=ON)
GO
-- sample proc that inserts 2 rows
CREATE PROCEDURE dbo.usp_t1 @v1 bigint not null, @v2 bigint not null
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english', DELAYED_DURABILITY = ON)
INSERT dbo.t1 VALUES (@v1)
INSERT dbo.t1 VALUES (@v2)
END
GO
-- insert two rows
EXEC dbo.usp_t1 1, 2
GO
-- verify we have no active transaction
SELECT @@TRANCOUNT
GO
-- verify the rows 1 and 2 were committed
SELECT c1 FROM dbo.t1
GO
-- execute proc with arithmetic overflow
EXEC dbo.usp_t1 3, 4444444444444
GO
-- expected error message:
-- Msg 8115, Level 16, State 0, Procedure usp_t1
-- Arithmetic overflow error converting bigint to data type int.
-- verify we have no active transaction
SELECT @@TRANCOUNT
GO
-- verify row 3 was not committed; usp_t1 has been rolled back
SELECT c1 FROM dbo.t1
GO
-- start a new transaction
BEGIN TRANSACTION
-- insert rows 3 and 4
EXEC dbo.usp_t1 3, 4
-- verify there is still an active transaction
SELECT @@TRANCOUNT
-- verify the rows 3 and 4 were inserted
SELECT c1 FROM dbo.t1 WITH (SNAPSHOT)
ORDER BY c1
-- catch the arithmetic overflow error
BEGIN TRY
EXEC dbo.usp_t1 5, 4444444444444
END TRY
BEGIN CATCH
PRINT N'Error occurred: ' + error_message()
END CATCH
-- verify there is still an active transaction
SELECT @@TRANCOUNT
-- verify rows 3 and 4 are still in the table, and row 5 has not been inserted
SELECT c1 FROM dbo.t1 WITH (SNAPSHOT)
ORDER BY c1
COMMIT
GO
-- verify we have no active transaction
SELECT @@TRANCOUNT
GO
-- verify rows 3 and 4 have been committed
SELECT c1 FROM dbo.t1
ORDER BY c1
GO
The following error messages specific to memory-optimized tables are transaction dooming. If they occur in the
scope of an atomic block, they will cause the transaction to abort: 10772, 41301, 41302, 41305, 41325, 41332, and
41333.
Session Settings
The session settings in atomic blocks are fixed when the stored procedure is compiled. Some settings can be
specified with BEGIN ATOMIC while other settings are always fixed to the same value.
The following options are required with BEGIN ATOMIC:
| REQUIRED SETTING | DESCRIPTION |
|---|---|
| TRANSACTION ISOLATION LEVEL | Supported values are SNAPSHOT, REPEATABLEREAD, and SERIALIZABLE. |
| LANGUAGE | Determines date and time formats and system messages. All languages and aliases in sys.syslanguages (Transact-SQL) are supported. |
The following settings are optional:
| OPTIONAL SETTING | DESCRIPTION |
|---|---|
| DATEFORMAT | All SQL Server date formats are supported. When specified, DATEFORMAT overrides the default date format associated with LANGUAGE. |
| DATEFIRST | When specified, DATEFIRST overrides the default associated with LANGUAGE. |
| DELAYED_DURABILITY | Supported values are OFF and ON. SQL Server transaction commits can be either fully durable (the default) or delayed durable. For more information, see Control Transaction Durability. |
The following SET options have the same system default value for all atomic blocks in all natively compiled stored
procedures:
| SET OPTION | SYSTEM DEFAULT FOR ATOMIC BLOCKS |
|---|---|
| ANSI_NULLS | ON |
| ANSI_PADDING | ON |
| ANSI_WARNING | ON |
| ARITHABORT | ON |
| ARITHIGNORE | OFF |
| CONCAT_NULL_YIELDS_NULL | ON |
| IDENTITY_INSERT | OFF |
| NOCOUNT | ON |
| NUMERIC_ROUNDABORT | OFF |
| QUOTED_IDENTIFIER | ON |
| ROWCOUNT | 0 |
| TEXTSIZE | 0 |
| XACT_ABORT | OFF. Uncaught exceptions cause the atomic block to roll back, but do not cause the transaction to abort unless the error is transaction-dooming. |
See Also
Natively Compiled Stored Procedures
Natively Compiled Stored Procedures and Execution
Set Options
3/24/2017 • 1 min to read • Edit Online
Session options are fixed in atomic blocks. A stored procedure's execution is not affected by a session's SET
options. However, certain SET options, such as SET NOEXEC and SET SHOWPLAN_XML, cause stored procedures
(including natively compiled stored procedures) to not execute.
When a natively compiled stored procedure is executed with any STATISTICS option turned on, statistics are
gathered for the procedure as a whole and not per statement. For more information, see SET STATISTICS IO
(Transact-SQL), SET STATISTICS PROFILE (Transact-SQL), SET STATISTICS TIME (Transact-SQL), and SET STATISTICS
XML (Transact-SQL). To obtain execution statistics on a per-statement level in natively compiled stored procedures,
use an Extended Event session on the sp_statement_completed event, which fires when each individual query in a
stored procedure's execution completes. For more information on creating Extended Event sessions, see CREATE
EVENT SESSION (Transact-SQL).
SHOWPLAN_XML is supported for natively compiled stored procedures. SHOWPLAN_ALL and
SHOWPLAN_TEXT are not supported with natively compiled stored procedures.
SET FMTONLY is not supported with natively compiled stored procedures. Use sp_describe_first_result_set
(Transact-SQL) instead.
See Also
Natively Compiled Stored Procedures
Best Practices for Calling Natively Compiled Stored
Procedures
3/27/2017 • 1 min to read • Edit Online
Natively compiled stored procedures are:
Used typically in performance-critical parts of an application.
Frequently executed.
Expected to be very fast.
The performance benefit of using a natively compiled stored procedure increases with the number of rows
and the amount of logic that is processed by the procedure. For example, a natively compiled stored
procedure will exhibit better performance if it uses one or more of the following:
Aggregation.
Nested-loops joins.
Multi-statement select, insert, update, and delete operations.
Complex expressions.
Procedural logic, such as conditional statements and loops.
If you need to process only a single row, using a natively compiled stored procedure may not provide a
performance benefit.
To avoid the server having to map parameter names and convert types:
Match the types of the parameters passed to the procedure with the types in the procedure definition.
Use ordinal (nameless) parameters when calling natively compiled stored procedures. For the most efficient
execution, do not use named parameters.
Inefficiencies in parameters with natively compiled stored procedures can be detected through the XEvent
natively_compiled_proc_slow_parameter_passing:
Mismatched types: reason=parameter_conversion
Named parameters: reason=named_parameters
DEFAULT values: reason=default
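A minimal illustration, reusing the dbo.OrderInsert procedure defined elsewhere in this documentation:
-- Preferred: ordinal parameters with types that match the procedure definition.
EXEC dbo.OrderInsert 2, N'A0002';
-- Works, but is less efficient: named parameters are reported by the XEvent with reason=named_parameters.
EXEC dbo.OrderInsert @OrdNo = 3, @CustCode = N'A0003';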
See Also
Natively Compiled Stored Procedures
Monitoring Performance of Natively Compiled Stored
Procedures
3/24/2017 • 3 min to read • Edit Online
This topic discusses how you can monitor the performance of natively compiled stored procedures.
Using Extended Events
Use the sp_statement_completed extended event to trace execution of a query. Create an extended event session
with this event, optionally with a filter on object_id for a particular natively compiled stored procedure. The
extended event is raised after the execution of each query. The CPU time and duration reported by the extended
event indicate how much CPU the query used and the execution time. A natively compiled stored procedure that
uses a lot of CPU time may have performance problems.
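The following is a minimal sketch of such a session (not from the original sample); the session name, file target, and the object_id literal are placeholders you would replace with your own values.
CREATE EVENT SESSION native_proc_stats ON SERVER
ADD EVENT sqlserver.sp_statement_completed
    (WHERE (object_id = 1234567890))  -- replace with the object_id of your natively compiled procedure
ADD TARGET package0.event_file (SET filename = N'native_proc_stats.xel');
GO
ALTER EVENT SESSION native_proc_stats ON SERVER STATE = START;
GO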
line_number, along with the object_id in the extended event can be used to investigate the query. The following
query can be used to retrieve the procedure definition. The line number can be used to identify the query within the
definition:
select [definition] from sys.sql_modules where object_id = <object_id>  -- replace <object_id> with the value reported by the extended event
For more information about the sp_statement_completed extended event, see How to retrieve the statement that
caused an event.
Using Data Management Views
SQL Server supports collecting execution statistics for natively compiled stored procedures, both on the procedure
level and the query level. Collecting execution statistics is not enabled by default due to performance impact.
You can enable and disable statistics collection on natively compiled stored procedures using
sys.sp_xtp_control_proc_exec_stats (Transact-SQL).
When statistics collection is enabled with sys.sp_xtp_control_proc_exec_stats (Transact-SQL), you can use
sys.dm_exec_procedure_stats (Transact-SQL) to monitor performance of a natively compiled stored procedure.
When statistics collection is enabled with sys.sp_xtp_control_query_exec_stats (Transact-SQL), you can use
sys.dm_exec_query_stats (Transact-SQL) to monitor performance of a natively compiled stored procedure.
At the start of collection, enable statistics collection. Then, execute the natively compiled stored procedure. At the
end of collection, disable statistics collection. Then, analyze the execution statistics returned by the DMVs.
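A sketch of that collection cycle (assuming the @new_collection_value parameter documented for these procedures):
-- Enable procedure-level statistics collection.
EXEC sys.sp_xtp_control_proc_exec_stats @new_collection_value = 1;
-- ... run the natively compiled workload ...
-- Disable collection again to avoid the ongoing performance impact.
EXEC sys.sp_xtp_control_proc_exec_stats @new_collection_value = 0;
-- Then query the collected statistics, for example:
SELECT object_name(object_id) AS procedure_name, execution_count, total_worker_time
FROM sys.dm_exec_procedure_stats
WHERE database_id = DB_ID();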
After you collect statistics, the execution statistics for natively compiled stored procedures can be queried for a
procedure with sys.dm_exec_procedure_stats (Transact-SQL), and for queries with sys.dm_exec_query_stats
(Transact-SQL).
NOTE
For natively compiled stored procedures when statistics collection is enabled, worker time is collected in milliseconds. If the
query executes in less than a millisecond, the value will be 0. For natively compiled stored procedures, total_worker_time
may not be accurate if many executions take less than 1 millisecond.
The following query returns the procedure names and execution statistics for natively compiled stored procedures
in the current database, after statistics collection:
select object_id,
object_name(object_id) as 'object name',
cached_time,
last_execution_time,
execution_count,
total_worker_time,
last_worker_time,
min_worker_time,
max_worker_time,
total_elapsed_time,
last_elapsed_time,
min_elapsed_time,
max_elapsed_time
from sys.dm_exec_procedure_stats
where database_id=db_id() and object_id in (select object_id
from sys.sql_modules where uses_native_compilation=1)
order by total_worker_time desc
The following query returns the query text as well as execution statistics for all queries in natively compiled stored
procedures in the current database for which statistics have been collected, ordered by total worker time, in
descending order:
select st.objectid,
object_name(st.objectid) as 'object name',
SUBSTRING(st.text, (qs.statement_start_offset/2) + 1, ((qs.statement_end_offset - qs.statement_start_offset)/2) + 1) as 'query text',
qs.creation_time,
qs.last_execution_time,
qs.execution_count,
qs.total_worker_time,
qs.last_worker_time,
qs.min_worker_time,
qs.max_worker_time,
qs.total_elapsed_time,
qs.last_elapsed_time,
qs.min_elapsed_time,
qs.max_elapsed_time
from sys.dm_exec_query_stats qs cross apply sys.dm_exec_sql_text(sql_handle) st
where st.dbid=db_id() and st.objectid in (select object_id
from sys.sql_modules where uses_native_compilation=1)
order by qs.total_worker_time desc
Natively compiled stored procedures support SHOWPLAN_XML (estimated execution plan). The estimated
execution plan can be used to inspect the query plan, to find any bad plan issues. Common reasons for bad plans
are:
Stats were not updated before the procedure was created.
Missing indexes
Showplan XML is obtained by executing the following Transact-SQL:
SET SHOWPLAN_XML ON
GO
EXEC my_proc
GO
SET SHOWPLAN_XML OFF
GO
Alternatively, in SQL Server Management Studio, select the procedure name and click Display Estimated
Execution Plan.
The estimated execution plan for natively compiled stored procedures shows the query operators and expressions
for the queries in the procedure. SQL Server 2014 does not support all SHOWPLAN_XML attributes for natively
compiled stored procedures. For example, attributes related to query optimizer costing are not part of the
SHOWPLAN_XML for the procedure.
See Also
Natively Compiled Stored Procedures
Calling Natively Compiled Stored Procedures from
Data Access Applications
3/24/2017 • 7 min to read • Edit Online
This topic discusses guidance on calling natively compiled stored procedures from data access applications.
Cursors cannot iterate over a natively compiled stored procedure.
Calling natively compiled stored procedures from CLR modules using the context connection is not supported.
SqlClient
For SqlClient there is no distinction between prepared and direct execution. Execute stored procedures with
SqlCommand with CommandType = CommandType.StoredProcedure.
SqlClient does not support prepared RPC procedure calls.
SqlClient does not support retrieving schema-only information (metadata discovery) about the result sets returned
by a natively compiled stored procedure (CommandType.SchemaOnly). Instead, use sp_describe_first_result_set
(Transact-SQL).
SQL Server Native Client
Versions of SQL Server Native Client prior to SQL Server 2012 do not support retrieving schema-only information
(metadata discovery) about the result sets returned by a natively compiled stored procedure. Instead, use
sp_describe_first_result_set (Transact-SQL).
The following recommendations apply to calls to natively compiled stored procedures using the ODBC driver in SQL
Server Native Client.
The most efficient way to call a stored procedure once is to issue a direct RPC call using SQLExecDirect and ODBC
CALL clauses. Do not use the Transact-SQL EXECUTE statement. If a stored procedure is called more than once,
prepared execution is more efficient.
The most efficient way to call a SQL Server stored procedure more than once is through prepared RPC procedure
calls. Prepared RPC calls are performed as follows using the ODBC driver in SQL Server Native Client:
Open a connection to the database.
Bind the parameters using SQLBindParameter.
Prepare the procedure call using SQLPrepare.
Execute the stored procedure multiple times using SQLExecute.
The following code fragment shows prepared execution of a stored procedure to add line items to an order.
SQLPrepare is called only once and SQLExecute is called multiple times, once for each procedure
execution.
// Bind parameters
// 1 - OrdNo
SQLRETURN returnCode = SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER, 10, 0,
&order.OrdNo, sizeof(SQLINTEGER), NULL);
if (returnCode != SQL_SUCCESS && returnCode != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
// 2, 3, 4 - ItemNo, ProdCode, Qty
…
// Prepare stored procedure
returnCode = SQLPrepare(hstmt, (SQLTCHAR *) _T("{call ItemInsert(?, ?, ?, ?)}"),SQL_NTS);
for (unsigned int i = 0; i < order.ItemCount; i++) {
ItemNo = order.ItemNo[i];
ProdCode = order.ProdCode[i];
Qty = order.Qty[i];
// Execute stored procedure
returnCode = SQLExecute(hstmt);
if (returnCode != SQL_SUCCESS && returnCode != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
}
Using ODBC to Execute a Natively Compiled Stored Procedure
This sample shows how to bind parameters and execute stored procedures using the SQL Server Native Client
ODBC driver. The sample compiles to a console application that inserts a single order using direct execution, and
inserts the order details using prepared execution.
To run this sample:
1. Create a sample database with a memory-optimized data filegroup. For information on how to create a
database with a memory-optimized data filegroup, see Creating a Memory-Optimized Table and a Natively
Compiled Stored Procedure.
2. Create an ODBC data source called PrepExecSample that points to the database. Use the SQL Server Native
Client driver. You could also modify the sample and use the Microsoft ODBC Driver for SQL Server.
3. Run the Transact-SQL script (below) on the sample database.
4. Compile and run the sample.
5. Verify successful execution of the program by querying the contents of the tables:
SELECT * FROM dbo.Ord
SELECT * FROM dbo.Item
The following is the Transact-SQL code listing that creates the memory-optimized database objects.
IF EXISTS (SELECT * FROM SYS.OBJECTS WHERE OBJECT_ID=OBJECT_ID('dbo.OrderInsert'))
DROP PROCEDURE dbo.OrderInsert
go
IF EXISTS (SELECT * FROM SYS.OBJECTS WHERE OBJECT_ID=OBJECT_ID('dbo.ItemInsert'))
DROP PROCEDURE dbo.ItemInsert
GO
IF EXISTS (SELECT * FROM SYS.OBJECTS WHERE OBJECT_ID=OBJECT_ID('dbo.Ord'))
DROP TABLE dbo.Ord
GO
IF EXISTS (SELECT * FROM SYS.OBJECTS WHERE OBJECT_ID=OBJECT_ID('dbo.Item'))
DROP TABLE dbo.Item
GO
CREATE TABLE dbo.Ord
(
OrdNo INTEGER NOT NULL PRIMARY KEY NONCLUSTERED,
OrdDate DATETIME NOT NULL,
CustCode VARCHAR(5) NOT NULL)
WITH (MEMORY_OPTIMIZED=ON)
GO
CREATE TABLE dbo.Item
(
OrdNo INTEGER NOT NULL,
ItemNo INTEGER NOT NULL,
ProdCode INTEGER NOT NULL,
Qty INTEGER NOT NULL,
CONSTRAINT PK_Item PRIMARY KEY NONCLUSTERED (OrdNo,ItemNo))
WITH (MEMORY_OPTIMIZED=ON)
GO
CREATE PROCEDURE dbo.OrderInsert(@OrdNo INTEGER, @CustCode VARCHAR(5))
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
( TRANSACTION ISOLATION LEVEL = SNAPSHOT,
LANGUAGE = 'english')
DECLARE @OrdDate datetime = GETDATE();
INSERT INTO dbo.Ord (OrdNo, CustCode, OrdDate) VALUES (@OrdNo, @CustCode, @OrdDate);
END
GO
CREATE PROCEDURE dbo.ItemInsert(@OrdNo INTEGER, @ItemNo INTEGER, @ProdCode INTEGER, @Qty INTEGER)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
( TRANSACTION ISOLATION LEVEL = SNAPSHOT,
LANGUAGE = N'us_english')
INSERT INTO dbo.Item (OrdNo, ItemNo, ProdCode, Qty) VALUES (@OrdNo, @ItemNo, @ProdCode, @Qty)
END
GO
The following is the C code listing.
// compile with: user32.lib odbc32.lib
#pragma once
#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers
#include <stdio.h>
#include <stdlib.h>
#include <tchar.h>
#include <windows.h>
#include "sql.h"
#include "sqlext.h"
#include "sqlncli.h"
// cardinality of order item related array variables
#define ITEM_ARRAY_SIZE 20
// struct to pass order entry data
typedef struct OrdEntry_struct {
SQLINTEGER OrdNo;
SQLTCHAR CustCode[6];
SQLUINTEGER ItemCount;
SQLINTEGER ItemNo[ITEM_ARRAY_SIZE];
SQLINTEGER ProdCode[ITEM_ARRAY_SIZE];
SQLINTEGER Qty[ITEM_ARRAY_SIZE];
} OrdEntryData;
SQLHANDLE henv, hdbc, hstmt;
void ODBCError(SQLHANDLE henv, SQLHANDLE hdbc, SQLHANDLE hstmt, SQLHANDLE hdesc, bool ShowError) {
SQLRETURN r = 0;
SQLTCHAR szSqlState[6] = {0};
SQLINTEGER fNativeError = 0;
SQLTCHAR szErrorMsg[256] = {0};
SQLSMALLINT cbErrorMsgMax = sizeof(szErrorMsg) - 1;
SQLSMALLINT cbErrorMsg = 0;
TCHAR text[1024] = {0}, title[256] = {0};
if (hdesc != NULL)
r = SQLGetDiagRec(SQL_HANDLE_DESC, hdesc, 1, szSqlState, &fNativeError, szErrorMsg, cbErrorMsgMax,
&cbErrorMsg);
else {
if (hstmt != NULL)
r = SQLGetDiagRec(SQL_HANDLE_STMT, hstmt, 1, szSqlState, &fNativeError, szErrorMsg, cbErrorMsgMax,
&cbErrorMsg);
else {
if (hdbc != NULL)
r = SQLGetDiagRec(SQL_HANDLE_DBC, hdbc, 1, szSqlState, &fNativeError, szErrorMsg, cbErrorMsgMax,
&cbErrorMsg);
else
r = SQLGetDiagRec(SQL_HANDLE_ENV, henv, 1, szSqlState, &fNativeError, szErrorMsg, cbErrorMsgMax,
&cbErrorMsg);
}
}
if (ShowError) {
_sntprintf_s(title, _countof(title), _TRUNCATE, _T("ODBC Error %i"), fNativeError);
_sntprintf_s(text, _countof(text), _TRUNCATE, _T("[%s] - %s"), szSqlState, szErrorMsg);
MessageBox(NULL, (LPCTSTR) text, (LPCTSTR) _T("ODBC Error"), MB_OK);
}
}
void connect() {
SQLRETURN r;
r = SQLAllocHandle(SQL_HANDLE_ENV, NULL, &henv);
// This is an ODBC v3 application
r = SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, (SQLPOINTER) SQL_OV_ODBC3, 0);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, NULL, NULL, NULL, true);
exit(-1);
}
r = SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc);
// Run in ANSI/implicit transaction mode
r = SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER) SQL_AUTOCOMMIT_OFF, SQL_IS_INTEGER);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, NULL, NULL, NULL, true);
exit(-1);
}
TCHAR szConnStrIn[256] = _T("DSN=PrepExecSample");
r = SQLDriverConnect(hdbc, NULL, (SQLTCHAR *) szConnStrIn, SQL_NTS, NULL, 0, NULL, SQL_DRIVER_NOPROMPT);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, NULL, NULL, true);
exit(-1);
}
}
void setup_ODBC_basics() {
SQLRETURN r;
r = SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
}
void OrdEntry(OrdEntryData& order) {
// Simple order entry
SQLRETURN r;
SQLINTEGER ItemNo, ProdCode, Qty;
// Bind parameters for the Order
// 1 - OrdNo input
r = SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER, 0, 0, &order.OrdNo,
sizeof(SQLINTEGER), NULL);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
// 2 - Custcode input
r = SQLBindParameter(hstmt, 2, SQL_PARAM_INPUT,SQL_C_TCHAR, SQL_VARCHAR, 5, 0, &order.CustCode,
sizeof(order.CustCode), NULL);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
// Insert the order
r = SQLExecDirect(hstmt, (SQLTCHAR *) _T("{call OrderInsert(?, ?)}"),SQL_NTS);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
// Flush results & reset hstmt
r = SQLMoreResults(hstmt);
if (r != SQL_NO_DATA) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
r = SQLFreeStmt(hstmt, SQL_RESET_PARAMS);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
// Bind parameters for the Items
// 1 - OrdNo
r = SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER, 10, 0, &order.OrdNo,
sizeof(SQLINTEGER), NULL);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
// 2 - ItemNo
r = SQLBindParameter(hstmt, 2, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER, 10, 0, &ItemNo,
sizeof(SQLINTEGER), NULL);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
// 3 - ProdCode
r = SQLBindParameter(hstmt, 3, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER, 10, 0, &ProdCode,
sizeof(SQLINTEGER), NULL);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
// 4 - Qty
r = SQLBindParameter(hstmt, 4, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER, 10, 0, &Qty, sizeof(SQLINTEGER),
NULL);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
// Prepare to insert items one at a time
r = SQLPrepare(hstmt, (SQLTCHAR *) _T("{call ItemInsert(?, ?, ?, ?)}"),SQL_NTS);
for (unsigned int i = 0; i < order.ItemCount; i++) {
ItemNo = order.ItemNo[i];
ProdCode = order.ProdCode[i];
Qty = order.Qty[i];
r = SQLExecute(hstmt);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
}
// Flush results & reset hstmt
r = SQLMoreResults(hstmt);
if (r != SQL_NO_DATA) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
r = SQLFreeStmt(hstmt, SQL_RESET_PARAMS);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
// Commit the transaction
r = SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_COMMIT);
if (r != SQL_SUCCESS && r != SQL_SUCCESS_WITH_INFO) {
ODBCError(henv, hdbc, hstmt, NULL, true);
exit(-1);
}
}
void testOrderEntry() {
OrdEntryData order;
order.OrdNo = 1;
_tcscpy_s((TCHAR *) order.CustCode, _countof(order.CustCode), _T("CUST1"));
order.ItemNo[0] = 1;
order.ProdCode[0] = 10;
order.Qty[0] = 1;
order.ItemNo[1] = 2;
order.ProdCode[1] = 20;
order.Qty[1] = 2;
order.ItemNo[2] = 3;
order.ProdCode[2] = 30;
order.Qty[2] = 3;
order.ItemNo[3] = 4;
order.ProdCode[3] = 40;
order.Qty[3] = 4;
order.ItemCount = 4;
OrdEntry(order);
}
int _tmain() {
connect();
setup_ODBC_basics();
testOrderEntry();
}
See Also
Natively Compiled Stored Procedures
Estimate Memory Requirements for Memory-Optimized Tables
3/24/2017 • 8 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) Azure SQL Database Azure SQL Data
Warehouse
Parallel Data Warehouse
Memory-optimized tables require that sufficient memory exist to keep all of the rows and indexes in memory.
Because memory is a finite resource, it is important that you understand and manage memory usage on your
system. The topics in this section cover common memory use and management scenarios.
Whether you are creating a new memory-optimized table or migrating an existing disk-based table to an In-Memory OLTP memory-optimized table, it is important to have a reasonable estimate of each table's memory
needs so you can provision the server with sufficient memory. This section describes how to estimate the amount
of memory that you need to hold data for a memory-optimized table.
If you are contemplating migrating from disk-based tables to memory-optimized tables, before you proceed in
this topic, see the topic Determining if a Table or Stored Procedure Should Be Ported to In-Memory OLTP for
guidance on which tables are best to migrate. All the topics under Migrating to In-Memory OLTP provide
guidance on migrating from disk-based to memory-optimized tables.
Basic Guidance for Estimating Memory Requirements
Starting with SQL Server 2016, there is no limit on the size of memory-optimized tables, though the tables do
need to fit in memory. In SQL Server 2014 the supported data size is 256GB for SCHEMA_AND_DATA tables.
The size of a memory-optimized table corresponds to the size of data plus some overhead for row headers. When
migrating a disk-based table to memory-optimized, the size of the memory-optimized table will roughly
correspond to the size of the clustered index or heap of the original disk-based table.
Indexes on memory-optimized tables tend to be smaller than nonclustered indexes on disk-based tables. The size
of nonclustered indexes is in the order of [primary key size] * [row count] . The size of hash indexes is
[bucket count] * 8 bytes .
When there is an active workload, additional memory is needed to account for row versioning and various
operations. How much memory is needed in practice depends on the workload, but to be safe the
recommendation is to start with two times the expected size of memory-optimized tables and indexes, and
observe what are the memory requirements in practice. The overhead for row versioning always depends on the
characteristics of the workload - especially long-running transactions increase the overhead. For most workloads
using larger databases (e.g., >100GB), overhead tends to be limited (25% or less).
Detailed Computation of Memory Requirements
Example memory-optimized table
Memory for the table
Memory for indexes
Memory for row versioning
Memory for table variables
Memory for growth
Example memory-optimized table
Consider the following memory-optimized table schema:
CREATE TABLE t_hk
(
col1 int NOT NULL PRIMARY KEY NONCLUSTERED,
col2 int NOT NULL INDEX t1c2_index
HASH WITH (bucket_count = 5000000),
col3 int NOT NULL INDEX t1c3_index
HASH WITH (bucket_count = 5000000),
col4 int NOT NULL INDEX t1c4_index
HASH WITH (bucket_count = 5000000),
col5 int NOT NULL INDEX t1c5_index NONCLUSTERED,
col6 char(50) NOT NULL,
col7 char(50) NOT NULL,
col8 char(30) NOT NULL,
col9 char(50) NOT NULL
) WITH (memory_optimized = on);
GO
Using this schema we will determine the minimum memory needed for this memory-optimized table.
Memory for the table
A memory-optimized table row is comprised of three parts:
Timestamps
Row header/timestamps = 24 bytes.
Index pointers
For each hash index in the table, each row has an 8-byte address pointer to the next row in the index. Since
there are 4 indexes, each row will allocate 32 bytes for index pointers (an 8 byte pointer for each index).
Data
The size of the data portion of the row is determined by summing the type size for each data column. In
our table we have five 4-byte integers, three 50-byte character columns, and one 30-byte character
column. Therefore the data portion of each row is 4 + 4 + 4 + 4 + 4 + 50 + 50 + 30 + 50 or 200 bytes.
The following is a size computation for 5,000,000 (5 million) rows in a memory-optimized table. The total
memory used by data rows is estimated as follows:
Memory for the table’s rows
From the above calculations, the size of each row in the memory-optimized table is 24 + 32 + 200, or 256 bytes.
Since we have 5 million rows, the table will consume 5,000,000 * 256 bytes, or 1,280,000,000 bytes –
approximately 1.28 GB.
Memory for indexes
Memory for each hash index
Each hash index is a hash array of 8-byte address pointers. The size of the array is best determined by the number
of unique index values for that index – e.g., the number of unique Col2 values is a good starting point for the
array size for the t1c2_index. A hash array that is too big wastes memory. A hash array that is too small slows
performance, because too many index values hash to the same bucket and produce long collision chains.
Hash indexes achieve very fast equality lookups such as:
SELECT * FROM t_hk
WHERE Col2 = 3;
Nonclustered indexes are faster for range lookups such as:
SELECT * FROM t_hk
WHERE Col2 >= 3;
If you are migrating a disk-based table you can use the following to determine the number of unique values for
the index t1c2_index.
SELECT COUNT(DISTINCT [Col2])
FROM t_hk;
If you are creating a new table, you’ll need to estimate the array size or gather data from your testing prior to
deployment.
For information on how hash indexes work in In-Memory OLTP memory-optimized tables, see Hash Indexes.
Setting the hash index array size
The hash array size is set by (bucket_count= value) where value is an integer value greater than zero. If value
is not a power of 2, the actual bucket_count is rounded up to the next closest power of 2. In our example table,
(bucket_count = 5000000), since 5,000,000 is not a power of 2, the actual bucket count rounds up to 8,388,608
(2^23). You must use this number, not 5,000,000 when calculating memory needed by the hash array.
Thus, in our example, the memory needed for each hash array is:
8,388,608 * 8 = 2^23 * 8 = 2^23 * 2^3 = 2^26 = 67,108,864 or approximately 64 MB.
Since we have three hash indexes, the memory needed for the hash indexes is 3 * 64MB = 192MB.
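Once the table is populated, you can check whether the chosen bucket counts are reasonable. The following query is a minimal sketch against sys.dm_db_xtp_hash_index_stats (joined to sys.indexes for the index name); long average chain lengths combined with many empty buckets suggest the bucket count should be revisited.
SELECT OBJECT_NAME(hs.object_id) AS table_name
     , i.name AS index_name
     , hs.total_bucket_count
     , hs.empty_bucket_count
     , hs.avg_chain_length
     , hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
    ON i.object_id = hs.object_id
   AND i.index_id = hs.index_id;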
Memory for non-clustered indexes
Non-clustered indexes are implemented as BTrees with the inner nodes containing the index value and pointers
to subsequent nodes. Leaf nodes contain the index value and a pointer to the table row in memory.
Unlike hash indexes, non-clustered indexes do not have a fixed bucket size. The index grows and shrinks
dynamically with the data.
Memory needed by non-clustered indexes can be computed as follows:
Memory allocated to non-leaf nodes
For a typical configuration, the memory allocated to non-leaf nodes is a small percentage of the overall
memory taken by the index. This is so small it can safely be ignored.
Memory for leaf nodes
The leaf nodes have one row for each unique key in the table that points to the data rows with that unique
key. If you have multiple rows with the same key (i.e., you have a non-unique non-clustered index), there is
only one row in the index leaf node that points to one of the rows with the other rows linked to each other.
Thus, the total memory required can be approximated by:
memoryForNonClusteredIndex = (pointerSize + sum(keyColumnDataTypeSizes)) *
rowsWithUniqueKeys
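As an illustration using the example table above, assume the nonclustered index t1c5_index on col5 (a 4-byte int) has close to 5 million unique key values. The approximation then gives:
memoryForNonClusteredIndex = (8 + 4) * 5,000,000 = 60,000,000 bytes, or approximately 57 MB.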
Non-clustered indexes are best when used for range lookups, as exemplified by the following query:
SELECT * FROM t_hk
WHERE Col2 > 5;
Memory for row versioning
To avoid locks, In-Memory OLTP uses optimistic concurrency when updating or deleting rows. This means that
when a row is updated, an additional version of the row is created. In addition, deletes are logical - the existing
row is marked as deleted, but not removed immediately. The system keeps old row versions (including deleted
rows) available until all transactions that could possibly use the version have finished execution.
Because there may be a number of additional rows in memory at any time waiting for the garbage collection
cycle to release their memory, you must have sufficient memory to accommodate these additional rows.
The number of additional rows can be estimated by computing the peak number of row updates and deletions
per second, then multiplying that by the number of seconds the longest transaction takes (minimum of 1).
That value is then multiplied by the row size to get the number of bytes you need for row versioning.
rowVersions = durationOfLongestTransactionInSeconds * peakNumberOfRowUpdatesOrDeletesPerSecond
Memory needs for stale rows is then estimated by multiplying the number of stale rows by the size of a memory-optimized table row (See Memory for the table above).
memoryForRowVersions = rowVersions * rowSize
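For example, assuming a peak of 10,000 row updates or deletes per second, a longest transaction of 5 seconds, and the 256-byte row size computed earlier:
rowVersions = 5 * 10,000 = 50,000
memoryForRowVersions = 50,000 * 256 bytes = 12,800,000 bytes, or approximately 12.8 MB.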
Memory for table variables
Memory used for a table variable is released only when the table variable goes out of scope. Deleted rows,
including rows deleted as part of an update, from a table variable are not subject to garbage collection. No
memory is released until the table variable exits scope.
Table variables defined in a large SQL batch, as opposed to a procedure scope, which are used in many
transactions, can consume a lot of memory. Because they are not garbage collected, deleted rows in a table
variable can consume a lot of memory and degrade performance, since read operations need to scan past the
deleted rows.
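The following sketch illustrates the scoping behavior using a hypothetical memory-optimized table type named dbo.SampleMemType (the type and index names are placeholders; a memory-optimized table type with a nonclustered index assumes SQL Server 2016 or later):
CREATE TYPE dbo.SampleMemType AS TABLE
( c1 int NOT NULL INDEX ix_c1 NONCLUSTERED )
WITH (MEMORY_OPTIMIZED = ON);
GO
DECLARE @tv dbo.SampleMemType;
INSERT INTO @tv VALUES (1), (2), (3);
DELETE FROM @tv WHERE c1 = 2; -- the deleted row is not garbage collected; its memory is held until @tv goes out of scope
GO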
Memory for growth
The above calculations estimate your memory needs for the table as it currently exists. In addition to this memory,
you need to estimate the growth of the table and provide sufficient memory to accommodate that growth. For
example, if you anticipate 10% growth then you need to multiply the results from above by 1.1 to get the total
memory needed for your table.
See Also
Migrating to In-Memory OLTP
Bind a Database with Memory-Optimized Tables to a
Resource Pool
3/24/2017 • 8 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) Azure SQL Database Azure SQL Data
Warehouse
Parallel Data Warehouse
A resource pool represents a subset of physical resources that can be governed. By default, SQL Server databases
are bound to and consume the resources of the default resource pool. To protect SQL Server from having its
resources consumed by one or more memory-optimized tables, and to prevent other memory users from
consuming memory needed by memory-optimized tables, you should create a separate resource pool to manage
memory consumption for the database with memory-optimized tables.
A database can be bound to only one resource pool. However, you can bind multiple databases to the same pool.
SQL Server allows binding a database without memory-optimized tables to a resource pool, but it has no effect.
You may want to bind a database to a named resource pool if you plan to create memory-optimized tables in it in the future.
Before you can bind a database to a resource pool both the database and the resource pool must exist. The
binding takes effect the next time the database is brought online. See Database States for more information.
For information about resource pools, see Resource Governor Resource Pool.
Steps to bind a database to a resource pool
1. Create the database and resource pool
a. Create the database
b. Determine the minimum value for MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT
c. Create a resource pool and configure memory
2. Bind the database to the pool
3. Confirm the binding
4. Make the binding effective
Other content in this topic
Change MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT on an existing pool
Percent of memory available for memory-optimized tables and indexes
Create the database and resource pool
You can create the database and resource pool in any order. What matters is that they both exist prior to binding
the database to the resource pool.
Create the database
The following Transact-SQL creates a database named IMOLTP_DB that will contain one or more memory-optimized tables. The path specified in the FILENAME clause (c:\data in this example) must exist prior to running this command.
CREATE DATABASE IMOLTP_DB
GO
ALTER DATABASE IMOLTP_DB ADD FILEGROUP IMOLTP_DB_fg CONTAINS MEMORY_OPTIMIZED_DATA
ALTER DATABASE IMOLTP_DB ADD FILE( NAME = 'IMOLTP_DB_fg' , FILENAME = 'c:\data\IMOLTP_DB_fg') TO FILEGROUP
IMOLTP_DB_fg;
GO
Determine the minimum value for MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT
Once you determine the memory needs for your memory-optimized tables, you need to determine what
percentage of available memory you need, and set the memory percentages to that value or higher.
Example:
For this example we will assume that from your calculations you determined that your memory-optimized tables
and indexes need 16 GB of memory. Assume that you have 32 GB of memory committed for your use.
At first glance it could seem that you need to set MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT to 50
(16 is 50% of 32). However, that would not give your memory-optimized tables sufficient memory. Looking at the
table below (Percent of memory available for memory-optimized tables and indexes) we see that if there is 32 GB
of committed memory, only 80% of that is available for memory-optimized tables and indexes. Therefore, we
calculate the min and max percentages based upon the available memory, not the committed memory.
memoryNeeded = 16
memoryCommitted = 32
availablePercent = 0.8
memoryAvailable = memoryCommitted * availablePercent
percentNeeded = memoryNeeded / memoryAvailable
Plugging in real numbers:
percentNeeded = 16 / (32 * 0.8) = 16 / 25.6 = 0.625
Thus you need at least 62.5% of the available memory to meet the 16 GB requirement of your memory-optimized
tables and indexes. Since the values for MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT must be integers,
we set them to at least 63%.
Create a resource pool and configure memory
When configuring memory for memory-optimized tables, the capacity planning should be done based on
MIN_MEMORY_PERCENT, not on MAX_MEMORY_PERCENT. See ALTER RESOURCE POOL (Transact-SQL) for
information on MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT. This provides more predictable memory
availability for memory-optimized tables, as MIN_MEMORY_PERCENT applies memory pressure to other resource
pools to make sure it is honored. To ensure that memory is available and help avoid out-of-memory conditions,
the values for MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT should be the same. See Percent of
memory available for memory-optimized tables and indexes below for the percent of memory available for
memory-optimized tables based on the amount of committed memory.
See Best Practices: Using In-Memory OLTP in a VM environment for more information when working in a VM
environment.
The following Transact-SQL code creates a resource pool named Pool_IMOLTP with half of the memory available
for its use. After the pool is created Resource Governor is reconfigured to include Pool_IMOLTP.
-- set MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT to the same value
CREATE RESOURCE POOL Pool_IMOLTP
WITH
( MIN_MEMORY_PERCENT = 63,
MAX_MEMORY_PERCENT = 63 );
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
Bind the database to the pool
Use the system function sp_xtp_bind_db_resource_pool to bind the database to the resource pool. The function
takes two parameters: the database name and the resource pool name.
The following Transact-SQL defines a binding of the database IMOLTP_DB to the resource pool Pool_IMOLTP. The
binding does not become effective until you bring the database online.
EXEC sp_xtp_bind_db_resource_pool 'IMOLTP_DB', 'Pool_IMOLTP'
GO
The system function sp_xtp_bind_db_resource_pool takes two string parameters: database_name and pool_name.
Confirm the binding
Confirm the binding, noting the resource pool id for IMOLTP_DB. It should not be NULL.
SELECT d.database_id, d.name, d.resource_pool_id
FROM sys.databases d
GO
Make the binding effective
You must take the database offline and bring it back online after binding it to the resource pool for the binding to take effect.
If your database was bound to a different pool earlier, this removes the allocated memory from the previous
resource pool and memory allocations for your memory-optimized table and indexes will now come from the
resource pool newly bound with the database.
USE master
GO
ALTER DATABASE IMOLTP_DB SET OFFLINE
GO
ALTER DATABASE IMOLTP_DB SET ONLINE
GO
USE IMOLTP_DB
GO
And now, the database is bound to the resource pool.
Change MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT on
an existing pool
If you add additional memory to the server or the amount of memory needed for your memory-optimized tables
changes, you may need to alter the value of MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT. The
following steps show you how to alter the value of MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT on a
resource pool. See the section below, for guidance on what values to use for MIN_MEMORY_PERCENT and
MAX_MEMORY_PERCENT. See the topic Best Practices: Using In-Memory OLTP in a VM environment for more
information.
1. Use ALTER RESOURCE POOL to change the value of both MIN_MEMORY_PERCENT and
MAX_MEMORY_PERCENT.
2. Use ALTER RESOURCE GOVERNOR RECONFIGURE to reconfigure the Resource Governor with the new values.
Sample Code
ALTER RESOURCE POOL Pool_IMOLTP
WITH
( MIN_MEMORY_PERCENT = 70,
MAX_MEMORY_PERCENT = 70 )
GO
-- reconfigure the Resource Governor
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
Percent of memory available for memory-optimized tables and indexes
If you map a database with memory-optimized tables and a SQL Server workload to the same resource pool, the
Resource Governor sets an internal threshold for In-Memory OLTP use so that the users of the pool do not have
conflicts over pool usage. Generally speaking, the threshold for In-Memory OLTP use is about 80% of the pool. The
following table shows actual thresholds for various memory sizes.
When you create a dedicated resource pool for the In-Memory OLTP database, you need to estimate how much
physical memory you need for the in-memory tables after accounting for row versions and data growth. Once you
estimate the memory needed, you create a resource pool with a percent of the commit target memory for the SQL Server
instance, as reflected by the column 'committed_target_kb' in the DMV sys.dm_os_sys_info (see sys.dm_os_sys_info).
For example, you can create a resource pool P1 with 40% of the total memory available to the instance. Out of this
40%, the In-Memory OLTP engine gets a smaller percent to store In-Memory OLTP data. This is done to make sure
In-Memory OLTP does not consume all the memory from this pool. This value of the smaller percent depends
upon the Target committed Memory. The following table describes memory available to In-Memory OLTP
database in a resource pool (named or default) before an OOM error is raised.
TARGET COMMITTED MEMORY    PERCENT AVAILABLE FOR IN-MEMORY TABLES
<= 8 GB                    70%
<= 16 GB                   75%
<= 32 GB                   80%
<= 96 GB                   85%
> 96 GB                    90%
For example, if your ‘target committed memory’ is 100 GB, and you estimate your memory-optimized tables and
indexes need 60 GB of memory, then you can create a resource pool with MAX_MEMORY_PERCENT = 67 (60GB
needed / 0.90 = 66.667GB – round up to 67GB; 67GB / 100GB installed = 67%) to ensure that your In-Memory
OLTP objects have the 60GB they need.
Once a database has been bound to a named resource pool, use the following query to see memory allocations
across different resource pools.
SELECT pool_id
, Name
, min_memory_percent
, max_memory_percent
, max_memory_kb/1024 AS max_memory_mb
, used_memory_kb/1024 AS used_memory_mb
, target_memory_kb/1024 AS target_memory_mb
FROM sys.dm_resource_governor_resource_pools
This sample output shows that the memory taken by memory-optimized objects is 1356 MB in the resource pool
PoolIMOLTP, with an upper bound of 2307 MB. This upper bound controls the total memory that can be taken by
user and system memory-optimized objects mapped to this pool.
Sample Output
This output is from the database and tables we created above.
pool_id  Name        min_memory_percent  max_memory_percent  max_memory_mb  used_memory_mb  target_memory_mb
-------  ----------  ------------------  ------------------  -------------  --------------  ----------------
1        internal    0                   100                 3845           125             3845
2        default     0                   100                 3845           32              3845
259      PoolIMOLTP  0                   100                 3845           1356            2307
For more information see sys.dm_resource_governor_resource_pools (Transact-SQL).
If you do not bind your database to a named resource pool, it is bound to the ‘default’ pool. Since the default resource
pool is used by SQL Server for most other allocations, you will not be able to monitor memory consumed by
memory-optimized tables using the DMV sys.dm_resource_governor_resource_pools accurately for the database
of interest.
See Also
sys.sp_xtp_bind_db_resource_pool (Transact-SQL)
sys.sp_xtp_unbind_db_resource_pool (Transact-SQL)
Resource Governor
Resource Governor Resource Pool
Create a Resource Pool
Change Resource Pool Settings
Delete a Resource Pool
Monitor and Troubleshoot Memory Usage
3/24/2017 • 6 min to read • Edit Online
SQL Server In-Memory OLTP consumes memory in different patterns than disk-based tables. You can monitor the
amount of memory allocated and used by memory-optimized tables and indexes in your database using the DMVs
or performance counters provided for memory and the garbage collection subsystem. This gives you visibility at
both the system and database level and lets you prevent problems due to memory exhaustion.
This topic covers monitoring your In-Memory OLTP memory usage.
Sections in this topic
Create a sample database with memory-optimized tables
Monitoring Memory Usage
Using SQL Server Management Studio
Using DMVs
Managing memory consumed by memory-optimized objects
Troubleshooting Memory Issues
Create a sample database with memory-optimized tables
You can skip this section if you already have a database with memory-optimized tables.
The following steps create a database with three memory-optimized tables that you can use in the remainder of
this topic. In the example, we mapped the database to a resource pool so that we can control how much memory
can be taken by memory-optimized tables.
1. Launch SQL Server Management Studio.
2. Click New Query.
3. Paste this code into the new query window and execute each section.
-- create a database to be used
CREATE DATABASE IMOLTP_DB
GO
ALTER DATABASE IMOLTP_DB ADD FILEGROUP IMOLTP_DB_xtp_fg CONTAINS MEMORY_OPTIMIZED_DATA
ALTER DATABASE IMOLTP_DB ADD FILE( NAME = 'IMOLTP_DB_xtp' , FILENAME = 'C:\Data\IMOLTP_DB_xtp') TO
FILEGROUP IMOLTP_DB_xtp_fg;
GO
USE IMOLTP_DB
GO
-- create the resource pool
CREATE RESOURCE POOL PoolIMOLTP WITH (MAX_MEMORY_PERCENT = 60);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
-- bind the database to a resource pool
EXEC sp_xtp_bind_db_resource_pool 'IMOLTP_DB', 'PoolIMOLTP'
-- you can query the binding using the catalog view as described here
SELECT d.database_id
, d.name
, d.resource_pool_id
FROM sys.databases d
GO
-- take database offline/online to finalize the binding to the resource pool
USE master
GO
ALTER DATABASE IMOLTP_DB SET OFFLINE
GO
ALTER DATABASE IMOLTP_DB SET ONLINE
GO
-- create some tables
USE IMOLTP_DB
GO
-- create table t1
CREATE TABLE dbo.t1 (
c1 int NOT NULL CONSTRAINT [pk_t1_c1] PRIMARY KEY NONCLUSTERED
, c2 char(40) NOT NULL
, c3 char(8000) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
-- load t1 150K rows
DECLARE @i int = 0
BEGIN TRAN
WHILE (@i <= 150000)
BEGIN
INSERT t1 VALUES (@i, 'a', replicate ('b', 8000))
SET @i += 1;
END
Commit
GO
-- Create another table, t2
CREATE TABLE dbo.t2 (
c1 int NOT NULL CONSTRAINT [pk_t2_c1] PRIMARY KEY NONCLUSTERED
, c2 char(40) NOT NULL
, c3 char(8000) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
-- Create another table, t3
CREATE TABLE dbo.t3 (
c1 int NOT NULL CONSTRAINT [pk_t3_c1] PRIMARY KEY NONCLUSTERED HASH (c1) WITH (BUCKET_COUNT =
1000000)
, c2 char(40) NOT NULL
, c3 char(8000) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
Monitoring Memory Usage
Using SQL Server Management Studio
SQL Server 2014 ships with built-in standard reports to monitor the memory consumed by in-memory tables. You
can access these reports using Object Explorer as described here. You can also use the object explorer to monitor
memory consumed by individual memory-optimized tables.
Consumption at the database level
You can monitor memory use at the database level as follows.
1. Launch SQL Server Management Studio and connect to a server.
2. In Object Explorer, right-click the database you want reports on.
3. In the context menu select, Reports -> Standard Reports -> Memory Usage By Memory Optimized
Objects
This report shows memory consumption by the database we created above.
Using DMVs
There are a number of DMVs available to monitor memory consumed by memory-optimized tables, indexes,
system objects, and by run-time structures.
Memory consumption by memory-optimized tables and indexes
You can find memory consumption for all user tables, indexes, and system objects by querying
sys.dm_db_xtp_table_memory_stats as shown here.
SELECT object_name(object_id) AS Name
, *
FROM sys.dm_db_xtp_table_memory_stats
Sample Output
Name       object_id   memory_allocated_for_table_kb  memory_used_by_table_kb  memory_allocated_for_indexes_kb  memory_used_by_indexes_kb
---------- ----------- ------------------------------ ------------------------ -------------------------------- -------------------------
t3         629577281   0                              0                        128                              0
t1         565577053   1372928                        1200008                  7872                             1942
t2         597577167   0                              0                        128                              0
NULL       -6          0                              0                        2                                2
NULL       -5          0                              0                        24                               24
NULL       -4          0                              0                        2                                2
NULL       -3          0                              0                        2                                2
NULL       -2          192                            25                       16                               16
For more information see sys.dm_db_xtp_table_memory_stats.
Memory consumption by internal system structures
Memory is also consumed by system objects, such as, transactional structures, buffers for data and delta files,
garbage collection structures, and more. You can find the memory used for these system objects by querying
sys.dm_xtp_system_memory_consumers as shown here.
SELECT memory_consumer_desc
, allocated_bytes/1024 AS allocated_bytes_kb
, used_bytes/1024 AS used_bytes_kb
, allocation_count
FROM sys.dm_xtp_system_memory_consumers
Sample Output
memory_consumer_desc  allocated_bytes_kb  used_bytes_kb  allocation_count
--------------------  ------------------  -------------  ----------------
VARHEAP               0                   0              0
VARHEAP               384                 0              0
DBG_GC_OUTSTANDING_T  64                  64             910
ACTIVE_TX_MAP_LOOKAS  0                   0              0
RECOVERY_TABLE_CACHE  0                   0              0
RECENTLY_USED_ROWS_L  192                 192            261
RANGE_CURSOR_LOOKSID  0                   0              0
HASH_CURSOR_LOOKASID  128                 128            455
SAVEPOINT_LOOKASIDE   0                   0              0
PARTIAL_INSERT_SET_L  192                 192            351
CONSTRAINT_SET_LOOKA  192                 192            646
SAVEPOINT_SET_LOOKAS  0                   0              0
WRITE_SET_LOOKASIDE   192                 192            183
SCAN_SET_LOOKASIDE    64                  64             31
READ_SET_LOOKASIDE    0                   0              0
TRANSACTION_LOOKASID  448                 448            156
PGPOOL:256K           768                 768            3
PGPOOL: 64K           0                   0              0
PGPOOL: 4K            0                   0              0
For more information see sys.dm_xtp_system_memory_consumers (Transact-SQL).
Memory consumption at run-time when accessing memory-optimized tables
You can determine the memory consumed by run-time structures, such as the procedure cache, with the following
query. All run-time structures are tagged with XTP.
SELECT memory_object_address
, pages_in_bytes
, bytes_used
, type
FROM sys.dm_os_memory_objects WHERE type LIKE '%xtp%'
Sample Output
memory_object_address  pages_in_bytes  bytes_used  type
---------------------  --------------  ----------  ----
0x00000001F1EA8040     507904          NULL        MEMOBJ_XTPDB
0x00000001F1EAA040     68337664        NULL        MEMOBJ_XTPDB
0x00000001FD67A040     16384           NULL        MEMOBJ_XTPPROCCACHE
0x00000001FD68C040     16384           NULL        MEMOBJ_XTPPROCPARTITIONEDHEAP
0x00000001FD284040     16384           NULL        MEMOBJ_XTPPROCPARTITIONEDHEAP
0x00000001FD302040     16384           NULL        MEMOBJ_XTPPROCPARTITIONEDHEAP
0x00000001FD382040     16384           NULL        MEMOBJ_XTPPROCPARTITIONEDHEAP
0x00000001FD402040     16384           NULL        MEMOBJ_XTPPROCPARTITIONEDHEAP
0x00000001FD482040     16384           NULL        MEMOBJ_XTPPROCPARTITIONEDHEAP
0x00000001FD502040     16384           NULL        MEMOBJ_XTPPROCPARTITIONEDHEAP
0x00000001FD67E040     16384           NULL        MEMOBJ_XTPPROCPARTITIONEDHEAP
0x00000001F813C040     8192            NULL        MEMOBJ_XTPBLOCKALLOC
0x00000001F813E040     16842752        NULL        MEMOBJ_XTPBLOCKALLOC
For more information see sys.dm_os_memory_objects (Transact-SQL).
Memory consumed by In-Memory OLTP engine across the instance
Memory allocated to the In-Memory OLTP engine and the memory-optimized objects is managed the same way as
any other memory consumer within a SQL Server instance. Clerks of type MEMORYCLERK_XTP account for all
the memory allocated to the In-Memory OLTP engine. Use the following query to find all the memory used by the In-Memory OLTP engine.
-- this DMV accounts for all memory used by the In-Memory OLTP engine
SELECT type
, name
, memory_node_id
, pages_kb/1024 AS pages_MB
FROM sys.dm_os_memory_clerks WHERE type LIKE '%xtp%'
The sample output shows that 18 MB of system-level memory is allocated and 1358 MB is allocated to database ID 5. Since this database is mapped to a dedicated resource pool, this memory is
accounted for in that resource pool.
Sample Output
type             name     memory_node_id  pages_MB
---------------  -------  --------------  --------
MEMORYCLERK_XTP  Default  0               18
MEMORYCLERK_XTP  DB_ID_5  0               1358
MEMORYCLERK_XTP  Default  64              0
For more information see sys.dm_os_memory_clerks (Transact-SQL).
Managing memory consumed by memory-optimized objects
You can control the total memory consumed by memory-optimized tables by binding it to a named resource pool
as described in the topic Bind a Database with Memory-Optimized Tables to a Resource Pool.
Troubleshooting Memory Issues
Troubleshooting memory issues is a three step process:
1. Identify how much memory is being consumed by the objects in your database or instance. You can use a
rich set of monitoring tools available for memory-optimized tables as described earlier, for example the
DMVs sys.dm_db_xtp_table_memory_stats and sys.dm_os_memory_clerks.
2. Determine how memory consumption is growing and how much head room you have left. By monitoring
the memory consumption periodically, you can know how the memory use is growing. For example, if you
have mapped the database to a named resource pool, you can monitor the performance counter Used
Memory (KB) to see how memory usage is growing.
3. Take action to mitigate the potential memory issues. For more information see Resolve Out Of Memory
Issues.
See Also
Bind a Database with Memory-Optimized Tables to a Resource Pool
Change MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT on an existing pool
Resolve Out Of Memory Issues
3/24/2017 • 6 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
SQL Server In-Memory OLTP uses more memory, and uses it in different ways, than traditional disk-based workloads. It is possible that the
amount of memory you installed and allocated for In-Memory OLTP becomes inadequate for your growing needs.
If so, you could run out of memory. This topic covers how to recover from an OOM situation. See Monitor and
Troubleshoot Memory Usage for guidance that can help you avoid many OOM situations.
Covered in this topic
TOPIC: Resolve database restore failures due to OOM
OVERVIEW: What to do if you get the error message, “Restore operation failed for database '<databaseName>' due to insufficient memory in the resource pool '<resourcePoolName>'.”
TOPIC: Resolve impact of low memory or OOM conditions on the workload
OVERVIEW: What to do if you find low memory issues are negatively impacting performance.
TOPIC: Resolve page allocation failures due to insufficient memory when sufficient memory is available
OVERVIEW: What to do if you get the error message, “Disallowing page allocations for database '<databaseName>' due to insufficient memory in the resource pool '<resourcePoolName>'. …” when available memory is sufficient for the operation.
Resolve database restore failures due to OOM
When you attempt to restore a database you may get the error message: “Restore operation failed for database
'<databaseName>' due to insufficient memory in the resource pool '<resourcePoolName>'.” This indicates that
the server does not have enough available memory for restoring the database.
The server you restore a database to must have enough available memory for the memory-optimized tables in the
database backup, otherwise the database will not come online.
If the server does have enough physical memory, but you are still seeing this error, it could be that other processes
are using too much memory, or a configuration issue leaves too little memory available for the restore. For
this class of issues, use the following measures to make more memory available to the restore operation:
Temporarily close running applications.
By closing one or more running applications or stopping services not needed at the moment, you make the
memory they were using available for the restore operation. You can restart them following the successful
restore.
Increase the value of MAX_MEMORY_PERCENT.
If the database is bound to a resource pool, which is best practice, the memory available to restore is
governed by MAX_MEMORY_PERCENT. If the value is too low, restore will fail. This code snippet changes
MAX_MEMORY_PERCENT for the resource pool PoolHk to 70% of installed memory.
IMPORTANT
If the server is running on a VM and is not dedicated, set the value of MIN_MEMORY_PERCENT to the same value as
MAX_MEMORY_PERCENT.
See the topic Best Practices: Using In-Memory OLTP in a VM environment for more information.
-- disable resource governor
ALTER RESOURCE GOVERNOR DISABLE
-- change the value of MAX_MEMORY_PERCENT
ALTER RESOURCE POOL PoolHk
WITH
( MAX_MEMORY_PERCENT = 70 )
GO
-- reconfigure the Resource Governor
-- RECONFIGURE enables resource governor
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
For information on maximum values for MAX_MEMORY_PERCENT see the topic section Percent of memory
available for memory-optimized tables and indexes.
Increase max server memory.
For information on configuring max server memory see the topic Optimizing Server Performance Using
Memory Configuration Options.
Resolve impact of low memory or OOM conditions on the workload
Obviously, it is best to not get into a low memory or OOM (Out of Memory) situation. Good planning and
monitoring can help avoid OOM situations. Still, the best planning does not always foresee what actually happens
and you might end up with low memory or OOM. There are two steps to recovering from OOM:
1. Open a DAC (Dedicated Administrator Connection)
2. Take corrective action
Open a DAC (Dedicated Administrator Connection)
Microsoft SQL Server provides a dedicated administrator connection (DAC). The DAC allows an administrator to
access a running instance of SQL Server Database Engine to troubleshoot problems on the server—even when the
server is unresponsive to other client connections. The DAC is available through the sqlcmd utility and SQL Server
Management Studio (SSMS).
For guidance on using sqlcmd and DAC see Using a Dedicated Administrator Connection. For guidance on using
DAC through SSMS see How to: Use the Dedicated Administrator Connection with SQL Server Management
Studio.
Take corrective action
To resolve your OOM condition you need to either free up existing memory by reducing usage, or make more
memory available to your in-memory tables.
Free up existing memory
Delete non-essential memory-optimized table rows and wait for garbage collection
You can remove non-essential rows from a memory-optimized table. The garbage collector returns the memory
used by these rows to available memory. The In-Memory OLTP engine collects garbage rows aggressively. However, a
long-running transaction can prevent garbage collection. For example, if you have a transaction that runs for 5
minutes, any row versions created due to update/delete operations while the transaction was active can’t be
garbage collected.
Move one or more rows to a disk-based table
The following TechNet articles provide guidance on moving rows from a memory-optimized table to a disk-based
table.
Application-Level Partitioning
Application Pattern for Partitioning Memory-Optimized Tables
Increase available memory
Increase the value of MAX_MEMORY_PERCENT on the resource pool
If you have not created a named resource pool for your in-memory tables you should do that and bind your In-Memory OLTP databases to it. See the topic Bind a Database with Memory-Optimized Tables to a Resource Pool for
guidance on creating and binding your In-Memory OLTP databases to a resource pool.
If your In-Memory OLTP database is bound to a resource pool you may be able to increase the percent of memory
the pool can access. See the sub-topic Change MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT on an
existing pool for guidance on changing the value of MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT for a
resource pool.
Increase the value of MAX_MEMORY_PERCENT.
This code snippet changes MAX_MEMORY_PERCENT for the resource pool PoolHk to 70% of installed memory.
IMPORTANT
If the server is running on a VM and is not dedicated, set the value of MIN_MEMORY_PERCENT and
MAX_MEMORY_PERCENT to the same value.
See the topic Best Practices: Using In-Memory OLTP in a VM environment for more information.
-- disable resource governor
ALTER RESOURCE GOVERNOR DISABLE
-- change the value of MAX_MEMORY_PERCENT
ALTER RESOURCE POOL PoolHk
WITH
( MAX_MEMORY_PERCENT = 70 )
GO
-- reconfigure the Resource Governor
-- RECONFIGURE enables resource governor
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
For information on maximum values for MAX_MEMORY_PERCENT see the topic section Percent of memory
available for memory-optimized tables and indexes.
Install additional memory
Ultimately the best solution, if possible, is to install additional physical memory. If you do this, remember that you
will probably be able to also increase the value of MAX_MEMORY_PERCENT (see the sub-topic Change
MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT on an existing pool) since SQL Server won’t likely need
more memory, allowing you to make most if not all of the newly installed memory available to the resource pool.
IMPORTANT
If the server is running on a VM and is not dedicated, set the value of MIN_MEMORY_PERCENT and
MAX_MEMORY_PERCENT to the same value.
See the topic Best Practices: Using In-Memory OLTP in a VM environment for more information.
Resolve page allocation failures due to insufficient memory when
sufficient memory is available
If you get the error message, “Disallowing page allocations for database '<databaseName>' due to insufficient
memory in the resource pool '<resourcePoolName>'. See 'http://go.microsoft.com/fwlink/?LinkId=330673' for
more information.” in the error log when the available physical memory is sufficient to allocate the page, it may be
due to a disabled Resource Governor. When the Resource Governor is disabled MEMORYBROKER_FOR_RESERVE
induces artificial memory pressure.
To resolve this you need to enable the Resource Governor.
See Enable Resource Governor for information on Limits and Restrictions as well as guidance on enabling
Resource Governor using Object Explorer, Resource Governor properties, or Transact-SQL.
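As the comments in the earlier code snippets indicate, reconfiguring the Resource Governor also enables it. A minimal Transact-SQL sketch:
-- RECONFIGURE enables the Resource Governor if it is currently disabled
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO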
See Also
Managing Memory for In-Memory OLTP
Monitor and Troubleshoot Memory Usage
Bind a Database with Memory-Optimized Tables to a Resource Pool
Best Practices: Using In-Memory OLTP in a VM environment
Restore a Database and Bind it to a Resource Pool
3/24/2017 • 1 min to read • Edit Online
Even if you have enough memory to restore a database with memory-optimized tables, you should follow
best practices and bind the database to a named resource pool. Since the database must exist before you can bind it
to the pool, restoring your database is a multi-step process. This topic walks you through that process.
Restoring a database with memory-optimized tables
The following steps fully restore the database IMOLTP_DB and bind it to the Pool_IMOLTP.
1. Restore with NORECOVERY
2. Create the resource pool
3. Bind the database and resource pool
4. Restore with RECOVERY
5. Monitor the resource pool performance
Restore with NORECOVERY
When you restore a database, NORECOVERY causes the database to be created and the disk image restored
without consuming memory.
RESTORE DATABASE IMOLTP_DB
FROM DISK = 'C:\IMOLTP_test\IMOLTP_DB.bak'
WITH NORECOVERY
Create the resource pool
The following Transact-SQL creates a resource pool named Pool_IMOLTP with 50% of memory available for its use.
After the pool is created, the Resource Governor is reconfigured to include Pool_IMOLTP.
CREATE RESOURCE POOL Pool_IMOLTP WITH (MAX_MEMORY_PERCENT = 50);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
Bind the database and resource pool
Use the system function sp_xtp_bind_db_resource_pool to bind the database to the resource pool. The function
takes two parameters: the database name followed by the resource pool name.
The following Transact-SQL defines a binding of the database IMOLTP_DB to the resource pool Pool_IMOLTP. The
binding does not become effective until you complete the next step.
EXEC sp_xtp_bind_db_resource_pool 'IMOLTP_DB', 'Pool_IMOLTP'
GO
Restore with RECOVERY
When you restore the database with recovery the database is brought online and all the data restored.
RESTORE DATABASE IMOLTP_DB
WITH RECOVERY
Monitor the resource pool performance
Once the database is bound to the named resource pool and restored with recovery, monitor the SQL Server,
Resource Pool Stats Object. For more information see SQL Server, Resource Pool Stats Object.
See Also
Bind a Database with Memory-Optimized Tables to a Resource Pool
sys.sp_xtp_bind_db_resource_pool (Transact-SQL)
SQL Server, Resource Pool Stats Object
sys.dm_resource_governor_resource_pools
In-Memory OLTP Garbage Collection
3/24/2017 • 1 min to read • Edit Online
A data row is considered stale if it was deleted by a transaction that is no longer active. A stale row is eligible for
garbage collection. The following are characteristics of garbage collection in In-Memory OLTP:
Non-blocking. Garbage collection is distributed over time with minimal impact on the workload.
Cooperative. User transactions participate in garbage collection with the main garbage-collection thread.
Efficient. User transactions delink stale rows in the access path (the index) being used. This reduces the work
required when the row is finally removed.
Responsive. Memory pressure leads to aggressive garbage collection.
Scalable. After commit, user transactions do part of the work of garbage collection. The more transaction
activity, the more the transactions delink stale rows.
Garbage collection is controlled by the main garbage collection thread. The main garbage collection thread
runs every minute, or when the number of committed transactions exceeds an internal threshold. The task of
the garbage collector is to:
Identify transactions that have deleted or updated a set of rows and have committed before the oldest active
transaction.
Identify row versions created by these old transactions.
Group old rows into one or more units of 16 rows each. This is done to distribute the work of the garbage
collector into smaller units.
Move these work units into the garbage collection queue, one for each scheduler. Refer to the garbage
collector DMVs for the details: sys.dm_xtp_gc_stats (Transact-SQL), sys.dm_db_xtp_gc_cycle_stats (Transact-SQL), and sys.dm_xtp_gc_queue_stats (Transact-SQL). A sample query against the queue DMV appears after this list.
After a user transaction commits, it identifies all queued items associated with the scheduler it ran on and
then releases the memory. If the garbage collection queue on the scheduler is empty, it searches for any
non-empty queue in the current NUMA node. If there is low transactional activity and there is memory
pressure, the main garbage-collection thread can garbage collect rows from any queue. If there is no
transactional activity after (for example) deleting a large number of rows and there is no memory pressure,
the deleted rows will not be garbage collected until the transactional activity resumes or there is memory
pressure.
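The following query is a minimal sketch against sys.dm_xtp_gc_queue_stats showing, for each scheduler queue, how many work units have been enqueued and dequeued and how deep the queue currently is (column names as documented for that DMV):
SELECT queue_id
     , total_enqueues
     , total_dequeues
     , current_queue_depth
     , maximum_queue_depth
FROM sys.dm_xtp_gc_queue_stats;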
See Also
Managing Memory for In-Memory OLTP
Creating and Managing Storage for Memory-Optimized Objects
3/24/2017 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO:
SQL Server (starting with 2016) Azure SQL Database Azure SQL Data
Warehouse
Parallel Data Warehouse
The In-Memory OLTP engine is integrated into SQL Server, which lets you have both memory-optimized tables
and (traditional) disk-based tables in the same database. However, the storage structure for memory-optimized
tables is different from disk-based tables.
Storage for disk-based tables has the following key attributes:
Mapped to a filegroup and the filegroup contains one or more files.
Each file is divided into extents of 8 pages and each page is of size 8K bytes.
An extent can be shared across multiple tables, but there is a 1-to-1 mapping between an allocated page
and the table or index. In other words, a page can't have rows from two or more tables or indexes.
The data is moved into memory (the buffer pool) as needed and the modified or newly created pages are
asynchronously written to the disk generating mostly random IO.
Storage for memory-optimized tables has the following key attributes:
All memory-optimized tables are mapped to a memory-optimized data-filegroup. This filegroup uses
syntax and semantics similar to Filestream.
There are no pages and the data is persisted as a row.
All changes to memory-optimized tables are stored by appending to active files. Both reading and writing
to files is sequential.
An update is implemented as a delete followed by an insert. The deleted rows are not immediately
removed from the storage. The deleted rows are removed by a background process, called MERGE, based
on a policy as described in Durability for Memory-Optimized Tables.
Unlike disk-based tables, storage for memory-optimized tables is not compressed. When migrating a
compressed (ROW or PAGE) disk-based table to memory-optimized table, you will need to account for the
change in size.
A memory-optimized table can be durable or can be non-durable. You only need to configure storage for
durable memory-optimized tables.
This section describes checkpoint file pairs and other aspects of how data in memory-optimized tables is
stored.
Topics in this section:
Configuring Storage for Memory-Optimized Tables
The Memory Optimized Filegroup
Durability for Memory-Optimized Tables
Checkpoint Operation for Memory-Optimized Tables
Defining Durability for Memory-Optimized Objects
Comparing Disk-Based Table Storage to Memory-Optimized Table Storage
See Also
In-Memory OLTP (In-Memory Optimization)
Configuring Storage for Memory-Optimized Tables
3/24/2017 • 2 min to read • Edit Online
You need to configure storage capacity and input/output operations per second (IOPS).
Storage Capacity
Use the information in Estimate Memory Requirements for Memory-Optimized Tables to estimate the in-memory
size of the database's durable memory-optimized tables. Because indexes are not persisted for memory-optimized
tables, do not include the size of indexes. Once you determine the size, you need to provide disk space that is four
times the size of durable, in-memory tables.
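For example, if you estimate that your durable memory-optimized tables will occupy about 60 GB in memory, plan for roughly 4 * 60 GB = 240 GB of disk space for the containers in the memory-optimized filegroup.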
Storage IOPS
In-Memory OLTP can significantly increase your workload throughput. Therefore, it is important to ensure that IO
is not a bottleneck.
When migrating disk-based tables to memory-optimized tables, make sure that the transaction log is on a
storage media that can support increased transaction log activity. For example, if your storage media
supports transaction log operations at 100 MB/sec, and memory-optimized tables result in five times
greater performance, the transaction log's storage media must be able to also support five times
performance improvement, to prevent the transaction log activity from becoming a performance bottleneck.
Memory-optimized tables are persisted in files distributed across one or more containers. Each container
should typically be mapped to its own spindle and is used both for increased storage capacity and improved
IOPS. You need to ensure that the sequential IOPS of the storage media can support a 3 times increase in
transaction log throughput.
For example, if memory-optimized tables generate 500MB/sec of activity in the transaction log, the storage
for memory-optimized tables must support 1.5GB/sec IOPS. The need to support a 3 times increase in
transaction log throughput comes from the observation that the data and delta file pairs are first written
with the initial data and then need to be read/re-written as part of a merge operation.
Another factor in estimating the IOPS for storage is the recovery time for memory-optimized tables. Data
from durable tables must be read into memory before a database is made available to applications.
Commonly, loading data into memory-optimized tables is limited by the speed of the storage. So if the total
storage for durable, memory-optimized tables is 60 GB and you want to be able to load this data in 1
minute, the storage must be able to deliver 1 GB/sec.
If you have an even number of spindles, unlike in SQL Server 2014, the checkpoint files will be distributed
uniformly across all spindles.
Encryption
In SQL Server 2016, the storage for memory-optimized tables will be encrypted as part of enabling TDE on the
database. For more information, see Transparent Data Encryption (TDE).
See Also
Creating and Managing Storage for Memory-Optimized Objects
The Memory Optimized Filegroup
3/24/2017 • 2 min to read • Edit Online
To create memory-optimized tables, you must first create a memory-optimized filegroup. The memory-optimized
filegroup holds one or more containers. Each container contains data files or delta files or both.
Even though data rows from SCHEMA_ONLY tables are not persisted and the metadata for memory-optimized
tables and natively compiled stored procedures is stored in the traditional catalogs, the In-Memory OLTP engine
still requires a memory-optimized filegroup for SCHEMA_ONLY memory-optimized tables to provide a uniform
experience for databases with memory-optimized tables.
The memory-optimized filegroup is based on filestream filegroup, with the following differences:
You can only create one memory-optimized filegroup per database. You need to explicitly mark the
filegroup as containing memory_optimized_data. You can create the filegroup when you create the
database or you can add it later:
ALTER DATABASE imoltp ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA
You need to add one or more containers to the MEMORY_OPTIMIZED_DATA filegroup. For example:
ALTER DATABASE imoltp ADD FILE (name='imoltp_mod1', filename='c:\data\imoltp_mod1') TO FILEGROUP
imoltp_mod
You do not need to enable filestream (Enable and Configure FILESTREAM) to create a memory-optimized
filegroup. The mapping to filestream is done by the In-Memory OLTP engine.
You can add new containers to a memory-optimized filegroup. You may need a new container to expand
the storage needed for durable memory-optimized tables and also to distribute IO across multiple
containers, as shown in the sketch after this list.
Data movement with a memory-optimized filegroup is optimized in an Always On Availability Group
configuration. Unlike filestream files that are sent to secondary replicas, the checkpoint files (both data and
delta) within the memory-optimized filegroup are not sent to secondary replicas. The data and delta files
are constructed using the transaction log on the secondary replica.
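As a sketch, adding a second container on another drive might look like the following (the file name and path are placeholders):
ALTER DATABASE imoltp ADD FILE (name='imoltp_mod2', filename='d:\data\imoltp_mod2') TO FILEGROUP imoltp_mod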
The memory-optimized filegroup has the following limitations:
Once you create a memory-optimized filegroup, you can only remove it by dropping the database. In a
production environment, it is unlikely that you will need to remove the memory-optimized filegroup.
You cannot drop a non-empty container or move data and delta file pairs to another container in the
memory-optimized filegroup.
You cannot specify MAXSIZE for the container.
Configuring a Memory-Optimized Filegroup
You should consider creating multiple containers in the memory-optimized filegroup and distribute them on
different drives to achieve more bandwidth to stream the data into memory.
When configuring storage, you must provide free disk space that is four times the size of durable memory-
optimized tables. You must also ensure that your IO subsystem supports the required IOPS for your workload. If
data and delta file pairs are populated at a given IOPS, you need 3 times that IOPS to account for storing and
merge operations. You can add storage capacity and IOPS by adding one or more containers to the memory-optimized filegroup.
See Also
Creating and Managing Storage for Memory-Optimized Objects
Durability for Memory-Optimized Tables
3/24/2017 • 11 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) Azure SQL Database Azure SQL Data
Warehouse
Parallel Data Warehouse
In-Memory OLTP provides full durability for memory-optimized tables. When a transaction that changed a
memory-optimized table commits, SQL Server (as it does for disk-based tables), guarantees that the changes are
permanent (will survive a database restart), provided the underlying storage is available. There are two key
components of durability: transaction logging and persisting data changes to on-disk storage.
For details on any size limitations for durable tables see Estimate Memory Requirements for Memory-Optimized
Tables.
Transaction Log
All changes made to disk-based tables or durable memory-optimized tables are captured in one or more
transaction log records. When a transaction commits, SQL Server writes the log records associated with the
transaction to disk before communicating to the application or user session that the transaction has committed.
This guarantees that changes made by the transaction are durable. The transaction log for memory-optimized
tables is fully integrated with the same log stream used by disk-based tables. This integration allows existing
transaction log backup, recovery, and restore operations to continue to work without requiring any additional steps.
However, since In-Memory OLTP can increase transaction throughput of your workload significantly, log IO may
become a performance bottleneck. To sustain this increased throughput, ensure the log IO subsystem can handle
the increased load.
Data and Delta Files
The data in memory-optimized tables is stored as free-form data rows in an in-memory heap data structure; the rows
are linked through one or more indexes in memory. There are no page structures for data rows, such as those
used for disk-based tables. For long-term persistence and to allow truncation of the transaction log, operations on
memory-optimized tables are persisted in a set of data and delta files. These files are generated based on the
transaction log, using an asynchronous background process. The data and delta files are located in one or more
containers (using the same mechanism used for FILESTREAM data). These containers are part of a memory-optimized filegroup.
Data is written to these files in a strictly sequential fashion, which minimizes disk latency for spinning media. You
can use multiple containers on different disks to distribute the I/O activity. Data and delta files in multiple
containers on different disks will increase database restore/recovery performance when data is read from the data
and delta files on disk, into memory.
User transactions do not directly access data and delta files. All data reads and writes use in-memory data
structures.
The Data File
A data file contains rows from one or more memory-optimized tables that were inserted by multiple transactions
as part of INSERT or UPDATE operations. For example, one row can be from memory-optimized table T1 and the
next row can be from memory-optimized table T2. The rows are appended to the data file in the order of
transactions in the transaction log, making data access sequential. This enables an order of magnitude better I/O
throughput compared to random I/O.
Once the data file is full, the rows inserted by new transactions are stored in another data file. Over time, the rows from durable memory-optimized tables are stored in one or more data files, with each data file containing rows from a disjoint but contiguous range of transactions. For example, a data file with a transaction commit timestamp range of (100, 200) has all the rows inserted by transactions that have a commit timestamp greater than 100
and less than or equal to 200. The commit timestamp is a monotonically increasing number assigned to a
transaction when it is ready to commit. Each transaction has a unique commit timestamp.
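The transaction range covered by each checkpoint file can be inspected with sys.dm_db_xtp_checkpoint_files. The sketch below assumes the SQL Server 2016 column names (for example, lower_bound_tsn and upper_bound_tsn); the column set differs in SQL Server 2014.

-- Data and delta files with their state and covered transaction range
SELECT
    file_type_desc,            -- DATA or DELTA
    state_desc,                -- PRECREATED, UNDER CONSTRUCTION, ACTIVE, ...
    lower_bound_tsn,
    upper_bound_tsn,
    file_size_in_bytes,
    file_size_used_in_bytes
FROM sys.dm_db_xtp_checkpoint_files
ORDER BY file_type_desc, lower_bound_tsn;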
When a row is deleted or updated, the row is not removed or changed in place in the data file; instead, the deleted rows are tracked in another type of file: the delta file. Update operations are processed as a tuple of delete and insert operations for each row. This eliminates random IO on the data file.
Size: Each data file is sized at approximately 128 MB for computers with more than 16 GB of memory, and 16 MB for computers with 16 GB of memory or less. In SQL Server 2016, SQL Server can use large checkpoint mode if it deems the storage subsystem fast enough. In large checkpoint mode, data files are sized at 1 GB, which allows for greater efficiency in the storage subsystem for high-throughput workloads.
The Delta File
Each data file is paired with a delta file that covers the same transaction range and tracks the rows from that range that were subsequently deleted. The data and delta file pair is referred to as a Checkpoint File Pair (CFP), and it is the unit of allocation and deallocation as well as the unit for merge operations. For example, a delta file corresponding to transaction range (100, 200) stores references to deleted rows that were inserted by transactions in the range (100, 200). Like data files, the delta file is accessed sequentially.
When a row is deleted, the row is not removed from the data file but a reference to the row is appended to the
delta file associated with the transaction range where this data row was inserted. Since the row to be deleted
already exists in the data file, the delta file only stores the reference information {inserting_tx_id, row_id, deleting_tx_id}, and it follows the transaction log order of the originating delete or update operations.
Size: Each delta file is sized at approximately 16 MB for computers with more than 16 GB of memory, and 1 MB for computers with 16 GB of memory or less. Starting with SQL Server 2016, SQL Server can use large checkpoint mode if it deems the storage subsystem fast enough. In large checkpoint mode, delta files are sized at 128 MB.
Populating Data and Delta Files
Data and delta files are populated by an asynchronous background process that reads the transaction log records generated by committed transactions on memory-optimized tables and appends information about the inserted and deleted rows to the appropriate data and delta files. Unlike disk-based tables, where data and index pages are flushed with random I/O when a checkpoint is done, the persistence of memory-optimized tables is a continuous background operation. Multiple delta files are accessed because a transaction can delete or update any row that was inserted by any previous transaction. Deletion information is always appended at the end of the delta file. For example, a transaction with a commit timestamp of 600 inserts one new row and deletes rows inserted by transactions with commit timestamps of 150, 250, and 450, as shown in the picture below. All four file I/O operations (three for the deleted rows and one for the newly inserted row) are append-only operations to the corresponding delta and data files.
Accessing Data and Delta Files
Data and delta file pairs are accessed when the following occurs.
Offline checkpoint worker(s)
These threads append inserted and deleted memory-optimized data rows to the corresponding data and delta file pairs. In SQL Server 2014 there is one offline checkpoint worker; starting with SQL Server 2016 there are multiple checkpoint workers.
Merge operation
The operation merges one or more data and delta file pairs and creates a new data and delta file pair.
During crash recovery
When SQL Server is restarted or the database is brought back online, the memory-optimized data is populated using the data and delta file pairs. The delta file acts as a filter for the deleted rows when reading the rows from the corresponding data file. Because each data and delta file pair is independent, these files are loaded in parallel to reduce the time taken to populate data into memory. Once the data has been loaded into memory, the In-Memory OLTP engine applies the active transaction log records not yet covered by the checkpoint files, so that the memory-optimized data is complete.
During restore operation
The In-Memory OLTP checkpoint files are created from the database backup, and then one or more transaction log
backups are applied. As with crash recovery, the In-Memory OLTP engine loads data into memory in parallel, to
minimize the impact on recovery time.
Merging Data and Delta Files
The data for memory optimized tables is stored in one or more data and delta file pairs (also called a checkpoint
file pair, or CFP). Data files store inserted rows and delta files reference deleted rows. During the execution of an
OLTP workload, as the DML operations update, insert, and delete rows, new CFPs are created to persist the new
rows, and the reference to the deleted rows is appended to delta files.
Over time, with DML operations, the number of data and delta files grows, causing increased disk space usage and increased recovery time.
To help prevent these inefficiencies, the older closed data and delta files are merged, based on a merge policy
described below, so the storage array is compacted to represent the same set of data, with a reduced number of
files.
The merge operation takes as input one or more adjacent closed checkpoint file pairs (CFPs), called the merge source, selected based on an internally defined merge policy, and produces one resultant CFP, called the merge target. The entries in each delta file of the source CFPs are used to filter rows from the
corresponding data file to remove the data rows that are not needed. The remaining rows in the source CFPs are
consolidated into one target CFP. After the merge is complete, the resultant merge-target CFP replaces the source
CFPs (merge sources). The merge-source CFPs go through a transition phase before they are removed from
storage.
In the example below, the memory-optimized table file group has four data and delta file pairs at timestamp 500
containing data from previous transactions. For example, the rows in the first data file correspond to transactions
with timestamp greater than 100 and less than or equal to 200; alternatively represented as (100, 200]. The second
and third data files are shown to be less than 50 percent full after accounting for the rows marked as deleted. The
merge operation combines these two CFPs and creates a new CFP containing transactions with timestamp greater
than 200 and less than or equal to 400, which is the combined range of these two CFPs. Another CFP with range (500, 600] and the non-empty delta file for transaction range (200, 400] show that the merge operation can be done concurrently with transactional activity, including deleting more rows from the source CFPs.
A background thread evaluates all closed CFPs using a merge policy and then initiates one or more merge
requests for the qualifying CFPs. These merge requests are processed by the offline checkpoint thread. The
evaluation of merge policy is done periodically and also when a checkpoint is closed.
SQL Server Merge Policy
SQL Server implements the following merge policy:
A merge is scheduled if 2 or more consecutive CFPs can be consolidated, after accounting for deleted rows,
such that the resultant rows can fit into 1 CFP of target size. The target size of data and delta files
corresponds to the original sizing, as described above.
A single CFP can be self-merged if the data file exceeds double the target size and more than half of the rows are deleted. A data file can grow larger than the target size if, for example, a single transaction or multiple concurrent transactions insert or update a large amount of data, forcing the data file to grow beyond its target size, because a transaction cannot span multiple CFPs.
Here are some examples that show the CFPs that will be merged under the merge policy:
Adjacent CFPs source files (% full): CFP0 (30%), CFP1 (50%), CFP2 (50%), CFP3 (90%)
Merge selection: (CFP0, CFP1). CFP2 is not chosen because it would make the resultant data file greater than 100% of the ideal size.

Adjacent CFPs source files (% full): CFP0 (30%), CFP1 (20%), CFP2 (50%), CFP3 (10%)
Merge selection: (CFP0, CFP1, CFP2). Files are chosen starting from the left. CFP3 is not chosen because it would make the resultant data file greater than 100% of the ideal size.

Adjacent CFPs source files (% full): CFP0 (80%), CFP1 (30%), CFP2 (10%), CFP3 (40%)
Merge selection: (CFP1, CFP2, CFP3). Files are chosen starting from the left. CFP0 is skipped because, if combined with CFP1, the resultant data file would be greater than 100% of the ideal size.
Not all CFPs with available space qualify for merge. For example, if two adjacent CFPs are 60% full, they will not
qualify for merge and each of these CFPs will have 40% storage unused. In the worst case, all CFPs will be 50% full,
a storage utilization of only 50%. While the deleted rows may exist in storage because the CFPs don’t qualify for
merge, the deleted rows may have already been removed from memory by in-memory garbage collection. The
management of storage and the memory is independent from garbage collection. Storage taken by active CFPs
(not all CFPs are being updated) can be up to 2 times larger than the size of durable tables in memory.
Life Cycle of a CFP
CFPs transition through several states before they can be deallocated. Database checkpoints and log backups need
to happen to transition the files through the phases, and ultimately clean up files that are no longer needed. For a
description of these phases, see sys.dm_db_xtp_checkpoint_files (Transact-SQL).
You can manually force the checkpoint followed by log backup to expedite the garbage collection. In production
scenarios, the automatic checkpoints and log backups taken as part of backup strategy will seamlessly transition
CFPs through these phases without requiring any manual intervention. The impact of the garbage collection process is that databases with memory-optimized tables may have a larger storage size compared to their size in memory. If checkpoints and log backups do not happen, the on-disk footprint of the checkpoint files continues to grow.
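For example, the following sequence forces a checkpoint and then takes a log backup, which is the manual pattern described above for expediting the transition of CFPs; the database name and backup path are placeholders.

USE imoltp;
GO
CHECKPOINT;   -- closes the current checkpoint for disk-based and memory-optimized tables
GO
BACKUP LOG imoltp TO DISK = N'c:\backups\imoltp_log.bak';   -- allows log truncation so CFPs can transition
GO
-- A second checkpoint/log backup cycle typically lets the older CFPs complete their transition
CHECKPOINT;
GO
BACKUP LOG imoltp TO DISK = N'c:\backups\imoltp_log.bak';
GO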
See Also
Creating and Managing Storage for Memory-Optimized Objects
Checkpoint Operation for Memory-Optimized Tables
A checkpoint needs to occur periodically for memory-optimized data in data and delta files to advance the active part of the transaction log. The checkpoint allows memory-optimized tables to restore or recover to the last successful checkpoint; the active portion of the transaction log is then applied to bring the memory-optimized tables up to date and complete the recovery. The checkpoint operations for disk-based tables and memory-optimized tables are distinct. The following describes different scenarios and the checkpoint behavior for disk-based and memory-optimized tables:
Manual Checkpoint
When you issue a manual checkpoint, it closes the checkpoint for both disk-based and memory-optimized tables.
The active data file is closed even though it may be partially filled.
Automatic Checkpoint
Automatic checkpoint is implemented differently for disk-based and memory-optimized tables because of the
different ways the data is persisted.
For disk-based tables, an automatic checkpoint is taken based on the recovery interval configuration option (for
more information, see Change the Target Recovery Time of a Database (SQL Server)).
For memory-optimized tables, an automatic checkpoint is taken when the transaction log has grown by more than 1.5 GB since the last checkpoint. This 1.5 GB includes transaction log records for both disk-based and memory-optimized tables.
See Also
Creating and Managing Storage for Memory-Optimized Objects
Defining Durability for Memory-Optimized Objects
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
There are two durability options for memory-optimized tables:
SCHEMA_AND_DATA (default)
This option provides durability of both schema and data. The level of data durability depends on whether you
commit a transaction as fully durable or with delayed durability. Fully durable transactions provide the same
durability guarantee for data and schema, similar to a disk-based table. Delayed durability will improve
performance but can potentially result in data loss in case of a server crash or fail over. (For more information
about delayed durability, see Control Transaction Durability.)
SCHEMA_ONLY
This option ensures durability of the table schema. When SQL Server is restarted or a reconfiguration occurs in an
Azure SQL Database, the table schema persists, but data in the table is lost. (This is unlike a table in tempdb, where
both the table and its data are lost upon restart.) A typical scenario for creating a non-durable table is to store
transient data, such as a staging table for an ETL process. SCHEMA_ONLY durability avoids both transaction logging and checkpoint, which can significantly reduce I/O operations.
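For reference, here is a minimal sketch of the two options; the table and column names are illustrative, not part of the product documentation.

-- Durable table: schema and data survive a restart
CREATE TABLE dbo.SalesOrder (
    OrderId  INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Quantity INT NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Non-durable staging table: only the schema survives a restart; the data is lost
CREATE TABLE dbo.SalesOrder_Staging (
    OrderId  INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Quantity INT NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO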
When using the default SCHEMA_AND_DATA durability, SQL Server provides the same durability guarantees as for disk-based tables:
Transactional Durability
When you commit a fully durable transaction that made (DDL or DML) changes to a memory-optimized table, the
changes made to a durable memory-optimized table are permanent.
When you commit a delayed durable transaction to a memory-optimized table, the transaction becomes durable
only after the in-memory transaction log is saved to disk. (For more information about delayed durability, see
Control Transaction Durability.)
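The following is a hedged sketch of both commit paths, reusing the illustrative table above; DELAYED_DURABILITY must first be allowed (or forced) at the database level, and the database name is a placeholder.

-- Allow delayed durable transactions in the database
ALTER DATABASE imoltp SET DELAYED_DURABILITY = ALLOWED;
GO

-- Fully durable commit (default)
BEGIN TRANSACTION;
    UPDATE dbo.SalesOrder WITH (SNAPSHOT) SET Quantity = Quantity + 1 WHERE OrderId = 42;
COMMIT TRANSACTION;

-- Delayed durable commit: faster, but the transaction can be lost on a crash or failover
BEGIN TRANSACTION;
    UPDATE dbo.SalesOrder WITH (SNAPSHOT) SET Quantity = Quantity + 1 WHERE OrderId = 42;
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);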
Restart Durability
When SQL Server restarts after a crash or planned shutdown, the memory-optimized durable tables are
reinstantiated to restore them to the state before the shutdown or crash.
Media Failure Durability
If a failed or corrupt disk contains one or more persisted copies of durable memory-optimized objects, the SQL
Server backup and restore feature restores memory-optimized tables on the new media.
See Also
Creating and Managing Storage for Memory-Optimized Objects
Comparing Disk-Based Table Storage to Memory-Optimized Table Storage
The following compares disk-based tables and durable memory-optimized tables by category.

DDL
Disk-based table: Metadata information is stored in system tables in the primary filegroup of the database and is accessible through catalog views.
Durable memory-optimized table: Metadata information is stored in system tables in the primary filegroup of the database and is accessible through catalog views.

Structure
Disk-based table: Rows are stored in 8K pages. A page stores only rows from the same table.
Durable memory-optimized table: Rows are stored as individual rows. There is no page structure. Two consecutive rows in a data file can belong to different memory-optimized tables.

Indexes
Disk-based table: Indexes are stored in a page structure similar to data rows.
Durable memory-optimized table: Only the index definition is persisted (not index rows). Indexes are maintained in memory and are regenerated when the memory-optimized table is loaded into memory as part of restarting a database. Since index rows are not persisted, no logging is done for index changes.

DML operation
Disk-based table: The first step is to find the page and then load it into the buffer pool.
Insert: SQL Server inserts the row on the page, accounting for row ordering in the case of a clustered index.
Delete: SQL Server locates the row to be deleted on the page and marks it deleted.
Update: SQL Server locates the row on the page. The update is done in-place for non-key columns; a key-column update is done by a delete and insert operation.
After the DML operation completes, the affected pages are flushed to disk as part of buffer pool policy, checkpoint, or transaction commit for minimally logged operations. Both read and write operations on pages lead to unnecessary I/O.
Durable memory-optimized table: Since the data resides in memory, the DML operations are done directly in memory. A background thread reads the log records for memory-optimized tables and persists them into the data and delta files. An update generates a new row version, but an update is logged as a delete followed by an insert.

Data Fragmentation
Disk-based table: Data manipulation fragments data, leading to partially filled pages and logically consecutive pages that are not contiguous on disk. This degrades data access performance and requires you to defragment data.
Durable memory-optimized table: Memory-optimized data is not stored in pages, so there is no data fragmentation. However, as rows are updated and deleted, the data and delta files need to be compacted. This is done by a background MERGE thread based on a merge policy.
See Also
Creating and Managing Storage for Memory-Optimized Objects
Scalability
SQL Server 2016 contains scalability enhancements to the on-disk storage for memory-optimized tables.
Multiple threads to persist memory-optimized tables
In the previous release of SQL Server, there was a single offline checkpoint thread that scanned the transaction log for changes to memory-optimized tables and persisted them in checkpoint files (data and delta files). With a larger number of cores, the single offline checkpoint thread could fall behind.
In SQL Server 2016, there are multiple concurrent threads responsible for persisting changes to memory-optimized tables.
Multi-threaded recovery
In the previous release of SQL Server, the log apply as part of recovery operation was single threaded. In
SQL Server 2016, the log apply is multi-threaded.
MERGE Operation
The MERGE operation is now multi-threaded.
Dynamic management views
sys.dm_db_xtp_checkpoint_stats (Transact-SQL) and sys.dm_db_xtp_checkpoint_files (Transact-SQL) have been changed significantly.
Manual Merge has been disabled as multi-threaded merge is expected to keep up with the load.
The In-memory OLTP engine continues to use memory-optimized filegroup based on FILESTREAM, but the
individual files in the filegroup are decoupled from FILESTREAM. These files are fully managed (such as for
create, drop, and garbage collection) by the In-Memory OLTP engine. DBCC SHRINKFILE (Transact-SQL) is
not supported.
Backing Up a Database with Memory-Optimized
Tables
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Memory-optimized tables are backed up as part of regular database backups. As for disk-based tables, the
CHECKSUM of data and delta file pairs is validated as part of the database backup to detect storage corruption.
NOTE
During a backup, if you detect a CHECKSUM error in one or more files in a memory-optimized filegroup, the backup
operation fails. In that situation, you must restore your database from the last known good backup.
If you don’t have a backup, you can export the data from memory-optimized tables and disk-based tables and reload after
you drop and recreate the database.
A full backup of a database with one or more memory-optimized tables consists of the allocated storage for disk-based tables (if any), the active transaction log, and the data and delta file pairs (also known as checkpoint file pairs)
for memory-optimized tables. However, as described in Durability for Memory-Optimized Tables, the storage used
by memory-optimized tables can be much larger than its size in memory, and it affects the size of the database
backup.
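A minimal sketch of a full backup of such a database follows; the database name and path are placeholders, and the checkpoint file pairs in the memory-optimized filegroup are included automatically.

BACKUP DATABASE imoltp
    TO DISK = N'c:\backups\imoltp_full.bak'
    WITH CHECKSUM, INIT;
GO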
Full Database Backup
This discussion focuses on database backups for databases with only durable memory-optimized tables, because
the backup for disk-based tables is the same. The checkpoint file pairs in the memory-optimized filegroup could be
in various states. The table below describes what part of the files is backed up.
CHECKPOINT FILE PAIR STATE: BACKUP
PRECREATED: File metadata only
UNDER CONSTRUCTION: File metadata only
ACTIVE: File metadata plus used bytes
MERGE TARGET: File metadata only
WAITING FOR LOG TRUNCATION: File metadata plus used bytes
For descriptions of the states for checkpoint file pairs, see sys.dm_db_xtp_checkpoint_files (Transact-SQL), and its
column state_desc.
The size of a database backup with one or more memory-optimized tables is typically larger than the tables' size in memory, but smaller than their on-disk storage. The extra size depends on the number of deleted rows, among other factors.
Estimating Size of Full Database Backup
IMPORTANT
It’s recommended that you not use the BackupSizeInBytes value to estimate the backup size for In-Memory OLTP.
The first workload scenario is (mostly) insert. In this scenario, most data files are in the ACTIVE state, fully loaded, and with very few deleted rows. The size of the database backup is close to the size of the data in memory.
The second workload scenario is frequent insert, delete, and update operations. In the worst case, each of the checkpoint file pairs is 50% loaded after accounting for the deleted rows. The size of the database backup will be at least 2 times the size of the data in memory.
Differential Backups of Databases with Memory-Optimized Tables
The storage for memory-optimized tables consists of data and delta files as described in Durability for Memory-Optimized Tables. The differential backup of a database with memory-optimized tables contains the following data:
Differential backup for filegroups storing disk-based tables is not affected by the presence of memory-optimized tables.
The active transaction log is the same as in a full database backup.
For a memory-optimized data filegroup, the differential backup uses the same algorithm as full database
backup to identify the data and delta files for backup but it then filters out the subset of files as follows:
A data file contains newly inserted rows, and when it is full it is closed and marked read-only. A data
file is backed up only if it was closed after the last full database backup. The differential backup only
backs up data files containing the rows inserted since the last full database backup. An exception is an
update and delete scenario where it is possible that some of the inserted rows may have already been
either marked for garbage collection or already garbage collected.
A delta file stores references to the deleted data rows. Since any future transaction can delete a row, a delta file can be modified at any time during its lifetime and is never closed. A delta file is always backed up.
Delta files typically use less than 10% of the storage, so delta files have a minimal impact on the size
of differential backup.
If memory-optimized tables are a significant portion of your database size, the differential backup can
significantly reduce the size of your database backup. For typical OLTP workloads, the differential backups
will be considerably smaller than the full database backups.
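A hedged sketch of a differential backup for the same placeholder database:

BACKUP DATABASE imoltp
    TO DISK = N'c:\backups\imoltp_diff.bak'
    WITH DIFFERENTIAL, CHECKSUM, INIT;
GO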
See Also
Backup, Restore, and Recovery of Memory-Optimized Tables
Piecemeal Restore of Databases With Memory-Optimized Tables
Piecemeal restore is supported on databases with memory-optimized tables except for one restriction described
below. For more information about piecemeal backup and restore, see RESTORE (Transact-SQL) and Piecemeal
Restores (SQL Server).
A memory-optimized filegroup must be backed up and restored together with the primary filegroup:
If you back up (or restore) the primary filegroup, you must specify the memory-optimized filegroup.
If you back up (or restore) the memory-optimized filegroup, you must specify the primary filegroup.
Key scenarios for piecemeal backup and restore are:
Piecemeal backup allows you to reduce the size of backup. Some examples:
Configure the database backup to occur at different times or days to minimize the impact on the
workload. One example is a very large database (greater than 1 TB) where a full database backup
cannot complete in the time allocated for database maintenance. In that situation, you can use piecemeal backup to back up the full database in multiple piecemeal backups.
If a filegroup is marked read-only, it does not require a transaction log backup after it was marked
read-only. You can choose to back up the filegroup only once after marking it read-only.
Piecemeal restore.
The goal of a piecemeal restore is to bring the critical parts of database online without waiting for all
the data. One example is if a database has partitioned data, such that older partitions are only used
rarely. You can restore them only on an as-needed basis. Similar for filegroups that contain, for
example, historical data.
Using page repair, you can fix page corruption by specifically restoring the page. For more
information, see Restore Pages (SQL Server).
Samples
The examples use the following schema:
CREATE DATABASE imoltp
ON PRIMARY (name = imoltp_primary1, filename = 'c:\data\imoltp_data1.mdf')
LOG ON (name = imoltp_log, filename = 'c:\data\imoltp_log.ldf')
GO
ALTER DATABASE imoltp ADD FILE (name = imoltp_primary2, filename = 'c:\data\imoltp_data2.ndf')
GO
ALTER DATABASE imoltp ADD FILEGROUP imoltp_secondary
ALTER DATABASE imoltp ADD FILE (name = imoltp_secondary, filename = 'c:\data\imoltp_secondary.ndf') TO
FILEGROUP imoltp_secondary
GO
ALTER DATABASE imoltp ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA
ALTER DATABASE imoltp ADD FILE (name='imoltp_mod1', filename='c:\data\imoltp_mod1') TO FILEGROUP imoltp_mod
ALTER DATABASE imoltp ADD FILE (name='imoltp_mod2', filename='c:\data\imoltp_mod2') TO FILEGROUP imoltp_mod
GO
Backup
This sample shows how to backup the primary filegroup and the memory-optimized filegroup. You must specify
both primary and memory-optimized filegroup together.
backup database imoltp filegroup='primary', filegroup='imoltp_mod' to disk='c:\data\imoltp.dmp' with init
The following sample shows that a backup of filegroups other than the primary and memory-optimized filegroups works the same as for databases without memory-optimized tables. The following command backs up the secondary filegroup:
backup database imoltp filegroup='imoltp_secondary' to disk='c:\data\imoltp_secondary.dmp' with init
Restore
The following sample shows how to restore the primary filegroup and memory-optimized filegroup together.
restore database imoltp filegroup = 'primary', filegroup = 'imoltp_mod'
from disk='c:\data\imoltp.dmp' with partial, norecovery
--restore the transaction log
RESTORE LOG [imoltp] FROM DISK = N'c:\data\imoltp_log.dmp' WITH FILE = 1, NOUNLOAD, STATS = 10
GO
The next sample shows that restoring filegroups other than the primary and memory-optimized filegroups works the same as for databases without memory-optimized tables:
RESTORE DATABASE [imoltp] FILE = N'imoltp_secondary'
FROM DISK = N'c:\data\imoltp_secondary.dmp' WITH FILE = 1, RECOVERY, NOUNLOAD, STATS = 10
GO
See Also
Backup, Restore, and Recovery of Memory-Optimized Tables
Restore and Recovery of Memory-Optimized Tables
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
The basic mechanism to recover or restore a database with memory-optimized tables is similar to that for databases with only disk-based tables. But unlike disk-based tables, memory-optimized tables must be loaded into memory before the database is available for user access. This adds a new step to database recovery.
During recovery or restore operations, the In-Memory OLTP engine reads data and delta files for loading into
physical memory. The load time is determined by:
The amount of data to load.
Sequential I/O bandwidth.
Degree of parallelism, determined by the number of file containers and processor cores.
The number of log records in the active portion of the log that need to be redone.
When SQL Server restarts, each database goes through a recovery process that consists of the following three phases:
1. The analysis phase. During this phase, a pass is made on the active transaction logs to detect committed and
uncommitted transactions. The In-Memory OLTP engine identifies the checkpoint to load and preloads its
system table log entries. It will also process some file allocation log records.
2. The redo phase. This phase is run concurrently on both disk-based and memory-optimized tables.
For disk-based tables, the database is moved to the current point in time and acquires locks taken by uncommitted transactions.
For memory-optimized tables, data from the data and delta file pairs is loaded into memory and then updated using the active transaction log, based on the last durable checkpoint.
When the above operations on disk-based and memory-optimized tables are complete, the database is
available for access.
3. The undo phase. In this phase, the uncommitted transactions are rolled back.
Loading memory-optimized tables into memory can affect performance of the recovery time objective
(RTO). To improve the load time of memory-optimized data from data and delta files, the In-Memory OLTP
engine loads the data/delta files in parallel as follows:
Creating a delta map filter. Delta files store references to the deleted rows. One thread per container reads the delta files and creates a delta map filter. (A memory-optimized data filegroup can have one or more containers.)
Streaming the data files. Once the delta map filter is created, data files are read using as many threads as there are logical CPUs. Each thread reading a data file reads the data rows, checks the associated delta map, and inserts a row into the table only if the row has not been marked deleted. This part of recovery can be CPU bound in some cases, as noted below.
Memory-optimized tables can generally be loaded into memory at the speed of I/O but there are cases when
loading data rows into memory will be slower. Specific cases are:
A low bucket count for a hash index can lead to excessive collisions, causing data row inserts to be slower. This generally results in very high CPU utilization throughout, and especially toward the end of recovery. If you configured the hash index correctly, it should not impact the recovery time (see the query sketch after this list).
Large memory-optimized tables with one or more nonclustered indexes. Unlike a hash index, whose bucket count is sized at create time, nonclustered indexes are built dynamically as the data is loaded, which can result in high CPU utilization.
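The following query is a minimal sketch for reviewing hash index health using sys.dm_db_xtp_hash_index_stats; long average chain lengths or very few empty buckets suggest the bucket count is too low.

SELECT
    OBJECT_NAME(his.object_id) AS [table],
    i.name AS [hash index],
    his.total_bucket_count,
    his.empty_bucket_count,
    his.avg_chain_length,
    his.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS his
JOIN sys.indexes AS i
    ON i.object_id = his.object_id
   AND i.index_id = his.index_id;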
See Also
Backup, Restore, and Recovery of Memory-Optimized Tables
SQL Server Support for In-Memory OLTP
This section discusses new and updated syntax and features supporting memory-optimized tables.
Unsupported SQL Server Features for In-Memory OLTP
SQL Server Management Objects Support for In-Memory OLTP
SQL Server Integration Services Support for In-Memory OLTP
SQL Server Management Studio Support for In-Memory OLTP
High Availability Support for In-Memory OLTP databases
Transact-SQL Support for In-Memory OLTP
See Also
In-Memory OLTP (In-Memory Optimization)
Unsupported SQL Server Features for In-Memory
OLTP
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
This topic discusses SQL Server features that are not supported for use with memory-optimized objects.
SQL Server Features Not Supported for In-Memory OLTP
The following SQL Server features are not supported on a database that has memory-optimized objects (including
memory-optimized data filegroup).
Data compression for memory-optimized tables: You can use the data compression feature to help compress the data inside a database, and to help reduce the size of the database. For more information, see Data Compression.

Partitioning of memory-optimized tables and HASH indexes, as well as nonclustered indexes: The data of partitioned tables and indexes is divided into units that can be spread across more than one filegroup in a database. For more information, see Partitioned Tables and Indexes.

Replication: Replication configurations other than transactional replication to memory-optimized tables on subscribers are incompatible with tables or views referencing memory-optimized tables. Replication using sync_mode=’database snapshot’ is not supported if there is a memory-optimized filegroup. For more information, see Replication to Memory-Optimized Table Subscribers.

Mirroring: Database mirroring is not supported for databases with a MEMORY_OPTIMIZED_DATA filegroup. For more information about mirroring, see Database Mirroring (SQL Server).

Rebuild log: Rebuilding the log, either through attach or ALTER DATABASE, is not supported for databases with a MEMORY_OPTIMIZED_DATA filegroup.

Linked Server: You cannot access linked servers in the same query or transaction as memory-optimized tables. For more information, see Linked Servers (Database Engine).

Bulk logging: Regardless of the recovery model of the database, all operations on durable memory-optimized tables are always fully logged.

Minimal logging: Minimal logging is not supported for memory-optimized tables. For more information about minimal logging, see The Transaction Log (SQL Server) and Prerequisites for Minimal Logging in Bulk Import.

Change tracking: Change tracking can be enabled on a database with In-Memory OLTP objects. However, changes in memory-optimized tables are not tracked.

DDL triggers: Neither database-level nor server-level DDL triggers are supported with In-Memory OLTP tables and natively compiled modules.

Change Data Capture (CDC): CDC cannot be used with a database that has memory-optimized tables, because it uses a DDL trigger for DROP TABLE under the hood.

Fiber mode: Fiber mode is not supported with memory-optimized tables. If fiber mode is active, you cannot create databases with memory-optimized filegroups or add memory-optimized filegroups to existing databases. You can enable fiber mode if there are databases with memory-optimized filegroups; however, enabling fiber mode requires a server restart. In that situation, databases with memory-optimized filegroups would fail to recover and you will see an error message suggesting that you disable fiber mode to use databases with memory-optimized filegroups. Attaching and restoring databases with memory-optimized filegroups will fail if fiber mode is active, and the databases will be marked as suspect. For more information, see lightweight pooling Server Configuration Option.

Service Broker limitation: You cannot access a queue from a natively compiled stored procedure, and you cannot access a queue in a remote database in a transaction that accesses memory-optimized tables.

Replication on subscribers: Transactional replication to memory-optimized tables on subscribers is supported, but with some restrictions. For more information, see Replication to Memory-Optimized Table Subscribers.
With a few exceptions, cross-database transactions are not supported. The following table describes which cases
are supported, and the corresponding restrictions. (See also, Cross-Database Queries.)
User databases, model, and msdb (allowed: No): Cross-database queries and transactions are not supported. Queries and transactions that access memory-optimized tables or natively compiled stored procedures cannot access other databases, with the exception of the system databases master (read-only access) and tempdb.

Resource database and tempdb (allowed: Yes): There are no restrictions on cross-database transactions that, besides a single user database, use only the resource database and tempdb.

master (allowed: read-only): Cross-database transactions that touch In-Memory OLTP and the master database fail to commit if they include any writes to the master database. Cross-database transactions that only read from master and use only one user database are allowed.
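As an illustration of the tempdb exception, the following sketch (table names are hypothetical) runs a transaction that reads a memory-optimized table and writes to a temporary table in tempdb, which is allowed:

CREATE TABLE #staging (OrderId INT NOT NULL, Quantity INT NOT NULL);

BEGIN TRANSACTION;
    INSERT INTO #staging (OrderId, Quantity)
    SELECT OrderId, Quantity
    FROM dbo.SalesOrder WITH (SNAPSHOT);   -- dbo.SalesOrder is a hypothetical memory-optimized table
COMMIT TRANSACTION;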
Scenarios Not Supported
Database containment (Contained Databases) is not supported with In-Memory OLTP. Contained database
authentication is supported. However, all In-Memory OLTP objects are marked as breaking containment in
the DMV dm_db_uncontained_entities.
Accessing memory-optimized tables using the context connection from inside CLR stored procedures.
Keyset and dynamic cursors on queries accessing memory-optimized tables. These cursors are degraded to
static and read-only.
Using MERGE INTO target, where target is a memory-optimized table. MERGE USING source is supported for memory-optimized tables.
The ROWVERSION (TIMESTAMP) data type is not supported. See FROM (Transact-SQL) for more
information.
Auto-close is not supported with databases that have a MEMORY_OPTIMIZED_DATA filegroup.
Database snapshots are not supported for databases that have a MEMORY_OPTIMIZED_DATA filegroup.
Transactional DDL. CREATE/ALTER/DROP of In-Memory OLTP objects is not supported inside user
transactions.
Event notification.
Policy-based management (PBM). Prevent and log only modes of PBM are not supported. Existence of such
policies on the server may prevent In-Memory OLTP DDL from executing successfully. On demand and on
schedule modes are supported.
See Also
SQL Server Support for In-Memory OLTP
SQL Server Management Objects Support for In-Memory OLTP
This topic describes changes in SQL Server Management Objects (SMO) for In-Memory OLTP.
The following types and members support In-Memory OLTP:
DurabilityType
FileGroupType
FileGroup
FileGroupType
BucketCount
NonClusteredHashIndex
IsMemoryOptimized
IsXTPSupported
IsNativelyCompiled
IsSchemaBound
Durability
IsMemoryOptimized
IsMemoryOptimized
Code Sample
This sample does the following:
Creates a database with memory-optimized filegroup and memory-optimized file.
Creates a durable memory-optimized table with a primary key, nonclustered index, and a nonclustered hash
index.
Creates columns and indexes.
Creates a user-defined memory-optimized table type.
Creates a natively compiled stored procedure.
This sample needs to reference the following assemblies:
Microsoft.SqlServer.Smo.dll
Microsoft.SqlServer.Management.Sdk.Sfc.dll
Microsoft.SqlServer.ConnectionInfo.dll
Microsoft.SqlServer.SqlEnum.dll
using Microsoft.SqlServer.Management.Smo;
using System;

public class A {
    static void Main(string[] args) {
        Server server = new Server("(local)");

        // Create a database with memory-optimized filegroup and memory-optimized file
        Database db = new Database(server, "MemoryOptimizedDatabase");
        db.Create();
        FileGroup fg = new FileGroup(db, "memOptFilegroup", FileGroupType.MemoryOptimizedDataFileGroup);
        db.FileGroups.Add(fg);
        fg.Create();
        // change this path if needed
        DataFile file = new DataFile(fg, "memOptFile",
            @"C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\MSSQLmemOptFileName");
        file.Create();

        // Create a durable memory-optimized table with primary key, nonclustered index and nonclustered hash index
        // Define the table as memory optimized and set the durability
        Table table = new Table(db, "memOptTable");
        table.IsMemoryOptimized = true;
        table.Durability = DurabilityType.SchemaAndData;

        // Create columns
        Column col1 = new Column(table, "col1", DataType.Int);
        col1.Nullable = false;
        table.Columns.Add(col1);
        Column col2 = new Column(table, "col2", DataType.Float);
        col2.Nullable = false;
        table.Columns.Add(col2);
        Column col3 = new Column(table, "col3", DataType.Decimal(2, 10));
        col3.Nullable = false;
        table.Columns.Add(col3);

        // Create indexes
        Index pk = new Index(table, "PK_memOptTable");
        pk.IndexType = IndexType.NonClusteredIndex;
        pk.IndexKeyType = IndexKeyType.DriPrimaryKey;
        pk.IndexedColumns.Add(new IndexedColumn(pk, col1.Name));
        table.Indexes.Add(pk);

        Index ixNonClustered = new Index(table, "ix_nonClustered");
        ixNonClustered.IndexType = IndexType.NonClusteredIndex;
        ixNonClustered.IndexKeyType = IndexKeyType.None;
        ixNonClustered.IndexedColumns.Add(new IndexedColumn(ixNonClustered, col2.Name));
        table.Indexes.Add(ixNonClustered);

        Index ixNonClusteredHash = new Index(table, "ix_nonClustered_Hash");
        ixNonClusteredHash.IndexType = IndexType.NonClusteredHashIndex;
        ixNonClusteredHash.IndexKeyType = IndexKeyType.None;
        ixNonClusteredHash.BucketCount = 1024;
        ixNonClusteredHash.IndexedColumns.Add(new IndexedColumn(ixNonClusteredHash, col3.Name));
        table.Indexes.Add(ixNonClusteredHash);

        table.Create();

        // Create a user-defined memory-optimized table type
        UserDefinedTableType uDTT = new UserDefinedTableType(db, "memOptUDTT");
        uDTT.IsMemoryOptimized = true;
        // Add columns
        Column udTTCol1 = new Column(uDTT, "udtCol1", DataType.Int);
        udTTCol1.Nullable = false;
        uDTT.Columns.Add(udTTCol1);
        Column udTTCol2 = new Column(uDTT, "udtCol2", DataType.Float);
        udTTCol2.Nullable = false;
        uDTT.Columns.Add(udTTCol2);
        Column udTTCol3 = new Column(uDTT, "udtCol3", DataType.Decimal(2, 10));
        udTTCol3.Nullable = false;
        uDTT.Columns.Add(udTTCol3);
        // Add index
        Index ix = new Index(uDTT, "IX_UDT");
        ix.IndexType = IndexType.NonClusteredHashIndex;
        ix.BucketCount = 1024;
        ix.IndexKeyType = IndexKeyType.DriPrimaryKey;
        ix.IndexedColumns.Add(new IndexedColumn(ix, udTTCol1.Name));
        uDTT.Indexes.Add(ix);
        uDTT.Create();

        // Create a natively compiled stored procedure
        StoredProcedure sProc = new StoredProcedure(db, "nCSProc");
        sProc.TextMode = false;
        sProc.TextBody = "--Type body here";
        sProc.IsNativelyCompiled = true;
        sProc.IsSchemaBound = true;
        sProc.ExecutionContext = ExecutionContext.Owner;
        sProc.Create();
    }
}
See Also
SQL Server Support for In-Memory OLTP
SQL Server Integration Services Support for In-Memory OLTP
You can use a memory-optimized table, a view referencing memory-optimized tables, or a natively compiled
stored procedure as the source or destination for your SQL Server Integration Services (SSIS) package. You can use
ADO NET Source, OLE DB Source, or ODBC Source in the data flow of an SSIS package and configure the source
component to retrieve data from a memory-optimized table or a view, or specify a SQL statement to execute a
natively compiled stored procedure. Similarly, you can use ADO NET Destination, OLE DB Destination, or ODBC
Destination to load data into a memory-optimized table or a view, or specify a SQL statement to execute a natively
compiled stored procedure.
You can configure the above mentioned source and destination components in an SSIS package to read from/write
to memory-optimized tables and views in the same way as with other SQL Server tables and views. However, you
need to be aware of the important points in the following section when using natively compiled stored procedures.
Invoking a natively compiled stored procedure from an SSIS Package
To invoke a natively compiled stored procedure from an SSIS package, we recommend that you use an ODBC
Source or ODBC Destination with an SQL statement of the format: <procedure name> without the EXEC
keyword. If you use the EXEC keyword in the SQL statement, you will see an error message because the ODBC connection manager interprets the SQL command text as a Transact-SQL statement rather than a stored procedure call and uses cursors, which are not supported for execution of natively compiled stored procedures. The connection manager treats a SQL statement without the EXEC keyword as a stored procedure call and will not use a cursor.
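For example, for a hypothetical natively compiled procedure dbo.usp_InsertSalesOrder that takes two parameters, the SQL command text configured on the ODBC Source or ODBC Destination would look like the following (no EXEC keyword; the parameters are ODBC parameter markers):

dbo.usp_InsertSalesOrder ?, ?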
You can also use ADO .NET Source and OLE DB Source to invoke a natively compiled stored procedure, but we
recommend that you use ODBC Source. If you configure the ADO .NET Source to execute a natively compiled
stored procedure, you will see an error message because the data provider for SQL Server (SqlClient), which the
ADO .NET Source uses by default, does not support execution of natively compiled stored procedures. You can
configure the ADO .NET Source to use the ODBC Data Provider, OLE DB Provider for SQL Server, or SQL Server
Native Client. However, note that the ODBC Source performs better than ADO .NET Source with ODBC Data
Provider.
See Also
SQL Server Support for In-Memory OLTP
SQL Server Management Studio Support for In-Memory OLTP
SQL Server Management Studio is an integrated environment for managing your SQL Server infrastructure. SQL
Server Management Studio provides tools to configure, monitor, and administer instances of SQL Server. For more
information, see SQL Server Management Studio.
The tasks in this topic describe how to use SQL Server Management Studio to manage memory-optimized tables;
indexes on memory-optimized tables; natively compiled stored procedures; and user-defined, memory-optimized
table types.
For information on how to programmatically create memory-optimized tables, see Creating a Memory-Optimized
Table and a Natively Compiled Stored Procedure.
To create a database with a memory-optimized data filegroup
1. In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that
instance.
2. Right-click Databases, and then click New Database.
3. To add a new memory-optimized data filegroup, click the Filegroups page. Under MEMORY OPTIMIZED
DATA, click Add filegroup and then enter the name of the memory-optimized data filegroup. The column
labeled FILESTREAM Files represents the number of containers in the filegroup. Containers are added on
the General page.
4. To add a file (container) to the filegroup, click the General page. Under Database files, click Add. Select
File Type as FILESTREAM Data, specify the logical name of the container, select the memory-optimized
filegroup, and make sure that Autogrowth / Maxsize is set to Unlimited.
For more information on how to create a new database by using SQL Server Management Studio, see
Create a Database.
To create a memory-optimized table
1. In Object Explorer, right-click the Tables node of your database, click New, and then click Memory
Optimized Table.
A template for creating memory-optimized tables is displayed.
2. To replace the template parameters, click Specify Values for Template Parameters on the Query menu.
For more information on how to use templates, see Template Explorer.
3. In Object Explorer, tables will be ordered first by disk-based tables followed by memory-optimized tables.
Use Object Explorer Details to see all tables ordered by name.
To create a natively compiled stored procedure
1. In Object Explorer, right-click the Stored Procedures node of your database, click New, and then click
Natively Compiled Stored Procedure.
A template for creating natively compiled stored procedures is displayed.
2. To replace the template parameters, click Specify Values for Template Parameters on the Query menu.
For more information on how to create a new stored procedure, see Create a Stored Procedure.
To create a user-defined memory-optimized table type
1. In Object Explorer, expand the Types node of your database, right-click the User-Defined Table Types
node, click New, and then click User-Defined Memory Optimized Table Type.
A template for creating user-defined memory-optimized table type is displayed.
2. To replace the template parameters, click Specify Values for Template Parameters on the Query menu.
For more information on how to create a new stored procedure, see CREATE TYPE (Transact-SQL).
Memory Monitoring
View Memory Usage by Memory-Optimized Objects Report
In Object Explorer, right-click your database, click Reports, click Standard Reports, and then click
Memory Usage By Memory Optimized Objects.
This report provides detailed data on the utilization of memory space by memory-optimized objects within
the database.
View Properties for Allocated and Used Memory for a Table, Database
1. To get information about in-memory usage:
In Object Explorer, right-click on your memory-optimized table, click Properties, and then the
Storage page. The value for the Data Space property indicates the memory used by the data in the
table. The value for the Index Space property indicates the memory used by the indexes on table.
In Object Explorer, right-click on your database, click Properties, and then click the General page.
The value for the Memory Allocated To Memory Optimized Objects property indicates the
memory allocated to memory-optimized objects in the database. The value for the Memory Used By
Memory Optimized Objects property indicates the memory used by memory-optimized objects in
the database.
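The same information is also available through Transact-SQL; the following is a minimal sketch using sys.dm_db_xtp_table_memory_stats, run in the context of the database being examined.

SELECT
    OBJECT_NAME(object_id) AS [table],
    memory_allocated_for_table_kb,
    memory_used_by_table_kb,
    memory_allocated_for_indexes_kb,
    memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id > 0;   -- system allocations are reported with negative object_id values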
Supported Features in SQL Server Management Studio
SQL Server Management Studio supports features and operations that are supported by the database engine on
databases with memory-optimized data filegroup, memory-optimized tables, indexes, and natively compiled
stored procedures.
For database, table, stored procedure, user-defined table type, or index objects, the following SQL Server
Management Studio features have been updated or extended to support In-Memory OLTP.
Object Explorer
Context menus
Filter settings
Script As
Tasks
Reports
Properties
Database tasks:
Attach and detach a database that contains memory-optimized tables.
The Attach Databases user interface does not display the memory-optimized data filegroup.
However, you can proceed with attaching the database and the database will be attached
correctly.
NOTE
If you want to use SQL Server Management Studio to attach a database that has a memory-optimized data filegroup container, and if the database's memory-optimized data filegroup container was created on another computer, the location of the memory-optimized data filegroup container must be the same on both computers. If you want the location of the database's memory-optimized data filegroup container to be different on the new computer, you can use Transact-SQL to attach the database. In the following example, the location of the memory-optimized data filegroup container on the new computer is C:\Folder2. But when the memory-optimized data filegroup container was created, on the first computer, the location was C:\Folder1.
CREATE DATABASE [imoltp] ON
    (NAME = N'imoltp', FILENAME = N'C:\Folder2\imoltp.mdf'),
    (NAME = N'imoltp_mod1', FILENAME = N'C:\Folder2\imoltp_mod1'),
    (NAME = N'imoltp_log', FILENAME = N'C:\Folder2\imoltp_log.ldf')
FOR ATTACH
GO
Generate scripts.
In the Generate and Publish Scripts Wizard, the default value for Check for object
existence scripting option is FALSE. If the value of Check for object existence scripting
option is set to TRUE in the Set Scripting Options screen of the wizard, the script generated
would contain "CREATE PROCEDURE AS" and "ALTER PROCEDURE ". When executed, the
generated script will return an error as ALTER PROCEDURE is not supported on natively
compiled stored procedures.
To change the generated script for each natively compiled stored procedure:
1. In "CREATE PROCEDURE AS", replace "AS" with "".
2. Delete "ALTER PROCEDURE ".
Copy databases. For databases with memory-optimized objects, the creation of the database
on the destination server and transfer of data will not be executed within a transaction.
Import and export data. Use the SQL Server Import and Export Wizard option Copy data from one or more tables or views. If the destination table is a memory-optimized table that does not exist in the destination database:
1. In the SQL Server Import and Export Wizard, in the Specify Table Copy or Query
screen, select Copy data from one or more tables or views. Then click Next.
2. Click Edit Mappings. Then select Create destination table and click Edit SQL. Enter
the CREATE TABLE syntax for creating a memory-optimized table on the destination
database. Click OK and complete the remaining steps in the wizard.
Maintenance plans. The maintenance tasks reorganize index and rebuild index are not supported on memory-optimized tables and their indexes. Therefore, when a maintenance plan for rebuild index or reorganize index is executed, the memory-optimized tables and their indexes in the selected databases are omitted.
The maintenance task update statistics is not supported with a sample scan on memory-optimized tables and their indexes. Therefore, when a maintenance plan for update statistics is executed, the statistics for memory-optimized tables and their indexes are always updated WITH FULLSCAN, NORECOMPUTE.
Object Explorer details pane
Template Explorer
Unsupported Features in SQL Server Management Studio
For In-Memory OLTP objects, SQL Server Management Studio does not support features and operations that are
also not supported by the database engine.
For more information on unsupported SQL Server features, see Unsupported SQL Server Features for In-Memory
OLTP.
See Also
SQL Server Support for In-Memory OLTP
High Availability Support for In-Memory OLTP
databases
Databases containing memory-optimized tables, with or without natively compiled stored procedures, are fully supported with Always On Availability Groups. There is no difference in the configuration and support for databases that contain In-Memory OLTP objects as compared to those without.
When an In-Memory OLTP database is deployed in an Always On Availability Group configuration, changes to memory-optimized tables on the primary replica are applied in memory to the tables on the secondary replicas when REDO is applied. This means that failover to a secondary replica can be very quick, since the data is already in
memory. In addition, the tables are available for queries on secondary replicas that have been configured for read
access.
Always On Availability Groups and In-Memory OLTP Databases
Configuring databases with In-Memory OLTP components provides the following:
A fully integrated experience
You can configure your databases containing memory-optimized tables using the same wizard with the
same level of support for both synchronous and asynchronous secondary replicas. Additionally, health
monitoring is provided using the familiar Always On dashboard in SQL Server Management Studio.
Comparable Failover time
Secondary replicas maintain the in-memory state of the durable memory-optimized tables. In the event of automatic or forced failover, the time to fail over to the new primary is comparable to that of disk-based tables, as no recovery is needed. Memory-optimized tables created as SCHEMA_ONLY are supported in this
configuration. However changes to these tables are not logged and therefore no data will exist in these
tables on the secondary replica.
Readable Secondary
You can access and query memory-optimized tables on the secondary replica if it has been configured for
read access. In SQL Server 2016, the read timestamp on the secondary replica is in close synchronization
with the read timestamp on the primary replica, which means that changes on the primary become visible
on the secondary very quickly. This close synchronization behavior is different from SQL Server 2014 In-Memory OLTP.
Failover Clustering Instance (FCI) and In-Memory OLTP Databases
To achieve high availability in a shared-storage configuration, you can set up failover clustering on instances with one or more databases with memory-optimized tables. You need to consider the following factors as part of setting up an FCI.
Recovery Time Objective
Failover time is likely to be higher because the memory-optimized tables must be loaded into memory before the database is made available.
SCHEMA_ONLY tables
Be aware that SCHEMA_ONLY tables will be empty with no rows after the failover. This is as designed and
defined by the application. This is exactly the same behavior when you restart an In-Memory OLTP database
with one or more SCHEMA_ONLY tables.
Support for transaction replication in In-Memory OLTP
Tables acting as transactional replication subscribers, excluding peer-to-peer transactional replication, can be configured as memory-optimized tables. Other replication configurations are not compatible with memory-optimized tables. For more information, see Replication to Memory-Optimized Table Subscribers.
See Also
Always On Availability Groups (SQL Server)
Overview of Always On Availability Groups (SQL Server)
Active Secondaries: Readable Secondary Replicas (Always On Availability Groups)
Replication to Memory-Optimized Table Subscribers
Transact-SQL Support for In-Memory OLTP
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
The following Transact-SQL statements include syntax options to support In-Memory OLTP:
ALTER DATABASE File and Filegroup Options (Transact-SQL) (added MEMORY_OPTIMIZED_DATA)
ALTER TRIGGER (Transact-SQL)
CREATE DATABASE (SQL Server Transact-SQL) (added MEMORY_OPTIMIZED_DATA)
CREATE PROCEDURE (Transact-SQL)
CREATE TABLE (Transact-SQL)
CREATE TRIGGER (Transact-SQL)
CREATE TYPE (Transact-SQL)
DECLARE @local_variable (Transact-SQL)
In a natively compiled stored procedure, you can declare a variable as NOT NULL. You cannot do so in a
regular stored procedure. (A sketch follows this list.)
AUTO_UPDATE_STATISTICS can be ON for memory-optimized tables, starting with SQL Server 2016. For
more information, see sp_autostats (Transact-SQL).
SET STATISTICS XML (Transact-SQL) ON is not supported for natively compiled stored procedures.
For information on unsupported features, see Transact-SQL Constructs Not Supported by In-Memory
OLTP.
For information about supported constructs in and on natively compiled stored procedures, see Supported
Features for Natively Compiled T-SQL Modules and Supported DDL for Natively Compiled T-SQL modules.
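The following minimal sketch illustrates the NOT NULL variable declaration mentioned in the list above. The table dbo.Account and the procedure name are placeholders used only for illustration.

-- A NOT NULL variable must be initialized at declaration and is allowed
-- only inside natively compiled modules.
CREATE PROCEDURE dbo.usp_GetBalance @AccountId INT
WITH EXECUTE AS OWNER, SCHEMABINDING, NATIVE_COMPILATION
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    DECLARE @balance MONEY NOT NULL = 0;
    SELECT @balance = Balance FROM dbo.Account WHERE AccountId = @AccountId;
    SELECT @balance AS Balance;
END;
GO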
See Also
In-Memory OLTP (In-Memory Optimization)
Migration Issues for Natively Compiled Stored Procedures
Unsupported SQL Server Features for In-Memory OLTP
Natively Compiled Stored Procedures
Supported Data Types for In-Memory OLTP
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
This article lists the data types that are unsupported for the In-Memory OLTP features of:
Memory-optimized tables
Natively compiled stored procedures
Unsupported Data Types
The following data types are not supported:
datetimeoffset (Transact-SQL)
geography (Transact-SQL)
geometry (Transact-SQL)
hierarchyid (Transact-SQL)
rowversion (Transact-SQL)
xml (Transact-SQL)
sql_variant (Transact-SQL)
User-Defined Types
Notable Supported Data Types
Most data types are supported by the features of In-Memory OLTP. The following few are worth noting explicitly:
STRING AND BINARY TYPES | FOR MORE INFORMATION
binary and varbinary*   | binary and varbinary (Transact-SQL)
char and varchar*       | char and varchar (Transact-SQL)
nchar and nvarchar*     | nchar and nvarchar (Transact-SQL)
For the preceding string and binary data types, starting with SQL Server 2016:
An individual memory-optimized table can also have several long columns such as nvarchar(4000), even though their lengths would add to more than the physical row size of 8060 bytes.
A memory-optimized table can have max length string and binary columns of data types such as varchar(max).
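The following is a minimal sketch of such a table; the table and column names are illustrative only. Several nvarchar(4000) columns and a varchar(max) column are declared on one memory-optimized table, even though the declared lengths exceed 8060 bytes.

-- Requires a database with a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TABLE dbo.ProductDescription
(
    ProductId   INT             NOT NULL PRIMARY KEY NONCLUSTERED,
    ShortText1  NVARCHAR(4000)  NULL,
    ShortText2  NVARCHAR(4000)  NULL,
    LongText    VARCHAR(MAX)    NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO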
Identify LOBs and other columns that are off-row
The following Transact-SQL SELECT statement reports all columns that are off-row, for memory-optimized tables.
Note that:
All index key columns are stored in-row.
Nonunique index keys can now include NULLable columns, on memory-optimized tables.
Indexes can be declared as UNIQUE on a memory-optimized table.
All LOB columns are stored off-row.
A max_length of -1 indicates a large object (LOB) column.
SELECT
    OBJECT_NAME(m.object_id) AS [table],
    c.name                   AS [column],
    c.max_length
FROM sys.memory_optimized_tables_internal_attributes AS m
    JOIN sys.columns AS c
        ON  m.object_id = c.object_id
        AND m.minor_id  = c.column_id
WHERE m.type = 5;
Natively compiled modules support for LOBs
When you use a built-in string function in a natively compiled module, such as a native proc, the function can
accept a string LOB type. For example, in a native proc, the LTRIM function can accept an input parameter of type
nvarchar(max) or varbinary(max).
These LOBs can be the return type from a natively compiled scalar UDF (user-defined function).
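The following is a minimal sketch of a natively compiled procedure that passes an nvarchar(max) parameter to a built-in string function; the object names are placeholders.

CREATE PROCEDURE dbo.usp_TrimNote @Note NVARCHAR(MAX)
WITH EXECUTE AS OWNER, SCHEMABINDING, NATIVE_COMPILATION
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- LTRIM accepts the LOB parameter inside the native module.
    SELECT LTRIM(@Note) AS TrimmedNote;
END;
GO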
Other Data Types
OTHER TYPES  | FOR MORE INFORMATION
table types  | Memory-Optimized Table Variables
See Also
Transact-SQL Support for In-Memory OLTP
Implementing LOB Columns in a Memory-Optimized Table
Implementing SQL_VARIANT in a Memory-Optimized Table
Accessing Memory-Optimized Tables Using Interpreted Transact-SQL
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
With only a few exceptions, you can access memory-optimized tables using any Transact-SQL query or DML
operation (select, insert, update, or delete), ad hoc batches, and SQL modules such as stored procedures, table-valued functions, triggers, and views.
Interpreted Transact-SQL refers to Transact-SQL batches or stored procedures other than a natively compiled
stored procedure. Interpreted Transact-SQL access to memory-optimized tables is referred to as interop access.
Starting with SQL Server 2016, queries in interpreted Transact-SQL can scan memory-optimized tables in parallel,
instead of just in serial mode.
Memory-optimized tables can also be accessed using a natively compiled stored procedure. Natively compiled
stored procedures are recommended for performance-critical OLTP operations.
Interpreted Transact-SQL access is recommended for these scenarios:
Ad hoc queries and administrative tasks.
Reporting queries, which typically use constructs not available in natively compiled stored procedures (such
as window functions, sometimes referred to as OVER functions).
To migrate performance-critical parts of your application to memory-optimized tables, with minimal (or no)
application code changes. You can potentially see performance improvements from migrating tables. If you
then migrate stored procedures to natively compiled stored procedures, you may see further performance
improvement.
When a Transact-SQL statement is not available for natively compiled stored procedures.
However, the following Transact-SQL constructs are not supported in interpreted Transact-SQL stored procedures
that access data in a memory-optimized table.
Access to tables:
TRUNCATE TABLE
MERGE (memory-optimized table as target)
Dynamic and keyset cursors (these automatically degrade to static).
Access from CLR modules, using the context connection.
Referencing a memory-optimized table from an indexed view.
Cross-database:
Cross-database queries
Cross-database transactions
Linked servers
Table Hints
For more information about table hints, see Table Hints (Transact-SQL). The SNAPSHOT table hint was added to support In-Memory OLTP.
The following table hints are not supported when accessing a memory-optimized table using interpreted Transact-SQL.
HOLDLOCK
IGNORE_CONSTRAINTS
IGNORE_TRIGGERS
NOWAIT
PAGLOCK
READCOMMITTED
READCOMMITTEDLOCK
READPAST
READUNCOMMITTED
ROWLOCK
SPATIAL_WINDOW_MAX_CELLS = integer
TABLOCK
TABLOCKX
UPDLOCK
XLOCK
When accessing a memory-optimized table from an explicit or implicit transaction using interpreted Transact-SQL,
you must do at least one of the following:
Specify an isolation level table hint such as SNAPSHOT, REPEATABLEREAD, or SERIALIZABLE.
Set the database option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT to ON.
An isolation level table hint is not required for memory-optimized tables accessed by queries running in autocommit mode.
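The following minimal sketch shows both options; dbo.SalesOrder is a placeholder memory-optimized table.

-- Option 1: an isolation level table hint in an explicit transaction.
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.SalesOrder WITH (SNAPSHOT);
COMMIT TRANSACTION;

-- Option 2: have interop access in explicit transactions elevate to SNAPSHOT.
ALTER DATABASE CURRENT SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;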
See Also
Transact-SQL Support for In-Memory OLTP
Migrating to In-Memory OLTP
Supported Features for Natively Compiled T-SQL Modules
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
This topic contains a list of T-SQL surface area and supported features in the body of natively compiled T-SQL
modules, such as stored procedures (CREATE PROCEDURE (Transact-SQL)), scalar user-defined functions, inline
table-valued functions, and triggers.
For supported features around the definition of native modules, see Supported DDL for Natively Compiled T-SQL
modules.
Query Surface Area in Native Modules
Data Modification
Control-of-flow language
Supported Operators
Built-in Functions in Natively Compiled Modules
Auditing
Table and Query Hints
Limitations on Sorting
For complete information about unsupported constructs, and for information about how to work around
some of the unsupported features in natively compiled modules, see Migration Issues for Natively
Compiled Stored Procedures. For more information about unsupported features, see Transact-SQL
Constructs Not Supported by In-Memory OLTP.
Query Surface Area in Native Modules
The following query constructs are supported:
CASE expression: CASE can be used in any statement or clause that allows a valid expression.
Applies to: SQL Server 2016.
Beginning with SQL Server 2016, CASE expressions are supported in natively compiled T-SQL modules.
SELECT clause:
Columns and name aliases (using either AS or = syntax).
Scalar subqueries
Applies to: SQL Server 2016. Beginning with SQL Server 2016, scalar subqueries are now supported in
natively compiled modules.
TOP*
SELECT DISTINCT
Applies to: SQL Server 2016. Beginning with SQL Server 2016, the DISTINCT operator is supported
in natively compiled modules.
DISTINCT aggregates are not supported.
UNION and UNION ALL
Applies to: SQL Server 2016. Beginning with SQL Server 2016, UNION and UNION ALL operators are
now supported in natively compiled modules.
Variable assignments
FROM clause:
FROM <memory optimized table or table variable>
FROM <natively compiled inline TVF>
LEFT OUTER JOIN, RIGHT OUTER JOIN, CROSS JOIN and INNER JOIN.
Applies to: SQL Server 2016. Beginning with SQL Server 2016, JOINS are now supported in natively
compiled modules.
Subqueries [AS] table_alias . For more information, see FROM (Transact-SQL).
Applies to: SQL Server 2016. Beginning with SQL Server 2016, Subqueries are now supported in
natively compiled modules.
WHERE clause:
Filter predicate IS [NOT] NULL
AND, BETWEEN
OR, NOT, IN, EXISTS
Applies to: SQL Server 2016. Beginning with SQL Server 2016, OR/NOT/IN/EXISTS operators are now
supported in natively compiled modules.
GROUP BY clause:
Aggregate functions AVG, COUNT, COUNT_BIG, MIN, MAX, and SUM.
MIN and MAX are not supported for types nvarchar, char, varchar, varbinary, and binary.
ORDER BY clause:
There is no support for DISTINCT in the ORDER BY clause.
Is supported with GROUP BY (Transact-SQL) if an expression in the ORDER BY list appears verbatim in the
GROUP BY list.
For example, GROUP BY a + b ORDER BY a + b is supported, but GROUP BY a, b ORDER BY a + b is not.
HAVING clause:
Is subject to the same expression limitations as the WHERE clause.
ORDER BY and TOP are supported in natively compiled modules, with some restrictions
There is no support for WITH TIES or PERCENT in the TOP clause.
There is no support for DISTINCT in the ORDER BY clause.
TOP combined with ORDER BY does not support more than 8,192 rows when using a constant in the TOP
clause.
This limit may be lowered in case the query contains joins or aggregate functions. (For example, with
one join (two tables), the limit is 4,096 rows. With two joins (three tables), the limit is 2,730 rows.)
You can obtain results greater than 8,192 by storing the number of rows in a variable:
DECLARE @v INT = 9000;
SELECT TOP (@v) … FROM … ORDER BY …
However, a constant in the TOP clause results in better performance compared to using a variable.
These restrictions on natively compiled Transact-SQL do not apply to interpreted Transact-SQL access on
memory-optimized tables.
Data Modification
The following DML statements are supported.
INSERT VALUES (one row per statement) and INSERT ... SELECT
UPDATE
DELETE
WHERE is supported with UPDATE and DELETE statements.
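The following minimal sketch shows these DML forms inside a natively compiled procedure; dbo.SalesOrder and its columns are placeholders for illustration.

CREATE PROCEDURE dbo.usp_TouchOrder @OrderId INT
WITH EXECUTE AS OWNER, SCHEMABINDING, NATIVE_COMPILATION
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- INSERT VALUES inserts one row per statement.
    INSERT INTO dbo.SalesOrder (OrderId, Status) VALUES (@OrderId, 1);
    -- UPDATE and DELETE support a WHERE clause.
    UPDATE dbo.SalesOrder SET Status = 2 WHERE OrderId = @OrderId;
    DELETE FROM dbo.SalesOrder WHERE OrderId = @OrderId;
END;
GO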
Control-of-flow language
The following control-of-flow language constructs are supported.
IF...ELSE (Transact-SQL)
WHILE (Transact-SQL)
RETURN (Transact-SQL)
DECLARE @local_variable (Transact-SQL) can use all Supported Data Types for In-Memory OLTP, as well as
memory-optimized table types. Variables can be declared as NULL or NOT NULL.
SET @local_variable (Transact-SQL)
TRY...CATCH (Transact-SQL)
To achieve optimal performance, use a single TRY/CATCH block for an entire natively compiled T-SQL module.
THROW (Transact-SQL)
BEGIN ATOMIC (at the outer level of the stored procedure). For more detail see Atomic Blocks.
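The following minimal sketch combines several of these constructs, with a single TRY/CATCH block wrapping the body as recommended above; dbo.Account and the error number are illustrative.

CREATE PROCEDURE dbo.usp_Credit @AccountId INT, @Amount MONEY
WITH EXECUTE AS OWNER, SCHEMABINDING, NATIVE_COMPILATION
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    BEGIN TRY
        IF @Amount <= 0
            THROW 50001, N'Amount must be positive.', 1;
        UPDATE dbo.Account SET Balance = Balance + @Amount WHERE AccountId = @AccountId;
    END TRY
    BEGIN CATCH
        -- Re-raise the original error to the caller.
        THROW;
    END CATCH
END;
GO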
Supported Operators
The following operators are supported.
Comparison Operators (Transact-SQL) (for example, >, <, >=, and <=)
Unary operators (+, -).
Binary operators (*, /, +, -, % (modulo)).
The plus operator (+) is supported on both numbers and strings.
Logical operators (AND, OR, NOT).
Bitwise operators ~, &, |, and ^
APPLY operator
Applies to: SQL Server 2017 CTP 1.1.
Beginning with SQL Server 2017 CTP 1.1, the APPLY operator is supported in natively compiled
modules.
Built-in Functions in Natively Compiled Modules
The following functions are supported in constraints on memory-optimized tables and in natively compiled T-SQL
modules.
All Mathematical Functions (Transact-SQL)
Date functions: CURRENT_TIMESTAMP, DATEADD, DATEDIFF, DATEFROMPARTS, DATEPART,
DATETIME2FROMPARTS, DATETIMEFROMPARTS, DAY, EOMONTH, GETDATE, GETUTCDATE, MONTH,
SMALLDATETIMEFROMPARTS, SYSDATETIME, SYSUTCDATETIME, and YEAR.
String functions: LEN, LTRIM, RTRIM, and SUBSTRING.
Applies to: SQL Server 2017 CTP 1.1.
Beginning with SQL Server 2017 CTP 1.1, the following built-in functions are also supported: TRIM,
TRANSLATE, and CONCAT_WS.
Identity functions: SCOPE_IDENTITY
NULL functions: ISNULL
Uniqueidentifier functions: NEWID and NEWSEQUENTIALID
JSON functions
Applies to: SQL Server 2017 CTP 1.1.
Beginning with SQL Server 2017 CTP 1.1, the JSON functions are supported in natively compiled
modules.
Error functions: ERROR_LINE, ERROR_MESSAGE, ERROR_NUMBER, ERROR_PROCEDURE,
ERROR_SEVERITY, and ERROR_STATE
System functions: @@rowcount. Statements inside natively compiled stored procedures update
@@rowcount, and you can use @@rowcount in a natively compiled stored procedure to determine the
number of rows affected by the last statement executed within that procedure.
However, @@rowcount is reset to 0 at the start and at the end of the execution of a natively compiled
stored procedure. (A sketch follows this list.)
Security functions: IS_MEMBER({'group' | 'role'}), IS_ROLEMEMBER ('role' [, 'database_principal']),
IS_SRVROLEMEMBER ('role' [, 'login']), ORIGINAL_LOGIN(), SESSION_USER, CURRENT_USER,
SUSER_ID(['login']), SUSER_SID(['login'] [, Param2]), SUSER_SNAME([server_user_sid]), SYSTEM_USER,
SUSER_NAME, USER, USER_ID(['user']), USER_NAME([id]), CONTEXT_INFO().
Executions of native modules can be nested.
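The following minimal sketch shows @@rowcount reporting the rows affected by the preceding statement; dbo.Account is a placeholder table.

CREATE PROCEDURE dbo.usp_Deactivate @AccountId INT
WITH EXECUTE AS OWNER, SCHEMABINDING, NATIVE_COMPILATION
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.Account SET IsActive = 0 WHERE AccountId = @AccountId;
    -- Reports the number of rows affected by the UPDATE above.
    SELECT @@ROWCOUNT AS RowsAffected;
END;
GO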
Auditing
Procedure level auditing is supported in natively compiled stored procedures.
For more information about auditing, see Create a Server Audit and Database Audit Specification.
Table and Query Hints
The following are supported:
INDEX, FORCESCAN, and FORCESEEK hints, either in table hints syntax or in OPTION Clause (Transact-SQL)
of the query. For more information, see Table Hints (Transact-SQL).
FORCE ORDER
LOOP JOIN hint
OPTIMIZE FOR
For more information, see Query Hints (Transact-SQL).
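The following minimal sketch shows an INDEX table hint and an OPTIMIZE FOR query hint in a natively compiled procedure; the table dbo.SalesOrder, the index ix_CustomerId, and the literal value are placeholders.

CREATE PROCEDURE dbo.usp_OrdersForCustomer @CustomerId INT
WITH EXECUTE AS OWNER, SCHEMABINDING, NATIVE_COMPILATION
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT OrderId, OrderDate
    FROM dbo.SalesOrder WITH (INDEX (ix_CustomerId))
    WHERE CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR (@CustomerId = 42));
END;
GO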
Limitations on Sorting
You can sort greater than 8,000 rows in a query that uses TOP (Transact-SQL) and an ORDER BY Clause (Transact-SQL). However, without ORDER BY Clause (Transact-SQL), TOP (Transact-SQL) can sort up to 8,000 rows (fewer
rows if there are joins).
If your query uses both the TOP (Transact-SQL) operator and an ORDER BY Clause (Transact-SQL), you can
specify up to 8192 rows for the TOP operator. If you specify more than 8192 rows you get the error message:
Msg 41398, Level 16, State 1, Procedure <procedureName>, Line <lineNumber> The TOP operator can
return a maximum of 8192 rows; <number> was requested.
If you do not have a TOP clause, you can sort any number of rows with ORDER BY.
If you do not use an ORDER BY clause, you can use any integer value with the TOP operator.
Example with TOP N = 8192: Compiles
CREATE PROCEDURE testTop
WITH EXECUTE AS OWNER, SCHEMABINDING, NATIVE_COMPILATION
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
SELECT TOP 8192 ShoppingCartId, CreatedDate, TotalPrice FROM dbo.ShoppingCart
ORDER BY ShoppingCartId DESC
END;
GO
Example with TOP N > 8192: Fails to compile.
CREATE PROCEDURE testTop
WITH EXECUTE AS OWNER, SCHEMABINDING, NATIVE_COMPILATION
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
SELECT TOP 8193 ShoppingCartId, CreatedDate, TotalPrice FROM dbo.ShoppingCart
ORDER BY ShoppingCartId DESC
END;
GO
The 8192 row limitation only applies to TOP N where N is a constant, as in the preceding examples. If you need
N greater than 8192 you can assign the value to a variable and use that variable with TOP .
Example using a variable: Compiles
CREATE PROCEDURE testTop
WITH EXECUTE AS OWNER, SCHEMABINDING, NATIVE_COMPILATION
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
DECLARE @v int = 8193
SELECT TOP (@v) ShoppingCartId, CreatedDate, TotalPrice FROM dbo.ShoppingCart
ORDER BY ShoppingCartId DESC
END;
GO
Limitations on rows returned: There are two cases that can potentially reduce the number of rows that
can be returned by the TOP operator:
Using JOINs in the query. The influence of JOINs on the limitation depends on the query plan.
Using aggregate functions or references to aggregate functions in the ORDER BY clause.
The formula to calculate a worst-case maximum supported N in TOP N is:
N = floor( 65536 / (number_of_tables * 8 + total_size_of_aggs) )
For example, with one table and no aggregates this evaluates to floor(65536 / 8) = 8192, which matches the limits stated earlier.
See Also
Natively Compiled Stored Procedures
Migration Issues for Natively Compiled Stored Procedures
Supported DDL for Natively Compiled T-SQL modules
This topic lists the supported DDL constructs for natively compiled T-SQL modules, such as stored procedures,
scalar UDFs, inline TVFs, and triggers.
For information on features and T-SQL surface area that can be used as part of natively compiled T-SQL modules,
see Supported Features for Natively Compiled T-SQL Modules.
For information about unsupported constructs, see Transact-SQL Constructs Not Supported by In-Memory OLTP.
The following are supported:
CREATE PROCEDURE (Transact-SQL)
DROP PROCEDURE (Transact-SQL)
ALTER PROCEDURE (Transact-SQL)
SELECT (Transact-SQL) and INSERT SELECT statements
SCHEMABINDING and BEGIN ATOMIC (required for natively compiled stored procedures)
For more information, see Creating Natively Compiled Stored Procedures.
NATIVE_COMPILATION
For more information, see Native Compilation of Tables and Stored Procedures.
Parameters and variables can be declared as NOT NULL (available only for natively compiled modules:
natively compiled stored procedures and natively compiled, scalar user-defined functions).
Table-valued parameters.
For more information, see Use Table-Valued Parameters (Database Engine).
EXECUTE AS OWNER, SELF, CALLER and user.
GRANT and DENY permissions on tables and procedures.
For more information, see GRANT Object Permissions (Transact-SQL) and DENY Object Permissions
(Transact-SQL).
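The following minimal sketch combines several items from the list above: a memory-optimized table type used as a table-valued parameter, EXECUTE AS OWNER, and a GRANT on the resulting procedure. All object names, and the role SomeAppRole, are placeholders.

CREATE TYPE dbo.OrderIdList AS TABLE
(
    OrderId INT NOT NULL PRIMARY KEY NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON);
GO

CREATE PROCEDURE dbo.usp_CountOrders @Ids dbo.OrderIdList READONLY
WITH EXECUTE AS OWNER, SCHEMABINDING, NATIVE_COMPILATION
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT COUNT(*) AS OrderCount FROM @Ids;
END;
GO

GRANT EXECUTE ON dbo.usp_CountOrders TO SomeAppRole;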
See Also
Natively Compiled Stored Procedures
Transact-SQL Constructs Not Supported by In-Memory OLTP
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Memory-optimized tables, natively compiled stored procedures, and user-defined functions do not support the
full Transact-SQL surface area that is supported by disk-based tables, interpreted Transact-SQL stored
procedures, and user-defined functions. When attempting to use one of the unsupported features, the server
returns an error.
The error message text mentions the type of Transact-SQL statement (feature, operation, option, for example) as
well as the name of the feature or Transact-SQL keyword. Most unsupported features will return error 10794,
with the error message text indicating the unsupported feature. The following tables list the Transact-SQL features
and keywords that can appear in the error message text, as well as the corrective action to resolve the error.
For more information on supported features with memory-optimized tables and natively compiled stored
procedures, see:
Migration Issues for Natively Compiled Stored Procedures
Transact-SQL Support for In-Memory OLTP
Unsupported SQL Server Features for In-Memory OLTP
Natively Compiled Stored Procedures
Databases That Use In-Memory OLTP
The following table lists the Transact-SQL features that are not supported, and the keywords that can appear in
the message text of an error involving an In-Memory OLTP database. The table also lists the resolution for the
error.
TYPE
NAME
RESOLUTION
Option
AUTO_CLOSE
The database option
AUTO_CLOSE=ON is not supported
with databases that have a
MEMORY_OPTIMIZED_DATA filegroup.
Option
ATTACH_REBUILD_LOG
The CREATE database option
ATTACH_REBUILD_LOG is not
supported with databases that have a
MEMORY_OPTIMIZED_DATA filegroup.
Feature
DATABASE SNAPSHOT
Creating database snapshots is not
supported with databases that have a
MEMORY_OPTIMIZED_DATA filegroup.
Feature
Replication using the sync_method
'database snapshot' or 'database
snapshot character'
Replication using the sync_method
'database snapshot' or 'database
snapshot character' is not supported
with databases that have a
MEMORY_OPTIMIZED_DATA filegroup.
Feature
DBCC CHECKDB
DBCC CHECKDB skips the memory-optimized tables in the database.
DBCC CHECKTABLE
DBCC CHECKTABLE will fail for
memory-optimized tables.
Memory-Optimized Tables
The following table lists the Transact-SQL features that are not supported, and the keywords that can appear in
the message text of an error involving a memory-optimized table. The table also lists the resolution for the error.
TYPE
NAME
RESOLUTION
Feature
ON
Memory-optimized tables cannot be
placed on a filegroup or partition
scheme. Remove the ON clause from
the CREATE TABLE statement.
All memory-optimized tables are mapped to the memory-optimized filegroup.
Data type
Data type name
The indicated data type is not
supported. Replace the type with one
of the supported data types. For more
information, see Supported Data Types
for In-Memory OLTP.
Feature
Computed columns
Computed columns are not supported
for memory-optimized tables. Remove
the computed columns from the
CREATE TABLE statement.
Applies to: SQL Server 2017 CTP 1.1.
Beginning with SQL Server 2017 CTP
1.1, computed columns are supported
in memory-optimized tables and
indexes.
Feature
Replication
Replication is not supported with
memory-optimized tables.
Feature
FILESTREAM
FILESTREAM storage is not supported on columns of memory-optimized tables.
Remove the FILESTREAM keyword
from the column definition.
Feature
SPARSE
Columns of memory-optimized tables
cannot be defined as SPARSE. Remove
the SPARSE keyword from the column
definition.
Feature
ROWGUIDCOL
The option ROWGUIDCOL is not
supported for columns of memory-optimized tables. Remove the
ROWGUIDCOL keyword from the
column definition.
Feature
FOREIGN KEY
For memory-optimized tables,
FOREIGN KEY constraints are only
supported for foreign keys referencing
primary keys. Remove the constraint
from the table definition if the foreign
key references a unique constraint.
Feature
clustered index
Specify a nonclustered index. In the
case of a primary key index be sure to
specify PRIMARY KEY
NONCLUSTERED [HASH].
Feature
DDL inside transactions
Memory-optimized tables and natively
compiled stored procedures cannot be
created or dropped in the context of a
user transaction. Do not start a
transaction and ensure the session
setting IMPLICIT_TRANSACTIONS is
OFF before executing the CREATE or
DROP statement.
Feature
DDL triggers
Memory-optimized tables and natively
compiled stored procedures cannot be
created or dropped if there is a server
or database trigger for that DDL
operation. Remove the server and
database triggers on CREATE/DROP
TABLE and CREATE/DROP PROCEDURE.
Feature
EVENT NOTIFICATION
Memory-optimized tables and natively
compiled stored procedures cannot be
created or dropped if there is a server
or database event notification for that
DDL operation. Remove the server and
database event notifications on CREATE
TABLE or DROP TABLE and CREATE
PROCEDURE or DROP PROCEDURE.
Feature
FileTable
Memory-optimized tables cannot be
created as file tables. Remove the
argument AS FileTable from the
CREATE TABLE statement
Operation
Update of primary key columns
Primary key columns in memory-optimized tables and table types
cannot be updated. If the primary key
needs to be updated, delete the old
row and insert the new row with the
updated primary key.
Operation
CREATE INDEX
Indexes on memory-optimized tables
must be specified inline with the
CREATE TABLE statement, or with the
ALTER TABLE statement.
Operation
CREATE FULLTEXT INDEX
Fulltext indexes are not supported for
memory-optimized tables.
Operation
schema change
Memory-optimized tables and natively
compiled stored procedures do not
support schema changes, for example,
sp_rename.
Attempting to make certain schema
changes will generate error 12320.
Operations that require a change to
the schema version, for example
renaming, are not supported with
memory-optimized tables.
Certain schema changes using ALTER
TABLE and ALTER PROCEDURE are
allowed.
Applies to: SQL Server 2016.
Beginning with SQL Server 2016,
sp_rename is supported.
Operation
TRUNCATE TABLE
The TRUNCATE operation is not
supported for memory-optimized
tables. To remove all rows from a table,
delete all rows using DELETE FROM the table, or drop and recreate the
table.
Operation
ALTER AUTHORIZATION
Changing the owner of an existing
memory-optimized table or natively
compiled stored procedure is not
supported. Drop and recreate the table
or procedure to change ownership.
Operation
ALTER SCHEMA
Transferring a securable between
schemas.
Operation
DBCC CHECKTABLE
DBCC CHECKTABLE is not supported
with memory-optimized tables.
Feature
ANSI_PADDING OFF
The session option ANSI_PADDING
must be ON when creating memory-optimized tables or natively compiled
stored procedures. Execute SET
ANSI_PADDING ON before running
the CREATE statement.
Option
DATA_COMPRESSION
Data compression is not supported for
memory-optimized tables. Remove the
option from the table definition.
Feature
DTC
Memory-optimized tables and natively
compiled stored procedures cannot be
accessed from distributed transactions.
Use SQL transactions instead.
Operation
Memory-optimized tables as target of
MERGE
Memory-optimized tables cannot be
the target of a MERGE operation. Use
INSERT, UPDATE, or DELETE
statements instead.
Indexes on Memory-Optimized Tables
The following table lists the Transact-SQL features and keywords that can appear in the message text of an error
involving an index on a memory-optimized table, as well as the corrective action to resolve the error.
TYPE
NAME
RESOLUTION
Feature
Filtered index
Filtered indexes are not supported with
memory-optimized tables. Omit the
WHERE clause from the index
specification.
Feature
Included columns
Specifying included columns is not
necessary for memory-optimized
tables. All columns of the memory-optimized table are implicitly included
in every memory-optimized index.
Operation
DROP INDEX
Dropping indexes on memory-optimized tables is not supported. You
can delete indexes using ALTER TABLE.
For more information, see Altering
Memory-Optimized Tables.
Index option
Index option
Only one index option is supported –
BUCKET_COUNT for HASH indexes.
Nonclustered Hash Indexes
The following table lists the Transact-SQL features and keywords that can appear in the message text of an error
involving a nonclustered hash index, as well as the corrective action to resolve the error.
TYPE
NAME
RESOLUTION
Option
ASC/DESC
Nonclustered hash indexes are not
ordered. Remove the keywords ASC
and DESC from the index key
specification.
Natively Compiled Stored Procedures and User-Defined Functions
The following table lists the Transact-SQL features and keywords that can appear in the message text of an error
involving natively compiled stored procedures and user-defined functions, as well as the corrective action to
resolve the error.
TYPE
FEATURE
RESOLUTION
Feature
Inline table variables
Table types cannot be declared inline
with variable declarations. Table types
must be declared explicitly using a
CREATE TYPE statement.
Feature
Cursors
Cursors are not supported on or in
natively compiled stored procedures.
When executing the procedure from
the client, use RPC rather than the
cursor API. With ODBC, avoid the
Transact-SQL statement EXECUTE,
instead specify the name of the
procedure directly.
When executing the procedure from a
Transact-SQL batch or another stored
procedure, avoid using a cursor with
the natively compiled stored procedure.
When creating a natively compiled
stored procedure, rather than using a
cursor, use set-based logic or a WHILE
loop.
Feature
Non-constant parameter defaults
When using default values with
parameters on natively compiled stored
procedures, the values must be
constants. Remove any wildcards from
the parameter declarations.
Feature
EXTERNAL
CLR stored procedures cannot be
natively compiled. Either remove the AS
EXTERNAL clause or the
NATIVE_COMPILATION option from
the CREATE PROCEDURE statement.
Feature
Numbered stored procedures
Natively compiled stored procedures
cannot be numbered. Remove the
;number from the CREATE
PROCEDURE statement.
Feature
multi-row INSERT … VALUES
statements
Cannot insert multiple rows using the
same INSERT statement in a natively
compiled stored procedure. Create
INSERT statements for each row.
Feature
Common Table Expressions (CTEs)
Common table expressions (CTE) are
not supported in natively compiled
stored procedures. Rewrite the query.
Feature
COMPUTE
The COMPUTE clause is not supported.
Remove it from the query.
Feature
SELECT INTO
The INTO clause is not supported with
the SELECT statement. Rewrite the
query as INSERT INTO ... SELECT.
Feature
incomplete insert column list
In general, in INSERT statements values
must be specified for all columns in the
table.
However, we do support DEFAULT
constraints and IDENTITY(1,1) columns
on memory-optimized tables. These
columns can be, and in the case of
IDENTITY columns must be, omitted
from the INSERT column list.
Feature
Function
Some built-in functions are not
supported in natively compiled stored
procedures. Remove the rejected
function from the stored procedure. For
more information about supported
built-in functions, see
Supported Features for Natively
Compiled T-SQL Modules, or
Natively Compiled Stored Procedures.
Feature
CASE
The CASE statement is not supported
in queries inside natively compiled
stored procedures. Create queries for
each case. For more information, see
Implementing a CASE Expression in a
Natively Compiled Stored Procedure.
Feature
INSERT EXECUTE
Remove the reference.
Feature
EXECUTE
Supported only to execute natively
compiled stored procedures and user-defined functions.
Feature
user-defined aggregates
User-defined aggregate functions
cannot be used in natively compiled
stored procedures. Remove the
reference to the function from the
procedure.
Feature
browse mode metadata
Natively compiled stored procedures do
not support browse mode metadata.
Make sure the session option
NO_BROWSETABLE is set to OFF.
Feature
DELETE with FROM clause
The FROM clause is not supported for
DELETE statements with a table source
in natively compiled stored procedures.
DELETE with the FROM clause is
supported when it is used to indicate
the table to delete from.
Feature
UPDATE with FROM clause
The FROM clause is not supported for
UPDATE statements in natively
compiled stored procedures.
Feature
temporary procedures
Temporary stored procedures cannot
be natively compiled. Either create a
permanent natively compiled stored
procedure or a temporary interpreted
Transact-SQL stored procedure.
Isolation level
READ UNCOMMITTED
The isolation level READ
UNCOMMITTED is not supported for
natively compiled stored procedures.
Use a supported isolation level, such as
SNAPSHOT.
Isolation level
READ COMMITTED
The isolation level READ COMMITTED
is not supported for natively compiled
stored procedures. Use a supported
isolation level, such as SNAPSHOT.
Feature
temporary tables
Tables in tempdb cannot be used in
natively compiled stored procedures.
Instead, use a table variable or a
memory-optimized table with
DURABILITY=SCHEMA_ONLY.
Feature
DTC
Memory-optimized tables and natively
compiled stored procedures cannot be
accessed from distributed transactions.
Use SQL transactions instead.
Feature
EXECUTE WITH RECOMPILE
The option WITH RECOMPILE is not
supported for natively compiled stored
procedures.
Feature
Execution from the dedicated
administrator connection.
Natively compiled stored procedures
cannot be executed from the dedicated
admin connection (DAC). Use a regular
connection instead.
Operation
savepoint
Natively compiled stored procedures
cannot be invoked from transactions
that have an active savepoint. Remove
the savepoint from the transaction.
Operation
ALTER AUTHORIZATION
Changing the owner of an existing
memory-optimized table or natively
compiled stored procedure is not
supported. Drop and recreate the table
or procedure to change ownership.
Operator
OPENROWSET
This operator is not supported. Remove
OPENROWSET from the natively
compiled stored procedure.
Operator
OPENQUERY
This operator is not supported. Remove
OPENQUERY from the natively
compiled stored procedure.
Operator
OPENDATASOURCE
This operator is not supported. Remove
OPENDATASOURCE from the natively
compiled stored procedure.
Operator
OPENXML
This operator is not supported. Remove
OPENXML from the natively compiled
stored procedure.
Operator
CONTAINSTABLE
This operator is not supported. Remove
CONTAINSTABLE from the natively
compiled stored procedure.
Operator
FREETEXTTABLE
This operator is not supported. Remove
FREETEXTTABLE from the natively
compiled stored procedure.
Feature
table-valued functions
Table-valued functions cannot be
referenced from natively compiled
stored procedures. One possible
workaround for this restriction is to add
the logic in the table-valued functions
to the procedure body.
Operator
CHANGETABLE
This operator is not supported. Remove
CHANGETABLE from the natively
compiled stored procedure.
Operator
GOTO
This operator is not supported. Use
other procedural constructs such as
WHILE.
Operator
OFFSET
This operator is not supported. Remove
OFFSET from the natively compiled
stored procedure.
Operator
INTERSECT
This operator is not supported. Remove
INTERSECT from the natively compiled
stored procedure. In some cases an
INNER JOIN can be used to obtain the
same result.
Operator
EXCEPT
This operator is not supported. Remove
EXCEPT from the natively compiled
stored procedure.
Operator
APPLY
This operator is not supported. Remove
APPLY from the natively compiled
stored procedure.
Applies to: SQL Server 2017 CTP 1.1.
Beginning with SQL Server 2017 CTP
1.1, the APPLY operator is supported in
natively compiled modules.
Operator
PIVOT
This operator is not supported. Remove
PIVOT from the natively compiled
stored procedure.
Operator
UNPIVOT
This operator is not supported. Remove
UNPIVOT from the natively compiled
stored procedure.
Operator
CONTAINS
This operator is not supported. Remove
CONTAINS from the natively compiled
stored procedure.
Operator
FREETEXT
This operator is not supported. Remove
FREETEXT from the natively compiled
stored procedure.
Operator
TSEQUAL
This operator is not supported. Remove
TSEQUAL from the natively compiled
stored procedure.
Operator
LIKE
This operator is not supported. Remove
LIKE from the natively compiled stored
procedure.
Operator
NEXT VALUE FOR
Sequences cannot be referenced inside
natively compiled stored procedures.
Obtain the value using interpreted
Transact-SQL, and then pass it into the
natively compiled stored procedure
(a sketch follows this table). For
more information, see Implementing
IDENTITY in a Memory-Optimized
Table.
Set option
option
SET options cannot be changed inside
natively compiled stored procedures.
Certain options can be set with the
BEGIN ATOMIC statement. For more
information, see the section on atomic
blocks in Natively Compiled Stored
Procedures.
Operand
TABLESAMPLE
This operator is not supported. Remove
TABLESAMPLE from the natively
compiled stored procedure.
Option
RECOMPILE
Natively compiled stored procedures
are compiled at create time. Remove
RECOMPILE from the procedure
definition.
You can execute sp_recompile on a
natively compiled stored procedure,
which causes it to recompile on the
next execution.
Option
ENCRYPTION
This option is not supported. Remove
ENCRYPTION from the procedure
definition.
Option
FOR REPLICATION
Natively compiled stored procedures
cannot be created for replication.
Remove FOR REPLICATION from the
procedure definition.
Option
FOR XML
This option is not supported. Remove
FOR XML from the natively compiled
stored procedure.
Option
FOR BROWSE
This option is not supported. Remove
FOR BROWSE from the natively
compiled stored procedure.
Join hint
HASH, MERGE
Natively compiled stored procedures
only support nested-loops joins. Hash
and merge joins are not supported.
Remove the join hint.
Query hint
Query hint
This query hint is not supported inside
natively compiled stored procedures.
For supported query hints, see Query Hints
(Transact-SQL).
Option
PERCENT
This option is not supported with TOP
clauses. Remove PERCENT from the
query in the natively compiled stored
procedure.
Option
WITH TIES
This option is not supported with TOP
clauses. Remove WITH TIES from the
query in the natively compiled stored
procedure.
Aggregate function
Aggregate function
This clause is not supported. For more
information about aggregate functions
in natively compiled stored procedures,
see Natively Compiled Stored
Procedures.
Ranking function
Ranking function
Ranking functions are not supported in
natively compiled stored procedures.
Remove them from the procedure
definition.
Function
Function
This function is not supported. Remove
it from the natively compiled stored
procedure.
Statement
Statement
This statement is not supported.
Remove it from the natively compiled
stored procedure.
Feature
MIN and MAX used with binary and
character strings
The aggregate functions MIN and MAX
cannot be used for character and
binary string values inside natively
compiled stored procedures.
Feature
GROUP BY ALL
ALL cannot be used with GROUP BY
clauses in natively compiled stored
procedures. Remove ALL from the
GROUP BY clause.
Feature
GROUP BY ()
Grouping by an empty list is not
supported. Either remove the GROUP
BY clause, or include columns in the
grouping list.
Feature
ROLLUP
ROLLUP cannot be used with GROUP
BY clauses in natively compiled stored
procedures. Remove ROLLUP from the
procedure definition.
Feature
CUBE
CUBE cannot be used with GROUP BY
clauses in natively compiled stored
procedures. Remove CUBE from the
procedure definition.
Feature
GROUPING SETS
GROUPING SETS cannot be used with
GROUP BY clauses in natively compiled
stored procedures. Remove
GROUPING SETS from the procedure
definition.
Feature
BEGIN TRANSACTION, COMMIT
TRANSACTION, and ROLLBACK
TRANSACTION
Use ATOMIC blocks to control
transactions and error handling. For
more information, see Atomic Blocks.
Feature
Inline table variable declarations.
Table variables must reference explicitly
defined memory-optimized table types.
You should create a memory-optimized
table type and use that type for the
variable declaration, rather than
specifying the type inline.
Feature
Disk-based tables
Disk-based tables cannot be accessed
from natively compiled stored
procedures. Remove references to disk-based tables from the natively compiled
stored procedures, or migrate the
disk-based table(s) to memory-optimized tables.
Feature
Views
Views cannot be accessed from natively
compiled stored procedures. Instead of
views, reference the underlying base
tables.
Feature
Table valued functions
Table-valued functions cannot be
accessed from natively compiled stored
procedures. Remove references to
table-valued functions from the
natively compiled stored procedure.
Option
PRINT
Remove reference
Feature
DDL
No DDL is supported.
Option
STATISTICS XML
Not supported. When you run a query
with STATISTICS XML enabled, the XML
content is returned without the part for
the natively compiled stored procedure.
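As noted in the NEXT VALUE FOR row above, a sequence value can be obtained in interpreted Transact-SQL and then passed into the natively compiled procedure. A minimal sketch follows; dbo.OrderSeq and dbo.usp_InsertOrder are placeholders.

-- Interpreted Transact-SQL obtains the sequence value...
DECLARE @NextOrderId BIGINT = NEXT VALUE FOR dbo.OrderSeq;
-- ...and passes it to the natively compiled procedure as a parameter.
EXEC dbo.usp_InsertOrder @OrderId = @NextOrderId;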
Transactions that Access Memory-Optimized Tables
The following table lists the Transact-SQL features and keywords that can appear in the message text of an error
involving transactions that access memory-optimized tables, as well as the corrective action to resolve the error.
TYPE
NAME
RESOLUTION
Feature
savepoint
Creating explicit savepoints in
transactions that access memory-optimized tables is not supported.
Feature
bound transaction
Bound sessions cannot participate in
transactions that access memory-optimized tables. Do not bind the
session before executing the procedure.
Feature
DTC
Transactions that access memory-optimized tables cannot be distributed
transactions.
See Also
Migrating to In-Memory OLTP
Migrating to In-Memory OLTP
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
The articles in this section of the Table of Contents discuss how to adopt In-Memory OLTP in an existing
application by migrating database objects to use In-Memory OLTP (Online Transaction Processing).
Determining if a Table or Stored Procedure Should Be Ported to In-Memory OLTP
Memory Optimization Advisor
Native Compilation Advisor
PowerShell Cmdlet for Migration Evaluation
Implementing SQL_VARIANT in a Memory-Optimized Table
Migration Issues for Natively Compiled Stored Procedures
Migrating Computed Columns
Migrating Triggers
Cross-Database Queries
Implementing IDENTITY in a Memory-Optimized Table
For information about migration methodologies, see In-Memory OLTP – Common Workload Patterns and
Migration Considerations.
See Also
In-Memory OLTP (In-Memory Optimization)
Estimate Memory Requirements for Memory-Optimized Tables
Determining if a Table or Stored Procedure Should Be Ported to In-Memory OLTP
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
The Transaction Performance Analysis report in SQL Server Management Studio helps you evaluate if In-Memory
OLTP will improve your database application’s performance. The report also indicates how much work you must
do to enable In-Memory OLTP in your application. After you identify a disk-based table to port to In-Memory
OLTP, you can use the Memory Optimization Advisor, to help you migrate the table. Similarly, the Native
Compilation Advisor will help you port a stored procedure to a natively compiled stored procedure. For
information about migration methodologies, see In-Memory OLTP – Common Workload Patterns and Migration
Considerations.
The Transaction Performance Analysis report is run directly against the production database, or a test database
with an active workload that is similar to the production workload.
The report and migration advisors help you accomplish the following tasks:
Analyze your workload to determine hot spots where In-Memory OLTP can potentially help to improve
performance. The Transaction Performance Analysis report recommends tables and stored procedures that
would benefit most from conversion to In-Memory OLTP.
Help you plan and execute your migration to In-Memory OLTP. The migration path from a disk-based table
to a memory-optimized table can be time consuming. The Memory-Optimization Advisor helps you
identify the incompatibilities in your table that you must remove before moving the table to In-Memory
OLTP. The Memory-Optimization Advisor also helps you understand the impact that the migration of a
table to a memory-optimized table will have on your application.
Use the report and the migration advisors when you want to see if your application would benefit from
In-Memory OLTP, when you plan your migration to In-Memory OLTP, and whenever you work to migrate
some of your tables and stored procedures to In-Memory OLTP.
IMPORTANT
The performance of a database system is dependent on a variety of factors, not all of which the transaction
performance collector can observe and measure. Therefore, the transaction performance analysis report does not
guarantee actual performance gains will match its predictions, if any predictions are made.
The Transaction Performance Analysis report and the migration advisors are installed as part of SQL Server
Management Studio (SSMS) when you select Management Tools - Basic or Management Tools - Advanced when you install SQL Server 2016, or when you Download SQL Server Management Studio.
Transaction Performance Analysis Reports
You can generate transaction performance analysis reports in Object Explorer by right-clicking on the database,
selecting Reports, then Standard Reports, and then Transaction Performance Analysis Overview. The
database needs to have an active workload, or a recent run of a workload, in order to generate a meaningful
analysis report.
The details report for a table consists of three sections:
Scan Statistics Section
This section includes a single table that shows the statistics that were collected about scans on the database
table. The columns are:
Percent of total accesses. The percentage of scans and seeks on this table with respect to the activity
of the entire database. The higher this percentage, the more heavily used the table is compared to
other tables in the database.
Lookup Statistics/Range Scan Statistics. This column records the number of point lookups and range
scans (index scans and table scans) conducted on the table during profiling. Average per transaction
is an estimate.
Interop Gain and Native Gain. These columns estimate the amount of performance benefit a point
lookup or range scan would have if the table is converted to a memory-optimized table.
Contention Statistics Section
This section includes a table that shows contention on the database table. For more information regarding
database latches and locks, please see Locking Architecture. The columns are as follows:
Percent of total waits. The percentage of latch and lock waits on this database table compared to
activity of the database. The higher this percentage, the more heavily used the table is compared to
other tables in the database.
Latch Statistics. These columns record the number of latch waits for queries involving this table.
For information on latches, see Latching. The higher this number, the more latch contention on the
table.
Lock Statistics. This group of columns record the number of page lock acquisitions and waits for
queries for this table. For more information on locks, see Understanding Locking in SQL Server. The
more waits, the more lock contention on the table.
Migration Difficulties Section
This section includes a table that shows the difficulty of converting this database table to a memory-optimized table. A higher difficulty rating indicates that the table is more difficult to convert. To see details about
converting this database table, please use the Memory Optimization Advisor.
Scan and contention statistics on the table details report is gathered and aggregated from
sys.dm_db_index_operational_stats (Transact-SQL).
A stored procedure with a high ratio of CPU time to elapsed time is a candidate for migration. The report
shows all table references, because natively compiled stored procedures can only reference memory-optimized tables, which can add to the migration cost.
The details report for a stored procedure consists of two sections:
Execution Statistics Section
This section includes a table that shows the statistics that were collected about the stored procedure’s
executions. The columns are as follows:
Cached Time. The time this execution plan is cached. If the stored procedure drops out of the plan
cache and re-enters, there will be times for each cache.
Total CPU Time. The total CPU time that the stored procedure consumed during profiling. The higher
this number, the more CPU the stored procedure used.
Total Execution Time. The total amount of execution time the stored procedure used during profiling.
The higher the difference between this number and the CPU time is, the less efficiently the stored
procedure is using the CPU.
Total Cache Misses. The number of cache misses (reads from physical storage) caused by the
stored procedure’s executions during profiling.
Execution Count. The number of times this stored procedure executed during profiling.
Table References Section
This section includes a table that shows the tables to which this stored procedure refers. Before converting
the stored procedure into a natively compiled stored procedure, all of these tables must be converted to
memory-optimized tables, and they must stay on the same server and database.
Execution Statistics on the stored procedure details report is gathered and aggregated from
sys.dm_exec_procedure_stats (Transact-SQL). The references are obtained from
sys.sql_expression_dependencies (Transact-SQL).
To see details about how to convert a stored procedure to a natively compiled stored procedure, please use
the Native Compilation Advisor.
Generating In-Memory OLTP Migration Checklists
Migration checklists identify any table or stored procedure features that are not supported with memory-optimized tables or natively compiled stored procedures. The memory-optimization and native compilation
advisors can generate a checklist for a single disk-based table or interpreted T-SQL stored procedure. It is also
possible to generate migration checklists for multiple tables and stored procedures in a database.
You can generate a migration checklist in SQL Server Management Studio by using the Generate In-Memory
OLTP Migration Checklists command or by using PowerShell.
To generate a migration checklist using the UI command
1. In Object Explorer, right click a database other than the system database, click Tasks, and then click
Generate In-Memory OLTP Migration Checklists.
2. In the Generate In-Memory OLTP Migration Checklists dialog box, click Next to navigate to the Configure
Checklist Generation Options page. On this page do the following.
a. Enter a folder path in the Save checklist to box.
b. Verify that Generate checklists for specific tables and stored procedures is selected.
c. Expand the Table and Stored Procedure nodes in the section box.
d. Select a few objects in the selection box.
3. Click Next and confirm that the list of tasks matches your settings on the Configure Checklist
Generation Options page.
4. Click Finish, and then confirm that migration checklist reports were generated only for the objects you
selected.
You can verify the accuracy of the reports by comparing them to reports generated by the Memory
Optimization Advisor tool and the Native Compilation Advisor tool. For more information, see Memory
Optimization Advisor and Native Compilation Advisor.
To generate a migration checklist using SQL Server PowerShell
1. In Object Explorer, click on a database and then click Start PowerShell. Verify that the following prompt
appears.
PS SQLSERVER:\SQL\{Instance Name}\DEFAULT\Databases\{two-part DB Name}>
2. Enter the following command.
Save-SqlMigrationReport -FolderPath "<folder_path>"
3. Verify the following.
The folder path is created, if it doesn’t already exist.
The migration checklist report is generated for all tables and stored procedures in the database, and
the report is in the location specified by folder_path.
To generate a migration checklist using Windows PowerShell
1. Start an elevated Windows PowerShell session.
2. Enter the following commands. The object can either be a table or a stored procedure.
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
Save-SqlMigrationReport -Server "<instance_name>" -Database "<db_name>" -FolderPath "<folder_path1>"
Save-SqlMigrationReport -Server "<instance_name>" -Database "<db_name>" -Object <object_name> -FolderPath "<folder_path2>"
3. Verify the following.
A migration checklist report is generated for all tables and stored procedures in the database, and
the report is in the location specified by folder_path1.
A migration checklist report for the specified object is the only report in the location specified by folder_path2.
See Also
Migrating to In-Memory OLTP
Memory Optimization Advisor
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Transaction Performance Analysis reports (see Determining if a Table or Stored Procedure Should Be Ported to In-Memory OLTP) inform you about which tables in your database will benefit if ported to use In-Memory OLTP.
After you identify a table that you would like to port to use In-Memory OLTP, you can use the memory
optimization advisor in SQL Server Management Studio to help you migrate the disk-based table to a memory-optimized table.
The memory-optimization advisor allows you to:
Identify any features used in a disk-based table that are not supported for memory-optimized tables.
Migrate a table and data to memory-optimized (if there are no unsupported features).
For information about migration methodologies, see In-Memory OLTP – Common Workload Patterns and
Migration Considerations.
Walkthrough Using the Memory-Optimization Advisor
In Object Explorer, right click the table you want to convert, and select Memory-Optimization Advisor. This
will display the welcome page for the Table Memory Optimization Advisor.
Memory Optimization Checklist
When you click Next in the welcome page for the Table Memory Optimization Advisor, you will see the
memory optimization checklist. Memory-optimized tables do not support all the features in a disk-based table.
The memory optimization checklist reports if the disk-based table uses any features that are incompatible with a
memory-optimized table. The Table Memory Optimization Advisor does not modify the disk-based table so
that it can be migrated to use In-Memory OLTP. You must make those changes before continuing migration. For
each incompatibility found, the Table Memory Optimization Advisor displays a link to information that can
help you modify your disk-based tables.
If you wish to keep a list of these incompatibilities in order to plan your migration, click Generate Report to generate
an HTML list.
If your table has no incompatibilities and you are connected to a SQL Server 2014 instance with In-Memory OLTP,
click Next.
Memory Optimization Warnings
The next page, memory optimization warnings, contains a list of issues that do not prevent the table from being
migrated to use In-Memory OLTP, but that may cause the behavior of other objects (such as stored procedures or
CLR functions) to fail or result in unexpected behavior.
The first several warnings in the list are informational and may or may not apply to your table. Links in the right-hand column of the table will take you to more information.
The warning table will also display potential warning conditions that are not present in your table.
Actionable warnings will have a yellow triangle in the left-hand column. If there are actionable warnings, you
should exit the migration, resolve the warnings, and then restart the process. If you do not resolve the warnings,
your migrated table may cause a failure.
Click Generate Report to generate an HTML report of these warnings. Click Next to proceed.
Review Optimization Options
The next screen lets you modify options for the migration to In-Memory OLTP:
Memory-optimized filegroup
The name for your memory-optimized filegroup. A database must have a memory-optimized filegroup with at
least one file before a memory-optimized table can be created.
If you do not have a memory-optimized filegroup, you can change the default name. Memory-optimized
filegroups cannot be deleted. The existence of a memory-optimized filegroup may disable some database-level
features such as AUTO CLOSE and database mirroring.
If a database already has a memory-optimized file group, this field will be pre-populated with its name and you
will not be able to change the value of this field.
Logical file name and File path
The name of the file that will contain the memory-optimized table. A database must have a memory-optimized filegroup with at least one file before a memory-optimized table can be created.
If you do not have an existing memory-optimized filegroup, you can change the default name and path of the file
to be created at the end of the migration process.
If you have an existing memory-optimized filegroup, these fields will be pre-populated and you will not be able to
change the values.
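If you prefer to prepare the filegroup and file yourself before running the advisor, the following sketch shows the equivalent Transact-SQL (the database, filegroup, and path names are hypothetical placeholders):

ALTER DATABASE MyDatabase
ADD FILEGROUP MyDatabase_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO
-- The "file" for a memory-optimized filegroup is a directory of checkpoint files.
ALTER DATABASE MyDatabase
ADD FILE (NAME = N'MyDatabase_mod_dir', FILENAME = N'C:\DATA\MyDatabase_mod_dir')
TO FILEGROUP MyDatabase_mod;
GO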
Rename the original table as
At the end of the migration process, a new memory-optimized table will be created with the current name of the
table. To avoid a name conflict, the current table must be renamed. You may change that name in this field.
Estimated current memory cost (MB)
The Memory-Optimization Advisor estimates the amount of memory the new memory-optimized table will
consume based on metadata of the disk-based table. The calculation of the table size is explained in Table and
Row Size in Memory-Optimized Tables.
If sufficient memory is not allotted, the migration process may fail.
Also copy table data to the new memory optimized table
Select this option if you wish to also move the data in the current table to the new memory-optimized table. If this
option is not selected, the new memory-optimized table will be created with no rows.
The table will be migrated as a durable table by default
In-Memory OLTP supports non-durable tables, which offer superior performance compared to durable memory-optimized tables. However, data in a non-durable table will be lost upon server restart.
If you select the option to migrate to a non-durable table, the Memory-Optimization Advisor will create a non-durable table instead of a durable table.
WARNING
Select this option only if you understand the risk of data loss associated with non-durable tables.
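For reference, a non-durable memory-optimized table is one created with DURABILITY = SCHEMA_ONLY, as in this hedged sketch (the table and column names are hypothetical):

CREATE TABLE dbo.SessionCache
(
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 131072),
    Payload   NVARCHAR(4000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
-- The schema survives a restart; the rows do not.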
Click Next to continue.
Review Primary Key Conversion
The next screen is Review Primary Key Conversion. The Memory-Optimization Advisor detects whether the table has one or more primary keys and populates the list of columns based on the primary key metadata.
If the table has no primary key and you wish to migrate to a durable memory-optimized table, you must create one.
If a primary key doesn’t exist and the table is being migrated to a non-durable table, this screen will not appear.
For textual columns (columns with types char, nchar, varchar, and nvarchar) you must select an appropriate
collation. In-Memory OLTP only supports BIN2 collations for columns on a memory-optimized table and it does
not support collations with supplementary characters. See Collations and Code Pages for information on the
collations supported and the potential impact of a change in collation.
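For example, a hypothetical table definition whose character key column uses a supported BIN2 collation might look like this sketch:

CREATE TABLE dbo.Customers_inmem
(
    -- BIN2 collation on the character column used as the index key.
    CustomerCode NVARCHAR(20) COLLATE Latin1_General_100_BIN2 NOT NULL
        PRIMARY KEY NONCLUSTERED,
    CustomerName NVARCHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);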
You can configure the following parameters for the primary key:
Select a new name for this primary key
The primary key name for this table must be unique inside the database. You may change the name of the
primary key here.
Select the type of this primary key
In-Memory OLTP supports two types of indexes on a memory-optimized table:
A NONCLUSTERED HASH index. This type of index performs best for point lookups. You may configure the bucket count for this index in the Bucket Count field.
A NONCLUSTERED index. This type of index performs best for range queries. You may configure the sort order for each column in the Sort column and order list.
To understand which type of index is best for your primary key, see Hash Indexes.
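The two choices correspond to the following table definitions (a hedged sketch with hypothetical table names and a placeholder bucket count):

-- Hash primary key, sized with BUCKET_COUNT; best for point lookups.
CREATE TABLE dbo.Orders_hash
(
    OrderId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Range (nonclustered) primary key, ordered on the key column; best for range queries.
CREATE TABLE dbo.Orders_range
(
    OrderId INT NOT NULL
        PRIMARY KEY NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON);
GO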
Click Next after you make your primary key choices.
Review Index Conversion
The next page is Review Index Conversion. The Memory-Optimization Advisor detects whether the table has one or more indexes and populates the list of columns and their data types. The parameters you can configure on the Review Index Conversion page are similar to those on the previous Review Primary Key Conversion page.
If the table only has a primary key and it’s being migrated to a durable table, this screen will not appear.
After you make a decision for every index in your table, click Next.
Verify Migration Actions
The next page is Verify Migration Actions. To script the migration operation, click Script to generate a Transact-SQL script. You may then modify and execute the script. Click Migrate to begin the table migration.
After the process is finished, refresh Object Explorer to see the new memory-optimized table and the old disk-based table. You can keep the old table or delete it at your convenience.
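The script the advisor generates is specific to your table, but it roughly follows this hedged sketch (the names and bucket count are hypothetical, and the INSERT step applies only if you chose to copy data):

-- Rename the original disk-based table.
EXEC sp_rename N'dbo.SalesOrders', N'SalesOrders_old';
GO

-- Create the memory-optimized replacement under the original name.
CREATE TABLE dbo.SalesOrders
(
    order_id     INT      NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    order_date   DATETIME NOT NULL,
    order_status TINYINT  NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Copy the data (only if "Also copy table data" was selected).
INSERT INTO dbo.SalesOrders (order_id, order_date, order_status)
SELECT order_id, order_date, order_status
FROM dbo.SalesOrders_old;
GO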
See Also
Migrating to In-Memory OLTP
Native Compilation Advisor
3/24/2017 • 2 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Transaction Performance Analysis reports tell you which interpreted stored procedures in your database will benefit if ported to use native compilation. For details, see Determining if a Table or Stored Procedure Should Be Ported to In-Memory OLTP.
After you identify a stored procedure that you would like to port to use native compilation, you can use the Native
Compilation Advisor (NCA) to help you migrate the interpreted stored procedure to native compilation. For more
information about natively compiled stored procedures, see Natively Compiled Stored Procedures.
In a given interpreted stored procedure, the NCA allows you to identify all the features that are not supported in
native modules. The NCA provides documentation links to work-arounds or solutions.
For information about migration methodologies, see In-Memory OLTP – Common Workload Patterns and
Migration Considerations.
Walkthrough Using the Native Compilation Advisor
In Object Explorer, right click the stored procedure you want to convert, and select Native Compilation
Advisor. This will display the welcome page for the Stored Procedure Native Compilation Advisor. Click Next
to continue.
Stored Procedure Validation
This page reports whether the stored procedure uses any constructs that are not compatible with native compilation. If there are incompatible constructs, you can click Next to see the details.
Stored Procedure Validation Result
If there are constructs that are not compatible with native compilation, the Stored Procedure Validation Result
page will display details. You can generate a report (click Generate Report), exit the Native Compilation
Advisor, and update your code so that it is compatible with native compilation.
Code Sample
The following sample shows an interpreted stored procedure and the equivalent stored procedure for native
compilation. The sample assumes a directory called c:\data.
NOTE
As usual, the FILEGROUP element and the USE mydatabase statement apply to Microsoft SQL Server but do not apply to Azure SQL Database.
CREATE DATABASE Demo
ON
PRIMARY(NAME = [Demo_data],
FILENAME = 'C:\DATA\Demo_data.mdf', size=500MB)
, FILEGROUP [Demo_fg] CONTAINS MEMORY_OPTIMIZED_DATA(
NAME = [Demo_dir],
FILENAME = 'C:\DATA\Demo_dir')
LOG ON (name = [Demo_log], Filename='C:\DATA\Demo_log.ldf', size=500MB)
COLLATE Latin1_General_100_BIN2;
go
USE Demo;
go
CREATE TABLE [dbo].[SalesOrders]
(
[order_id] [int] NOT NULL,
[order_date] [datetime] NOT NULL,
[order_status] [tinyint] NOT NULL
CONSTRAINT [PK_SalesOrders] PRIMARY KEY NONCLUSTERED HASH
(
[order_id]
) WITH ( BUCKET_COUNT = 2097152)
) WITH ( MEMORY_OPTIMIZED = ON )
go
-- Interpreted.
CREATE PROCEDURE [dbo].[InsertOrder] @id INT, @date DATETIME2, @status TINYINT
AS
BEGIN
INSERT dbo.SalesOrders VALUES (@id, @date, @status);
END
go
-- Natively Compiled.
CREATE PROCEDURE [dbo].[InsertOrderXTP]
@id INT, @date DATETIME2, @status TINYINT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
(TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english'
)
INSERT dbo.SalesOrders VALUES (@id, @date, @status);
END
go
SELECT * from SalesOrders;
go
EXECUTE dbo.InsertOrder @id= 10, @date = '1956-01-01 12:00:00', @status = 1;
EXECUTE dbo.InsertOrderXTP @id= 11, @date = '1956-01-01 12:01:00', @status = 2;
SELECT * from SalesOrders;
See Also
Migrating to In-Memory OLTP
Requirements for Using Memory-Optimized Tables
PowerShell Cmdlet for Migration Evaluation
3/24/2017 • 1 min to read • Edit Online
The Save-SqlMigrationReport cmdlet is a tool that evaluates the migration fitness of multiple objects in a SQL
Server database. Currently, it is limited to evaluating the migration fitness for In-Memory OLTP. The cmdlet can run
in both an elevated Windows PowerShell environment and sqlps.
Syntax
Save-SqlMigrationReport [ -MigrationType OLTP ] [ -Server server -Database database [ -Object object_name ] ]
| [ -InputObject smo_object ] -FolderPath path
Parameters
The following table describes the parameters.
MigrationType: The type of migration scenario the cmdlet is targeting. Currently the only value is the default, OLTP. Optional.
Server: The name of the target SQL Server instance. Mandatory in the Windows PowerShell environment if the -InputObject parameter is not supplied. Optional in SQLPS.
Database: The name of the target SQL Server database. Mandatory in the Windows PowerShell environment if the -InputObject parameter is not supplied. Optional in SQLPS.
Object: The name of the target database object. Can be a table or a stored procedure.
InputObject: The SMO object the cmdlet should target. Mandatory in the Windows PowerShell environment if -Server and -Database are not supplied. Optional in SQLPS.
FolderPath: The folder in which the cmdlet should deposit the generated reports. Required.
Results
In the folder specified in the -FolderPath parameter, there will be two folders: Tables and Stored Procedures.
If the targeted object is a table, its report will be inside the Tables folder. Otherwise it will be inside the Stored
Procedures folder.
Implementing SQL_VARIANT in a Memory-Optimized Table
3/24/2017 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Consider an example of a table with a SQL_VARIANT column:
CREATE TABLE [dbo].[T1]([Key] [sql_variant] NOT NULL)
Assume that the sql_variant column can only hold values of type BIGINT or NVARCHAR(300). You can model this table as
follows:
-- original disk-based table
CREATE TABLE [dbo].[T1_disk]([Key] int not null primary key,
[Value] [sql_variant])
go
insert dbo.T1_disk values (1, 12345678)
insert dbo.T1_disk values (2, N'my nvarchar')
insert dbo.T1_disk values (3, NULL)
go
-- new memory-optimized table
CREATE TABLE [dbo].[T1_inmem]([Key] INT NOT NULL PRIMARY KEY NONCLUSTERED,
[Value_bi] BIGINT,
[Value_nv] NVARCHAR(300),
[Value_enum] TINYINT NOT NULL) WITH (MEMORY_OPTIMIZED=ON)
go
-- copy data
INSERT INTO dbo.T1_inmem
SELECT [Key],
CASE WHEN SQL_VARIANT_PROPERTY([Value], 'basetype') = 'bigint' THEN convert (bigint, [Value])
ELSE NULL END,
CASE WHEN SQL_VARIANT_PROPERTY([Value], 'basetype') != 'bigint' THEN convert (nvarchar(300),
[Value])
ELSE NULL END,
CASE WHEN SQL_VARIANT_PROPERTY([Value], 'basetype') = 'bigint' THEN 1
ELSE 0 END
FROM dbo.T1_disk
GO
-- select data, converting back to sql_variant [will not work inside native proc]
select [Key],
case [Value_enum] when 1 then convert(sql_variant, [Value_bi])
else convert(sql_variant, [Value_nv])
end
from dbo.T1_inmem
Alternatively, you can load data from T1 into a memory-optimized table [T1_HK] (with columns Key_bi, Key_nv, and Key_enum, analogous to the columns of T1_inmem) by opening a cursor on T1:
DECLARE T1_rows_cursor CURSOR FOR
select *
FROM dbo.T1
OPEN T1_rows_cursor
-- declare one variable for each column in the HK table
DECLARE
    @Key_bi BIGINT = 0,
    @Key_nv NVARCHAR(300) = ' ',
    @Key_enum SMALLINT,
    @Key SQL_VARIANT
FETCH NEXT FROM T1_rows_cursor INTO @key
WHILE @@FETCH_STATUS = 0
BEGIN
-- setting the input parameters for inserting into the memory-optimized table
-- convert SQL Variant types
-- @key_enum =1 represents BIGINT
if (SQL_VARIANT_PROPERTY(@Key, 'basetype') = 'bigint')
begin
set @key_bi = convert (bigint, @Key)
set @key_enum = 1
set @key_nv = 'invalid'
end
else
begin
set @Key_nv = convert (nvarchar (300), @Key)
set @Key_enum = 0
set @Key_bi = -1
end
-- inserting the row
INSERT INTO T1_HK VALUES (@Key_bi, @Key_nv, @Key_enum)
FETCH NEXT FROM T1_rows_cursor INTO @key
END
CLOSE T1_rows_cursor
DEALLOCATE T1_rows_cursor
You can convert data back to SQL_VARIANT as follows:
case [Key_enum] when 1 then convert(sql_variant, [Key_bi])
else convert(sql_variant, [Key_nv])
end
See Also
Migrating to In-Memory OLTP
Migration Issues for Natively Compiled Stored
Procedures
3/24/2017 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
This section presents several issues related to creating natively compiled stored procedures.
For more information about natively compiled stored procedures, see Natively Compiled Stored Procedures.
Creating and Accessing Tables in TempDB from Natively Compiled Stored Procedures
Simulating an IF-WHILE EXISTS Statement in a Natively Compiled Module
Implementing MERGE Functionality in a Natively Compiled Stored Procedure
Implementing a CASE Expression in a Natively Compiled Stored Procedure
Implementing UPDATE with FROM or Subqueries
Implementing an Outer Join
See Also
Migrating to In-Memory OLTP
Create and Access Tables in TempDB from Stored
Procedures
3/29/2017 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Creating and accessing tables in TempDB from natively compiled stored procedures is not supported. Instead, use
either memory-optimized tables with DURABILITY=SCHEMA_ONLY or use table types and table variables.
For more details about memory-optimization of temp table and table variable scenarios see: Faster temp table and
table variable by using memory optimization.
The following example shows how the use of a temp table with three columns (id, ProductID, Quantity) can be
replaced using a table variable @OrderQuantityByProduct of type dbo.OrderQuantityByProduct:
CREATE TYPE dbo.OrderQuantityByProduct
AS TABLE
(id INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=100000),
ProductID INT NOT NULL,
Quantity INT NOT NULL) WITH (MEMORY_OPTIMIZED=ON)
GO
CREATE PROCEDURE dbo.usp_OrderQuantityByProduct
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
(
TRANSACTION ISOLATION LEVEL = SNAPSHOT,
LANGUAGE = N'ENGLISH'
)
-- declare table variables for the list of orders
DECLARE @OrderQuantityByProduct dbo.OrderQuantityByProduct
-- populate input
INSERT @OrderQuantityByProduct (ProductID, Quantity) SELECT ProductID, Quantity FROM dbo.[Order Details]
end
See Also
Migration Issues for Natively Compiled Stored Procedures
Transact-SQL Constructs Not Supported by In-Memory OLTP
Simulating an IF-WHILE EXISTS Statement in a
Natively Compiled Module
3/24/2017 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Natively compiled stored procedures do not support the EXISTS clause in conditional statements such as IF and
WHILE.
The following example illustrates a workaround using a BIT variable with a SELECT statement to simulate an EXISTS
clause:
DECLARE @exists BIT = 0
SELECT TOP 1 @exists = 1 FROM MyTable WHERE …
IF @exists = 1
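The same pattern extends to WHILE EXISTS: refresh the flag with a probing SELECT at the top of each iteration, as in this hedged sketch (dbo.MyQueue and its columns are hypothetical):

DECLARE @exists BIT = 1, @Id INT;
WHILE @exists = 1
BEGIN
    -- Re-probe for a remaining row; reset the flag first.
    SET @exists = 0;
    SELECT TOP 1 @exists = 1, @Id = Id FROM dbo.MyQueue WHERE Processed = 0;

    IF @exists = 1
    BEGIN
        -- Process the row found by the probe.
        UPDATE dbo.MyQueue SET Processed = 1 WHERE Id = @Id;
    END
END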
See Also
Migration Issues for Natively Compiled Stored Procedures
Transact-SQL Constructs Not Supported by In-Memory OLTP
Implementing MERGE Functionality in a Natively
Compiled Stored Procedure
3/24/2017 • 2 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
The Transact-SQL code sample in this section demonstrates how you can simulate the T-SQL MERGE statement in
a natively compiled module. The sample uses a table variable with an identity column, iterates over the rows in the
table variable, and for each row performs the update if the condition matches, and an insert if the condition does
not match.
Here is the T-SQL MERGE statement that you wish was supported inside a native proc, and that the code sample
simulates.
MERGE INTO dbo.Table1 t
USING @tvp v
ON t.Column1 = v.c1
WHEN MATCHED THEN
UPDATE SET Column2 = v.c2
WHEN NOT MATCHED THEN
INSERT (Column1, Column2) VALUES (v.c1, v.c2);
Here is the T-SQL to achieve the workaround and simulate MERGE.
DROP PROCEDURE IF EXISTS dbo.usp_merge1;
go
DROP TYPE IF EXISTS dbo.Type1;
go
DROP TABLE IF EXISTS dbo.Table1;
go
-----------------------------
-- target table and table type used for the workaround
-----------------------------
CREATE TABLE dbo.Table1
(
Column1 INT NOT NULL PRIMARY KEY NONCLUSTERED,
Column2 INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);
go
CREATE TYPE dbo.Type1 AS TABLE
(
c1 INT NOT NULL,
c2 INT NOT NULL,
RowID INT NOT NULL IDENTITY(1,1),
INDEX ix_RowID HASH (RowID) WITH (BUCKET_COUNT=1024)
)
WITH (MEMORY_OPTIMIZED = ON);
go
-----------------------------
-- stored procedure implementing the workaround
-----------------------------
CREATE PROCEDURE dbo.usp_merge1
@tvp1 dbo.Type1 READONLY
WITH
NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
LANGUAGE = N'us_english')
DECLARE @i INT = 1, @c1 INT, @c2 INT;
WHILE @i > 0
BEGIN
SELECT @c1 = c1, @c2 = c2
FROM @tvp1
WHERE RowID = @i;
--test whether the row exists in the TVP; if not, we end the loop
IF @@ROWCOUNT=0
SET @i = 0
ELSE
BEGIN
-- try the update
UPDATE dbo.Table1
SET Column2 = @c2
WHERE Column1 = @c1;
-- if there was no row to update, we insert
IF @@ROWCOUNT=0
INSERT INTO dbo.Table1 (Column1, Column2)
VALUES (@c1, @c2);
SET @i += 1
END
END
END
go
-----------------------------
-- test to validate the functionality
-----------------------------
INSERT dbo.Table1 VALUES (1,2);
go
SELECT N'Before-MERGE' AS [Before-MERGE], Column1, Column2
FROM dbo.Table1;
go
DECLARE @tvp1 dbo.Type1;
INSERT @tvp1 (c1, c2) VALUES (1,33), (2,4);
EXECUTE dbo.usp_merge1 @tvp1;
go
SELECT N'After--MERGE' AS [After--MERGE], Column1, Column2
FROM dbo.Table1;
go
-----------------------------
/**** Actual output:

Before-MERGE   Column1   Column2
Before-MERGE   1         2

After--MERGE   Column1   Column2
After--MERGE   1         33
After--MERGE   2         4
****/
See Also
Migration Issues for Natively Compiled Stored Procedures
Transact-SQL Constructs Not Supported by In-Memory OLTP
Implementing a CASE Expression in a Natively
Compiled Stored Procedure
4/25/2017 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2017), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
CASE expressions are supported in natively compiled stored procedures. The following example demonstrates a way to use a CASE expression in a query. The workaround for CASE expressions in natively compiled modules, described below for earlier versions, is no longer needed.
-- Query using a CASE expression in a natively compiled stored procedure.
CREATE PROCEDURE dbo.usp_SOHOnlineOrderResult
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE=N'us_english')
SELECT
SalesOrderID,
CASE (OnlineOrderFlag)
WHEN 1 THEN N'Order placed online by customer'
ELSE N'Order placed by sales person'
END
FROM Sales.SalesOrderHeader_inmem
END
GO
EXEC dbo.usp_SOHOnlineOrderResult
GO
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
CASE expressions are not supported in natively compiled stored procedures. The following sample shows a way to
implement the functionality of a CASE expression in a natively compiled stored procedure.
The code sample uses a table variable to construct a single result set. This approach is suitable only when processing a
limited number of rows, because it involves creating an additional copy of the data rows.
You should test the performance of this workaround.
-- original query
SELECT
SalesOrderID,
CASE (OnlineOrderFlag)
WHEN 1 THEN N'Order placed online by customer'
ELSE N'Order placed by sales person'
END
FROM Sales.SalesOrderHeader_inmem
-- workaround for CASE in natively compiled stored procedures
-- use a table for the single resultset
CREATE TYPE dbo.SOHOnlineOrderResult AS TABLE
(
SalesOrderID uniqueidentifier not null index ix_SalesOrderID,
OrderFlag nvarchar(100) not null
) with (memory_optimized=on)
go
-- natively compiled stored procedure that includes the query
CREATE PROCEDURE dbo.usp_SOHOnlineOrderResult
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
(TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE=N'us_english')
-- table variable for creating the single resultset
DECLARE @result dbo.SOHOnlineOrderResult
-- CASE OnlineOrderFlag=1
INSERT @result
SELECT SalesOrderID, N'Order placed online by customer'
FROM Sales.SalesOrderHeader_inmem
WHERE OnlineOrderFlag=1
-- ELSE
INSERT @result
SELECT SalesOrderID, N'Order placed by sales person'
FROM Sales.SalesOrderHeader_inmem
WHERE OnlineOrderFlag!=1
-- return single resultset
SELECT SalesOrderID, OrderFlag FROM @result
END
GO
EXEC dbo.usp_SOHOnlineOrderResult
GO
See Also
Migration Issues for Natively Compiled Stored Procedures
Transact-SQL Constructs Not Supported by In-Memory OLTP
Implementing UPDATE with FROM or Subqueries
3/24/2017 • 2 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Natively compiled T-SQL modules do not support the FROM clause and do not support subqueries in UPDATE
statements (they are supported in SELECT). UPDATE statements with a FROM clause are typically used to update
information in a table based on a table-valued parameter (TVP), or to update columns in a table in an AFTER
trigger.
For the scenario of update based on a TVP, see Implementing MERGE Functionality in a Natively Compiled Stored
Procedure.
The sample below illustrates an update performed in a trigger: the LastUpdated column of the table is set to the current date/time AFTER updates. The workaround uses a table variable with an identity column and a WHILE loop to iterate over the rows in the table variable and perform individual updates.
Here is the original T-SQL UPDATE statement:
UPDATE dbo.Table1
SET LastUpdated = SysDateTime()
FROM
dbo.Table1 t
JOIN Inserted i ON t.Id = i.Id;
The sample T-SQL code in this section demonstrates a workaround that provides good performance. The workaround is implemented in a natively compiled trigger. The crucial things to notice in the code are:
The type named dbo.Type1, which is a memory-optimized table type.
The WHILE loop in the trigger.
The loop retrieves the rows from Inserted one at a time.
DROP TABLE IF EXISTS dbo.Table1;
go
DROP TYPE IF EXISTS dbo.Type1;
go
-----------------------------
-- Table and table type
-----------------------------
CREATE TABLE dbo.Table1
(
    Id          INT       NOT NULL PRIMARY KEY NONCLUSTERED,
    Column2     INT       NOT NULL,
    LastUpdated DATETIME2 NOT NULL DEFAULT (SYSDATETIME())
)
WITH (MEMORY_OPTIMIZED = ON);
go
CREATE TYPE dbo.Type1 AS TABLE
(
    Id    INT NOT NULL,
    RowID INT NOT NULL IDENTITY,
INDEX ix_RowID HASH (RowID) WITH (BUCKET_COUNT=1024)
)
WITH (MEMORY_OPTIMIZED = ON);
go
-----------------------------
-- trigger that contains the workaround for UPDATE with FROM
-----------------------------
CREATE TRIGGER dbo.tr_a_u_Table1
ON dbo.Table1
WITH NATIVE_COMPILATION, SCHEMABINDING
AFTER UPDATE
AS
BEGIN ATOMIC WITH
(
TRANSACTION ISOLATION LEVEL = SNAPSHOT,
LANGUAGE = N'us_english'
)
DECLARE @tabvar1 dbo.Type1;
INSERT @tabvar1 (Id)
SELECT Id FROM Inserted;
DECLARE
@i INT = 1, @Id INT,
@max INT = SCOPE_IDENTITY();
---- Loop as a workaround to simulate a cursor.
---- Iterate over the rows in the memory-optimized table
---- variable and perform an update for each row.
WHILE @i <= @max
BEGIN
SELECT @Id = Id
FROM @tabvar1
WHERE RowID = @i;
UPDATE dbo.Table1
SET LastUpdated = SysDateTime()
WHERE Id = @Id;
SET @i += 1;
END
END
go
-----------------------------
-- Test to verify functionality
-----------------------------
SET NOCOUNT ON;
INSERT dbo.Table1 (Id, Column2)
VALUES (1,9), (2,9), (3,600);
SELECT N'BEFORE-Update' AS [BEFORE-Update], *
FROM dbo.Table1
ORDER BY Id;
WAITFOR DELAY '00:00:01';
UPDATE dbo.Table1
SET Column2 += 1
WHERE Column2 <= 99;
SELECT N'AFTER--Update' AS [AFTER--Update], *
FROM dbo.Table1
ORDER BY Id;
go
-----------------------------
/**** Actual output:

BEFORE-Update   Id   Column2   LastUpdated
BEFORE-Update   1    9         2016-04-20 21:18:42.8394659
BEFORE-Update   2    9         2016-04-20 21:18:42.8394659
BEFORE-Update   3    600       2016-04-20 21:18:42.8394659

AFTER--Update   Id   Column2   LastUpdated
AFTER--Update   1    10        2016-04-20 21:18:43.8529692
AFTER--Update   2    10        2016-04-20 21:18:43.8529692
AFTER--Update   3    600       2016-04-20 21:18:42.8394659
****/
Implementing an Outer Join
3/24/2017 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
LEFT and RIGHT OUTER JOIN are supported in natively compiled T-SQL modules starting with SQL Server 2016.
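For illustration, here is a minimal sketch of a LEFT OUTER JOIN inside a natively compiled procedure, assuming hypothetical memory-optimized tables dbo.Orders_inmem and dbo.Customers_inmem:

CREATE PROCEDURE dbo.usp_OrdersWithCustomer
WITH NATIVE_COMPILATION, SCHEMABINDING
AS BEGIN ATOMIC WITH
(TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Orders without a matching customer return NULL for CustomerName.
    SELECT o.OrderId, o.OrderDate, c.CustomerName
    FROM dbo.Orders_inmem AS o
    LEFT OUTER JOIN dbo.Customers_inmem AS c
        ON c.CustomerId = o.CustomerId;
END
GO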
Migrating Computed Columns
3/24/2017 • 2 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Computed columns are not supported in memory-optimized tables in SQL Server 2016 and earlier. However, you can simulate a computed column as described below.
Applies to: SQL Server 2017 CTP 1.1.
Beginning with SQL Server 2017 CTP 1.1, computed columns are supported in memory-optimized tables and indexes.
You should consider the need to persist your computed columns when you migrate your disk-based tables to
memory-optimized tables. The different performance characteristics of memory-optimized tables and natively
compiled stored procedures may negate the need for persistence.
Non-Persisted Computed Columns
To simulate the effects of a non-persisted computed column, create a view on the memory-optimized table. In the
SELECT statement that defines the view, add the computed column definition into the view. Except in a natively
compiled stored procedure, queries that use values from the computed column should read from the view. Inside
natively compiled stored procedures, you should update any select, update, or delete statement according to your
computed column definition.
-- Schema for the table dbo.OrderDetails:
-- OrderId int not null primary key,
-- ProductId int not null,
-- SalePrice money not null,
-- Quantity int not null,
-- Total money not null
--- Total is computed as SalePrice * Quantity and is not persisted.
CREATE VIEW dbo.v_order_details AS
SELECT
OrderId,
ProductId,
SalePrice,
Quantity,
Quantity * SalePrice AS Total
FROM dbo.OrderDetails
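Queries outside natively compiled modules then read the simulated column from the view, for example:

SELECT OrderId, ProductId, Total
FROM dbo.v_order_details
WHERE Total > 1000;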
Persisted Computed Columns
To simulate the effects of a persisted computed column, create a stored procedure for inserting into the table and
another stored procedure for updating the table. When inserting or updating the table, invoke these stored
procedures to perform these tasks. Inside the stored procedures, calculate the value for the computed field
according to the inputs, much like how the computed column is defined on the original disk-based table. Then,
insert or update the table as needed inside the stored procedure.
-- Schema for the table dbo.OrderDetails:
-- OrderId int not null primary key,
-- ProductId int not null,
-- SalePrice money not null,
-- Quantity int not null,
-- Total money not null
--- Total is computed as SalePrice * Quantity and is persisted.
-- we need to create insert and update procedures to calculate Total.
CREATE PROCEDURE sp_insert_order_details
@OrderId int, @ProductId int, @SalePrice money, @Quantity int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH (LANGUAGE = N'english', TRANSACTION ISOLATION LEVEL = SNAPSHOT)
-- compute the value here.
-- this stored procedure works with single rows only.
-- for bulk inserts, accept a table-valued parameter into the stored procedure
-- and use an INSERT INTO SELECT statement.
DECLARE @total money = @SalePrice * @Quantity
INSERT INTO dbo.OrderDetails (OrderId, ProductId, SalePrice, Quantity, Total)
VALUES (@OrderId, @ProductId, @SalePrice, @Quantity, @total)
END
GO
CREATE PROCEDURE sp_update_order_details_by_id
@OrderId int, @ProductId int, @SalePrice money, @Quantity int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH (LANGUAGE = N'english', TRANSACTION ISOLATION LEVEL = SNAPSHOT)
-- compute the value here.
-- this stored procedure works with single rows only.
DECLARE @total money = @SalePrice * @Quantity
UPDATE dbo.OrderDetails
SET ProductId = @ProductId, SalePrice = @SalePrice, Quantity = @Quantity, Total = @total
WHERE OrderId = @OrderId
END
GO
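Callers then invoke these procedures instead of issuing direct INSERT or UPDATE statements, for example:

EXEC sp_insert_order_details @OrderId = 1, @ProductId = 7, @SalePrice = 19.99, @Quantity = 3;
EXEC sp_update_order_details_by_id @OrderId = 1, @ProductId = 7, @SalePrice = 19.99, @Quantity = 5;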
See Also
Migrating to In-Memory OLTP
Migrating Triggers
3/24/2017 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
This topic discusses DDL triggers and memory-optimized tables.
DML triggers are supported on memory-optimized tables, but only with the FOR | AFTER trigger event. For an
example see Implementing UPDATE with FROM or Subqueries.
LOGON triggers are triggers defined to fire on LOGON events. LOGON triggers do not affect memory-optimized
tables.
DDL Triggers
DDL triggers are triggers defined to fire when a CREATE, ALTER, DROP, GRANT, DENY, REVOKE, or UPDATE
STATISTICS statement is executed on the database or server on which it is defined.
You cannot create memory-optimized tables if the database or server has one or more DDL triggers defined on CREATE_TABLE or any event group that includes it. You cannot drop a memory-optimized table if the database or server has one or more DDL triggers defined on DROP_TABLE or any event group that includes it.
You cannot create natively compiled stored procedures if there are one or more DDL triggers on
CREATE_PROCEDURE, DROP_PROCEDURE, or any event group that includes those events.
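To find DDL triggers that might block these operations, you can query the catalog views. The following is a sketch that lists database-scoped and server-scoped DDL triggers together with the events they cover:

-- Database-scoped DDL triggers and their events.
SELECT tr.name AS trigger_name, te.type_desc AS event_type
FROM sys.triggers AS tr
JOIN sys.trigger_events AS te ON te.object_id = tr.object_id
WHERE tr.parent_class_desc = 'DATABASE';

-- Server-scoped DDL triggers and their events.
SELECT tr.name AS trigger_name, te.type_desc AS event_type
FROM sys.server_triggers AS tr
JOIN sys.server_trigger_events AS te ON te.object_id = tr.object_id;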
See Also
Migrating to In-Memory OLTP
Cross-Database Queries
3/27/2017 • 2 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Starting with SQL Server 2014, memory-optimized tables do not support cross-database transactions. You cannot
access another database from the same transaction or the same query that also accesses a memory-optimized
table. You cannot easily copy data from a table in one database to a memory-optimized table in another database.
Table variables are not transactional. Therefore, memory-optimized table variables can be used in cross-database
queries, and can thus facilitate moving data from one database into memory-optimized tables in another. You can
use two transactions. In the first transaction, insert the data from the remote table into the variable. In the second
transaction, insert the data into the local memory-optimized table from the variable. For more information on
memory-optimized table variables, see Faster temp table and table variable by using memory optimization.
Example
This example illustrates a method to transfer data from one database into a memory-optimized table in a different
database.
1. Create Test Objects. Execute the following Transact-SQL in SQL Server Management Studio.
USE master;
GO
SET NOCOUNT ON;
-- Create simple database
CREATE DATABASE SourceDatabase;
ALTER DATABASE SourceDatabase SET RECOVERY SIMPLE;
GO
-- Create a table and insert a few records
USE SourceDatabase;
CREATE TABLE SourceDatabase.[dbo].[SourceTable] (
[ID] [int] PRIMARY KEY CLUSTERED,
[FirstName] nvarchar(8)
);
INSERT [SourceDatabase].[dbo].[SourceTable]
VALUES (1, N'Bob'),
(2, N'Susan');
GO
-- Create a database with a MEMORY_OPTIMIZED_DATA filegroup
CREATE DATABASE DestinationDatabase
ON PRIMARY
( NAME = N'DestinationDatabase_Data', FILENAME = N'D:\DATA\DestinationDatabase_Data.mdf', SIZE =
8MB),
FILEGROUP [DestinationDatabase_mod] CONTAINS MEMORY_OPTIMIZED_DATA DEFAULT
( NAME = N'DestinationDatabase_mod', FILENAME = N'D:\DATA\DestinationDatabase_mod', MAXSIZE =
UNLIMITED)
LOG ON
( NAME = N'DestinationDatabase_Log', FILENAME = N'D:\LOG\DestinationDatabase_Log.ldf', SIZE = 8MB);
ALTER DATABASE DestinationDatabase SET RECOVERY SIMPLE;
GO
USE DestinationDatabase;
GO
-- Create a memory-optimized table
CREATE TABLE [dbo].[DestTable_InMem] (
[ID] [int] PRIMARY KEY NONCLUSTERED,
[FirstName] nvarchar(8)
)
WITH ( MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA );
GO
2. Attempt cross-database query. Execute the following Transact-SQL in SQL Server Management Studio.
INSERT [DestinationDatabase].[dbo].[DestTable_InMem]
SELECT * FROM [SourceDatabase].[dbo].[SourceTable]
You should receive the following error message:
Msg 41317, Level 16, State 5
A user transaction that accesses memory optimized tables or natively compiled modules cannot access
more than one user database or databases model and msdb, and it cannot write to master.
3. Create a memory-optimized table type. Execute the following Transact-SQL in SQL Server Management
Studio.
USE DestinationDatabase;
GO
CREATE TYPE [dbo].[MemoryType]
AS TABLE
(
[ID] [int] PRIMARY KEY NONCLUSTERED,
[FirstName] nvarchar(8)
)
WITH
(MEMORY_OPTIMIZED = ON);
GO
4. Re-attempt the cross-database query. This time the source data will first be transferred to a memory-optimized table variable. Then the data from the table variable will be transferred to the memory-optimized table.
-- Declare table variable utilizing the newly created type - MemoryType
DECLARE @InMem dbo.MemoryType;
-- Populate table variable
INSERT @InMem SELECT * FROM SourceDatabase.[dbo].[SourceTable];
-- Populate the destination memory-optimized table
INSERT [DestinationDatabase].[dbo].[DestTable_InMem] SELECT * FROM @InMem;
GO
See Also
Migrating to In-Memory OLTP
Implementing IDENTITY in a Memory-Optimized Table
3/24/2017 • 1 min to read • Edit Online
THIS TOPIC APPLIES TO: SQL Server (starting with 2014), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
IDENTITY is supported on a memory-optimized table, as long as the seed and increment are both 1 (which is the
default). Identity columns with a definition of IDENTITY(x, y), where x != 1 or y != 1, are not supported on memory-optimized tables.
To increase the IDENTITY seed, insert a new row with an explicit value for the identity column, using the session
option SET IDENTITY_INSERT table_name ON . With the insert of the row, the IDENTITY seed is changed to the
explicitly inserted value, plus 1. For example, to increase the seed to 1000, insert a row with value 999 in the
identity column. Generated identity values will then start at 1000.
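The following sketch illustrates the technique with a hypothetical table:

CREATE TABLE dbo.MyInMemTable
(
    Id   INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    Col1 NVARCHAR(50) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Bump the next generated value to 1000 by inserting 999 explicitly.
SET IDENTITY_INSERT dbo.MyInMemTable ON;
INSERT dbo.MyInMemTable (Id, Col1) VALUES (999, N'seed bump');
SET IDENTITY_INSERT dbo.MyInMemTable OFF;

-- Subsequent inserts without an explicit Id start at 1000.
INSERT dbo.MyInMemTable (Col1) VALUES (N'first automatic value');
SELECT Id, Col1 FROM dbo.MyInMemTable;
GO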
See Also
Migrating to In-Memory OLTP