SQL Server 2014 and Windows Azure
Blob Storage Service: Better Together
SQL Server Technical Article
Summary: SQL Server 2014 introduces significant new features that deepen its integration with
Microsoft Azure, unlocking new scenarios and providing more flexibility in the database space for
IaaS data models. This technical article covers SQL Server 2014 Data Files on Azure Blob storage
service in depth, starting with step-by-step configuration, then providing guidance on scenarios,
benefits and limitations, best practices, and lessons learned from early testing and adoption.
Additionally, a fully featured example of a Windows Server Failover Clustering-like mechanism is
introduced to demonstrate the power of Microsoft Azure and SQL Server 2014 when combined.
Writers: Igor Pagliai, Francesco Cogno
Technical Reviewers: Silvano Coriani, Francesco Diaz, Pradeep M.M., Luigi Delwiche, Ignacio
Alonso Portillo
Published: April 2014 (revision 2, June 2014)
Applies to: SQL Server 2014 and Microsoft Azure
Copyright
This document is provided “as-is”. Information and views expressed in this document, including URL and
other Internet Web site references, may change without notice. You bear the risk of using it.
Some examples depicted herein are provided for illustration only and are fictitious. No real association
or connection is intended or should be inferred.
This document does not provide you with any legal rights to any intellectual property in any Microsoft
product. You may copy and use this document for your internal, reference purposes.
© 2014 Microsoft. All rights reserved.
Contents

Introduction
1. Architecture
   1.1 SQL Server Data Files in Windows Azure: overview
   1.2 Tools and interfaces
   1.3 Limitations
   1.4 Usage scenarios
   1.5 Pros and cons
2. Configuration
   2.1 Step-by-step procedure
   2.2 Best practices
   2.3 Operations
3. Implementing a failover cluster mechanism
   3.1 Introduction
   3.2 Shared nothing model in SQL Server Data Files in Windows Azure
   3.3 Endpoint mapping
   3.4 Polling architecture
   3.5 Limitations of this architecture
       3.5.1 No metadata sharing
       3.5.2 Failover at database level
       3.5.3 No availability group
   3.6 Implementation
       3.6.1 Requirements
       3.6.2 Metadata tables
       3.6.3 Polling stored procedures
       3.6.4 Polling job
   3.7 Step-by-step installation example
4. Monitoring and troubleshooting
   4.1 Windows Azure Management Portal
   4.2 Windows Azure Storage logs and analytics
   4.3 Performance counters
   4.4 SQL Server wait types
   4.5 SQL Server xEvents trace
   4.6 Errors
5. Conclusion
Appendix
   A. Wait stats analysis
   B. Enabling Transparent Data Encryption (TDE)
   C. Downloading Database file blobs using a Valet Key Pattern
   D. Microsoft SQL Server to Windows Azure helper library
   E. Installing SQL Management Objects
   G. Stored procedure reference
      1) [HAOnAzure].[PlaceDBOnHA]
      2) [HAOnAzure].[ScanForDBAndMountEndpoints]
      3) [HAOnAzure].[ScanForDetachedDBsAndAttach]
      4) [HAOnAzure].[UpsertAzureAccountCredentials]
      5) [HaOnAzure].[InsertOrUpdateEndpointOnHA]
      6) [HaOnAzure].[RemoveDBFromHA]
      7) [HaOnAzure].[RemoveEndpointFromHA]
      8) [HaOnAzure].[ShowEndpoints]
      9) [HAOnAzure].[ShowDatabasesOnHA]
Introduction
The goal of this white paper is to provide guidance to developers and IT professionals on how to leverage
a new Microsoft SQL Server 2014 feature called SQL Server Data Files in Windows Azure. It explains the
architecture, provides step-by-step instructions for configuration, clarifies limitations and
advantages, and offers guidance on performance, monitoring, and troubleshooting. Finally, it
provides a fully functional example of how it is possible to create a custom high-availability and
disaster-recovery solution, very similar to typical on-premises failover clusters in Windows Server.
Note: In the context of this white paper, SQL Server Data Files in Windows Azure is the capability of SQL
Server 2014 to host database data and transaction log files directly on Windows Azure blob storage,
without using Windows Azure data disks as intermediate storage containers.
This white paper has an appendix that contains all the necessary code, stored procedure references, and
helper libraries. It also includes a step-by-step installation procedure for a custom cluster mechanism we
implemented based on SQL Server Data Files in Windows Azure.
This feature, which was initially released in SQL Server 2014, enables native support for SQL Server
database files stored as Windows Azure blobs. It enables you to create a database in SQL Server running
on-premises or in a virtual machine in Windows Azure with a dedicated storage location for your data in
Windows Azure blob storage. This enhancement helps make it easier to move databases between
machines through the use of detach and attach operations. In addition, it provides an alternative
storage location for your database backup files by allowing you to restore from or to Windows Azure
Storage. SQL Server Data Files in Windows Azure enables you to create low-cost, highly available, and
elastically scaling hybrid solutions that meet your organization’s requirements for data virtualization,
data movement, security, and availability.
1. Architecture
The SQL Server Data Files in Windows Azure feature is available in all editions of SQL Server 2014; it is
enabled by default and provided at no additional cost.
This improvement delivers native support for SQL Server database files stored as Windows Azure blobs,
making SQL Server 2014 the first truly hybrid-cloud-enabled database. SQL Server instances, whether
running on-premises or in virtual machines, can now interact directly with Windows Azure blob storage.
This improves agility in terms of attaching and moving databases and makes restore possible without
data movement.
It also opens the way for a highly available solution that uses Windows Azure blob storage as a shared
storage between multiple nodes.
This feature was first introduced in SQL Server 2014. For more information about SQL Server integration
with Windows Azure Storage, see the following topic in SQL Server Books Online:
SQL Server Data Files in Windows Azure
http://msdn.microsoft.com/en-us/library/dn385720(v=sql.120).aspx
1.1 SQL Server Data Files in Windows Azure: overview
SQL Server 2014 integration with Windows Azure blob storage occurs at a deep level, directly inside the
SQL Server Storage Engine; SQL Server Data Files in Windows Azure is more than a simple adapter
mechanism built on top of an existing software layer.
- The Manager Layer includes a new component called XFCB Credential Manager, which manages the
  security credentials necessary to access the Windows Azure blob containers and provides the
  necessary security interface; secrets are maintained encrypted and secured in the SQL Server
  built-in security repository in the master system database.
- The File Control Layer contains a new object called XFCB, which is the Windows Azure extension to
  the file control block (FCB) used to manage I/O against each single SQL Server data or log file on
  the NTFS file system; it implements all the APIs that are required for I/O against Windows Azure
  blob storage.
- At the Storage Layer, the SQL Server I/O Manager is now able to natively generate REST API calls to
  Windows Azure blob storage with minimal overhead and great efficiency; in addition, this component
  can generate information about performance counters and extended events (xEvents).
Important: The only Windows Azure blob type supported by SQL Server 2014 is page blob; any attempt
to use block blob will fail. To understand the difference between block blobs and page blobs, see
“Understanding Block Blobs and Page Blobs” in the Windows Azure documentation at
http://msdn.microsoft.com/en-us/library/windowsazure/ee691964.aspx.
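Because only page blobs are supported, every file write ultimately lands on page-blob pages, which (as documented for Windows Azure Storage) are addressed in 512-byte-aligned ranges, with a single page blob capped at 1 TB at the time of writing. The following Python sketch is purely illustrative (the function name is ours, not part of any API); it shows the alignment rule a page-blob write range must satisfy:

```python
# Illustrative sketch: page blob writes must target 512-byte-aligned
# ranges, and a single page blob was capped at 1 TB at the time of writing.
PAGE_SIZE = 512
MAX_PAGE_BLOB_BYTES = 1 * 1024 ** 4  # 1 TB

def is_valid_page_range(offset: int, length: int) -> bool:
    """Return True if [offset, offset+length) is a legal page blob write range."""
    if length <= 0:
        return False
    if offset % PAGE_SIZE != 0 or length % PAGE_SIZE != 0:
        return False
    return offset + length <= MAX_PAGE_BLOB_BYTES

# An 8-KB SQL Server page at offset 0 is fine; an unaligned write is not.
print(is_valid_page_range(0, 8192))
print(is_valid_page_range(100, 8192))
```

Note that an 8-KB SQL Server page is a multiple of 512 bytes, which is one reason the page blob model maps naturally onto SQL Server data file I/O.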
The path to Windows Azure Storage is different when you use this new feature. If you use a Windows Azure
data disk and then create a database on it, I/O traffic passes through the Virtual Disk Driver on the
Windows Azure Host node. However, if you use SQL Server Data Files in Windows Azure, I/O traffic uses
the Virtual Network Driver.
SQL Server 2014 can run native REST API calls against Windows Azure blobs via the network directly. For
more information about SQL Server in Windows Azure Virtual Machines (IaaS) scalability target and
performance optimization, see the following white paper:
Performance Guidance for SQL Server in Windows Azure Virtual Machines
http://go.microsoft.com/fwlink/?LinkId=306266
1.2 Tools and interfaces
SQL Server 2014 comes with a series of tools and interfaces you can use to interact with the Windows
Azure blob storage integration:
- Windows PowerShell cmdlets. New cmdlets in SQL Server 2014 can be used to create a SQL Server
  database that references a blob storage URL path instead of a file path.
- Performance counters. A new object called HTTP_STORAGE_OBJECT tracks activity in System Monitor
  when you are running SQL Server with Windows Azure Storage data or log files.
- SQL Server Management Studio. URL paths can be used in all the database-related dialog boxes for
  creating, attaching, restoring, and reviewing database properties; you can also use Object Explorer
  to browse and connect directly to Windows Azure blob storage accounts.
- SQL Server Management Objects. You can specify URLs to data or log file paths in Windows Azure
  Storage in SQL Server Management Objects (SMO); additionally, a new property, HasFileInCloud, in
  the object Microsoft.SqlServer.Management.Smo.Database returns true if the database has at least
  one file stored in Windows Azure blob storage.
- Transact-SQL. Transact-SQL syntax now supports usage of URL paths in database creation and file
  layout management. Also, a new column called credential_id in the sys.master_files system view
  enables you to create joins with the sys.credentials system view and retrieve details about
  Windows Azure blob storage access policies and containers.
1.3 Limitations
Windows Azure blob storage integration in SQL Server 2014 comes with some limitations, mainly due to
the nature of Windows Azure blob storage:
- Maximum file size and number of files. The maximum size of a single page blob in Windows Azure
  storage is 1 terabyte (TB). You can create an infinite number of blobs (and containers), but the
  maximum total size per storage account is 200 TB.
- Geo-replication. The Windows Azure blob storage geo-replication feature is not supported and must
  be disabled: because geo-replication is asynchronous and there is no way to guarantee write
  ordering between data and log files, using this feature can generate database corruption if
  database failover occurs.
- Filestream. Filestream is not supported on Windows Azure blob storage. Only MDF, LDF, and NDF
  files can be stored in Windows Azure blob storage using SQL Server Data Files in Windows Azure.
- Hekaton in-memory database. Additional files used by the In-memory Hekaton feature are not
  supported on Windows Azure blob storage. This is because Hekaton uses Filestream.
- AlwaysOn. You can use AlwaysOn if you don't need to create additional files on the primary
  instance. For more information about AlwaysOn support in Windows Azure Virtual Machines (IaaS),
  see the following blog post:
  SQL Server 2012 AlwaysOn Availability Group and Listener in Azure VMs: Notes, Details and
  Recommendations
  http://blogs.msdn.com/b/igorpag/archive/2013/09/02/sql-server-2012-alwayson-availability-group-and-listener-in-azure-vms-notes-details-and-recommendations.aspx
1.4 Usage scenarios
Although it is theoretically possible and officially supported, using an on-premises SQL Server 2014
installation and database files in Windows Azure blob storage is not recommended due to high network
latency, which would hurt performance; for this reason, the main target scenario for this white paper is
SQL Server 2014 installed in Windows Azure Virtual Machines (IaaS). This scenario provides immediate
benefits for performance, data movement and portability, data virtualization, high availability and
disaster recovery, and scalability limits. “Implementing a failover cluster mechanism,” later in this
document, also describes a creative example of a new low-cost mechanism that provides high
availability and disaster recovery functionalities similar to typical on-premises failover clusters in
Windows Server.
These are the biggest advantages of using SQL Server data files on Windows Azure blob storage service:
- Portability: In this form (that is, with database files as blobs in Windows Azure Storage), it's
  easy to detach a database from a Windows Azure Virtual Machine (IaaS) and attach the database to a
  different virtual machine in the cloud; this feature might also be suitable for implementing an
  efficient disaster recovery mechanism, because everything is directly accessible in Windows Azure
  blob storage. To migrate or move databases, use a single CREATE DATABASE Transact-SQL query that
  refers to the blob locations, with no restrictions on storage account and compute resource
  location. You can also use rolling upgrade scenarios if extended downtime for operating system
  and/or SQL Server maintenance is required.
- Database virtualization: When combined with the contained database feature in SQL Server 2012 and
  SQL Server 2014, a database can now be a self-contained data repository for each tenant and then
  dynamically moved to different virtual machines for workload rebalancing. For more information
  about contained databases, see the following topic in SQL Server Books Online:
  Contained Databases
  http://technet.microsoft.com/en-us/library/ff929071.aspx
- High availability and disaster recovery: Because all database files are now externally hosted,
  even if a virtual machine crashes, you can attach these files from another hot standby virtual
  machine that is ready to take over the processing workload. A practical example of this mechanism
  is provided in "Implementing a failover cluster mechanism" later in this document.
- Scalability: Using SQL Server Data Files in Windows Azure, you can bypass the limitation on the
  maximum number of Windows Azure disks you can mount on a single virtual machine (16 for XL and A7
  sizes); there is also a limitation on the maximum number of I/O operations per second (IOPS) for
  each single Windows Azure disk, which the next section discusses in greater detail. Theoretically,
  you should be able to mount up to 32,767 database files on each SQL Server instance, but it's very
  important to consider the network bandwidth required to support your workload. You should also
  carefully review "Pros and cons" later in this document. For more information about Windows Azure
  virtual machine sizes, characteristics, limitations, and performance, see the following links:
  Virtual Machine and Cloud Service Sizes for Windows Azure
  http://msdn.microsoft.com/library/dn197896.aspx
  Performance Guidance for SQL Server in Windows Azure Virtual Machines
  http://go.microsoft.com/fwlink/?LinkId=306266
- Elasticity: By mixing SQL Server data files on-premises and in Windows Azure, you can expand your
  SQL Server instance storage space as needed. Even better, you pay only for the Azure blob pages
  actually in use, not the Azure blob pages allocated (note that you still pay for pages that are in
  use from the Azure blob perspective but unused as far as SQL Server is concerned). You can even mix
  the different types of storage based on cost, performance, and availability considerations, either
  manually or by using automatic data placement features such as partitioning functions along with
  partition schemes.
  CREATE PARTITION SCHEME (Transact-SQL)
  http://msdn.microsoft.com/en-us/library/ms179854.aspx
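The elasticity point above notes that page blobs bill on pages actually written rather than on allocated size. A toy Python sketch makes the difference concrete; the per-GB price below is a placeholder for illustration only, not a real tariff:

```python
# Toy cost sketch; the per-GB price is a placeholder assumption, not a real tariff.
PRICE_PER_GB_MONTH = 0.07  # hypothetical $/GB/month

def monthly_blob_cost(used_gb: float) -> float:
    """Page blobs bill on pages actually written, not on allocated size."""
    return round(used_gb * PRICE_PER_GB_MONTH, 2)

# A data file allocated at 100 GB but with only 12 GB of pages written
# is billed on the 12 GB, not the 100 GB allocation.
print(monthly_blob_cost(12))
print(monthly_blob_cost(100))
```

This is why pre-sizing a database file generously on blob storage is cheaper than pre-sizing the same file on a fixed-size data disk.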
1.5 Pros and cons
SQL Server Data Files in Windows Azure provides several advantages and enables new usage scenarios,
but as with any SQL Server feature, it comes with pros and cons that must be carefully evaluated. The
most important consideration is network bandwidth consumption: if you use this new feature, I/O
traffic now counts toward network bandwidth and not toward the IOPS limit per single Windows Azure disk. The
following table lists the main resource limits and thresholds for each virtual machine size.
Important: Values reported in Allocated Bandwidth (Mbps) are not officially covered or guaranteed by
any Service Level Agreement (SLA) today. Information is reported only as a general estimation of what is
expected for each virtual machine size. We hope in the future to enable more fine-grained control over
network utilization and then to be able to provide an official SLA.
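Because the allocated bandwidth figures are estimates rather than SLAs, a rough back-of-envelope conversion is useful for sizing. The sketch below (Python, illustrative only; the 800 Mbps figure is an assumption, not a guaranteed value) compares the per-disk IOPS ceiling with the theoretical IOPS a given network bandwidth could sustain at an 8-KB I/O size:

```python
# Back-of-envelope sketch; the bandwidth figure is an assumption, not an SLA.
DISK_IOPS_LIMIT = 500          # per Windows Azure data disk at the time
MAX_DISKS = 16                 # XL / A7 virtual machine sizes
IO_SIZE_BYTES = 8 * 1024       # one SQL Server page

def disk_iops_ceiling(disks: int = MAX_DISKS) -> int:
    """IOPS ceiling when striping across Windows Azure data disks."""
    return disks * DISK_IOPS_LIMIT

def bandwidth_iops_ceiling(allocated_mbps: float, io_size: int = IO_SIZE_BYTES) -> int:
    """Theoretical IOPS if the only limit is network bandwidth."""
    bytes_per_second = allocated_mbps * 1_000_000 / 8
    return int(bytes_per_second // io_size)

print(disk_iops_ceiling())              # 16 disks x 500 IOPS
print(bandwidth_iops_ceiling(800.0))    # with an assumed 800 Mbps allocation
```

The comparison shows why, with SQL Server Data Files in Windows Azure, the virtual machine's network bandwidth, rather than the per-disk IOPS cap, becomes the resource to watch.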
These are the main pros to consider:
- It is possible to scale on the number of IOPS, for database data files, far beyond the (500 IOPS x
  16 disks) = 8000 IOPS limit imposed by the use of Windows Azure additional disks.
- Databases can be easily detached and migrated or moved to different virtual machines (that is,
  portability).
- Standby SQL Server virtual machines can be easily used for fast disaster recovery.
- You can easily implement custom failover mechanisms. "Implementing a failover cluster mechanism"
  (later in this document) demonstrates how to do this.
- This mechanism is almost orthogonal to all main SQL Server 2014 features (such as AlwaysOn and
  backup and restore), and it is well integrated into the SQL Server management tools such as SQL
  Server Management Studio, SMO, and SQL Server PowerShell.
- You can have a fully encrypted database, with decryption occurring only on the compute instance
  and not in the storage instance. In other words, using this new enhancement, you can encrypt all
  data in the public cloud using Transparent Data Encryption (TDE) certificates that are physically
  separated from the data. The TDE keys can be stored in the master database, which is stored locally
  on your physically secure on-premises machine and backed up locally. You can use these local keys
  to encrypt the data, which resides in Windows Azure Storage. If your cloud storage account
  credentials are stolen, your data still stays secure because the TDE certificates always reside
  on-premises.
These are the main cons to consider:
- I/O generated against single database files using SQL Server Data Files in Windows Azure counts
  against the network bandwidth allocated to the virtual machine and passes through the same single
  network interface card used to receive client/application traffic.
- It is not possible to scale on the number of IOPS on the single database transaction log file. By
  contrast, using Windows Azure data disks, you can use the Windows Server 2012 Storage Spaces
  feature to stripe up to four disks to improve performance. For more information about this
  technique, see the following white paper:
  Performance Guidance for SQL Server in Windows Azure Virtual Machines
  http://go.microsoft.com/fwlink/?LinkId=306266
- Geo-replication for database file blobs is not supported.
  Note: Even if you are using traditional Windows Azure data disks, geo-replication is not supported
  if more than one disk is used to contain all data files and the transaction log file together.
- There are some limitations, as mentioned earlier in this document in "Limitations." For example,
  the Hekaton in-memory database and Filestream features are not supported.
2. Configuration
This section provides you with a complete step-by-step procedure that creates all the necessary objects
and required configuration items to implement a fully functional scenario.
2.1 Step-by-step procedure
First of all, create a Windows Azure storage account, if one is not yet available, and then create a
container with Private security access.
Important: For performance reasons, we strongly recommend that you create the storage account in
the same data center as the virtual machine where you will install SQL Server 2014, or, even better, use
the same Windows Azure affinity group.
Because you will use at least two blobs (one for the data file and one for the log file), you should disable
geo-replication. This configuration is not supported for SQL Server. By disabling geo-replication, you can
save around 30 percent on your storage costs. We have been asked several times by customers and
partners why we recommend disabling geo-replication, and the reason is simple: because there is no
ordering guaranteed in asynchronous storage replication, writes to the data file blob may occur before
writes to the log file blob, which violates a fundamental principle and requirement of every transactional
database system: Write-Ahead Logging (WAL), which you can read about in the following old but still
valid KB article:
SQL Server 7.0, SQL Server 2000, and SQL Server 2005 logging and data storage algorithms
extend data reliability
http://support.microsoft.com/default.aspx?scid=kb;en-us;230785
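The Write-Ahead Logging requirement described above can be illustrated with a toy model (Python, purely illustrative; the function and variable names are ours). A replica is recoverable only if every data page it has received is covered by a log record it has also received; unordered asynchronous replication can deliver a data page before its log record, breaking that invariant:

```python
# Toy model of why unordered replication violates Write-Ahead Logging (WAL).
# A committed change is two writes: the log record first, then the data page.
def replica_is_consistent(replicated_log_lsns: set, replicated_page_lsns: set) -> bool:
    """A replica is recoverable only if every data page it holds is
    covered by a log record it also holds (the WAL invariant)."""
    return replicated_page_lsns <= replicated_log_lsns

# Ordered replication: log records 1..3 arrived, pages up to LSN 2 arrived.
print(replica_is_consistent({1, 2, 3}, {1, 2}))
# Unordered geo-replication: the page at LSN 3 arrived before its log record.
print(replica_is_consistent({1, 2}, {1, 2, 3}))
```

In the second case, a failover to the replica would find a data page with no corresponding log record, which is exactly the corruption scenario the white paper warns about.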
Next, create a container for your database with an access level of Private. The names of the storage
account and the container must be lowercase.
Note that this container has a full path equal to
“http://enosg.blob.core.windows.net/sqldatacontainer”. This is important to remember when you use
SQL Server Management Studio later in this paper. Although the same container can be used by multiple
databases, you should create only one database per container, because the security mechanism used in
this procedure is based on the container itself.
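The lowercase requirement mentioned above is part of the Windows Azure blob container naming rules (3 to 63 characters; lowercase letters, digits, and hyphens; starting and ending with a letter or digit). A small Python sketch of these rules (illustrative; the function name is ours) can catch an invalid name before SQL Server ever tries to use it:

```python
import re

# Sketch of the Windows Azure blob container naming rules referenced above:
# 3-63 characters, lowercase letters, digits, and hyphens; must start and
# end with a letter or digit, with no consecutive hyphens.
_CONTAINER_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_container_name(name: str) -> bool:
    return bool(_CONTAINER_RE.match(name)) and "--" not in name

print(is_valid_container_name("sqldatacontainer"))
print(is_valid_container_name("SqlDataContainer"))  # uppercase is rejected
```

Validating the name up front is worthwhile because the container path later becomes the SQL Server credential name, where a mismatch fails authentication.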
Next, create a policy and a shared access signature (SAS) to protect and secure the container. SQL Server
requires this policy in order to work correctly with Windows Azure storage. There are several ways to
create the policy and the SAS: you can use C# code for this simple task or use the more complex REST APIs, but
this procedure uses a free tool, Azure Storage Explorer, which you can find on CodePlex:
http://azurestorageexplorer.codeplex.com
Install the tool, open it, and then insert the name and the access key of your storage account. If the
operation succeeds, the content of your storage account is displayed, along with the list of your
containers. Select the one you want to place your database in, and then click Security.
In the Blob & Container Security dialog box, click the Shared Access Policies tab, and then create a new
policy with the settings shown in the following screen shot. Click Save Policies.
In some documentation you can find on the Internet, Delete is not listed as a privilege requirement, but
if you want to be able to drop the database from SQL Server, you need to include Delete in the policy
definition. We recommend setting a fixed limit for the expiry time setting (such as one year): although it
is technically possible to have an infinite lease, it’s a good practice to use a fixed limit and then renew
periodically. For more information about blob lease times and expirations, see the following blog post:
New Blob Lease Features: Infinite Leases, Smaller Lease Times, and More
http://blogs.msdn.com/b/windowsazurestorage/archive/2012/06/12/new-blob-lease-features-infinite-leases-smaller-lease-times-and-more.aspx
For more information about shared access and blob storage and containers, see the following:
Shared Access Signatures, Part 1: Understanding the SAS Model
http://www.windowsazure.com/en-us/manage/services/storage/net/shared-access-signature-part-1
Close the dialog box and then reopen it so that the newly created policy is visible. After you
reopen the dialog box, click the
Shared Access Signatures tab and then in Container name type sqldatacontainer, and in Policy, select
sqlpolicy. When you are done, click Generate Signature. A string appears in the Signature box.
Click Copy to Clipboard and then paste the string in a secure location for later reuse. Finally, close the
tool.
Now you can test integration with SQL Server Data Files in Windows Azure. Open SQL Server
Management Studio inside the Windows Azure virtual machine you created, and then create the SQL
Server credential object that you need to access the Windows Azure blob container you created before.
A SQL Server credential object stores authentication information required to connect to a resource
outside of SQL Server. The credential stores the URI path of the storage container and the SAS key
values. For each storage container used by a data or log file, you must create a SQL Server credential
whose name matches the container path. That is, the credential name string must be exactly the same
as the blob container path; otherwise, the security information in SQL Server and Windows Azure does not
match, and authentication fails. For example, with the storage container created earlier in this paper,
the correct full path to use, including HTTPS prefix, is
https://enosg.blob.core.windows.net/sqldatacontainer. To create the corresponding credential object in
SQL Server, use the following code in SQL Server Management Studio, enclosing the credential name in square brackets:
CREATE CREDENTIAL [https://enosg.blob.core.windows.net/sqldatacontainer]
WITH IDENTITY='SHARED ACCESS SIGNATURE',
SECRET =
'sr=c&si=sqlpolicy&sig=9aoywKCSbX4uQrIGEWIl%2Bfh3cMtEm5ZA3fSDxh2wskajd7'
Note: The signature in the example is for demonstration only. It is not valid.
To create the object without using Transact-SQL, use the Credentials node in SQL Server Management
Studio: right-click the instance-level Security container, and then click New. In Credential name, enter
the URI, and then in Identity, enter SHARED_ACCESS_SIGNATURE.
In Password, enter the SECRET value that you copied in the Azure Storage Tool earlier. You will need to
modify the string to get the correct value to enter. For example, the string you copied from Azure
Storage Explorer is something like this:
https://enosg.blob.core.windows.net/sqldatacontainer?sr=c&si=sqlpolicy&sig=9aoywKCSbX4uQrIGEWIl%2Bfh3cMtEm5ZA3fSDxh2wskajd7
However, the value you need to provide to SQL Server is only the substring that follows the question
mark (?), through the end of the string:
sr=c&si=sqlpolicy&sig=9aoywKCSbX4uQrIGEWIl%2Bfh3cMtEm5ZA3fSDxh2wskajd7
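If you prefer to script this extraction, the following Python sketch does it with the demonstration-only signature shown above (the helper name is our own, not part of any tool):

```python
from urllib.parse import urlsplit

def sas_secret(full_sas_url):
    """Return only the query-string portion of a SAS URL (the part after
    the '?'), which is the value SQL Server expects as the credential SECRET."""
    return urlsplit(full_sas_url).query

# Example using the (invalid, demonstration-only) signature from the text:
url = ("https://enosg.blob.core.windows.net/sqldatacontainer"
       "?sr=c&si=sqlpolicy&sig=9aoywKCSbX4uQrIGEWIl%2Bfh3cMtEm5ZA3fSDxh2wskajd7")
print(sas_secret(url))
# sr=c&si=sqlpolicy&sig=9aoywKCSbX4uQrIGEWIl%2Bfh3cMtEm5ZA3fSDxh2wskajd7
```

Note that the query string is returned exactly as-is: percent-encoded characters such as %2B must not be decoded, because SQL Server stores and replays the secret verbatim.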
After the credential object is created, the credential object is saved in the SQL Server master database in
an encrypted form. After the object is stored, you do not need to maintain the object itself further, but
be sure to back up the master database regularly.
You can also register and connect to blob storage accounts in Windows Azure through SQL Server
Management Studio starting with SQL Server 2012.
When you click Azure Storage, the following dialog box appears.
As with the other connection methods discussed here, the shared access key is stored securely in the
SQL Server master database.
As of this writing, you can only list, view, and in some cases delete the containers and files in storage
accounts. We hope to add enhanced signature tools in future service packs so that you can create the
connection without having to use external tools or code.
If you try to delete a file that is in use, you receive the following error.
After the connection between Windows Azure and SQL Server is established, you can create your first
database using the following Transact-SQL command.
CREATE DATABASE TestDBonAzure
ON
(NAME = file_data1, FILENAME = 'https://enosg.blob.core.windows.net/sqldatacontainer/filedata1.mdf',
SIZE = 10GB),
(NAME = file_data2, FILENAME = 'https://enosg.blob.core.windows.net/sqldatacontainer/filedata2.mdf',
SIZE = 10GB),
(NAME = file_data3, FILENAME = 'https://enosg.blob.core.windows.net/sqldatacontainer/filedata3.mdf',
SIZE = 10GB)
LOG ON
(NAME = file_log1, FILENAME = 'https://enosg.blob.core.windows.net/sqldatacontainer/filelog1.ldf', SIZE =
1GB)
Note: In our testing, this command took 7 seconds to execute. The creation of a 100 GB database data file
with a 10 GB transaction log file completed in 17 seconds.
After you run the Transact-SQL command, the new items appear in SQL Server Management Studio.
It is worth mentioning that SQL Server uses temporary leases to reserve blobs for storage, and that each
blob lease is renewed every 45 to 60 seconds. If a server crashes and another instance of SQL Server
configured to use the same blobs is started, the new instance waits up to 60 seconds for the existing
lease on the blob to expire. To remove or break the lease manually for emergency reasons or
troubleshooting, you can use the Lease Blob REST API or the following GUI tool:
Azure Blob Lease
https://github.com/gbowerman/azurebloblease
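To illustrate what such tools do under the covers, here is a sketch that only builds the break request described by the Lease Blob REST API; it does not send it, and request authentication (shared key signing or an appropriately scoped SAS) is intentionally omitted. The version header value is an assumption, one of several accepted by the service:

```python
def build_break_lease_request(blob_url, api_version="2012-02-12"):
    """Build (but do not send) a Lease Blob 'break' request.
    Lease operations are addressed with the comp=lease query parameter,
    and the action is selected with the x-ms-lease-action header."""
    return {
        "method": "PUT",
        "url": blob_url + "?comp=lease",      # lease operations use comp=lease
        "headers": {
            "x-ms-lease-action": "break",     # break the current lease
            "x-ms-version": api_version,
        },
    }

req = build_break_lease_request(
    "https://enosg.blob.core.windows.net/sqldatacontainer/filedata1.mdf")
print(req["url"])
```

Remember that breaking a lease out from under a running SQL Server instance should be reserved for emergencies: the instance still believes it owns the blob.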
2.2 Best practices
Depending on why you are interested in SQL Server Data Files in Windows Azure (performance, portability,
security, scalability, or high availability and disaster recovery), different sets of best practices may apply
for specific scenarios.
When you use this feature, it’s critical to remember the following points:
• If SQL Server Data Files in Windows Azure is used for certain SQL Server database files, IOPS
against those files count toward the network bandwidth and limits assigned to your virtual
machine.
Today, Windows Azure does not guarantee with an SLA the network bandwidth for a specific
virtual machine size, but we provide a general estimation in “Pros and cons” earlier in this
document.
• If you are using Windows Azure data disks for these database files, IOPS against those files count
toward the 500 IOPS limit per single disk, up to 16 disks, depending on the Windows Azure virtual
machine size.
500 IOPS for each blob/data disk is a hard limit enforced through throttling, but it is not a
performance guarantee for each Windows Azure data disk. For more information about
performance with Windows Azure data disks, see the following white paper:
Performance Guidance for SQL Server in Windows Azure Virtual Machines
http://go.microsoft.com/fwlink/?LinkId=306266
There are many important things to consider when you plan a system that includes Windows Azure. The
following list contains information about network and storage performance targets, limits, and thresholds
that you should bear in mind:
• Security
o Use a separate Windows Azure blob storage container for each database and create a
separate CREDENTIAL object, in SQL Server, for each container.
o Create Windows Azure blob storage containers with an access level of Private.
o Because Windows Azure storage does not provide or store blob data encrypted, if you
need encryption at rest, enable SQL Server transparent database encryption (TDE). The
appendix of this paper includes a sample script to enable TDE. For more information about
TDE, which is fully supported in this scenario, see the following topic in SQL Server Books
Online:
Transparent Data Encryption (TDE)
http://technet.microsoft.com/en-us/library/bb934049.aspx
20 | P a g e
o When you create an SAS policy, use a short-term expiration date and be sure to schedule
appropriate SQL Server credential object updates to avoid blocked access. For more
information about containers, policies, and SAS, see the following blog post:
About Container Shared Access Signature for SQL Server XI
http://blogs.technet.com/b/italian_premier_center_for_sql_server/archive/2013/12/05/how-to-build-your-container-shared-access-signature-for-sqlxi-from-sql-server.aspx
o All general SQL Server concepts, technologies, and features related to security, like
auditing, still apply and should be adopted.
o To help ensure the highest portability and self-containment for databases, use the
contained database feature in SQL Server 2014. For more information about working with
contained databases, see the following topic in SQL Server Books Online:
Contained Databases
http://technet.microsoft.com/en-us/library/ff929071.aspx
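The short-term SAS expiration recommendation above implies a recurring operational task: refreshing the CREDENTIAL object before the policy lapses. A minimal reminder check might look like the following sketch (the expiry date is hypothetical, used only for illustration):

```python
from datetime import date

def days_until_sas_expiry(expiry, today=None):
    """Days remaining before the SAS policy expires; a negative value means
    the CREDENTIAL object is already stale and access will be denied."""
    today = today or date.today()
    return (expiry - today).days

# Hypothetical policy expiry date used for illustration:
print(days_until_sas_expiry(date(2014, 12, 31), today=date(2014, 6, 1)))  # 213
```

Such a check can run from a scheduled job that alerts the DBA some days before expiry, leaving time to regenerate the SAS and run ALTER CREDENTIAL.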
• Performance
o Because each Windows Azure storage account is able to provide up to 20,000 IOPS and
200 TB space for all the types (that is, blobs, tables, and queues) of accessed objects, you
should carefully monitor the total IOPS amount for all database files and ensure that you
will not be throttled. For blob objects, you can expect up to 60 MB per second, or up to
500 transactions per second.
o If your database requires higher IOPS than a single blob or file can sustain (that is, >>500
IOPS), you should not place it directly on Windows Azure blob storage using SQL Server
Data Files in Windows Azure.
Until Windows Azure Storage allows you to scale up I/O performance, the only
possibility is to place the transaction log on a volume backed by Windows Server 2012
Storage Spaces in stripe mode, created inside the virtual machine. With this
approach, you create your data files on Windows Azure blob storage using
SQL Server Data Files in Windows Azure, leaving the transaction log inside the virtual
machine on a traditional Windows Azure data disk. Here is an example of how to do this
using Transact-SQL.
CREATE DATABASE TestDBonAzure
ON
(NAME = file_data1, FILENAME =
'https://enosg.blob.core.windows.net/sqldatacontainer/filedata1.mdf'
, SIZE = 10GB)
LOG ON
(NAME = file_log1,
FILENAME = 'Z:\DATA\file_log1.ldf', SIZE = 1GB)
For more information about Windows Server 2012 Storage Spaces and I/O scalability in
Windows Azure virtual machines, see the following white paper:
Performance Guidance for SQL Server in Windows Azure Virtual Machines
http://go.microsoft.com/fwlink/?LinkId=306266
o We do not recommend the use of SQL Server Data Files in Windows Azure for the master,
model, and msdb system databases: to ensure virtual machine isolation and self-containment,
store these system databases on system drive C.
o We do not recommend the use of SQL Server Data Files in Windows Azure for the tempdb
system database for several reasons:
- Because tempdb is a temporary database, there is no need to port it outside the
assigned SQL Server instance.
- Because using SQL Server Data Files in Windows Azure counts toward your
assigned network bandwidth and threshold, you can save network bandwidth
and use Windows Azure data disks instead.
o You should place the SQL Server virtual machine in the same Windows Azure affinity
group as the storage account that you plan to use for SQL Server Data Files in Windows
Azure.
o For each database, use multiple data files (blobs) until you reach the required number of
IOPS: because SQL Server can use data files in parallel very efficiently, it's very easy to
scale up on storage performance, at least for data files.
Be sure to check your network bandwidth consumption when using multiple data files or
blobs with SQL Server Data Files in Windows Azure.
Note: Only one transaction log file can be active per database. If more than one
transaction log file is active per database, SQL Server Data Files in Windows Azure
cannot be used.
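As a rough planning aid, the per-blob and per-account targets quoted above can be combined into a quick sizing check. This is a sketch: the constants are the limits cited in this paper and may change as the platform evolves.

```python
import math

BLOB_IOPS_LIMIT = 500        # per-blob throttling limit cited in this paper
ACCOUNT_IOPS_LIMIT = 20000   # per-storage-account target cited in this paper

def plan_data_files(required_iops):
    """Rough sizing helper: how many data files (blobs) are needed to reach
    the required IOPS, and whether one storage account can sustain the total."""
    files = math.ceil(required_iops / BLOB_IOPS_LIMIT)
    fits_in_one_account = required_iops <= ACCOUNT_IOPS_LIMIT
    return files, fits_in_one_account

print(plan_data_files(3200))   # (7, True)
```

For example, a database needing 3,200 IOPS would be spread over 7 data files, comfortably within a single account; a 25,000 IOPS workload would exceed the account target and require splitting across accounts or a different design.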
o Even though large blob size allocation is very efficient in Windows Azure Storage (NTFS
sparse file mechanism), you should allocate, at creation time, the maximum expected
database file size for SQL Server data files and avoid lengthy file expansion operations.
o For transaction log files on traditional storage, SQL Server requires the entire file content
to be zeroed on the storage system. This is not required for Azure page blob based
transaction log files: the only part that is initialized is the log tail, which is fixed in size,
and that initialization is a single, atomic REST call. Therefore, regardless of the size of
your transaction log, the time required to initialize it is constant, and the only constraint
to evaluate during transaction log creation is its cost. Keep in mind that the same
considerations about VLF size and cardinality apply to blob-based transaction logs as well.
o Use the SQL Server data compression feature whenever possible to reduce I/O against
Windows Azure blob storage. For more information about data compression, see the
following topic in SQL Server Books Online:
Data Compression
http://technet.microsoft.com/en-us/library/cc280449.aspx
o Consider using the SQL Server backup compression feature as well, to reduce data size
transfer to Windows Azure blob storage during database backup and restore operations.
For more information about backup compression, see the following topic in SQL Server
Books Online:
Backup Compression (SQL Server)
http://technet.microsoft.com/en-us/library/bb964719.aspx
o Avoid SQL Server shrink operations on database files, because these operations are
extremely slow.
• Cost
o Because it is not supported, you should disable Windows Azure storage account
geo-replication. Disabling this feature can save you 30 percent on usage billing.
o If Windows Azure storage monitoring and logging are required, be sure to use minimal
detail levels and short information retention policies.
o If you use a second SQL Server virtual machine for hot standby and you require very quick
database failover, leave the virtual machine running, but use the lowest possible
Windows Azure virtual machine size: when you need the virtual machine to become
active, change the virtual machine size back to the desired state. This simply triggers a
virtual machine restart.
o If you use a second SQL Server virtual machine for cold standby and you do not require
very quick database failover, after the virtual machine is configured, shut it down and
deallocate it, because you will not pay for stopped machines, only for the space occupied
by virtual hard disks. Be aware that restarting the virtual machine takes a few minutes.
o Carefully plan database file size for all your files, because you pay for each allocated
gigabyte: even if the storage cost is minimal compared to compute resource usage, you
may want to review the amount of space used to ensure cost efficiency.
o Avoid unnecessary index maintenance: use adaptive logic in your script to ensure you
rebuild or reorganize indexes only if necessary.
• Monitoring and management
o Using the Windows Azure Management Portal, add additional key metrics to monitor
performance and availability of the Windows Azure storage account (see “Windows Azure
Management Portal” later in this document).
o Using the Management Portal, set up alerts on storage metrics per KPI like I/O requests,
availability, throttling, and errors (see "Windows Azure Management Portal" later in this
document).
o Back up SQL Server databases directly on Windows Azure blob storage using a new
feature introduced in SQL Server 2012 SP1 – CU2. This approach avoids the use of
Windows Azure data disk, instead permitting remote backup to other Windows Azure
data centers and helping to ensure maximum portability of backup data. For more
information about this feature, see the following:
Tutorial: Getting Started with SQL Server Backup and Restore to Windows Azure
Blob Storage Service
http://msdn.microsoft.com/en-us/library/jj720558.aspx
SQL Server Database Backup in Azure IaaS: performance tests
http://blogs.msdn.com/b/igorpag/archive/2013/07/30/sql-server-database-backup-in-azure-iaas-performance-tests.aspx
o So that you always have a general overview of Windows Azure storage account health
and performances, enable Windows Azure storage analytics and metrics collection, at the
minimum verbose level (see "Windows Azure storage logs and analytics" later in this
document).
o Be sure to set a reminder before SAS expiration to update the CREDENTIAL object in SQL
Server; otherwise access will be denied by the Windows Azure storage infrastructure.
o Be sure to save all the scripts for database and credentials creation in a secure place
outside the main SQL Server virtual machine.
o Periodically review SQL Server 2014 performance counters related to SQL Server Data
Files in Windows Azure (see "Performance counters" later in this document) and wait
statistics (see "SQL Server wait types" later in this document).
• High availability and disaster recovery
o The official supported and recommended mechanism for high availability for SQL Server
2014 (and SQL Server 2012) in Windows Azure Virtual Machines (IaaS) is AlwaysOn
Availability Groups. For more information about AlwaysOn Availability Groups and SQL
Server IaaS, see the following blog post:
SQL Server 2012 AlwaysOn Availability Group and Listener in Azure VMs:
Notes, Details and Recommendations
http://blogs.msdn.com/b/igorpag/archive/2013/09/02/sql-server-2012-alwayson-availability-group-and-listener-in-azure-vms-notes-details-and-recommendations.aspx
SQL Server Data Files in Windows Azure also makes it possible to implement different
custom high availability mechanisms. A fully functional example is provided in
"Implementing a failover cluster mechanism" later in this document.
o If possible, include SQL Server virtual machines and all data and log files in the same
storage account. The storage account is the failover unit, if a disaster occurs at the level
of the Windows Azure data center or storage rack.
o Save the database creation script outside the main SQL Server virtual machine for fast
database recovery in case of a virtual machine crash: this script contains the full HTTPS
paths to Windows Azure blob storage necessary to reattach the database to a new virtual
machine. Be sure also to save the SQL Server credential creation script for SAS policy. SQL Server
needs this script to mount blobs and files. Keep this file in a secure place, because it
contains security information.
For more information about Windows Azure storage performance targets, limits and thresholds, see the
following blog post:
Azure Storage Scalability and Performance Targets
http://msdn.microsoft.com/library/azure/dn249410.aspx
2.3 Operations
Now that you have your SQL Server instance in a Windows Azure virtual machine and your database is
directly on Windows Azure blob storage using SQL Server Data Files in Windows Azure, consider the
following items related to database maintenance, management, and life cycle:
• If you drop the database, the blob files are removed.
• If you detach the database, SQL Server releases the lease on the Windows Azure blob files
immediately. The following screen shot of the Windows Azure Management Portal shows the
blob properties after the database is detached.
• If you take the database offline, as with detaching, SQL Server releases the lease on the
Windows Azure blob files immediately.
• As long as SQL Server maintains a lease on the blob files, you cannot externally drop or modify
or mount files in another SQL Server instance. If you need to drop or modify such files, you must
detach the database or stop the SQL Server service.
• If you need to programmatically break the lease on database files, you can do so if you
incorporate the necessary logic inside SQL Server itself. For more information about how to
break the lease programmatically, see the following blog post:
Blob leases: how to break them from SQL Server
(using SQL CLR and Windows Azure REST API)
http://blogs.technet.com/b/italian_premier_center_for_sql_server/archive/2013/10/24/blob-leases-how-to-break-them-from-sql-server-via-sql-clr-and-windows-azure-rest-api.aspx
• From a secondary SQL Server instance, if you try to attach database files that are already being
used by a primary SQL Server instance, you receive an error similar to this one.
Msg 5120, Level 16, State 140, Line 5
Unable to open the physical file
"https://enosg.blob.core.windows.net/frcogno/filedata1.mdf".
Operating system error 32:
"32(The process cannot access the file because it is being used by another
process.)".
This error appears because a lease already exists at the Windows Azure blob storage level, and
only one instance at a time can have access.
• If you need to attach a previously detached database, even on a different SQL Server instance or
virtual machine, use the CREATE DATABASE Transact-SQL command with the FOR ATTACH
switch as shown here.
CREATE DATABASE [TestDBonAzure]
ON PRIMARY
(NAME = N'file_data1',
FILENAME = N'https://enosg.blob.core.windows.net/frcogno/filedata1.mdf',
SIZE = 10485760KB, MAXSIZE = 1TB, FILEGROWTH = 1024KB)
LOG ON
(NAME = N'file_log1',
FILENAME = N'https://enosg.blob.core.windows.net/frcogno/filelog1.ldf',
SIZE = 1048576KB, MAXSIZE = 1TB, FILEGROWTH = 512MB)
FOR ATTACH
3. Implementing a failover cluster mechanism
This section presents a complete custom solution that provides functionalities similar to those of on-premises Windows Server failover clusters.
3.1 Introduction
One of the biggest challenges in any IT department is to offer a highly available service, regardless of its
nature. This holds true for Microsoft SQL Server. While on-premises SQL Server installations enable you
to make use of the Windows Server failover cluster technology, as of this writing this solution is not
supported in Windows Azure Virtual Machines (IaaS).
SQL Server 2014, however, with SQL Server Data Files in Windows Azure, enables you to access shared
storage directly. With the required connectivity and security permissions, each blob can be accessed by
any SQL Server installation, both on premises and in the cloud. Windows Azure virtual machines have
out-of-the-box Internet connectivity, including access to the Windows Azure Storage URIs, even across data centers.
[Figure: two virtual machines, VM A and VM B, both connecting to the same page blob.]
The preceding figure shows that every SQL Server instance that has access to those blobs can attach the
database. You can use this architecture to help protect against software failures.
Architecture                 Protection level
Different instances          SQL Server instance failure
Different virtual machines   Virtual machine failure
Different data centers       Data center failure*
Note: Precautions that help protect you from data center failure do not protect you from blob storage
failure, because the blob geo-replication feature is not supported for Microsoft SQL Server database files.
However, you can use geo-replication for database backups.
The Windows Azure platform, in addition, supports multiple endpoints linked specifically to a virtual
machine (stand-alone endpoints) or shared between all virtual machines in the same cloud service (load-balanced
endpoints). In the context of this white paper, we'll use and refer to stand-alone endpoints:
these endpoints are mapped to a specific port pair, local and remote, and, at a given time, point
to a specific virtual machine in the deployment. This configuration is discussed in detail in "Endpoint
mapping" later in this document.
By migrating these endpoints from one virtual machine to another, an endpoint failover can be simulated,
achieving the same result as an AlwaysOn Availability Group Listener failover.
[Figure: an endpoint routing client connections to one of two virtual machines, VM A and VM B, which share the same page blob.]
For more information, see the following topics on MSDN:
• List Cloud Services (http://msdn.microsoft.com/en-us/library/ee460781.aspx).
• Get Cloud Service Properties (http://msdn.microsoft.com/en-us/library/windowsazure/ee460806.aspx).
3.2 Shared nothing model in SQL Server Data Files in Windows Azure
The previous section showed that every Windows Azure virtual machine can access the blobs that
compose the database. In order to prevent simultaneous access to the same blob by two or more SQL
Server instances, SQL Server must implement a mechanism similar to the shared nothing model in
Windows Server failover clusters. For more information about the shared nothing model in failover
clusters, see “Cluster Strategy: High Availability and Scalability with Industry-Standard Hardware” at
http://technet.microsoft.com/en-us/library/cc767157.aspx.
In the Windows Azure Storage platform, SQL Server uses the concept of a lease. Each container and blob can
have a lease active on it. While leased, a container or blob can only be accessed with a valid lease ID.
Because SQL Server does not share the lease ID, it is guaranteed to be the only one that can access the blob
(or container) at a given time. There are two types of leases: time-based and never expiring (often referred
to as fixed). SQL Server 2014 uses time-based leases, so in case of problems, the blobs eventually become
available again on their own. The use of time-based leases also means that SQL Server must renew a lease
before its expiration. Here is a simplified sequence diagram.
[Sequence diagram: SQL Server requests a lease on the object from Azure Storage; Azure Storage creates the lease and returns a lease ID; SQL Server then repeatedly renews the lease by presenting that lease ID before it expires.]
Note that SQL Server releases the lease if you detach the database manually or if you shut down SQL
Server gracefully. While the lease is active, no REST API request is allowed unless it specifies the lease ID.
For example, the Put Page operation—used by SQL Server to update a database page—is refused unless
a valid lease ID is specified. For more information about REST API requests, see “Put Page (REST API)”
(http://msdn.microsoft.com/en-us/library/windowsazure/ee691975.aspx). You can see the lease status
of a blob by clicking Edit in the Windows Azure Management Portal.
Because, by definition, a single database file blob can be leased by a single SQL Server instance at any
given time, you can use the lease as the guarantee required to replicate the shared nothing model. A
database can be composed of many blobs. Each blob maintains an independent lease. It is impossible,
however, to attach different files of the same database in different instances, because an instance must
have access to the PRIMARY filegroup in order to bring the database online. Because there is only a single
PRIMARY filegroup in a given database, it is not possible to attach the same database in two different
instances.
The remainder of the section refers to the Microsoft SQL Server to Windows Azure Helper Library, which
is an easy-to-use tool that is available for free on CodePlex: https://sqlservertoazure.codeplex.com. The
concepts discussed here can be applied regardless of what kind of technology you use. For more
information about how to install the library on your instance, see the appendix.
Using the library, call the List Blobs REST API method (http://msdn.microsoft.com/en-us/library/windowsazure/dd135734.aspx) and retrieve the current lease status of a given blob.
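For illustration, the following sketch parses a hand-written, abbreviated response body of the shape documented for List Blobs and extracts each blob's lease status; a real response carries many more properties than shown here:

```python
# Sketch: extracting the lease status of each blob from a List Blobs
# response. The XML below is an abbreviated, hand-written sample; a real
# List Blobs body contains many additional elements and properties.
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults>
  <Blobs>
    <Blob><Name>filedata1.mdf</Name>
      <Properties><LeaseStatus>locked</LeaseStatus></Properties></Blob>
    <Blob><Name>filelog1.ldf</Name>
      <Properties><LeaseStatus>unlocked</LeaseStatus></Properties></Blob>
  </Blobs>
</EnumerationResults>"""

def lease_statuses(xml_text):
    """Map each blob name to its lease status (locked/unlocked)."""
    root = ET.fromstring(xml_text)
    return {b.findtext("Name"): b.findtext("Properties/LeaseStatus")
            for b in root.iter("Blob")}

print(lease_statuses(SAMPLE))
# {'filedata1.mdf': 'locked', 'filelog1.ldf': 'unlocked'}
```

A "locked" status tells you the database file is currently leased, that is, owned by a running SQL Server instance; "unlocked" means it is available to be attached.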
Use the sys.master_files catalog view (http://technet.microsoft.com/en-us/library/ms186782.aspx) to
identify the SQL Server databases attached to an instance using SQL Server Data Files in Windows Azure
feature. Because the blobs are in use, you can confirm the existence of a valid lease.
Note: These examples use the MERGE join hint to minimize the number of REST calls. The purpose of this
hint is performance-related and therefore out of scope of this white paper.
If you detach the database, or if a failure that causes the instance to be unable to keep the lease active
occurs, the blobs become available.
In this situation any SQL Server instance that can connect to Windows Azure storage can also attach the
database (given the proper credentials).
3.3 Endpoint mapping
Whenever you create a virtual machine, you are given the option to add it to an existing cloud service. In
general, this enables you to horizontally scale to achieve the required performance. In order to exploit
cloud horizontal scaling, every node must be homogenous. Windows Azure balances the connections
through the available nodes.
While this feature is essential for Web Sites, and in general for stateless services, SQL Server does not
support a scale-out architecture. (SQL Server Reporting Services supports this configuration; however, it
is not discussed here. For more information about using Reporting Services in a scale-out architecture,
see “Configure a Native Mode Report Server Scale-Out Deployment” at http://msdn.microsoft.com/en-us/library/ms159114.aspx.)
Another, less common, approach is to statically bind a cloud port to a specific node instance. This creates
a mapping between an external (public) port and a local port, so that connections to the public port reach
the internal service. This process is often called port forwarding. In this configuration you can forward one (or
more) SQL Server TCP ports to static Windows Azure cloud ports. You can also map more external ports
to the same internal port. This kind of decoupling enables you to simulate different endpoints while in
fact using only one.
The Windows Azure port forwarding can be changed using the Management REST API. Remapping the
endpoint to a different SQL Server instance disrupts the existing socket connection. This disruption is
similar to what happens during a network address resource failover phase in failover clusters. The SQL Server
TDS protocol forces a new authentication when endpoints are remapped. This authentication should not
be a problem, because your applications should already be able to handle transient errors gracefully: this
best practice holds true for this framework too.
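For example, a generic client-side retry with exponential backoff, the kind of transient-fault handling that rides out an endpoint remap, could be sketched like this (the names are illustrative, not from any specific library):

```python
import time

def with_retries(operation, retries=5, base_delay=1.0):
    """Call operation(); on failure, retry with exponential backoff
    (1s, 2s, 4s, ...). Re-raise the last error if all retries fail."""
    for attempt in range(retries):
        try:
            return operation()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
```

Wrapping the connection-open and query-execute path in logic like this lets the application reconnect and re-authenticate transparently after the endpoint has moved to the new instance.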
[Figure: a cloud service (VIP) exposing remote ports <Remote Port 1> through <Remote Port n>, each forwarded to a local port (<Port 1> through <Port n>) on VM A or VM B.]
In this framework, the examples in this paper work at the database level—each database can fail over on
its own. An endpoint needs to be exposed for each clustered database. The framework helps ensure
that—at a given time—the endpoint will be mapped on the instance that owns the related database.
The preceding figure shows how a single cloud service (that holds the virtual IP) allows you to map remote
ports to ports in the virtual machines. Note that you can have more cloud service ports besides the SQL
TCP endpoints. Those ports can be mapped to other services (for example, Remote Desktop). There is no
need to use multiple TCP/IP ports in SQL Server. You can reuse the same local port and map it to multiple
remote ports.
[Figure: multiple remote ports (<Remote Port 1> through <Remote Port n>) on the cloud service (VIP), all forwarded to the same local port <Port 1> on VM A and VM B.]
This configuration is easier to maintain from the SQL Server perspective; it is also more secure because it
reduces the attack surface area.
3.4 Polling architecture
In the failover cluster architecture the Windows Cluster service handles intra-node communication and
the failure detection mechanism. This is achieved, among other ways, by heartbeat communication
between the nodes. In the model discussed here there is no external service that queries the status of the
other instances. This model relies instead on the lease status of the database’s blobs. That means that
each instance that participates in this topology should query the database blob lease status at regular
intervals. This process is often referred to as polling. As soon as an instance detects an unleased database
(in other words, with its blobs unlocked) it should try to attach it. The database downtime is in this case
directly influenced by the polling frequency.
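The polling loop described above can be sketched as follows; the lease check and the attach operation are passed in as stand-ins for the List Blobs lease inquiry and the CREATE DATABASE ... FOR ATTACH call:

```python
import time

def watch_and_attach(db_blobs, is_leased, attach_db,
                     poll_interval=120, max_polls=None):
    """Every poll_interval seconds, check whether all blobs of the clustered
    database are unleased; if so, try to attach the database. is_leased and
    attach_db are caller-supplied stand-ins for the REST lease check and
    the CREATE DATABASE ... FOR ATTACH call."""
    polls = 0
    while max_polls is None or polls < max_polls:
        polls += 1
        if not any(is_leased(blob) for blob in db_blobs):
            attach_db()    # another instance may win the race to attach;
            return True    # a lost race surfaces as an attach error and
                           # should simply be tolerated here
        time.sleep(poll_interval)
    return False
```

Note the comment about races: because there is no coordinator, two instances may both observe the blobs unlocked and race to attach; the blob lease guarantees that only one of them succeeds.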
In this case, the expected downtime in case of a failure can be expressed in this way:

τ = (L + P) / 2 + R

Where:
• τ is the average downtime.
• L is the SQL Server lease time (60 seconds).
• P is the polling interval.
• R is the recovery time of the database.
In this architecture you can express P directly and—with less precision—you can influence R by specifying
the recovery interval configuration option (for more information about this option, see
http://technet.microsoft.com/en-us/library/ms191154.aspx). For example, if you poll the blobs every 120
seconds and the recovery time is 15 seconds, you can expect the database to be back online after an
average of 1 minute and 45 seconds. If you add the endpoint to the equation, it looks like this:
τ = (L + P) / 2 + R + E

Where:
• E is the time needed to move an endpoint from the old owner to the new one. This time is
in general less than 30 seconds, but because the Windows Azure platform is constantly
improving, you should perform your own testing.
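The downtime formulas above translate directly into a small calculator (a sketch; the default values are those used in the example in this section):

```python
def expected_downtime(lease_time=60.0, poll_interval=120.0,
                      recovery_time=15.0, endpoint_move=0.0):
    """Average downtime per the formula above:
    tau = (L + P) / 2 + R + E (all values in seconds)."""
    return (lease_time + poll_interval) / 2 + recovery_time + endpoint_move

# The example from the text: P = 120 s, R = 15 s -> 105 s (1 min 45 s).
print(expected_downtime())  # 105.0
```

Adding a 30-second endpoint move pushes the average to 135 seconds, which makes the trade-off explicit: shortening the polling interval is the cheapest way to reduce expected downtime, at the cost of more frequent REST calls.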
3.5 Limitations of this architecture
3.5.1 No metadata sharing
While this architecture allows you to achieve independence between the database and the holding
instance—as far as the partial containment allows—there are some limitations. The most notable is the
need to maintain a shared repository in the instances so that you know which blobs to watch and monitor.
This can be easily achieved through a simple backup and restore operation (using Windows Azure Storage
as the backup destination makes this even easier). As the number of instances that participate in this
architecture grows, however, the alignment task becomes more demanding. This is somewhat balanced
by how easy it is to add another instance to the architecture. All you have to do is copy the metadata database
and start the polling jobs.
Another approach—the one taken by the framework explained in the appendix—is to use the Table
service in Windows Azure Storage. Those tables are, by definition, shared among every instance and
therefore are ideal candidates for shared metadata. In Windows Azure, tables do not support distributed
transactions: each call is atomic on its own. For the purpose of the metadata sharing this is not an issue,
because the examples in this paper access those tables in read-only mode most of the time and in write
mode only through user interaction. A complete discussion of Windows Azure tables is beyond the scope
of this white paper. For more information about Windows Azure tables, see “Windows Azure Table
Storage and Windows Azure SQL Database - Compared and Contrasted" (http://msdn.microsoft.com/en-us/library/windowsazure/jj553018.aspx).
3.5.2 Failover at database level
The other limitations are directly related to the partial database containment. You still have to align
non-contained entities manually. You can use the sys.dm_db_uncontained_entities dynamic management
view (http://technet.microsoft.com/en-us/library/ms191154.aspx) to help locate those objects. This task
is commonly done when you use AlwaysOn Availability Groups or SQL Server Mirroring. For more
information about how to perform those tasks, see SQL Server Books Online
(http://msdn.microsoft.com/en-us/library/ms130214(v=sql.120).aspx).
3.5.3 No availability group
There is no communication between the nodes; each node attempts to attach a database as soon as it
notices its blobs are unlocked. In this scenario there can be no guarantee that a group of databases would
end up attached to the same instance. For this reason you should consider mapping an endpoint to a
single database.
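The per-database failover above hinges on the blob lease: an instance may attach a database only when every one of its blobs is lease-free. As an illustrative sketch (not code from the framework), the check can be expressed over the x-ms-lease-status response header that Windows Azure Blob storage returns for a Get Blob Properties request; the helper names below are invented for the example.

```python
# Illustrative sketch: deciding whether a database's blobs are all unlocked,
# based on the x-ms-lease-status header returned by a Get Blob Properties
# request. Issuing the HTTP request itself is out of scope here; the functions
# operate on already-captured response headers.

def is_blob_unlocked(headers):
    """A blob is attachable only when its lease status is 'unlocked'."""
    return headers.get("x-ms-lease-status", "").lower() == "unlocked"

def all_blobs_unlocked(blob_headers):
    """blob_headers: list of response-header dicts, one per database file."""
    return all(is_blob_unlocked(h) for h in blob_headers)

if __name__ == "__main__":
    mdf = {"x-ms-lease-status": "unlocked"}
    ldf = {"x-ms-lease-status": "locked"}   # still leased by another instance
    print(all_blobs_unlocked([mdf, ldf]))   # attach attempt must wait
```

Because there is no cross-node coordination, each node runs this check independently; whichever node first sees every blob unlocked wins the attach.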
3.6 Implementation
The following sections cover in detail all the requirements for a fully functional solution, along with
descriptions of all of the configuration objects that are necessary for the implementation.
3.6.1 Requirements
In order to implement this mechanism you must have:
 At least two Windows Azure virtual machines in the same cloud service with SQL Server 2014 installed.
 A TCP stand-alone endpoint available in Windows Azure for each database to be clustered.
 The master key for each Windows Azure storage account you want to use.
 The Windows Azure management certificate, including its private key.
 A valid SAS for each database blob container.
Important: Although you can use multiple Windows Azure storage accounts for multiple databases, we
highly recommend that you place all files that are related to a single database into the same account.
This avoids potential situations where storage accounts could fail over independently to different
Windows Azure data centers.
3.6.1.1 Virtual machines in the same Windows Azure cloud service
In order to support endpoint mapping, every SQL Server instance—or better, its virtual machine—must
share the same Windows Azure cloud service. This can be done by selecting the appropriate cloud service
during virtual machine instantiation.
3.6.1.2 TCP endpoint
Although Windows Azure endpoint management is handled by the solution’s logic, you must open a TCP
listener in SQL Server manually. You can map multiple external ports to the same local port, which is the
solution we recommend, but you can also add more TCP endpoints to SQL Server directly.
There are many ways to do this. One is to open SQL Server Configuration Manager and explore the
network configuration node of your instance. From there, double-click the TCP/IP section.
You can add as many TCP ports as you want, separating them with commas.
Restart the instance to activate your changes. Also note that if you specify a port in TCP Port, SQL Server
will not start if that port is unavailable. Using TCP Dynamic Ports would avoid this, but it is not compatible
with this solution, which requires a fixed port binding between SQL Server and Windows Azure.
3.6.1.3 Windows Azure storage shared key
The Windows Azure storage account master key is required by the Windows Azure storage REST API in
order to sign the requests. Each key is tied to a specific storage account. Keep in mind that you can have
more than one storage account per Windows Azure subscription.
To find the shared keys, use the Storage section of the Windows Azure Management Portal
(https://manage.windowsazure.com):
Click Storage, and then click Manage Access Keys.
There are always two shared keys. You can use either one.
3.6.1.4 Windows Azure management certificate
The endpoint management REST API calls require a valid management certificate to be
successful. These certificates can be obtained in various ways. For testing purposes you can use the
makecert and pvk2pfx tools.
For example, create a certificate azuretest.cer.
makecert -r -sv C:\temp\azuretest.pvk -n "CN=Azure Self signed certificate"
C:\temp\azuretest.cer
Then convert the certificate to a .pfx file, providing the file locations and passwords.
pvk2pfx -pvk C:\temp\azuretest.pvk -pi <Password> -spc C:\temp\azuretest.cer
-po <Password> -pfx C:\temp\azuretest.pfx
You will end up with three files:
 AzureTest.cer – the certificate with the public key only.
 AzureTest.pvk – the certificate private key.
 AzureTest.pfx – the certificate with both the private key and the public key.
For more information, see the following topics on MSDN:
 Makecert.exe (Certificate Creation Tool) (http://msdn.microsoft.com/library/bfsktky3(v=vs.110).aspx)
 Pvk2Pfx (http://msdn.microsoft.com/en-us/library/windows/hardware/ff550672.aspx)
You have to upload the public-key-only certificate to Windows Azure. You can use the Windows Azure
Management Portal.
You will need to import the certificate that contains both the private and public keys (the .pfx file) into
every SQL Server instance that will form the cluster. You must install it into the “My” store of the SQL
Server engine execution account. For that task you can use either the Microsoft Management Console
(MMC) or the Windows Server 2012 cmdlet Import-Certificate. For more information, see the following
topics on TechNet:
 Import a Certificate (http://technet.microsoft.com/en-us/library/cc754489.aspx)
 Import-Certificate (http://technet.microsoft.com/en-us/library/hh848630.aspx)
As a best practice, use dedicated certificates for this specific purpose. You should secure the certificates
very carefully, because anyone who gains access to them has complete control over your Windows Azure
account. We strongly recommend that you mark the private key as nonexportable while importing it into
the certificate stores, and that you never store the .pfx file on the server itself.
3.6.1.5 Valid SAS for the database blob containers
SQL Server Data Files in Windows Azure requires you to specify a valid shared access signature (SAS) in
order to access its blobs without having to authenticate every request. This is a very important
performance feature. There are two types of SASs: direct and policy based. Direct SASs sign the resource
with the shared access key itself. This means that the only way to invalidate the SAS is to regenerate the
shared access key. If you want to invalidate a direct SAS, you must invalidate all of them. For this reason
alone, you should use the policy based SASs, even though they are more complex. You can still use direct
SASs if you want to.
For more information about direct SASs and how to generate them from SQL Server, see the following
blog post:
 About Container Shared Access Signature for SQL Server Data Files in Windows Azure (http://blogs.technet.com/b/italian_premier_center_for_sql_server/archive/2013/12/05/how-to-build-your-container-shared-access-signature-for-sqlxi-from-sql-server.aspx)
For more information about SAS, see the following:
 Create and Use a Shared Access Signature (http://msdn.microsoft.com/en-us/library/windowsazure/jj721951.aspx)
 Introducing Table SAS (Shared Access Signature), Queue SAS and update to Blob SAS (http://blogs.msdn.com/b/windowsazurestorage/archive/2012/06/12/introducing-table-sas-shared-access-signature-queue-sas-and-update-to-blob-sas.aspx)
Policy based SASs require you to create the container policy first. The advantage of a policy based SAS is
that you can invalidate the SAS by invalidating the policy alone. This makes SAS control more granular.
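To make the revocation mechanics concrete, the sketch below shows how a policy-based SAS token is formed: the signature is an HMAC-SHA256 over a string-to-sign that carries the signed identifier (the stored policy name) instead of inline permission/start/expiry values, which is why revoking the policy revokes every SAS issued against it. This is an illustrative sketch, not the helper library's code; the exact string-to-sign layout depends on the storage service version, and the 2012-02-12 blob SAS layout is assumed here.

```python
# Illustrative sketch of policy-based blob SAS signing (assumed 2012-02-12
# string-to-sign layout). policy_sas is an invented helper name.
import base64
import hashlib
import hmac
import urllib.parse

def policy_sas(account, container, policy_id, account_key_b64,
               sv="2012-02-12"):
    # For a policy-based SAS, permissions/start/expiry are left empty in the
    # string-to-sign because they live in the stored access policy that
    # policy_id identifies.
    string_to_sign = "\n".join([
        "",                               # signedpermissions (in policy)
        "",                               # signedstart (in policy)
        "",                               # signedexpiry (in policy)
        "/%s/%s" % (account, container),  # canonicalizedresource
        policy_id,                        # signedidentifier
        sv,                               # signedversion
    ])
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode("utf-8")
    # 'sr=c' marks a container-level resource, matching the example later on.
    return urllib.parse.urlencode(
        {"sv": sv, "sr": "c", "si": policy_id, "sig": sig})

# Example with a throwaway key (not a real account key):
token = policy_sas("myaccount", "demoacl", "frcogno Demo ACL Policy",
                   base64.b64encode(b"not-a-real-key").decode())
print(token)
```

Invalidate the policy on the container and the server-side check fails for every token carrying that si value, without touching the account key.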
You can use various methods. This white paper shows you how to do that using the Microsoft SQL Server
to Windows Azure helper library (available at https://sqlservertoazure.codeplex.com/) because in this
tool you work with familiar Transact-SQL statements. You can use the tool that works best for you.
In this example you create a container called “demoacl” and attach to it a policy called “frcogno Demo
ACL Policy” that gives full control to the container and its contents. Note that whenever you see <your
storage account> and <your storage account shared key>, you should replace those with your
storage account name and storage account shared key, respectively.
First, create a container.
EXEC [Azure].CreateContainer
'<your storage account>', '<your storage account shared key>', 1,
'demoacl', 'Off';
Use the following command to verify that the container was created.
SELECT * FROM [Azure].ListContainers(
'<your storage account>', '<your storage account shared key>', 1, NULL);
Using the following stored procedure, create the policy and attach it to the container you just created.
EXEC [Azure].AddContainerACL
'<your storage account>', '<your storage account shared key>', 1,
'demoacl',                   -- Container name
'Off',                       -- Public access?
'frcogno Demo ACL Policy',   -- Policy name
'2010-01-01',                -- Policy validity start
'2014-12-31',                -- Policy expiration
1,                           -- Give read access?
1,                           -- Give write access?
1,                           -- Give delete blob privilege?
1                            -- Give list blob privilege?
Note that you must specify a validity interval. This example uses a long time span; as a best practice, use
the smallest interval that is manageable.
Check to see that your own policy is in place.
SELECT * FROM [Azure].GetContainerACL(
'<your storage account>', '<your storage account shared key>', 1,
'demoacl', NULL, NULL, NULL);
To check the policy, upload a sample blob. You can use a simple text file.
DECLARE @buffer VARBINARY(MAX);
SET @buffer = CONVERT(VARBINARY(MAX), N'This text comes from SQL Server!' +
CONVERT(NVARCHAR, GETDATE()));
PRINT @buffer;
EXEC [Azure].CreateOrReplaceBlockBlob
'<your storage account>', '<your storage account shared key>', 1,
'demoacl',
'sample.txt',
@buffer,
'text/plain',
'ucs-2';
GO
Next, check the file in your container.
SELECT * FROM [Azure].ListBlobs(
'<your storage account>', '<your storage account shared key>', 1,
'demoacl', 1,1,1,1,NULL)
List blobs in a container
Because the container was created as private, it should not be possible to access the blob without
authentication. Confirm this by pasting the blob URI in a browser.
Now generate the policy based SAS.
SELECT [Azure].GeneratePolicyBlobSharedAccessSignatureURI(
'https://<your storage account>.blob.core.windows.net/demoacl',
'<your storage account shared key>',
'c',
'frcogno Demo ACL Policy');
Note that you specified ‘c’ as resource type (c as container). The result should be like this one.
You can also test the GET operation with the browser, specifying the blob name instead of the container
name to show its contents.
If this test is successful, you can use the policy based SAS as a credential for the SQL Server Data Files in
Windows Azure database.
3.6.2 Metadata tables
In order to automate the take-ownership process, you must tell SQL Server which database blobs to poll.
You also need to store the link between a database and its endpoint. Although there are many ways to
achieve that, the simplest approach is to use two tables.
The two metadata tables and their 1..1 relationship (linked through Database Name) are:

 Azure Database (PK: ID) – Account Name, Database Name, File, Type, FileName.
 Azure Endpoint (PK: ID) – Certificate thumbprint, Subscription ID, Service Name, Deployment Slot, Database Name, Endpoint Name, Port, Local Port, Protocol, Virtual IP.
3.6.2.1 Database table
The database table fields are shown here.
 Account Name – The Windows Azure storage account name.
 Database Name – The name of the database as it appears in the sys.databases name field.
 File – The name of the file as it appears in the sys.master_files name field.
 Type – The type of the file as it appears in the sys.master_files type field.
 FileName – The physical URI of the file as it appears in the sys.master_files physical_name field.
For more information, see the following topics in SQL Server Books Online:
 sys.master_files (Transact-SQL) (http://technet.microsoft.com/library/ms186782.aspx)
 sys.databases (Transact-SQL) (http://technet.microsoft.com/en-us/library/ms178534.aspx)
3.6.2.2 Endpoint table
 Certificate thumbprint – The Windows Azure management certificate thumbprint as it appears in the “My” certificate store of the SQL Server engine execution account.
 Subscription ID – The Windows Azure subscription ID.
 Service name – The Windows Azure cloud service name.
 Deployment slot – The deployment slot used in the cloud service. By default this is “Production”, but it can be customized. For more information, see “Get Deployment” (http://msdn.microsoft.com/en-us/library/windowsazure/ee460804.aspx).
 Database Name – The name of the database as it appears in the sys.databases name field.
 Endpoint Name – An arbitrary name that describes the endpoint. This must be unique.
 Port – The port exposed outside the cloud service. Can be any unused port.
 Local Port – The local port to bind to the outside port.
 Protocol – The protocol to use. Currently only “tcp” is supported.
 Virtual IP (VIP) – The external IP address of the cloud service.
Any endpoint row specifies how a database will be accessed. Connect to one of the following pairs:

 <CloudServiceName>,<Port>
 <Virtual IP>,<Port>

Then ensure that SQL Server is listening on <LocalPort>.
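A hypothetical sketch of the two equivalent client-side pairs follows; client_endpoints, the row dict shape, and the .cloudapp.net DNS suffix for the cloud service name are assumptions for illustration, not part of the framework.

```python
# Hypothetical sketch: derive the two equivalent connection pairs that a remote
# client can use for one endpoint-table row. SQL Server itself listens on
# row["local_port"]; the cloud service forwards row["port"] to it.

def client_endpoints(row):
    """row: a dict shaped like one record of the endpoint table."""
    return [
        ("%s.cloudapp.net" % row["service_name"], row["port"]),
        (row["virtual_ip"], row["port"]),
    ]

row = {"service_name": "mycloudservice", "port": 1600,
       "local_port": 1433, "virtual_ip": "137.117.214.251"}
print(client_endpoints(row))
```

Either pair reaches whichever instance currently holds the database, because the solution's logic keeps the endpoint mapped to the database owner.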
The logic ensures that the endpoint is mapped to the right instance; that is, the one that holds the
database.
3.6.3 Polling stored procedures
3.6.3.1 Attach database
After the proper metadata tables are in place, use this logic to attach a database.
For each database in the Azure database table, check whether all of the database blobs are unlocked. If
they are, try to attach the database. Repeat until every database in the table has been checked.
This logic is implemented in the [Azure].[ScanForDetachedDBsAndAttach] stored procedure (for the
complete source code, see the appendix).
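As a minimal sketch of the same control flow (not a transcription of the stored procedure), the attach scan can be written as follows; blobs_unlocked and attach are hypothetical callables standing in for the real lease check and the CREATE DATABASE ... FOR ATTACH call.

```python
# Minimal sketch of the attach-polling logic implemented by
# [Azure].[ScanForDetachedDBsAndAttach]. The callables are injected so the
# decision logic can be shown (and exercised) without any Azure dependency.

def scan_for_detached_dbs_and_attach(databases, blobs_unlocked, attach):
    """databases: dict mapping database name -> list of its blob URIs."""
    attached = []
    for db_name, blob_uris in databases.items():
        # Attach only when every blob of the database is lease-free.
        if all(blobs_unlocked(uri) for uri in blob_uris):
            attach(db_name, blob_uris)
            attached.append(db_name)
    return attached
```

Running this on a schedule from every instance is what makes the first instance to observe an unlocked database take ownership of it.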
3.6.3.2 Mount endpoint
The mount endpoint operation is slightly more complicated, because the endpoint must follow the
owner of the database.
For each database in the Azure database table: if this instance is not the owner of the database, move on
to the next one. If this instance owns the database but not its endpoint, unmount the endpoint from the
current owner and then mount it on this instance; if it already owns the endpoint, nothing needs to be
done.
This logic is implemented in the [Azure].[ScanForDBAndMountEndpoints] stored procedure (for the
complete source code, see the appendix).
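The same decision flow can be sketched as below; owns_database, owns_endpoint, unmount, and mount are hypothetical callables standing in for the metadata lookups and the Windows Azure management REST calls, and are assumptions for illustration only.

```python
# Minimal sketch of the endpoint decision in
# [Azure].[ScanForDBAndMountEndpoints]: the endpoint is moved only by the
# instance that owns the database, and only when it does not already own it.

def scan_for_db_and_mount_endpoints(databases, owns_database, owns_endpoint,
                                    unmount, mount):
    for db_name in databases:
        if not owns_database(db_name):
            continue            # the endpoint follows the database owner
        if owns_endpoint(db_name):
            continue            # already mapped here; nothing to do
        unmount(db_name)        # detach from the previous owner first
        mount(db_name)          # then bind the endpoint to this instance
```

The unmount-then-mount sequence is why an endpoint move takes roughly twice as long as the initial mount, as noted in the step-by-step example later on.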
3.6.4 Polling job
After the metadata tables are in place, schedule the two stored procedures to run in the correct order:
attach databases first, then mount endpoints.
3.7 Step-by-step installation example
This step-by-step procedure shows you how to implement a Windows Azure SQL Server cluster with two
nodes (one with Instance A and the other with Instance B). It places two databases in high availability:
MindFlavorDB and OtherDB. MindFlavorDB will be available at port 1600 and OtherDB at port 1601 of the
VIP 137.117.214.251. Before you execute this example, complete the following installation steps:
1. Install Microsoft SQL Server to Windows Azure helper library as instructed in section [D] in the
appendix.
2. Install the necessary SQL Server management objects as instructed in section [E] in the
appendix.
Prerequisites
Make sure to have the requirements detailed in “Implementation” (earlier in this document) in place.
Make sure that TCP ports 1600 and 1601 are available.
Initial installation
On both instances:
1. Make sure the CLR is enabled. You can use this script.
EXEC sp_configure 'clr enabled', 1;
GO
RECONFIGURE;
GO
2. Make sure both instances share the same TCP port. The default, 1433, is used in this sample but
it is not mandatory.
3. Verify that the Windows Azure management certificate is present in the “My” certificate store of
the SQL Server engine execution account. The certificate must have the private key. The certificate
must not require a user password for its access.
4. Make sure all the noncontained entities required by both databases are replicated in both
instances. This includes the required Windows Azure Storage credential or credentials.
On Instance A and Instance B:
1. Download and install the
Microsoft SQL Server to Windows Azure helper library
(https://sqlservertoazure.codeplex.com/). You can use the script provided in the section “Error!
Reference source not found.” to create the database. Note that the script will require you to
specify the DLL location. The script will create a database called HaOnAzure.
Note: if you download an updated version of the helper library, customize the provided setup
script so that it creates a database called HaOnAzure.
2. Install the SQL Server Cluster metadata tables and scripts. You can use the one in the Error!
Reference source not found. section. Note that those objects will be created in the HaOnAzure
database.
3. Make sure that one database (MindFlavorDB) is attached to Instance A and the other (OtherDB)
is attached to Instance B.
Configuration
On Instance A:
1. Find the database ID of the database (MindFlavorDB). You can use the sys.databases catalog view.
SELECT * FROM sys.databases;
2. Call the stored procedure [HAOnAzure].[PlaceDBOnHA] for the database ID. For example, if a
database ID is 10, use the following.
EXEC [HAOnAzure].[PlaceDBOnHA] @database_id = 10;
3. Call [HAOnAzure].[UpsertAzureAccountCredentials] to store the Windows Azure account and
shared key.
EXEC [HAOnAzure].[UpsertAzureAccountCredentials]
@account = '<account here>',
@secret = '<secret here>';
On Instance B repeat the steps you performed on Instance A for the second database in our example:
1. Find the database ID of the database (OtherDB). You can use the sys.databases catalog view.
SELECT * FROM sys.databases;
2. Call the stored procedure [HAOnAzure].[PlaceDBOnHA] for the database ID. For example, if a
database ID is 10, use the following.
EXEC [HAOnAzure].[PlaceDBOnHA] @database_id = 10;
3. Call [HAOnAzure].[UpsertAzureAccountCredentials] to store the Windows Azure account and
shared key.
EXEC [HAOnAzure].[UpsertAzureAccountCredentials]
@account = '<account here>',
@secret = '<secret here>';
On either instance (A or B):
1. Insert a row in the [HAOnAzure].[Enpoint] table for each of the two endpoints.
EXEC [HAOnAzure].[InsertOrUpdateEndpointOnHA]
'<Certificate Thumbprint>',
'<Subscription ID>',
'<Cloud Service name>',
'MindFlavorDB',
'TCP-1600',
1600,
'137.117.214.251',
1433,
'Production',
'tcp';
GO
EXEC [HAOnAzure].[InsertOrUpdateEndpointOnHA]
'<Certificate Thumbprint>',
'<Subscription ID>',
'<Cloud Service name>',
'OtherDB',
'TCP-1601',
1601,
'137.117.214.251',
1433,
'Production',
'tcp';
For more information about the values to specify here, see “Endpoint table” earlier in this document.
2. Check the validity of the configured data. First look at the databases in HA.
EXEC [HAOnAzure].[ShowDatabasesOnHA];
3. Check the endpoint configuration.
EXEC [HaOnAzure].[ShowEndpoints];
Single node check
This step is optional and is used to verify the correctness of the steps done so far. Perform the following
steps on Instance A or Instance B.
1. Call the [HaOnAzure].ScanForDetachedDBsAndAttach stored procedure:
EXEC [HaOnAzure].ScanForDetachedDBsAndAttach;
Because both databases are attached, you should see output like this.
2. Call [HaOnAzure].ScanForDBAndMountEndpoints to create and bind the endpoints. Because the
binding is done only by the instance that owns the database, call it on both instances.
EXEC [HaOnAzure].ScanForDBAndMountEndpoints;
This is the first execution, so the REST code creates the endpoint and binds it to Instance A. This call takes
around 30 seconds.
On Instance B the result should be similar to this.
3. Call [HaOnAzure].ScanForDBAndMountEndpoints again to make sure nothing happens.
EXEC [HaOnAzure].ScanForDBAndMountEndpoints;
This call should be faster.
Test the database failover
This section is optional. On Instance B:
1. Call the [HaOnAzure].ScanForDetachedDBsAndAttach stored procedure and check that
everything works.
EXEC [HaOnAzure].ScanForDetachedDBsAndAttach;
2. Call [HaOnAzure].ScanForDBAndMountEndpoints to make sure nothing happens.
EXEC [HaOnAzure].ScanForDBAndMountEndpoints;
On Instance B:
1. Detach a database.
EXEC sp_detach_db [OtherDB];
On Instance A:
2. Call the [HaOnAzure].ScanForDetachedDBsAndAttach stored procedure and mount the
database.
EXEC [HaOnAzure].ScanForDetachedDBsAndAttach;
3. To move the endpoint, call [HaOnAzure].ScanForDBAndMountEndpoints and wait. Because the
endpoint must be unmounted first, the wait time is doubled to about 1 minute.
EXEC [HaOnAzure].ScanForDBAndMountEndpoints;
Test remote connectivity to a database in case of failover
From a remote machine, connect to the endpoint for TCP port 1600. Note that you cannot use the SQL
Server Browser service.
1. Run this batch.
SELECT @@SERVERNAME, * FROM sys.databases WHERE name IN('MindFlavorDB', 'OtherDB')
Note that there are two databases on this instance. The physical instance name can be found in the first
field (the xxxxCTP2).
2. Perform a failover of database MindFlavorDB. For more information, see the procedure called
“Test the database failover.”
3. At the end of the failover run the same batch again.
SELECT @@SERVERNAME, * FROM sys.databases WHERE name IN ('MindFlavorDB', 'OtherDB')
The physical server has now changed to “xxxxCTP3”, and the remote connection can be resolved to the
correct instance without external intervention; however, because TDS is a stateful protocol, you might
have to reauthenticate the connection.
Add node to the HA cluster
To add a node to this cluster, follow these steps:
1. Add the virtual machine to the same cloud service.
2. Install the management certificate in the “My” store of the SQL Server engine execution account.
3. Install the HAOnAzure database and call [HAOnAzure].[UpsertAzureAccountCredentials] to add the
mandatory local metadata. Alternatively, you can migrate an existing HAOnAzure database via backup
and restore.
4. Monitoring and troubleshooting
In every troubleshooting process, the first step is collecting adequate information to diagnose the
problem. SQL Server Data Files in Windows Azure requires almost the same information as a
traditional SQL Server installation:
 Guest operating system application and system event logs
 SQL Server error log
 SQL Server Profiler trace
 Windows Azure Management Portal
 Azure storage logs and analytics
 SQL Server performance counters
 SQL Server wait types
 SQL Server xEvents trace
Two important considerations apply to this list:
 SQL Server masks Windows Azure blob storage level access errors inside traditional SQL Server error messages related to the I/O subsystem. If you want to trap and look at the real Windows Azure error message and code, use the xfcb_failed_request xEvents trace.
 If you want to troubleshoot SQL Server Data Files in Windows Azure in depth, go to the Windows Azure storage level and enable diagnostics and logging as explained in the next section.
There is nothing specific to SQL Server Data Files in Windows Azure in the operating system application
and system event logs, the SQL Server error log, or in SQL Server Profiler. The following section focuses
on the other information items mentioned in the preceding list. As of this writing, no public trace flag or
DBCC command is specifically related to this feature.
4.1 Windows Azure Management Portal
When using SQL Server Data Files in Windows Azure, SQL Server 2014 relies directly on Windows Azure
blobs that are effectively its storage system. Using the Windows Azure Management Portal you can
check basic counters about your storage account on the Monitor tab.
Using this web interface, you can see immediately whether the Windows Azure blob storage has
experienced availability problems (Availability), the percentage of successful operations (Success
Percentage) and the total number of requests sent or received at the storage account level (Total
Requests). In addition to the basic metrics displayed by default, you can also add many others.
Finally, you can create automatic alerts based on a large number of metrics. If a threshold you select is
crossed, Windows Azure subscription administrators receive a notification.
Note: If you want to visualize counters and create alerts, enable and configure monitoring for the
storage account in the Configure tab.
At a minimum, we recommend that you enable alerts for Throttling Error, Availability, Total Requests,
and Network Error Percentage.
Windows Azure Storage comes with a strong SLA on service availability of 99.9 percent. For more
information, see the SLA:
Windows Azure Storage Service Level Agreement
http://www.microsoft.com/en-us/download/details.aspx?id=6656
4.2 Windows Azure Storage logs and analytics
For every storage account in Windows Azure, you can (and for this scenario should) enable analytics and
logging at the storage account level to obtain detailed storage information and errors; you can enable
logging for only the blob storage in the Windows Azure Management Portal. For more information
about how to configure logging, see the following:
How To Monitor a Storage Account
http://www.windowsazure.com/en-us/manage/services/storage/how-to-monitor-a-storage-account
After you enable logging for blob storage, you can retrieve information from the $LOGS special
container in your storage account location (http://<accountname>.blob.core.windows.net/$logs),
using one of the many tools available on the Internet to browse Windows Azure Storage. Here is a list of
some commonly used tools:
Azure Storage Explorer
http://azurestorageexplorer.codeplex.com
CloudXplorer
http://clumsyleaf.com/products/cloudxplorer
The $LOGS container stores detailed information about the following:
 Successful requests
 Failed requests, including timeout, throttling, network, authorization, and other errors
 Requests that use an SAS, including failed and successful requests
You can use Microsoft Excel to extract and analyze this information; Excel offers many ways to filter and
aggregate data. The following links contain examples:
Monitoring Windows Azure VM Disk Performance
http://brentdacodemonkey.wordpress.com/2013/11/13/monitoring-windows-azure-vm-disk-performance
Storage Analytics Logging - How to Enable and Where to Find the logs
http://blogs.msdn.com/b/cie/archive/2013/10/10/storage-analytics-logging-how-to-enable-and-where-to-find-the-logs.aspx
If you also want a graphical report of how your storage is performing, in terms of latency and number of
requests, enable Windows Azure Storage monitoring (in addition to previous logging) and use the tool
described in the following blog post:
Storage Analytics Viewer
http://social.msdn.microsoft.com/Forums/windowsazure/it-IT/1f973a27-5835-4f59-9683-b1e27f474113/storage-analytics-viewer?forum=windowsazuredata
In the following figure, you can see some of the useful information you can retrieve using Windows
Azure Storage logs and analytics in conjunction with the powerful Excel import and filtering capabilities:
 operation-type: indicates a write operation on the blob.
 request-status: shows the success or failure of the operation.
 end-to-end-latency-in-ms: tracks effective latency of the Windows Azure blob storage.
 requested-object-key: shows the SQL Server database file being accessed; this information is useful for drilling through latency analysis at the single file level.
After you configure Excel to display different types of logging and performance information, you can use
it to troubleshoot SQL Server Data Files in Windows Azure. For example, you can:
 Use column filtering on request-status and search for errors.
 Aggregate all entries based on requested-object-key, and then calculate minimum, average, and maximum access time using measures on column end-to-end-latency-in-ms. This method is useful for troubleshooting slow performance of database file access.
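The same min/avg/max aggregation can be scripted instead of done in Excel. The sketch below is illustrative only: it assumes each $LOGS entry has already been parsed into a dict keyed by the field names above (latency_by_object is an invented helper), and it assumes that Storage Analytics success statuses all contain the word "Success".

```python
# Illustrative sketch: per-file latency aggregation over parsed $LOGS records.
from collections import defaultdict

def latency_by_object(records):
    """Return {object_key: (min, avg, max)} latency for successful requests."""
    buckets = defaultdict(list)
    for rec in records:
        # Storage Analytics reports several success variants (for example
        # Success and SASSuccess); failed requests are analyzed separately.
        if "Success" not in rec["request-status"]:
            continue
        buckets[rec["requested-object-key"]].append(
            float(rec["end-to-end-latency-in-ms"]))
    return {key: (min(v), sum(v) / len(v), max(v))
            for key, v in buckets.items()}
```

Feeding every log entry for a day through this function gives the same per-file drill-down described above, with slow files surfacing through their maximum and average latencies.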
4.3 Performance counters
A new performance counter object, SQLServer:HTTP Storage, reports information on Windows Azure
Storage integration. As you can see in the following screen shot, the object instances you can select are
based on the storage accounts used.
This list describes each performance counter:
 Avg. Bytes/Read: The average number of bytes transferred from the HTTP storage per read.
 Bytes scheduled: The number of bytes that were scheduled in some way (rather than just being directly sent to the underlying miniport).
 Bytes scheduled/sec: The number of bytes per second from this flow that have been scheduled.
 Bytes transmitted: The number of bytes from this flow that have been sent.
 Bytes transmitted/sec: The number of bytes per second from this flow that have been sent.
 HTTP Storage IO retry/sec: The number of retry requests sent to the HTTP storage per second.
 Nonconforming packets scheduled: The number of packets that have entered the packet scheduler at a rate that exceeded the flow parameters.
 Nonconforming packets scheduled/sec: The rate at which nonconforming packets have entered the packet scheduler.
 Pacer Flow: Statistics from the packet scheduler.
 Packets scheduled: The number of packets that were scheduled in some way (rather than just being directly sent to the underlying miniport).
 Packets transmitted: The number of packets from this flow that have been sent.
 Packets transmitted/sec: The number of packets per second from this flow that have been sent.
 Reads/Sec: The number of reads per second on the HTTP storage.
 Transfers/Sec: The number of read and write operations per second on the HTTP storage.
4.4 SQL Server wait types
There is no specific wait type for SQL Server Data Files in Windows Azure, but because this feature is
related to database files I/O, you can use sys.dm_os_wait_stats SQL Server dynamic management view
to search for specific wait types related to I/O like IO_COMPLETION, WRITE_COMPLETION, WRITELOG
and PAGEIOLATCH_*. For more information about wait types, see the following topic in SQL Server
Books Online:
sys.dm_os_wait_stats (Transact-SQL)
http://msdn.microsoft.com/en-us/library/ms179984.aspx
Here is an example of information about SQL Server Data Files in Windows Azure as viewed through this
dynamic management view.
If you want information specific to each data or log file for any database, you can use the following
query. It returns an ordered list based on descending values for I/O stalls.
select DB_NAME(t1.DbId) as 'Database', t2.name as 'File Name',
t2.physical_name as 'File Path', t1.IoStallMS, NumberReads, BytesRead,
IoStallReadMS, NumberWrites, BytesWritten, IoStallWriteMS
from ::fn_virtualfilestats(NULL,NULL) as t1
join sys.master_files as t2 on t1.DbId = t2.database_id and t1.FileId = t2.file_id
where LEFT(t2.physical_name,5) = 'https'
order by t1.IoStallMS desc
4.5 SQL Server xEvents trace
The following new xEvents events support the SQL Server Data Files in Windows Azure feature:
 xfcb_failed_request: A request to Windows Azure Storage failed to complete.
 xfcb_request_opened: A request was opened to Windows Azure Storage.
 xfcb_header_obtained: A response header was obtained from the request to Windows Azure Storage.
 xfcb_blob_properties_obtained: A Windows Azure Storage blob property was obtained from the response header.
 xfcb_read_complete: Read from the Windows Azure Storage response is complete.
 xfcb_send_complete: Request send to Windows Azure Storage is complete.
 xfcb_write_complete: Write to Windows Azure Storage is complete.
SQL Server xEvents is a powerful tool that provides granular and in-depth information. It is an
excellent technology for trapping low-level Windows Azure blob storage native error codes. Because it is
deeply integrated in the SQL Server engine, you can also analyze a complex series of events, even
outside of the specific context of this feature.
SQL Server 2014 provides an easy way to set up an xEvents session by using a wizard, in addition to
Transact-SQL queries.
The relevant events for SQL Server Data Files in Windows Azure are located in the io category and
Operational channel.
For every event, a series of information attributes is available. For example, this is what you can see for
the xfcb_failed_request event.
xfcb_failed_request is the most important event to use for monitoring Windows Azure blob storage
problems. It returns the Windows Azure error code (errorcode), the file or blob affected (file_path), and
the number of retries attempted (retry_count) by the SQL Server storage engine. The following screen
shot shows an example of a failed request.
For more information about SQL Server xEvents, see the following topic in SQL Server Books Online:
Extended Events
http://technet.microsoft.com/en-us/library/bb630282.aspx
4.6 Errors
To avoid errors due to unsupported features or limitations, first review SQL Server Data Files in
Windows Azure limitations. For more information, see “SQL Server Data Files in Windows Azure”
(http://msdn.microsoft.com/en-us/library/dn385720(v=sql.120).aspx) in SQL Server Books Online.
The list of errors that you might encounter when you use the SQL Server 2014 data files in Windows
Azure Storage feature follows.
Authentication errors
 Cannot drop the credential '%.*ls' because it is used by an active database file.
You may see this error when you try to drop a credential that is still being used by an active
database file in Windows Azure Storage. To drop the credential, first you must delete the
associated blob that has this database file. For more information, see “ICloudBlob.Delete
Method” (http://msdn.microsoft.com/en-us/library/windowsazure/microsoft.windowsazure.storage.blob.icloudblob.delete.aspx). To
delete a blob that has an active lease, you must first break the lease. For more information, see
“ICloudBlob.BreakLease Method” (http://msdn.microsoft.com/en-us/library/windowsazure/microsoft.windowsazure.storage.blob.icloudblob.breaklease.aspx).
 Shared Access Signature has not been created on the container correctly.
Make sure that you have created a shared access signature (SAS) on the container correctly.
Review the instructions given in Lesson 2 in “Tutorial: SQL Server Data Files in Windows Azure
Storage service” (http://msdn.microsoft.com/en-us/library/dn466438(v=sql.120).aspx).
 SQL Server credential has not been created correctly.
Make sure that you have used 'Shared Access Signature' for the Identity field and created a
secret correctly. Review the instructions given in Lesson 3 in “Tutorial: SQL Server Data Files in
Windows Azure Storage service” (http://msdn.microsoft.com/en-us/library/dn466438(v=sql.120).aspx).
Database errors
 Errors when creating a database
Review the instructions given in Lesson 4 in “Tutorial: SQL Server Data Files in Windows Azure
Storage service” (http://msdn.microsoft.com/en-us/library/dn466438(v=sql.120).aspx).
 Errors when running the ALTER statement
Make sure to execute the ALTER DATABASE statement when the database is online. When you
copy the data files to Windows Azure Storage, always create a page blob, not a block blob.
Otherwise, ALTER DATABASE fails. Review the instructions given in Lesson 7 in “Tutorial: SQL
Server Data Files in Windows Azure Storage service” (http://msdn.microsoft.com/en-us/library/dn466438(v=sql.120).aspx).
 Error code 5120 Unable to open the physical file "%.*ls". Operating system error %d: "%ls"
Currently, this new enhancement does not support more than one SQL Server instance
accessing the same database files in Windows Azure Storage at the same time. For example, if
ServerA is online with an active database file and ServerB, which has a database that points to
the same data file, is started, ServerB fails to start the database with the following error:
Msg 5120, Level 16, State 140, Line 5
Unable to open the physical file
"https://enosg.blob.core.windows.net/frcogno/filedata1.mdf". Operating system
error 32: "32(The process cannot access the file because it is being used by
another process.)".
To resolve this issue, first determine whether you need ServerA to access the database file in
Windows Azure Storage. If not, simply remove any connection between ServerA and the
database files in Windows Azure Storage. To do this, follow these steps:
1. Set the file path of ServerA to a local folder by using the ALTER DATABASE statement.
2. Take the database offline in ServerA.
3. Copy database files from Windows Azure Storage to the local folder in ServerA. This
ensures that ServerA still has a copy of the database locally.
4. Put the database back online.
5. Conclusion
In this white paper we introduced you to SQL Server Data Files in Windows Azure, another big step
toward Windows Azure integration in SQL Server 2014. The “Architecture” section covered basic
concepts, usage scenarios, and pros and cons to show the benefits of this feature in your SQL Server
environments. The “Configuration” section walked through a step-by-step procedure to configure and use
SQL Server Data Files in Windows Azure for your existing databases, along with tested best practices and
operation management. “Implementing a failover cluster mechanism” provided a real and fully
functional example, including complete source code, of a way to design a new high-availability
mechanism; we hope this section provided you with a deeper understanding of the power and flexibility
of this new technology. Finally, “Monitoring and troubleshooting” explained what you need to know to
support, monitor, and troubleshoot this new SQL Server database deployment model, highlighting the
differences between the new model and traditional on-premises environments.
For more information:
http://www.microsoft.com/sqlserver/: SQL Server Web site
http://technet.microsoft.com/en-us/sqlserver/: SQL Server TechCenter
http://msdn.microsoft.com/en-us/sqlserver/: SQL Server DevCenter
SQL Server Integration with Windows Azure Storage
http://msdn.microsoft.com/en-us/library/dn385720(v=sql.120).aspx
Tutorial: Integrating SQL Server Data Files in Windows Azure Storage service
http://msdn.microsoft.com/en-us/library/dn466438(v=sql.120).aspx
Tutorial: SQL Server Data Files in Windows Azure Storage service
http://msdn.microsoft.com/en-us/library/dn466438(v=sql.120).aspx
Performance Guidance for SQL Server in Windows Azure Virtual Machines
http://go.microsoft.com/fwlink/?LinkId=306266
Virtual Machine and Cloud Service Sizes for Windows Azure
http://msdn.microsoft.com/en-us/library/dn197896.aspx
Getting Started with SQL Server in Windows Azure Virtual Machines
http://msdn.microsoft.com/en-us/library/windowsazure/dn133151.aspx
CSS SQL Server Engineers
http://blogs.msdn.com/psssql
SQL Server Database Backup in Azure IaaS: performance tests
http://blogs.msdn.com/b/igorpag/archive/2013/07/30/sql-server-database-backup-in-azure-iaas-performance-tests.aspx
SQL Server 2012 AlwaysOn Availability Group and Listener in Azure VMs: Notes, Details and
Recommendations
http://blogs.msdn.com/b/igorpag/archive/2013/09/02/sql-server-2012-alwayson-availability-group-and-listener-in-azure-vms-notes-details-and-recommendations.aspx
Did this paper help you? Please give us your feedback. Tell us on a scale of 1 (poor) to 5 (excellent), how
would you rate this paper and why have you given it this rating? For example:
 Are you rating it high due to having good examples, excellent screen shots, clear writing, or
another reason?
 Are you rating it low due to poor examples, fuzzy screen shots, or unclear writing?
This feedback will help us improve the quality of white papers we release.
Send feedback
Appendix
A. Wait stats analysis
WITH [Waits] AS
(SELECT
[wait_type],
[wait_time_ms] / 1000.0 AS [WaitS],
([wait_time_ms] - [signal_wait_time_ms]) / 1000.0 AS [ResourceS],
[signal_wait_time_ms] / 1000.0 AS [SignalS],
[waiting_tasks_count] AS [WaitCount],
100.0 * [wait_time_ms] / SUM ([wait_time_ms]) OVER() AS [Percentage],
ROW_NUMBER() OVER(ORDER BY [wait_time_ms] DESC) AS [RowNum]
FROM sys.dm_os_wait_stats
WHERE [wait_type] NOT IN (
N'CLR_SEMAPHORE',
N'LAZYWRITER_SLEEP',
N'RESOURCE_QUEUE',
N'SQLTRACE_BUFFER_FLUSH',
N'SLEEP_TASK',
N'SLEEP_SYSTEMTASK',
N'WAITFOR',
N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
N'CHECKPOINT_QUEUE', N'REQUEST_FOR_DEADLOCK_SEARCH',
N'XE_TIMER_EVENT',
N'XE_DISPATCHER_JOIN',
N'LOGMGR_QUEUE',
N'FT_IFTS_SCHEDULER_IDLE_WAIT',
N'BROKER_TASK_STOP', N'CLR_MANUAL_EVENT',
N'CLR_AUTO_EVENT',
N'DISPATCHER_QUEUE_SEMAPHORE',
N'TRACEWRITE',
N'XE_DISPATCHER_WAIT',
N'BROKER_TO_FLUSH', N'BROKER_EVENTHANDLER',
N'FT_IFTSHC_MUTEX', N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
N'DIRTY_PAGE_POLL', N'SP_SERVER_DIAGNOSTICS_SLEEP')
)
SELECT
[W1].[wait_type] AS [WaitType],
CAST ([W1].[WaitS] AS DECIMAL(14, 2)) AS [Wait_S],
CAST ([W1].[ResourceS] AS DECIMAL(14, 2)) AS [Resource_S],
CAST ([W1].[SignalS] AS DECIMAL(14, 2)) AS [Signal_S],
[W1].[WaitCount] AS [WaitCount],
CAST ([W1].[Percentage] AS DECIMAL(4, 2)) AS [Percentage],
CAST (([W1].[WaitS] / [W1].[WaitCount]) AS DECIMAL (14, 4)) AS [AvgWait_S],
CAST (([W1].[ResourceS] / [W1].[WaitCount]) AS DECIMAL (14, 4)) AS [AvgRes_S],
CAST (([W1].[SignalS] / [W1].[WaitCount]) AS DECIMAL (14, 4)) AS [AvgSig_S]
FROM [Waits] AS [W1]
INNER JOIN [Waits] AS [W2]
ON [W2].[RowNum] <= [W1].[RowNum]
GROUP BY [W1].[RowNum], [W1].[wait_type], [W1].[WaitS],
[W1].[ResourceS], [W1].[SignalS], [W1].[WaitCount], [W1].[Percentage]
HAVING SUM ([W2].[Percentage]) - [W1].[Percentage] < 95; -- percentage threshold
B. Enabling Transparent Data Encryption (TDE)
-- Create a master key and a server certificate.
USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MySQLKey01';
GO
CREATE CERTIFICATE MySQLCert WITH SUBJECT = 'SQL - Azure Storage Certification'
GO
-- Create a backup of the server certificate in the master database.
-- Store TDE certificates in the source machine locally.
BACKUP CERTIFICATE MySQLCert
TO FILE = 'C:\certs\MySQLCert.CER'
WITH PRIVATE KEY
(
FILE = 'C:\certs\MySQLPrivateKeyFile.PVK',
ENCRYPTION BY PASSWORD = 'MySQLKey01'
);
-- Then, encrypt your new database by following these steps:
-- Switch to the new database.
-- Create a database encryption key that is protected by the server certificate in the master database.
-- Alter the new database to encrypt the database using TDE.
USE TestDB1;
GO
-- Encrypt your database.
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MySQLCert
GO
ALTER DATABASE TestDB1
SET ENCRYPTION ON
GO
C. Downloading Database file blobs using a Valet Key Pattern
When you delegate access to a resource, in some cases you might want to restrict the functionality you
grant, and you also need the ability to revoke those grants. The traditional approach is to define access
control lists (ACLs) that detail the permission matrix. However, ACLs require the ability to verify the
identity of the account (even Everyone is a special kind of identity). Another approach doesn’t require you
to identify the grantee: the required privilege is granted by the ownership of a specific token. This second
approach, known as a Valet Key pattern, is like giving a valet a key to park your car: a valet doesn’t need
to own your car in order to park it. All that is necessary is a key that you provide.
Windows Azure storage implements the Valet Key pattern using shared access signature (SAS) URIs. SAS
URIs are static (that is, immutable) URIs that embed both the resource identification and the granted
privilege. For example, the following URI gives you write and list access to the sascontainer in
the storageaccount storage account from 2012-02-12 until 2013-04-13.
https://storageaccount.blob.core.windows.net/sascontainer?sv=2012-02-12&se=2013-04-13T00%3A12%3A08Z&sr=c&sp=wl&sig=t%2BbzU9%2B7ry4okULN9S0wst%2F8MCUhTjrHyV9rDNLSe8g%3D
Anyone using this URI can write and list the specified container. For more information, see “Shared Access
Signatures, Part 1: Understanding the SAS Model” (http://www.windowsazure.com/en-us/manage/services/storage/net/shared-access-signature-part-1).
The SAS URIs used by SQL Server Data Files in Windows Azure to access the required database can be
mapped on a stored access policy instead of a resource directly. The advantage of giving a stored access
policy mapped URI instead of a direct one is that you can revoke the policy at any time. Revoking a stored
access policy invalidates the related SAS URIs. If you give a direct SAS URI you cannot revoke it, short of
invalidating the shared encryption keys altogether.
You can generate your own SAS URI and use it with HTTP programs, such as Microsoft Internet Explorer
and Google Chrome, that don’t understand the Windows Azure REST APIs (and therefore are unable to
authenticate the requests correctly). The suggested workflow in this case is:
1. Generate a SAS policy for the blob to download. The policy should:
o Be specific to the container.
o Grant only read access (following the minimum privilege principle).
o Have a very narrow timeframe. The policy should start immediately and expire as soon as
the download operation terminates. However, there may be slight timer inconsistencies
with the virtual machine: plan accordingly.
2. Generate the SAS URI using the SAS policy previously generated.
3. Download the blob using your HTTP(s) program of choice. Point it to the SAS URI generated in
step 2.
4. Invalidate the SAS policy.
The last step is optional, but it is a best practice.
This workflow can be used, for example, to download a database backup saved on a Windows Azure
container (available from SQL Server 2012 SP1 CU2). For more information, see the following links:
 SQL Server Backup and Restore with Windows Azure Blob Storage Service
(http://technet.microsoft.com/en-us/library/jj919148.aspx).
 Cumulative update package 2 for SQL Server 2012 Service Pack 1
(http://support.microsoft.com/kb/2790947).
Even if the TCP protocol is robust, there still is a chance of receiving a corrupt blob. There are many ways
of ensuring the consistency of a download; Windows Azure supports the MD5 hash in a way that depends
on the type of blob (page or block). For more information about different types of blobs, see
“Understanding Block Blobs and Page Blobs” (http://msdn.microsoft.com/en-us/library/windowsazure/ee691964.aspx).
Internet browsers are not aware of the Windows Azure MD5 hash, so you cannot rely on them for this
particular issue. You should perform a server-side hashing and compare it to a client-side hash. If the
hashes do not match, you must download the blob again.
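The client-side check described above can be sketched in a few lines of Python (an illustration, not part of the white paper's tooling). It assumes the blob's Content-MD5 property is populated; when it is, the property holds the base64-encoded MD5 of the blob contents, so the comparison is straightforward:

```python
import base64
import hashlib

def blob_md5_matches(content: bytes, content_md5_header: str) -> bool:
    """Compare a locally computed MD5 of the downloaded bytes against the
    base64-encoded Content-MD5 value returned by the storage service."""
    local = base64.b64encode(hashlib.md5(content).digest()).decode("ascii")
    return local == content_md5_header

# "XUFAKrxLKna5cZ2REBfFkg==" is the base64-encoded MD5 of b"hello".
print(blob_md5_matches(b"hello", "XUFAKrxLKna5cZ2REBfFkg=="))   # True
print(blob_md5_matches(b"hellO", "XUFAKrxLKna5cZ2REBfFkg=="))   # False
```

If the check returns False, discard the local copy and download the blob again.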
Consider this example. You want to download the blob
https://enosg.blob.core.windows.net/frcogno/other1.mdf. Following best practices, the container
is private (that is, public access is disabled) and you need to use the REST API to access it. Here are the
steps you need to follow:
Step 1
EXEC [Azure].[AddContainerACL]
	'enosg',		-- account name
	'<shared key>',		-- shared key
	1,			-- use HTTPS
	'frcogno',		-- container name
	'Off',			-- container public access
	'testACL',		-- SAS policy name
	'2014-01-01',		-- valid from
	'2014-02-01',		-- valid to
	1,			-- can read?
	0,			-- can write?
	0,			-- can delete blobs?
	0,			-- can list blobs?
	NULL, 0, NULL
Step 2
SELECT [Azure].[GeneratePolicyBlobSharedAccessSignatureURI](
'https://enosg.blob.core.windows.net/frcogno/other1.mdf',
'<shared key>',
'b',
'testACL')
Step 3
EXEC [Azure].[RemoveContainerACL]
'enosg',
'<shared key>',
1,
'frcogno',
'testACL'
D. Microsoft SQL Server to Windows Azure helper library
This library is available at no cost from CodePlex at https://sqlservertoazure.codeplex.com/. The sample
cluster implementation was built using version 1.7.0.2 (change-set 67b8b2385f7a). Because the library is
likely to be updated, the SQL CLR import scripts are included in this appendix. You can download the
required binaries from the CodePlex page.
Library setup script
-- *********************************************************************************
-- ******** In SQL Server Management Studio, open this file and press CTRL+Shift+M to
-- ******** configure it BEFORE executing it.
-- *********************************************************************************
USE [master];
GO
CREATE ASYMMETRIC KEY [AzureKey] FROM EXECUTABLE FILE = '<ITPCfSQL.Azure.CLR.dll path, nvarchar(4000), DLLPath>';
GO
CREATE LOGIN [AzureLogin] FROM ASYMMETRIC KEY [AzureKey];
GRANT EXTERNAL ACCESS ASSEMBLY TO [AzureLogin];
GO
CREATE DATABASE HaOnAzure;
GO
ALTER DATABASE [HaOnAzure] SET RECOVERY SIMPLE;
GO
USE [HaOnAzure];
GO
EXEC sp_changedbowner 'sa';
GO
CREATE SCHEMA [Azure];
GO
CREATE SCHEMA [Azure.Embedded];
GO
CREATE SCHEMA [Azure.Management];
GO
CREATE ASSEMBLY [ITPCfSQL.Azure.CLR] FROM '<ITPCfSQL.Azure.CLR.dll path, nvarchar(4000),
DLLPath>'
WITH PERMISSION_SET=EXTERNAL_ACCESS;
GO
-------- Std
-------- Blob
CREATE FUNCTION Azure.ListContainers(@AccountName NVARCHAR(255), @SharedKey
NVARCHAR(255), @useHTTPS bit,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(MAX), ETag NVARCHAR(255), LastModified
DATETIME, LeaseDuration NVARCHAR(255), LeaseState NVARCHAR(255), LeaseStatus
NVARCHAR(255))
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].ListContainers;
GO
CREATE PROCEDURE Azure.CreateContainer(@AccountName NVARCHAR(255), @SharedKey
NVARCHAR(255), @useHTTPS bit, @ContainerName NVARCHAR(4000), @ContainerPublicReadAccess
NVARCHAR(255), @xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateContainer;
GO
CREATE PROCEDURE Azure.DeleteContainer(@AccountName NVARCHAR(255), @SharedKey
NVARCHAR(255), @useHTTPS bit, @ContainerName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER =
NULL, @xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DeleteContainer;
GO
CREATE FUNCTION Azure.GetContainerACL(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(255),
@LeaseId UNIQUEIDENTIFIER = NULL,
@TimeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS TABLE
(
Id NVARCHAR(64),
Start DATETIME,
Expiry DATETIME,
Permission NVARCHAR(255),
ContainerPublicReadAccess NVARCHAR(255))
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].GetContainerACL;
GO
CREATE PROCEDURE Azure.ChangeContainerPublicAccessMethod(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000),
@ContainerPublicReadAccess NVARCHAR(255),
@LeaseId UNIQUEIDENTIFIER = NULL,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].ChangeContainerPublicAccessMethod;
GO
CREATE PROCEDURE Azure.AddContainerACL(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000),
@ContainerPublicReadAccess NVARCHAR(255),
@accessPolicyId NVARCHAR(64),
@start DATETIME,
@expiry DATETIME,
@canRead BIT = 0,
@canWrite BIT = 0,
@canDeleteBlobs BIT = 0,
@canListBlobs BIT = 0,
@LeaseId UNIQUEIDENTIFIER = NULL,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].AddContainerACL;
GO
CREATE PROCEDURE Azure.RemoveContainerACL(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000),
@accessPolicyId NVARCHAR(64),
@LeaseId UNIQUEIDENTIFIER = NULL,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].RemoveContainerACL;
GO
CREATE FUNCTION Azure.ListBlobs(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255),
@useHTTPS bit,
@container NVARCHAR(4000),
@includeSnapshots BIT,
@includeMetadata BIT,
@includeCopy BIT,
@includeUncommittedBlobs BIT,
@xmsclientrequestId NVARCHAR(4000) = NULL)
RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(MAX), ETag NVARCHAR(255),
BlobSequenceNumber INT,
BlobType NVARCHAR(255),
ContentEncoding NVARCHAR(255),
ContentLanguage NVARCHAR(255),
ContentLength BIGINT,
ContentMD5 NVARCHAR(255),
ContentType NVARCHAR(255),
CopyId UNIQUEIDENTIFIER,
CopySource NVARCHAR(255),
CopyStatus NVARCHAR(255),
CopyCurrentPosition BIGINT,
CopyTotalLength BIGINT,
CopyCompletionTime DATETIME,
LastModified DATETIME,
LeaseDuration NVARCHAR(255), LeaseState NVARCHAR(255), LeaseStatus
NVARCHAR(255),
Metadata XML)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].ListBlobs;
GO
CREATE FUNCTION Azure.DownloadBlockBlob(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@container NVARCHAR(4000), @blobName NVARCHAR(4000),
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS VARBINARY(MAX)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DownloadBlockBlob;
GO
CREATE FUNCTION Azure.DownloadPageBlob(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@container NVARCHAR(4000), @blobName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER = NULL,
@startPosition BIGINT = NULL, @length INT = NULL,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS VARBINARY(MAX)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DownloadPageBlob;
GO
CREATE PROCEDURE Azure.CreateOrReplaceBlockBlob(@AccountName NVARCHAR(255), @SharedKey
NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000),
@buffer VARBINARY(MAX),
@contentType NVARCHAR(255) = 'application/octet-stream', @contentEncoding
NVARCHAR(255) = NULL,
@contentMD5 NVARCHAR(255) = NULL,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateOrReplaceBlockBlob;
GO
CREATE PROCEDURE Azure.PutPage(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000),
@buffer VARBINARY(MAX),
@startPositionBytes BIGINT, @bytesToUpload INT = NULL,
@leaseId UNIQUEIDENTIFIER = NULL,
@contentMD5 NVARCHAR(255) = NULL,
@timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].PutPage;
GO
CREATE FUNCTION Azure.PutPageFunction(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000),
@buffer VARBINARY(MAX),
@startPositionBytes BIGINT, @bytesToUpload INT = NULL,
@leaseId UNIQUEIDENTIFIER = NULL,
@contentMD5 NVARCHAR(255) = NULL,
@timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS NVARCHAR(MAX)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].PutPage_Function;
GO
CREATE FUNCTION Azure.CreateOrReplaceBlockBlobFunction(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000),
@buffer VARBINARY(MAX),
@contentType NVARCHAR(255) = 'application/octet-stream', @contentEncoding
NVARCHAR(255) = NULL,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS NVARCHAR(MAX)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateOrReplaceBlockBlob_Function;
GO
CREATE PROCEDURE Azure.DeleteBlob(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255),
@useHTTPS bit,
@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000),
@blobDeletionMethod NVARCHAR(255),
@leaseID UNIQUEIDENTIFIER = NULL,
@snapshotDateTimeToDelete DATETIME = NULL,
@xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DeleteBlob;
GO
CREATE PROCEDURE Azure.CopyBlob(@destinationAccount NVARCHAR(255), @destinationSharedKey
NVARCHAR(255), @useHTTPS bit,
@sourceAccountName NVARCHAR(255),
@sourceContainerName NVARCHAR(4000), @sourceBlobName NVARCHAR(4000),
@sourceLeaseId UNIQUEIDENTIFIER = NULL, @destinationLeaseId UNIQUEIDENTIFIER = NULL,
@destinationContainerName NVARCHAR(4000), @destinationBlobName NVARCHAR(4000),
@xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CopyBlob;
GO
CREATE FUNCTION Azure.GetBlobProperties(@AccountName NVARCHAR(255), @SharedKey
NVARCHAR(255), @useHTTPS bit,
@container NVARCHAR(4000), @blob NVARCHAR(4000),
@snapshotDateTime DATETIME = NULL,
@xmsclientrequestId NVARCHAR(4000) = NULL)
RETURNS TABLE(
ETag NVARCHAR(255),
BlobSequenceNumber BIGINT,
BlobType NVARCHAR(255),
ContentEncoding NVARCHAR(255),
ContentLanguage NVARCHAR(255),
ContentLength BIGINT,
ContentMD5 NVARCHAR(255),
ContentType NVARCHAR(255),
CopyId UNIQUEIDENTIFIER,
CopySource NVARCHAR(255),
CopyStatus NVARCHAR(255),
CopyStatusDescription NVARCHAR(4000),
CopyCurrentPosition BIGINT,
CopyTotalLength BIGINT,
CopyCompletionTime DATETIME,
LastModified DATETIME,
LeaseDuration NVARCHAR(255), LeaseState NVARCHAR(255), LeaseStatus
NVARCHAR(255),
CacheControl NVARCHAR(255),
[Date] DATETIME,
RequestId UNIQUEIDENTIFIER,
[Version] NVARCHAR(255)
)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].GetBlobProperties;
GO
CREATE PROCEDURE [Azure].UpdateBlobMetadata(
@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1,
@container NVARCHAR(4000), @blob NVARCHAR(4000),
@attributeList XML,
@leaseId UNIQUEIDENTIFIER = NULL,
@timeoutSeconds INT = 60,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].UpdateBlobMetadata;
GO
CREATE PROCEDURE Azure.CreateOrReplacePageBlob(@AccountName NVARCHAR(255), @SharedKey
NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000),
@maximumSize BIGINT, @leaseId UNIQUEIDENTIFIER = NULL,
@contentType NVARCHAR(255) = 'application/octet-stream', @contentEncoding
NVARCHAR(255) = NULL,
@contentLanguage NVARCHAR(255) = NULL,
@timeout INT = NULL,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateOrReplacePageBlob;
GO
-- Queue
CREATE PROCEDURE [Azure].CreateQueue(
@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1,
@queueName NVARCHAR(255),
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].CreateQueue;
GO
CREATE FUNCTION [Azure].ListQueues(
@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1,
@prefix NVARCHAR(255) = NULL,
@includeMetadata BIT = 0,
@timeoutSeconds INT = 60,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(4000), Metadata XML)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].ListQueues;
GO
CREATE FUNCTION [Azure].ListQueues_Faulty(
@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1,
@prefix NVARCHAR(255) = NULL,
@includeMetadata BIT = 0,
@timeoutSeconds INT = 60,
@xmsclientrequestId NVARCHAR(255) = NULL)
RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(4000), Metadata XML)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].ListQueues_Faulty;
GO
CREATE PROCEDURE [Azure].EnqueueMessage(
@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1,
@queueName NVARCHAR(255),
@xmlMessage XML,
@timeoutSeconds INT = 60,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].EnqueueMessage;
GO
CREATE PROCEDURE [Azure].DequeueMessage(
@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1,
@queueName NVARCHAR(255),
@visibilityTimeoutSeconds INT = 10, -- lower time might recover from failure faster.
@timeoutSeconds INT = 60,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].DequeueMessage;
GO
CREATE PROCEDURE [Azure].RetrieveApproximateMessageCount(
@accountName NVARCHAR(255), @sharedKey NVARCHAR(255), @useHTTPS BIT = 1,
@queueName NVARCHAR(255),
@timeoutSeconds INT = 60,
@xmsclientrequestId NVARCHAR(255) = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].RetrieveApproximateMessageCount;
GO
-- Table
CREATE PROCEDURE Azure.CreateTable(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255),
@useHTTPS bit, @TableName NVARCHAR(4000), @xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].CreateTable;
GO
CREATE PROCEDURE Azure.DropTable(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255),
@useHTTPS bit, @TableName NVARCHAR(4000), @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].DropTable;
GO
CREATE FUNCTION Azure.ListTables(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255),
@useHTTPS bit, @xmsclientrequestId NVARCHAR(4000) = NULL)
RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(MAX))
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].ListTables;
GO
CREATE FUNCTION Azure.QueryTable(@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255),
@useHTTPS bit, @tableName NVARCHAR(4000), @xmsclientrequestId NVARCHAR(4000) = NULL)
RETURNS TABLE(PartitionKey NVARCHAR(4000), RowKey NVARCHAR(4000), [TimeStamp] DATETIME,
Attributes XML)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].QueryTable;
GO
CREATE PROCEDURE Azure.DeleteEntity(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@TableName NVARCHAR(4000),
@PartitionKey NVARCHAR(4000), @RowKey NVARCHAR(4000),
@xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].DeleteEntity;
GO
CREATE PROCEDURE Azure.InsertOrReplaceEntity(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@TableName NVARCHAR(4000),
@PartitionKey NVARCHAR(4000), @RowKey NVARCHAR(4000), @AttributeList XML,
@xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].InsertOrReplaceEntity;
GO
----------- Embedded
----------- Blob
CREATE FUNCTION [Azure.Embedded].ListContainers(@LogicalConnectionName NVARCHAR(255),
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(MAX), ETag NVARCHAR(255), LastModified
DATETIME, LeaseDuration NVARCHAR(255), LeaseState NVARCHAR(255), LeaseStatus
NVARCHAR(255))
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].ListContainers_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].CreateContainer(@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000), @ContainerPublicReadAccess NVARCHAR(255),
@xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateContainer_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].DeleteContainer(@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000), @leaseId UNIQUEIDENTIFIER = NULL, @xmsclientrequestId
NVARCHAR(4000) = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DeleteContainer_Embedded;
GO
CREATE FUNCTION [Azure.Embedded].ListBlobs(@LogicalConnectionName NVARCHAR(255),
@container NVARCHAR(4000),
@includeSnapshots BIT,
@includeMetadata BIT,
@includeCopy BIT,
@includeUncommittedBlobs BIT,
@xmsclientrequestId NVARCHAR(4000) = NULL)
RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(MAX), ETag NVARCHAR(255),
BlobSequenceNumber INT,
BlobType NVARCHAR(255),
ContentEncoding NVARCHAR(255),
ContentLanguage NVARCHAR(255),
ContentLength BIGINT,
ContentMD5 NVARCHAR(255),
ContentType NVARCHAR(255),
CopyId UNIQUEIDENTIFIER,
CopySource NVARCHAR(255),
CopyStatus NVARCHAR(255),
CopyCurrentPosition BIGINT,
CopyTotalLength BIGINT,
CopyCompletionTime DATETIME,
LastModified DATETIME,
LeaseDuration NVARCHAR(255), LeaseState NVARCHAR(255), LeaseStatus
NVARCHAR(255),
Metadata XML)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].ListBlobs_Embedded;
GO
CREATE FUNCTION [Azure.Embedded].DownloadBlockBlob(
@LogicalConnectionName NVARCHAR(255),
@container NVARCHAR(4000), @blobName NVARCHAR(4000),
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS VARBINARY(MAX)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DownloadBlockBlob_Embedded;
GO
CREATE FUNCTION [Azure.Embedded].DownloadPageBlob(
@LogicalConnectionName NVARCHAR(255),
@container NVARCHAR(4000), @blobName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER = NULL,
@startPosition BIGINT = NULL, @length INT = NULL,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS VARBINARY(MAX)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DownloadPageBlob_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].CreateOrReplaceBlockBlob(@LogicalConnectionName
NVARCHAR(255),
@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000),
@buffer VARBINARY(MAX),
@contentType NVARCHAR(255) = 'application/octet-stream', @contentEncoding
NVARCHAR(255) = NULL,
@contentMD5 NVARCHAR(255) = NULL,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateOrReplaceBlockBlob_Embedded;
GO
CREATE FUNCTION [Azure.Embedded].CreateOrReplaceBlockBlobFunction(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000),
@buffer VARBINARY(MAX),
@contentType NVARCHAR(255) = 'application/octet-stream', @contentEncoding
NVARCHAR(255) = NULL,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS NVARCHAR(MAX)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateOrReplaceBlockBlob_Function_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].DeleteBlob(@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000),
@blobDeletionMethod NVARCHAR(255),
@leaseID UNIQUEIDENTIFIER = NULL,
@snapshotDateTimeToDelete DATETIME = NULL,
@xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].DeleteBlob_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].CopyBlob(@DestinationLogicalConnectionName
NVARCHAR(255),
@sourceAccountName NVARCHAR(255),
@sourceContainerName NVARCHAR(4000), @sourceBlobName NVARCHAR(4000),
@sourceLeaseId UNIQUEIDENTIFIER = NULL, @destinationLeaseId UNIQUEIDENTIFIER = NULL,
@destinationContainerName NVARCHAR(4000), @destinationBlobName NVARCHAR(4000),
@xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CopyBlob_Embedded;
GO
CREATE FUNCTION [Azure.Embedded].GetBlobProperties(@LogicalConnectionName NVARCHAR(255),
@container NVARCHAR(4000), @blob NVARCHAR(4000),
@snapshotDateTime DATETIME = NULL,
@xmsclientrequestId NVARCHAR(4000) = NULL)
RETURNS TABLE(
ETag NVARCHAR(255),
BlobSequenceNumber BIGINT,
BlobType NVARCHAR(255),
ContentEncoding NVARCHAR(255),
ContentLanguage NVARCHAR(255),
ContentLength BIGINT,
ContentMD5 NVARCHAR(255),
ContentType NVARCHAR(255),
CopyId UNIQUEIDENTIFIER,
CopySource NVARCHAR(255),
CopyStatus NVARCHAR(255),
CopyStatusDescription NVARCHAR(4000),
CopyCurrentPosition BIGINT,
CopyTotalLength BIGINT,
CopyCompletionTime DATETIME,
LastModified DATETIME,
LeaseDuration NVARCHAR(255), LeaseState NVARCHAR(255), LeaseStatus
NVARCHAR(255),
CacheControl NVARCHAR(255),
[Date] DATETIME,
RequestId UNIQUEIDENTIFIER,
[Version] NVARCHAR(255)
)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].GetBlobProperties_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].UpdateBlobMetadata(
@LogicalConnectionName NVARCHAR(255),
@container NVARCHAR(4000), @blob NVARCHAR(4000),
@attributeList XML,
@leaseId UNIQUEIDENTIFIER = NULL,
@timeoutSeconds INT = 60,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].UpdateBlobMetadata_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].CreateOrReplacePageBlob(
@logicalConnectionName NVARCHAR(255),
@containerName NVARCHAR(255), @blobName NVARCHAR(255),
@maximumSize BIGINT, @leaseId UNIQUEIDENTIFIER = NULL,
@contentType NVARCHAR(255) = 'application/octet-stream', @contentEncoding
NVARCHAR(255) = NULL,
@contentLanguage NVARCHAR(255) = NULL,
@timeout INT = NULL,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].CreateOrReplacePageBlob_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].PutPage(
@logicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000),
@buffer VARBINARY(MAX),
@startPositionBytes INT, @bytesToUpload INT = NULL,
@leaseId UNIQUEIDENTIFIER = NULL,
@contentMD5 NVARCHAR(255) = NULL,
@timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].PutPage_Embedded;
GO
CREATE FUNCTION [Azure.Embedded].PutPageFunction(
@logicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000), @BlobName NVARCHAR(4000),
@buffer VARBINARY(MAX),
@startPositionBytes BIGINT, @bytesToUpload INT = NULL,
@leaseId UNIQUEIDENTIFIER = NULL,
@contentMD5 NVARCHAR(255) = NULL,
@timeoutSeconds INT = 0, @xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS NVARCHAR(MAX)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.AzureBlob].PutPage_Function_Embedded;
GO
-- Queue
CREATE PROCEDURE [Azure.Embedded].CreateQueue(
@logicalConnectionName NVARCHAR(255),
@queueName NVARCHAR(255),
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].CreateQueue_Embedded;
GO
CREATE FUNCTION [Azure.Embedded].ListQueues(
@logicalConnectionName NVARCHAR(255),
@prefix NVARCHAR(255) = NULL,
@includeMetadata BIT = 0,
@timeoutSeconds INT = 60,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(4000), Metadata XML)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].ListQueues_Embedded;
GO
CREATE FUNCTION [Azure.Embedded].ListQueues_Faulty(
@logicalConnectionName NVARCHAR(255),
@prefix NVARCHAR(255) = NULL,
@includeMetadata BIT = 0,
@timeoutSeconds INT = 60,
@xmsclientrequestId NVARCHAR(255) = NULL)
RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(4000), Metadata XML)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].ListQueues_Faulty_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].EnqueueMessage(
@logicalConnectionName NVARCHAR(255),
@queueName NVARCHAR(255),
@xmlMessage XML,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].EnqueueMessage_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].DequeueMessage(
@logicalConnectionName NVARCHAR(255),
@queueName NVARCHAR(255),
@visibilityTimeoutSeconds INT = 10, -- lower time might recover from failure faster.
@timeoutSeconds INT = 60,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].DequeueMessage_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].RetrieveApproximateMessageCount(
@logicalConnectionName NVARCHAR(255),
@queueName NVARCHAR(255),
@timeoutSeconds INT = 60,
@xmsclientrequestId NVARCHAR(255) = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Queue].RetrieveApproximateMessageCount_Embedded;
GO
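Taken together, the queue wrappers support a simple round trip: create a queue, enqueue an XML message, and dequeue it. The sketch below assumes a logical connection named 'MyConnection' has already been registered; the queue name and message body are illustrative.

```sql
EXEC [Azure.Embedded].CreateQueue
    @logicalConnectionName = N'MyConnection',
    @queueName = N'jobs';

EXEC [Azure.Embedded].EnqueueMessage
    @logicalConnectionName = N'MyConnection',
    @queueName = N'jobs',
    @xmlMessage = N'<Job><Id>1</Id></Job>';

-- The dequeued message stays invisible to other consumers for
-- @visibilityTimeoutSeconds; a lower value recovers faster from failures.
EXEC [Azure.Embedded].DequeueMessage
    @logicalConnectionName = N'MyConnection',
    @queueName = N'jobs',
    @visibilityTimeoutSeconds = 10;
```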
-- Table
CREATE PROCEDURE [Azure.Embedded].CreateTable(@LogicalConnectionName NVARCHAR(255),
@TableName NVARCHAR(4000), @xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].CreateTable_Embedded;
GO
CREATE FUNCTION [Azure.Embedded].ListTables(@LogicalConnectionName NVARCHAR(255),
@xmsclientrequestId NVARCHAR(4000) = NULL)
RETURNS TABLE(Name NVARCHAR(4000), Url NVARCHAR(MAX))
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].ListTables_Embedded;
GO
CREATE FUNCTION [Azure.Embedded].QueryTable(@LogicalConnectionName NVARCHAR(255),
@tableName NVARCHAR(4000), @xmsclientrequestId NVARCHAR(4000) = NULL)
RETURNS TABLE(PartitionKey NVARCHAR(4000), RowKey NVARCHAR(4000), [TimeStamp] DATETIME,
Attributes XML)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].QueryTable_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].DeleteEntity(
@LogicalConnectionName NVARCHAR(255),
@TableName NVARCHAR(4000),
@PartitionKey NVARCHAR(4000), @RowKey NVARCHAR(4000),
@xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].DeleteEntity_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].InsertOrReplaceEntity(
@LogicalConnectionName NVARCHAR(255),
@TableName NVARCHAR(4000),
@PartitionKey NVARCHAR(4000), @RowKey NVARCHAR(4000), @AttributeList XML,
@xmsclientrequestId NVARCHAR(4000) = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Table].InsertOrReplaceEntity_Embedded;
GO
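The table wrappers can be exercised the same way. The sketch below creates an Azure table, upserts one entity, and reads it back; the logical connection, table name, keys, and especially the attribute-list XML shape are illustrative only (the exact XML schema is dictated by the CLR library, which in the cluster scripts later in this appendix is produced via the [Azure].ToXml* helpers).

```sql
EXEC [Azure.Embedded].CreateTable
    @LogicalConnectionName = N'MyConnection',
    @TableName = N'Inventory';

EXEC [Azure.Embedded].InsertOrReplaceEntity
    @LogicalConnectionName = N'MyConnection',
    @TableName = N'Inventory',
    @PartitionKey = N'warehouse1',
    @RowKey = N'item42',
    @AttributeList = N'<Record><Quantity>10</Quantity></Record>'; -- illustrative shape

-- Entities come back with their attributes packed in an XML column.
SELECT PartitionKey, RowKey, Attributes
FROM [Azure.Embedded].QueryTable(N'MyConnection', N'Inventory', DEFAULT);
```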
------- General purpose -------
CREATE FUNCTION [Azure].ToXmlDate (@dt DATETIME)
RETURNS NVARCHAR(MAX) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlDate;
GO
CREATE FUNCTION [Azure].ToXmlString (@txt NVARCHAR(MAX))
RETURNS NVARCHAR(MAX) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlString;
GO
CREATE FUNCTION [Azure].ToXmlInt64 (@i BIGINT)
RETURNS NVARCHAR(MAX) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlInt64;
GO
CREATE FUNCTION [Azure].ToXmlDouble (@d FLOAT)
RETURNS NVARCHAR(MAX) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlDouble;
GO
CREATE FUNCTION [Azure].ToXmlBinary(@d VARBINARY(MAX))
RETURNS NVARCHAR(MAX) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlBinary;
GO
CREATE FUNCTION [Azure].ToXmlGuid(@d UNIQUEIDENTIFIER)
RETURNS NVARCHAR(MAX) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlGuid;
GO
CREATE FUNCTION [Azure].ToXmlStatement(@stmt NVARCHAR(MAX))
RETURNS NVARCHAR(MAX) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ToXmlStatement;
GO
CREATE FUNCTION [Azure].ComputeMD5AsBase64(@byteArray VARBINARY(MAX))
RETURNS NVARCHAR(MAX) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].ComputeMD5AsBase64;
GO
CREATE FUNCTION [Azure].GetFileSizeBytes(@fileName NVARCHAR(4000))
RETURNS BIGINT EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].GetFileSizeBytes;
GO
CREATE FUNCTION [Azure].GetFileBlock(
@fileName NVARCHAR(4000),
@offsetBytes BIGINT,
@lengthBytes INT,
@fileShareOption NVARCHAR(255) = 'Read'
)
RETURNS VARBINARY(MAX) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].GetFileBlock;
GO
------- Blob Lease
------ Direct
CREATE PROCEDURE Azure.AcquireBlobFixedLease(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@leaseDuration INT = 60,
@proposedLeaseId UNIQUEIDENTIFIER = NULL,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireBlobFixedLease;
GO
CREATE PROCEDURE Azure.AcquireBlobInfiniteLease(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@proposedLeaseId UNIQUEIDENTIFIER = NULL,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireBlobInfiniteLease;
GO
CREATE PROCEDURE Azure.RenewBlobLease(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].RenewBlobLease;
GO
CREATE PROCEDURE Azure.ChangeBlobLease(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER,
@proposedLeaseId UNIQUEIDENTIFIER,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ChangeBlobLease;
GO
CREATE PROCEDURE Azure.ReleaseBlobLease(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ReleaseBlobLease;
GO
CREATE PROCEDURE Azure.BreakBlobLeaseWithGracePeriod(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@leaseBreakPeriod INT = 15,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakBlobLeaseWithGracePeriod;
GO
CREATE PROCEDURE Azure.BreakBlobLeaseImmediately(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakBlobLeaseImmediately;
GO
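A typical lease lifecycle with the direct procedures is to acquire an infinite lease (proposing your own lease ID so you can refer to it later) and release it when done. The account name, shared key, container, and blob name below are placeholders.

```sql
DECLARE @leaseId UNIQUEIDENTIFIER = NEWID();

-- Acquire a non-expiring lease on the blob, proposing our own lease ID.
EXEC Azure.AcquireBlobInfiniteLease
    @AccountName = N'mystorageaccount',        -- placeholder account
    @SharedKey = N'<storage-shared-key>',      -- placeholder key
    @useHTTPS = 1,
    @ContainerName = N'sqldata',
    @blobName = N'mydb.mdf',
    @proposedLeaseId = @leaseId;

-- ... work against the leased blob here ...

EXEC Azure.ReleaseBlobLease
    @AccountName = N'mystorageaccount',
    @SharedKey = N'<storage-shared-key>',
    @useHTTPS = 1,
    @ContainerName = N'sqldata',
    @blobName = N'mydb.mdf',
    @leaseId = @leaseId;
```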
-- Embedded
CREATE PROCEDURE [Azure.Embedded].AcquireBlobFixedLease(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@leaseDuration INT = 60,
@proposedLeaseId UNIQUEIDENTIFIER = NULL,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireBlobFixedLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].AcquireBlobInfiniteLease(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@proposedLeaseId UNIQUEIDENTIFIER = NULL,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireBlobInfiniteLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].RenewBlobLease(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].RenewBlobLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].ChangeBlobLease(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER,
@proposedLeaseId UNIQUEIDENTIFIER,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ChangeBlobLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].ReleaseBlobLease(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ReleaseBlobLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].BreakBlobLeaseWithGracePeriod(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@leaseBreakPeriod INT = 15,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakBlobLeaseWithGracePeriod_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].BreakBlobLeaseImmediately(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000), @blobName NVARCHAR(4000),
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakBlobLeaseImmediately_Embedded;
GO
------- Container Lease
------ Direct
CREATE PROCEDURE Azure.AcquireContainerFixedLease(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000),
@leaseDuration INT = 60,
@proposedLeaseId UNIQUEIDENTIFIER = NULL,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireContainerFixedLease;
GO
CREATE PROCEDURE Azure.AcquireContainerInfiniteLease(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000),
@proposedLeaseId UNIQUEIDENTIFIER = NULL,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireContainerInfiniteLease;
GO
CREATE PROCEDURE Azure.RenewContainerLease(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].RenewContainerLease;
GO
CREATE PROCEDURE Azure.ChangeContainerLease(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER,
@proposedLeaseId UNIQUEIDENTIFIER,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ChangeContainerLease;
GO
CREATE PROCEDURE Azure.ReleaseContainerLease(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ReleaseContainerLease;
GO
CREATE PROCEDURE Azure.BreakContainerLeaseWithGracePeriod(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000),
@leaseBreakPeriod INT = 15,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakContainerLeaseWithGracePeriod;
GO
CREATE PROCEDURE Azure.BreakContainerLeaseImmediately(
@AccountName NVARCHAR(255), @SharedKey NVARCHAR(255), @useHTTPS bit,
@ContainerName NVARCHAR(4000),
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakContainerLeaseImmediately;
GO
-- Embedded
CREATE PROCEDURE [Azure.Embedded].AcquireContainerFixedLease(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000),
@leaseDuration INT = 60,
@proposedLeaseId UNIQUEIDENTIFIER = NULL,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireContainerFixedLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].AcquireContainerInfiniteLease(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000),
@proposedLeaseId UNIQUEIDENTIFIER = NULL,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].AcquireContainerInfiniteLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].RenewContainerLease(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].RenewContainerLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].ChangeContainerLease(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER,
@proposedLeaseId UNIQUEIDENTIFIER,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ChangeContainerLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].ReleaseContainerLease(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000),
@leaseId UNIQUEIDENTIFIER,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].ReleaseContainerLease_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].BreakContainerLeaseWithGracePeriod(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000),
@leaseBreakPeriod INT = 15,
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakContainerLeaseWithGracePeriod_Embedded;
GO
CREATE PROCEDURE [Azure.Embedded].BreakContainerLeaseImmediately(
@LogicalConnectionName NVARCHAR(255),
@ContainerName NVARCHAR(4000),
@timeoutSeconds INT = 0,
@xmsclientrequestId UNIQUEIDENTIFIER = NULL)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.BlobStorage].BreakContainerLeaseImmediately_Embedded;
GO
CREATE FUNCTION [Azure].GetContainerFromUri(
@resourceUri NVARCHAR(4000))
RETURNS NVARCHAR(255) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].GetContainerFromUri;
GO
CREATE FUNCTION [Azure].GenerateBlobSharedAccessSignatureURI(
@resourceUri NVARCHAR(4000),
@sharedKey NVARCHAR(4000),
@permissions NVARCHAR(8),
@resourceType NVARCHAR(4),
@validityStart DATETIME,
@validityEnd DATETIME,
@identifier NVARCHAR(4000))
RETURNS NVARCHAR(255) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].GenerateBlobSharedAccessSignatureURI;
GO
CREATE FUNCTION [Azure].GenerateDirectBlobSharedAccessSignatureURI(
@resourceUri NVARCHAR(4000),
@sharedKey NVARCHAR(4000),
@permissions NVARCHAR(8),
@resourceType NVARCHAR(4),
@validityStart DATETIME,
@validityEnd DATETIME)
RETURNS NVARCHAR(255) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].GenerateDirectBlobSharedAccessSignatureURI;
GO
CREATE FUNCTION [Azure].GeneratePolicyBlobSharedAccessSignatureURI(
@resourceUri NVARCHAR(4000),
@sharedKey NVARCHAR(4000),
@resourceType NVARCHAR(4),
@identifier NVARCHAR(4000))
RETURNS NVARCHAR(255) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Utils].GeneratePolicyBlobSharedAccessSignatureURI;
GO
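As an illustration, a direct shared access signature for read-only access to a single blob might be generated as below. The resource URI and shared key are placeholders; the permission code 'r' (read) and resource type 'b' (blob) follow the usual Azure SAS conventions, but check the library's documentation for the exact values it expects.

```sql
-- Generate a read-only SAS URI for one blob, valid for one hour from now.
DECLARE @sas NVARCHAR(255);
SELECT @sas = [Azure].GenerateDirectBlobSharedAccessSignatureURI(
    N'https://mystorageaccount.blob.core.windows.net/sqldata/mydb.mdf', -- placeholder URI
    N'<storage-shared-key>',          -- placeholder key
    N'r',                             -- read permission
    N'b',                             -- resource type: blob
    GETUTCDATE(),                     -- validity start
    DATEADD(HOUR, 1, GETUTCDATE())    -- validity end
);
PRINT @sas;
```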
-------------------- Management --------------------
CREATE FUNCTION [Azure.Management].GetServices(
@certificateThumbprint NVARCHAR(255),
@subscriptionId UNIQUEIDENTIFIER)
RETURNS TABLE(
ServiceName NVARCHAR(4000),
Url NVARCHAR(4000),
DefaultWinRmCertificateThumbprint NVARCHAR(255),
AffinityGroup NVARCHAR(4000),
DateCreated DATETIME,
DateLastModified DATETIME,
[Description] NVARCHAR(MAX),
Label NVARCHAR(4000),
Location NVARCHAR(4000),
[Status] NVARCHAR(255)
)
AS EXTERNAL NAME [ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Management].GetServices;
GO
CREATE FUNCTION [Azure.Management].GetDeploymentsPersistentVMRolesWithInputEndpoints(
@certificateThumbprint NVARCHAR(255),
@subscriptionId UNIQUEIDENTIFIER,
@serviceName NVARCHAR(255),
@deploymentSlots NVARCHAR(255))
RETURNS TABLE(
Name NVARCHAR(4000),
DeploymentSlot NVARCHAR(255),
PrivateID NVARCHAR(255),
[Status] NVARCHAR(255),
Label NVARCHAR(4000),
Url NVARCHAR(4000),
Configuration NVARCHAR(MAX),
UpgradeDomainCount INT,
VMName NVARCHAR(4000),
OsVersion NVARCHAR(4000),
RoleSize NVARCHAR(255),
DefaultWinRmCertificateThumbprint NVARCHAR(4000),
EndpointName NVARCHAR(4000),
LocalPort INT,
Port INT,
Protocol NVARCHAR(255),
Vip NVARCHAR(255)
)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Management].GetDeploymentsPersistentVMRolesWithInputEndpoints;
GO
CREATE PROCEDURE [Azure.Management].AddInputEndpointToPersistentVM(
@certificateThumbprint NVARCHAR(255),
@subscriptionId UNIQUEIDENTIFIER,
@serviceName NVARCHAR(255),
@deploymentSlots NVARCHAR(255),
@vmName NVARCHAR(255),
@EndpointName NVARCHAR(255),
@LocalPort INT,
@EnableDirectServerReturn BIT,
@Port INT,
@Protocol NVARCHAR(255),
@Vip NVARCHAR(255),
@Blocking BIT = 0
)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Management].AddInputEndpointToPersistentVM;
GO
CREATE PROCEDURE [Azure.Management].RemoveEndpointFromPersistentVM(
@certificateThumbprint NVARCHAR(255),
@subscriptionId UNIQUEIDENTIFIER,
@serviceName NVARCHAR(255),
@deploymentSlots NVARCHAR(255),
@vmName NVARCHAR(255),
@EndpointName NVARCHAR(255),
@Blocking BIT = 0
)
AS EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Management].RemoveEndpointFromPersistentVM;
GO
CREATE FUNCTION [Azure.Management].GetOperationStatus (
@certificateThumbprint NVARCHAR(255),
@subscriptionId UNIQUEIDENTIFIER,
@operationId UNIQUEIDENTIFIER
)
RETURNS NVARCHAR(255) EXTERNAL NAME
[ITPCfSQL.Azure.CLR].[ITPCfSQL.Azure.CLR.Management].GetOperationStatus;
GO
E. Installing SQL Management Objects
The proposed cluster implementation is based on the Microsoft SQL Server to Windows Azure helper
library. Install the library as described in section D, and then execute all of the scripts in the
remainder of this section.
Schema
All the cluster objects are in the HAOnAzure schema.
USE [HAOnAzure]
GO
CREATE SCHEMA [HAOnAzure];
GO
Metadata Tables
USE [HaOnAzure]
GO
/****** Object: Table [HAOnAzure].[Credential]    Script Date: 12/20/2013 10:12:24 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [HAOnAzure].[Credential](
[AccountName] [nvarchar](255) NOT NULL,
[SharedKey] [nvarchar](4000) NOT NULL,
[IsMaster] BIT NOT NULL DEFAULT(0),
PRIMARY KEY CLUSTERED
(
[AccountName] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
The other tables are Windows Azure tables and are created on demand in the master account.
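Before any of the cluster procedures can run, the master storage account must be registered in the [HAOnAzure].[Credential] table. A minimal sketch, with a placeholder account name and key:

```sql
-- Register the master storage account that hosts the cluster metadata tables.
-- The account name and shared key below are placeholders.
INSERT INTO [HAOnAzure].[Credential] (AccountName, SharedKey, IsMaster)
VALUES (N'mystorageaccount', N'<storage-shared-key>', 1);
```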
Stored Procedures
-------------------------------------------------------------------------------
-- 04-SP-HAOnAzure.PlaceDBOnHA.sql
-------------------------------------------------------------------------------
USE [HaOnAzure]
GO
/****** Object: StoredProcedure [HAOnAzure].[PlaceDBOnHA]    Script Date: 12/20/2013 10:12:24 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [HAOnAzure].[PlaceDBOnHA] @database_id INT
AS
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey
FROM [HAOnAzure].[Credential]
WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
THROW 53000, 'Default credential not found. Add it and try again.', 1;
--------------- Check for Azure table existence -----------------
IF NOT EXISTS(SELECT * FROM [Azure].ListTables(@AccountName, @SharedKey, 1, NULL)
WHERE Name = 'HaOnAzureDatabase')
BEGIN
PRINT 'Windows Azure table not found... creating'
EXEC [Azure].CreateTable @AccountName, @SharedKey, 1, 'HaOnAzureDatabase';
PRINT 'Windows Azure table HaOnAzureDatabase created in account ' +
@AccountName
END
-------------- End Check for Azure table existence ---------------
DECLARE @fn SYSNAME;
IF(EXISTS(
SELECT
Attributes.value('(//AccountName)[1]', 'nvarchar(260)') AS
AccountName
, Attributes.value('(//DatabaseName)[1]', 'nvarchar(260)') AS
DatabaseName
, Attributes.value('(//name)[1]', 'sysname') AS [name]
, Attributes.value('(//type)[1]', 'tinyint') AS [type]
, Attributes.value('(//physical_name)[1]', 'nvarchar(260)') AS
physical_name
FROM [Azure].QueryTable(
@AccountName, @SharedKey, 1,
'HaOnAzureDatabase', NULL)
WHERE PartitionKey = DB_NAME(@database_id)
))
BEGIN
RAISERROR('Database already added', 18, 1);
END
ELSE BEGIN
SELECT TOP 1 @fn = physical_name FROM sys.master_files
WHERE database_id = @database_id
IF @fn IS NULL
BEGIN
RAISERROR('Database not found', 18, 1);
END
ELSE BEGIN
IF LOWER(LEFT(@fn, 4)) <> 'http'
BEGIN
RAISERROR('Database must be SQL Server Data Files in Windows Azure', 18, 1);
END
ELSE BEGIN
DECLARE @file_id INT;
DECLARE @name SYSNAME;
DECLARE @databaseName SYSNAME;
DECLARE @stmt NVARCHAR(4000);
DECLARE @xml XML;
DECLARE curs CURSOR FAST_FORWARD FOR
SELECT
database_id,
[file_id],
[name],
DB_NAME(database_id)
FROM sys.master_files WHERE database_id = @database_id
OPEN curs;
FETCH NEXT FROM curs INTO @database_id, @file_id, @name,
@databaseName
WHILE @@FETCH_STATUS = 0
BEGIN
SET @stmt = N'SELECT
SUBSTRING(physical_name, 9,
CHARINDEX(''.'', physical_name, 9)-9) AS ''AccountName'',
DB_NAME( ' +
CONVERT(NVARCHAR, @database_id) +
N') AS ''DatabaseName'',
name,
[type],
physical_name
FROM sys.master_files
WHERE database_id = ' +
CONVERT(NVARCHAR, @database_id) +
N' AND file_id = ' +
CONVERT(NVARCHAR, @file_id) + N';';
--PRINT @stmt;
SELECT @xml = [Azure].[ToXmlStatement](@stmt);
--SELECT @xml;
SELECT @xml = t.c.query('.') FROM
@xml.nodes('/ResultSet/Record') AS t(c);
PRINT N'Adding datafile ' + @name + N' of database ' +
@databaseName + N'.'
EXEC [Azure].[InsertOrReplaceEntity]
@AccountName, @SharedKey, 1,
'HaOnAzureDatabase',
@databaseName,
@name,
@xml;
FETCH NEXT FROM curs INTO @database_id, @file_id,
@name, @databaseName
END
CLOSE curs;
DEALLOCATE curs;
END
END
END
GO
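Once the procedure is in place, a database whose files already reside on Azure Blob storage can be registered in the HA metadata table with a call like the following (the database name is illustrative):

```sql
-- Register an existing "data files in Azure" database with the cluster metadata.
DECLARE @database_id INT = DB_ID(N'MyHaDatabase'); -- illustrative name
EXEC [HAOnAzure].[PlaceDBOnHA] @database_id = @database_id;
```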
-------------------------------------------------------------------------------
-- 05-SP-HAOnAzure.ScanForDBAndMountEndpoints.sql
-------------------------------------------------------------------------------
USE [HaOnAzure]
GO
/****** Object: StoredProcedure [HAOnAzure].[ScanForDBAndMountEndpoints]    Script Date: 12/20/2013 10:12:24 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [HAOnAzure].[ScanForDBAndMountEndpoints] AS
SET NOCOUNT ON;
DECLARE @databaseName SYSNAME;
DECLARE @serverName SYSNAME;
SET @serverName = @@SERVERNAME;
IF CHARINDEX('\', @serverName) > 0
BEGIN
SET @serverName = SUBSTRING(@serverName, 0, CHARINDEX('\', @serverName));
END
SET @serverName = LOWER(@serverName);
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey
FROM [HAOnAzure].[Credential]
WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
THROW 53000, 'Default credential not found. Add it and try again.', 1;
-- Load Azure endpoint table in table variable
DECLARE @endpoint TABLE(
[CertificateThumbprint] NVARCHAR(4000) NOT NULL,
[SubscriptionId] UNIQUEIDENTIFIER NOT NULL,
[ServiceName] NVARCHAR(4000) NOT NULL,
[DeploymentSlot] NVARCHAR(4000) NOT NULL,
[DatabaseName] SYSNAME NOT NULL,
[EndPointName] NVARCHAR(255) NOT NULL,
[Port] INT NOT NULL,
[LocalPort] INT NOT NULL,
[Protocol] NVARCHAR(255) NOT NULL,
[VIP] NVARCHAR(255) NOT NULL)
INSERT INTO @endpoint
SELECT
Attributes.value('(//CertificateThumbprint)[1]', 'nvarchar(4000)') AS
[CertificateThumbprint]
, Attributes.value('(//SubscriptionId)[1]', 'UNIQUEIDENTIFIER') AS
[SubscriptionId]
, Attributes.value('(//ServiceName)[1]', 'nvarchar(4000)') AS [ServiceName]
, Attributes.value('(//DeploymentSlot)[1]', 'nvarchar(4000)') AS [DeploymentSlot]
, PartitionKey AS [DatabaseName]
, RowKey AS [EndPointName]
, Attributes.value('(//Port)[1]', 'INT') AS [Port]
, Attributes.value('(//LocalPort)[1]', 'INT') AS [LocalPort]
, Attributes.value('(//Protocol)[1]', 'nvarchar(255)') AS [Protocol]
, Attributes.value('(//VIP)[1]', 'nvarchar(255)') AS [VIP]
FROM [Azure].QueryTable(
@AccountName, @SharedKey, 1,
'HaOnAzureEndpoint', NULL);
DECLARE curs CURSOR FAST_FORWARD FOR
SELECT
[DatabaseName]
FROM
@endpoint
GROUP BY
[DatabaseName];
OPEN curs;
FETCH NEXT FROM curs INTO @databaseName
WHILE @@FETCH_STATUS = 0
BEGIN
IF(EXISTS(SELECT * FROM sys.databases WHERE name = @databaseName))
BEGIN
PRINT 'Database ' + CONVERT(VARCHAR(255), @databaseName) + ' is owned by
us. Checking the listener...';
DECLARE @CertificateThumbprint NVARCHAR(4000);
DECLARE @SubscriptionId UNIQUEIDENTIFIER;
DECLARE @ServiceName NVARCHAR(4000);
DECLARE @DeploymentSlot NVARCHAR(4000);
DECLARE @EndPointName NVARCHAR(255);
DECLARE @Port INT;
DECLARE @LocalPort INT;
DECLARE @Protocol NVARCHAR(255);
DECLARE @VIP NVARCHAR(255);
DECLARE @actualVM NVARCHAR(255) = NULL;
SELECT TOP 1
@CertificateThumbprint = CertificateThumbprint,
@SubscriptionId = SubscriptionId,
@ServiceName = ServiceName,
@DeploymentSlot = DeploymentSlot,
@EndPointName = EndPointName,
@Port = Port,
@LocalPort = LocalPort,
@Protocol = Protocol,
@VIP = VIP
FROM @endpoint
WHERE [DatabaseName] = @databaseName
SELECT
@actualVM = VMName FROM
[Azure.Management].GetDeploymentsPersistentVMRolesWithInputEndpoints
(@CertificateThumbprint, @SubscriptionId, @ServiceName,
@DeploymentSlot)
WHERE
EndpointName = @EndPointName;
PRINT 'Endpoint ' + CONVERT(VARCHAR(255), @EndPointName) + ' is owned by '
+ CONVERT(VARCHAR(255), ISNULL(@actualVM, '<nobody>')) + '.'
IF ISNULL(@actualVM, '') <> @serverName
BEGIN
PRINT 'Endpoint ' + CONVERT(VARCHAR(255), @EndPointName) + ' is not
owned by the local machine (' + CONVERT(VARCHAR(255), @serverName) + ').'
IF @actualVM IS NOT NULL
BEGIN
PRINT 'Bringing down endpoint...';
EXEC [Azure.Management].RemoveEndpointFromPersistentVM
@CertificateThumbprint,
@SubscriptionId,
@ServiceName,
@DeploymentSlot,
@actualVM,
@EndPointName,
1;
PRINT 'Endpoint ' + CONVERT(VARCHAR(255), @EndPointName) + ' unassigned.';
END
ELSE BEGIN
PRINT 'Endpoint ' + CONVERT(VARCHAR(255), @EndPointName) + ' unassigned (not owned by anybody).';
END
SET @actualVM = @serverName;
PRINT 'Mounting ' + CONVERT(VARCHAR(255), @EndPointName) + ' as local (' + @actualVM + ') endpoint (Port=' + CONVERT(VARCHAR(255), @Port) + ', LocalPort=' + CONVERT(VARCHAR(255), @LocalPort) + ')...';
EXEC [Azure.Management].AddInputEndpointToPersistentVM
@CertificateThumbprint,
@SubscriptionId,
@ServiceName,
@DeploymentSlot,
@actualVM,
@EndPointName,
@LocalPort,
0,
@Port,
@Protocol,
@VIP,
1;
PRINT 'Mounting done.';
END ELSE BEGIN
PRINT 'Endpoint ' + CONVERT(VARCHAR(255), @EndPointName) + ' is owned by the local machine (' + CONVERT(VARCHAR(255), @serverName) + '). Nothing to do.';
END
END
ELSE BEGIN
PRINT 'Database ' + CONVERT(VARCHAR(255), @databaseName) + ' is owned by someone else. Ignore.';
END
FETCH NEXT FROM curs INTO @databaseName
END
CLOSE curs
DEALLOCATE curs
GO
----------------------------------------------------------- 06-SP-HAOnAzure.ScanForDetachedDBsAndAttach.sql -----------------------------------------------------------
USE [HaOnAzure]
GO
/****** Object: StoredProcedure [HAOnAzure].[ScanForDetachedDBsAndAttach]    Script Date: 12/20/2013 10:12:24 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [HAOnAzure].[ScanForDetachedDBsAndAttach] AS
SET NOCOUNT ON;
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey
FROM [HAOnAzure].[Credential]
WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
THROW 53000, 'Default credential not found. Add it and try again.', 1;
-- Load the [Azure].QueryTable results into a table variable
DECLARE @database TABLE(
[AccountName] NVARCHAR(260) NOT NULL,
[DatabaseName] SYSNAME NOT NULL,
[File] SYSNAME NOT NULL,
[type] TINYINT NOT NULL,
[FileName] NVARCHAR(260) NOT NULL);
INSERT INTO @database
SELECT
Attributes.value('(//AccountName)[1]', 'nvarchar(260)') AS AccountName
, Attributes.value('(//DatabaseName)[1]', 'SYSNAME') AS DatabaseName
, Attributes.value('(//name)[1]', 'SYSNAME') AS [name]
, Attributes.value('(//type)[1]', 'TINYINT') AS [type]
, Attributes.value('(//physical_name)[1]', 'nvarchar(260)') AS physical_name
FROM [Azure].QueryTable(
@AccountName, @SharedKey, 1,
'HaOnAzureDatabase', NULL);
DECLARE @container NVARCHAR(255);
DECLARE @tbl TABLE
(
Url NVARCHAR(4000),
LeaseStatus NVARCHAR(255)
);
PRINT 'Retrieving blob status (by container)...'
DECLARE curs CURSOR FAST_FORWARD FOR
SELECT
CRED.AccountName, CRED.SharedKey,
[Azure].GetContainerFromUri([FileName]) AS 'Container'
FROM @database DB
INNER JOIN [HAOnAzure].[Credential] CRED ON CRED.AccountName = DB.AccountName
GROUP BY
CRED.AccountName,
CRED.SharedKey,
[Azure].GetContainerFromUri([FileName])
OPEN curs;
FETCH NEXT FROM curs INTO @AccountName, @SharedKey, @container
WHILE @@FETCH_STATUS = 0
BEGIN
-- Now we should check for lease!
DECLARE @cnt INT
INSERT INTO @tbl
SELECT Url, LeaseStatus FROM [Azure].ListBlobs(
@AccountName, @SharedKey, 1,
@container, 0,0,0,0, NULL)
FETCH NEXT FROM curs INTO @AccountName, @SharedKey, @container
END
CLOSE curs
DEALLOCATE curs
PRINT 'Retrieving blob status (by container) completed.'
PRINT 'Looping in unlocked blobs...'
DECLARE @dbName SYSNAME;
DECLARE @cntUnlocked INT
DECLARE @totalFiles INT
DECLARE curs CURSOR FAST_FORWARD FOR
SELECT [DatabaseName], COUNT(*) AS 'Unlocked count' FROM @tbl BLOBS
INNER JOIN @database DB ON DB.[FileName] = BLOBS.Url
WHERE [LeaseStatus] = 'Unlocked'
GROUP BY [DatabaseName];
OPEN curs;
FETCH NEXT FROM curs INTO @dbName, @cntUnlocked
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @totalFiles = COUNT(*) FROM @database
WHERE
[DatabaseName] = @dbName
GROUP BY [DatabaseName]
PRINT 'Database ' + CONVERT(VARCHAR(255), @dbName) + ' has (' +
CONVERT(VARCHAR(255), @cntUnlocked) + '/' +
CONVERT(VARCHAR(255), @totalFiles) + ') unlocked files.'
IF @cntUnlocked = @totalFiles
BEGIN
-------------------- Attach ---------------
PRINT 'Trying to attach database ' + @DBName + '...';
DECLARE @txt NVARCHAR(MAX);
SET @txt= N'CREATE DATABASE ' + QUOTENAME(@DBName) + ' ON ';
DECLARE @fileName NVARCHAR(4000);
DECLARE @first BIT = 0;
--- Datafile
DECLARE cDataFiles CURSOR FAST_FORWARD FOR
SELECT [FileName] FROM @database WHERE [type] = 0 AND DatabaseName =
@DBName;
OPEN cDataFiles;
FETCH NEXT FROM cDataFiles INTO @fileName
WHILE @@FETCH_STATUS = 0
BEGIN
IF @first = 0
BEGIN
SET @first = 1;
END ELSE BEGIN
SET @txt += ', ';
END
SET @txt += '(FILENAME = ''' + @fileName + ''')'
FETCH NEXT FROM cDataFiles INTO @fileName
END
CLOSE cDataFiles
DEALLOCATE cDataFiles
--TLog
SET @txt += ' LOG ON ';
SET @first = 0;
DECLARE cTlog CURSOR FAST_FORWARD FOR
SELECT [FileName] FROM @database WHERE [type]=1 AND DatabaseName = @DBName;
OPEN cTlog;
FETCH NEXT FROM cTlog INTO @fileName
WHILE @@FETCH_STATUS = 0
BEGIN
IF @first = 0
BEGIN
SET @first = 1;
END ELSE BEGIN
SET @txt += ', ';
END
SET @txt += '(FILENAME = ''' + @fileName + ''')'
FETCH NEXT FROM cTlog INTO @fileName
END
CLOSE cTlog
DEALLOCATE cTlog
SET @txt += ' FOR ATTACH;';
PRINT 'Will call this statement "' + @txt + '"';
EXEC sp_executeSQL @txt;
IF(EXISTS(SELECT * FROM sys.databases WHERE [name] = @DBName))
PRINT 'Attach successful!';
ELSE
PRINT 'Attach failed';
END
FETCH NEXT FROM curs INTO @dbName, @cntUnlocked
END
CLOSE curs
DEALLOCATE curs
PRINT 'Looping in unlocked blobs completed.';
GO
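For reference, the dynamic statement assembled by the two cursor loops above takes roughly the following shape. The database, account, container, and file names are hypothetical placeholders; the actual URLs come from the `physical_name` values stored in the Windows Azure metadata table.

```sql
-- Illustrative output of the generated @txt for a database with one
-- data file and one log file stored as Windows Azure page blobs.
CREATE DATABASE [MyHADatabase] ON
    (FILENAME = 'https://myaccount.blob.core.windows.net/mycontainer/MyHADatabase_data.mdf')
LOG ON
    (FILENAME = 'https://myaccount.blob.core.windows.net/mycontainer/MyHADatabase_log.ldf')
FOR ATTACH;
```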
----------------------------------------------------------- 07-SP.HAOnAzure.UpsertAzureAccountCredentials.sql -----------------------------------------------------------
USE [HaOnAzure]
GO
/****** Object: StoredProcedure [HAOnAzure].[UpsertAzureAccountCredentials]    Script Date: 12/20/2013 10:12:24 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [HAOnAzure].[UpsertAzureAccountCredentials]
@account NVARCHAR(255),
@secret NVARCHAR(4000),
@isMaster BIT
AS
MERGE [HAOnAzure].[Credential] AS target
USING (VALUES(@account, @secret, @isMaster)) AS src(AccountName, SharedKey,
IsMaster)
ON
target.AccountName = src.AccountName
WHEN MATCHED THEN
UPDATE SET SharedKey = src.SharedKey, IsMaster = src.IsMaster
WHEN NOT MATCHED THEN
INSERT (AccountName, SharedKey, IsMaster) VALUES(src.AccountName,
src.SharedKey, src.IsMaster);
GO
----------------------------------------------------------- 08-SP.HaOnAzure.InsertOrUpdateEndpointOnHA.sql -----------------------------------------------------------
CREATE PROCEDURE [HaOnAzure].InsertOrUpdateEndpointOnHA
@CertificateThumbprint NVARCHAR(255),
@SubscriptionId UNIQUEIDENTIFIER,
@ServiceName NVARCHAR(255),
@DatabaseName SYSNAME,
@EndPointName NVARCHAR(255),
@Port INT,
@VIP NVARCHAR(64),
@LocalPort INT = 1433,
@DeploymentSlot NVARCHAR(255) = 'Production',
@Protocol NVARCHAR(64) = 'tcp'
AS
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey
FROM [HAOnAzure].[Credential]
WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
THROW 53000, 'Default credential not found. Add it and try again.', 1;
--------------- Check for Azure table existence ------------------
IF NOT EXISTS(SELECT * FROM [Azure].ListTables(@AccountName, @SharedKey, 1, NULL)
WHERE Name = 'HaOnAzureEndpoint')
BEGIN
PRINT 'Windows Azure table not found... creating'
EXEC [Azure].CreateTable @AccountName, @SharedKey, 1, 'HaOnAzureEndpoint';
PRINT 'Windows Azure table HaOnAzureEndpoint created in account ' +
@AccountName
END
-------------- End Check for Azure table existence ---------------
DECLARE @xml XML;
SET @xml =
N'<row> '
+ N'<CertificateThumbprint>' +
[Azure].[ToXmlString](@CertificateThumbprint) + N'</CertificateThumbprint>'
+ N'<SubscriptionId>' + [Azure].[ToXmlString](@SubscriptionId) +
N'</SubscriptionId>'
+ N'<ServiceName>' + [Azure].[ToXmlString](@ServiceName) +
N'</ServiceName>'
+ N'<Port>' + [Azure].[ToXmlInt64](@Port) + N'</Port>'
+ N'<VIP>' + [Azure].[ToXmlString](@VIP) + N'</VIP>'
+ N'<LocalPort>' + [Azure].[ToXmlInt64](@LocalPort) + N'</LocalPort>'
+ N'<DeploymentSlot>' + [Azure].[ToXmlString](@DeploymentSlot) +
N'</DeploymentSlot>'
+ N'<Protocol>' + [Azure].[ToXmlString](@Protocol) + N'</Protocol>'
+ N'</row>';
--PRINT @stmt;
SELECT @xml;
EXEC [Azure].[InsertOrReplaceEntity]
@AccountName, @SharedKey, 1,
'HaOnAzureEndpoint',
@DatabaseName,
@EndPointName,
@xml;
GO
----------------------------------------------------------- 09-SP.HaOnAzure.RemoveDBFromHA.sql -----------------------------------------------------------
CREATE PROCEDURE [HaOnAzure].[RemoveDBFromHA]
@dbName SYSNAME
AS
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
DECLARE @PartitionKey NVARCHAR(260);
DECLARE @RowKey NVARCHAR(260);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey
FROM [HAOnAzure].[Credential]
WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
THROW 53000, 'Default credential not found. Add it and try again.', 1;
DECLARE curs CURSOR FAST_FORWARD FOR
SELECT
PartitionKey,
RowKey
FROM [Azure].QueryTable(
@AccountName, @SharedKey, 1,
'HaOnAzureDatabase', NULL)
WHERE
PartitionKey = @dbName;
OPEN curs;
FETCH NEXT FROM curs INTO @PartitionKey, @RowKey
WHILE @@FETCH_STATUS = 0
BEGIN
--PRINT @PartitionKey;
EXEC [Azure].[DeleteEntity]
@AccountName, @SharedKey, 1,
'HaOnAzureDatabase',
@PartitionKey, @RowKey;
FETCH NEXT FROM curs INTO @PartitionKey, @RowKey
END
CLOSE curs;
DEALLOCATE curs;
GO
----------------------------------------------------------- 10-SP.HaOnAzure.RemoveEndpointFromHA.sql -----------------------------------------------------------
CREATE PROCEDURE [HaOnAzure].[RemoveEndpointFromHA]
@dbName SYSNAME,
@endpointName NVARCHAR(260)
AS
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
DECLARE @PartitionKey NVARCHAR(260);
DECLARE @RowKey NVARCHAR(260);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey
FROM [HAOnAzure].[Credential]
WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
THROW 53000, 'Default credential not found. Add it and try again.', 1;
EXEC [Azure].[DeleteEntity]
@AccountName, @SharedKey, 1,
'HaOnAzureEndpoint',
@dbName, @endpointName;
GO
----------------------------------------------------------- 11-SP.HaOnAzure.ShowEndpoints.sql -----------------------------------------------------------
CREATE PROCEDURE [HaOnAzure].[ShowEndpoints] AS
SET NOCOUNT ON;
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey
FROM [HAOnAzure].[Credential]
WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
THROW 53000, 'Default credential not found. Add it and try again.', 1;
SELECT
Attributes.value('(//CertificateThumbprint)[1]', 'nvarchar(4000)') AS
[CertificateThumbprint]
, Attributes.value('(//SubscriptionId)[1]', 'UNIQUEIDENTIFIER') AS
[SubscriptionId]
, Attributes.value('(//ServiceName)[1]', 'nvarchar(4000)') AS [ServiceName]
, Attributes.value('(//DeploymentSlot)[1]', 'nvarchar(4000)') AS [DeploymentSlot]
, PartitionKey AS [DatabaseName]
, RowKey AS [EndPointName]
, Attributes.value('(//Port)[1]', 'INT') AS [Port]
, Attributes.value('(//LocalPort)[1]', 'INT') AS [LocalPort]
, Attributes.value('(//Protocol)[1]', 'nvarchar(255)') AS [Protocol]
, Attributes.value('(//VIP)[1]', 'nvarchar(255)') AS [VIP]
FROM [Azure].QueryTable(
@AccountName, @SharedKey, 1,
'HaOnAzureEndpoint', NULL);
GO
----------------------------------------------------------- 12-SP.HAOnAzure.ShowDatabasesOnHA.sql -----------------------------------------------------------
USE [HaOnAzure]
GO
/****** Object: StoredProcedure [HAOnAzure].[ShowDatabasesOnHA]    Script Date: 12/27/2013 2:44:19 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [HAOnAzure].[ShowDatabasesOnHA] AS
SET NOCOUNT ON;
DECLARE @AccountName NVARCHAR(255);
DECLARE @SharedKey NVARCHAR(4000);
SELECT TOP 1 @AccountName = AccountName, @SharedKey = SharedKey
FROM [HAOnAzure].[Credential]
WHERE IsMaster = 1;
IF @AccountName IS NULL OR @SharedKey IS NULL
THROW 53000, 'Default credential not found. Add it and try again.', 1;
SELECT
Attributes.value('(//AccountName)[1]', 'nvarchar(260)') AS AccountName
, Attributes.value('(//DatabaseName)[1]', 'nvarchar(260)') AS DatabaseName
, Attributes.value('(//name)[1]', 'sysname') AS [name]
, Attributes.value('(//type)[1]', 'tinyint') AS [type]
, Attributes.value('(//physical_name)[1]', 'nvarchar(260)') AS physical_name
FROM [Azure].QueryTable(
@AccountName, @SharedKey, 1,
'HaOnAzureDatabase', NULL);
GO
G. Stored procedure reference
1) [HAOnAzure].[PlaceDBOnHA]
Description
This stored procedure adds the mandatory metadata to the Windows Azure database table. After a
database is in that table, all cluster instances monitor its lease status. If it is unlocked, the instances try to
attach it as a local database.
Parameters
@database_id (INT): The database ID of the database to add. The database ID can be found in the sys.databases catalog view. Only SQL Server Data Files in Windows Azure databases can be added.
Returns
Nothing.
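As a sketch of typical usage (the database name below is hypothetical), resolve the database name to its ID and pass it in:

```sql
-- Add an existing SQL Server Data Files in Windows Azure database to
-- the cluster metadata table; DB_ID maps the name to its database ID.
DECLARE @id INT = DB_ID(N'MyHADatabase');
EXEC [HAOnAzure].[PlaceDBOnHA] @database_id = @id;
```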
2) [HAOnAzure].[ScanForDBAndMountEndpoints]
Description
This stored procedure scans all the SQL Server Data Files in Windows Azure databases attached to the
local instance and in the Windows Azure database table. For each database found, the stored procedure
checks to see whether the corresponding endpoint is mapped to the local instance. If not, the procedure
maps the endpoint to the local instance, bringing down the preexisting map if needed.
Parameters
Nothing.
Returns
Nothing.
Remarks
A master credential must exist in the credential table. If none exists, the procedure throws an error. To
add the required master credential, use the stored procedure
[HAOnAzure].[UpsertAzureAccountCredentials].
3) [HAOnAzure].[ScanForDetachedDBsAndAttach]
Description
This stored procedure scans all the databases in the Windows Azure database table for unlocked blobs.
If a database is found with all its blobs unlocked, the procedure attempts to attach it as a local database.
Parameters
Nothing.
Returns
Nothing.
Remarks
A master credential must exist in the credential table. If none exists, the procedure throws an error. To add the required master credential, use the stored procedure [HAOnAzure].[UpsertAzureAccountCredentials].
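As a sketch of how a node would run its failover pass (scheduling, for example through a SQL Server Agent job, is up to you), the two scan procedures are typically invoked together:

```sql
-- Attach any cluster database whose blobs are all unlocked, which
-- indicates that the previous owner released (or lost) its blob leases.
EXEC [HAOnAzure].[ScanForDetachedDBsAndAttach];

-- Then re-point the public endpoints at whichever node now owns
-- the databases.
EXEC [HAOnAzure].[ScanForDBAndMountEndpoints];
```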
4) [HAOnAzure].[UpsertAzureAccountCredentials]
Description
This stored procedure adds or updates the Windows Azure storage credentials. You can add as many credentials as you need to support multiple accounts. Exactly one credential must be marked as master; the framework uses the master account for the shared Windows Azure metadata table.
Parameters
@account (NVARCHAR(255)): The Windows Azure account name.
@secret (NVARCHAR(4000)): One of the shared keys of the Windows Azure credential.
@isMaster (BIT): Set this to 1 for the master account. Only one master account is supported at a time.
Returns
Nothing.
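A minimal usage sketch (the account name is hypothetical, and the shared key placeholder must be replaced with a real key):

```sql
-- Register the master storage account used for the shared metadata table.
EXEC [HAOnAzure].[UpsertAzureAccountCredentials]
    @account  = N'mystorageaccount',
    @secret   = N'<storage-account-shared-key>',
    @isMaster = 1;
```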
5) [HaOnAzure].[InsertOrUpdateEndpointOnHA]
Description
This stored procedure adds or updates an endpoint mapping.
Parameters
@CertificateThumbprint (NVARCHAR(255)): The certificate thumbprint that identifies the Windows Azure management certificate. For more information, see "Windows Azure management certificate" earlier in this document.
@SubscriptionId (UNIQUEIDENTIFIER): The Windows Azure subscription ID. For more information, see the remarks section.
@ServiceName (NVARCHAR(255)): The Windows Azure cloud service name. This is also the DNS name of your cluster. For more information, see "Virtual machines in the same Windows Azure cloud service" earlier in this document.
@DatabaseName (SYSNAME): The database name. One database can have one endpoint.
@EndPointName (NVARCHAR(255)): The endpoint name. Any unique name can be used; however, standard Windows Azure endpoint naming constraints apply.
@Port (INT): The remote port (the port to connect to, from the client perspective). Each port must be unique and available (not used by other services). For more information, see "Endpoint table" earlier in this document.
@VIP (NVARCHAR(64)): The VIP of the cloud service.
@LocalPort (INT): The TCP port of SQL Server. This can be shared among different external ports. For more information, see "Endpoint table" earlier in this document.
@DeploymentSlot (NVARCHAR(255)): The Windows Azure deployment slot. For more information about Windows Azure deployments, see "Manage Deployments in Windows Azure" (http://msdn.microsoft.com/en-us/library/windowsazure/gg433027.aspx).
@Protocol (NVARCHAR(64)): For SQL Server TDS, specify "tcp".

Returns
Nothing.
Remarks
You can retrieve the subscription ID through the Windows Azure Management Portal by clicking Settings.
Figure 1 - How to find the Windows Azure Subscription ID using the management portal
A master credential must exist in the credential table. If none exists, the procedure throws an error. To add the required master credential, use the stored procedure [HAOnAzure].[UpsertAzureAccountCredentials].
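A usage sketch follows. Every identifier below (thumbprint, subscription GUID, service name, database, endpoint name, VIP) is a hypothetical placeholder; @DeploymentSlot and @Protocol keep their defaults of 'Production' and 'tcp':

```sql
-- Map database MyHADatabase to public port 14000 on the cloud service.
EXEC [HaOnAzure].[InsertOrUpdateEndpointOnHA]
    @CertificateThumbprint = N'<management-certificate-thumbprint>',
    @SubscriptionId        = '00000000-0000-0000-0000-000000000000',
    @ServiceName           = N'myhacloudservice',
    @DatabaseName          = N'MyHADatabase',
    @EndPointName          = N'MyHADatabaseEP',
    @Port                  = 14000,
    @VIP                   = N'203.0.113.10',
    @LocalPort             = 1433;  -- default SQL Server TDS port
```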
6) [HaOnAzure].[RemoveDBFromHA]
Description
This stored procedure removes a database from the scan that is performed each time the stored procedure [HAOnAzure].[ScanForDetachedDBsAndAttach] is called. This action effectively removes the database from the cluster.
Parameters
@dbName (SYSNAME): The database name.
Returns
Nothing.
Remarks
A master credential must exist in the credential table. If none exists, the procedure throws an error. To add the required master credential, use the stored procedure [HAOnAzure].[UpsertAzureAccountCredentials].
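A usage sketch (hypothetical database name); note that this only removes the metadata entries, not the database or its blobs:

```sql
-- Remove the database from the cluster scan; the database itself is
-- neither detached nor deleted.
EXEC [HaOnAzure].[RemoveDBFromHA] @dbName = N'MyHADatabase';
```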
7) [HaOnAzure].[RemoveEndpointFromHA]
Description
This stored procedure removes the endpoint from the scan that is performed each time the stored
procedure [HAOnAzure].[ScanForDBAndMountEndpoints] is called. This operation does not
remove the endpoint from the Windows Azure deployment. If you need to remove the endpoint, you must use Windows Azure PowerShell cmdlets or the stored procedure [Azure.Management].[RemoveEndpointFromPersistentVM]. For more information about using cmdlets, see "Windows Azure PowerShell" (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx).
Parameters
@dbName (SYSNAME): The database name.
@endpointName (NVARCHAR(260)): The endpoint name.
Returns
Nothing.
Remarks
A master credential must exist in the credential table. If none exists, the procedure throws an error. To add the required master credential, use the stored procedure [HAOnAzure].[UpsertAzureAccountCredentials].
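A usage sketch (hypothetical database and endpoint names), removing only the mapping from the framework's metadata:

```sql
-- Stop managing this endpoint mapping; the deployed Windows Azure
-- endpoint itself is left untouched.
EXEC [HaOnAzure].[RemoveEndpointFromHA]
    @dbName       = N'MyHADatabase',
    @endpointName = N'MyHADatabaseEP';
```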
8) [HaOnAzure].[ShowEndpoints]
Description
This stored procedure lists the endpoints configured using the stored procedure [HaOnAzure].[InsertOrUpdateEndpointOnHA]. This is the configured mapping, not the actual deployment. If you need to inspect the actual endpoint configuration, use the Windows Azure PowerShell cmdlets or the stored procedure [Azure.Management].[GetDeploymentsPersistentVMRolesWithInputEndpoints]. For more information about the cmdlets, see "Windows Azure PowerShell" (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx).
Parameters
Nothing.
Returns
@CertificateThumbprint (NVARCHAR(255)): The certificate thumbprint that identifies the Windows Azure management certificate. For more information, see "Windows Azure management certificate" earlier in this document.
@SubscriptionId (UNIQUEIDENTIFIER): The Windows Azure subscription ID. For more information, see the remarks section of the section that describes the stored procedure [HaOnAzure].[InsertOrUpdateEndpointOnHA].
@ServiceName (NVARCHAR(255)): The Windows Azure cloud service name. This is also the DNS name of your cluster. For more information, see "Virtual machines in the same Windows Azure cloud service" earlier in this document.
@DeploymentSlot (NVARCHAR(255)): The Windows Azure deployment slot. For more information about Windows Azure deployments, see "Manage Deployments in Windows Azure" (http://msdn.microsoft.com/en-us/library/windowsazure/gg433027.aspx).
@DatabaseName (SYSNAME): The database name. One database can have one endpoint.
@EndPointName (NVARCHAR(255)): The endpoint name. Any unique name can be used; however, standard Windows Azure endpoint naming constraints apply.
@Port (INT): The remote port (that is, the port to connect to, from the client perspective). Each port must be unique and available (not used by other services). For more information, see "Endpoint table" earlier in this document.
@LocalPort (INT): The TCP port of SQL Server. This can be shared among different external ports. For more information, see "Endpoint table" earlier in this document.
@Protocol (NVARCHAR(64)): For SQL Server TDS, specify "tcp".
@VIP (NVARCHAR(64)): The VIP of the cloud service.
Remarks
A master credential must exist in the credential table. If none exists, the procedure throws an error. To add the required master credential, use the stored procedure [HAOnAzure].[UpsertAzureAccountCredentials].
9) [HAOnAzure].[ShowDatabasesOnHA]
Description
This stored procedure shows the databases added to the cluster scan that is performed each time the
stored procedure [HAOnAzure].[ScanForDetachedDBsAndAttach] is called. It also returns a
list of the physical files (blobs) that make up the database.
Parameters
Nothing.
Returns
AccountName (NVARCHAR(260)): The Windows Azure account name.
DatabaseName (NVARCHAR(260)): The database name.
name (SYSNAME): The name of the file as it appears in the sys.master_files name field.
type (TINYINT): The type of the file as it appears in the sys.master_files type field.
physical_name (NVARCHAR(260)): The physical URI of the file as it appears in the sys.master_files physical_name field.
Remarks
A master credential must exist in the credential table. If none exists, the procedure throws an error. To add the required master credential, use the stored procedure [HAOnAzure].[UpsertAzureAccountCredentials].
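A usage sketch: the cluster-wide result set can be captured and joined against the local sys.databases catalog view to see which cluster databases this instance currently owns. The table-variable shape below mirrors the documented return columns:

```sql
-- Capture the cluster-wide database list and flag which of those
-- databases are currently attached to this instance.
DECLARE @onHA TABLE (
    AccountName   NVARCHAR(260),
    DatabaseName  NVARCHAR(260),
    [name]        SYSNAME,
    [type]        TINYINT,
    physical_name NVARCHAR(260));

INSERT INTO @onHA
    EXEC [HAOnAzure].[ShowDatabasesOnHA];

SELECT DISTINCT
    H.DatabaseName,
    CASE WHEN D.name IS NOT NULL
         THEN 'local' ELSE 'remote/detached' END AS Location
FROM @onHA H
LEFT JOIN sys.databases D ON D.name = H.DatabaseName;
```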