DELL EMC VMAX ALL FLASH STORAGE FOR
MISSION-CRITICAL SQL SERVER DATABASES
Dell EMC VMAX Engineering White Paper
ABSTRACT
This white paper describes the Dell EMC® VMAX® All Flash storage
system, which provides the high performance and compression-based storage
efficiency needed to run mission-critical SQL Server databases. VMAX All Flash
storage also provides ease of use, reliability, high availability, security, and
versatility.
March 2017
WHITE PAPER
The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind
with respect to the information in this publication, and specifically disclaims implied warranties of merchantability
or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are
trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA March 2017 White Paper H15019.1
Dell EMC believes the information in this document is accurate as of its publication date. The information is
subject to change without notice.
TABLE OF CONTENTS
EXECUTIVE SUMMARY ...........................................................................................................5
AUDIENCE .................................................................................................................................6
INTRODUCTION ........................................................................................................................6
VMAX All Flash Adaptive Compression Engine ................................................................................ 6
KEY BENEFITS OF VMAX ALL FLASH FOR SQL SERVER DATABASES ..........................7
DESIGN CONSIDERATIONS FOR SQL SERVER AND VMAX ALL FLASH ..........................7
Storage design .................................................................................................................................. 8
Storage connectivity best practices ............................................................................................................ 8
Virtual provisioning and thin devices .......................................................................................................... 8
Best practices for VMAX compression with SQL Server............................................................................. 9
Host connectivity ............................................................................................................................. 10
FC connectivity best practices.................................................................................................................. 10
iSCSI connectivity best practices ............................................................................................................. 11
Number and size of host devices ............................................................................................................. 11
Physical host connectivity best practices ................................................................................................. 11
Host I/O limits and multi-tenancy.............................................................................................................. 12
MANAGING SQL SERVER DATA COPIES USING APPSYNC ON VMAX ALL FLASH ..... 12
Configuring the AppSync Server ..................................................................................................... 13
Discovering and provisioning VMAX All Flash storage for database copies .................................... 14
Registering the host agent and discovering Microsoft SQL Server databases ................................ 15
Selecting or configuring service plans ............................................................................................. 15
Scheduling database copies and using them for repurposing ......................................................... 16
SQL SERVER TEST CASES ON VMAX ALL FLASH ........................................................... 17
Test environment............................................................................................................................. 18
Hardware and software configuration ....................................................................................................... 18
Test Cases ...................................................................................................................................... 19
TEST 1: Single SQL Server database OLTP test ..................................................................................... 19
Test motivation ................................................................................................................ 19
Test configuration ............................................................................................................ 19
Test results ...................................................................................................................... 19
Test conclusion ................................................................................................................ 19
TEST 2: Multiple SQL Server database OLTP tests ................................................................................. 20
Test motivation ................................................................................................................ 20
Test configuration ............................................................................................................ 20
Test results ...................................................................................................................... 20
Test conclusion ................................................................................................................ 23
TEST 3: Single SQL Server database OLTP test with VMAX compression .............................................. 23
Test motivation ................................................................................................................ 23
Test configuration ............................................................................................................ 23
Test results ...................................................................................................................... 23
Test conclusion ................................................................................................................ 24
TEST 4: Single SQL Server database OLTP test with VMAX and SQL Server compression ................... 25
Test motivation ................................................................................................................ 25
Test configuration ............................................................................................................ 25
Test results ...................................................................................................................... 25
Test conclusion ................................................................................................................ 26
CONCLUSION ........................................................................................................................ 26
REFERENCES ........................................................................................................................ 27
APPENDIX .............................................................................................................................. 28
Using Unisphere to configure VMAX compression .......................................................................... 28
Checking and setting the configuration of the storage group .................................................................... 28
Using Solutions Enabler CLI to configure VMAX compression ....................................................... 29
Checking the configuration on the storage group ..................................................................................... 29
Setting the compression on the storage group ......................................................................................... 30
Getting the details about device level compression .................................................................................. 30
Using VMAX REST API to provision Storage and monitor Performance ......................................... 30
Introduction .............................................................................................................................................. 30
REST model ............................................................................................................................................ 30
Transport protocol .................................................................................................................................... 31
REST support in PowerShell .................................................................................................................... 31
REST API documentation for VMAX ........................................................................................................ 31
Authentication and security ...................................................................................................................... 32
Client authentication ................................................................................................................................ 32
Disabling SSL certificate validation .......................................................................................................... 32
Basic storage provisioning operations using REST API ........................................................................... 33
Creating Storage Groups ......................................................................................................................... 33
Adding volumes to a storage group .......................................................................................................... 33
Creating Hosts ......................................................................................................................................... 33
Creating port groups ................................................................................................................................ 34
Creating masking views ........................................................................................................................... 34
Creating your own functions ..................................................................................................................... 34
Calling your own functions ....................................................................................................................... 35
Gathering performance statistics.............................................................................................................. 35
EXECUTIVE SUMMARY
In 2016, Dell EMC released three newly engineered and purpose-built VMAX All Flash products: the VMAX 250,
VMAX 450, and VMAX 850, available with F and FX software packages. The VMAX architecture uses
the latest, most cost-efficient 3D NAND flash drive technology, with multi-dimensional scale, large write-cache
buffering, back-end write aggregation, high IOPS, high bandwidth, and low latency.
Benefits of VMAX All Flash systems include:

Easy provisioning and storage management: VMAX uses virtual provisioning to create new storage
devices in seconds. All VMAX devices are thin provisioned, consuming only the storage capacity that is
actually written to, which increases storage efficiency without compromising performance. VMAX devices can
be grouped into Storage Groups and managed as a unit, for performance Quality of Service (QoS), data
protection and data efficiency use cases. In addition, VMAX storage can be managed using Unisphere for
VMAX, Solutions Enabler CLI, or REST APIs.

High performance: Each VMAX All Flash V-Brick features a multi-controller architecture, front-end and back-end
connectivity, an InfiniBand internal fabric, and a large mirrored and persistent cache. Each V-Brick
offers high and predictable performance for any kind of workload, including transaction processing, log writes,
checkpoints, and batch processes. VMAX also excels in servicing high-bandwidth sequential workloads by
leveraging pre-fetch algorithms, optimized writes, and fast front-end and back-end interfaces.

Superior data services: VMAX All Flash excels at providing data services. It natively protects all data with
T10-DIF from the moment it enters the array until it leaves (including replications). With SnapVX™ and SRDF®,
VMAX offers many topologies for consistent local and remote replications, and integrations with Data Domain™
such as ProtectPoint™ for data backups using storage snapshots. Other VMAX All Flash data services include
host I/O limits and other QoS features, compression, “call-home” support, non-disruptive upgrades (NDU),
and non-disruptive migrations (NDM).
Figure 1 shows the multi-dimensional scalability, performance, and data services offered by VMAX All Flash.
Figure 1.
VMAX All Flash multi-dimensional performance and scalability
VMAX All Flash is an excellent choice to run SQL Server mission-critical databases, where high performance,
resiliency, protection, and security are required.
This white paper describes the benefits of using Dell EMC VMAX All Flash storage for Microsoft SQL Server
databases. Four SQL Server use cases demonstrate how the design and features of VMAX All Flash provide a
powerful environment to run SQL Server mission-critical workloads; and how the VMAX All Flash adaptive
compression engine (ACE) can efficiently manage capacity utilization for SQL Server databases with no overhead
on production database environments and minimal impact on overall system performance.
AUDIENCE
This paper is intended for database and system administrators, storage administrators, and system architects who
are responsible for implementing, managing, and maintaining SQL Server databases and VMAX storage systems.
Readers should be familiar with SQL Server and the VMAX family of storage arrays, and have an interest in
achieving higher database availability, better performance, and simplified storage management.
INTRODUCTION
SQL Server databases often use flash storage, which provides high performance and low latency, with faster
transactions that improve business agility and client satisfaction. Flash storage provides a better total
cost of ownership (TCO) because a few high-capacity SSD drives use less floor space and less power than many
hard disk drives. Flash storage also provides consistent performance whether the I/O profile is random,
sequential, intermittent, or continuous.
SQL Server databases can greatly benefit from both server-side cache and flash storage. However large the
server-side cache is, the database capacity is often larger. While frequently accessed data fits in the database cache,
there are always queries that access less frequently needed data. Database consolidation also often means a
smaller portion of the cache is used for each tenant. Finally, in a cluster, server-side cache is not cumulative;
each cluster node caches its own data regardless of the others. When the requested data is not in cache, flash
storage enables the I/O to complete quickly. VMAX All Flash is even faster and more reliable than other third-party
flash-only systems because of its high-capacity, persistent VMAX cache.
This platform complements the database cache by completing I/O requests quickly for blocks that are not already in
cache. VMAX All Flash also provides a storage system that allows for high performance, consolidation, and ease of
data replication for backup, high availability (HA), or disaster recovery (DR).
VMAX All Flash natively provides automatic, scheduled, and application-consistent snapshots for Microsoft SQL Server
and other applications, using SnapVX to create point-in-time copies for backup, reporting, or test/dev.
EMC AppSync lets you manage application snapshots with tight integration between VMAX SnapVX, the
Microsoft Volume Shadow Copy Service (VSS), and the SQL Server Virtual Device Interface (VDI).
VMAX All Flash offers active/active high availability of storage devices at synchronous distances for Microsoft
SQL Server failover clusters using SRDF/Metro. With storage devices always read/write enabled, SQL Server
cluster resources can be restarted quickly in the event of a failover, improving RTO and simplifying management
of SQL Server databases on a Windows Server Failover Cluster.
Since the HYPERMAX OS Q3 2016 microcode release, VMAX All Flash systems can perform data
compression to significantly increase the effective capacity of the array. With fine-grained data packing and
activity-based, hardware-accelerated compression, all application environments can achieve storage
efficiency with optimum performance.
VMAX All Flash Adaptive Compression Engine
The adaptive compression engine (ACE) offers high performance and maximum storage efficiency for application
environments. ACE compresses data and efficiently optimizes system resources to balance overall system
performance. Some of the features of the ACE are as follows:

Selective compression—Using the ACE, the user can decide at the VMAX storage group (SG) level which
data to compress and when to enable or disable compression. As an example, to prioritize workloads, you
might decide not to use compression. Enabling compression for an SG compresses the candidate data in the
background with the system still active. Likewise, disabling compression for an SG decompresses data over
time.

VMAX activity-based compression (ABC)—The use of inline compression and algorithms paired with
compression hardware acceleration provides space savings, as you would expect. In addition, the ACE uses
activity-based compression to determine which data should be compressed. This enables the system to
mitigate compression overhead on both the system and frequently accessed data.
ABC prevents constant compression and decompression of data that is frequently accessed. ABC marks the
busiest data in the SRP to skip the compression flow regardless of the related storage group compression
setting. This function differentiates busy data from idle or less-busy data and only accounts for up to 20
percent of the allocations in the SRP. Marking up to 20 percent of the busiest allocations to skip the
compression action ensures optimal response time and reduced overhead that can result from the act of
compressing data to save space. The mechanism used to determine the busiest data does not add additional
CPU load on the system. This function is similar to the FAST™ code used for promoting data in previous code
releases. ABC leverages the FAST statistics to determine what data sets are the best candidates for
compression. It allows the system to maintain balance across the resources providing an optimal environment
for both the best possible compression savings and the best performance. Effectively, this avoids
compression and decompression latency for the busiest data and reduces system overhead.

Sizing and capacity—All systems must be sized considering a compression ratio ranging between 1.3:1 and
3.0:1. When sizing a system where compression is included it is important to understand the data that
populates the system. It is equally important to understand how to determine the amount of capacity needed
to support compression. In the world of storage, capacity has many meanings. One of the most important is
host addressable capacity which refers to the amount of capacity that can be presented to the hosts and
applied to applications that deliver the workload to the system.

Fine-grained data packing—Using the ACE, each 128 KB I/O is split into four 32 KB buffers. Each buffer
is compressed individually in parallel, maximizing the efficiency of the compression I/O module. The total of the
four buffers results in the final compressed size and determines where the data is allocated.
Fine-grained data packing offers performance benefits for both the compression function and the overall
performance of the system. Included in this process is a zero reclaim function that prevents the allocation of
buffers with all zeros or no actual data. Pairing the zero reclaim function with fine-grained data packing allows
the compression function to operate efficiently with minimal impact on performance. Compressing the 128 KB
I/O in four buffers individually in parallel allows each section to be handled independently, even though
they are still part of the initial 128 KB I/O. The main benefit comes in the case of partial write updates or read
I/O. If only one or two of the sections need to be updated or read, only that data is decompressed.
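To make the mechanics concrete, the following PowerShell sketch mimics the idea of splitting a 128 KB track into four 32 KB buffers and compressing each one independently. It is purely illustrative: DeflateStream stands in for the VMAX compression I/O module and the data pattern is invented, so the numbers it prints say nothing about real ACE behavior.

# Illustrative sketch only: split a 128 KB "track" into four 32 KB buffers and
# compress each buffer independently, then sum the results. DeflateStream is a
# stand-in for the hardware compression module; the data pattern is made up.
$track = [byte[]]::new(128KB)
for ($i = 0; $i -lt $track.Length; $i++) { $track[$i] = $i % 16 }    # repetitive, compressible pattern

$compressedSizes = foreach ($b in 0..3) {
    $buffer = [byte[]]$track[($b * 32KB)..((($b + 1) * 32KB) - 1)]   # one 32 KB buffer
    $ms = [System.IO.MemoryStream]::new()
    $ds = [System.IO.Compression.DeflateStream]::new($ms, [System.IO.Compression.CompressionMode]::Compress)
    $ds.Write($buffer, 0, $buffer.Length)
    $ds.Close()                                                      # flush and finish this buffer
    $ms.ToArray().Length                                             # compressed size of this buffer
}

# The sum of the four buffer results corresponds to the final compressed size
# that determines where the data would be allocated.
'Compressed size of the 128 KB track: {0:N0} bytes' -f ($compressedSizes | Measure-Object -Sum).Sum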
KEY BENEFITS OF VMAX ALL FLASH FOR SQL SERVER DATABASES
Several features of VMAX All Flash provide exceptional benefits when used for SQL Server databases:

Dynamic random-access memory (DRAM)-based cache is large, persistent, and mirrored, which enables all
writes, including SQL Server log writes and batch loads, to complete faster than writing directly to SSD.

Dell EMC TimeFinder SnapVX provides storage-consistent snapshots. SnapVX offers high levels of scale,
efficiency, and simplicity. It uses redirect-on-write for added performance, and pointer-based management for
dedup-like data reduction. Snapshots have names, versions, dates, and automatic expiration dates for easy
identification and management. Snapshots are protected, so they can be re-used regardless of changes by the
application. They can also cascade any number of times. With SnapVX, SQL Server database replicas can be
created for gold copies, test or development environments, and backups. Snapshots can be restored in
seconds, and read/write access to their data is always immediate.

Dell EMC Symmetrix Remote Data Facility (SRDF™) offers many disaster recovery topologies for SQL
Server databases, including those for two, three, and four sites. SRDF can replicate in synchronous and
asynchronous modes, and offers two-site and multi-site cascaded and STAR configurations. SRDF is closely
integrated with SnapVX™ to offer a variety of HA and DR solutions, including fast recovery from remote
copies, backup offload, and more.

Dell EMC AppSync offers self-service automated copy management for Microsoft SQL Server and other
application environments. It provides automatic application discovery and mapping, storage provisioning and
selection for local and remote replications, and easy repurposing for test, development, and reporting. Copies
can be used for instantly restoring a production database.
DESIGN CONSIDERATIONS FOR SQL SERVER AND VMAX ALL FLASH
The newest release of VMAX All Flash has simplified storage provisioning. This section describes configuration
best practices and design considerations for storage connectivity and provisioning with VMAX All Flash and
Microsoft SQL Server.
STORAGE DESIGN
VMAX All Flash is pre-configured with your specified capacity, connectivity options, and storage RAID protection.
The SSD drives are spread across the backend, and provisioned into logical devices called thin data devices
(TDATs). The devices are placed in RAID groups and pooled together to become a Storage Resource Pool
(SRP). When host thin devices (TDEVs) are later created, their capacity is consumed from the SRP. VMAX usually
ships with one SRP, or two SRPs when CloudArray is used to provide native cloud connectivity.
VMAX All Flash architecture is based on V-Bricks. A V-Brick contains an engine, which consists of two directors,
and each director includes cache, ports, and emulations based on the services and functions that the storage was
designed to provide. For example, each director includes ports supporting Fibre Channel (FC) or GigE, and
emulations, such as front-end, back-end, iSCSI, eNAS, and SRDF. Each director also includes CPU cores that
are pooled for each emulation and serve all its ports. While the default balanced core allocation is usually best,
EMC support can deploy other methods if core use across emulations is constantly unbalanced.
Since VMAX All Flash comes pre-configured, once powered up, activities can immediately focus on physical
connectivity to hosts, zoning, and storage provisioning to the database servers. This makes the VMAX All Flash
deployment fast, easy, and focused on the application needs, rather than on the storage configuration.
Storage connectivity best practices
When planning storage connectivity for performance and availability, "go wide before going deep," which means it
is better to connect storage ports across different engines and directors than to use all the ports on a single
director. In this way, even if a component fails, the storage can continue to service host I/Os.
Dynamic core allocation is a new VMAX feature. Each VMAX director provides services such as front-end
connectivity, backend connectivity, or data management. Each such service has its own set of cores on each
director that are pooled together to provide CPU resources that can be allocated as necessary. For example, even
if host I/Os arrive via a single front-end port on the director, the front-end pool with all its CPU cores is available to
service that port. Because I/Os arriving to other directors have their own core pools, we recommend connecting
each host to ports on different directors before using additional ports on the same director. This practice ensures
high performance and availability.
Virtual provisioning and thin devices
All VMAX host devices use virtual provisioning (also known as thin provisioning). The devices are actually a set of
pointers to capacity allocated at 128 KB extent granularity in the storage data pools. However, to the host they
look and respond just like regular LUNs. Using pointers also increases capacity and efficiency for local replication
using SnapVX by sharing extents when data does not change between snapshots.
Virtual provisioning offers a choice of whether to fully allocate the host device capacity, or to allow allocation on-demand.

A fully allocated device consumes all its capacity in the data pool on creation, and therefore, there is no risk
that future writes may fail because the SRP has no capacity left.

Allocation on-demand enables over-provisioning. Although the storage devices are created and look to the
host as available with their full capacity, actual capacity is allocated in the data pools only when host writes
occur. This is a common cost-saving practice.
Use allocation on-demand when:

The capacity growth rate for an application is unknown.

You do not want to commit large amounts of storage ahead of time, as it might not be used.

You do not want to disrupt host operations at a later time by adding more devices.
If allocation on demand is used, capacity is only physically assigned as needed to meet application requirements.
When data files are created, Microsoft SQL Server pre-allocates capacity by writing to every page with contiguous
zeros. When allocation on-demand is used, it is best to increase database capacity over time, based on actual
need. For example, if SQL Server was provisioned with a thin device of 2 TB, rather than immediately creating
data files of 2 TB and consuming all its space, the DBA should use an auto-growth feature that consumes only the
needed capacity.
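As an illustration of that practice, the following hedged sketch creates data files with a modest initial size and relies on auto-growth rather than pre-allocating the full capacity of the thin device. The instance name, database name, file paths, and sizes are hypothetical examples, and the SqlServer (or older SQLPS) PowerShell module is assumed to be available.

# Sketch only: create a database with modest initial file sizes and auto-growth
# instead of pre-allocating the full 2 TB thin device up front.
# Instance, database, paths, and sizes are hypothetical examples.
Import-Module SqlServer

$createDb = @"
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data1, FILENAME = 'E:\SQLData\SalesDB_data1.mdf',
     SIZE = 100GB, FILEGROWTH = 25GB)
LOG ON
    (NAME = SalesDB_log, FILENAME = 'L:\SQLLog\SalesDB_log.ldf',
     SIZE = 50GB, FILEGROWTH = 10GB);
"@

Invoke-Sqlcmd -ServerInstance 'SQLPROD01' -Query $createDb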
Note: When Windows Instant File Initialization (IFI) is used, allocation of data files occurs in a thin-pool friendly
way. Areas of a disk under which a sparse file is defined, as created by Instant File Initialization, are not zeroed.
As table and index information is written to a fully initialized data file, areas of the database become allocated and
used by non-zero user data. SQL Server automatically uses Windows Instant File Initialization (IFI), if the service
account under which the SQL Server service is running has Perform volume maintenance tasks permission
under the local security policy. By default, only administrators have this permission. Information about the Instant
File Initialization (IFI) functionality is provided in the Microsoft SQL Server Books Online product documentation.
Transaction logs remain fully allocated even with IFI, to avoid the marginal performance penalties that
may result from additional thin-pool allocations. Refer to Application deployment best practices with VMAX
Virtual Provisioning for more details.
Best practices for VMAX compression with SQL Server
This section discusses various considerations when using VMAX compression for SQL Server.

Efficiency using Thin Provisioning—VMAX is a fully thin-provisioned storage system providing the highest
levels of storage efficiency. SQL Server storage groups only consume storage capacity for the data
actually written by the host and grow as needed with host writes. Storage efficiency is therefore realized
as soon as SQL Server storage groups are deployed on VMAX devices. All VMAX data services continue to offer
highly efficient data copies and capacity utilization, even when using VMAX SnapVX for periodic snapshots or
SRDF for remote replication.

Efficiency using VMAX compression—The VMAX compression engine splits the 128 KB track into four 32
KB buffers and acts on them independently, making the compression process very efficient. SQL Server I/O
ranges between 8 KB and 256 KB, with 8 KB to 32 KB write sizes most common for OLTP workloads;
therefore, contiguous writes to the same track are considered together for compression analysis. Data is
also decompressed at the same 32 KB granularity, which improves latency for activities with high locality
of reference, because data might already be available in VMAX cache from prior activity on the track. Data that
does not exceed the activity thresholds assigned by the VMAX compression algorithms
is compressed inline and written to the thin pools, resulting in excellent efficiency.

Dynamically changing compression on storage groups—VMAX compression is configured at the
storage group level and can be changed dynamically. However, because VMAX compression is activity-based,
the effect might not be immediately noticeable. To minimize the impact on other workloads, compression-related
data movements are done at a lower priority. Once compression is enabled on the storage group, there is
no need to disable it, even though it is possible. The VMAX compression engine compresses and decompresses
data as necessary to meet application performance needs, even when accessing a compressed data set.

Expectation of the compression ratio—Compressibility greatly depends on the data set; most live SQL
Server databases exhibit compression ratios from 1.3:1 to 2.0:1. Once the VMAX system is configured for
compression, you can configure individual storage groups for compression, and they will achieve their target
compression ratio over time. Using third-party tools like MiTrends enables you to estimate compression ratio
targets by collecting and analyzing statistics on random samples of the devices. Higher compressibility can
potentially be achieved when large blocks of fillers or blob data with a lot of cyclic/repeated patterns exist in
the data set. VMAX ACE determines the compressibility of the storage group by scanning the contents and
transparently moving the application data to highly compressible storage pools, improving the compression
ratio of the storage group. When the workload is run, compression targets are maintained as the ACE
decompresses the active data set and assigns scores to the extents for further compressibility analysis. Less
active extents remain on the compression storage pools that meet their potential compressibility.

Compressibility of logs—SQL Server active log devices generally have lower compressibility, but as the
logs fill up and are archived, the inactive nature of the archived logs results in higher compressibility of the log
devices.

SQL Server data and log devices in cascaded configuration—ACE collects statistics on 32 KB data
extents for compressibility. Although SQL Server storage provisioning uses child storage groups for data and
log devices for data protection and manageability, these devices can be part of the same parent storage
group that has compression enabled. ACE still operates at the device level and achieves compression of
various SQL Server objects in the most optimal fashion. Thus, best practices for storage provisioning with
VMAX compression are the same as the best practices associated with other VMAX data services, such as
SnapVX and SRDF.

SQL Server compression and encryption—SQL Server supports table-level row and page compression,
configured with the Data Compression Wizard or T-SQL (a brief sketch follows this list). Compressibility is
slightly better with SQL Server compression than with VMAX compression because the compression is performed
inline by SQL Server. However, SQL Server compression puts a greater demand on the host CPU,
which affects the overall performance of SQL Server and other applications running on the server. VMAX
compression is set up at the storage group level, so all database objects that are part of the storage group
benefit, including the data, logs, indexes, and any support files associated with the application and
database. Even external blob/object stores can be compressed when using VMAX compression without any
impact on overall performance, while still improving storage efficiency.
Although SQL Server and VMAX compression can co-exist, any application performance benefits are realized
at the expense of the host CPU overhead resulting from SQL Server compression. The benefits are also
short-lived, because SQL Server reads in more data to decompress to meet application needs. As the data
ages and the cache is refreshed, any benefits diminish and performance drops to the same level
as using only VMAX compression, which incurs no CPU overhead. For optimal performance, if there is a
choice, using VMAX compression is preferred over SQL Server compression.
VMAX compression can also be used when SQL Server data, connections, and stored procedures use SQL
Server encryption. The compressibility of the data depends on the amount of redundancy or repeated
patterns, but there is no reason not to use VMAX compression even when the database is encrypted. Even if
the compressibility of the data is lower due to obfuscated data patterns, there is no overhead associated with
VMAX compression, so application performance will not suffer. In addition to compression, VMAX All Flash
systems offer Data at Rest Encryption (D@RE), using an integrated or external key manager, which provides
storage-level encryption for the highest level of security.
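As a hedged illustration of the SQL Server side of this comparison, the following sketch estimates the savings from page compression and then enables it on a single table. The instance, database, and table names are hypothetical examples, and the SqlServer PowerShell module is assumed.

# Sketch only: estimate and enable SQL Server page compression on one table.
# Instance, database, and table names are hypothetical examples.
Import-Module SqlServer

$estimate = @"
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo', @object_name = 'OrderDetail',
     @index_id = NULL, @partition_number = NULL, @data_compression = 'PAGE';
"@
$rebuild = 'ALTER TABLE dbo.OrderDetail REBUILD WITH (DATA_COMPRESSION = PAGE);'

# Estimate the savings first, then rebuild the table with page compression.
Invoke-Sqlcmd -ServerInstance 'SQLPROD01' -Database 'SalesDB' -Query $estimate
Invoke-Sqlcmd -ServerInstance 'SQLPROD01' -Database 'SalesDB' -Query $rebuild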
HOST CONNECTIVITY
HBA ports (initiators) and storage ports (targets) are connected to an FC or Ethernet switch based on the
connectivity requirements. FC connectivity requires that you create zones on the switch and define initiator and
target relationships. The zones create an I/O path between the host and storage. Zoning and paths strongly affect
performance aspects of the host and database.
FC connectivity best practices

When zoning host initiators to storage target ports, ensure that each pair is on the same switch. Performance
bottlenecks are often created when I/Os need to travel through ISLs (inter-switch links), as those paths are
shared and limited.

Use at least two HBAs for each database server to enable better availability and scale. Use multipathing
software such as Dell EMC PowerPath® or Microsoft Windows MPIO to balance loads and automatically fail over or
recover paths. Use commands such as powermt display paths (PowerPath) or multipath -l (Linux) to ensure that all paths
are visible and active; a quick Windows verification sketch follows this list.

Consider port speed and count when planning bandwidth requirements. Each 8 Gb FC port can deliver up to
about 800 MB/second. Therefore, a server with 4 ports cannot deliver more than about 3 GB/second. Also
consider that between the host initiator, storage port, and switch, the lowest speed supported by any of these
components is negotiated and used for that path.

Consider the number of paths that are available to the database storage devices. Each path between the host
and storage ports creates additional SCSI representation for the database devices on the host.

While more paths add I/O queues and the potential for more concurrency and performance, consider that
server boot time is affected by the number of SCSI devices discovered (one for each path combination per
device, plus a pseudo device). Also, once connectivity needs are satisfied for performance and availability,
additional paths do not add more value and only add more SCSI representations to the host.

In most cases, for availability and performance, it is sufficient to have each HBA port zoned/masked to two or
four VMAX ports, preferably on different engines and directors.
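The following hedged sketch shows the Windows-side verification referenced above; powermt is the PowerPath CLI and mpclaim is the native Windows MPIO utility, so use whichever multipathing software is installed on the host.

PS > powermt display dev=all   # PowerPath: per-device path count and state
PS > mpclaim -s -d             # Native Windows MPIO: disks and their load-balance policy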
iSCSI connectivity best practices

Use VLANs dedicated to iSCSI setup. VLANs allow logical grouping of network end points, which minimize
network bandwidth contention for iSCSI traffic and eliminate impact on iSCSI traffic due to noisy neighbors.

If all network devices in the iSCSI communication paths support jumbo frames, using jumbo frames on
Ethernet improves iSCSI performance.

To minimize host CPU impact due to network traffic, ensure that transmission control protocol (TCP) offloading
is enabled on the host network interface card (NIC), which offloads processing of the TCP stack to the NIC and
eases the impact on the CPU (a quick check sketch follows this list).

As with FC connectivity, using PowerPath or native Windows multipathing (MPIO) helps with load balancing and
eases queuing issues for iSCSI traffic through the host NICs.
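The following hedged PowerShell sketch shows quick checks for some of the settings above; the adapter name is a hypothetical example, and the 'Jumbo Packet' display name varies by NIC driver.

# Sketch: quick checks of the iSCSI NIC settings discussed above. Adapter name
# is an example; advanced-property display names vary by driver.
Get-NetAdapterAdvancedProperty -Name 'iSCSI-NIC1' -DisplayName 'Jumbo Packet'
Get-NetAdapterChecksumOffload  -Name 'iSCSI-NIC1'
Get-NetOffloadGlobalSetting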
Number and size of host devices

VMAX uses thin devices exclusively. That means that unless devices are specifically requested on creation to
fully allocate their capacity in the SRP, they only consume as much capacity as the application actually writes
to them. For example, by default, a 1 TB device that has not been written to does not consume any storage
capacity. This approach enables capacity saving because storage is consumed based on demand and not
during device creation. However, if certain applications require a guarantee for their capacity, their devices
can be created fully and allocated in the SRP. New devices can be created easily using Unisphere for VMAX
or by using CLI, as in the following example:
PS > symdev create -tdev -cap 500 -captype gb -N 4 -v # create 4 x 500 GB thin devices

VMAX host devices are natively striped at 128 KB across the data pools in the SRP. Although you can create
only a few very large host devices, consider the following:
o
As explained earlier, each path to a device creates a SCSI representation on the host. Each such
representation provides a host I/O queue for that path. Each such queue can service a tunable, but
limited (often 32) number of I/Os simultaneously. Provide enough database devices for concurrency
(multiple I/O queues), but not so many that it would increase management overhead.
o
Another benefit of using multiple host devices is that internally the storage array can use more
parallelism for operations such as data movement and local or remote replications. By performing
more copy operations simultaneously, the overall operation takes less time.
o
While the size and number of host devices can vary, we recommend finding a reasonable, low
number that offers enough concurrency, provides an adequate building block for capacity increments
when additional storage is needed, and does not become too large to manage. For example, 4 or 8
devices, sized at 250 GB to 1 TB (choose one size) can be a good design starting point for databases
from 1 to 8 TB in size, if enough connectivity/concurrency exists.
Physical host connectivity best practices
For physical host connectivity, consider the number and speed of the HBA ports (initiators) and the number and
size of host devices.

HBA ports—Each HBA port (initiator) creates a path for I/Os between the host and the SAN switch and then
to the VMAX storage. If a host uses a single HBA port, it has a single I/O path that must serve all I/Os. Such a
design is not advisable, as a single path does not provide high-availability and risks a potential bottleneck
during high I/O activity because additional ports are not available for load-balancing.
For a better design, provide each database server with at least two HBA ports, preferably on two separate
HBAs. The additional ports provide more connectivity and also allow multipathing software like Dell EMC
PowerPath or Microsoft Multipath I/O (MPIO), to load-balance and failover across HBA paths.
Each path between host and storage device creates a SCSI device representation on the host. For example,
two HBA ports connected to two VMAX front-end adapter ports in a 1:1 relationship create three
representations for each host device: one for each path, plus the pseudo device that the multipathing software
creates (for example, a Dell EMC Symmetrix Multi-Path Disk Device created by PowerPath). If each HBA
port were zoned and masked to both FA ports (a 1:many relationship), there would be five SCSI device
representations for each host device (one for each path combination, plus the pseudo device).
While modern operating systems can manage hundreds of devices, it is not advisable or necessary, and it
burdens the user with complex tracking and storage provisioning management overhead. We recommend that
you establish enough HBA ports to support workload concurrency, availability, and throughput. Also use 1:1
relationships for storage front-end ports, and do not zone or mask each HBA port to all VMAX front-end
ports. Following these suggestions will provide enough connectivity, availability, and concurrency, while
reducing unnecessary complexity.

Number and size of host devices—VMAX can create host devices with capacity ranging from a few
megabytes to multiple terabytes. With the native striping across the data pools that VMAX provides, the user
may be tempted to create only a few very large host devices. For example: a 1 TB Microsoft SQL Server
database can reside on one 1 TB host device, or perhaps on ten 100 GB host devices. While either option
satisfies the capacity requirement, you should use a reasonable number of host devices of appropriate size.
In the example above, if the database capacity was to rise above 1 TB, it is likely that the DBA would want to
add another device of the same capacity, even if they didn’t currently need 2 TB in total. Therefore, large host
devices create very large building blocks when additional storage is needed.
Each host device also creates its own host I/O queue for the host operating system. Each such queue can
service a tunable, but limited, number of I/Os that can be transmitted simultaneously. If, for example, the host
has 4 HBA ports, and a single 1 TB LUN with multipathing software, it will have only 4 paths available to
queue I/Os. A high level of database activity generates more I/Os than the queues can service, resulting in
artificially elongated latencies. In this example, two or more host devices are advisable to alleviate such an
artificial bottleneck. Host software such as Dell EMC PowerPath or Windows PerfMon can help in monitoring
host I/O queues to ensure that the number of devices and paths is adequate for the workload (a monitoring
sketch follows this list).
Another benefit of using multiple host devices is that, internally, the storage array can use more parallelism
when operations such as FAST data movement or local and remote replications take place. By performing
more copy operations simultaneously, the overall operation takes less time.
While there is no one magic number for the size and number of host devices, we recommend finding a
reasonably low number that offers enough concurrency, provides an adequate building block for capacity
when additional storage is needed, and does not become too large to manage.
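The monitoring sketch mentioned above uses standard Windows PerfMon counters through Get-Counter; the sample interval and count are arbitrary examples.

# Sketch: sample per-disk queue depth and latency to verify that the number of
# devices and paths is adequate for the workload.
$counters = '\PhysicalDisk(*)\Current Disk Queue Length',
            '\PhysicalDisk(*)\Avg. Disk sec/Read',
            '\PhysicalDisk(*)\Avg. Disk sec/Write'

Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples } |
    Select-Object Path, @{ Name = 'Value'; Expression = { [math]::Round($_.CookedValue, 4) } }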
Host I/O limits and multi-tenancy
The Host I/O limits quality of service (QoS) feature was introduced in the previous generation of VMAX arrays and
continues to provide the option to place specific IOPS or bandwidth limits on any storage group. For example,
assigning a specific Host I/O limit for IOPS, to a storage group with low performance requirements can ensure that
a spike in I/O demand will not saturate or overload the storage, affecting performance of more critical applications.
As the test cases for multi-tenant application workloads later in this paper show, even though VMAX All Flash is
capable of maintaining high performance at low latency, using a Host I/O limit may limit the impact of noisy
neighbors on mission-critical applications.
MANAGING SQL SERVER DATA COPIES USING AppSync ON VMAX ALL FLASH
Many applications that run Microsoft SQL Server are required to be fully operational at all times, and the data for
these applications continues to grow. At the same time, RPO and RTO requirements are becoming more
stringent. As a result, there is a large gap between the requirements for fast and efficient protection and
replications, and the ability to meet these requirements without overhead or operations disruption. Examples are
the ability to create local and remote database replicas in seconds without disruption of Production host CPU or
I/O activity for purposes such as patch testing, running reports, creating development sandbox environments,
publishing data to analytic systems, offloading backups from Production, DR strategy, and more.
Traditional solutions rely on host-based replications. The disadvantages are:

Additional host I/O and CPU cycles are consumed by the need to create such replicas.

Monitoring and maintaining them, especially across multiple servers, is complex and time consuming.
TimeFinder local replication benefits for SQL Server include:

The ability to create instant and consistent database replicas for repurposing, for a single database or multiple
databases, including external data or message queues, across multiple VMAX storage systems.

TimeFinder replica creation or restore time takes seconds, regardless of the database size. The target
devices (for a replica) or source devices (for a restore) are available immediately with their data, even as
incremental data changes are copied in the background.

VMAX TimeFinder SnapVX snapshots are consistent. Each source device can have up to 256 space-efficient
snapshots that can be created or restored at any time. Snapshots can be further linked to up to 1,024 target
devices, maintaining incremental refresh relationships. The linked targets can remain space-efficient, or a
background copy of all the data can take place, making it a full copy. In this way, SnapVX allows an unlimited
number of cascaded snapshots.
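As a hedged illustration, a consistent SnapVX snapshot of a SQL Server storage group can be created and listed from the Solutions Enabler CLI; the array ID and storage group name are hypothetical, and option support varies by Solutions Enabler version, so verify against the symsnapvx documentation.

PS > symsnapvx -sid 048 -sg SQL_SG establish -name sql_gold_copy -nop   # create a named snapshot of the storage group
PS > symsnapvx -sid 048 -sg SQL_SG list                                 # list the snapshots for the storage group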
SRDF remote replication benefits for SQL Server include:

Synchronous and asynchronous consistent replications of a single database or multiple databases, including
external data or message queues, and across multiple VMAX storage array systems if necessary. The point of
consistency is created before a disaster strikes, instead of taking hours to achieve afterwards when using
replications that are not consistent across applications and databases.

DR protection for two or three sites, including cascaded or triangular relationships, where SRDF always
maintains incremental updates between source and target devices.

SRDF and TimeFinder are integrated. SRDF replicates the data remotely and TimeFinder can be used on
the remote site to create writable snapshots or backup images of the database. In this way the database
administrators can perform remote backup operations, or create remote database copies.
SRDF and TimeFinder can work in parallel to restore remote backups. While a remote TimeFinder backup is
being restored to the remote SRDF devices, SRDF copies the restored data to the local site. This parallel restore
capability provides the DBA with faster accessibility to remote backups and shortens recovery times.
Traditional methods for copy data management using scripts do not offer application-consistent replication for
Microsoft SQL Server. Although VMAX HYPERMAX OS Consistency Assist can provide point-in-time snapshots
for testing, development, reporting, and quick restore of a production environment, snapshots to be used for
backup and full recovery of the database must integrate with Microsoft Virtual Device Interface (SQL Server VDI)
to create SQL Server consistent copies, including data and log devices. EMC AppSync offers tightly integrated
copy data management for SQL Server running on VMAX All Flash.
AppSync provides simple, automated, and self-service copy management automation for Microsoft SQL Server
that runs on VMAX All Flash. These are the steps to use AppSync for Microsoft SQL Server with SnapVX on
VMAX All Flash:
1. Configure the AppSync server.
2. Run VMAX All Flash storage discovery and provisioning for database copies.
3. Run host agent registration and discovery of Microsoft SQL Server databases.
4. Select or configure service plans and subscribe the databases to the plan.
5. Schedule the copies and use them for repurposing or restore.
Refer to the Dell EMC AppSync User and Administration Guide for more details about AppSync.
Configuring the AppSync Server
The AppSync server can be installed on a supported Windows environment. Refer to
https://elabnavigator.emc.com/eln/extendedSupport for supported environments. After the AppSync server is
configured, it can be accessed from any web browser, as shown in Figure 2.
Figure 2.
EMC AppSync Dashboard
Discovering and provisioning VMAX All Flash storage for database copies
AppSync server discovers a VMAX All Flash array when you register a pre-configured SMI-S provider. As part of
the discovery process, AppSync registers devices that are not associated with existing masking views as available
storage for database copies. The discovery process can be run periodically or on demand to add new devices to
be used for copy management. Figure 3 shows the storage discovery and provisioning for copy management.
Figure 3.
AppSync storage discovery and provisioning for database copies
Registering the host agent and discovering Microsoft SQL Server databases
AppSync server enables push installation of Microsoft Windows host agents using the Windows administrator
account credentials, as shown in Figure 4.
Figure 4.
Host agent registration
After the host is registered, discover Microsoft SQL Server databases using Windows or SQL Server credentials,
as shown in Figure 5.
Figure 5.
SQL Server database discovery
Selecting or configuring service plans
AppSync comes with pre-configured service plans for copy management. The Bronze service plan provides local
replication, the Silver service plan provides remote replication, and the Gold service plan provides both local and
remote replication capabilities with default settings. As shown in Figure 6, these service plans can also be
customized using existing service plans as a template to meet the needs for the copy data management for
Microsoft SQL Server databases. Customized service plans can specify the list of tasks associated with the plan,
the recurrence pattern, and the schedule. Once the service plan is configured, you can subscribe SQL Server
databases to the plans for copy management.
Figure 6.
Service plan configuration and customization
Scheduling database copies and using them for repurposing
You can create database copies on demand or schedule them for automated copy management, as shown in Figure
7. Once the copies are created, they can be mounted for test/dev/reporting or used to restore the source
database.
Figure 7.
Scheduling database copies
SQL SERVER TEST CASES ON VMAX ALL FLASH
We ran four tests to demonstrate the capabilities of the system using simultaneous OLTP workloads. For all these
OLTP tests, a standard benchmarking tool was used to drive OLTP workloads from SQL Server instances.
While the workloads ran, we collected database transaction rates, Windows Server performance counters using
PerfMon, and VMAX performance data; the results were then analyzed and charted.
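As a hedged illustration of the host-side collection, the following PowerShell sketch logs the PerfMon counters referenced in these tests to a binary log file. The counter paths assume a default SQL Server instance (named instances use an MSSQL$<instance> prefix), and the output path is a hypothetical example.

# Sketch: log the SQL Server and disk counters collected during the test runs.
# Counter paths assume a default instance; the output path is an example.
$counters = '\SQLServer:SQL Statistics\Batch Requests/sec',
            '\SQLServer:Databases(_Total)\Transactions/sec',
            '\PhysicalDisk(*)\Avg. Disk sec/Read',
            '\PhysicalDisk(*)\Avg. Disk sec/Write'

Get-Counter -Counter $counters -SampleInterval 15 -Continuous |
    Export-Counter -Path 'C:\PerfLogs\sql_oltp_run.blg' -FileFormat BLG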
Our first two tests demonstrate a minimal performance configuration of a single V-Brick 850FX with 32 SSD
drives. The results show how, in this small configuration, multiple SQL Server database workloads can achieve
sustained high performance. We used the following SQL Server use cases:

Use Case 1 – Single SQL Server database OLTP test. The workload shows sustained performance of near
70k database IOPS and 0.3 ms database read latency with a transaction rate of 1.3 M per minute in this
single V-Brick configuration.

Use Case 2 – Multiple SQL Server databases OLTP tests. Consolidated multi-tenant online transaction
processing (OLTP) workloads benefit from the consistent high performance of VMAX All Flash. The workload
shows sustained performance of 160k+ total database IOPS and sub-millisecond database read latency, with a total of
2.8 M transactions per minute in the same single V-Brick configuration. While servicing multiple databases,
mission critical databases continue to maintain the SQL Server batch request/second rate at low latency.
The next two tests demonstrate how the VMAX All Flash ACE can efficiently manage capacity utilization for SQL
Server databases with no overhead on production database environments and minimal impact on overall system
performance. We used the following SQL Server use cases:

Use Case 3 – Single SQL Server database OLTP test with VMAX compression. This use case shows the
storage efficiency achieved when using VMAX adaptive compression with minimal performance impact to the
production environments.

Use Case 4 – Single SQL Server database OLTP test with VMAX and SQL Server compression. This
use case shows how VMAX compression can alleviate the CPU overhead and performance impact of SQL
Server compression while improving storage efficiency and maintaining the production database performance.
TEST ENVIRONMENT
Hardware and software configuration
Figure 8 shows the test environment. The environment consists of multiple Cisco C240-M3 servers with slightly
different server-side RAM and CPU power to service different application tiers. Each host uses two dual-port host
bus adapters (HBAs) for a total of four initiators per server connected to the SAN switches. The database used an
8 KB block size and a very small buffer pool to generate as many I/Os as possible to test the VMAX All Flash
storage capabilities.
Figure 8.
Test configuration
The storage system for All Flash testing (Test cases 1 and 2) is a VMAX 850FX, 1 V-Brick, with 32 SSD drives in
a RAID5 configuration. The VMAX storage uses the HYPERMAX OS Q1 2016 release, which includes the
FlashBoost feature.
The storage system for All Flash compression testing (Test cases 3 and 4) is a VMAX 250FX, 1 V-Brick, with 32 x
SSD drives in a RAID5 configuration. The VMAX storage uses the HYPERMAX OS Q3 2016 release, which
includes the compression feature. Table 1 shows a list of the hardware and software components used in these
tests.
Table 1.
Hardware and software components

Category                                   | Type                                                 | Quantity/Size                                              | Version/Release
Storage system for All Flash OLTP testing | Dell EMC VMAX 850F                                   | 1 x V-Brick, 256 GB usable cache, 32 x SSD drives in RAID5 | HYPERMAX OS 5977 based on Q1 2016 release
Storage system for Compression            | Dell EMC VMAX 250F                                   | 1 x V-Brick, 256 GB usable cache, 32 x SSD drives in RAID5 | HYPERMAX OS 5977 based on Q3 2016 release
Database servers                          | UCS C240-M3                                          | 3 standalone servers                                       | Windows Server 2012
SQL Servers                               | Multiple SQL Server instances on stand-alone servers | 3 SQL Servers                                              | Microsoft SQL Server 2014
TEST CASES
TEST 1: SINGLE SQL SERVER DATABASE OLTP TEST
Test motivation
This test shows a scenario with SQL Server database OLTP workload running on VMAX with consistent low
latency and a high transaction rate.
Test configuration
We ran a single SQL Server database OLTP workload on a 128 GB C240-M3 server with an SQL buffer pool size of 4 GB.
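One common way to constrain the buffer pool to this size is to cap SQL Server memory before the run. The following is an illustrative sketch only, not necessarily how the test environment was configured; the instance name is a placeholder, and the SqlServer PowerShell module (which provides Invoke-Sqlcmd) is assumed to be installed:
# Cap SQL Server memory at roughly 4 GB so that most of the workload I/O is driven to storage
$tsql = "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; " +
        "EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;"
Invoke-Sqlcmd -ServerInstance 'SQLHOST01' -Query $tsql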
Test results
As the test results in Figure 9 show, based on the PerfMon I/O statistics and SQL Server batch requests/second, SQL Server completed 21,000 batch requests/second with an overall rate of 1.32 M transactions/minute (TPM). The single V-Brick VMAX All Flash array achieved average cache read hits of 75 percent with I/O latency of 0.3 ms (for both reads and writes).
Figure 9.
Test 1 – Single SQL Server Database OLTP Test
We confirmed the numbers by comparing them to the VMAX storage statistics. (Consolidated charts for the runs
are shown in Test 2.)
Test conclusion
VMAX All Flash continued to serve the application at sub-millisecond latency, and the total IOPS remained very
high (over 136k). This provided consistent SQL Server performance with less than 1 ms read latency.
TEST 2: MULTIPLE SQL SERVER DATABASE OLTP TESTS
Test motivation
This test shows a scenario where the VMAX All Flash can service multiple SQL Server databases, driving very
high IOPS at predictable low latencies and maintaining performance levels for mission critical applications.
Test configuration
We ran multiple independent SQL Server database OLTP workloads on multiple SQL Servers with an SQL buffer
pool size of 4 GB. All these servers had different CPU power and server memory to demonstrate a multi-tenant
scenario in which each tenant has diverse workloads and I/O profiles.
We followed these steps:
1. Ran a single SQL Server database OLTP workload on a high performance server.
2. Added a second important SQL Server database on a medium performance server.
3. Added a third SQL Server reporting database on a low performance server.
SQL Server PerfMon and VMAX storage performance details were collected during all of the tests.
Test results
As the workloads were added, the single V-Brick VMAX All Flash Array experienced different mixes of host IOPS
and serviced all the workloads with latencies consistently lower than 1 ms, as shown in Figure 10. The overall
system IOPS and SQL Server transaction rates increased.
[Charts: SQL Server database read latency (ms) and write latency (ms) over time for one, two, and three database workloads]
Figure 10. Test 2 – Multiple SQL Server databases OLTP I/O latencies
In a consolidated multi-tenant environment, all SQL Server databases shared storage resources. As the workload increased, the cache read-hit percentage dropped from 75 percent in the single-database configuration to 55 percent in the three-database configuration, as shown in Figure 11.
[Chart: VMAX All Flash read-hit and write-hit percentages for one, two, and three SQL Server databases]
Figure 11. VMAX All Flash cache hits for multiple databases test
As shown in Figure 12 VMAX All Flash serviced more IOPS—68,000 for a single database and as high as
127,000 for three databases—and provided a large overall transaction rate while maintaining fast response times.
[Chart: VMAX All Flash front-end (FE) IOPS for one, two, and three SQL Server databases]
Figure 12. VMAX All Flash host IOPS for multiple databases test
Even as the VMAX All Flash workload increased, the database continued to maintain its performance, as shown by the SQL Server batch requests per second in Figure 13.
[Chart: Mission-critical database SQL Server batch requests/second over time]
Figure 13. SQL Server batch requests for a multi-database test
[Chart: SQL Server transactions per minute for one, two, and three databases]
Figure 14. VMAX All Flash SQL Server transaction rate
As shown in Figure 15, the host IOPS increased more than 1.8x as the additional databases were added.
Figure 15.
Unisphere Host IOPS for multiple database test
The VMAX cache hits for individual databases dropped as multiple database workloads were added, as shown in Figure 16, but FlashBoost in VMAX All Flash helped maintain database performance at low latency.
Figure 16.
Read and write cache hits for multiple database test
Response times were still maintained at the sub-millisecond level, as shown in Figure 17.
Figure 17.
Read and write response times for multiple-database test
Test conclusion
This test showed that VMAX All Flash can achieve high performance and low latency even in a single-engine configuration, delivering more than 150k IOPS at less than 1 ms latency. This makes it an effective consolidation platform for low-latency workloads.
TEST 3: SINGLE SQL SERVER DATABASE OLTP TEST WITH VMAX COMPRESSION
Test motivation
This test highlights the performance, latency, and storage-efficiency effects of VMAX compression on a SQL Server database workload, compared to the same workload without VMAX compression.
Test configuration
We ran a single SQL Server database OLTP workload with an SQL buffer pool size of 4 GB. We measured I/O performance, latencies, and storage efficiency periodically.
We followed these steps:
1. Ran a baseline single SQL Server database OLTP workload without compression.
2. Set VMAX compression on the storage group and ran the same SQL Server workload to observe the effect of the VMAX Adaptive Compression Engine's activity-based compression (a CLI sketch follows these steps).
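As a sketch, compression for step 2 can be enabled on the database storage group with Solutions Enabler, using the same command form shown in the appendix; the array ID and storage group name below are placeholders:
symsg -sid 68 -sg <SQL_SG_name> set -srp SRP_1 -compression
Unisphere for VMAX and the REST API can also be used, as described in the appendix.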
Test results
As shown in Figure 18, SQL Server IOPS with VMAX compression were maintained close to the baseline run. VMAX activity-based compression sustained host IOPS by decompressing the active data set, ensuring that active data is uncompressed on demand.
Figure 18.
SQL Server Host IOPS
As shown in Figure 19, there was some impact on read response time when VMAX compression was enabled, because compressed data that became active took time to be uncompressed. But as shown in Figure 20, there was no impact on write response times, because all writes go to VMAX cache. This result shows that while VMAX performs inline compression, write response time is not affected.
Figure 19.
Host read response times
Figure 20.
Host write response times
As shown in Figure 21, VMAX activity-based compression (ABC) started decompressing the highly active data set to improve response times. The compression ratio on the SQL Server storage group declined, but VMAX maintained compressibility of the less active data, ensuring high performance with excellent storage capacity efficiency. The SQL Server storage group had a compression ratio of 2.6:1 at the beginning of the workload, which dropped to 1.3:1 because of the activity. This result shows that storage efficiency was preserved for the less active data while the highly active data set was kept uncompressed to meet application SLAs.
Figure 21.
VMAX compression on SQL Server storage group
Test conclusion
This test showed the effect of VMAX activity-based compression on a SQL Server OLTP workload. While improving storage efficiency, the VMAX Adaptive Compression Engine maintained SQL Server performance with minimal overhead. Writes to the compressed data set continued to be compressed inline, while the data set was decompressed only where needed for performance, maintaining application performance transparently.
TEST 4: SINGLE SQL SERVER DATABASE OLTP TEST WITH VMAX AND SQL SERVER COMPRESSION
Test motivation
This test highlights the impact of SQL Server compression on application performance and illustrates how VMAX storage-based compression helps alleviate that impact.
Test configuration
We configured SQL Server compression on the largest table used in the OLTP test and ran the workload. Similarly, we set VMAX compression on the entire storage group associated with the SQL Server database and ran the identical workload.
We followed these steps:
1. Set up VMAX compression on the OLTP database storage group and ran the test. Collected PerfMon statistics for host IOPS, latencies, and SQL Server batch requests per second.
2. Identified the largest table in the OLTP test case and configured SQL Server compression.
3. Ran SQL Server row-level compression on the table and noted the storage efficiency (see the sketch below).
4. Ran the OLTP workload on the configuration with the compressed table. Collected PerfMon statistics for host IOPS, latencies, and SQL Server batch requests per second.
We collected SQL Server PerfMon and VMAX storage performance details during all of the tests.
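As a sketch of steps 2 and 3, row compression can be estimated and enabled with T-SQL issued from PowerShell. The instance, database, and table names below are placeholders, and the SqlServer PowerShell module (which provides Invoke-Sqlcmd) is assumed to be installed:
# Estimate row-compression savings for the hypothetical largest OLTP table, then rebuild it compressed
$estimate = Invoke-Sqlcmd -ServerInstance 'SQLHOST01' -Database 'OLTPDB' -Query "EXEC sp_estimate_data_compression_savings 'dbo', 'OrderLines', NULL, NULL, 'ROW';"
$estimate | Format-Table
Invoke-Sqlcmd -ServerInstance 'SQLHOST01' -Database 'OLTPDB' -Query "ALTER TABLE dbo.OrderLines REBUILD WITH (DATA_COMPRESSION = ROW);"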
Test results
As shown in Figure 22, SQL Server batch requests/second were higher with SQL Server compression than with VMAX compression on the storage group. With host-based compression, SQL Server reads more data into the host server cache, improving locality of reference for the compressed table.
[Chart: SQL Server batch requests/second with SQL Server compression on the largest OLTP table versus VMAX compression on the storage group]
Figure 22. SQL Server batch requests/second
While SQL Server batch requests/second were higher with SQL Server compression, as shown in Figure 23, SQL Server processor time was near 100 percent because the host CPU was busy compressing and decompressing the data set. Thus, to achieve a 20 percent gain in SQL Server batch requests/second, host CPU utilization was 40 percent higher, whereas VMAX compression put no additional overhead on the host CPU.
Figure 23.
SQL Server processor time
Test conclusion
This test showed the overhead of SQL Server compression on a host server. Application performance may improve because of higher locality of reference for certain objects, but at the expense of the CPU processing needed to compress and decompress those objects. VMAX compression is activity-based and application-agnostic, so there is no impact on application performance. VMAX compression can also be set at the entire storage group level, and compressibility is determined when I/O takes place, making it very efficient.
CONCLUSION
VMAX All Flash is well suited to run mission-critical and high-performance SQL Server workloads while providing
data protection, data services, resiliency, and availability.
The tests described in this paper demonstrate the ability of a single V-Brick VMAX 850FX with 32 x SSD to deliver
performance values, as shown in Table 2.
Table 2. Test result summary

Databases in the test | Database 1 TPM | Database 2 TPM | Database 3 TPM | Total TPM | Average Read Hits (%) | Average Write Hits (%) | VMAX All Flash Host IOPS
1 DB | 1,362,108 | n/a | n/a | 1,362,108 | 75 | 100 | 68,000
2 DBs | 1,363,320 | 842,478 | n/a | 2,205,798 | 60 | 100 | 104,000
3 DBs | 1,403,436 | 878,052 | 539,808 | 2,821,296 | 55 | 100 | 127,000

(TPM = SQL Server transactions per minute)
The key benefits of running Microsoft SQL Server on VMAX All Flash are:
• Scale—VMAX provides a flexible and scalable storage design that can start as low as 16 SSDs per V-Brick (IOPS optimized) or 32 SSDs per V-Brick (bandwidth optimized) and scale up to hundreds of SSD drives.
• Write cache—VMAX cache eliminates flash-related write latency delays, which can affect log writes during commits and write rates during batch or database file initialization.
• FlashBoost—VMAX FlashBoost bypasses cache to make application reads from flash drives faster while enabling read-cache hits for subsequent reads to nearby SQL Server blocks.
• CPU power—VMAX All Flash 450 or 850 systems can be configured with various CPU types and numbers of cores. In each system, the cores are pooled together to dynamically service I/O, regardless of the port they come from. The result is the best use of the overall system CPU power to match application requirements.
• Connectivity—VMAX provides a flexible choice of connectivity and protocols, including FC, iSCSI, and eNAS (Windows Server 2012 SMB 3.0).
• Data integrity—All data in VMAX is protected with T10-DIF, including local and remote replications.
• Remote replications—VMAX systems provide robust HA/BC/DR support for many topologies and modes, including synchronous and asynchronous replications, Cascaded, Extended Distance Protection, STAR, and active/active with SRDF/Metro.
• Local replications—The capability of creating 256 snapshots for any host device with up to 1,024 linked targets enables high availability and easy creation of database replicas and backups.
• Multitenancy—Using Host I/O limits, resources can be limited for applications that are not performance-critical. This allows better consolidation and sharing of the VMAX All Flash storage system across many applications with varied business priorities.
• Self-service automated copy data management using AppSync—AppSync support for VMAX All Flash offers automated, self-service, service-plan-driven copy management for Microsoft SQL Server. This supports periodic backup, instant restore and recovery, and repurposed copies for testing, development, and reporting, with tighter integration with SQL Server Virtual Device Interface (VDI).
• Adaptive Compression Engine—HYPERMAX OS on VMAX All Flash storage uses the ACE to compress data, efficiently optimize system resources, and balance overall system performance.
REFERENCES
VMAX All Flash Product Guide
VMAX All Flash Spec Sheet
VMAX All Flash with Adaptive Compression Engine
Microsoft SQL Server High Availability using VMAX and SRDF/Metro
Application Workload Control Using Host I/O Limits for SQL Server
VMAX3 eNAS Deployment for Microsoft Windows and SQL Server environments
EMC Symmetrix VMAX Virtual Provisioning space reclamation and application considerations
Deployment best practices for Microsoft SQL Server with VMAX3 SLO Management
AppSync Support Matrix
EMC AppSync User and Administration Guide
EMC Unisphere for VMAX REST API Concepts and Programmer's Guide
APPENDIX
USING UNISPHERE TO CONFIGURE VMAX COMPRESSION
This section shows the configuration of VMAX compression using Unisphere for VMAX.
Checking and setting the configuration of the storage group
Figure 24 shows how to check the compression ratio on the storage group.
Figure 24.
Setting compression ratio on a storage group
Figure 25 shows how to enable/disable compression.
Figure 25.
Modifying storage group compression
USING SOLUTIONS ENABLER CLI TO CONFIGURE VMAX COMPRESSION
This section shows Solutions Enabler CLI examples for managing compression.
Checking the configuration on the storage group
Figure 26 shows the compression ratio of the storage group.
Figure 26.
Checking the storage group configuration
Setting the compression on the storage group
The CLI command to set the compression on a storage group is as follows:
symsg –sid 68 –sg SQL_0122_NC set –srp SRP_1 -compression
Getting the details about device level compression
Figure 27 shows how to find the device level compression details.
Figure 27.
Retrieving device level compression details
USING VMAX REST API TO PROVISION STORAGE AND MONITOR PERFORMANCE
Introduction
The EMC Unisphere for VMAX REST (Representational State Transfer) API allows for accessing performance
data, accessing configuration data, and performing storage provisioning operations for VMAX through robust
APIs. These APIs can be used in any of the programming environments that support standard REST clients such
as web browsers and programming platforms that can issue HTTP requests. The API has predictable, resource-oriented URLs and uses HTTP response codes to indicate API errors. The Unisphere for VMAX REST API supports both XML and JavaScript Object Notation (JSON). The Invoke-RestMethod cmdlet in PowerShell provides a convenient way to use the REST API to configure and manage VMAX in a Windows environment.
REST model
REST stands for REpresentational State Transfer. REST is a web-standards-based architecture that uses the HTTP protocol. With REST, every component is a resource that is accessed through a common interface using standard HTTP methods. In a REST architecture, a REST server simply provides access to resources, while the REST client accesses and modifies them. Each resource is identified by a URI or global ID. REST uses various representations for a resource, such as text, JSON, and XML; JSON is the most popular representation.
The following HTTP methods are commonly used in REST-based architectures. Table 3 shows their function and which type of Unisphere user can use them.
Table 3. HTTP methods used in REST based architecture

HTTP operation | Description | Access to method
PUT | Modify resource | Administrator and Storage Administrator
GET | Retrieve state of resource | All users (provides read-only access to a resource)
POST | Create a new resource | Administrator, Storage Administrator, Performance Monitor (for Performance URLs only)
DELETE | Delete resource | Administrator and Storage Administrator
Transport protocol
REST API uses the Hypertext Transfer Protocol Secure (HTTPS) version 1.1 as the transport for API requests.
For POST requests, method arguments are passed in the HTTP Request message body. API methods return
standard HTTP status codes and result content in the HTTP Response message body.
Unisphere for VMAX REST API supports two media types:
• application/xml—Allows marshalling/unmarshalling using XML
• application/json—Allows marshalling/unmarshalling using JSON
REST support in PowerShell
The Invoke-RestMethod cmdlet sends HTTP and HTTPS requests to REST web services that return richly structured data. Windows PowerShell formats the response based on the data type. For an RSS or ATOM feed,
Windows PowerShell returns the Item or Entry XML nodes.
For JavaScript Object Notation (JSON) or XML, Windows PowerShell converts (or deserializes) the content into
objects. This cmdlet was introduced in Windows PowerShell 3.0.
Figure 28.
Using REST API to manage VMAX
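As a minimal sketch of the cmdlet in this context, the following GET request lists the storage groups on an array using the same URL layout shown later in this appendix. The Unisphere address, credentials, and array ID are placeholders, and the exact response fields can vary by Unisphere version:
# Build a Basic authentication header (the httpheaders function later in this appendix does the same)
$pair = 'smc:smc'
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($pair))
$headers = @{ Authorization = "Basic $token"; 'content-type' = 'application/json' }
# GET returns JSON that PowerShell deserializes into objects
$uri = 'https://10.108.244.19:8443/univmax/restapi/83/sloprovisioning/symmetrix/<VMAX ID>/storagegroup'
$sgList = Invoke-RestMethod -Method Get -Uri $uri -Headers $headers
$sgList.storageGroupId   # field name may vary by Unisphere version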
REST API documentation for VMAX
REST API documentation can be downloaded from the host running Unisphere for VMAX. Point the browser to https://{UNIVMAX_IP}:{UNIVMAX_PORT}/univmax/restapi/docs, where UNIVMAX_IP is the IP address and UNIVMAX_PORT is the port of the host running Unisphere for VMAX. The default UNIVMAX_PORT number is 8443. For example, https://10.108.244.19:8443/univmax/restapi/docs. You must enter valid Unisphere credentials to access the documentation.
Authentication and security
Before using the Unisphere for VMAX REST API, user authorization must be assigned for each VMAX array that
the user can access.
Procedure
1. Log in to Unisphere for VMAX.
2. From the system selector, select All Symmetrix.
3. Select Home > Administration > Security > Authorized Users and Groups to open the Authorized Users and Groups list view.
4. Click Add to open the Add Authorization Rule dialog box.
5. Select the Roles tab. Create a user login profile for each VMAX array to be accessed by the user, and assign them to any of the following roles:
o Administrator—Can initiate GET, POST, PUT, and DELETE methods.
o Storage Administrator—Can initiate GET, POST, PUT, and DELETE methods.
o Performance Monitor—Can initiate GET and POST (for Performance URLs only) methods.
o Monitor—Can initiate GET methods.
o Auditor—Can initiate GET methods.
o Security Administrator—Can initiate GET methods.
Client authentication
REST API uses HTTP Basic Access Authentication to authenticate API clients. HTTP Basic Access Authentication
uses static headers, requiring no handshakes. Users of the REST API are assigned user credentials for
associated VMAX arrays within Unisphere for VMAX. A username and password is supplied with every request to
REST API in the “Authorization” header, as specified in RFC 2617. Every request to REST API is authenticated
and authorized.
The following code block is an example of how to create a header using a Unisphere username and password:
function httpheaders ([string]$username, [string]$upassword)
{
    # Base64-encode "username:password" for HTTP Basic authentication
    $auth = $username + ':' + $upassword
    $Encoded = [System.Text.Encoding]::UTF8.GetBytes($auth)
    $EncodedPassword = [System.Convert]::ToBase64String($Encoded)
    # Build the common headers used by all REST calls in this appendix
    $headers = @{ }
    $headers.Add("cache-control", "no-cache")
    $headers.Add("Authorization", "Basic $($EncodedPassword)")
    $headers.Add("Server", "Restlet-Framework/2.3.1")
    $headers.Add("content-type", "application/json")
    return $headers
}
Disabling SSL certificate validation
You may need to disable SSL certificate validation for REST API to work. Add the following code to your
PowerShell script to disable SSL certificate validation.
Set-ExecutionPolicy Unrestricted
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
add-type @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy {
public bool CheckValidationResult(
ServicePoint srvPoint, X509Certificate certificate,
WebRequest request, int certificateProblem) {
return true;
}
}
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
Basic storage provisioning operations using REST API
This section describes how REST API can be used to provision and manage storage for a host. Examples here
use PowerShell's Invoke-RestMethod cmdlet, which is supported in PowerShell version 3.0 and higher. Although all payloads are sent in JSON format, no JSON knowledge is required because the ConvertTo-Json cmdlet is used to convert a PowerShell object to a string in JavaScript Object Notation (JSON) format.
Creating Storage Groups
Storage for a host is carved out by creating a storage group of one or more volumes. At a minimum, define these parameters: storage group ID, number of volumes, volume size, and volume capacity unit. Send a payload with this information as data for a POST request to the URL https://<Unisphere IP>:8443/univmax/restapi/83/sloprovisioning/symmetrix/<VMAX ID>/storagegroup. Unisphere returns status code 200 with "success": true in the message body if the storage group was created with the requested volumes.
Example:
$payloadSG = @{
    storageGroupId = "SQL_DATA_SG";
    create_empty_storage_group = $false;
    srpId = "SRP_1";
    emulation = "FBA";
    sloBasedStorageGroupParam = @(
        @{num_of_vols = 4;
          noCompression = $false;
          volumeAttribute = @{capacityUnit = "GB"; volume_size = "100"}
        })
}
$response = Invoke-RestMethod -Method Post -Uri $myuri -Headers $headers -Body ($payloadSG | ConvertTo-Json -Depth 4)
Note: "83" in the URL indicates Unisphere version 8.3. Your code might differ depending on your Unisphere version and whether the feature is unique to a particular version. The parameter noCompression is new in version 8.3 and defines whether compression is enabled for the storage group.
Adding volumes to a storage group
Volumes can be added to an existing storage group by sending a POST request.
$payloadVol = @{
    editStorageGroupActionParam = @{
        expandStorageGroupParam = @{
            num_of_vols = 4;
            emulation = "FBA";
            volumeAttribute = @{
                volume_size = "100";
                capacityUnit = "GB";
            };
            create_new_volumes = $true;
        }
    }
}
$response = Invoke-RestMethod -Method Post -Uri $myuri -Headers $headers -Body ($payloadVol | ConvertTo-Json -Depth 4)
Creating Hosts
To create a host, send a POST request to the following URL:
https://<Unisphere IP>:8443/univmax/restapi/83/sloprovisioning/symmetrix/<VMAX ID>/Host
Make sure that the body of the request contains a unique name for the host ("hostId") and the WWNs of the host ("initiatorId").
$payloadHost = @{hostId = "SQL_Host";
    initiatorId = @('21000024ff3de24c',
                    '21000024ff3de24d')
}
$response = Invoke-RestMethod -Method Post -Uri $myuri -Headers $headers -Body ($payloadHost | ConvertTo-Json -Depth 4)
To find the WWN for the HBA on the host, use the cmdlet Get-InitiatorPort as follows:
PS C:\> (Get-InitiatorPort | ? { $_.ConnectionType -eq "Fibre Channel" }).PortAddress
21000024ff3de24c
21000024ff3de24d
21000024ff3de23e
21000024ff3de23f
Creating port groups
A port group specifies which ports on the VMAX array will be used by the host to access storage. We recommend
that you use at least two FA ports, preferably from different director boards, for high availability and load
balancing.
$payloadPG = @{portGroupId = "SQL_PG";
    symmetrixPortKey = @(@{directorId = 'FA-1D'; portId = '8'},
                         @{directorId = 'FA-2D'; portId = '8'})
}
$response = Invoke-RestMethod -Method Post -Uri $myuri -Headers $headers -Body ($payloadPG | ConvertTo-Json -Depth 4)
Creating masking views
A masking view binds the three entities created above to provide VMAX storage access to a host.
$payloadMV = @{
    maskingViewId = "SQL_MV";
    portGroupSelection = @{
        useExistingPortGroupParam = @{
            portGroupId = "SQL_PG"}
    };
    hostOrHostGroupSelection = @{
        useExistingHostParam = @{
            hostId = "SQL_Host"}
    };
    storageGroupSelection = @{
        useExistingStorageGroupParam = @{
            storageGroupId = "SQL_DATA_SG" }
    }
}
Cmdlet usage
$response = Invoke-RestMethod -Method Post -Uri $myuri -Headers $headers -Body ($payloadMV | ConvertTo-Json -Depth 4)
Creating your own functions
You can create your own functions for using VMAX REST API and use them in various scripts. Here are examples
of header and storage group functions and their usage in scripts.
function createSG([hashtable]$VMAX, [hashtable]$SG)
{
    $group = 'storagegroup'
    $uri = $baseurl + '83/sloprovisioning/symmetrix/' + $VMAX.ID + '/' + $group
    $headers = httpheaders $VMAX.user $VMAX.password
    # Build the storage group creation payload from the supplied parameters
    $payloadSG = @{
        storageGroupId = $SG.ID;
        create_empty_storage_group = $false;
        srpId = $SG.SRP;
        emulation = $SG.emulation;
        sloBasedStorageGroupParam = @(
            @{num_of_vols = $SG.num_vols;
              noCompression = $false;
              volumeAttribute = @{capacityUnit = $SG.capacity_unit; volume_size = $SG.vol_size}
            })
    }
    try
    {
        $response = ""
        $response = Invoke-RestMethod -Method Post -Uri $uri -Headers $headers -Body ($payloadSG | ConvertTo-Json -Depth 4)
        return $response
    }
    catch
    {
        Write-Host StatusCode is $response.StatusCode
        Write-Host $_.Exception
    }
}
Calling your own functions
$baseurl = 'https://' + $U4V + ':' + $port + '/univmax/restapi/'
#Define parameters based on environment and requirements
$VMAX = @{ ID = $vmaxid; user = $username; password = $upassword; U4V = $U4V }
$SG = @{ID = 'TEST_SG'; SRP = 'SRP_1'; emulation = 'FBA'; num_vols = 4; vol_size = '100';
capacity_unit = 'GB'}
# Create a new SG using function and parameters given above
createSG $VMAX $SG
Gathering performance statistics
REST API can be used to gather performance statistics from VMAX. The following example demonstrates how to
collect FA statistics from VMAX. Note that the start and end times need to be in an Epoch time format in
milliseconds.
function FAstats([string]$arrayID, [string]$directorId, [string]$portId, [string]$starttime, [string]$endtime)
{
    $group = 'FEPort'
    $myuri = $baseurl + 'performance/' + $group + '/metrics'
    $headers = httpheaders $username $upassword
    $format = 'Average'
    $metrics = @("IOs", "Reads", "Writes", "ReadResponseTime", "WriteResponseTime", "MBs")
    # Start and end dates must be Epoch (Unix) time in milliseconds
    $payload = @{ symmetrixId = $arrayID; directorId = $directorId; portId = $portId;
        startDate = $starttime; endDate = $endtime; dataFormat = $format; metrics = $metrics; }
    try
    {
        $response = Invoke-RestMethod -Method Post -Uri $myuri -Headers $headers -Body ($payload | ConvertTo-Json) -ContentType 'application/json'
        # Append a header line and the results in CSV format to the log file
        "Stats for Port $directorId : $portId" >> $FAlog
        $response.resultList.result | ConvertTo-Csv >> $FAlog
        return $response
    }
    catch
    {
        Write-Host StatusCode $response.StatusCode
        Write-Host $_.Exception
    }
}
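A hypothetical usage sketch follows; it assumes $baseurl, $username, $upassword, and $FAlog are defined as in the earlier examples, treats the array ID, director, and port as placeholders for your environment, and requires .NET Framework 4.6 or later for ToUnixTimeMilliseconds:
# Collect the last hour of FA statistics; the REST performance API expects Epoch milliseconds
$FAlog = 'C:\temp\FAstats.log'
$endtime = [DateTimeOffset]::Now.ToUnixTimeMilliseconds()
$starttime = [DateTimeOffset]::Now.AddHours(-1).ToUnixTimeMilliseconds()
$stats = FAstats '000197800068' 'FA-1D' '4' "$starttime" "$endtime"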
The output in CSV format is as follows and can be easily imported into Excel. Note that the timestamp is in Epoch (Unix) time, which can be converted to local time using an Epoch/Unix timestamp conversion tool (a PowerShell sketch follows the output).
Stats for Port FA-1D : 4
#TYPE System.Management.Automation.PSCustomObject
"ReadResponseTime","WriteResponseTime","Reads","Writes","IOs","MBs","timestamp"
"7.3137527","3.4454439","11346.403","371.77667","11718.18","526.8554","1477915200000"
"7.0546618","0.9067312","11327.873","428.52335","11756.396","520.11884","1477915500000"
"4.254815","0.8271081","12051.057","391.20667","12442.264","457.80905","1477915800000"
"7.040959","1.2732335","11436.39","249.69333","11686.083","524.96295","1477916100000"
"6.7981377","0.78095454","10309.373","444.88333","10754.257","478.59308","1477916400000"
"7.251244","0.8993413","10257.337","179.03334","10436.37","472.01675","1477916700000"
"5.9142027","0.8543569","10505.04","349.79666","10854.837","460.56274","1477917000000"
"8.049732","1.4058913","10748.014","352.87668","11100.891","522.4847","1477917300000"
"4.2941537","0.7776407","11418.903","436.46335","11855.366","431.1944","1477917600000"
"4.446732","0.83318233","10051.63","208.48334","10260.113","399.6872","1477917900000"