Oracle, SAP, and MS SQL on Vblock System 540

VCE Solutions for Enterprise
Mixed Workload on Vblock
System 540
Solutions Guide
Version 1.0
May 2015
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." VCE MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN
THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Copyright 2016 VCE Company, LLC. All Rights Reserved.
VCE believes the information in this publication is accurate as of its publication date. The information is subject to
change without notice.
Contents
Introduction
  Solution Overview
  Benefits
  Key test results
  Audience
  Feedback
Technology Components
  Vblock System 540
    Storage components
    Compute components
    Networking components
  EMC AppSync
  Enterprise mixed workload
    Oracle Database (11g and 12c)
    Microsoft SQL Server 2014
    SAP Business Suite
  VMware vSphere
    VMware vSphere ESXi
    VMware vSphere vCenter
Architecture Overview
  Physical layout
  Hardware and software components
Design Considerations for Mixed Workloads
  Compute design
  Network design
    IP network
  Storage design
    XtremIO overview
    XtremIO database storage design considerations
    Storage layout for Oracle database
    Storage layout for SQL database
    Storage layout for SAP ERP system
  Application design
    Oracle Database 12c
    Microsoft SQL 2014
    SAP Business Suite
    Key considerations for SAP design
Solution Validation
  Test objective
  Test scenarios
  Test tool and methodology
    Oracle database 12c
    Microsoft SQL 2014
    SAP Business Suite
Test Results
  Enterprise mixed workload performance validation on Vblock 540
    Mixed workload test results
    Vblock System 540 performance summary
  XtremIO storage efficiency analysis with a mixed workload
Conclusion
  Next steps
References
  VCE documentation
  EMC documentation
  VMware documentation
Appendix
  Provisioning design with AppSync
    Provisioning SQL Server database copies with AppSync
    Provisioning non-production Oracle database with AppSync
  Harness the I/O throughput of the non-production workload
    Linux Control Group
    VMware Storage I/O Control
  SLOB configuration parameters
Introduction
Best practices for data center management have been completely rewritten during the transition, first to
server virtualization and then to cloud computing. The previously widespread view that workload isolation
was essential for good performance and manageability has to be discarded to achieve a better return on
IT investment. For an organization to remain financially viable, it is inefficient to implement and maintain
identical, dedicated environments for production, pre-production, staging, quality assurance (QA), and
development (DevOps). The complexity of the application IT landscape also significantly compounds the
financial burden: in the past, it was commonplace for all the major application owners, including enterprise
resource planning (ERP), customer service, and human resource management, to demand multiple
dedicated environments for each business function. At the same time, customers now realize that
designing, planning, testing, verifying, deploying, and maintaining interoperability between infrastructure
components drains IT budgets and resources while adding little value to the business. This is why
customers are rapidly adopting converged infrastructure: they want to buy infrastructure, not build it.
Converged infrastructure (CI) combined with all-flash technology provides a revolutionary new platform for
modernizing mixed-workload and mixed-application best practices for data center management. The
Converged Platform Division (CPD) and Global Solutions Organization have combined their experience in
delivering integrated systems and services with deep workload and application expertise to bring to
market new all-flash converged infrastructure solutions, focusing initially on Oracle, Microsoft, and SAP.
We used EMC's cumulative expertise in converged infrastructure and workload solutions to show that
modern converged systems are capable of running enterprise-class mixed-application workloads with
superior performance and manageability. We invite you to read further to understand our methodology
and results, and we recommend a deeper discussion with your local EMC representatives to see whether
converged platforms are the right choice for your next data center modernization project.
Solution Overview
The goal of this work was to build, test, and document a near-real-life enterprise computing environment
consisting of several well-known business applications, all running on a single converged infrastructure
platform. When dealing with mixed or even individual workloads, presenting IOPS data without latency
data (at both the storage and application levels) can mislead an IT organization seeking to understand the
value and applicability of a solution to its business.
Therefore, our focus was not on producing unrealistic "hero number" IOPS, but rather on deriving key
performance indicators of end-user response times at both the storage and application levels, while also
driving IOPS workloads that meet or exceed what the vast majority of databases require today.
For the converged platform hardware, we chose the VCE™ Vblock® System 540 with Cisco compute and
networking and EMC XtremIO® all-flash persistent storage. Our software platforms consisted of SAP
ECC, Oracle 11g and 12c databases, and the Microsoft SQL Server 2014 relational database
management system. We felt that this combination of hardware and software would be representative of
an environment that many large to very large enterprise customers would find useful in evaluating the
applicability of CI systems.
The business application landscape for the testing environment consisted of:
- A high-frequency online transaction processing (OLTP) application with Oracle using the Silly Little
  Oracle Benchmark (SLOB) tool
- A modern OLTP benchmark simulating a stock trading application, representing a second OLTP
  workload for SQL Server
- ERP hosted on SAP with an Oracle data store, simulating a sell-from-stock business scenario
- A decision support system (DSS) workload accessing an Oracle database
- An online analytical processing (OLAP) workload accessing two SQL Server analysis and reporting
  databases
- Ten development/test database copies for each of the Oracle and SQL Server OLTP databases, and
  five development/test copies of the SAP/Oracle system (25 total copies)
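For clarity, the copy counts in the last bullet add up as follows (a trivial illustrative check; the labels are ours, the figures come from the list above):

```python
# Test/dev database copies described in the landscape above.
copies = {
    "Oracle OLTP": 10,      # ten dev/test copies of the Oracle OLTP database
    "SQL Server OLTP": 10,  # ten dev/test copies of the SQL Server OLTP database
    "SAP/Oracle": 5,        # five dev/test copies of the SAP ERP (Oracle) system
}
total = sum(copies.values())
print(total)  # 25
```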
This landscape is considerably more complex than what most hardware or software companies use for
engineering demonstrations. This guide includes details of all the configurations and settings that we
used. We chose to implement the test this way to have an environment that produced a mixture of
compute, network, and storage demands from different application vendors (SAP, Oracle, and Microsoft)
and from different application uses (OLTP and OLAP).
Knowing that working with mixed workloads increases the demand on all aspects of the platform, we
started the testing by collecting results from each application/vendor and workload type individually. We
then tested the system with two different levels of the combined application workload to calculate the
impact of mixed workloads on the efficiency of the system. The end state of the mixed-application and
mixed-workload landscape represents a large enterprise where multiple copies of production are
available for testing and development, and capacity headroom is available to store the incremental data
generated from those environments. From a performance perspective, the focus was on platform- and
application-level response times, while also tracking the total number of IOPS and GB/s generated.
All creation and mapping of the 25 test/dev copies of the Oracle, Microsoft, and SAP production
databases was performed using XtremIO Virtual Copies. This solution is designed to provide a
high-performance, scalable configuration that accommodates the capacity consumption of the 25
database copies as they are updated.
Benefits
VCE Vblock Systems with all-flash storage provide IT organizations with a single, complete platform to
uniformly support mixed workloads and mixed applications simultaneously, without modifying the
applications themselves or requiring any proprietary application tools to unlock great performance. IT
organizations no longer need to separate workloads with different I/O patterns or different uses
(production versus test) onto separate infrastructure silos. The Vblock 540 can consolidate workloads
with mixed I/O patterns onto a single set of infrastructure, and the reduced cost of configuration, support,
and maintenance lowers the overall total cost of ownership (TCO) of running an enterprise data center.
Key test results
The key results of this solution are:
- A VCE Vblock System 540 converged infrastructure with XtremIO All-Flash Arrays (four 10 TB
  X-Bricks) supporting a mixed workload of production Oracle Database, SQL Server, and SAP Business
  Suite can sustain approximately 230 K IOPS and 3.8 GB/s of bandwidth simultaneously, while
  maintaining excellent response times at both the storage and host levels.
- Production database storage read latencies are below 1 ms and write latencies are below 2 ms with
  Test/Dev, DSS, and OLAP databases running in parallel.
- Production database server read latencies are below 2 ms and write latencies are below 3 ms with
  Test/Dev, DSS, and OLAP databases running in parallel.

Latency performance diagram

- This solution achieves a superior cost-to-performance ratio over traditional copy management
  methods by using AppSync and XtremIO Virtual Copies to create Test/Dev databases. The XtremIO
  inline data-reduction capability greatly reduces the system's storage requirements: the overall
  efficiency ratio is as high as 24:1. The XtremIO volume capacity configured for the mixed workload
  and infrastructure servers is approximately 240 TB, while the actual physical capacity used is as low
  as 10 TB.

Performance parameters diagram

- The mixed Oracle, Microsoft, and SAP workloads (OLTP, OLAP, and Test/Dev) running on the Vblock
  540 were able to scale production IOPS to the same levels as the combined independent workload
  tests while maintaining a low latency of under 1 ms.
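The 24:1 storage-efficiency ratio quoted above follows directly from the provisioned and physical capacities (both figures are taken from the text):

```python
# Storage efficiency: logical capacity provisioned vs. physical capacity consumed.
provisioned_tb = 240  # XtremIO volume capacity configured for the workloads
physical_tb = 10      # physical flash capacity actually used
ratio = provisioned_tb / physical_tb
print(f"efficiency ratio: {ratio:.0f}:1")  # efficiency ratio: 24:1
```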
Audience
This solution guide is intended for infrastructure architects, as well as database and system
administrators who are interested in building an environment capable of supporting mixed Oracle, SQL
Server, and SAP enterprise workloads.
Feedback
To suggest changes and provide feedback on this document, send an e-mail message to
[email protected]. Please include the title of the document, the name of the section
to which your feedback applies, and your comments.
Technology Components
This solution relies on the following four technology components: Vblock® System 540, EMC AppSync®,
enterprise mixed workload applications, and VMware vSphere.
Vblock System 540
Vblock System 540 is an industry-leading converged infrastructure that incorporates an XtremIO all-flash
storage array to enable delivery of more than a million IOPS.
Storage components
XtremIO is a 100 percent flash-based storage array that was created for maximum performance,
scalability, and ease of use. The product includes inline data reduction, wear leveling, write abatement,
thin provisioning, snapshots, volume clones, and data protection. The product architecture addresses all
the requirements for flash-based storage, including longevity of the flash media and a lower effective flash
capacity cost.
To support enterprise computing, XtremIO is integrated with other technologies. For example,
communication between storage devices and VMware vSphere ESXi hosts is enabled with VMware
vSphere Storage APIs – Array Integration (VAAI). Resiliency comes from Fibre Channel (FC)
connectivity, flash-specific dual-parity data protection, and storage presentation over the iSCSI protocol.
Compute components
Vblock System 540 uses Cisco Unified Computing System (UCS) blade enclosures, interconnects, and
blade servers.
The UCS data center platform combines x86-architecture blade and rack servers with networking and
storage access in a single system. Innovations in the platform include a standards-based, unified network
fabric; a Cisco Virtual Interface Card (VIC); and Cisco UCS Extended Memory Technology. A wire-once
architecture with a self-aware, self-integrating, intelligent infrastructure eliminates the need for manually
assembling components into systems.
Cisco UCS B-Series 2-socket blade servers deliver optimized performance to a wide range of workloads.
Based on Intel Xeon processor E7 and E5 product families and designed for virtualized applications,
these servers deliver fast performance and reduce expense by integrating systems management and
converging network fabrics.
Networking components
The networking components in the Vblock System 540 include Cisco Nexus 5548UP switches, fabric
interconnects, and Cisco Nexus 3064-T Ethernet switches, as shown in the following diagram.
Vblock System 540 networking components (Four 10 TB X-Bricks)
A pair of Cisco Nexus 5548UP switches provides 10 GbE connectivity to the Vblock System 540
components as well as connectivity to the external network through the customer's core network. A pair of
Cisco Nexus 3064-T switches connects the Advanced Management Pod (AMP) to the external network,
supporting the Vblock System management infrastructure with redundancy.
The Cisco Nexus 5548UP switches provide 10 GbE connectivity as follows:
- Between the Vblock System internal components
- From those internal components to the AMP
- From the internal components to the external network
EMC AppSync
AppSync offers a simple, SLA-driven, self-service approach for protecting, restoring, and cloning critical
Microsoft and Oracle applications and VMware environments. After defining service plans (such as Gold,
Silver, and Bronze), application owners can protect, restore, and clone production data quickly with
item-level granularity by using the underlying EMC replication technologies. AppSync also provides an
application protection monitoring service that generates alerts when the SLAs are not met.
AppSync supports the following applications and storage arrays:
- Applications — Oracle, Microsoft SQL Server, Microsoft Exchange, VMware VMFS and NFS
  datastores, and file systems
- EMC storage arrays — VMAX2™, VMAX3™, VNX® (Block), VNX (File), VNXe®, XtremIO, and EMC
  RecoverPoint®
Enterprise mixed workload
This solution tests and validates the ability of a single Vblock 540 all-flash system to sustain both mixed
applications (Oracle, Microsoft, and SAP) and mixed workloads (OLTP, OLAP, and Test/Dev) on a single
platform simultaneously. We measured IOPS, GB/s, latency, and array space efficiency running on a
Vblock with XtremIO or VMAX, and documented how we sized and configured the system and how we
utilized array features such as copy services (via AppSync) and cache (VMAX) to optimize the workloads.
Oracle Database (11g and 12c)
Oracle Database Enterprise Edition delivers performance, scalability, security, and reliability on a choice
of clustered or single servers running Windows, Linux, or UNIX. It provides comprehensive features for
transaction processing, business intelligence, and content management applications. This solution
implements many Oracle Database features, including RAC and Automatic Storage Management (ASM).
In Oracle 12c R1, Oracle ASM and Oracle Clusterware have been integrated into the Oracle Grid
Infrastructure. This provides the cluster and storage services required to run Oracle RAC databases.
Oracle ASM is also extended to store Oracle Cluster Registry (OCR) and voting disks.
Microsoft SQL Server 2014
Microsoft SQL Server 2014 builds on the mission-critical capabilities delivered in the prior release by
providing breakthrough performance, availability, and manageability for your mission-critical applications.
SQL Server 2014 delivers new in-memory capabilities built into the core database for OLTP and data
warehousing, which complement existing in-memory data warehousing and BI capabilities for the most
comprehensive in-memory database solution on the market.
SQL Server 2014 also provides a new disaster recovery/backup capability and takes advantage of
Windows Server 2012 and Windows Server 2012 R2 capabilities to give you unparalleled scalability for
your database application in a physical or virtual environment.
SAP Business Suite
SAP Business Suite is a bundle of business applications that provide integration of information and
processes, collaboration, industry-specific functionality, and scalability. SAP Business Suite is based on
SAP's NetWeaver technology platform. SAP Business Suite 7 includes the following components:
- SAP ERP 6.0 (Enterprise Resource Planning)
- SAP CRM 7.0 (Customer Relationship Management)
- SAP SRM 7.0 (Supplier Relationship Management)
- SAP SCM 7.0 (Supply Chain Management)
- SAP PLM 7.0 (Product Lifecycle Management)
We used SAP ERP 6.0 in the test environment for this solution.
VMware vSphere
VMware vSphere is the most widely adopted virtualization platform in the world. The technology
increases server utilization so that a firm can consolidate its servers and spend less on hardware,
administration, energy, and floor space. The success of vSphere reflects the ability of its installations to
respond to user requests reliably while giving administrators the tools to respond to changing needs.
The components of particular interest in this solution are vSphere ESXi and vCenter.
VMware vSphere ESXi
VMware vSphere ESXi is a bare-metal hypervisor. It installs directly on a physical server and partitions
that server into multiple virtual machines. The phrase ESXi host refers to the physical server.
vSphere ESXi hosts and their resources are pooled into clusters that contain the CPU, memory,
network, and storage resources available for allocation to virtual machines. In vSphere 6.0, clusters
scale up to 64 hosts and can support thousands of virtual machines.
VMware vSphere vCenter
VMware vCenter Server is management software that runs on a virtual or physical server to oversee
multiple ESXi hypervisors as a single cluster. An administrator can interact directly with vCenter Server or
use the vSphere Web Client to manage virtual machines from a browser anywhere in the world. For
example, the administrator can capture the detailed blueprint of a known, validated configuration,
including networking, storage, and security settings, and then deploy that blueprint to multiple ESXi
hosts.
Architecture Overview
The simplicity of the Vblock 540 system allows you to design a mixed-workload, mixed-application
solution that can be broken down into separate, modular layers that function well together.
The logical architecture of the solution's applications is composed of the following layers: the User Layer,
the Application Layer, and the Infrastructure Layer, as shown in the following figure.
System architecture diagram
Physical layout
At a minimum, this solution requires a single-cabinet Vblock System 540.
The system consists of a dedicated three-chassis, 24-blade Cisco UCS environment used for the
application infrastructure. Storage for infrastructure servers is hosted on the same XtremIO array (with
four X-Bricks).
Storage configuration
This environment, which is designed for mixed enterprise applications, includes the following
characteristics:
- Each of the three vSphere clusters supports a single application.
- The clusters run on 15 Cisco UCS B-Series blade servers.
- One vCenter server manages the infrastructure, as shown in the following figure:
VMware vCenter instances
Hardware and software components
The following table lists the hardware used in the validation test environment:

Hardware used in the validation test environment

Layer    | Hardware                                | Quantity
Compute  | Cisco UCS B200 M3 Blade Server          | 15
Network  | Cisco UCS 6248UP Fabric Interconnect    | 2
Network  | Cisco MDS 9148 Fibre Channel Switch     | 2
Storage  | EMC XtremIO Storage System (4 X-Bricks) | 1
The following table lists the software used in the validation test environment:

Software used in the validation test environment

Software                                  | Version
Oracle Database                           | 12c R1 Enterprise Edition 12.1.0.2
Oracle Enterprise Linux                   | OEL 6.5
Oracle Grid Infrastructure                | 12c R1 Enterprise Edition 12.1.0.2
Silly Little Oracle Benchmark (SLOB)      | 2.2
Microsoft SQL Server                      | 2014 Enterprise Edition SP1
VMware vSphere                            | 6.0
Microsoft Windows Server operating system | 2012 R2 Datacenter edition
SAP ERP                                   | 6.0 EHP5
Oracle Database                           | 11g R2
SUSE Linux                                | 11 SP3
SAP Power Benchmark                       |
Design Considerations for Mixed Workloads
Compute design
The following table details the configuration of the ESXi hosts and virtual machines for the SAP, Oracle,
and SQL Server clusters:
ESXi and virtual machine configuration

Oracle* cluster (4 ESXi hosts; operating system: Oracle Linux 6.5 64-bit)
Virtual machine role                | Quantity | vCPUs | RAM (GB)
Oracle PRD OLTP database server     | 2        | 16    | 40
Oracle OLAP database server         | 2        | 4     | 20
Oracle TST/DEV OLTP database server | 10       | 4     | 20
Oracle load generation server       | 1        | 4     | 8

SQL cluster (4 ESXi hosts; operating system: Windows Server 2012 R2 64-bit)
Virtual machine role                | Quantity | vCPUs | RAM (GB)
SQL PRD OLTP database server        | 1        | 24    | 32
SQL OLAP database server            | 2        | 16    | 128
SQL TST/DEV OLTP database server    | 10       | 4     | 32
SQL client                          | 2        | 4     | 4

SAP* cluster (7 ESXi hosts; operating system: SUSE Linux 11 SP3 64-bit)
Virtual machine role                | Quantity | vCPUs | RAM (GB)
SAP PRD database server             | 1        | 32    | 64
SAP PRD central services server     | 1        | 8     | 32
SAP PRD application server          | 8        | 4     | 16
SAP TST/DEV database server         | 5        | 16    | 32
SAP TST/DEV central services server | 5        | 4     | 16

Note: * A separate ESXi cluster dedicated to the Oracle database is recommended (if the
database license is purchased from Oracle) for licensing and workload segregation.
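As a sanity check on the sizing above, the per-cluster virtual machine counts and aggregate vCPU/RAM demand can be tallied with a short script (the per-role figures are copied from the table; the totals are derived by us, not stated in the guide):

```python
# (vm_quantity, vcpus_per_vm, ram_gb_per_vm) for each VM role, per cluster.
clusters = {
    "Oracle": [(2, 16, 40), (2, 4, 20), (10, 4, 20), (1, 4, 8)],
    "SQL":    [(1, 24, 32), (2, 16, 128), (10, 4, 32), (2, 4, 4)],
    "SAP":    [(1, 32, 64), (1, 8, 32), (8, 4, 16), (5, 16, 32), (5, 4, 16)],
}
for name, vms in clusters.items():
    n_vms = sum(q for q, _, _ in vms)
    vcpus = sum(q * c for q, c, _ in vms)
    ram = sum(q * r for q, _, r in vms)
    print(f"{name}: {n_vms} VMs, {vcpus} vCPUs, {ram} GB RAM")
```

Across the three clusters this works out to 50 virtual machines and 360 configured vCPUs on the 15 blades.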
Network design
IP network
The IP network for this solution was designed as follows:
- The two Cisco Nexus 5548UP switches were configured to provide 10 Gb Ethernet connectivity to the
  SAP, Oracle, and SQL infrastructure.
- Virtual local area networks (VLANs) were used to logically group devices that were on different
  network segments or subnetworks.
- Separate network adapters and networks were used for vMotion and VMkernel management.
- Separate network adapters and networks were used for the Oracle RAC interconnect.
In this solution, the SAP, Oracle, and SQL clusters were deployed on 15 Cisco B200 M3 blades
connected through a vSphere 6.0 distributed switch. vMotion and VMkernel management were deployed
on a vSphere 6.0 standard switch.
For the Oracle production RAC database, we separated the private (interconnect) network to isolate it
from other traffic, and enabled jumbo frames as shown in the following figure:
Oracle network architecture
The following figure shows the SQL server network architecture:
SQL server network architecture
The following figure shows the SAP network architecture:
SAP network architecture
Validation of this solution required four VLANs: three for SAP, Oracle, and SQL customer connectivity, and one for Oracle RAC private connectivity. VLAN information is shown in the following table:

VLAN information

VLAN name              VLAN ID
CUSTOMER-VLAN-SAP      501
CUSTOMER-VLAN-ORACLE   502
CUSTOMER-VLAN-SQL      503
PRIVATE-VLAN-ORACLE    601
Storage area network design
We recommend the following best practices for configuring the SAN:

- Do not use more than 16 paths per device.
- Keep a consistent link speed and duplex across all paths between the host and the XtremIO cluster.
- Separate different I/O across different controllers in a mixed-workload environment. For example, connect hosts running OLTP applications to controller 1, and hosts running OLAP applications to controller 2.
Fabric interconnect configuration
In the Vblock system, data moves from the compute layer through a pair of fabric interconnects to the SAN switches. There are two port channels, one per fabric, each running at 64 Gbps (using all eight FC ports at 8 Gbps). The total aggregate bandwidth available to the XtremIO array from the Cisco UCS is 128 Gbps, which theoretically provides around 11 GB/s of bandwidth.
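The aggregate figures can be sanity-checked with simple arithmetic. The ~11 GB/s estimate assumes each 8 Gbps FC port delivers roughly 700 MB/s of usable payload after 8b/10b encoding and protocol overhead — an assumption for illustration, since the exact per-port figure depends on the workload:

```shell
ports_per_fabric=8
fabrics=2
gbps_per_port=8

# raw line-rate aggregate across both fabrics
total_gbps=$(( ports_per_fabric * fabrics * gbps_per_port ))
echo "raw aggregate: ${total_gbps} Gbps"                   # 128 Gbps

# assumed usable payload per 8 Gb FC port after encoding/protocol overhead
usable_mbps_per_port=700
usable_mbs=$(( ports_per_fabric * fabrics * usable_mbps_per_port ))
echo "usable aggregate: ~$(( usable_mbs / 1000 )) GB/s"    # ~11 GB/s
```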
The following diagram shows the Vblock 540 network connectivity physical architecture:
Vblock 540 network connectivity physical architecture
Before deploying mixed workloads on a Vblock 540 system, ensure that the system is configured for
maximum available performance.
A bandwidth stress test (128 KB, read-only) showed that the read bandwidth of the array peaked at 10 GB/s with the 8 x 2 fabric interconnect ports.
When configuring the number of fabric interconnect ports, ensure that the total bandwidth is adequate; too few fabric interconnect ports often create a performance bottleneck. The following diagram shows the fabric interconnect ports:
Fabric interconnect ports
Storage design
XtremIO overview
XtremIO uses its multi-controller scale-out design and RDMA fabric to maintain all metadata in memory, so performance is consistent and predictable.
An XtremIO best practice is not to mix tempdb and user databases on the same devices, because doing so complicates the use of iCDM. It is acceptable to mix data and log files for a user database on a single device.
With built-in thin provisioning, storage is allocated only when it is needed. This enables you to create
larger LUNs to accommodate future or unexpected growth for databases, without wasting any physical
space on storage.
XtremIO database storage design considerations
Performance is the number one consideration for tier-1 database storage design. XtremIO all-flash arrays
provide industry-leading performance with the easiest provisioning experience of any product on the
market.
With XtremIO, thin provisioning (allocate-on-demand) ensures that a 1 TB database requires less than 1
TB of allocated physical space. You can eliminate operational complexities by allocating only as much
LUN space and virtual file system space as is required, because storage is allocated on demand.
XtremIO Virtual Copies abstract the copy operations as a unique in-memory metadata operation with no
back-end media or network impact. XVC creates instant, high-performance copies of any data set with no
impact on production or other copies. Inline deduplication and compression data services are also applied
to changes written to XVC copies. XtremIO Virtual Copies consume zero space initially. As copies are
modified and updated, they will consume physical storage capacity depending on the change rate of data
after compression and deduplication are applied.
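As a hypothetical illustration of this behavior (the figures below are assumptions for the sketch, not measured values from this solution): a copy of a 1 TB database starts with a zero physical footprint, and only the blocks changed on the copy consume space, after inline compression and deduplication are applied.

```shell
changed_gb=100          # assumed amount of data changed on the copy
data_reduction_ratio=3  # assumed combined dedupe + compression ratio (3:1)

physical_gb=$(( changed_gb / data_reduction_ratio ))
echo "initial physical footprint: 0 GB"
echo "after ${changed_gb} GB of changes: ~${physical_gb} GB physical"  # ~33 GB
```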
Storage layout for Oracle database
The following figure shows the cluster and storage design for Oracle:
Cluster and storage design for Oracle
We created the volumes for the Oracle production database, as shown in the following table:

Volumes for the Oracle production database

Volume name     Volume size  Description
ORA_PRD_DATA1   1 TB         Data files of the Oracle database
ORA_PRD_DATA2   1 TB         Data files of the Oracle database
ORA_PRD_DATA3   1 TB         Data files of the Oracle database
ORA_PRD_DATA4   1 TB         Data files of the Oracle database
ORA_PRD_FRA_1   500 GB       Archived redo log files of the Oracle database
ORA_PRD_FRA_2   500 GB       Archived redo log files of the Oracle database
ORA_PRD_OCR1    10 GB        Voting disk and Oracle Cluster Registry (OCR) files of the database
ORA_PRD_OCR2    10 GB        Voting disk and Oracle Cluster Registry (OCR) files of the database
ORA_PRD_OCR3    10 GB        Voting disk and Oracle Cluster Registry (OCR) files of the database
ORA_REDO_1      20 GB        Online redo log files of the Oracle database
ORA_REDO_2      20 GB        Online redo log files of the Oracle database
ORA_REDO_3      20 GB        Online redo log files of the Oracle database
ORA_REDO_4      20 GB        Online redo log files of the Oracle database
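Summing the volume sizes shows how thin provisioning keeps the logical commitment modest — just over 5 TB of logical capacity is presented per production database, while physical space is only consumed as data actually lands:

```shell
data_gb=$(( 4 * 1024 ))   # four 1 TB DATA volumes
fra_gb=$(( 2 * 500 ))     # two 500 GB FRA volumes
ocr_gb=$(( 3 * 10 ))      # three 10 GB OCR volumes
redo_gb=$(( 4 * 20 ))     # four 20 GB redo volumes

total_gb=$(( data_gb + fra_gb + ocr_gb + redo_gb ))
echo "logical capacity presented: ${total_gb} GB"  # 5206 GB
```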
AppSync automatically mounts the XVC of the following volumes to the Test/Dev and OLAP virtual machines for repurposing:

- ORA_PRD_DATA1
- ORA_PRD_DATA2
- ORA_PRD_DATA3
- ORA_PRD_DATA4
- ORA_REDO_1
- ORA_REDO_2
- ORA_REDO_3
- ORA_REDO_4
In this solution, we deployed 10 Oracle Test/Dev databases and two Oracle DSS databases by
repurposing the XVC of the production databases through EMC AppSync. The EMC AppSync User and
Administration Guide listed in the References section provides the detailed deployment steps.
Storage layout for SQL database
The following figure shows the cluster and storage design for SQL server.
Cluster and storage design for SQL server
We created the volumes for the SQL Server OLTP production and DSS databases, as shown in the following table:

Volumes for the SQL Server OLTP production and DSS databases

Volume name      Volume size  Environment          Description
SQL_PRD_DATA1    1 TB         SQL OLTP production  Data files of the SQL database
SQL_PRD_DATA2    1 TB         SQL OLTP production  Data files of the SQL database
SQL_PRD_DATA3    1 TB         SQL OLTP production  Data files of the SQL database
SQL_PRD_DATA4    1 TB         SQL OLTP production  Data files of the SQL database
SQL_PRD_LOG      800 GB       SQL OLTP production  Log files of the SQL database
SQL_PRD_TEMPDB   2 TB         SQL OLTP production  tempdb files of the SQL database
SQL_DSS_DATA1    1 TB         SQL DSS              Data files of the SQL database
SQL_DSS_DATA2    1 TB         SQL DSS              Data files of the SQL database
SQL_DSS_DATA3    1 TB         SQL DSS              Data files of the SQL database
SQL_DSS_DATA4    1 TB         SQL DSS              Data files of the SQL database
SQL_DSS_LOG      800 GB       SQL DSS              Log files of the SQL database
SQL_DSS_TEMPDB   2 TB         SQL DSS              tempdb files of the SQL database
AppSync automatically mounts the XVC of the following volumes to the Test/Dev virtual machines for repurposing:

- SQL_PRD_DATA1
- SQL_PRD_DATA2
- SQL_PRD_DATA3
- SQL_PRD_DATA4
- SQL_PRD_LOG
We deployed 10 SQL Test/Dev databases by repurposing an XVC of the SQL production databases
through EMC AppSync in this solution. The EMC AppSync User and Administration Guide in the
References section provides the detailed deployment steps.
Storage layout for SAP ERP system
The following figure shows the cluster and storage design for SAP:
Cluster and storage design for SAP
We created the volumes shown in the following table, including the volumes for the SAP central service, database, and application servers of the production system:

SAP storage volumes for the production system

Volume name    Volume size  Description
SAP_CI         2 TB         Operating system of the SAP central service server
SAP_DB_BIN     2 TB         Operating system and binaries of the SAP database server
SAP_DB_DATA1   2 TB         Data files of the SAP database server
SAP_DB_DATA2   2 TB         Data files of the SAP database server
SAP_DB_DATA3   2 TB         Data files of the SAP database server
SAP_DB_DATA4   2 TB         Data files of the SAP database server
SAP_DB_LOG_1   500 GB       Log files of the SAP database server
SAP_DB_LOG_2   500 GB       Log files of the SAP database server
SAP_APPS       2 TB         Operating system of the SAP application servers
To create a crash-consistent XtremIO snapshot of the production system for repurposing (Test/Dev), we created a consistency group containing the following volumes:

- SAP_CI
- SAP_DB_BIN
- SAP_DB_DATA1
- SAP_DB_DATA2
- SAP_DB_DATA3
- SAP_DB_DATA4
- SAP_DB_LOG_1
- SAP_DB_LOG_2
We deployed five SAP Test/Dev systems quickly by repurposing a snapshot of the production databases
in this solution. The EMC XtremIO All Flash solution for SAP document listed in the References section
provides the detailed deployment steps.
Application design
The following section describes the application design and considerations.
Oracle Database 12c
On the Vblock 540, we configured a two-node Oracle 12c RAC database that acted as the production database. Each node of the RAC database ran in a virtual machine on its own ESXi server. We then created ten Test/Dev and two DSS Oracle databases by creating repurposed copies of the production database and mounting them on separate virtual machines using AppSync.
To provision storage for the Oracle production database, we:

1. Created a single VMFS datastore for each XtremIO volume.
2. Created a single virtual disk from each datastore, using the Thick Provision Eager Zeroed disk provisioning type and Independent Persistent mode.
3. Evenly spread the virtual disks across all four VMware SCSI controllers, as shown in the table below.
4. Logged in to the virtual machine and partitioned the disks with an offset of 2,048 sectors, which equals 1 MB.
5. Created Oracle ASM disks on the partitioned disks using the oracleasm utility.
6. Logged in to the ASM instance to create the ASM disk groups.
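The 1 MB alignment in step 4 follows directly from the sector math; the partitioning and ASM commands for steps 4 and 5 are sketched as comments (the device name /dev/sdb and disk label DATA1 are hypothetical examples, not taken from this solution):

```shell
sector_bytes=512
offset_sectors=2048
offset_bytes=$(( offset_sectors * sector_bytes ))
echo "partition offset: ${offset_bytes} bytes"   # 1048576 bytes = 1 MB

# Illustrative commands, run as root inside the guest:
#   parted -s /dev/sdb mklabel gpt mkpart primary 2048s 100%
#   oracleasm createdisk DATA1 /dev/sdb1
```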
Virtual disks evenly spread over four VMware SCSI controllers

VMware datastore  Virtual disk  SCSI controller  ASM disk  ASM disk group               Oracle data files
PROD_OCR1         Disk 2        Controller 0     OCR1      +OCR (normal redundancy)     OCR files and voting disk files
PROD_OCR2         Disk 3        Controller 1     OCR2      +OCR (normal redundancy)     OCR files and voting disk files
PROD_OCR3         Disk 4        Controller 2     OCR3      +OCR (normal redundancy)     OCR files and voting disk files
PROD_DATA1        Disk 5        Controller 0     DATA1     +DATA (external redundancy)  Data files, temp files, control files
PROD_DATA2        Disk 6        Controller 1     DATA2     +DATA (external redundancy)  Data files, temp files, control files
PROD_DATA3        Disk 7        Controller 2     DATA3     +DATA (external redundancy)  Data files, temp files, control files
PROD_DATA4        Disk 8        Controller 3     DATA4     +DATA (external redundancy)  Data files, temp files, control files
PROD_REDO1        Disk 9        Controller 0     REDO1     +REDO (external redundancy)  Online redo log files
PROD_REDO2        Disk 10       Controller 1     REDO2     +REDO (external redundancy)  Online redo log files
PROD_REDO3        Disk 11       Controller 2     REDO3     +REDO (external redundancy)  Online redo log files
PROD_REDO4        Disk 12       Controller 3     REDO4     +REDO (external redundancy)  Online redo log files
PROD_FRA1         Disk 13       Controller 2     FRA1      +FRA (external redundancy)   Archived redo log files
PROD_FRA2         Disk 14       Controller 3     FRA2      +FRA (external redundancy)   Archived redo log files
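The even spread of the eight DATA and REDO disks in the table follows a simple round-robin pattern, which can be expressed as (disk number - 5) mod 4:

```shell
# Round-robin controller assignment for the DATA/REDO virtual disks (Disks 5-12)
for disk in 5 6 7 8 9 10 11 12; do
  controller=$(( (disk - 5) % 4 ))
  echo "Disk ${disk} -> SCSI controller ${controller}"
done
```

This reproduces the table above: Disks 5-8 land on controllers 0-3, then Disks 9-12 wrap around to 0-3 again, so no single virtual SCSI controller becomes a queue-depth bottleneck.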
Microsoft SQL 2014
We designed a SQL Server 2014 stand-alone instance to host a 1 TB production database. We then deployed 10 copies of the SQL Server production database with AppSync, using the XtremIO Virtual Copy (XVC) feature. Each copy was mounted on a separate virtual machine to simulate a typical SQL Server Test/Dev database environment.
We also deployed two SQL Server 2014 stand-alone instances to host two OLAP databases for analysis and reporting in the SQL Server DSS environment.
To achieve the best configuration and maximum performance of the SQL Server production database, we:

- Enabled the -T834 trace flag so that the SQL Server instance uses large pages.
- Granted the Lock Pages in Memory right to the SQL Server service account.
- Pre-allocated data and log files for both the SQL Server production and tempdb databases, to avoid autogrowth during peak times.
- Used multiple data files of equal size, within the same filegroup, for the user database and tempdb.
- Used an NTFS 64 KB allocation unit size when formatting all data and log volumes.
- Set Max Server Memory to limit the maximum memory allocated by SQL Server, so that reserved memory remained available for OS operations.
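The Max Server Memory setting can be sized as total VM RAM minus an OS reserve. The sketch below assumes a 32 GB virtual machine reserving 4 GB for the operating system (illustrative figures, not the values used in this solution); the sqlcmd calls in the comments mirror the settings listed above:

```shell
vm_ram_mb=$(( 32 * 1024 ))
os_reserve_mb=$(( 4 * 1024 ))
max_server_memory_mb=$(( vm_ram_mb - os_reserve_mb ))
echo "max server memory (MB) = ${max_server_memory_mb}"   # 28672

# Illustrative configuration commands:
#   sqlcmd -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
#   sqlcmd -Q "EXEC sp_configure 'max server memory (MB)', 28672; RECONFIGURE;"
# Note: trace flag -T834 is set as a SQL Server startup parameter, not via sp_configure.
```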
Database configuration for SQL Server 2014

Database usage                         Quantity  Size  Filegroups  Files per filegroup  Data LUN size  Log LUN size  Recovery model  Configuration
OLTP - production database             1         1 TB  4           4                    4 x 1 TB       1 x 800 GB    Full            Production database
OLTP - Test/Dev copies                 10        1 TB  4           4                    4 x 1 TB       1 x 800 GB    Simple          Test/Dev databases generated from a production database snapshot in AppSync
OLAP - analysis and reporting          2         1 TB  1           16                   4 x 1 TB       1 x 800 GB    Simple          Decision support system databases used for analysis and reporting
Following best practices, we designed tempdb for both the OLTP and OLAP workload profiles with both capacity and performance in mind:

- For the OLTP workload, we created four 100 GB tempdb data files and one 80 GB log file, and placed them on a separate XtremIO LUN.
- For the OLAP workload, we created eight 100 GB tempdb data files and one 160 GB log file, and placed them on a separate XtremIO LUN.
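This tempdb sizing can be checked against the 2 TB tempdb volumes created earlier:

```shell
oltp_tempdb_gb=$(( 4 * 100 + 80 ))    # four 100 GB data files + one 80 GB log file
olap_tempdb_gb=$(( 8 * 100 + 160 ))   # eight 100 GB data files + one 160 GB log file

echo "OLTP tempdb pre-allocated: ${oltp_tempdb_gb} GB"   # 480 GB
echo "OLAP tempdb pre-allocated: ${olap_tempdb_gb} GB"   # 960 GB
# Both fit comfortably within a 2 TB (2048 GB) tempdb LUN,
# leaving headroom for any residual autogrowth.
```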
SAP Business Suite
The following table lists the file system and logical volume structure of the SAP Oracle database server in this solution:

SAP Oracle database file system and logical volume structure

Volume group    Logical volume   Size (GB)  Mount point
oraclevg        fslv112_64       16         /oracle/<SID>/112_64
oraclevg        fslvoraclient    16         /oracle/client
oraclevg        fslvorastaging   16         /oracle/stage
saparchvg       fslvsaparch      64         /oracle/<SID>/oraarch
saparchvg       fslvsaptrace     64         /oracle/<SID>/saptrace
sapdata1vg      fslvsapdata1     1800       /oracle/<SID>/sapdata1
sapdata2vg      fslvsapdata2     1800       /oracle/<SID>/sapdata2
sapdata3vg      fslvsapdata3     1800       /oracle/<SID>/sapdata3
sapdata4vg      fslvsapdata4     1800       /oracle/<SID>/sapdata4
sapmirrorlogvg  fslvmirrlogA     32         /oracle/<SID>/mirrlogA
sapmirrorlogvg  fslvmirrlogB     32         /oracle/<SID>/mirrlogB
saporalogvg     fslvoriglogA     32         /oracle/<SID>/origlogA
saporalogvg     fslvoriglogB     32         /oracle/<SID>/origlogB
sapvg           fslvusrsap       32         /usr/sap
system          root             30         /
system          swap             16         swap
The following table lists the Oracle configuration for the SAP landscape in this solution:

SAP landscape Oracle configuration

Oracle parameter      Value  Remarks
CPU_COUNT             32
PARALLEL_MAX_SERVERS  320    PARALLEL_MAX_SERVERS = #DB-CPU-cores * 10
PROCESSES             2280   PROCESSES = #ABAP work processes * 2 + #J2EE server processes * <max-connections> + PARALLEL_MAX_SERVERS + 40
SESSIONS              4576   SESSIONS = 2 * PROCESSES
db_cache_size         5G
shared_pool_size      5G
ARCHIVELOG            OFF    If you enable this parameter, run a CRON cleanup script every five minutes to prevent the disk from filling up.
sga_target            20G
sga_max_size          20G    The maximum size is the amount of RAM.
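The derived parameters follow from the formulas in the Remarks column; for example, with the 32 database CPU cores used here:

```shell
db_cpu_cores=32
parallel_max_servers=$(( db_cpu_cores * 10 ))
echo "PARALLEL_MAX_SERVERS = ${parallel_max_servers}"   # 320

# PROCESSES then adds the work-process and J2EE terms on top:
#   PROCESSES = #ABAP_WPs * 2 + #J2EE_procs * <max-connections>
#               + PARALLEL_MAX_SERVERS + 40
# (the ABAP and J2EE process counts for this landscape are not listed here,
#  so that term cannot be reproduced from this document alone)
```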
Key considerations for SAP design
For this solution, we installed and configured the SAP systems in a distributed architecture, with separate virtual machines for the SAP central services, the database, and the application servers. The SAP system we deployed implements these key design features:

- Provision an SAP ERP production system with the following components, each on its own virtual machine:
  – ABAP central services instance
  – Database server
  – Eight application instances
- Provision an SAP ERP Test/Dev system, by repurposing a snapshot of the production system, with the following components:
  – ABAP central services instance
  – Database server
- Configure the SAP database server running Oracle according to the SAP Notes listed in the References section.
- Install and configure SAP patches, parameters, and basis settings according to the SAP installation guide and SAP Notes listed in the References section.
- Configure the SAP update processes (UPD/UP2) on the primary application server and additional server instances.
- Store the SAP shared file systems, /sapmnt/<SID> and /usr/sap/<SID>, on the SAP ASCS server and share them with all the SAP virtual machines of the instance.
- Install VMware Tools and configure the vmxnet3 network adapter.
- Spread database data files across multiple datastores/LUNs.
- Separate logs from data in separate virtual disks.
- Use paravirtualized SCSI (PVSCSI) controllers for database data and log virtual disks.
- Spread the database files across all virtual SCSI controllers.
- Use the Thick Provision Eager Zeroed format for all virtual disks in the SAP virtual machines.
Solution Validation
Test objective
In this solution, we:

1. Validated and measured how different types of workloads generated by a single application can be consolidated on the same Vblock system.
2. Validated how different types of workloads generated by multiple applications can be consolidated on the same Vblock system.
3. Validated and measured the impact of combining XtremIO Virtual Copy (XVC) and AppSync: we were able to provision multiple copies of the production database for non-production workloads with no initial overhead on the array's physical capacity.
Test scenarios
This solution covers five test scenarios that demonstrate the ability of an all-flash Vblock system to sustain both mixed applications (Oracle, Microsoft, and SAP) and mixed workloads (production OLTP, test, development, and OLAP/DSS reporting) simultaneously on the same Vblock system.
We tested multiple combinations of mixed applications and mixed workloads to demonstrate multiple consolidation scenarios. The KPIs we monitored during these tests included:

- Impact on production IOPS as mixed workloads were added (Test/Dev and OLAP/DSS reporting alongside production)
- Impact on production IOPS as mixed applications were added (Oracle, Microsoft, and SAP)
- Production read and write I/O latencies, at the storage array level and the workload level, as mixed workloads and applications were added
Scenario 1: Baseline independent production workloads
In this scenario, we established a performance baseline for each application (Oracle, Microsoft, and SAP)
by running a separate workload in each of their production environments.
Scenario 2: Combined production and Test/Dev workloads for Oracle, SQL, and SAP independently
In this scenario, we provisioned 10 Oracle Test/Dev databases, 10 SQL Server Test/Dev databases, and five SAP Test/Dev systems by creating XtremIO virtual copies of each of their production environments. We then ran the same production workload as in scenario 1 and a Test/Dev workload in parallel for each application separately, to simulate consolidating production and Test/Dev workloads onto the Vblock system.
We capped the IOPS issued by each of the Test/Dev databases and systems. For the Oracle and SAP environments, we used Linux control groups (cgroups) to cap the IOPS; for the SQL Server environments, we used VMware Storage I/O Control. The 10 Oracle Test/Dev databases generated a total of 20,000 IOPS, the 10 SQL Server Test/Dev databases generated a total of 20,000 IOPS, and the five SAP Test/Dev systems generated a total of 60,000 IOPS. The Appendix provides a complete description of how we capped the Test/Dev system IOPS.
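For the Oracle Test/Dev databases, the cap works out to roughly 2,000 IOPS each. A cgroup v1 blkio throttle of that shape is sketched in the comments below (the cgroup name "testdev" and the 8:16 device major:minor number are hypothetical examples, not values from this solution):

```shell
total_iops=20000
databases=10
per_db_iops=$(( total_iops / databases ))
echo "cap per Test/Dev database: ${per_db_iops} IOPS"   # 2000

# Illustrative throttle setup (as root, cgroup v1 blkio controller):
#   mkdir /sys/fs/cgroup/blkio/testdev
#   echo "8:16 2000" > /sys/fs/cgroup/blkio/testdev/blkio.throttle.read_iops_device
#   echo <oracle_pid> > /sys/fs/cgroup/blkio/testdev/tasks
```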
Scenario 3: Combined production, Test/Dev, and DSS reporting/OLAP workloads for Oracle and SQL independently
In this scenario, we provisioned two Oracle DSS databases by creating XtremIO virtual copies of the production database, and two SQL OLAP databases by using the Microsoft SQL OLAP benchmark tool. We ran the production, Test/Dev, and DSS/OLAP workloads in parallel for Oracle and SQL separately on the same Vblock system.
We capped the bandwidth generated by each of the DSS/OLAP databases, using Linux cgroups for Oracle and VMware Storage I/O Control for SQL Server. The two Oracle DSS databases generated a total of around 2 GB/s of read bandwidth, and the two SQL Server OLAP databases generated a total of around 2 GB/s of read bandwidth.
Scenario 4: Full mixed workloads for all applications and environments
In this scenario, we ran the Oracle production, Test/Dev, and DSS workloads; the SQL Server production, Test/Dev, and OLAP workloads; and the SAP production and Test/Dev workloads in parallel, simulating the consolidation of mixed workloads from mixed applications onto the Vblock system.
Scenario 5: Full mixed workloads for all applications and environments with scaled-up production workloads
In this scenario, we increased the production workloads of Oracle, SQL Server, and SAP while simultaneously running the same Oracle Test/Dev and DSS workloads, the same SQL Server Test/Dev and OLAP workloads, and the same SAP Test/Dev workload, simulating an increase in each application's production workload after the mixed workloads were consolidated onto the Vblock system.
Test tool and methodology
The following section describes the tools and methodologies that we used to test this solution.
Oracle Database 12c
Test tool and workload profile
In this solution, we used SLOB to generate random read/write I/O workload to the production and
Test/Dev databases, simulating production and Test/Dev workloads. We also adjusted SLOB to execute
queries that accessed data using full table scans, generating sequential read-only I/O workloads to the
DSS databases, which is the typical data access pattern for a DSS reporting workload.
The following table shows the production database configuration and workload profile.

Production database configuration and workload profile

Profile characteristic   Description
Database type            OLTP
Database size            3 TB
Oracle database          Two-node Oracle 12c R1 RAC database on ASM
Instance configuration   SGA size: 16 GB. Note: Because a larger database cache buffers more data, we configured a very small buffer cache to generate a stable, high physical I/O workload.
Workload profile         SLOB random I/O workload with an 80:20 read/write ratio and SLOB execution think time enabled. See the Appendix for the full list of SLOB configuration parameters used.
Data block size          8 KB
The table below shows the Test/Dev database configuration and workload profile.

Test/Dev database configuration and workload profile

Profile characteristic   Description
Database type            OLTP
Database size            3 TB
Oracle database          Single-instance Oracle 12c R1 database on ASM, provisioned from an XtremIO virtual copy of the production database
Instance configuration   SGA size: 16 GB. Note: Because a larger database cache buffers more data, we configured a very small buffer cache to generate a stable, high physical I/O workload.
Workload profile         SLOB random I/O workload with an 80:20 read/write ratio and SLOB execution think time enabled. See the Appendix for the full list of SLOB configuration parameters used.
Data block size          8 KB
The table below shows the DSS database configuration and workload profile.

DSS database configuration and workload profile

Profile characteristic   Description
Database type            DSS
Database size            3 TB
Oracle database          Single-instance Oracle 12c R1 database on ASM, provisioned from an XtremIO virtual copy of the production database
Instance configuration   SGA size: 16 GB. Note: Because a larger database cache buffers more data, we configured a very small buffer cache to generate a stable, high physical I/O workload.
Workload profile         SLOB sequential read-only I/O workload with an I/O size of 128 KB and SLOB execution think time disabled. See the Appendix for the full list of SLOB configuration parameters used.
Data block size          8 KB
Methodology
The detailed test methodology was as follows:

1. Ran a baseline performance test on the production database to achieve over 77,000 IOPS. After reaching a steady state, we measured and recorded the performance level of the production database.
2. Building on step 1, generated over 20,000 total IOPS against the 10 Test/Dev databases. We measured and recorded the performance level of the production database.
3. Building on step 2, generated about 2 GB/s of total bandwidth across the two DSS databases. We measured and recorded the performance level of the production database.
4. Building on step 3, ran the SQL Server and SAP mixed workloads together on the same Vblock System 540. We measured and recorded the performance level of the production database.
5. Building on step 4, increased the production workloads of the Oracle, SQL Server, and SAP environments. We then measured and recorded the storage and application latency and the relevant performance metrics of the production database.
The following table shows the performance metrics that were measured and recorded for Oracle:

Oracle performance metrics

Data source          Metrics                               Collection method
XtremIO              XtremIO volume latency (us)           XMS report
vCenter              Vblock ESXi host CPU percentage       vCenter performance panel
Oracle AWR report    Read and write IOPS                   Taken from "physical write I/O requests" and "physical read I/O requests" in the System Statistics (Global) section
Oracle AWR report    Read and write MBPS                   Taken from "physical write bytes" and "physical read bytes" in the System Statistics (Global) section
Oracle AWR report    Read and write I/O operation latency  Taken from "db file sequential read", "db file parallel read", and "log file parallel write" in the Top Timed Events section
Oracle OSWatcher Black Box running on the production virtual machines   CPU usage   Calculated as 100 minus the average idle CPU percentage during the test
Microsoft SQL 2014
Test tool and workload profile
To simulate the workloads of real-world OLTP and DSS environments, we used the following tools:

- Microsoft Benchcraft OLTP workload tool, version 1.12.0-1026: Derived from an industry-standard, modern OLTP benchmark, this tool simulates a stockbroker trading system used for managing customer accounts, executing customer trade orders, and similar transactions within the financial markets. The majority of the I/O is 8 KB and fully random, with a 90:10 read/write ratio. We used this tool to generate the workloads for both the production and Test/Dev environments.
- Microsoft OLAP workload tool, version 2.17: Derived from an industry-standard DSS/OLAP benchmark, this tool simulates system functionality representative of complex business analysis applications for a wholesale supplier, through set queries that are given a realistic context. The majority of the I/O is between 64 KB and 512 KB, with 100 percent sequential reads. We used this tool to generate the workload for the OLAP/DSS environment.
The table below shows the SQL Server production database configuration and workload profile:

SQL Server production database configuration and workload profile

Profile characteristic          Description
Database type                   OLTP
Database size                   1 TB
Number of test users            75
SQL Server memory reservation   16 GB
Workload profile                90:10 read/write ratio
Data block size                 8 KB

The table below shows the SQL Server Test/Dev database configuration and workload profile:

SQL Server Test/Dev database configuration and workload profile

Profile characteristic          Description
Database type                   OLTP
Database size                   1 TB x 10
Number of test users            25 in total across the 10 Test/Dev databases
SQL Server memory reservation   16 GB
Workload profile                90:10 read/write ratio
Data block size                 8 KB

The table below shows the SQL Server OLAP analysis and reporting database configuration and workload profile:

SQL Server OLAP analysis and reporting database configuration and workload profile

Profile characteristic          Description
Database type                   OLAP
Database size                   1 TB x 2
SQL Server memory reservation   120 GB
Workload profile                100 percent sequential read
Data block size                 64 KB - 512 KB
Methodology
The detailed test methodology was as follows:

1. Ran a baseline performance test on the production database to achieve over 55,000 IOPS. After reaching a steady state, we measured and recorded the relevant performance metrics.
2. Building on step 1, generated over 20,000 total IOPS against the 10 Test/Dev copies, then measured and recorded the performance impact on the production database.
3. Building on step 2, generated about 2 GB/s of total bandwidth across the two OLAP databases, then measured and recorded the performance impact on the production database.
4. Building on step 3, ran the Oracle, SQL Server, and SAP combined workloads together on the same Vblock System 540, pushing the same workload on the production database as defined in step 1. Measured and recorded the storage and application latency and the relevant performance metrics.
5. Building on step 4, ran the combined application workload and pushed the Oracle, SQL Server, and SAP production databases to the same IOPS numbers as defined in step 1. Measured and recorded the storage and application latency and the relevant performance metrics.
The following table shows the metrics that were measured and recorded for SQL Server:

SQL Server metrics

Data source                  Metrics                                                Collection method
XtremIO                      XtremIO volume IOPS                                    XMS report
XtremIO                      XtremIO volume latency (us)                            XMS report
vCenter                      Vblock ESXi host CPU percentage                        vCenter performance panel
vCenter                      Vblock ESXi host memory usage                          vCenter performance panel
Windows Performance Monitor  SQL Server production volume IOPS                      LogicalDisk - Disk Transfers/sec (Disk Reads/sec, Disk Writes/sec)
Windows Performance Monitor  SQL Server production volume latency                   LogicalDisk - Avg. Disk sec/Transfer (Avg. Disk sec/Read, Avg. Disk sec/Write)
Windows Performance Monitor  SQL Server Test/Dev volume IOPS                        LogicalDisk - Disk Transfers/sec (Disk Reads/sec, Disk Writes/sec)
Windows Performance Monitor  SQL Server DSS volume bandwidth                        LogicalDisk - Disk Bytes/sec (Disk Read Bytes/sec, Disk Write Bytes/sec)
Windows Performance Monitor  SQL Server production server processor time percentage Processor Information - % Processor Time
Windows Performance Monitor  SQL Server transactions per second                     SQL Server Databases - Transactions/sec
SAP Business Suite
Test tool and workload profile
This solution was designed for a mixed application workload, including SAP, Oracle, and Microsoft. For the SAP application, we used the SAP Power Benchmark (SPB), derived from the SAP Sales and Distribution (SD) benchmark, to simulate an SAP workload on the installed SAP ERP 6.0 system.
The toolkit includes a sell-from-stock business scenario that consists of the following transactions:

- VA01: Create a sales order with five line items
- VL01N: Create a delivery for the order
- VA03: Display the customer order
- VL02N: Change the delivery and post a goods issue
- VA05: List 40 orders for one sold-to party
- VF01: Create an invoice for the order
The following tables show the workload profiles for the production system.

PRD system configuration and workload profile (scenarios 1, 2, and 4)

Characteristic                   Description
Workload type                    OLTP
Database size                    2 TB
Number of SPB concurrent users   1,000
Number of background jobs        8
Average read/write ratio         96:4
Data block size                  8 KB

PRD system configuration and workload profile (scenario 5)

Characteristic                   Description
Workload type                    OLTP
Database size                    2 TB
Number of SPB concurrent users   2,000
Number of background jobs        11
Average read/write ratio         96:4
Data block size                  8 KB
We created an XtremIO XVC of the production (PRD) system to use as the Test/Dev system, with the following configuration and workload profile:

Test/Dev system configuration and workload profile (scenarios 2, 4, and 5)

Characteristic                   Description
Workload type                    OLTP
Database size                    2 TB
Number of SPB concurrent users   200
Number of background jobs        5
Average read/write ratio         98:2
Data block size                  8 KB
Methodology
The detailed test methodology was as follows:

1. Ran a baseline performance test on the production system to achieve around 13,000 IOPS. After reaching a steady state, measured and recorded the relevant performance metrics.
2. Building on step 1, generated a total of 59,957 IOPS against the five Test/Dev systems, then measured and recorded the performance impact on the production database.
3. Building on step 2, ran the Oracle and SQL combined workloads together on the same Vblock System 540, and pushed the SAP production system to a higher IOPS number to show the result of the increased workload. Measured and recorded the storage and application latency and the relevant performance metrics.
We collected and measured the performance metrics from the XtremIO storage array, vCenter, and SAP Business Suite. The following table shows the performance metrics used in the test.

Performance metrics by application type

Data source          Metrics                             Collection method
XtremIO              XtremIO volume IOPS                 XMS report
XtremIO              XtremIO volume latency (us)         XMS report
vCenter              Vblock ESXi host CPU percentage     ESXi server Performance tab - CPU
SAP Business Suite   Average dialog response time (ms)   Workload Monitor (ST03)
Test Results
This chapter describes the test results of the enterprise mixed workload on the Vblock 540.
Enterprise mixed workload performance validation on Vblock 540
Mixed workload test results
The test results for Scenario 5, in which the Oracle, Microsoft SQL Server, and SAP workloads ran simultaneously, show on-array performance of ~230K IOPS (primary I/O size 8 KB) and average throughput of 3.8 GB/s (primary I/O sizes 64 KB and 128 KB), split into 88 percent read and 12 percent write activity. Average response times were 866 µs overall: 829 µs for reads and 1,152 µs for writes.
The following figure shows the total combined test results on the Vblock 540.
Total combined test results for Vblock System 540
Note: The symbol ~ means "approximately."
Oracle database test results:
The following figure shows the Oracle production workload test results collected in each scenario.
Oracle production database test results for different scenarios
The following table shows the Oracle performance data collected in each scenario:
- IOPS of the production database, collected from the AWR reports
- I/O response time of the production database, collected from the XtremIO array performance report
- CPU usage of the production database, collected from the Oracle OSWatcher Black Box deployed in each RAC node
- IOPS of the Test/Dev databases, collected from the AWR reports
- MBPS of the DSS databases, collected from the AWR reports
Overall performance data table

| Scenario | Read IOPS | Write IOPS | Aggregate IOPS | Test/Dev aggregate IOPS | Two OLAP DBs (MB/s) | PROD read latency (µs) | PROD write latency (µs) | PROD redo write latency (µs) | RAC node 1 CPU usage (%) | RAC node 2 CPU usage (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| Scenario 1 | 61,715 | 15,311 | 77,026 | N/A | N/A | 280 | 380 | 620 | 34.91 | 34.14 |
| Scenario 2 | 61,515 | 15,279 | 76,794 | 19,584 | N/A | 294 | 421 | 686 | 36.95 | 34.80 |
| Scenario 3 | 58,209 | 14,461 | 72,670 | 19,609 | 2,033 | 522 | 549 | 1,043 | 49.17 | 48.40 |
| Scenario 4 | 55,026 | 13,621 | 68,647 | 19,582 | 1,795 | 738 | 1,002 | 1,922 | 56.45 | 56.37 |
| Scenario 5 | 61,553 | 15,271 | 76,824 | 19,605 | 1,800 | 753 | 1,027 | 1,986 | 64.22 | 63.65 |
The following table shows the I/O latency collected from the Oracle AWR of the production database in
each scenario.
I/O latency performance data table

| Scenario | db file sequential read avg wait (µs) | db file parallel read avg wait (µs) | log file parallel write avg wait (µs) |
|---|---|---|---|
| Scenario 1 | 523 | 1,057 | 1,043 |
| Scenario 2 | 539 | 1,064 | 1,111 |
| Scenario 3 | 784 | 1,404 | 1,546 |
| Scenario 4 | 1,066 | 1,947 | 2,701 |
| Scenario 5 | 1,098 | 1,977 | 2,803 |
As shown in the figure and tables:
- In scenario 1, the Oracle production workload generated 77,026 aggregate IOPS when it was run on the four-X-Brick XtremIO array.
- In scenario 2, when we added the Oracle Test/Dev workloads, which generated a total of 20,000 aggregate IOPS on the array, the Oracle production workload generated 76,794 aggregate IOPS.
- In scenario 3, when we further added the Oracle DSS workloads, which generated a total of 2 GB/s of read-only bandwidth on the array, the Oracle production workload generated 72,670 aggregate IOPS.
- In scenario 4, when we further added the mixed SQL Server and SAP workloads, which generated a total of 131,253 random read/write IOPS and 2 GB/s of sequential read bandwidth on the array, the Oracle production workload generated 68,647 aggregate IOPS.
- In scenario 5, with all the non-production Oracle, SQL Server, and SAP workloads running on the array, we increased the workload on the production systems for each application. The Oracle production workload generated 76,824 aggregate IOPS.
- As the load on the array increased with each added workload, the I/O response time increased, and the CPU usage of the RAC nodes increased as well, because most of the additional CPU cycles were spent waiting for I/O completion.
SQL database test results:
The following figure shows the SQL Server production database test results in the validation tests.
SQL Server production database test results
The following table shows the detailed SQL Server test results in this solution.
- In scenario 1, the production database achieved 58,991 IOPS overall, with average read/write response times of only 249/288 microseconds on the XtremIO side.
- In scenario 2, after we added about 20,000 IOPS across the 10 Test/Dev copies, we checked the impact on the production database. As shown in the table, there was close to zero impact on production IOPS and response time: 58,836 IOPS and 256/316 microseconds average read/write latency, respectively.
- In scenario 3, we continued by adding a 2 GB/s OLAP workload. As shown in the table, there was less than five percent impact on production IOPS, which reached 56,260. The XtremIO average read/write response time remained steady at 523/653 microseconds.
- In scenario 4, when we added both the Oracle and SAP workloads, the impact was still minimal, as shown in the table. Production database IOPS reached 52,943, and the XtremIO average read/write response time for the SQL Server production database was 755/896 microseconds, remaining within 1 millisecond.
- In scenario 5, as we added more application workload to return to the baseline IOPS level (up to 59,528), XtremIO showed strong capacity for handling this amount of IOPS under the mixed workload profile with Oracle, SQL Server, and SAP running together. The XtremIO average read/write response time for the SQL Server production database was 778/912 microseconds: well under 1 millisecond.
Detailed test results for SQL Server

| Scenario | Read IOPS | Write IOPS | Aggregate IOPS | Test/Dev aggregate IOPS | Two OLAP DBs (MB/s) | PRD read latency (µs) | PRD write latency (µs) | PRD log write latency (µs) | SQL Server production CPU (%) |
|---|---|---|---|---|---|---|---|---|---|
| Scenario 1 | 54,282 | 4,709 | 58,991 | N/A | N/A | 249 | 288 | 655 | 69.38 |
| Scenario 2 | 54,234 | 4,602 | 58,836 | 22,749 | N/A | 256 | 316 | 698 | 68.80 |
| Scenario 3 | 51,906 | 4,354 | 56,260 | 21,706 | 1,926 | 523 | 653 | 1,301 | 60.61 |
| Scenario 4 | 48,771 | 4,172 | 52,943 | 20,552 | 1,908 | 755 | 896 | 2,213 | 54.35 |
| Scenario 5 | 54,783 | 4,745 | 59,528 | 18,292 | 1,906 | 778 | 912 | 2,346 | 66.59 |
The detailed response time from the SQL Server production instance is listed in the following table.
Response times from SQL Server production instances

| Scenario | SQL Server avg data read latency (µs) | SQL Server avg data write latency (µs) | SQL Server avg log write latency (µs) |
|---|---|---|---|
| Scenario 1 | 486 | 623 | 894 |
| Scenario 2 | 516 | 689 | 979 |
| Scenario 3 | 885 | 1,598 | 1,618 |
| Scenario 4 | 1,113 | 2,621 | 2,742 |
| Scenario 5 | 1,157 | 2,648 | 2,871 |
SAP Business Suite test results:
Test results for the different test scenarios for SAP are shown in the following figure.
- In scenario 1, as a baseline, we ran the OLTP workload on the production system, which consisted of 1,000 PBM users and 8 background jobs.
- In scenario 2, we kept the production system running and added the Test/Dev workload to the same XtremIO X-Bricks. Each of the five Test/Dev system workload profiles consisted of 200 PBM users and five background jobs.
- In scenario 3, the DSS tests were run on the MS-SQL and Oracle side only, with the SAP system turned off. Because this scenario did not involve SAP, it is not shown in the SAP test results.
- In scenario 4, we kept the production and Test/Dev workloads and added the mixed workload, containing the OLTP and OLAP workloads, from MS-SQL and Oracle.
- In scenario 5, we added more workload to the system from the SAP, Oracle, and MS-SQL sides together. For the SAP production system, we increased the number of background jobs from 8 to 11 and increased the number of PBM users to 2,000. The test results show that the XtremIO X-Bricks had not reached full capacity and were able to scale and handle the new workloads without any problems.
SAP Production System Test Result
From the SAP application perspective, the average dialog response time in the final mixed workload tests with Oracle and SQL Server was below the 1,000 ms threshold: very good performance for dialog users. The following figure shows a screenshot of the SAP ST03N output taken during the test period.
SAP average dialog response time during the mixed workload test
SAP production system detailed performance results

| Scenario | PRD read IOPS | PRD write IOPS | PRD aggregate IOPS | PRD average read RT (µs) | PRD average write RT (µs) | Non-PRD system aggregate IOPS |
|---|---|---|---|---|---|---|
| Scenario 1 | 12,907 | 393 | 13,300 | 296 | 704 | 0 |
| Scenario 2 | 12,511 | 396 | 12,907 | 302 | 779 | 59,957 |
| Scenario 3 | Not available: the SAP test does not include a DSS scenario | | | | | |
| Scenario 4 | 9,264 | 326 | 9,590 | 780 | 1,759 | 31,674 |
| Scenario 5 | 16,940 | 822.2 | 17,762.2 | 611 | 1,777.3 | 30,908 |
Vblock System 540 performance summary
The following figure shows the average CPU usage across all of the ESXi servers for production and non-production environments on the Vblock 540.
CPU usage on ESXi server for production and non-production environments
Overall, the average latency on the XtremIO array remained low while the combined workloads of Oracle Database, SQL Server, and SAP Business Suite generated extremely high IOPS, as shown in the following figure.
IOPS behavior on XtremIO with combined workload
Latency behavior on XtremIO with combined workload
XtremIO storage efficiency analysis with a mixed workload
The XtremIO inline data-reduction capability and XtremIO Virtual Copy greatly reduced
the physical storage footprint required to support mixed workloads on the same array.
Here is a detailed analysis of the statistics:
Deploying Test/Dev virtual machines from virtual machine templates (Oracle *10 & SQL * 10)
In the figure below:
- Volume Capacity (box A) shows the total amount of space taken by all of the volumes created in the array. Note: volumes in the XtremIO array are all thin provisioned.
- Physical Capacity Used (box B) shows the amount of physical space allocated in the array; this is the space occupied after compression and deduplication.
- Volume Capacity Used (box C) shows the amount of space allocated within the volumes; this is the space occupied before compression and deduplication.
Storage efficiency for deploying virtual machine from template
The following table summarizes the results shown in the above figure. When we deployed 10 virtual
machines for Oracle and 10 virtual machines for SQL Server, we allocated an additional 0.5 TB of volume
capacity, and 0.321 TB of data was written to the array. However, this data did not take up any physical
space because of the XtremIO inline data-reduction capability.
Metrics summary

| Metric | Before | After | Delta |
|---|---|---|---|
| Volume capacity | 62.403 TB | 62.903 TB | 0.5 TB |
| Volume capacity used | 16.426 TB | 16.747 TB | 0.321 TB |
| Physical capacity used | 8.797 TB | 8.797 TB | 0 TB |
Provisioning Test/Dev databases or systems (Oracle *10 & SQL * 10 & SAP * 5)
Storage efficiency for provisioning Test/Dev databases/systems
The table below summarizes the numerical test results shown in the above figure. When provisioning ten Oracle Test/Dev databases, ten SQL Server Test/Dev databases, and five SAP Test/Dev systems by creating XtremIO virtual copies from the corresponding production databases and system, 147.617 TB of XtremIO volume capacity was created, while the used volume capacity and used physical capacity stayed the same. This is because XtremIO virtual copy creation is an in-memory metadata operation that does not involve any back-end media.
Metrics summary

| Metric | Before | After | Delta |
|---|---|---|---|
| Volume capacity | 62.903 TB | 210.52 TB | 147.617 TB |
| Volume capacity used | 16.776 TB | 16.776 TB | 0 TB |
| Physical capacity used | 8.8 TB | 8.8 TB | 0 TB |
Final stage:
The diagram below shows the total storage efficiency after all the benchmarking was completed.
- Overall efficiency ratio: 24:1
- Data reduction ratio: 2:1
- vSphere thin provisioning saving: 92%
Total storage efficiency
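The reported ratios follow directly from the capacity figures in the metrics tables above. As a quick sanity check (a sketch: the TB values come from those tables, and awk performs the floating-point arithmetic):

```shell
# Recompute the final-stage efficiency ratios from the reported capacities.
# Values (TB) are taken from the metrics tables above; +0.5 rounds to nearest.
awk 'BEGIN {
  volume_capacity = 210.52   # total thin-provisioned volume capacity
  volume_used     = 16.776   # logical space written (before data reduction)
  physical_used   = 8.8      # physical space after compression + dedup

  printf "overall efficiency ~%d:1\n", volume_capacity / physical_used + 0.5
  printf "data reduction     ~%d:1\n", volume_used / physical_used + 0.5
  printf "thin saving        ~%d%%\n", (1 - volume_used / volume_capacity) * 100 + 0.5
}'
```

The recomputed values match the reported 24:1, 2:1, and 92 percent figures.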
It is important to understand that the end state of the Vblock 540 includes 10 TB of physical capacity used on a 30 TB storage array. The 25 Test/Dev copies had not yet been updated and modified over a period of time, which would increase the amount of additional physical capacity consumed.
For example, assuming the 25 copies of the various databases are accessed by developers and used for 24-hour periods at a time, incremental capacity would be required as the space-efficient copies are updated and changed. If all Test/Dev copies were full copies, we would have the following:
- Oracle: 10 copies of a 3 TB database = 30 TB of data
- SQL: 10 copies of a 1 TB database = 10 TB of data
- SAP: 5 copies of a 2 TB database = 10 TB of data
Here are some different potential scenarios:
Assuming a 50% update ratio:
- Oracle: 10 copies of a 3 TB database updated with 50% new data = 15 TB
- SQL: 10 copies of a 1 TB database updated with 50% new data = 5 TB
- SAP: 5 copies of a 2 TB database updated with 50% new data = 5 TB
This would equate to a requirement of 25 TB of incremental capacity. Factoring in a 2:1 compression ratio
would result in an incremental physical capacity requirement of approximately 12 TB, totaling 22 TB
(roughly 75% of the total capacity of 4 X-Bricks).
Assuming a 30% update ratio:
- Oracle: 10 copies of a 3 TB database updated with 30% new data = 10 TB
- SQL: 10 copies of a 1 TB database updated with 30% new data = 3 TB
- SAP: 5 copies of a 2 TB database updated with 30% new data = 3 TB
This would equate to a requirement of 16 TB of incremental capacity. Factoring in a 2:1 compression ratio would result in an incremental physical capacity requirement of approximately 8 TB, totaling 18 TB (roughly 60% of the total capacity of 4 X-Bricks).
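The projections above can be scripted so that other update ratios are easy to evaluate. This is a sketch using the copy counts and database sizes from this section and the stated 2:1 compression ratio; note that the text rounds per database (for example, Oracle at 30% is 9 TB, rounded up to 10 TB), so its totals run slightly higher than the exact figures produced here:

```shell
# Project incremental Test/Dev capacity for a given update ratio.
# Copies: Oracle 10 x 3 TB, SQL 10 x 1 TB, SAP 5 x 2 TB (from this section).
project() {  # usage: project <update_ratio, e.g. 0.5>
  awk -v r="$1" 'BEGIN {
    logical  = (10*3 + 10*1 + 5*2) * r   # TB of new data across all copies
    physical = logical / 2               # assumed 2:1 compression
    printf "update %.0f%%: %g TB logical, %g TB physical, %g TB array total\n",
           r * 100, logical, physical, 10 + physical   # 10 TB already in use
  }'
}

project 0.5   # 25 TB logical, 12.5 TB physical, 22.5 TB total
project 0.3   # 15 TB logical, 7.5 TB physical, 17.5 TB total
```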
Therefore, when evaluating capacity requirements for Test/Dev copies on the same systems as
production, it is important to evaluate and project capacity consumption, inclusive of space-efficiency, to
determine the proper amount of storage capacity required.
Conclusion
IT decision makers who are evaluating new options for data center management to help provide better
service with lower TCO should research converged infrastructure platforms that use all-flash technology.
EMC can help you identify the right choice of platform based on the business applications and skill sets of
the organization. Our complete package of engineered solutions, software, services, support, and
training can eliminate the complexities of build-your-own multi-vendor solutions. For more information,
visit www.vce.com.
Next steps
To learn more about this and other solutions, contact a VCE representative or visit www.vce.com.
References
VCE documentation
- VCE Vblock and VxBlock Systems 540 Gen 2.1 Architecture Overview
EMC documentation
- EMC AppSync User and Administration Guide
- EMC XtremIO Storage Array Host Configuration Guide
- White Paper: EMC XtremIO Advanced Data Service for SAP Business Suite
- White Paper: EMC Extreme Performance and Efficiency for Microsoft SQL Server
- White Paper: Microsoft SQL Server Best Practices and Design Guideline for EMC Storage
- White Paper: EMC XtremIO Workload Consolidation and Copy Management for Microsoft SQL Server
- White Paper: Oracle 11g and 12c Database Consolidation and Workload Scalability with EMC XtremIO 4.0
- White Paper: Oracle Best Practices with XtremIO
- White Paper: EMC XtremIO Optimized Flash Storage for Oracle Databases
VMware documentation
- Performance Best Practices for VMware vSphere 6.0
SQL Server documentation
- Microsoft MSDN: Books Online for SQL Server 2014
SAP documentation
- Installation Guide: SAP Systems Based on SAP NetWeaver ABAP on UNIX - Valid for SAP Systems based on SAP NetWeaver 7.1 and Higher
- SAP Note 19466: Downloading SAP kernel patches
- SAP Note 1171650: Automated Oracle DB parameter check
- SAP Note 1122387: Linux: SAP Support in virtualized environments
- SAP Note 1122388: Linux: VMware vSphere configuration guidelines
- SAP on VMware Best Practices
- SAP benchmark configuration blueprint
Appendix
Provisioning design with AppSync
Provisioning SQL Server database copies with AppSync
EMC AppSync provides simple, self-service application protection with tiered protection options and
proven recoverability. It facilitates and automates creation of disk-based copies of Microsoft SQL Server
2014 databases for EMC XtremIO, which can be used for recovery and repurposing.
We followed these steps to create 10 SQL Server Test/Dev database copies in AppSync:
1. Add the EMC XtremIO storage in AppSync. Select Settings > Storage Infrastructure and add an XtremIO storage system, as shown in the following figure:
Adding EMC XtremIO in AppSync
2. Select Settings > VMware vCenter Servers. Add the vCenter Server that will host both the SQL Server production and Test/Dev virtual machines, as shown in the following figure.
Adding vCenter Server
3. Add a SQL Server 2014 production instance in AppSync. Select Copy Management > Microsoft SQL Server and add a SQL Server cluster instance using Discover Instance, as shown in the following figure. AppSync pushes the plug-in software to the host automatically.
Adding an SQL Server 2014 Production instance in AppSync
4. After you add the production instance, add 10 Test/Dev SQL Server instances, as shown in the following figure.
Adding 10 Test/Dev SQL Server instances
5. Discover the production database residing on the SQL Server instance in the AppSync console, as shown in the following figure.
Discovering databases in AppSync
6. Select Repurpose > Create Repurpose Copy to generate on-demand database copies, as shown in the following figure. In this solution, we created 10 database copies in total with AppSync.
Generating on-demand database copies
7. After you finish creating the snapshot copy, mount the copy with the Read-write option to the corresponding Test/Dev SQL Server instance, as shown in the following figure.
Mounting copies to the Test/Dev SQL Server instance
8. Mount all 10 copies to their respective Test/Dev SQL Server instances. Bring the databases online with SQL Server Management Studio, as shown in the following figure.
Mounting databases online with the SQL Server Management Studio
Provisioning non-production Oracle database with AppSync
We followed these steps to create 10 Oracle Test/Dev databases and two Oracle OLAP databases with AppSync.
Register the Oracle production database to AppSync
1. On the AppSync console GUI, click Copy Management and select Oracle.
Registering the Oracle database to AppSync – step 1
2. Click Discover Databases and select Add Servers, enter the IP addresses, username, and password of each database, and then click Start. Click Close after the servers are added to AppSync.
Registering the Oracle database to AppSync – step 2
3. The Oracle databases running on the servers are successfully discovered.
Oracle database is successfully registered to AppSync
Create a repurpose copy from the Oracle production database
1. On the AppSync console, select the Oracle production database, click Repurpose, and select Create Repurpose Copy. Select the default settings in the following screens, and then click Finish.
Creating the repurpose copy from the Oracle production database
2. The repurpose copy of the Oracle production database is created.
Repurpose copy created from the Oracle production
Register non-production virtual machines to AppSync
1. After the non-production virtual machines are deployed in vCenter, log on to AppSync, click the Settings tab, and select Servers.
Registering Oracle non-production virtual machines – step 1
2. On the console, select Add > Unix Servers.
Registering Oracle non-production virtual machines – step 2
3. Enter the IP addresses of the non-production virtual machines, then type a username and password and click Start.
Registering Oracle non-production virtual machines – step 3
4. All the Oracle non-production virtual machines are registered to AppSync.
Register Oracle non-production virtual machines – complete
Mount a repurpose copy to a non-production virtual machine and open database
1. On AppSync, click the Copy Management tab, and select Oracle.
Mounting repurpose copy to non-production virtual machine – step 1
2. Click the name of the Oracle database, and select one of the repurpose copies that has a Mount Status of Not Mounted. Click Mount.
Mounting repurpose copy to a nonproduction virtual machine – step 2
3. On the popup box, click Next, and then select the server to which the repurpose copy will be mounted, leaving the other settings as the default. Click Next and then click Finish.
Mounting repurpose copy to non-production virtual machine – step 3
4. The repurpose copy is mounted to the specified virtual machine.
Mount repurpose copy to non-production virtual machine – complete
5. Log on to the virtual machine as root and rescan the ASM disks.
Rescan ASM disks
6. Log on to the ASM instance and mount the DATA and REDO ASM diskgroups.
Mounting ASM diskgroups
7. Log on to the Oracle database and start it.
Mounting and starting the database
Harness the I/O throughput of the non-production workload
To prevent the non-production workloads from overloading the Vblock system, we harnessed the I/O throughput generated by each non-production database and system using Linux control groups (cgroups) and VMware Storage I/O Control.
Linux Control Group
Follow these steps to harness the I/O throughput with cgroups on a Linux server.
1. Verify the kernel settings that are required for harnessing the resources:
   [root@testdevhost boot]# cd /boot
   [root@testdevhost boot]# grep CONFIG_BLK_CGROUP config*el6.x86_64
   CONFIG_BLK_CGROUP=y
   [root@testdevhost boot]# grep CONFIG_BLK_DEV_THROTTLING config*el6.x86_64
   CONFIG_BLK_DEV_THROTTLING=y
2. Mount the cgroup subsystems and enable the cgconfig service at boot:
   [root@testdevhost boot]# /etc/init.d/cgconfig status
   Stopped
   [root@testdevhost boot]# ls /cgroup
   [root@testdevhost boot]# /etc/init.d/cgconfig restart
   Stopping cgconfig service: [ OK ]
   Starting cgconfig service: [ OK ]
   [root@testdevhost boot]# ls /cgroup
   blkio cpu cpuacct cpuset devices freezer memory net_cls
   [root@testdevhost boot]# chkconfig --list cgconfig
   cgconfig   0:off 1:off 2:off 3:off 4:off 5:off 6:off
   [root@testdevhost boot]# chkconfig cgconfig on
   [root@testdevhost boot]# chkconfig --list cgconfig
   cgconfig   0:off 1:off 2:on 3:on 4:on 5:on 6:off
3. Get the major and minor numbers of the target storage device:
[root@testdevhost boot]# ls -l /dev/sdc1
brw-rw---- 1 root disk 8, 33 Apr 11 02:41 /dev/sdc1
4. To harness the read IOPS on this device to 400, execute the following command (the cgroup throttle interface expects the "major:minor value" format):
   echo "8:33 400" > /cgroup/blkio/blkio.throttle.read_iops_device
5. To harness the read bandwidth on this device to 256 MB/s (268435456 bytes/s), execute the following command:
   echo "8:33 268435456" > /cgroup/blkio/blkio.throttle.read_bps_device
6. For the Oracle Test/Dev databases, we executed the following script to harness the IOPS on the ASM DATA disks:
   ls -l /dev/oracleasm/disks/DATA* | awk '{print $5$6}' | sed "s/,/:/" | while read line; do echo "$line 400" > /cgroup/blkio/blkio.throttle.read_iops_device; done
7. For the Oracle OLAP databases, we executed the following script to harness the read bandwidth on the ASM DATA disks:
   ls -l /dev/oracleasm/disks/DATA* | awk '{print $5$6}' | sed "s/,/:/" | while read line; do echo "$line 268435456" > /cgroup/blkio/blkio.throttle.read_bps_device; done
VMware Storage I/O control
Follow these steps to harness the I/O throughput with VMware Storage I/O Control on a virtual machine. VMware Storage I/O Control only provides an option to limit the IOPS of a virtual machine. To harness the I/O bandwidth, we observed the size of the I/O issued from the virtual machine and divided the target bandwidth by the I/O size to get the IOPS limit for the virtual hard disks.
As the following figure shows, to harness the I/O throughput of the virtual machine:
1
On vCenter, click the name of the virtual machine on the left panel.
2
Click the Summary tab and select Edit Settings on the right panel.
3
Click the Resource tab on the popup box and select Disk.
78
© 2016 VCE Company, LLC.
All Rights Reserved.
4
On the Limit-IOs column, enter the number of IOPS for the virtual hard disks on which you want
to harness the IOPS.
Harnessing IOPS on virtual machine with VMware Storage I/O control
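The bandwidth-to-IOPS conversion described above is a simple division. A minimal sketch (the 256 MB/s cap and 64 KB I/O size are illustrative values, not figures from a specific test in this solution):

```shell
# Convert a bandwidth cap into the IOPS limit that Storage I/O Control accepts.
iops_limit() {  # usage: iops_limit <bytes_per_second> <io_size_bytes>
  echo $(( $1 / $2 ))
}

# Example: cap a virtual disk issuing 64 KB reads at 256 MB/s:
iops_limit $((256 * 1024 * 1024)) $((64 * 1024))   # prints 4096
```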
SLOB configuration parameters
The following table shows the SLOB configuration parameters used for the production workload.
SLOB configuration parameters used for production workload

| Parameter | Value |
|---|---|
| UPDATE_PCT | 25 |
| RUN_TIME | 900 |
| SCALE | 400,000 |
| WORK_UNIT | 32 |
| REDO_STRESS | LIGHT |
| LOAD_PARALLEL_DEGREE | 8 |
| SHARED_DATA_MODULUS | 0 |
| DO_UPDATE_HOTSPOT | FALSE |
| HOTSPOT_PCT | 10 |
| THINK_TM_MODULUS | 7 |
| THINK_TM_MIN | .1 |
| THINK_TM_MAX | .5 |
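SLOB reads its parameters from a shell-style slob.conf file, so the production parameters above would look roughly like the following (a sketch: thousands separators are dropped, and the exact file layout can vary between SLOB versions):

```shell
# Render the production SLOB parameters from the table above as a slob.conf
# fragment (shell KEY=value syntax, one parameter per line).
cat > slob.conf <<'EOF'
UPDATE_PCT=25
RUN_TIME=900
SCALE=400000
WORK_UNIT=32
REDO_STRESS=LIGHT
LOAD_PARALLEL_DEGREE=8
SHARED_DATA_MODULUS=0
DO_UPDATE_HOTSPOT=FALSE
HOTSPOT_PCT=10
THINK_TM_MODULUS=7
THINK_TM_MIN=.1
THINK_TM_MAX=.5
EOF
grep -c '=' slob.conf   # prints 12: one line per parameter
```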
The following table shows the SLOB configuration parameters used for the Test/Dev workload.
SLOB configuration parameters used for the Test/Dev workload

| Parameter | Value |
|---|---|
| UPDATE_PCT | 25 |
| RUN_TIME | 900 |
| SCALE | 400,000 |
| WORK_UNIT | 32 |
| REDO_STRESS | LIGHT |
| LOAD_PARALLEL_DEGREE | 8 |
| SHARED_DATA_MODULUS | 0 |
| DO_UPDATE_HOTSPOT | FALSE |
| HOTSPOT_PCT | 10 |
| THINK_TM_MODULUS | 7 |
| THINK_TM_MIN | .1 |
| THINK_TM_MAX | .5 |
The following table shows the SLOB configuration parameters used for the OLAP workload.
SLOB configuration parameters used for the OLAP workload

| Parameter | Value |
|---|---|
| UPDATE_PCT | 0 |
| RUN_TIME | 900 |
| SCALE | 4,000,000 |
| WORK_UNIT | 32 |
| REDO_STRESS | LIGHT |
| LOAD_PARALLEL_DEGREE | 8 |
| SHARED_DATA_MODULUS | 0 |
| DO_UPDATE_HOTSPOT | FALSE |
| HOTSPOT_PCT | 10 |
| THINK_TM_MODULUS | 0 |
| THINK_TM_MIN | .1 |
| THINK_TM_MAX | .5 |
About VCE
VCE, an EMC Federation Company, is the world market leader in converged infrastructure and converged solutions. VCE accelerates the adoption of converged infrastructure and cloud-based computing models that reduce IT costs while improving time to market. VCE delivers the industry's only fully integrated and virtualized cloud infrastructure systems, allowing customers to focus on business innovation instead of integrating, validating, and managing IT infrastructure. VCE solutions are available through an extensive partner network and cover horizontal applications, vertical industry offerings, and application development environments.
For more information, go to http://www.vce.com.
Copyright © 2016 VCE Company, LLC. All rights reserved. VCE, VCE Vision, VCE Vscale, Vblock, VxBlock, VxRack, and
the VCE logo are registered trademarks or trademarks of VCE Company LLC. All other trademarks used herein are the
property of their respective owners.