Proven Infrastructure
EMC VSPEX PRIVATE CLOUD
Microsoft Windows Server 2012 R2 with Hyper-V for up to 200 Virtual Machines
Enabled by EMC VNXe3200 and EMC Data Protection
EMC VSPEX
Abstract
This document describes the EMC® VSPEX® Proven Infrastructure solution for
private cloud deployments with Microsoft Hyper-V, EMC VNXe3200™, and EMC
Data Protection for up to 200 virtual machines.
January 2015
Copyright © 2015 EMC Corporation. All rights reserved. Published in the USA.
Published January 2015
EMC believes the information in this publication is accurate as of its publication date.
The information is subject to change without notice.
The information in this publication is provided as is. EMC Corporation makes no
representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or
fitness for a particular purpose. Use, copying, and distribution of any EMC software
described in this publication requires an applicable software license.
EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC
Corporation in the United States and other countries. All other trademarks used
herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical
documentation and advisories section on the EMC Online Support website.
EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 200 Virtual Machines
Enabled by EMC VNXe3200 and EMC Data Protection
Proven Infrastructure Guide
Part Number H13094.1
Contents
Chapter 1 Executive Summary .................................................................................. 13
Introduction ............................................................................................................. 14
Target audience ........................................................................................................ 14
Document purpose ................................................................................................... 14
Business needs ........................................................................................................ 15
Chapter 2 Solution Overview .................................................................................... 17
Introduction ............................................................................................................. 18
Virtualization ............................................................................................................ 18
Compute .................................................................................................................. 18
Networking ............................................................................................................... 18
Storage .................................................................................................................... 19
EMC next-generation VNXe .................................................................................. 19
EMC Data Protection ................................................................................................. 23
Chapter 3 Solution Technology Overview .................................................................. 25
Overview .................................................................................................................. 26
Summary of key components ................................................................................... 27
Virtualization ............................................................................................................ 28
Overview ............................................................................................................. 28
Microsoft Hyper-V ................................................................................................ 28
Virtual FC ports .................................................................................................... 28
Microsoft System Center Virtual Machine Manager .............................................. 28
High availability with Hyper-V Failover Clustering ................................................. 29
Hyper-V Replica ................................................................................................... 29
Hyper-V snapshot ................................................................................................ 29
Cluster-Aware Updating ....................................................................................... 30
EMC Storage Integrator ........................................................................................ 30
Compute .................................................................................................................. 31
Networking ............................................................................................................... 32
Overview ............................................................................................................. 32
Storage .................................................................................................................... 34
Overview ............................................................................................................. 34
EMC VNXe ............................................................................................................ 34
EMC VNXe Virtual Provisioning ............................................................................. 35
Windows Offloaded Data Transfer ........................................................................ 37
EMC PowerPath ................................................................................................... 38
VNXe FAST Cache ................................................................................................. 38
VNXe FAST VP ...................................................................................................... 38
VNXe file shares................................................................................................... 38
ROBO................................................................................................................... 38
Data Protection......................................................................................................... 39
Overview ............................................................................................................. 39
EMC Avamar deduplication .................................................................................. 39
EMC Data Domain deduplication storage systems ............................................... 39
EMC RecoverPoint ................................................................................................ 39
Other technologies ................................................................................................... 40
EMC XtremCache ................................................................................................. 40
Chapter 4 Solution Architecture Overview ................................................................. 43
Overview .................................................................................................................. 44
Solution architecture ................................................................................................ 44
Overview ............................................................................................................. 44
Logical architecture ............................................................................................. 44
Key components .................................................................................................. 46
Hardware resources ............................................................................................. 48
Software resources .............................................................................................. 50
Server configuration guidelines ................................................................................ 50
Overview ............................................................................................................. 50
Hyper-V memory virtualization ............................................................................. 51
Memory configuration guidelines......................................................................... 53
Network configuration guidelines ............................................................................. 53
Overview ............................................................................................................. 53
VLAN.................................................................................................................... 53
Enabling jumbo frames (iSCSI or SMB only) ......................................................... 55
Enabling link aggregation (SMB only) .................................................................. 55
Storage configuration guidelines .............................................................................. 56
Overview ............................................................................................................. 56
Hyper-V storage virtualization for VSPEX .............................................................. 56
VSPEX storage building blocks............................................................................. 59
VSPEX Private Cloud validated maximums ........................................................... 60
High availability and failover .................................................................................... 63
Overview ............................................................................................................. 63
Virtualization layer............................................................................................... 63
Compute layer ..................................................................................................... 63
Network layer....................................................................................................... 63
Storage layer ....................................................................................................... 64
Validation test profile ............................................................................................... 65
Profile characteristics .......................................................................................... 65
EMC Data Protection and configuration guidelines ................................................... 65
Sizing guidelines ...................................................................................................... 65
Reference workload .................................................................................................. 66
Overview ............................................................................................................. 66
Defining the reference workload .......................................................................... 66
Applying the reference workload .............................................................................. 67
Overview ............................................................................................................. 67
Example 1: Custom-built application ................................................................... 67
Example 2: Point-of-Sale system .......................................................................... 67
Example 3: Web server ........................................................................................ 68
Example 4: Decision-support database................................................................ 68
Summary of examples ......................................................................................... 68
Implementing the solution ....................................................................................... 69
Overview ............................................................................................................. 69
Resource types .................................................................................................... 69
CPU resources ..................................................................................................... 70
Memory resources ............................................................................................... 70
Network resources ............................................................................................... 70
Storage resources ................................................................................................ 71
Implementation summary .................................................................................... 71
Quick assessment of customer environment ............................................................ 72
Overview ............................................................................................................. 72
CPU requirements ................................................................................................ 72
Memory requirements.......................................................................................... 73
Storage performance requirements...................................................................... 73
IOPS .................................................................................................................... 73
I/O size................................................................................................................ 73
I/O latency........................................................................................................... 74
Storage capacity requirements ............................................................................ 74
Determining equivalent reference virtual machines ............................................. 74
Fine-tuning hardware resources ........................................................................... 79
EMC VSPEX Sizing Tool ........................................................................................ 81
Chapter 5 VSPEX Configuration Guidelines ................................................................ 83
Overview .................................................................................................................. 84
Pre-deployment tasks ............................................................................................... 85
Overview ............................................................................................................. 85
Deployment prerequisites.................................................................................... 85
Customer configuration data .................................................................................... 86
Preparing switches, connecting network, and configuring switches .......................... 86
Overview ............................................................................................................. 86
Preparing network switches ................................................................................. 87
Configuring infrastructure network ....................................................................... 87
Configuring VLANs ............................................................................................... 89
Configuring jumbo frames (iSCSI or SMB only) ..................................................... 89
Completing network cabling ................................................................................ 89
Preparing and configuring storage array ................................................................... 90
VNXe configuration for block protocols ................................................................ 90
VNXe configuration for file protocols .................................................................... 92
FAST VP configuration (optional) .......................................................................... 98
FAST Cache configuration (optional) .................................................................. 100
Installing and configuring Hyper-V hosts ................................................................ 103
Overview ........................................................................................................... 103
Installing Windows hosts ................................................................................... 104
Installing Hyper-V and configuring failover clustering......................................... 104
Configuring Windows host networking ............................................................... 104
Installing PowerPath on Windows servers .......................................................... 104
Planning virtual machine memory allocations .................................................... 104
Installing and configuring SQL Server database ...................................................... 106
Overview ........................................................................................................... 106
Creating a virtual machine for Microsoft SQL Server ........................................... 106
Installing Microsoft Windows on the virtual machine ......................................... 106
Installing SQL Server.......................................................................................... 106
Configuring a SQL Server for SCVMM ................................................................. 107
System Center Virtual Machine Manager server deployment ................................... 107
Overview ........................................................................................................... 107
Creating a SCVMM host virtual machine............................................................. 108
Installing the SCVMM guest OS .......................................................................... 108
Installing the SCVMM server .............................................................................. 108
Installing the SCVMM Management Console ...................................................... 108
Installing the SCVMM agent locally on a host ..................................................... 109
Adding a Hyper-V cluster into SCVMM ................................................................ 109
Adding file share storage to SCVMM (file variant only) ....................................... 109
Creating a virtual machine in SCVMM ................................................................ 109
Performing partition alignment and assigning File Allocation Unit Size ........ 109
Creating a template virtual machine .................................................................. 109
Deploying virtual machines from the template virtual machine .......................... 110
Summary ................................................................................................................ 110
Chapter 6 Verifying the Solution ............................................................................. 111
Overview ................................................................................................................ 112
Post-installation checklist ....................................................................... 113
Deploying and testing a single virtual server........................................................... 113
Verifying the redundancy of the solution components ............................................ 113
Block and File environments .............................................................................. 113
Chapter 7 System Monitoring ................................................................................. 117
Overview ................................................................................................................ 118
Key areas to monitor............................................................................................... 118
Performance baseline ........................................................................................ 118
Servers .............................................................................................................. 119
Networking ........................................................................................................ 119
Storage .............................................................................................................. 120
VNXe resources monitoring guidelines ................................................................... 120
Monitoring block storage resources ................................................................... 120
Monitoring file storage resources....................................................................... 128
Summary ........................................................................................................... 132
Appendix A Bill of Materials ................................................................................... 133
Bill of materials ...................................................................................................... 134
Appendix B Customer Configuration Data Sheet ...................................................... 137
Customer configuration data sheet ......................................................................... 138
Appendix C Server Resources Component Worksheet .............................................. 141
Server resources component worksheet ................................................................. 142
Appendix D References .......................................................................................... 143
References ............................................................................................................. 144
EMC documentation .......................................................................................... 144
Other documentation......................................................................................... 144
Appendix E About VSPEX ........................................................................................ 145
About VSPEX .......................................................................................................... 146
Figures
Figure 1. Next-generation VNXe with multicore optimization .............................. 22
Figure 2. EMC Data Protection solutions ............................................................ 23
Figure 3. VSPEX Private Cloud components ....................................................... 26
Figure 4. Compute layer flexibility ..................................................................... 31
Figure 5. Example of highly available network design – for block ....................... 33
Figure 6. Storage pool rebalance progress ......................................................... 35
Figure 7. Thin LUN space utilization ................................................................... 36
Figure 8. Examining storage pool space utilization ............................................. 37
Figure 9. Logical architecture for block storage .................................................. 45
Figure 10. Logical architecture for file storage ...................................................... 45
Figure 11. Hypervisor memory consumption ........................................................ 52
Figure 12. Required networks for block storage .................................................... 54
Figure 13. Required networks for file storage ....................................................... 55
Figure 14. Hyper-V virtual disk types .................................................................... 57
Figure 15. Building block for 15 virtual servers ..................................................... 59
Figure 16. Building block for 125 virtual servers ................................................... 60
Figure 17. Storage layout for 200 virtual machines using VNXe3200 ..................... 61
Figure 18. Maximum scale levels and entry points of different arrays .................... 62
Figure 19. High availability on the virtualization layer .......................................... 63
Figure 20. Redundant power supplies ................................................................... 63
Figure 21. Network layer high availability (VNXe) .................................................. 64
Figure 22. VNXe series HA components ................................................................ 64
Figure 23. Resource pool flexibility ...................................................................... 69
Figure 24. Required resource from the reference virtual machine pool .................. 75
Figure 25. Aggregate resource requirements – stage 1 .......................................... 77
Figure 26. Pool configuration – stage 1 ................................................................. 77
Figure 27. Aggregate resource requirements – stage 2 .......................................... 78
Figure 28. Pool configuration – stage 2 ................................................................. 79
Figure 29. Customizing server resources .............................................................. 79
Figure 30. Sample Ethernet network architecture – block variant .......................... 88
Figure 31. Sample Ethernet network architecture – file variant .............................. 89
Figure 32. Configure NAS Server Address ............................................................. 95
Figure 33. Configure NAS Server type ................................................................... 96
Figure 34. FAST VP tab ......................................................................................... 98
Figure 35. Scheduled FAST VP relocation .............................................................. 99
Figure 36. FAST VP Relocation Schedule ............................................................... 99
Figure 37. Create FAST Cache .............................................................................101
Figure 38. Advanced tab in the Create Storage Pool dialog box ............................102
Figure 39. Settings tab in the Storage Pool Properties dialog box ........................103
Figure 40. Storage Pool Alert settings .................................................................121
Figure 41. Storage Pool Snapshot settings ..........................................................122
Figure 42. Storage Pools panel ...........................................................................122
Figure 43. LUN Properties dialog box ..................................................................123
Figure 44. System Panel .....................................................................................124
Figure 45. System Health panel ..........................................................................124
Figure 46. IOPS on the LUNs ...............................................................................125
Figure 47. IOPS on the drives ..............................................................................126
Figure 48. Latency on the LUNs ..........................................................................127
Figure 49. SP CPU Utilization ..............................................................................128
Figure 50. VNXe file statistics .............................................................................129
Figure 51. System Capacity panel .......................................................................129
Figure 52. File Systems panel .............................................................................130
Figure 53. File System Capacity panel .................................................................131
Figure 54. System Performance panel displaying file metrics ..............................132
Tables
Table 1. VNXe customer benefits ....................................................................... 34
Table 2. Solution hardware ................................................................................ 48
Table 3. Solution software ................................................................................. 50
Table 4. Hardware resources for compute layer .................................................. 51
Table 5. Hardware resources for network ........................................................... 53
Table 6. Hardware resources for storage ............................................................ 56
Table 7. Number of disks required for different numbers of virtual machines ...... 60
Table 8. Profile characteristics ........................................................................... 65
Table 9. Virtual machine characteristics ............................................................. 66
Table 10. Blank worksheet row ............................................................................ 72
Table 11. Reference virtual machine resources .................................................... 74
Table 12. Example worksheet row ........................................................................ 75
Table 13. Example applications – stage 1 ............................................................ 76
Table 14. Example applications – stage 2 ............................................................ 77
Table 15. Server resource component totals ........................................................ 80
Table 16. Deployment process overview .............................................................. 84
Table 17. Tasks for pre-deployment ..................................................................... 85
Table 18. Deployment prerequisites checklist ...................................................... 85
Table 19. Tasks for switch and network configuration .......................................... 87
Table 20. Tasks for VNXe configuration for block protocols .................................. 90
Table 21. Storage allocation table for block ......................................................... 92
Table 22. Tasks for storage configuration for file protocols .................................. 92
Table 23. Storage allocation table for file ............................................................ 97
Table 24. Tasks for server installation ...............................................................103
Table 25. Tasks for SQL Server database setup ..................................................106
Table 26. Tasks for SCVMM configuration ..........................................................107
Table 27. Tasks for testing the installation ........................................................112
Table 28. Rules of thumb for drive performance ................................................126
Table 29. Best practice for performance monitoring ..........................................128
Table 30. List of components used in the VSPEX solution for 200 virtual machines ...134
Table 31. Common server information ...............................................................138
Table 32. Hyper-V server information ................................................................138
Table 33. Array information ...............................................................................139
Table 34. Network infrastructure information ....................................................139
Table 35. VLAN information ...............................................................................139
Table 36. Service accounts ................................................................................139
Table 37. Blank worksheet for determining server resources ..............................142
Chapter 1
Executive Summary
This chapter presents the following topics:
Introduction .............................................................................................................14
Target audience .......................................................................................................14
Document purpose ...................................................................................................14
Business needs ........................................................................................................15
Introduction
EMC® VSPEX® validated and modular architectures are built with proven superior
technologies to create complete virtualization solutions. These solutions enable you
to make an informed decision at the hypervisor, compute, backup, storage, and
networking layers. VSPEX helps to reduce virtualization planning and configuration
burdens. When embarking on server virtualization, virtual desktop deployment, or IT
consolidation, VSPEX accelerates your IT transformation by enabling faster
deployments, expanded choices, greater efficiency, and lower risk.
This document is a comprehensive guide to the technical aspects of this solution.
Server capacity is provided in generic terms for required minimums of CPU, memory,
and network interfaces; the customer is free to select the server and networking
hardware that meet or exceed the stated minimums.
Target audience
The readers of this document should have the necessary training and background to
install and configure a VSPEX computing solution based on Microsoft Hyper-V as a
hypervisor, EMC VNX® series storage systems, and associated infrastructure as
required by this implementation. External references are provided where applicable,
and the readers should be familiar with these documents.
Readers should also be familiar with the infrastructure and database security policies
of the customer’s environment.
Individuals focusing on selling and sizing a VSPEX private cloud solution for
Microsoft Hyper-V infrastructure must pay particular attention to the first
four chapters of this document. After the purchase, implementers of the solution
should focus on the configuration guidelines in Chapter 5, the solution validation in
Chapter 6, and the appropriate references and appendices.
Document purpose
This proven infrastructure guide includes an initial introduction to the VSPEX
architecture, an explanation of how to modify the architecture for specific
engagements, and instructions on how to effectively deploy and monitor the system.
The VSPEX private cloud architecture provides the customer with a modern system
capable of hosting many virtual machines at a consistent performance level. This
solution runs on the Microsoft Hyper-V virtualization layer backed by the highly
available VNX family of storage. The compute and network components, which are
defined by the VSPEX partners, are laid out to be redundant and sufficiently powerful
to handle the processing and data needs of the virtual machine environment.
The 200 virtual machine Hyper-V Private Cloud solution described in this document is
based on the EMC VNXe3200™ and on a defined reference workload. Since not every
virtual machine has the same requirements, this document contains methods and
guidance to adjust your system to be cost-effective when deployed. For larger
environments, solutions for up to 1,000 virtual machines based on the EMC VNX
series are described in the EMC VSPEX Private Cloud: Microsoft Windows Server 2012
R2 with Hyper-V for up to 1,000 Virtual Machines Proven Infrastructure Guide.
A private cloud architecture is a complex system offering. This document facilitates
its setup by providing up-front software and hardware material lists, step-by-step
sizing guidance and worksheets, and verified deployment steps. After the last
component has been installed, validation tests and monitoring instructions ensure
that your customer’s system is running correctly. Following the instructions in this
document ensures an efficient and expedited journey to the cloud.
Business needs
Business applications are moving into consolidated compute, network, and storage
environments. EMC VSPEX private cloud solutions use Microsoft Hyper-V to reduce
the complexity of configuring every component of a traditional deployment model.
The complexity of integration management is reduced while maintaining the
application design flexibility and implementation options. Administration is unified,
while process separation can be adequately controlled and monitored. The business
needs for the VSPEX private cloud solutions for Microsoft Hyper-V are:

Providing an end-to-end virtualization solution to effectively utilize the
capabilities of the unified infrastructure components.

Providing a VSPEX private cloud solution for Microsoft Hyper-V to efficiently
virtualize up to 200 virtual machines for varied customer use cases.

Providing a reliable, flexible, and scalable reference design.
Chapter 2
Solution Overview
This chapter presents the following topics:
Introduction .............................................................................................................18
Virtualization ...........................................................................................................18
Compute ..................................................................................................................18
Networking ..............................................................................................................18
Storage ....................................................................................................................19
EMC Data Protection ................................................................................................ 23
Introduction
The EMC VSPEX private cloud solution for Microsoft Hyper-V provides a complete
system architecture capable of supporting up to 200 virtual machines with a
redundant server and network topology and highly available storage. The core
components that make up this particular solution are virtualization, compute,
networking, storage, and EMC Data Protection.
Virtualization
Microsoft Hyper-V is a key virtualization platform in the industry. For years, Hyper-V
has provided flexibility and cost savings to end users by consolidating large,
inefficient server farms into nimble, reliable cloud infrastructures.
Features such as Live Migration, which enables a virtual machine to move between
different servers with no disruption to the guest operating system, and Dynamic
Optimization, which performs Live Migrations automatically to balance loads, make
Hyper-V a solid business choice.
With the release of Windows Server 2012 R2, a Microsoft virtualized environment can
host virtual machines with up to 64 virtual CPUs and 1 TB of virtual random access
memory (RAM).
Compute
VSPEX provides the flexibility to design and implement the customer’s choice of
server components. The infrastructure must conform to the following attributes:
• Sufficient cores and memory to support the required number and types of virtual machines
• Sufficient network connections to enable redundant connectivity to the system switches
• Excess capacity to withstand a server failure and failover within the environment (a simple headroom check is sketched after this list)
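As a rough illustration of the last attribute, the sketch below checks whether the surviving servers can still host the full virtual machine load after one server fails. It is an illustrative example only: the vCPU-to-core ratio, server counts, and per-VM figures are hypothetical placeholders, not VSPEX sizing rules.

```python
# Minimal N+1 headroom check. All input figures are hypothetical placeholders,
# not VSPEX sizing guidance.

def survives_server_failure(servers, cores_per_server, ram_gb_per_server,
                            vm_count, vcpus_per_vm, ram_gb_per_vm,
                            vcpu_to_core_ratio=4):
    """Return True if the workload still fits with one server down."""
    surviving = servers - 1
    usable_vcpus = surviving * cores_per_server * vcpu_to_core_ratio
    usable_ram_gb = surviving * ram_gb_per_server
    return (vm_count * vcpus_per_vm <= usable_vcpus and
            vm_count * ram_gb_per_vm <= usable_ram_gb)

# Example: five servers (16 cores, 256 GB RAM each) hosting 200 small VMs
# (1 vCPU, 2 GB RAM each). With one server down, 256 vCPUs and 1,024 GB
# of RAM remain available, so the check passes.
print(survives_server_failure(5, 16, 256, 200, 1, 2))  # True
```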
Networking
VSPEX provides the flexibility to design and implement the customer’s choice of
network components. The infrastructure must conform to the following attributes:
• Redundant network links for the hosts, switches, and storage
• Traffic isolation based on industry-accepted best practices
• Support for link aggregation
• A minimum backplane capacity of 96 Gb/s non-blocking for IP network switches
• IP network switches used to implement this reference architecture must have a minimum non-blocking backplane capacity sufficient for the target number of virtual machines and their associated workloads. Enterprise-class switches with advanced features such as Quality of Service are highly recommended.
Storage
The EMC VNXe® storage series provides both file and block access with a broad
feature set, which makes it an ideal choice for any private cloud implementation.
VNXe storage includes the following components, sized for the stated reference
architecture workload:
• I/O ports (for block and file): Provide host connectivity to the array, which supports CIFS/Server Message Block (SMB), Network File System (NFS), Fibre Channel (FC), and Internet Small Computer System Interface (iSCSI).
• Storage processors: The compute components of the storage array, used for all aspects of data moving into, out of, and between arrays. Unlike the VNX family, which requires external processing units known as Data Movers to provide file services, the VNXe contains integrated code that provides file services to hosts.
• Disk drives: Disk spindles and solid state drives (SSDs) that contain the host or application data, and their enclosures.
The 200 virtual machine Hyper-V Private Cloud solution described in this document is
based on the VNXe3200 storage array. The VNXe3200 can support a maximum of 150
drives.
The VNXe series supports a wide range of business-class features that are ideal for
the private cloud environment, including:
• EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP™)
• EMC FAST Cache
• Thin provisioning
• Snapshots or checkpoints
• File-level retention
• Quota management

EMC next-generation VNXe
Features and enhancements
EMC now offers customers even greater performance and choice than before with the
inclusion of the next generation of VNXe Unified Storage into the VSPEX family of
Proven Infrastructures. The next-generation VNXe, led by the VNXe3200, offers a
hybrid, unified storage system for VSPEX customers who need to centralize and
simplify storage when transforming their IT.
EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to
200 Virtual Machines Enabled by EMC VNXe3200 and EMC Data Protection
Proven Infrastructure Guide
19
Solution Overview
Customers who need to virtualize up to 200 virtual machines with VSPEX Private
Cloud solutions will now see the benefits that the new Multicore (MCx) VNXe3200
brings. The new architecture distributes all data services across all the system’s cores
more evenly. Cache management and backend RAID management processes scale
linearly and benefit greatly from the latest Intel multicore CPUs. Simply put, I/O
operations in VSPEX run faster and more efficiently than ever before with the new
VNXe3200.
The VNXe3200 is ushering in a profoundly new experience for small and medium-sized VSPEX customers as it delivers performance and scale at a lower price. The
VNXe3200 is a significantly more powerful system than the previous VNXe series and
ships with many enterprise-like features and capabilities such as auto-tiering, file
deduplication, and compression, which add to the simplicity, efficiency, and
flexibility of the VSPEX Private Cloud solution.
EMC FAST Cache and FAST VP, features that in the past were exclusive to the VNX,
are now available to VSPEX customers with VNXe3200 storage. FAST Cache
dynamically extends the storage system’s existing read/write caching capacity to
increase system-wide performance and deliver performance to your virtual machines
more cost-effectively. FAST Cache uses high-performing flash drives positioned
between the primary cache (DRAM-based) and the hard disk drives. This feature
boosts the performance of highly transactional applications and virtual desktops by
keeping hot data in cache, delivering performance for your most frequently accessed
data.
VNXe3200 FAST Cache and FAST VP auto-tiering lower the total cost of ownership
through policy-based movement of your data to the right storage type. Doing so
intelligently maximizes the return on the flash-drive investment and its speed benefit
across the system while leveraging the capacity of less-costly spinning drives. This
avoids over-purchasing and exhaustive manual configuration.
The EMC VNXe flash-optimized unified storage platform delivers innovation and
enterprise capabilities for file, block, and object storage in a single, scalable, and
easy-to-use solution. Ideal for mixed workloads in physical or virtual environments,
the VNXe combines powerful and flexible hardware with advanced efficiency,
management, and protection software to meet the demanding needs of today’s
virtualized application environments.
VNXe includes many features and enhancements designed and built upon the
success of the next generation VNX family. These features and enhancements
include:
• More capacity with multicore optimization (Multicore Cache, Multicore RAID, and Multicore FAST Cache, collectively MCx)
• Greater efficiency with a flash-optimized hybrid array
• Easier administration and deployment by increasing productivity with a new Unisphere element manager
Flash-optimized hybrid array
VNXe is a flash-optimized hybrid array that provides automated tiering to deliver the
best performance for your critical data, while intelligently moving less frequently
accessed data to lower-cost disks.
In this hybrid approach, a small percentage of flash drives in the overall system can
provide a high percentage of the overall IOPS. A flash-optimized VNXe takes full
advantage of the low latency of flash to deliver cost-saving optimization and high
performance scalability. The EMC Fully Automated Storage Tiering Suite (FAST Cache
and FAST VP) tiers both block and file data across heterogeneous drives and migrates
the most active data to the flash drives, ensuring that customers never have to make
concessions for cost or performance.
Data is typically used most frequently at the time it is created; therefore new data is
first stored on flash drives for the best performance. As that data ages and becomes
less active over time, FAST VP moves the data from high-performance to high-capacity
drives automatically, based on customer-defined policies. EMC has enhanced this
functionality with four times better granularity and with new FAST VP flash drives
based on enterprise multi-level cell (eMLC) technology to lower the cost per gigabyte.
FAST Cache dynamically absorbs unpredicted spikes in system workloads. All VSPEX
use cases benefit from the increased efficiency.
Note: This reference architecture does not make use of FAST Cache or FAST VP. Lab testing
with the VSPEX workload has demonstrated performance increases of approximately
10–20 percent, depending on the protocol.
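The policy-driven relocation described above can be pictured with a toy model. The sketch below is purely illustrative: the tier names, slice granularity, and thresholds are invented for the example and bear no relation to the array's actual relocation engine.

```python
# Toy model of policy-based tiering: busy slices are promoted toward flash,
# idle slices are demoted toward capacity drives. All names and thresholds
# are invented for illustration.

TIERS = ["flash", "sas", "nl_sas"]  # fastest tier to highest-capacity tier

def relocate(slices, hot_threshold=100, cold_threshold=10):
    """slices maps slice_id -> (current_tier, I/O count since last run)."""
    moves = {}
    for slice_id, (tier, io_count) in slices.items():
        level = TIERS.index(tier)
        if io_count >= hot_threshold and level > 0:
            moves[slice_id] = TIERS[level - 1]      # promote toward flash
        elif io_count <= cold_threshold and level < len(TIERS) - 1:
            moves[slice_id] = TIERS[level + 1]      # demote toward capacity
    return moves

# A busy slice on SAS is promoted; an idle one ages down to NL-SAS.
print(relocate({"slice-1": ("sas", 500), "slice-2": ("sas", 2)}))
# {'slice-1': 'flash', 'slice-2': 'nl_sas'}
```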
VSPEX Proven Infrastructures deliver private cloud, end-user computing, and
virtualized application solutions. With VNXe, customers can realize an even greater
return on their investment. VNXe provides out-of-band, file-based deduplication that
can dramatically lower the costs of the flash tier.
VNXe Intel MCx Code Path Optimization
The advent of flash technology has been a catalyst in totally changing the
requirements of VNXe storage systems. EMC redesigned the midrange storage
platform to make efficient use of multicore CPUs and provide the highest-performing
storage system at the lowest cost in the market.
MCx distributes all VNXe data services across all cores, as shown in Figure 1. The
VNXe series with MCx has dramatically improved the file performance for
transactional applications like databases or virtual machines over network-attached
storage (NAS).
Figure 1. Next-generation VNXe with multicore optimization
Multicore Cache
The cache is the most valuable asset in the storage subsystem; its efficient use is key
to the overall efficiency of the platform in handling variable and changing workloads.
The cache engine has been modularized to take advantage of all the cores available
in the system.
Multicore RAID
Another important part of the MCx redesign is the handling of I/O to the permanent
back-end storage—hard disk drives (HDDs) and SSDs. Much of the performance
improvement in VNXe comes from the modularization of the back-end data
management processing, which enables MCx to scale seamlessly across all
processors.
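As a loose analogy for this modularization, the sketch below spreads independent work items across every available core; it illustrates only the general principle of scaling a service across processors and has nothing to do with the array's firmware.

```python
# Analogy only: distribute independent work across all CPU cores, the way MCx
# distributes data services across cores. This is not array code.
import hashlib
from concurrent.futures import ProcessPoolExecutor
from os import cpu_count

def checksum(block: bytes) -> str:
    """Stand-in for a per-block data service (here, a SHA-256 digest)."""
    return hashlib.sha256(block).hexdigest()

if __name__ == "__main__":
    blocks = [bytes([i % 256]) * 1_000_000 for i in range(64)]  # fake data blocks
    with ProcessPoolExecutor(max_workers=cpu_count()) as pool:
        digests = list(pool.map(checksum, blocks))
    print(f"processed {len(digests)} blocks across {cpu_count()} cores")
```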
VNXe performance
Performance enhancements
VNXe storage, enabled with the MCx architecture, is optimized for FLASH 1st and
provides high overall performance: it optimizes transaction performance (cost per
IOPS) and bandwidth performance (cost per GB/s) with low latency, while providing
optimal capacity efficiency (cost per GB).
VNXe provides the following performance improvements:
• Up to four times more file transactions when compared with dual-controller arrays
• Increased file performance for transactional applications by up to three times, with a 60 percent better response time
• Up to four times more Oracle and Microsoft SQL Server OLTP transactions
• Up to six times more virtual machines
Virtualization Management
EMC Storage Integrator
EMC Storage Integrator (ESI) is targeted towards the Windows and application
administrator. ESI is easy to use, delivers end-to-end monitoring, and is hypervisor
agnostic. Administrators can provision in both virtual and physical environments for a
Windows platform, and troubleshoot by viewing the topology of an application from
the underlying hypervisor to the storage.
Microsoft Hyper-V
With Windows Server 2012 R2, Microsoft provides Hyper-V 3.0, an enhanced
hypervisor for the private cloud that can use NAS protocols for simplified connectivity.
Offloaded Data Transfer
The Offloaded Data Transfer (ODX) feature of Windows Server 2012 R2 enables data
transfers during copy operations to be offloaded to the storage array, freeing up host
cycles. For example, using ODX for a live migration of a SQL Server virtual machine
doubled performance, decreased migration time by 50 percent, reduced CPU on the
host server by 20 percent, and eliminated network traffic.
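A quick way to confirm that a Windows Server 2012 R2 host will attempt ODX offload is to read the documented FilterSupportedFeaturesMode registry value, where 0 (or an absent value) means offload is allowed and 1 means it has been disabled. The following minimal Python sketch, run on the Hyper-V host itself, is one way to query it:

```python
# Query the documented ODX switch on a Windows host:
# FilterSupportedFeaturesMode = 0 (or absent) -> ODX offload allowed,
# FilterSupportedFeaturesMode = 1 -> ODX disabled.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\FileSystem"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    try:
        value, _ = winreg.QueryValueEx(key, "FilterSupportedFeaturesMode")
        print("ODX offload allowed" if value == 0 else "ODX disabled")
    except FileNotFoundError:
        print("Value not set; ODX offload is allowed by default")
```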
EMC Data Protection
EMC Data Protection solutions, EMC Avamar and EMC Data Domain, deliver the
protection and confidence needed to accelerate the deployment of VSPEX Private
Clouds.
Optimized for virtual environments, EMC Data Protection reduces backup times by 90
percent, increases recovery speeds by 30 times, and even offers instant access to
virtual machines for worry-free protection. EMC backup appliances add another layer
of assurance with end-to-end verification and self-healing to ensure successful
recoveries.
Our solutions also deliver big savings. With industry-leading deduplication, you can
reduce backup storage by 10 to 30 times, backup management time by 81 percent,
and WAN bandwidth by 99 percent for efficient disaster recovery, delivering a seven-month
payback period on average. You will be able to scale storage easily and
efficiently as your environment grows.
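To make such ratios concrete, the short calculation below converts a deduplication ratio into stored capacity; the 50 TB input is a hypothetical figure, not a sizing recommendation.

```python
# Back-of-the-envelope effect of deduplication on backup capacity.
# The 50 TB input is hypothetical.

def physical_capacity_tb(logical_backup_tb: float, dedup_ratio: float) -> float:
    """Physical capacity (TB) needed to hold the given logical backup data."""
    return logical_backup_tb / dedup_ratio

logical_tb = 50.0
for ratio in (10, 30):
    stored = physical_capacity_tb(logical_tb, ratio)
    print(f"{ratio}x dedup: {logical_tb:.0f} TB logical -> {stored:.1f} TB stored "
          f"({1 - 1 / ratio:.0%} reduction)")
# 10x dedup: 50 TB logical -> 5.0 TB stored (90% reduction)
# 30x dedup: 50 TB logical -> 1.7 TB stored (97% reduction)
```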
Figure 2. EMC Data Protection solutions
EMC Data Protection solutions used in this VSPEX solution include the EMC Avamar
deduplication software and system, and the EMC Data Domain deduplication storage
system.
Chapter 3
Solution Technology Overview
This chapter presents the following topics:
Overview ..................................................................................................................26
Summary of key components ...................................................................................27
Virtualization ...........................................................................................................28
Compute ..................................................................................................................31
Networking ..............................................................................................................32
Storage ....................................................................................................................34
Data Protection ........................................................................................................39
Other technologies ..................................................................................................40
Overview
This solution uses the VNXe array and Microsoft Hyper-V to provide storage and
server hardware consolidation in a VSPEX Private Cloud. The new virtualized
infrastructure is centrally managed, to provide efficient deployment and management
of a scalable number of virtual machines and associated shared storage.
Figure 3 depicts the solution components.
Figure 3. VSPEX Private Cloud components
The following sections describe the components in detail.
Summary of key components
This section briefly describes the key components of this solution.
• Virtualization: The virtualization layer decouples the physical implementation of resources from the applications that use them. The application’s view of the available resources is no longer directly tied to the hardware. This enables many key features in the private cloud concept.
• Compute: The compute layer provides memory and processing resources for the virtualization layer software, and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and enables the customer to implement the solution by using any server hardware that meets these requirements.
• Network: The network layer connects the users of the private cloud to the resources in the cloud, and the storage layer to the compute layer. The VSPEX program defines the minimum number of required network ports, provides general guidance on network architecture, and enables the customer to implement the solution by using any network hardware that meets these requirements.
• Storage: The storage layer is critical for the implementation of the private cloud. With multiple hosts accessing shared data, many of the use cases defined in the private cloud can be implemented. The EMC VNXe storage used in this solution provides high-performance data storage while maintaining high availability.
• Data Protection: The backup and recovery components of the solution provide data protection when the data in the primary system is deleted, damaged, or unusable.
Solution architecture provides details on all the components that make up the
reference architecture.
Virtualization
Overview
The virtualization layer is a key component of any server virtualization or private cloud
solution. It decouples the application resource requirements from the underlying
physical resources that serve them. This enables greater flexibility in the application
layer by eliminating hardware downtime for maintenance, and allows the system to
physically change without affecting the hosted applications. In a server virtualization
or private cloud use case, it enables multiple independent virtual machines to share
the same physical hardware, rather than being directly implemented on dedicated
hardware.
Microsoft Hyper-V
Microsoft Hyper-V is a Windows Server role that was introduced in Windows Server
2008. Hyper-V virtualizes computer hardware resources, such as CPU, memory,
storage, and networking. This transformation creates fully functional virtual machines
that run their own operating systems and applications like physical computers.
Hyper-V works with Failover Clustering and Cluster Shared Volumes (CSVs) to provide
high availability in a virtualized infrastructure. Live migration and live storage
migration enable seamless movement of virtual machines or virtual machine files
between Hyper-V servers or storage systems transparently and with minimal
performance impact.
Virtual FC ports
Windows Server 2012 R2 provides virtual FC ports within a Hyper-V guest operating
system. The virtual FC port uses the standard N-port ID virtualization (NPIV) process to
address the virtual machine WWNs within the Hyper-V host’s physical host bus
adapter (HBA). This provides virtual machines with direct access to external storage
arrays over FC, enables clustering of guest operating systems over FC, and offers an
important new storage option for the hosted servers in the virtual infrastructure.
Virtual FC in Hyper-V guest operating systems also supports related features, such as
virtual SANs, live migration, and multipath I/O (MPIO).
Prerequisites for virtual FC include:
• One or more installations of Windows Server 2012 R2 with the Hyper-V role
• One or more FC HBAs installed on the server, each with an appropriate HBA driver that supports virtual FC
• An NPIV-enabled SAN
Virtual machines using the virtual FC adapter must use Windows Server 2008,
Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2 as the
guest operating system.
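As a minimal sketch of this configuration (the virtual SAN name, VM name, and WWN values below are hypothetical examples), the Hyper-V PowerShell module can create a virtual SAN bound to a physical HBA and attach a virtual FC adapter to a guest:

# Define a virtual SAN backed by a physical FC HBA (WWNs are placeholders)
New-VMSan -Name "ProdSAN" -WorldWideNodeName "C003FF0000FF0000" -WorldWidePortName "C003FF5778E50002"
# Give the guest a virtual FC adapter connected to that SAN
Add-VMFibreChannelHba -VMName "VM01" -SanName "ProdSAN"
# Inspect the NPIV WWNs assigned to the virtual machine
Get-VMFibreChannelHba -VMName "VM01"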
Microsoft System Center Virtual Machine Manager
Microsoft System Center Virtual Machine Manager (SCVMM) is a centralized
management platform for the virtualized data center. SCVMM allows administrators
to configure and manage the virtualized host, networking, and storage resources, and
to create and deploy virtual machines and services to private clouds. SCVMM
simplifies provisioning, management, and monitoring in the Hyper-V environment.
High availability with Hyper-V Failover Clustering
The Windows Server 2012 Failover Clustering feature provides high availability for
Hyper-V. Availability is affected by both planned and unplanned downtime, and
Failover Clustering significantly increases the availability of virtual machines during
both. Configure Windows Server 2012 Failover Clustering on the Hyper-V hosts to
monitor virtual machine health and migrate virtual machines between cluster nodes.
The advantages of this configuration are:
• Enables migration of virtual machines to a different cluster node if the cluster node where they reside must be updated, changed, or rebooted.
• Allows other members of the Windows Failover Cluster to take ownership of the virtual machines if the cluster node where they reside suffers a failure or significant degradation.
• Minimizes downtime due to virtual machine failures. Windows Server Failover Clustering detects virtual machine failures and automatically takes steps to recover the failed virtual machine, allowing the virtual machine to be restarted on the same host server or migrated to a different host server.
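A minimal sketch of making an existing virtual machine highly available (the cluster, node, and VM names are hypothetical):

Import-Module FailoverClusters
# Register the VM as a clustered role so the cluster monitors and recovers it
Add-ClusterVirtualMachineRole -VMName "VM01" -Cluster "HVCluster01"
# Live-migrate the clustered VM to another node, with no downtime
Move-ClusterVirtualMachineRole -Name "VM01" -Node "Node2" -MigrationType Live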
Hyper-V Replica
Hyper-V Replica was introduced in Windows Server 2012 to provide asynchronous
virtual machine replication over the network from one Hyper-V host at a primary site
to another Hyper-V host at a replica site. Hyper-V replicas protect business
applications in the Hyper-V environment from downtime associated with an outage at
a single site.
Hyper-V Replica tracks the write operations on the primary virtual machine and
replicates the changes to the replica server over the network with HTTP and HTTPS.
The amount of network bandwidth required is based on the transfer schedule and
data change rate.
If the primary Hyper-V host fails, you can manually fail over the production virtual
machines to the Hyper-V hosts at the replica site. Manual failover brings the virtual
machines back to a consistent point from which they can be accessed with minimal
impact on the business. After recovery, the primary site can receive changes from the
replica site. You can perform a planned failback to manually revert the virtual
machines back to the Hyper-V host at the primary site.
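As a sketch, assuming a replica server that has already been authorized to receive replication (the server name and port are hypothetical), replication for one virtual machine can be enabled from PowerShell:

# Enable replication of VM01 to the replica site over HTTP (Kerberos, port 80)
Enable-VMReplication -VMName "VM01" -ReplicaServerName "replica01.contoso.local" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
# Send the initial copy, then monitor replication health
Start-VMInitialReplication -VMName "VM01"
Measure-VMReplication -VMName "VM01"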
Hyper-V snapshot
A Hyper-V snapshot creates a consistent point-in-time view of a virtual machine.
Snapshots can function as a source for backups or other use cases. Virtual machines do
not have to be running to take a snapshot. Snapshots are completely transparent to
the applications running on the virtual machine. A snapshot saves the point-in-time
status of the virtual machine, and enables users to revert the virtual machine to a
previous point in time if necessary.
Note: Snapshots require additional storage space. The amount of additional storage space
depends on the frequency of data change on the virtual machine and the number of
snapshots being retained.
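For example (the VM and snapshot names are hypothetical), a snapshot can be taken and later reverted from PowerShell:

# Take a point-in-time snapshot; the VM may be running or stopped
Checkpoint-VM -Name "VM01" -SnapshotName "Before-patching"
# List retained snapshots (each one consumes additional storage)
Get-VMSnapshot -VMName "VM01"
# Revert the VM to the saved point in time
Restore-VMSnapshot -Name "Before-patching" -VMName "VM01" -Confirm:$false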
Cluster-Aware Updating
Cluster-Aware Updating (CAU) was introduced in Windows Server 2012. It provides a
way of updating cluster nodes with little or no disruption. CAU transparently performs
the following tasks during the update process:
1. Puts one cluster node into maintenance mode and takes it offline (virtual machines are live-migrated to other cluster nodes).
2. Installs the updates.
3. Performs a restart if necessary.
4. Brings the node back online (migrated virtual machines are moved back to the original node).
5. Updates the next node in the cluster.
The node managing the update process is called the Orchestrator. The Orchestrator
can work in two modes:
• Self-updating mode: The Orchestrator runs on the cluster node being updated.
• Remote-updating mode: The Orchestrator runs on a standalone Windows operating system, and remotely manages the cluster update.
CAU is integrated with Windows Server Update Service (WSUS). PowerShell allows
automation of the CAU process.
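A minimal remote-updating sketch, assuming the CAU cmdlets are installed on a management host and a cluster named HVCluster01 (hypothetical):

# Preview which updates would be applied to each node
Invoke-CauScan -ClusterName "HVCluster01"
# Run one updating pass: drain, patch, reboot, and resume one node at a time
Invoke-CauRun -ClusterName "HVCluster01" -MaxRetriesPerNode 3 -RequireAllNodesOnline -Force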
EMC Storage Integrator
EMC Storage Integrator (ESI) is an agentless, free plug-in that enables
application-aware storage provisioning for Microsoft Windows Server applications,
Hyper-V, VMware, and XenServer environments. Administrators can provision block
and file storage for Microsoft Windows or Microsoft SharePoint sites by using wizards
in ESI.
ESI supports the following functions:
• Provisioning, formatting, and presenting drives to Windows servers
• Provisioning new cluster disks, and automatically adding them to the cluster
• Provisioning shared CIFS storage, and mounting it to Windows servers
• Provisioning SharePoint storage, sites, and databases in a single wizard
Compute
The choice of a server platform for a VSPEX infrastructure is not only based on the
technical requirements of the environment, but on the supportability of the platform,
existing relationships with the server provider, advanced performance, management
features, and many other factors. For this reason, VSPEX solutions are designed to
run on a wide variety of server platforms. Instead of requiring a specific number of
servers with a specific set of requirements, VSPEX documents the minimum
requirements for the number of processor cores, and the amount of RAM. This can be
implemented with two or twenty servers, and still be considered the same VSPEX
solution.
In the example shown in Figure 4, the compute layer requirements for a specific
implementation are 25 processor cores and 200 GB of RAM. One customer might
want to implement this by using white-box servers containing 16 processor cores,
and 64 GB of RAM, while another customer chooses a higher-end server with 20
processor cores and 144 GB of RAM.
Figure 4. Compute layer flexibility
The first customer needs four of the chosen servers, while the other customer needs
two.
Note: To enable high-availability at the compute layer, each customer needs one additional
server to ensure that the system has enough capability to maintain business operations
when a server fails.
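The server counts above follow from simple ceiling arithmetic, sketched below in PowerShell (the server specifications are the hypothetical examples from Figure 4; the final +1 implements the high-availability note):

$reqCores = 25; $reqRamGB = 200            # example workload requirement
# Customer 1: 16-core / 64 GB servers -> max(ceil(25/16), ceil(200/64)) = 4
$servers1 = [math]::Max([math]::Ceiling($reqCores / 16), [math]::Ceiling($reqRamGB / 64))
# Customer 2: 20-core / 144 GB servers -> max(ceil(25/20), ceil(200/144)) = 2
$servers2 = [math]::Max([math]::Ceiling($reqCores / 20), [math]::Ceiling($reqRamGB / 144))
"Customer 1: $($servers1 + 1) servers with HA; Customer 2: $($servers2 + 1) servers with HA"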
Use the following best practices in the compute layer:
• Use several identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies, which may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.
• If you implement high availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.
• Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single-server failures. This enables the implementation of minimal-downtime upgrades and tolerance for single unit failures.
Within the boundaries of these recommendations and best practices, the compute
layer for VSPEX can be flexible to meet your specific needs. Ensure that there are
sufficient processor cores, and RAM per core, to meet the needs of the target
environment.
Networking
Overview
The infrastructure network requires redundant network links for each Hyper-V host,
the storage array, the switch interconnect ports, and the switch uplink ports. This
configuration provides both redundancy and additional network bandwidth. This is a
required configuration regardless of whether the network infrastructure for the
solution already exists, or you are deploying it alongside other components of the
solution. Figure 5 depicts an example of this highly available network topology.
Figure 5. Example of highly available network design for block storage
This validated solution uses virtual local area networks (VLANs) to segregate network
traffic of various types to improve throughput, manageability, application separation,
high availability, and security.
For block, EMC unified storage platforms provide network high availability and
redundancy through two ports per storage processor. If a link is lost on a storage
processor front-end port, the link fails over to another port. All network traffic is
distributed across the active links.
For file, EMC unified storage platforms provide network high availability and
redundancy by using link aggregation. Link aggregation enables multiple active
Ethernet connections to appear as a single link with a single MAC address, and
potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol
(LACP) is configured on the VNXe array, combining multiple Ethernet ports into a
single virtual device. If a link is lost on an Ethernet port, the link fails over to another
port. All network traffic is distributed across the active links.
Storage
Overview
The storage layer is also a key component of any cloud infrastructure solution,
serving the data generated by applications and operating systems. A well-designed
storage layer increases storage efficiency and management flexibility, and reduces
total cost of ownership. In this VSPEX solution, EMC VNXe series arrays provide
features and performance to enable and enhance any virtualization environment.
EMC VNXe
The EMC VNX family is optimized for virtual applications, and delivers
industry-leading innovation and enterprise capabilities for file and block storage in a
scalable, easy-to-use solution. This next-generation storage platform combines
powerful and flexible hardware with advanced efficiency, management, and
protection software to meet the demanding needs of today's enterprises.
Intel Xeon processors power the VNXe series for intelligent storage that automatically
and efficiently scales in performance, while ensuring data integrity and security. It is
designed to meet the high performance, high-scalability requirements of midsize and
large enterprises.
Table 1 shows the customer benefits that are provided by the VNXe series.
Table 1. VNXe customer benefits

Feature: Next-generation unified storage, optimized for virtualized applications
Benefit: Tight integration with Microsoft Windows and System Center allows for advanced array features and centralized management

Feature: Capacity optimization features including compression, deduplication, thin provisioning, and application-consistent copies
Benefit: Reduced storage costs, more efficient use of resources, and easier recovery of applications

Feature: High availability, designed to deliver five 9s availability
Benefit: Higher levels of uptime and reduced outage risk

Feature: Automated tiering with FAST VP and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously
Benefit: More efficient use of storage resources without complicated planning and configuration

Feature: Simplified management with EMC Unisphere™ as a single management interface for all NAS, SAN, and replication needs
Benefit: Reduced management overhead and fewer toolsets required to manage the environment
Different software suites and packs are also available for the VNXe series, which
provide multiple features for enhanced protection and performance.
Software suites
The following VNXe software suites are available:
• FAST Suite—Automatically optimizes for the highest system performance and the lowest storage cost simultaneously.
• Security and Compliance Suite—Keeps data safe from changes, deletions, and malicious activity.
EMC VNXe Virtual Provisioning
EMC VNXe Virtual Provisioning™ enables organizations to reduce storage costs by
increasing capacity utilization, simplifying storage management, and reducing
application downtime. Virtual Provisioning also helps companies to reduce power
and cooling requirements and reduce capital expenditures.
Virtual Provisioning provides pool-based storage provisioning by implementing pool
LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that
maximizes the utilization of your storage by allocating storage only as needed. Thick
LUNs provide high and predictable performance for your applications. Both types of
LUN benefit from the ease-of-use features of pool-based provisioning.
Pools and pool LUNs are also the building blocks for advanced data services such as
FAST VP, VNXe Snapshots, and compression. Pool LUNs also support a variety of
additional features, such as LUN shrink, online expansion, and User Capacity
Threshold setting.
Virtual Provisioning allows you to expand the capacity of a storage pool from the
Unisphere GUI after disks are physically attached to the system. VNXe systems have
the ability to rebalance allocated data elements across all member drives to use new
drives after the pool is expanded. The rebalance function starts automatically and
runs in the background after an expand action. You can monitor the progress of a
rebalance operation from the Jobs Panel in Unisphere, as shown in Figure 6.
Figure 6. Storage pool rebalance progress
LUN expansion
Use pool LUN expansion to increase the capacity of existing LUNs. It allows for
provisioning larger capacity as business needs grow.
The VNXe series has the capability to expand a pool LUN without disrupting user
access. You can expand pool LUNs with a few simple clicks and the expanded
capacity is immediately available. However, you cannot expand a pool LUN if it is part
of a data-protection or LUN-migration operation. For example, snapshot LUNs or
migrating LUNs cannot be expanded.
For more detailed information about pool LUN expansion, refer to Virtual Provisioning
for the New VNX Series.
Alerting the user through the Capacity Threshold setting
You must configure proactive alerts when using a file system or storage pools based
on thin pools. Monitor these resources so that storage is available for provisioning
when needed and capacity shortages can be avoided.
Figure 7 explains why provisioning with thin pools requires monitoring.
Figure 7. Thin LUN space utilization
Monitor the following values for thin pool utilization:
• Total capacity is the total physical capacity available to all LUNs in the pool.
• Total allocation is the total physical capacity currently assigned to all pool LUNs.
• Subscribed capacity is the total host-reported capacity supported by the pool.
• Over-subscribed capacity is the amount of user capacity configured for LUNs that exceeds the physical capacity in a pool.
Total allocation must never exceed the total capacity, but if it nears that point, add
storage to the pools proactively before reaching a hard limit.
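A small worked sketch of these quantities (all numbers hypothetical):

$totalCapacityTB   = 10    # physical capacity available to all LUNs in the pool
$subscribedTB      = 16    # host-reported capacity supported by the pool
$totalAllocationTB = 8.5   # physical capacity currently assigned to pool LUNs
$oversubscribedTB  = [math]::Max($subscribedTB - $totalCapacityTB, 0)          # 6 TB
$percentFull = [math]::Round($totalAllocationTB / $totalCapacityTB * 100, 1)   # 85
"Over-subscribed: $oversubscribedTB TB; pool allocation at $percentFull percent"

At 85 percent allocated, a Percentage Full Threshold of 80 percent would already have raised an alert, leaving time to expand the pool before total allocation reaches total capacity.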
Figure 8 shows the Storage Pool Properties dialog box in Unisphere, which displays
parameters such as Available Space, Used Space, Subscription, Alert Threshold and
Total Space.
Figure 8. Examining storage pool space utilization
When storage pool capacity becomes exhausted, any requests for additional space
allocation on thin-provisioned LUNs fail. Applications attempting to write data to
these LUNs usually fail as well, and an outage is the likely result. To avoid this
situation, monitor pool utilization and configure alerts for when thresholds are
reached: set the Percentage Full Threshold to allow enough buffer to take remedial
action before an outage occurs. This alert is only active if there are one or more thin
LUNs in the pool, because thin LUNs are the only way to oversubscribe a pool. If the
pool only contains thick LUNs, the alert is not active because there is no risk of
running out of space due to oversubscription.
Windows Offloaded Data Transfer
Windows Offloaded Data Transfer (ODX) provides the ability to offload data transfer
from the server to the storage arrays. This feature is enabled by default in Windows
Server 2012. VNXe arrays are compatible with Windows ODX on Windows Server
2012.
ODX supports the following protocols:
• iSCSI
• Fibre Channel (FC)
• FC over Ethernet (FCoE)
• Server Message Block (SMB) 3.0
The following data-transfer operations currently support ODX:
• Transferring large amounts of data via Hyper-V Manager, such as creating a fixed-size VHD, merging a snapshot, or converting VHDs
• Copying files in File Explorer
• Using the Copy commands in Windows PowerShell
• Using the Copy commands in the Windows command prompt
Because ODX offloads the file transfer to the storage array, host CPU and network
utilization are significantly reduced. ODX minimizes latencies and improves the
transfer speed by using the storage array for data transfer. This is especially
beneficial for large files, such as database or video files. ODX is enabled by default in
Windows Server 2012, so when ODX-supported file operations occur, data transfers
are automatically offloaded to the storage array. The ODX process is transparent to
users.
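As a sketch, the documented registry switch can be queried to confirm that ODX is in effect on a host (0 means enabled; the value may be absent when the default is in effect):

# 0 (or absent) = ODX enabled, the Windows Server 2012 default; 1 = disabled
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
    -Name "FilterSupportedFeaturesMode" -ErrorAction SilentlyContinue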
EMC PowerPath
EMC PowerPath® is a host-based software package that provides automated data path
management and load-balancing capabilities for heterogeneous server, network, and
storage resources deployed in physical and virtual environments. It offers the
following benefits for the VSPEX Proven Infrastructure:
• Standardized data management across physical and virtual environments.
• Automated multipathing policies and load balancing to provide predictable and consistent application availability and performance across physical and virtual environments.
• Improved service-level agreements by eliminating application impact from I/O failures.
VNXe FAST Cache
VNXe FAST Cache enables flash drives to function as an expanded cache layer for the
array. FAST Cache is an array-wide, nondisruptive cache, available for both file and
block storage. Frequently accessed data is copied to the FAST Cache, and subsequent
reads and writes to the data chunk are serviced by FAST Cache. This enables
immediate promotion of highly active data to flash drives, which dramatically improves
the response time for the active data and reduces data hot spots that can occur
within a LUN. The FAST Cache feature is an optional component of this solution.
VNXe FAST VP
VNXe FAST VP can automatically tier data across multiple types of drives to leverage
differences in performance and capacity. FAST VP is applied at the block storage pool
level and automatically adjusts where data is stored based on how frequently it is
accessed. Frequently accessed data is promoted to higher tiers of storage, while
infrequently accessed data can be migrated to a lower tier for cost efficiency. This
rebalancing is part of a regularly scheduled maintenance operation.
VNXe file shares
In many environments it is important to have a common location to store files
accessed by many different individuals. This is implemented as CIFS or NFS file
shares from a file server. VNXe storage arrays can provide this service along with
centralized management, client integration, advanced security options, and efficiency
improvement features.
ROBO
Organizations with remote offices and branch offices (ROBO) often prefer to locate
data and applications close to the users in order to provide better performance and
lower latency. In these environments, IT departments need to balance the benefits of
local support with the need to maintain central control. Local systems and storage
should be easy for local personnel to administer, but should also support remote
management and flexible aggregation tools that minimize the demands on those
local resources. With VSPEX, you can accelerate the deployment of applications at
remote offices and branch offices. Customers can also leverage Unisphere Remote to
consolidate the monitoring, system alerts, and reporting of hundreds of locations
while maintaining simplicity of operation and unified storage functionality for local
managers.
Data Protection
Data protection, another important component in this VSPEX solution, provides
assurance by backing up data files or volumes on a defined schedule, and restoring
data from backup for recovery after a disaster.
Overview
EMC Data Protection is a smart method of backup. It consists of best-of-class,
integrated protection storage and software designed to meet backup and recovery
objectives now and in the future. With EMC market-leading protection storage, deep
data-source integration, and feature-rich data management services, you can deploy
an open, modular protection-storage architecture that allows you to scale while
lowering cost and complexity.
EMC Avamar deduplication
EMC Avamar provides fast, efficient backup and recovery through a complete
software and hardware solution. Equipped with integrated variable-length
deduplication technology, Avamar facilitates fast, daily full backups for virtual
environments, remote offices, enterprise applications, network-attached storage
(NAS) servers, and desktops/laptops. Learn more at http://www.emc.com/avamar.
EMC Data Domain deduplication storage systems
EMC Data Domain deduplication storage systems continue to revolutionize disk
backup, archiving, and disaster recovery with high-speed, inline deduplication for
backup and archive workloads. Learn more at http://www.emc.com/datadomain.
EMC RecoverPoint
EMC RecoverPoint is an enterprise-scale solution that protects application data on
heterogeneous SAN-attached servers and storage arrays. EMC RecoverPoint runs on a
dedicated appliance (RPA) and combines industry-leading continuous data protection
technology with a bandwidth-efficient, no-data-loss replication technology, allowing
it to protect data locally (continuous data protection, CDP), remotely (continuous
remote replication, CRR), or both (local and remote replication, CLR).
• RecoverPoint CDP replicates data within the same site or to a local bunker site some distance away; the data is transferred over FC.
• RecoverPoint CRR uses either FC or an existing IP network to send the data snapshots to the remote site, using techniques that preserve write order.
• In a CLR configuration, RecoverPoint replicates to both a local and a remote site simultaneously.
RecoverPoint uses lightweight splitting technology on the application server, in the
fabric or in the array, to mirror application writes to the RecoverPoint cluster.
RecoverPoint supports several types of write splitters:
• Array-based
• Intelligent fabric-based
• Host-based
Other technologies
In addition to the required technical components for EMC VSPEX solutions, other
items may provide additional value depending on the specific use case.
EMC XtremCache
EMC XtremCache™ is a server flash caching solution that reduces latency and
increases throughput to improve application performance by using intelligent caching
software and PCIe flash technology.
Server-side flash caching for maximum speed
XtremCache performs the following functions to improve system performance:
• Caches the most frequently referenced data on the server-based PCIe card, putting the data closer to the application.
• Automatically adapts to changing workloads by determining the most frequently referenced data and promoting it to the server flash card. This means that the hottest (most active) data automatically resides on the PCIe card in the server for faster access.
• Offloads read traffic from the storage array, which frees greater processing power for other applications. While one application accelerates with XtremCache, the array performance for other applications remains the same or is slightly enhanced.
Write-through caching to the array for total protection
XtremCache accelerates reads and protects data by using a write-through cache to
the storage to deliver persistent high-availability, integrity, and disaster recovery.
Application agnostic
XtremCache is transparent to applications; there is no need to rewrite, retest, or
recertify to deploy XtremCache in the environment.
Minimum impact on system resources
Unlike other caching solutions on the market, XtremCache does not require a
significant amount of memory or CPU cycles, as all flash and wear-leveling
management are done on the PCIe card without using server resources. Unlike other
PCIe solutions, there is no significant overhead from using XtremCache on server
resources.
XtremCache creates the most efficient and intelligent I/O path from the application to
the datastore, which results in an infrastructure that is dynamically optimized for
performance, intelligence, and protection for both physical and virtual environments.
XtremCache active/passive clustering support
The configuration of XtremCache clustering scripts ensures that stale data is never
retrieved. The scripts use cluster management events to trigger a mechanism that
purges the cache. The XtremCache-enabled active/passive cluster ensures data
integrity, and accelerates application performance.
XtremCache performance considerations
XtremCache performance considerations include:
• On a write request, XtremCache first writes to the array, then to the cache, and then completes the application I/O.
• On a read request, XtremCache satisfies the request with cached data, or, when the data is not present, retrieves the data from the array, writes it to the cache, and then returns it to the application. The trip to the array can be on the order of milliseconds; therefore, the array limits how fast the cache can work. As the number of writes increases, XtremCache performance decreases.
• XtremCache is most effective for workloads with a 70 percent or greater read/write ratio, with small, random I/O (8 KB is ideal). I/O greater than 128 KB is not cached in XtremCache 1.5.
Note: For more information, refer to the Introduction to EMC XtremCache White Paper.
Chapter 4
Solution Architecture Overview
This chapter presents the following topics:
Overview
Solution architecture
Server configuration guidelines
Network configuration guidelines
Storage configuration guidelines
High availability and failover
Validation test profile
EMC Data Protection and configuration guidelines
Sizing guidelines
Reference workload
Applying the reference workload
Overview
This chapter provides a comprehensive guide to the major architectural aspects of
this solution. Server capacity is presented in generic terms for required minimums of
CPU, memory, and network resources; the customer is free to select the server and
networking hardware that meet or exceed the stated minimums. The specified
storage architecture, along with a system meeting the server and network
requirements outlined, has been validated by EMC to provide high levels of
performance while delivering a highly available architecture for your private cloud
deployment.
Each VSPEX Proven Infrastructure balances the storage, network, and compute
resources needed for a number of virtual machines validated by EMC. In practice,
each virtual machine has its own set of requirements that rarely fit a predefined idea
of a virtual machine. In any discussion about virtual infrastructures, it is important to
first define a reference workload. Not all servers perform the same tasks, and it is
impractical to build a reference that takes into account every possible combination of
workload characteristics.
Solution architecture
Overview
The VSPEX solution for Microsoft Hyper-V Private Cloud with VNXe validates the
configuration for up to 200 virtual machines.
Note: VSPEX uses the concept of a reference workload to describe and define a virtual
machine. Therefore, one physical or virtual server in an existing environment may not be
equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the
reference to arrive at an appropriate point of scale. This document describes the process in
Applying the reference workload.
Logical architecture
The architecture diagrams in this section show the layout of the major components in
this solution. Two types of storage, block-based and file-based, are shown in the
following diagrams.
Figure 9 shows the infrastructure validated with block-based storage, where an 8 Gb
FC or 10 Gb iSCSI SAN carries storage traffic, and 10 GbE carries management and
application traffic.
Figure 9. Logical architecture for block storage
Figure 10 characterizes the infrastructure validated with file-based storage, where 10
GbE carries storage traffic and all other traffic.
Figure 10. Logical architecture for file storage
Key components
The architectures include the following key components:
Microsoft Hyper-V—Provides a common virtualization layer to host a server
environment. The specifics of the validated environment are listed in Table 2.
Hyper-V provides highly available infrastructure through features such as:
• Live Migration—Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption.
• Live Storage Migration—Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption.
• Failover Clustering High Availability (HA)—Detects and provides rapid recovery for a failed virtual machine in a cluster.
• Dynamic Optimization (DO)—Provides load balancing of computing capacity in a cluster with support of SCVMM.
Microsoft System Center Virtual Machine Manager (SCVMM)—This solution does not
require SCVMM. However, if deployed, it simplifies provisioning, management, and
monitoring of the Hyper-V environment.
Microsoft SQL Server 2012—SCVMM, if used, requires a SQL Server database
instance to store configuration and monitoring details.
DNS Server—Use DNS services for the various solution components to perform name
resolution. This solution uses Microsoft DNS service running on Windows Server 2012
R2.
Active Directory Server—Various solution components require Active Directory (AD)
services to function properly. The Microsoft AD service runs on Windows Server
2012 R2.
IP network—A standard Ethernet network carries all network traffic with redundant
cabling and switching. A shared IP network carries user and management traffic.
Storage network
The storage network is an isolated network that provides hosts with access to the
storage arrays. VSPEX offers different options for block-based and file-based storage.
Storage network for block
This solution provides two options for block-based storage networks:
• Fibre Channel (FC) is a set of standards that define protocols for performing high-speed serial data transfer. FC provides a standard data transport frame among servers and shared storage devices.
• 10 Gb Ethernet (iSCSI) enables the transport of SCSI blocks over a TCP/IP network. iSCSI works by encapsulating SCSI commands into TCP packets and sending the packets over the IP network.
Storage network for file
With file-based storage, a private, non-routable 10 GbE subnet carries the storage
traffic.
VNXe storage array
The VSPEX Private Cloud configuration begins with the VNXe series storage arrays,
including:
• EMC VNXe3200 array—Provides storage by presenting either Cluster Shared Volumes (for block) or CIFS (SMB 3.0) shares (for file) to Hyper-V hosts for up to 200 virtual machines.
VNXe series storage arrays include the following components:
• Storage processors (SPs) support block data with UltraFlex I/O technology that supports FC, iSCSI, NFS, and CIFS protocols. The SPs provide access for all external hosts, and for the file side of the VNXe array.
• The standby power supply (SPS) is 1U in size and provides enough power to each SP to ensure that any data in flight destages to the vault area in the event of a power failure. This ensures that no writes are lost. Upon restart of the array, the pending writes are reconciled and made persistent.
• Disk array enclosures (DAEs) house the drives used in the array.
Hardware resources
Table 2 lists the hardware used in this solution.
Table 2. Solution hardware

Microsoft Hyper-V servers
• CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core. For 200 virtual machines: 200 vCPUs, and a minimum of 50 physical processor cores.
• Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per Hyper-V host. For 200 virtual machines: a minimum of 400 GB RAM, plus 2 GB for each physical server.
• Network (block): 2 x 10 GbE NICs per server; 2 HBAs per server.
• Network (file): 4 x 10 GbE NICs per server.
Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement Microsoft Hyper-V HA and meet the listed minimums.

Network infrastructure
• Minimum switching capacity (block): 2 physical switches; 2 x 10 GbE ports per Hyper-V server; 1 x 1 GbE port per storage processor for management; 2 ports per Hyper-V server for the storage network; 2 ports per SP for storage data.
• Minimum switching capacity (file): 2 physical switches; 4 x 10 GbE ports per Hyper-V server; 1 x 1 GbE port per storage processor for management; 2 x 10 GbE ports per storage processor for data.

EMC Backup
• Avamar: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds White Paper.
• Data Domain: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds White Paper.

EMC VNXe series storage array
• Block, common: 1 x 1 GbE interface per SP for management; 2 front-end Fibre Channel ports per SP; system disks for VNXe OE.
• Block, for 200 virtual machines: EMC VNXe3200; 65 x 600 GB 10k rpm 2.5-inch Serial-Attached SCSI (SAS) drives; 2 x 200 GB flash drives (optional); 4 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare (optional).
• File, common: 2 x 10 GbE interfaces per SP; 1 x 1 GbE interface per SP for management; system disks for VNXe OE.
• File, for 200 virtual machines: EMC VNXe3200; 65 x 600 GB 10k rpm 2.5-inch SAS drives; 2 x 200 GB flash drives (optional); 2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare (optional).

Shared infrastructure
In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document.
If implemented without existing infrastructure, add the following:
• 2 physical servers
• 16 GB RAM per server
• 4 processor cores per server
• 2 x 1 GbE ports per server
Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

Note: The solution recommends a 10 Gb network; an equivalent 1 Gb network infrastructure may be used as long as the underlying requirements around bandwidth and redundancy are fulfilled.
Software resources
Table 3 lists the software used in this solution.
Table 3. Solution software

Microsoft Hyper-V (Microsoft Windows Server): Windows Server 2012 R2 Datacenter Edition (Datacenter Edition is necessary to support the number of virtual machines in this solution)
Microsoft System Center Virtual Machine Manager: Version 2012 R2
Microsoft SQL Server: Version 2012 Enterprise Edition (Note: Any supported database for SCVMM is acceptable.)
EMC VNXe OE: 8.0
EMC Storage Integrator (ESI): Check for latest version
EMC PowerPath: Check for latest version
EMC Avamar (Next-Generation Backup): 6.1 SP1
EMC Data Domain OS (Next-Generation Backup): 5.2
Base operating system for virtual machines (used for validation, not required for deployment): Microsoft Windows Server 2012 R2 Datacenter Edition
Server configuration guidelines
Overview
When designing and ordering the compute or server layer of the VSPEX solution,
several factors may impact the final purchase. From a virtualization perspective, if a
system workload is well understood, features such as Dynamic Memory and Smart
Paging can reduce the aggregate memory requirement.
If the virtual machine pool does not have a high level of peak or concurrent usage,
reduce the number of vCPUs. Conversely, if the applications being deployed are
highly computational in nature, increase the number of CPUs and memory purchased.
Current VSPEX sizing guidelines specify a virtual CPU core to physical CPU core ratio
of 4:1 (for Ivy Bridge or newer processors, use a ratio of 8:1). This ratio was based
upon an average sampling of CPU technologies available at the time of testing. As
CPU technologies advance, OEM server vendors that are VSPEX partners may suggest
different (normally higher) ratios. Follow the updated guidance supplied by your OEM
server vendor.
Table 4 lists the hardware resources that are used for the compute layer.
Table 4. Hardware resources for compute layer

Microsoft Hyper-V servers
• CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core. For 200 virtual machines: 200 vCPUs, and a minimum of 50 physical processor cores.
• Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per Hyper-V host. For 200 virtual machines: a minimum of 400 GB RAM, plus 2 GB for each physical server.
• Network (block): 2 x 10 GbE NICs per server; 2 HBAs per server.
• Network (file): 4 x 10 GbE NICs per server.
Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement Hyper-V HA and meet the listed minimums.
Hyper-V memory virtualization
Microsoft Hyper-V has a number of advanced features to maximize performance, and
overall resource utilization. The most important features relate to memory
management. This section describes some of these features, and the items to
consider when using these features in the VSPEX environment.
In general, virtual machines on a single hypervisor consume memory as a pool of
resources, as shown in Figure 11.
Figure 11. Hypervisor memory consumption
Understanding the technologies in this section enhances this basic concept.
Dynamic Memory
Dynamic Memory was introduced in Windows Server 2008 R2 SP1 to increase
physical memory efficiency by treating memory as a shared resource, and
dynamically allocating it to virtual machines. The amount of memory used by each
virtual machine is adjustable at any time. Dynamic Memory reclaims unused memory
from idle virtual machines, which allows more virtual machines to run at any given
time. In Windows Server 2012 R2, Dynamic Memory enables administrators to
dynamically increase the maximum memory available to virtual machines.
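As a sketch (the VM name and sizes are hypothetical), Dynamic Memory is configured per virtual machine with Set-VMMemory:

# Enable Dynamic Memory with startup, minimum, and maximum values;
# switching memory modes requires the VM to be powered off
Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $true `
    -StartupBytes 2GB -MinimumBytes 1GB -MaximumBytes 8GB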
Smart Paging
Even with Dynamic Memory, Hyper-V allows more virtual machines than the available
physical memory can support. In most cases, there is a memory gap between
minimum memory and startup memory. Smart Paging is a memory management
technique that uses disk resources as temporary memory replacement. It swaps out
less-used memory to disk storage, and swaps in when needed. Performance
degradation is a potential drawback of Smart Paging. Hyper-V continues to use the
guest paging when the host memory is oversubscribed because it is more efficient
than Smart Paging.
Non-Uniform Memory Access
Non-Uniform Memory Access (NUMA) is a multi-node computer technology that
enables a CPU to access remote-node memory. This type of memory access degrades
performance, so Windows Server 2012 R2 employs a process known as processor
affinity, which pins threads to a single CPU to avoid remote-node memory access. In
previous versions of Windows, this feature was only available to the host. Windows
Server 2012 R2 extends this functionality to the virtual machines, which provides
improved performance in symmetrical multiprocessor (SMP) environments.
Memory configuration guidelines
The memory configuration guidelines take into account Hyper-V memory overhead,
and the virtual machine memory settings.
Hyper-V memory overhead
Virtualized memory has some associated overhead, which includes the memory
consumed by Hyper-V, the parent partition, and additional overhead for each virtual
machine. Leave at least 2 GB memory for the Hyper-V parent partition in this solution.
Virtual machine memory
In this solution, each virtual machine is assigned 2 GB of memory in fixed (static) mode.
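A one-line sketch of that per-virtual-machine setting (the VM name is hypothetical):

# Assign a fixed 2 GB, matching the validated configuration (VM must be off)
Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $false -StartupBytes 2GB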
Network configuration guidelines
Overview
This section provides guidelines for setting up a redundant, highly available network
configuration. The guidelines outlined in Table 5 consider jumbo frames, VLANs, and
LACP on EMC unified storage.
Table 5. Hardware resources for network

Network infrastructure
• Minimum switching capacity (block): 2 physical switches; 2 x 10 GbE ports per Hyper-V server; 1 x 1 GbE port per storage processor for management; 2 ports per Hyper-V server for the storage network; 2 ports per SP for storage data.
• Minimum switching capacity (file): 2 physical switches; 4 x 10 GbE ports per Hyper-V server; 1 x 1 GbE port per storage processor for management; 2 x 10 GbE ports per storage processor for data.
Note: The solution may use a 1 GbE network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.
VLAN
Isolate the network traffic so that the traffic between hosts and storage, hosts and
clients, and management traffic all move over isolated networks. In some cases,
physical isolation may be required for regulatory or policy compliance reasons; but in
many cases logical isolation with VLANs is sufficient.
This solution calls for a minimum of three VLANs, for the following usage:
• Client access
• Storage (for iSCSI or SMB only)
• Management
Figure 12 depicts the VLANs and the network connectivity requirements for a block-based VNXe array.
Figure 12. Required networks for block storage
Figure 13 depicts the VLANs and the network connectivity requirements for a file-based VNXe array.
Figure 13. Required networks for file storage
The client access network is for users of the system, or clients, to communicate with
the infrastructure. The storage network provides communication between the
compute layer and the storage layer. Administrators use the management network as
a dedicated way to access the management connections on the storage array,
network switches, and hosts.
Note: Some best practices call for additional network isolation for cluster traffic,
virtualization layer communication, and other features. Implement these additional
networks if necessary.
Enabling jumbo frames (iSCSI or SMB only)
This solution recommends setting the MTU to 9,000 (jumbo frames) for efficient
storage and virtual machine migration traffic. Refer to the switch vendor guidelines to
enable jumbo frames for storage and host ports on the switches.
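On the Hyper-V hosts, for example, jumbo frames can be enabled per adapter from PowerShell (the adapter name below is hypothetical, and the registry keyword and value vary by NIC driver); the same MTU must be configured end to end on the switch and array ports:

# Set a ~9000-byte MTU on the storage-facing NIC (value per vendor driver)
Set-NetAdapterAdvancedProperty -Name "Storage-NIC1" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014
# Confirm the effective setting
Get-NetAdapterAdvancedProperty -Name "Storage-NIC1" -RegistryKeyword "*JumboPacket"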
Enabling link aggregation (SMB only)
A link aggregation resembles an Ethernet channel, but uses the LACP IEEE 802.3ad
standard. The IEEE 802.3ad standard supports link aggregations with two or more
ports. All ports in the aggregation must have the same speed and be full duplex. In
this solution, LACP is configured on the VNXe, combining multiple Ethernet ports into
a single virtual device. If a link is lost on an Ethernet port, the link fails over to another
port. All network traffic is distributed across the active links.
Storage configuration guidelines
This section provides guidelines for setting up the storage layer of the solution to
provide high availability and the expected level of performance.
Overview
Hyper-V allows more than one method of using storage when hosting virtual
machines. The tested solutions described below use different protocols, FC or iSCSI
(for block) and CIFS (for file), and the storage layout described adheres to all current
best practices. A customer or architect with the necessary training and background
can make modifications based upon their understanding of the system usage and
load, if required. However, the building blocks described in this document ensure
acceptable performance. The VSPEX storage building blocks section provides specific
recommendations for the customization.
Table 6 lists hardware resources for storage.
Table 6. Hardware resources for storage

EMC VNXe series storage array
• Block, common: 1 x 1 GbE interface per SP for management; 2 front-end Fibre Channel ports per SP; system disks for VNXe OE.
• Block, for 200 virtual machines: EMC VNXe3200; 65 x 600 GB 10k rpm 2.5-inch SAS drives; 2 x 200 GB flash drives (optional); 2 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare (optional).
• File, common: 2 x 10 GbE interfaces per SP; 1 x 1 GbE interface per SP for management; system disks for VNXe OE.
• File, for 200 virtual machines: EMC VNXe3200; 65 x 600 GB 10k rpm 2.5-inch SAS drives; 2 x 200 GB flash drives (optional); 2 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare (optional).
Hyper-V storage virtualization for VSPEX
This section provides guidelines to set up the storage layer of the solution to provide
high-availability and the expected level of performance.
Windows Server 2012 Hyper-V and Failover Clustering use Cluster Shared Volumes v2
and VHDX features to virtualize storage presented from an external shared storage
system to host virtual machines. In Figure 14, the storage array presents either
block-based LUNs (as CSVs) or file-based CIFS shares (as SMB shares) to the Windows
hosts to host virtual machines.
Figure 14. Hyper-V virtual disk types
CIFS
Windows Server 2012 R2 supports using CIFS (SMB 3.0) file shares as shared storage
for a Hyper-V virtual machine.
CSV
A Cluster Shared Volume (CSV) is a shared disk containing a New Technology File
System (NTFS) volume that is made accessible by all nodes of a Windows Failover
Cluster. It can be deployed over any SCSI-based local or network storage.
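A minimal sketch of promoting an available clustered disk to a CSV (the disk and cluster names are hypothetical):

# Add the disk to Cluster Shared Volumes; it then appears on every node
# under C:\ClusterStorage\ for virtual machine placement
Add-ClusterSharedVolume -Name "Cluster Disk 1" -Cluster "HVCluster01"
Get-ClusterSharedVolume -Cluster "HVCluster01"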
Pass Through
Windows Server 2012 also supports pass-through disks, which allow a virtual machine
to access a physical disk mapped to the host that does not have a volume configured
on it.
SMB 3.0 (file-based storage only)
The SMB protocol is the file-sharing protocol that is used by default in Windows. With
the introduction of Windows Server 2012, it provides a vast set of new SMB features
with an updated (SMB 3.0) protocol. Some of the key features available with Windows
Server 2012 SMB 3.0 are:
• SMB Transparent Failover
• SMB Scale Out
• SMB Multichannel
• SMB Direct
• SMB Encryption
• VSS for SMB file shares
• SMB Directory Leasing
• SMB PowerShell
With these new features, SMB 3.0 offers richer capabilities that, when combined,
provide organizations with a high-performance storage alternative to traditional Fibre
Channel storage solutions at a lower cost.
Note: For more details about SMB 3.0, refer to Chapter 3.
ODX
Offloaded Data Transfer (ODX) is a feature of the storage stack in Microsoft Windows
Server 2012 R2 that gives users the ability to use the investment in external storage
arrays to offload data transfers from the server to the storage arrays. When used with
storage hardware that supports the ODX feature, file copy operations are initiated by
the host, but performed by the storage device. ODX eliminates the data transfer
between the storage and the Hyper-V hosts by using a token-based mechanism for
reading and writing data within storage arrays and reduces the load on your network
and hosts.
Using ODX helps to enable rapid cloning and migration of virtual machines. Because the file transfer is offloaded to the storage array when using ODX, host resource usage, such as CPU and network, is significantly reduced. By maximizing the use of the storage array, ODX minimizes latencies and improves the transfer speed of large files, such as database or video files.
When performing file operations that are supported by ODX, data transfers are
automatically offloaded to the storage array and are transparent to users. ODX is
enabled by default in Windows Server 2012 R2.
VHDX
Hyper-V in Windows Server 2012 R2 contains an update to the VHD format called
VHDX, which has much larger capacity and built-in resiliency. The main features of
the VHDX format are:
 Support for virtual hard disk storage with a capacity of up to 64 TB.
 Additional protection against data corruption during power failures by logging updates to the VHDX metadata structures.
 Optimal structure alignment of the virtual hard disk format to suit large sector disks.
The VHDX format also has the following features:
 Larger block size for dynamic and differential disks, which enables the disks to better meet the needs of the workload.
 A 4 KB logical sector virtual disk that enables increased performance when used by applications and workloads that are designed for 4 KB sectors.
 The ability to store custom metadata about the files that the user might want to record, such as the operating system version or applied updates.
 Space reclamation features that can result in smaller file size and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached storage or SCSI disks and TRIM-compatible hardware).

VSPEX storage building blocks
Sizing the storage system to meet virtual server IOPS is a complicated process. When
I/O reaches the storage array, several components such as the SPs, back-end
dynamic random access memory (DRAM) cache, FAST Cache or FAST VP (if used), and
disks serve that I/O. Customers must consider various factors when planning and
scaling their storage system to balance capacity, performance, and cost for their
applications.
VSPEX uses a building block approach to reduce this complexity. A building block is a
set of disk spindles that can support a certain number of virtual servers in the VSPEX
architecture. Each building block combines several disk spindles to create a storage
pool that supports the needs of the private cloud environment.
VSPEX solutions have been engineered to provide a variety of sizing configurations
which afford flexibility when designing the solution. Customers can start out by
deploying smaller configurations and scale up as their needs grow. At the same time,
customers can avoid over-purchasing by choosing a configuration that closely meets
their needs. To accomplish this, VSPEX solutions can be deployed using one or both
of the scale-points below to obtain the ideal configuration while guaranteeing a given
performance level.
Building block for 15 virtual servers
The first building block can contain up to 15 virtual servers, with five SAS drives in a
storage pool, as shown in Figure 15.
Figure 15. Building block for 15 virtual servers
This is the smallest building block qualified for the VSPEX architecture. This building
block can be expanded by adding five SAS drives and allowing the pool to restripe to
add support for 15 more virtual servers.
Building block for 125 virtual servers
The second building block can contain up to 125 virtual servers. It contains 40 SAS
drives, as shown in Figure 16. This figure also shows the four drives required for the
VNXe operating system. The preceding sections outline an approach to grow from 15
virtual machines in a pool to 125 virtual machines in a pool.
Figure 16. Building block for 125 virtual servers
Implement this building block with all of the resources in the pool initially, or expand
the pool over time as the environment grows. Table 7 lists the flash and SAS
requirements in a pool for different numbers of virtual servers.
Table 7. Number of disks required for different numbers of virtual machines

Virtual servers    SAS drives
15                 5
30                 10
45                 15
60                 20
75                 25
90                 30
105                35
120                40
125                40*
Note: Due to increased efficiency with larger stripes, the building block with 40 SAS drives
can support up to 125 virtual servers.
To grow the environment beyond 125 virtual servers, create another storage pool
using the building block method described here. To reach the tested maximum scale
of 200 virtual servers, the second pool should contain 25 disks. Configure the new
pool as described above.
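The building block arithmetic lends itself to a quick calculation. The following Python sketch is illustrative only (the helper name and structure are not part of the validated solution); it reproduces Table 7 and the two-pool layout for the 200-server maximum:

    import math

    SERVERS_PER_BLOCK = 15   # each 5-drive building block supports 15 servers
    DRIVES_PER_BLOCK = 5
    POOL_MAX_SERVERS = 125   # a full pool supports up to 125 servers
    POOL_MAX_DRIVES = 40     # larger stripes allow 125 servers on 40 drives

    def sas_drives_per_pool(servers):
        """Return the SAS drive count for each storage pool needed."""
        pools = []
        while servers > 0:
            in_pool = min(servers, POOL_MAX_SERVERS)
            blocks = math.ceil(in_pool / SERVERS_PER_BLOCK)
            pools.append(min(blocks * DRIVES_PER_BLOCK, POOL_MAX_DRIVES))
            servers -= in_pool
        return pools

    print(sas_drives_per_pool(200))  # [40, 25], matching the tested maximum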
VSPEX Private Cloud validated maximums
VSPEX Private Cloud configurations are validated on the VNXe3200 platform. Each platform has different capabilities in terms of processors, memory, and disks. For
each array, there is a recommended maximum VSPEX private cloud configuration. In
addition to the VSPEX private cloud building blocks, each storage array must contain
the drives used for the VNXe Operating Environment (OE), and hot spare disks for the
environment.
Note: Allocate at least one hot spare for every 30 disks of a given type and size.
VNXe3200
VNXe3200 is validated for up to 200 virtual servers. Figure 17 shows a typical
configuration for that maximum scale.
Figure 17. Storage layout for 200 virtual machines using VNXe3200
This configuration uses the following storage layout:
 Forty 600 GB SAS disks are allocated to a block-based storage pool for 125 virtual machines.
 Twenty-five 600 GB SAS disks are allocated to a second pool for 75 virtual machines.
 Three 600 GB SAS disks are configured as hot spares.
 For block storage, allocate at least two LUNs from each pool to the Hyper-V Failover Cluster to serve as CSVs.
 For file storage, allocate at least two SMB shares from each pool to the Hyper-V Failover Cluster for the virtual servers.
 Optionally, configure two 200 GB flash drives for FAST VP in each pool.
 Optionally, configure one 200 GB flash drive as a hot spare.
 Optionally, configure flash drives as FAST Cache (up to 400 GB) in the array. LUNs or storage pools hosting virtual machines with higher-than-average I/O requirements can benefit from the FAST Cache feature. These drives are an optional part of the solution, and additional licenses may be required to use the FAST Suite.
Using this configuration, the VNXe3200 can support 200 virtual servers as defined in
the Reference workload section.
Conclusion
The scale levels listed in Figure 18 highlight the entry points and supported maximum values for the arrays in the VSPEX Private Cloud environment. The entry
points represent optimal model demarcations in terms of the number of virtual
machines within the environment. This helps you to determine which VNXe array to
choose based on your requirements. You can choose to configure any of the listed
arrays with a smaller number of virtual machines than the maximum values
supported by using the building block approach described earlier.
Figure 18. Maximum scale levels and entry points of different arrays
High availability and failover
This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, business operations survive single-unit failures with little or no impact.
Overview

Virtualization layer
Configure high availability in the virtualization layer, and configure the hypervisor to automatically restart failed virtual machines. Figure 19 illustrates the hypervisor layer responding to a failure in the compute layer.
Figure 19. High availability on the virtualization layer
By implementing high availability on the virtualization layer, even in the event of a hardware failure, the infrastructure attempts to keep as many services running as possible.
Compute layer
While the choice of servers to implement in the compute layer is flexible, use
enterprise class servers designed for the data center. This type of server has
redundant power supplies, as shown in Figure 20. Connect these servers to separate
power distribution units (PDUs) in accordance with your server vendor’s best
practices.
Figure 20. Redundant power supplies
To configure HA in the virtualization layer, configure the compute layer with enough
resources to meet the needs of the environment, even with a server failure, as
demonstrated in Figure 19.
Network layer
The advanced networking features of VNXe series provide protection against network
connection failures at the array. Each Windows host has multiple connections to user
and storage Ethernet networks to guard against link failures, as shown in Figure 21.
Spread these connections across multiple Ethernet switches to guard against
component failure in the network.
Figure 21. Network layer high availability (VNXe)
Ensure that there is no single point of failure (SPOF), so that the compute layer can access storage and communicate with users even if a component fails.
Storage layer
The VNXe series is designed for five 9s availability by using redundant components throughout the array. All of the array components are capable of continued operation
in case of hardware failure. The RAID disk configuration on the array provides
protection against data loss caused by individual disk failures, and the available hot
spare drives can be dynamically allocated to replace a failing disk, as shown in Figure
22.
Figure 22. VNXe series HA components
EMC storage arrays support HA by default. When configured according to the
directions in their installation guides, no single unit failures result in data loss or
unavailability.
Validation test profile

Profile characteristics
The VSPEX solution was validated with the environment profile described in Table 8.

Table 8. Profile characteristics

Profile characteristic                                        Value
Number of virtual machines                                    200
Virtual machine OS                                            Windows Server 2012 R2 Datacenter Edition
Processors per virtual machine                                1
Number of virtual processors per physical CPU core            4*
RAM per virtual machine                                       2 GB
Average storage available for each virtual machine            100 GB
Average IOPS per virtual machine                              25 IOPS
Number of LUNs or CIFS shares to store virtual machine disks  2 per storage pool
Number of virtual machines per LUN or CIFS share              65 or 75 per LUN or CIFS share
Disk and RAID type for LUNs or CIFS shares                    RAID 5, 600 GB, 10k rpm, 2.5-inch SAS disks

*For Ivy Bridge or later processors, use 8 vCPUs per physical core.

Note: This solution was tested and validated with Windows Server 2012 R2 as the operating system for Hyper-V hosts and virtual machines; however, it also supports Windows Server 2008 R2 and Windows Server 2012. Hyper-V hosts on all supported versions of Windows Server use the same sizing and configuration.
EMC Data Protection and configuration guidelines
For complete EMC Data Protection guidelines for this VSPEX Private Cloud solution,
refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and
Implementation Guide.
Sizing guidelines
The following sections provide definitions of the reference workload used to size and
implement the VSPEX architectures. The sections include instructions on how to
correlate those reference workloads to customer workloads, and how that may
change the end delivery from the server and network perspective.
Modify the storage definition by adding drives for greater capacity and performance,
and by adding features such as FAST Cache and FAST VP. The disk layouts provide
support for the appropriate number of virtual machines at the defined performance
level and for typical operations such as snapshots. Decreasing the number of
recommended drives or stepping down an array type can result in lower IOPS per
virtual machine, and a reduced user experience caused by higher response time.
Reference workload
Overview
When you move an existing server to a virtual infrastructure, you can gain efficiency
by right-sizing the virtual hardware resources assigned to that system.
Each VSPEX Proven Infrastructure balances the storage, network, and compute
resources needed for a set number of virtual machines, as validated by EMC. In
practice, each virtual machine has its own requirements that rarely fit a pre-defined
idea of a virtual machine. In any discussion about virtual infrastructures, first define a
reference workload. Not all servers perform the same tasks, and it is impractical to
build a reference that considers every possible combination of workload
characteristics.
Defining the reference workload
To simplify the discussion, this section presents a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can determine which reference architecture to choose.
For the VSPEX solutions, the reference workload is a single virtual machine. Table 9
lists the characteristics of this virtual machine.
Table 9. Virtual machine characteristics

Characteristic                                        Value
Virtual machine operating system                      Microsoft Windows Server 2012 R2 Datacenter Edition
Virtual processors per virtual machine                1
RAM per virtual machine                               2 GB
Available storage capacity per virtual machine        100 GB
I/O operations per second (IOPS) per virtual machine  25
I/O pattern                                           Random
I/O read/write ratio                                  2:1
This specification for a virtual machine does not represent any specific application.
Rather, it represents a single common point of reference to measure other virtual
machines.
Applying the reference workload
Overview
When you consider an existing server for movement into a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system.
The solution creates a pool of resources that are sufficient to host a target number of
reference virtual machines with the characteristics shown in Table 9 on page 66. The
customer virtual machines may not exactly match the specifications. In that case,
define a single specific customer virtual machine as the equivalent of some number
of reference virtual machines together, and assume these virtual machines are in use
in the pool. Continue to provision virtual machines from the resource pool until no
resources remain.
Example 1: Custom-built application
A small custom-built application server must move into this virtual infrastructure. The
physical hardware that supports the application is not fully utilized. A careful analysis
of the existing application reveals that the application can use one processor and needs 3 GB of memory to run normally. The I/O workload ranges from 4 IOPS at idle to a peak of 15 IOPS when busy. The entire application consumes about 30 GB of local hard drive storage.
Based on these numbers, the resource pool needs the following resources:
 CPU of one reference virtual machine
 Memory of two reference virtual machines
 Storage of one reference virtual machine
 I/Os of one reference virtual machine
In this example, an appropriate virtual machine uses the resources for two of the
reference virtual machines. If implemented on a VNXe3200 storage system which can
support up to 200 virtual machines, resources for 198 reference virtual machines
remain.
Example 2: Point-of-Sale system
The database server for a customer’s Point-of-Sale system must move into this virtual
infrastructure. It is currently running on a physical system with four CPUs and 16 GB
memory. It uses 200 GB storage and generates 200 IOPS during an average busy
cycle.
The requirements to virtualize this application are:
 CPUs of four reference virtual machines
 Memory of eight reference virtual machines
 Storage of two reference virtual machines
 I/Os of eight reference virtual machines
In this case, the appropriate virtual machine uses the resources of eight reference virtual machines. If implemented on a VNXe3200 storage system, which can
support up to 200 virtual machines, resources for 192 reference virtual machines
remain.
Example 3: Web server
The customer’s web server must move into this virtual infrastructure. It is currently
running on a physical system with two CPUs and 8 GB memory. It uses 25 GB storage
and generates 50 IOPS during an average busy cycle.
The requirements to virtualize this application are:
 CPUs of two reference virtual machines
 Memory of four reference virtual machines
 Storage of one reference virtual machine
 I/Os of two reference virtual machines
In this case, the appropriate virtual machine uses the resources of four reference virtual machines. If implemented on a VNXe3200 storage system, which can support up to 200 virtual machines, resources for 196 reference virtual machines remain.
Example 4: Decision-support database
The database server for a customer’s decision support system must move into this
virtual infrastructure. It is currently running on a physical system with 10 CPUs and 64
GB memory. It uses 5 TB storage and generates 700 IOPS during an average busy
cycle.
The requirements to virtualize this application are:
 CPUs of 10 reference virtual machines
 Memory of 32 reference virtual machines
 Storage of 52 reference virtual machines
 I/Os of 28 reference virtual machines
In this case, one virtual machine uses the resources of 52 reference virtual machines.
If implemented on a VNXe3200 storage system which can support up to 200 virtual
machines, resources for 148 reference virtual machines remain.
Summary of examples
These four examples illustrate the flexibility of the resource pool model. In all four
cases, the workloads reduce the amount of available resources in the pool. All four
examples can be implemented on the same virtual infrastructure with an initial
capacity for 200 reference virtual machines, and resources for 134 reference virtual
machines remain in the resource pool as shown in Figure 23.
Figure 23. Resource pool flexibility
In more advanced cases, there may be tradeoffs between memory and I/O or other
relationships where increasing the amount of one resource decreases the need for
another. In these cases, the interactions between resource allocations become highly
complex, and are beyond the scope of the document. Examine the change in resource
balance and determine the new level of requirements. Add these virtual machines to
the infrastructure with the method described in the examples.
Implementing the solution
Overview
This solution requires a set of hardware to be available for the CPU, memory, network,
and storage needs of the system. These are general requirements that are
independent of any particular implementation except that the requirements grow
linearly with the target level of scale. This section describes some considerations for
implementing the requirements.
Resource types
The solution defines the hardware requirements in terms of these basic resources:
 CPU resources
 Memory resources
 Network resources
 Storage resources
This section describes the resource types, their use in the solution, and key
implementation considerations in a customer environment.
CPU resources
The solution defines the number of CPU cores that are required, but not a specific
type or configuration. New deployments should use recent revisions of common
processor technologies. It is assumed that these perform as well as, or better than,
the systems used to validate the solution.
In any running system, monitor the utilization of resources and adapt as needed. The
reference virtual machine and required hardware resources in the solution assume
that there are four virtual CPUs for each physical processor core (4:1 ratio). (For Ivy
Bridge or later processors, use 8 vCPUs per physical core.) Usually, this provides an
appropriate level of resources for the hosted virtual machines; however, this ratio
may not be appropriate in all use cases. Monitor the CPU utilization at the hypervisor
layer to determine if more resources are required.
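As a simple illustration of this ratio (a sketch under the stated 4:1 assumption, not a sizing mandate), the physical core count can be derived as follows:

    import math

    def physical_cores_needed(virtual_cpus, vcpus_per_core=4):
        """Translate a virtual CPU count into physical cores at a given ratio."""
        return math.ceil(virtual_cpus / vcpus_per_core)

    # 200 single-vCPU reference virtual machines:
    print(physical_cores_needed(200))     # 50 cores at the default 4:1 ratio
    print(physical_cores_needed(200, 8))  # 25 cores for Ivy Bridge or later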
Memory resources
Each virtual server in the solution must have 2 GB of memory. In a virtual
environment, it is common to provision virtual machines with more memory than is
installed on the physical hypervisor server because of budget constraints. Memory
over-commitment assumes that each virtual machine does not use all its allocated
memory. Oversubscribing memory usage to some degree can make business sense. The administrator is responsible for proactively monitoring the oversubscription rate so that the bottleneck does not shift from the server to the storage subsystem through page file swapping.
This solution is validated with statically assigned memory and no over-commitment
of memory resources. If a real-world environment uses over-committed memory,
monitor the system memory utilization and associated page file I/O activity
consistently to ensure that a memory shortfall does not cause unexpected results.
Network resources
The solution outlines the minimum needs of the system. If additional bandwidth is
needed, add capability at both the storage array and the hypervisor host to meet the
requirements. The options for network connectivity on the server depend on the type
of server. The storage arrays have a number of included network ports, and can add
ports using EMC UltraFlex I/O modules.
For reference purposes in the validated environment, each virtual machine generates
25 IOPS with an average size of 8 KB. This means that each virtual machine is
generating at least 200 KB/s traffic on the storage network. For an environment rated
for 100 virtual machines, this comes out to a minimum of approximately 20 MB/sec.
This is well within the bounds of modern networks. However, this does not consider
other operations. For example, additional bandwidth is needed for:
 User network traffic
 Virtual machine migration
 Administrative and management operations
The requirements for each network depend on how it will be used. It is not practical to
provide precise numbers in this context. However, the network described in the
solution should be sufficient to handle average workloads for the previously
described use cases.
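To make the arithmetic above concrete, here is a minimal Python sketch (illustrative only) that estimates baseline storage network traffic from the reference workload averages, ignoring the user, migration, and management traffic noted above:

    def storage_traffic_mb_per_sec(vm_count, iops_per_vm=25, io_size_kb=8):
        """Estimate baseline storage network traffic in MB/s for a VM count."""
        return vm_count * iops_per_vm * io_size_kb / 1024

    print(round(storage_traffic_mb_per_sec(100), 1))  # ~19.5 MB/s (the ~20 MB/s cited above)
    print(round(storage_traffic_mb_per_sec(200), 1))  # ~39.1 MB/s at full scale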
Regardless of the network traffic requirements, always provide at least two physical network connections for each logical network so that a single link failure does not affect the availability of the system. Design the network so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload.
Storage resources
The storage building blocks described in this solution contain layouts for the disks
used in the system validation. Each layout balances the available storage capacity
with the performance capability of the drives. Consider a few factors when examining
storage sizing. Specifically, the array has a collection of disks assigned to a storage
pool. From that storage pool, provision CIFS shares to the Windows cluster. Each layer
has a specific configuration that is defined for the solution and documented in
Chapter 5.
It is acceptable to:
 Replace drives with larger-capacity drives of the same type and performance characteristics, or with higher-performance drives of the same type and capacity.
 Change the placement of drives in the drive shelves to comply with updated or new drive shelf arrangements.
 Increase the scale using the building blocks with larger numbers of drives, up to the limit defined in the VSPEX Private Cloud validated maximums section.
Observe the following best practices:
 Use the latest best practices guidance from EMC regarding drive placement within the shelf. Refer to Applied Best Practices Guide: EMC VNX Unified Best Practices for Performance.
 When expanding the capability of a storage pool using the building blocks described in this document, use the same type and size of drive in the pool. Create a new pool to use different drive types and sizes. This prevents uneven performance across the pool.
 Configure at least one hot spare for every type and size of drive on the system.
 Configure at least one hot spare for every 30 drives of a given type.
In other cases where there is a need to deviate from the proposed number and type of
drives specified, or the specified pool and datastore layouts, ensure that the target
layout delivers the same or greater resources to the system and conforms to EMC
published best practices.
Implementation summary
The requirements in the reference architecture are what EMC considers the minimum
set of resources to handle the workloads required based on the stated definition of a
reference virtual machine. In any customer implementation, the load of a system
varies over time as users interact with the system. However, if the customer virtual
machines differ significantly from the reference definition, and vary in the same
resource group, add more of that resource type to the system to compensate.
Quick assessment of customer environment
Overview
An assessment of the customer environment helps to ensure that you implement the
correct VSPEX solution. This section provides an easy-to-use worksheet to simplify
the sizing calculations and assess the customer environment.
First, summarize the applications planned for migration into the VSPEX private cloud.
For each application, determine the number of virtual CPUs, the amount of memory,
the required storage performance, the required storage capacity, and the number of
reference virtual machines required from the resource pool. Applying the reference
workload provides examples of this process.
Fill out a row in the worksheet for each application, as listed in Table 10.

Table 10. Blank worksheet row

Application         |                                       | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent reference virtual machines
Example application | Resource requirements                 |                    |             |      |               | N/A
                    | Equivalent reference virtual machines |                    |             |      |               |
Fill out the resource requirements for the application. The row requires inputs on four different resources:
 CPU
 Memory
 IOPS
 Capacity

CPU requirements
Optimizing CPU utilization is a significant goal for almost any virtualization project. A
simple view of the virtualization operation suggests a one-to-one mapping between
physical CPU cores and virtual CPU cores regardless of the physical CPU utilization. In
reality, consider whether the target application can effectively use all CPUs
presented.
Use a performance-monitoring tool, such as perfmon in Microsoft Windows, to examine the CPU utilization counter for each CPU. If the CPUs are utilized evenly, implement that number of virtual CPUs when moving into the virtual infrastructure. However, if some CPUs are used and some are not, consider decreasing the number of virtual CPUs required.
In any operation that involves performance monitoring, collect data samples for a
period of time that includes all operational use cases of the system. Use either the
maximum or 95th percentile value of the resource requirements for planning
purposes.
Memory requirements
Server memory plays a key role in ensuring application functionality and
performance. Therefore, each server process has different targets for the acceptable
amount of available memory. When moving an application into a virtual environment,
consider the current memory available to the system and monitor the free memory by
using a performance-monitoring tool, such as Microsoft Windows perfmon, to
determine memory efficiency.
In any operation involving performance monitoring, collect data samples for a period
of time that includes all operational use cases of the system. Use either the maximum
or 95th percentile value of the resource requirements for planning purposes.
Storage performance requirements

IOPS
The storage performance requirements for an application are usually the least
understood aspect of performance. Several components become important when
discussing the I/O performance of a system:
 The number of requests coming in, or IOPS.
 The size of the request, or I/O size. For example, a request for 4 KB of data is easier and faster to process than a request for 4 MB of data.
 The average I/O response time, or I/O latency.
The reference virtual machine calls for 25 IOPS. To monitor this on an existing system,
use a performance-monitoring tool such as Microsoft Windows perfmon. Perfmon
provides several counters that can help. The most common are:
 Logical Disk or Disk Transfers/sec
 Logical Disk or Disk Reads/sec
 Logical Disk or Disk Writes/sec
Note: At the time of publication, Windows perfmon does not provide counters to expose
IOPS and latency for CIFS-based VHDX storage. Monitor these areas from the VNXe array as
discussed in Chapter 7.
The reference virtual machine assumes a 2:1 read/write ratio. Use these counters to
determine the total number of IOPS, and the approximate ratio of reads to writes for
the customer application.
I/O size
The I/O size is important because smaller I/O requests are faster and easier to
process than large I/O requests. The reference virtual machine assumes an average
I/O request size of 8 KB, which is appropriate for a large range of applications. Most
applications use I/O sizes that are even powers of 2, such as 4 KB, 8 KB, 16 KB, or 32
KB. The performance counter does a simple average; it is common to see 11 KB or 15
KB instead of the actual I/O sizes.
The reference virtual machine assumes an 8 KB I/O size. If the average customer I/O
size is less than 8 KB, use the observed IOPS number. However, if the average I/O
size is significantly higher, apply a scaling factor to account for the large I/O size. A
safe estimate is to divide the I/O size by 8 KB and use that factor. For example, if the
application is using mostly 32 KB I/O requests, use a factor of four (32 KB/8 KB = 4).
If that application generates 100 IOPS at 32 KB, the factor indicates to plan for 400
IOPS since the reference virtual machine assumes 8 KB I/O sizes.
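A minimal Python sketch of this scaling rule (the function is illustrative; the 8 KB baseline comes from the reference virtual machine definition):

    def reference_iops(observed_iops, io_size_kb, baseline_kb=8):
        """Scale observed IOPS to 8 KB-equivalent IOPS for sizing purposes."""
        if io_size_kb <= baseline_kb:
            return observed_iops  # small I/O: use the observed number directly
        return observed_iops * (io_size_kb / baseline_kb)

    print(reference_iops(100, 32))  # 100 IOPS at 32 KB -> plan for 400.0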
I/O latency
You can use the average I/O response time, or I/O latency, to measure how quickly
the storage system processes I/O requests. The VSPEX solutions meet a target
average I/O latency of 20 ms. The recommendations in this document allow the
system to continue to meet that target, and at the same time, monitor the system and
reevaluate the resource pool utilization if needed. To monitor I/O latency, use the
“Logical Disk\Avg. Disk sec/Transfer” counter in Microsoft Windows perfmon. If the
I/O latency is continuously over the target, reevaluate the virtual machines in the
environment to ensure that these machines do not use more resources than
intended.
Storage capacity requirements
The storage capacity requirement for a running application is usually the easiest
resource to quantify. Determine the disk space used, and add an appropriate factor to
accommodate growth. For example, virtualizing a server that currently uses 40 GB of a 200 GB internal drive, with anticipated growth of approximately 20 percent over the next year, requires 48 GB. In addition, reserve space for regular maintenance patches and swap files. Some file systems, such as Microsoft NTFS, degrade in performance if they become too full.
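As a one-line illustration (assuming simple multiplicative growth and no maintenance reserve), the example above can be computed as:

    def capacity_with_growth(used_gb, annual_growth=0.20, years=1):
        """Project required capacity from current usage plus anticipated growth."""
        return used_gb * (1 + annual_growth) ** years

    print(capacity_with_growth(40))  # 40 GB growing 20% over a year -> 48.0 GB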
Determining equivalent reference virtual machines
With all of the resources defined, determine an appropriate value for the equivalent reference virtual machines line by using the relationships in Table 11. Round all values up to the closest whole number.
Table 11. Reference virtual machine resources

Resource  Value for reference virtual machine  Relationship between requirements and equivalent reference virtual machines
CPU       1                                    Equivalent reference virtual machines = resource requirements
Memory    2                                    Equivalent reference virtual machines = (resource requirements)/2
IOPS      25                                   Equivalent reference virtual machines = (resource requirements)/25
Capacity  100                                  Equivalent reference virtual machines = (resource requirements)/100
For example, the Point of Sale system used in Example 2: Point-of-Sale system
requires four CPUs, 16 GB memory, 200 IOPS, and 200 GB storage. This translates to
four reference virtual machines of CPU, eight reference virtual machines of memory,
eight reference virtual machines of IOPS, and two reference virtual machines of
capacity. Table 12 demonstrates how that machine fits into the worksheet row.
Table 12. Example worksheet row

Application         |                                       | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent reference virtual machines
Example application | Resource requirements                 | 4                  | 16          | 200  | 200           | N/A
                    | Equivalent reference virtual machines | 4                  | 8           | 8    | 2             | 8
Use the highest value in the row to fill in the Equivalent reference virtual machines
column. As shown in Figure 24, the example requires eight reference virtual
machines.
Figure 24. Required resource from the reference virtual machine pool
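The relationships in Table 11, combined with the highest-value rule, reduce to a few lines of code. The following Python sketch is an illustration (the names are mine, not part of the guide):

    import math

    # Per-resource capacity of one reference virtual machine (Table 11)
    REFERENCE_VM = {"cpu": 1, "memory_gb": 2, "iops": 25, "capacity_gb": 100}

    def equivalent_reference_vms(cpu, memory_gb, iops, capacity_gb):
        """Return the equivalent reference VM count: the highest per-resource value."""
        required = {"cpu": cpu, "memory_gb": memory_gb,
                    "iops": iops, "capacity_gb": capacity_gb}
        return max(math.ceil(required[key] / REFERENCE_VM[key])
                   for key in REFERENCE_VM)

    # The Point-of-Sale example: 4 vCPUs, 16 GB, 200 IOPS, 200 GB
    print(equivalent_reference_vms(4, 16, 200, 200))  # 8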
Implementation example – stage 1
A customer wants to build a virtual infrastructure to support one custom-built
application, one Point of Sale system, and one web server. The customer computes
the sum of the Equivalent reference virtual machines column on the right side of the
worksheet as listed in Table 13 to calculate the total number of reference virtual
machines required. The table shows the result of the calculation, along with the
value, rounded up to the nearest whole number.
Table 13. Example applications – stage 1

(Columns: CPU (virtual CPUs) | Memory | IOPS | Capacity | Reference virtual machines)

Example application #1: Custom-built application
  Resource requirements:                 1 | 3 GB | 15 | 30 GB | N/A
  Equivalent reference virtual machines: 1 | 2 | 1 | 1 | 2

Example application #2: Point-of-Sale system
  Resource requirements:                 4 | 16 GB | 200 | 200 GB | N/A
  Equivalent reference virtual machines: 4 | 8 | 8 | 2 | 8

Example application #3: Web server
  Resource requirements:                 2 | 8 GB | 50 | 25 GB | N/A
  Equivalent reference virtual machines: 2 | 4 | 2 | 1 | 4

Total equivalent reference virtual machines: 14
This example requires 14 reference virtual machines. According to the sizing guidelines, one storage pool with five SAS drives and two or more flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with VNXe3200, for up to 200 reference virtual machines.
Figure 25 shows one reference virtual machine is available after implementing
VNXe3200 with 5 SAS drives and two flash drives.
Figure 25. Aggregate resource requirements – stage 1
Figure 26 shows the pool configuration in this example.
Figure 26. Pool configuration – stage 1
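For illustration, the equivalent_reference_vms sketch shown earlier reproduces the stage 1 total (restated here so the snippet is self-contained):

    import math

    REFERENCE_VM = {"cpu": 1, "memory_gb": 2, "iops": 25, "capacity_gb": 100}

    def equivalent_reference_vms(cpu, memory_gb, iops, capacity_gb):
        required = {"cpu": cpu, "memory_gb": memory_gb,
                    "iops": iops, "capacity_gb": capacity_gb}
        return max(math.ceil(required[key] / REFERENCE_VM[key])
                   for key in REFERENCE_VM)

    stage1_apps = [
        ("Custom-built application", 1, 3, 15, 30),
        ("Point-of-Sale system", 4, 16, 200, 200),
        ("Web server", 2, 8, 50, 25),
    ]
    print(sum(equivalent_reference_vms(cpu, mem, iops, cap)
              for _, cpu, mem, iops, cap in stage1_apps))  # 14, as in Table 13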
Implementation example – stage 2
Next, this customer must add a decision support database to this virtual
infrastructure. Using the same strategy, calculate the number of Equivalent reference
virtual machines required, as shown in Table 14.
Table 14. Example applications – stage 2

(Columns: CPU (virtual CPUs) | Memory | IOPS | Capacity | Reference virtual machines)

Example application #1: Custom-built application
  Resource requirements:                 1 | 3 GB | 15 | 30 GB | N/A
  Equivalent reference virtual machines: 1 | 2 | 1 | 1 | 2

Example application #2: Point-of-Sale system
  Resource requirements:                 4 | 16 GB | 200 | 200 GB | N/A
  Equivalent reference virtual machines: 4 | 8 | 8 | 2 | 8

Example application #3: Web server
  Resource requirements:                 2 | 8 GB | 50 | 25 GB | N/A
  Equivalent reference virtual machines: 2 | 4 | 2 | 1 | 4

Example application #4: Decision-support database
  Resource requirements:                 10 | 64 GB | 700 | 5,120 GB | N/A
  Equivalent reference virtual machines: 10 | 32 | 28 | 52 | 52

Total equivalent reference virtual machines: 66
This example requires 66 reference virtual machines. According to the sizing
guidelines, one storage pool with 25 SAS drives and two or more flash drives
provides sufficient resources for the current needs and room for growth. You can
implement this storage layout with VNXe3200, for up to 200 reference virtual
machines.
Figure 27 shows nine reference virtual machines available after implementing
VNXe3200 with 25 SAS drives and two flash drives.
Figure 27. Aggregate resource requirements - stage 2
Figure 28 shows the pool configuration in this example.
Figure 28. Pool configuration – stage 2
Fine-tuning hardware resources
Usually, the process described in Determining equivalent reference virtual machines
determines the recommended hardware size for servers and storage. However, in
some cases there is a need to further customize the hardware resources available to
the system. A complete description of system architecture is beyond the scope of this
guide; however, you can perform additional customization at this point.
Storage resources
In some applications, there is a need to separate application data from other
workloads. The storage layouts in the VSPEX architectures put all of the virtual
machines in a single resource pool. To achieve workload separation, purchase
additional disk drives for the application workload and add them to a dedicated pool.
With the method outlined in Determining equivalent reference virtual machines, it is
easy to build a virtual infrastructure scaling from 15 reference virtual machines to 200
reference virtual machines with the building blocks described in VSPEX storage
building blocks, while keeping in mind the recommended limits of each storage array
documented in VSPEX Private Cloud validated maximums.
Server resources
For some workloads the relationship between server needs and storage needs does
not match what is outlined in the Reference virtual machine. Size the server and
storage layers separately in this scenario.
Figure 29. Customizing server resources
To do this, first total the resource requirements for the server components as shown
in Table 15. In the Server Component Totals line at the bottom of the worksheet, add
up the server resource requirements from the applications in the table.
Note: When customizing resources in this way, confirm that storage sizing is still
appropriate. The Storage Component Totals line at the bottom of Table 15 describes the
required amount of storage.
Table 15. Server resource component totals

(Columns: CPU (virtual CPUs) | Memory | IOPS | Capacity | Reference virtual machines)

Example application #1: Custom-built application
  Resource requirements:                 1 | 3 GB | 15 | 30 GB | N/A
  Equivalent reference virtual machines: 1 | 2 | 1 | 1 | 2

Example application #2: Point-of-Sale system
  Resource requirements:                 4 | 16 GB | 200 | 200 GB | N/A
  Equivalent reference virtual machines: 4 | 8 | 8 | 2 | 8

Example application #3: Web server #1
  Resource requirements:                 2 | 8 GB | 50 | 25 GB | N/A
  Equivalent reference virtual machines: 2 | 4 | 2 | 1 | 4

Example application #4: Decision Support System database #1
  Resource requirements:                 10 | 64 GB | 700 | 5,120 GB | N/A
  Equivalent reference virtual machines: 10 | 32 | 28 | 52 | 52

Total equivalent reference virtual machines: 66

Server customization
  Server component totals:  17 | 91 GB | — | — | NA
  Storage component totals: — | — | 965 | 5,375 GB | NA
Note: Calculate the sum of the Resource Requirements row for each application, not the
Equivalent reference virtual machines row, to get the Server/Storage Component Totals.
In this example, the target architecture requires 17 virtual CPUs and 91 GB of memory. With the stated assumptions of four virtual machines per physical processor core, and no memory over-provisioning, this translates to five physical processor cores and 91 GB of memory. With these numbers, the solution can be effectively implemented with fewer server and storage resources.
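A small illustrative Python sketch (my own helper, assuming the 4:1 vCPU ratio and no memory over-provisioning) reproduces the server-side totals:

    import math

    # (virtual CPUs, memory in GB) resource requirements per application
    server_requirements = [(1, 3), (4, 16), (2, 8), (10, 64)]

    total_vcpus = sum(cpu for cpu, _ in server_requirements)
    total_memory_gb = sum(mem for _, mem in server_requirements)
    cores = math.ceil(total_vcpus / 4)  # 4 virtual CPUs per physical core

    print(total_vcpus, total_memory_gb, cores)  # 17 vCPUs, 91 GB, 5 cores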
Note: Keep high-availability requirements in mind when customizing the resource pool
hardware.
Appendix C provides a blank server resource component totals worksheet.
EMC VSPEX Sizing Tool
To simplify the sizing of this solution, EMC has produced the EMC VSPEX Sizing Tool. This tool uses the same sizing process described in the section above, and also incorporates sizing for other VSPEX solutions.
The VSPEX Sizing Tool enables you to input your resource requirements from the
customer’s answers in the qualification worksheet. After you complete the inputs to
the VSPEX Sizing Tool, the tool generates a series of recommendations, which allows
you to validate your sizing assumptions while providing platform configuration
information that meets those requirements. This tool can be accessed at EMC VSPEX
Sizing Tool.
Chapter 5
VSPEX Configuration Guidelines
This chapter presents the following topics:
Overview ..................................................................................................................84
Pre-deployment tasks .............................................................................................. 85
Customer configuration data ....................................................................................86
Preparing switches, connecting network, and configuring switches ........................86
Preparing and configuring storage array ..................................................................90
Installing and configuring Hyper-V hosts ...............................................................103
Installing and configuring SQL Server database ....................................................106
System Center Virtual Machine Manager server deployment .................................107
Summary................................................................................................................110
Overview
The deployment process consists of the main stages listed in Table 16. After
deployment, integrate the VSPEX infrastructure with the existing customer network
and server infrastructure.
Table 16 lists the main stages in the solution deployment process, and also includes
references to sections that contain relevant procedures.
Table 16. Deployment process overview
Stage | Description | Reference
1 | Verify prerequisites | Pre-deployment tasks
2 | Obtain the deployment tools | Deployment prerequisites
3 | Gather customer configuration data | Customer configuration data
4 | Rack and cable the components | Refer to the vendor documentation.
5 | Configure the switches and networks, connect to the customer network | Preparing switches, connecting network, and configuring switches
6 | Install and configure the VNXe | Preparing and configuring storage array
7 | Configure virtual machine storage | Preparing and configuring storage array
8 | Install and configure the servers | Installing and configuring Hyper-V hosts
9 | Set up SQL Server (used by SCVMM) | Installing and configuring SQL Server database
10 | Install and configure SCVMM | System Center Virtual Machine Manager server deployment
Pre-deployment tasks
Overview
The pre-deployment tasks shown in Table 17 include procedures not directly related to environment installation and configuration, but whose results are needed at the time of installation. Examples of pre-deployment tasks are collecting hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks before the customer visit to decrease the time required onsite.

Table 17. Tasks for pre-deployment

Task | Description | Reference
Gather documents | Gather the related documents listed in Appendix D. These documents provide detail on setup procedures and deployment best practices for the various components of the solution. | References: EMC documentation
Gather tools | Gather the required and optional tools for the deployment. Use Table 18 to confirm that all equipment, software, and appropriate licenses are available before starting the deployment process. | Table 18: Deployment prerequisites checklist
Gather data | Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer configuration data sheet for reference during the deployment process. | Appendix B

Deployment prerequisites
Table 18 lists the hardware, software, and licenses required to configure the solution. For additional information, refer to Table 3.
Table 18. Deployment prerequisites checklist

Requirement | Description | Reference
Hardware | Sufficient physical server capacity to host 200 virtual servers |
Hardware | Windows Server 2012 R2 servers to host virtual infrastructure servers. Note: The existing infrastructure may already meet this requirement. | Table 2
Hardware | Switch port capacity and capabilities as required by the virtual server infrastructure |
Hardware | EMC VNXe3200 (200 virtual machines): multiprotocol storage array with the required disk layout |
Software | SCVMM 2012 R2 installation media |
Software | Microsoft Windows Server 2012 R2 installation media |
Software | Microsoft Windows Server 2012 R2 installation media (optional for virtual machine guest OS) |
Software | Microsoft SQL Server 2012 or newer installation media. Note: The existing infrastructure may already meet this requirement. |
Licenses | Microsoft Windows Server 2012 R2 Standard Edition (or higher) license keys (optional) |
Licenses | Microsoft Windows Server 2012 R2 Datacenter Edition license keys. Note: An existing Microsoft Key Management Server (KMS) may already meet this requirement. |
Licenses | Microsoft SQL Server license key. Note: The existing infrastructure may already meet this requirement. |
Licenses | SCVMM 2012 R2 license keys |
Customer configuration data
Assemble information such as IP addresses and hostnames during the planning
process to reduce the onsite time.
Appendix B provides a table to maintain a record of relevant customer information.
Add, record, or modify information as needed during the deployment process.
Preparing switches, connecting network, and configuring switches
Overview
This section lists the network infrastructure requirements to support this architecture.
Table 19 provides a summary of the tasks for switch and network configuration, and
references for further information.
Table 19. Tasks for switch and network configuration

Task | Description | Reference
Configuring infrastructure network | Configure storage array and Windows host infrastructure networking as specified in Preparing and configuring storage array and Installing and configuring Hyper-V hosts. | Preparing and configuring storage array
Configuring VLANs | Configure private and public VLANs as required. | Your vendor’s switch configuration guide
Completing network cabling | 1. Connect the switch interconnect ports. 2. Connect the VNXe ports. 3. Connect the Windows server ports. | Installing and configuring Hyper-V hosts
Preparing network switches
For validated levels of performance and high availability, this solution requires the switching capacity listed in Appendix A. New hardware is not required if the existing infrastructure meets the requirements.
Configuring infrastructure network
The infrastructure network requires redundant network links for each Windows host,
The infrastructure network requires redundant network links for each Windows host,
the storage array, the switch interconnect ports, and the switch uplink ports to
provide both redundancy and additional network bandwidth. This is a required
configuration regardless of whether the network infrastructure for the solution already
exists, or you are deploying it alongside other components of the solution.
Figure 30 and Figure 31 show sample redundant infrastructure for this solution. The
diagrams illustrate the use of redundant switches and links to ensure that there are
no single points of failure.
In Figure 30, converged switches provide customers with different protocol options (FC or iSCSI) for the storage network. While existing FC switches are acceptable for the FC protocol option, use 10 Gb Ethernet network switches for iSCSI.
Figure 30. Sample Ethernet network architecture - block variant
Figure 31 shows a sample redundant Ethernet infrastructure for file storage, and
illustrates the use of redundant switches and links to ensure that no single points of
failure exist in the network connectivity.
Figure 31. Sample Ethernet network architecture - file variant
Configuring VLANs
Ensure that there are adequate switch ports for the storage array and Windows hosts. Use a minimum of three VLANs for the following purposes:
 Virtual machine networking and traffic management (these are customer-facing networks; separate them if required)
 VM management/Live Migration networking (private network)
 Storage networking (iSCSI or SMB, private network)
Configuring jumbo frames (iSCSI or SMB only)
Use jumbo frames for iSCSI and SMB protocols. Set the MTU to 9,000 on the switch
ports for the iSCSI or SMB storage network. Consult your switch configuration guide
for instructions.
Completing network cabling
Ensure the following:
 All servers, storage arrays, switch interconnects, and switch uplinks plug into separate switching infrastructures and have redundant connections.
 There is a complete connection to the existing customer network.
Note: Ensure that unforeseen interactions do not cause service interruptions when you
connect the new equipment to the existing customer network.
Preparing and configuring storage array
Implementation instructions and best practices may vary because of the storage
network protocol selected for the solution. Each case contains the following steps:
1. Configure the VNXe.
2. Provision storage to the hosts.
3. Optionally, configure FAST VP.
4. Optionally, configure FAST Cache.
The sections below cover the options for each step separately, depending on whether one of the block protocols (FC or iSCSI) or the file protocol (CIFS) is selected.
 For FC or iSCSI, refer to the instructions marked for block protocols.
 For CIFS, refer to the instructions marked for file protocols.
VNXe configuration for block protocols
This section describes how to configure the VNXe storage array for host access using block protocols such as FC or iSCSI. In this solution, the VNXe provides data storage for Windows hosts.
Table 20. Tasks for VNXe configuration for block protocols

Task | Description | Reference
Preparing the VNXe | Physically install the VNXe hardware using the procedures in the product documentation. | EMC VNXe3200 Unified Installation Guide
Setting up the initial VNXe configuration | Configure the IP addresses and other key parameters on the VNXe. | Your vendor’s switch configuration guide
Provisioning storage for Hyper-V hosts | Create the storage areas required for the solution. | Unisphere System Getting Started Guide
Preparing the VNXe
The VNXe3200 Unified Installation Guide provides instructions to assemble, rack,
cable, and power up the VNXe. There are no specific setup steps for this solution.
Setting up the initial VNXe configuration
After the initial VNXe setup, configure key information about the existing environment
to enable the storage array to communicate with the other devices in the
environment. Configure the following common items in accordance with your IT data
center policies and existing infrastructure information:
 DNS
 NTP
 Storage network interfaces
For data connections using FC protocol
Ensure that one or more servers are connected to the VNXe storage system, either
directly or through qualified FC switches. Refer to the EMC Host Connectivity Guide for
Windows for more detailed instructions.
For data connections using iSCSI protocol
Connect one or more servers to the VNXe storage system, either directly or through
qualified IP switches. Refer to the EMC Host Connectivity Guide for Windows for more
detailed instructions.
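As an illustration, the iSCSI connection can also be established from each Windows host with the Microsoft iSCSI initiator cmdlets. The portal address below is a placeholder for one of the VNXe iSCSI interfaces:

    # Start the Microsoft iSCSI initiator service and make it start automatically
    Set-Service MSiSCSI -StartupType Automatic
    Start-Service MSiSCSI
    # Register the VNXe iSCSI portal and connect to the discovered targets
    New-IscsiTargetPortal -TargetPortalAddress 192.168.10.40
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true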
Additionally, configure the following items in accordance with your IT data center policies and existing infrastructure information:
1. Set up a storage network IP address: Logically isolate the storage network from the other networks in the solution, as described in Chapter 3. This ensures that other network traffic does not impact traffic between the hosts and the storage.
2. Enable jumbo frames on the VNXe iSCSI ports: Use jumbo frames for iSCSI networks to permit greater network bandwidth. Apply the MTU size specified below across all the network interfaces in the environment:
   a. In Unisphere, select Settings > Network > More Configuration > Port Settings.
   b. Select the appropriate iSCSI network interface.
   c. On the right panel, set the MTU size to 9,000.
   d. Click Apply to apply the changes.
The reference documents listed in Table 20 provide more information on how to configure the VNXe platform. The Storage configuration guidelines section provides more information on the disk layout.
Provisioning storage for Hyper-V hosts
This section describes provisioning block storage for Hyper-V hosts. To provision file
storage, refer to VNXe configuration for file protocols.
Complete the following steps in Unisphere to configure LUNs on the VNXe array to
store virtual servers:
1. Create the number of storage pools required for the environment, based on the sizing information in Chapter 4. This example uses the array recommended maximums described in Chapter 4.
   a. Log in to Unisphere.
   b. Select the array for this solution.
   c. Select Storage > Storage Configuration > Storage Pools.
   d. Click List View.
   e. Click Create.
Note: The pool does not use system drives for additional storage.
Table 21. Storage allocation table for block
Configuration: 200 virtual machines
 Number of pools: 2
 Number of 10K SAS drives per pool: Pool 1: 40; Pool 2: 25
 Number of LUNs per pool: 2
 LUN size (TB): Pool 1: 7; Pool 2: 5
Total: 2 pools, 65 drives; Pool 1: 2 x 7 TB LUNs; Pool 2: 2 x 5 TB LUNs
Note: Each virtual machine occupies 102 GB in this solution, with 100 GB for the
OS and user space, and a 2 GB swap file.
2. Use the pools created in step 1 to provision thin LUNs:
   a. Select Storage > LUNs.
   b. Click Create.
   c. Select Create a LUN.
   d. Specify the LUN Name.
   e. Select the pool created in step 1. Always create two thin LUNs in one physical storage pool. User Capacity depends on the specific number of virtual machines; refer to Table 21 for more information.
   f. Configure an appropriate Snapshot Schedule.
   g. Configure appropriate Host Access for each host.
   h. Review the Summary of the LUN configuration and click Finish to create the LUNs.
VNXe configuration for file protocols
This section and Table 22 describe file storage provisioning tasks for Hyper-V hosts.
Table 22. Tasks for storage configuration for file protocols
 Prepare the VNXe: Physically install the VNXe hardware with the procedures in the product documentation. (Reference: VNXe3200 Unified Installation Guide)
 Set up the initial VNXe configuration: Configure the IP addresses and other key parameters on the VNXe. (References: Unisphere System Getting Started Guide; your vendor's switch configuration guide)
 Create a network interface: Configure the IP address and network interface information for the CIFS server.
 Create a CIFS server: Create the CIFS server instance to publish the storage.
 Create a storage pool for file: Create the block pool structure and LUNs to contain the file system.
 Create the file systems: Establish the SMB shared file system.
Preparing the VNXe
The VNXe3200 Unified Installation Guide provides instructions to assemble, rack, cable, and power up the VNXe. There are no specific setup steps for this solution.
Setting up the initial VNXe configuration
After the initial VNXe setup, configure key information about the existing environment
to allow the storage array to communicate with the other devices in the environment.
Ensure one or more servers connect to the VNXe storage system, either directly or
through qualified IP switches. Configure the following common items in accordance
with your IT data center policies and existing infrastructure information:
 DNS
 NTP
 Storage network interfaces
 Storage network IP address
 CIFS services and Active Directory Domain membership
Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions.
Enabling jumbo frames on the VNXe storage network interfaces
Use jumbo frames for storage networks to permit greater network bandwidth. Apply the MTU size specified below across all the network interfaces in the environment. Complete the following steps to enable jumbo frames:
1. In Unisphere, select Settings > More Configuration > Port Settings.
2. Select the appropriate network interface from the I/O modules panel.
3. On the right panel, set the MTU size to 9,000.
4. Click Apply to apply the changes.
Creating link aggregation on the VNXe storage network interfaces
Link aggregation provides network redundancy on the VNXe3200 system. Complete
the following steps to create a network interface link aggregation:
1. Log in to the VNXe.
2. Select a network interface from the I/O Modules panel.
3. On the right panel, select Aggregate with another network interface.
4. Click the Create Aggregation button.
5. Click Yes to apply the changes.
The reference documents listed in Table 22 provide more information on how to configure the VNXe platform. The Storage configuration guidelines section provides more information on the disk layout.
Creating a CIFS server
A network interface maps to a CIFS server. CIFS servers provide access to file shares over the network.
Complete the following steps to create a network interface:
1. Log in to the VNXe.
2. Click Settings > NAS Servers.
3. Click Create.
From the Create NAS Server wizard, complete the following steps:
1. Specify the Server Name.
2. Select the Storage Pool that will provide the file share.
3. Type an IP Address for the interface.
4. Type a Server Name for the interface.
5. Type the Subnet Mask for the interface.
6. Click Show Advanced.
7. Select a storage processor that will support the file share.
8. Set the Ethernet Port to the link-aggregated interface created in Creating link aggregation on the VNXe storage network interfaces.
9. If required, specify the VLAN ID.
10. Click Next.
Figure 32. Configure NAS Server Address
11. Select Windows Shares (CIFS).
12. Specify the appropriate information for Standalone or Join to the Active Directory.
13. Type in the DNS/NIS information if required.
14. Review the NAS Server Summary and click Finish to complete the wizard.
Figure 33. Configure NAS Server type
Provisioning storage for Windows hosts
This section describes provisioning the block storage pools that contain the file systems for the Windows hosts.
Complete the following steps in Unisphere to configure storage pools on the VNXe array to store virtual servers:
1. Create the number of storage pools required for the environment, based on the sizing information in Chapter 4. This example uses the array recommended maximums as described in Chapter 4.
   a. Log in to Unisphere.
   b. Select Storage > Storage Configuration > Storage Pools.
   c. Click List View.
   d. Click Create.
Note: The pool does not use system drives for additional storage.
Table 23. Storage allocation table for file
Configuration: 200 virtual machines
 Number of pools: 2
 Number of 10K SAS drives per pool: Pool 1: 40; Pool 2: 25
 Number of LUNs per pool: 2
 LUN size (TB): Pool 1: 7; Pool 2: 5
Total: 2 pools, 65 drives; Pool 1: 2 x 7 TB LUNs; Pool 2: 2 x 5 TB LUNs
Note: Each virtual machine occupies 102 GB in this solution, with 100 GB for the
OS and user space, and a 2 GB swap file.
Creating file systems
To create an SMB file share, complete the following tasks:
1. Create a storage pool and a network interface.
2. Create a file system.
The VNXe requires a storage pool and a NAS Server to create a file system. If no storage pools or interfaces exist, follow the steps in Provisioning storage for Windows hosts and Creating a CIFS server to create them.
Create two thin file systems from the storage pool. Refer to Table 23 for details on the number of file systems. Complete the following steps to create VNXe file systems for SMB file shares:
1. Log in to Unisphere.
2. Select Storage > File Systems.
3. Click Create. The File System Creation wizard appears.
4. Select a NAS server.
5. Specify the file system name.
6. Specify the storage pool and size. The size depends on the specific number of virtual machines; refer to Table 23 for more information.
7. Specify the share name of the file system.
8. Configure host access for each host.
9. Select an appropriate snapshot schedule.
10. Review the File System Creation Summary and click Finish to complete the wizard.
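As a quick check, each Hyper-V host should be able to reach the new share before it is used for virtual machine storage. The NAS server and share names below are placeholders:

    # Confirm the share is reachable and that it negotiates an SMB 3.0 dialect
    Test-Path \\vnxe-nas01\fs01
    Get-SmbConnection | Format-Table ServerName, ShareName, Dialect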
FAST VP configuration (optional)
This optional procedure applies to both file and block storage implementations. Assign two flash drives in the storage pool, and complete the following steps to configure FAST VP:
1. Select Storage > Storage Configuration > Storage Pools.
2. Select the pool created when provisioning file or block storage and click Details.
3. Click Fast VP. You can see the amount of data relocated, or still to be relocated, in each tier. Either click Start Data Relocation to start relocation manually, or go to Fast VP Settings for further configuration. Figure 34 shows the Fast VP tab.
Figure 34. Fast VP tab
Note: The Tier Status area shows FAST VP information specific to the selected pool.
4. In Fast VP Settings, click General, select Enable Scheduled Relocations to enable the scheduled relocations, and select an appropriate Data Relocation Rate, as shown in Figure 35.
Figure 35. Scheduled Fast VP relocation
Use the dialog box to control the Data Relocation Rate. The default rate is set to Medium so as not to significantly affect host I/O.
5. Click Schedule, and select appropriate days and times for scheduled relocation. Figure 36 shows an example of a FAST VP relocation schedule.
Figure 36. Fast VP Relocation Schedule
Note: FAST VP is an automated tool that provides the ability to create a relocation schedule.
Schedule the relocations during off-hours to minimize any potential performance impact.
FAST Cache configuration (optional)
Optionally, configure FAST Cache. To configure FAST Cache on the storage pools for this solution, complete the following steps:
Note: FAST Cache is an optional component of this solution that can provide improved performance, as outlined in Chapter 3.
1. Configure flash drives as FAST Cache:
   a. Select Storage > Storage Configuration > Fast Cache to configure FAST Cache.
   b. Click Create to start the configuration wizard. The wizard shows whether the system is licensed for the FAST Cache feature and has eligible flash drives.
   c. Click Next. The wizard shows the number of disks and the RAID type.
   d. Click Finish to complete the configuration. Figure 37 shows the steps to create FAST Cache.
Figure 37. Create Fast Cache
Note: If sufficient flash drives are not available, the Next button is greyed out.
2. Enable FAST Cache on the storage pool.
FAST Cache is configured at the storage pool level: all LUNs created in a storage pool have FAST Cache either enabled or disabled together. Configure FAST Cache for a pool in the Create Storage Pool wizard, as shown in Figure 38. After FAST Cache is installed on the VNXe series, it is enabled by default at storage pool creation.
Figure 38. Advanced tab in the Create Storage Pool dialog box
If a storage pool is created before FAST Cache is installed, use Settings in the
Storage Pool Detail dialog box to configure FAST Cache, as shown in Figure
39.
Figure 39. Settings tab in the Storage Pool Properties dialog box
Note: The VNXe FAST Cache feature does not cause an instantaneous performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take a few hours, during which the performance of the array steadily improves.
Installing and configuring Hyper-V hosts
Overview
This section provides the requirements for the installation and configuration of the
Windows hosts and infrastructure servers to support the architecture.
Table 24 describes the required tasks.
Table 24. Tasks for server installation
 Installing Windows hosts: Install Windows Server 2012 R2 on the physical servers for the solution. (Reference: http://technet.microsoft.com/)
 Installing Hyper-V and configuring Failover Clustering: (1) Add the Hyper-V server role; (2) add the Failover Clustering feature; (3) create and configure the Hyper-V cluster. (Reference: http://technet.microsoft.com/)
 Configuring Windows host networking: Configure Windows host networking, including NIC teaming and the Virtual Switch network. (Reference: http://technet.microsoft.com/)
 Installing PowerPath on Windows servers: Install and configure PowerPath to manage multipathing for VNXe LUNs. (Reference: PowerPath and PowerPath/VE for Windows Installation and Administration Guide)
 Planning virtual machine memory allocations: Ensure that Windows Hyper-V guest memory management features are configured properly for the environment. (Reference: http://technet.microsoft.com/)
Installing Windows hosts
Follow Microsoft best practices to install Windows Server 2012 R2 and the Hyper-V role on the physical servers for this solution.
Installing Hyper-V and configuring failover clustering
To install and configure Failover Clustering, complete the following steps:
1. Install and patch Windows Server 2012 R2 on each Windows host.
2. Configure the Hyper-V role and the Failover Clustering feature.
3. Install the HBA drivers, or configure iSCSI initiators on each Windows host. For details, refer to the EMC Host Connectivity Guide for Windows.
Table 24 provides the steps and references to accomplish the configuration tasks.
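These steps can also be scripted. The sketch below uses placeholder host names and a placeholder cluster IP address; run the feature installation on each node, then validate and create the cluster from one node:

    # On each Windows host: add the Hyper-V role and the Failover Clustering feature
    Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart
    # From one node: validate the configuration, then create the cluster
    Test-Cluster -Node "host01","host02","host03"
    New-Cluster -Name HVCluster01 -Node "host01","host02","host03" -StaticAddress "10.0.0.50"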
Configuring Windows host networking
To ensure performance and availability, the following network interface cards (NICs) are required:
 At least one NIC for virtual machine networking and management (can be separated by network or VLAN if necessary)
 At least two 10 GbE NICs for the storage network
 At least one NIC for Live Migration
Note: Enable jumbo frames for NICs that transfer iSCSI or SMB data. Set the MTU to 9,000. Consult the NIC configuration guide for instructions.
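A minimal sketch of this host networking setup follows. Adapter and switch names are placeholders, and the jumbo frame registry value is driver-dependent (many NIC drivers use 9014 to account for the Ethernet header), so confirm it against your NIC documentation:

    # Team the virtual machine/management NICs and bind a Hyper-V virtual switch to the team
    New-NetLbfoTeam -Name VMTeam -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
    New-VMSwitch -Name VMSwitch01 -NetAdapterName VMTeam -AllowManagementOS $true
    # Enable jumbo frames on the dedicated storage NICs (iSCSI or SMB)
    Set-NetAdapterAdvancedProperty -Name "Storage1","Storage2" `
        -RegistryKeyword "*JumboPacket" -RegistryValue 9014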
Installing PowerPath on Windows servers
Install PowerPath on Windows servers to improve and enhance the performance and capabilities of the VNXe storage array. For the detailed installation steps, refer to the PowerPath and PowerPath/VE for Windows Installation and Administration Guide.
Planning virtual machine memory allocations
Server capacity serves two purposes in the solution:
 Supports the new virtualized server infrastructure
 Supports the required infrastructure services such as authentication or authorization, DNS, and databases
For information on minimum infrastructure service hosting requirements, refer to
Appendix A. If existing infrastructure services meet the requirements, the hardware
listed for infrastructure services is not required.
Memory configuration
Take care to properly size and configure the server memory for this solution. This section provides an overview of memory management in a Hyper-V environment.
Memory virtualization techniques enable the hypervisor to abstract physical host resources, using features such as Dynamic Memory, to provide resource isolation across multiple virtual machines and avoid resource exhaustion. With advanced processors (such as Intel processors with EPT support), this abstraction takes place within the CPU. Otherwise, this process occurs within the hypervisor itself.
There are multiple techniques available within the hypervisor to maximize the use of system resources such as memory. Do not substantially overcommit resources, as this can lead to poor system performance. The exact implications of memory overcommitment in a real-world environment are difficult to predict. Performance degradation due to resource exhaustion increases with the amount of memory overcommitted.
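For reference, Dynamic Memory is configured per virtual machine. The sketch below uses the 2 GB reference virtual machine as the startup value; the minimum, maximum, and buffer values are illustrative and should reflect your sizing:

    # Enable Dynamic Memory on a virtual machine (values are examples)
    Set-VMMemory -VMName RefVM01 -DynamicMemoryEnabled $true `
        -StartupBytes 2GB -MinimumBytes 1GB -MaximumBytes 4GB -Buffer 20

Keep the aggregate of the maximum values in line with the physical memory available, as described above, to avoid overcommitment.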
Installing and configuring SQL Server database
Overview
Most customers use a management tool to provision and manage their server
virtualization solution even though it is not required. The management tool requires a
database backend. SCVMM uses SQL Server 2012 as the database platform.
This section describes how to set up and configure a SQL Server database for the
solution. Table 25 lists the detailed setup tasks.
Table 25. Tasks for SQL Server database setup
 Creating a virtual machine for Microsoft SQL Server: Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements. (Reference: http://msdn.microsoft.com)
 Installing Microsoft Windows on the virtual machine: Install Microsoft Windows Server 2012 R2 Datacenter Edition on the virtual machine. (Reference: http://technet.microsoft.com)
 Installing Microsoft SQL Server: Install Microsoft SQL Server on the designated virtual machine. (Reference: http://technet.microsoft.com)
 Configuring a SQL Server for SCVMM: Configure a remote SQL Server instance for SCVMM. (Reference: http://technet.microsoft.com)
Creating a virtual machine for Microsoft SQL Server
Create the virtual machine with enough computing resources on one of the Windows
servers designated for infrastructure virtual machines. Use the storage designated for
the shared infrastructure.
Note: The customer environment may already contain a SQL Server for this role. In that case,
refer to the section Configuring a SQL Server for SCVMM.
Installing Microsoft Windows on the virtual machine
The SQL Server service must run on Microsoft Windows. Install the required Windows version on the virtual machine, and select the appropriate network, time, and authentication settings.
Installing SQL Server
Use the SQL Server installation media to install SQL Server on the virtual machine. The Microsoft TechNet website provides information on how to install SQL Server.
One of the installable components in the SQL Server installer is the SQL Server Management Studio (SSMS). Install this component on the SQL Server directly, and on an administrator console.
To change the default path for storing data files, perform the following steps:
1. In SSMS, right-click the server object and select Properties. The Server Properties window appears.
2. On the Database Settings page, change the default data and log directories for new databases created on the server.
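You can confirm the new defaults afterward, for example with the SQL Server PowerShell module. The instance name below is a placeholder, and these server properties are available in SQL Server 2012 and later:

    # Query the instance-level default data and log paths
    Invoke-Sqlcmd -ServerInstance "SQLVM01" -Query "
        SELECT SERVERPROPERTY('InstanceDefaultDataPath') AS DefaultDataPath,
               SERVERPROPERTY('InstanceDefaultLogPath')  AS DefaultLogPath;"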
Configuring a SQL Server for SCVMM
To use SCVMM in this solution, configure the SQL Server for remote connections. The requirements and steps to configure it correctly are available in the article Configuring a Remote Instance of SQL Server for VMM.
Refer to the list of documents in Appendix D for more information.
Note: Do not use the Microsoft SQL Server Express–based database option for this solution.
Create individual login accounts for each service that accesses a database on the
SQL Server.
System Center Virtual Machine Manager server deployment
Overview
This section provides information on how to configure SCVMM. Complete the tasks in
Table 26.
Table 26. Tasks for SCVMM configuration
 Creating the SCVMM host virtual machine: Create a virtual machine for the SCVMM server. (Reference: Create a virtual machine)
 Installing the SCVMM guest OS: Install Windows Server 2012 R2 Datacenter Edition on the SCVMM host virtual machine. (Reference: Install the guest operating system)
 Installing the SCVMM server: Install an SCVMM server. (Reference: How to Install a VMM Management Server)
 Installing the SCVMM Management Console: Install an SCVMM Management Console. (Reference: How to Install the VMM Console)
 Installing the SCVMM agent locally on the hosts: Install an SCVMM agent locally on the hosts that SCVMM manages. (Reference: Installing a VMM Agent Locally on a Host)
 Adding a Hyper-V cluster into SCVMM: Add the Hyper-V cluster into SCVMM. (Reference: Adding and Managing Hyper-V Hosts and Scale-Out File Servers in VMM)
 Adding file share storage in SCVMM (file variant only): Add SMB file share storage to a Hyper-V cluster in SCVMM. (Reference: How to Assign SMB 3.0 File Shares to Hyper-V Hosts and Clusters in VMM)
 Creating a virtual machine in SCVMM: Create a virtual machine in SCVMM. (Reference: Creating and Deploying Virtual Machines in VMM)
 Performing partition alignment and assigning file allocation unit size: Use Diskpart.exe to perform partition alignment, assign drive letters, and assign the file allocation unit size of the virtual machine's disk drive. (Reference: Disk Partition Alignment Best Practices for SQL Server)
 Creating a template virtual machine: Create a template virtual machine from the existing virtual machine. Create the hardware profile and Guest Operating System profile at this time. (Reference: How to Create a Virtual Machine Template)
 Deploying virtual machines from the template virtual machine: Deploy the virtual machines from the template virtual machine. (Reference: How to Create and Deploy a Virtual Machine from a Template)
Creating a SCVMM host virtual machine
To deploy the SCVMM server as a virtual machine on a Hyper-V server that is installed as part of this solution, connect directly to an infrastructure Hyper-V server by using Hyper-V Manager.
Create a virtual machine on the Microsoft Hyper-V server with the customer guest OS configuration by using an infrastructure server datastore presented from the storage array.
The memory and processor requirements for the SCVMM server depend on the number of Hyper-V hosts and virtual machines that SCVMM must manage.
Installing the SCVMM guest OS
Install the guest OS on the SCVMM host virtual machine. Install the required Windows Server version on the virtual machine, and select appropriate network, time, and authentication settings.
Installing the SCVMM server
Set up the VMM database and the default library server, and then install the SCVMM server. Refer to the Microsoft TechNet Library topic Installing the VMM Server to install the SCVMM server.
Installing the SCVMM Management Console
The SCVMM Management Console is a client tool to manage the SCVMM server. Install the VMM Management Console on the same computer as the VMM server.
Refer to the Microsoft TechNet Library topic Installing the VMM Administrator Console to install the SCVMM Management Console.
Installing the SCVMM agent locally on a host
If the hosts must be managed on a perimeter network, install a VMM agent locally on the host before adding it to VMM. Optionally, install a VMM agent locally on a host in a domain before adding the host to VMM.
Refer to the Microsoft TechNet Library topic Installing a VMM Agent Locally to install a VMM agent locally on a host.
Adding a Hyper-V cluster into SCVMM
Add the deployed Microsoft Hyper-V cluster to SCVMM, which then manages the Hyper-V cluster. Refer to the Microsoft TechNet Library topic Adding and Managing Hyper-V Hosts and Scale-Out File Servers in VMM to add the Hyper-V cluster.
Adding file share storage to SCVMM (file variant only)
To add file share storage to SCVMM, complete the following steps:
1. Open the VMs and Services workspace.
2. In the VMs and Services pane, right-click the Hyper-V cluster name.
3. Click Properties.
4. In the Properties window, click File Share Storage.
5. Click Add, and then add the file share storage to SCVMM.
Creating a virtual machine in SCVMM
Create a virtual machine in SCVMM to use as a virtual machine template. Install the virtual machine, then install the software, and change the Windows and application settings.
Refer to the Microsoft TechNet Library topic How to Create and Deploy a Virtual Machine from a Blank Virtual Hard Disk to create a virtual machine.
Performing partition alignment and assigning file allocation unit size
Perform disk partition alignment on virtual machines whose operating system is prior to Windows Server 2008. It is recommended to align the disk drive with an offset of 1,024 KB and to format the disk drive with a file allocation unit (cluster) size of 8 KB.
Refer to the Microsoft TechNet Library topic Disk Partition Alignment Best Practices for SQL Server to perform partition alignment, assign drive letters, and assign the file allocation unit size using Diskpart.exe.
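A minimal sketch of this Diskpart.exe procedure, driven from PowerShell, is shown below; the disk number, drive letter, and script path are examples only:

    # Build a diskpart script that aligns the partition at 1,024 KB and
    # formats with an 8 KB file allocation unit, then apply it
    $script = "select disk 1",
              "create partition primary align=1024",
              "assign letter=F",
              "format fs=ntfs unit=8k quick"
    $script | Set-Content C:\align.txt
    diskpart /s C:\align.txt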
Creating a template virtual machine
Converting a virtual machine into a template removes the virtual machine. Back up the virtual machine, because it may be destroyed during template creation.
Create a hardware profile and a Guest Operating System profile when creating the template, and use these profiles to deploy the virtual machines. Refer to the Microsoft TechNet Library topic How to Create a Virtual Machine Template.
Deploying virtual machines from the template virtual machine
The deployment wizard enables you to save the PowerShell scripts and reuse them to deploy other virtual machines with the same configuration. Refer to the Microsoft TechNet Library topic How to Deploy a Virtual Machine.
Summary
This chapter presented the required steps to deploy and configure the various
aspects of the VSPEX solution, including the physical and logical components. At this
point, the VSPEX solution is fully functional.
Chapter 6
Verifying the Solution
This chapter presents the following topics:
 Overview
 Post-installation checklist
 Deploying and testing a single virtual server
 Verifying the redundancy of the solution components
Overview
This chapter provides a list of items to review after configuring the solution. The goal
of this chapter is to verify the configuration and functionality of specific aspects of
the solution, and ensure the configuration meets core availability requirements.
Complete the tasks listed in Table 27.
Table 27. Tasks for testing the installation
Post-installation checklist:
 Verify that sufficient virtual ports exist on each Hyper-V host virtual switch. (Reference: Hyper-V: How many network cards do I need?)
 Verify that each Hyper-V host has access to the required Cluster Shared Volumes/CIFS shares and VLANs. (Reference: Using a VNXe System with Microsoft Windows Hyper-V)
 Verify that the Live Migration interfaces are configured correctly on all Hyper-V hosts. (Reference: Virtual Machine Live Migration Overview)
Deploying and testing a single virtual server:
 Deploy a single virtual machine by using the System Center Virtual Machine Manager (SCVMM) interface. (Reference: Deploying Hyper-V Hosts Using Microsoft System Center 2012 Virtual Machine Manager)
Verifying redundancy of the solution components:
 Restart each storage processor in turn, and ensure that storage connectivity is maintained. (Reference: N/A)
 Disable each of the redundant switches in turn, and verify that the Hyper-V host, virtual machine, and storage array connectivity remains intact. (Reference: vendor documentation)
 On a Hyper-V host that contains at least one virtual machine, restart the host and verify that the virtual machine can successfully migrate to an alternate host. (Reference: Creating a Hyper-V Host Cluster in VMM Overview)
Post-installation checklist
The following configuration items are critical to the functionality of the solution. On each Windows Server, verify the following items prior to deployment into production:
 The VLAN for virtual machine networking is configured correctly.
 The storage networking is configured correctly.
 Each server can access the required Cluster Shared Volumes/Hyper-V SMB shares.
 A network interface is configured correctly for Live Migration.
Deploying and testing a single virtual server
Deploy a virtual machine to verify that the solution functions as expected. Verify that the virtual machine is joined to the applicable domain, has access to the expected networks, and that it is possible to log in to it.
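These checks can also be scripted; the virtual machine name below is a placeholder:

    # Confirm network reachability and domain membership of the test virtual machine
    Test-Connection -ComputerName TestVM01 -Count 2
    Invoke-Command -ComputerName TestVM01 -ScriptBlock {
        (Get-WmiObject Win32_ComputerSystem).Domain
    }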
Verifying the redundancy of the solution components
To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failures. The steps apply to both block and file environments.
Block and file environments
Complete the following steps to restart each VNXe storage processor in turn and verify that connectivity to Hyper-V datastores is maintained throughout each restart:
1. Log in to SP A with administrator credentials.
2. Restart SP A by using the command: svc_shutdown -r
3. During the restart cycle, check for the presence of datastores on the Windows Server Hyper-V hosts.
4. When the cycle completes, log in to SP B and restart it by using the same command.
5. On the host side, enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
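The datastore check in step 3 can be performed with the failover clustering and Hyper-V cmdlets; for example:

    # Run on a cluster node while each SP restarts
    Get-ClusterSharedVolume | Format-Table Name, State, OwnerNode
    Get-VM | Format-Table Name, State   # virtual machines should remain Running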
Alternatively, log in to the Unisphere console and restart the SPs using these steps:
1. Navigate to Settings > Service System, enter the service password, and click OK.
2. In the System Components pane at the upper left of the dialog window, select Storage Processor SPA.
3. In the Service Actions pane at the left center, select Reboot, and then click Execute service action.
4. When the reboot is complete, repeat steps 2 and 3 for SP B.
5. On the host side, enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
Chapter 7
System Monitoring
This chapter presents the following topics:
 Overview
 Key areas to monitor
 VNXe resources monitoring guidelines
Overview
System monitoring of the VSPEX environment is the same as monitoring any core IT
system; it is a relevant and core component of administration. The monitoring levels
involved in a highly virtualized infrastructure such as a VSPEX environment are
somewhat more complex than a purely physical infrastructure, as the interaction and
interrelationships between various components can be subtle and nuanced.
However, those who are experienced in administering physical environments should
be familiar with the key concepts and focus areas. The key differentiators are
monitoring at scale and the ability to monitor end-to-end systems and data flows.
The following business requirements drive the need for proactive, consistent
monitoring of the environment:
 Stable, predictable performance
 Sizing and capacity needs
 Availability and accessibility
 Elasticity: the dynamic addition, subtraction, and modification of workloads
 Data protection
If self-service provisioning is enabled in the environment, the ability to monitor the
system is more critical because clients can generate virtual machines and workloads
dynamically. This can adversely affect the entire system.
This chapter provides the basic knowledge necessary to monitor the key components
of a VSPEX Proven Infrastructure environment. Additional resources are included at
the end of this chapter.
Key areas to monitor
Because VSPEX Proven Infrastructures comprise end-to-end solutions, system monitoring includes three discrete but highly interrelated areas:
 Servers, including virtual machines and clusters
 Networking
 Storage
This chapter focuses primarily on monitoring key components of the storage infrastructure, the VNXe array, but briefly describes other components as well.
Performance baseline
When a workload is added to a VSPEX deployment, server, storage, and networking resources are consumed. As additional workloads are added, modified, or removed, resource availability and, more importantly, capabilities change, which impacts all other workloads running on the platform. Customers should fully understand their workload characteristics on all key components prior to deploying them on a VSPEX platform; this is a requirement to correctly size resource utilization against the defined reference virtual machine.
Deploy the first workload, and then measure the end-to-end resource consumption
along with platform performance. This removes the guesswork from sizing activities
and ensures the initial assumptions were valid. As additional workloads are
deployed, reevaluate resource consumption and performance levels to determine
cumulative load and impact on existing virtual machines and their application
workloads. Adjust resource allocation accordingly to ensure that any oversubscription
does not negatively impact overall system performance. Run these baselines consistently to ensure that the platform as a whole, and the virtual machines themselves, operate as expected.
The following components make up the critical areas that affect overall system performance:
Servers
The key resources to monitor from a server perspective include:
 Processors
 Memory
 Disk (local, NAS, and SAN)
 Networking
Monitor these areas from both a physical host level (the hypervisor host level) and
from a virtual level (from within the guest virtual machine). Depending on your
operating system, there are tools available to monitor and capture this data. For
example, if your VSPEX deployment uses Windows servers as the hypervisor, you can
use Windows perfmon to monitor and log these metrics. Follow your vendor’s
guidance to determine performance thresholds for specific deployment scenarios,
which can vary greatly depending on the application.
Detailed information about this tool is available from the Microsoft TechNet Library
topic Using Performance Monitor. Keep in mind that each VSPEX Proven Infrastructure
provides a guaranteed level of performance based on the number of reference virtual
machines deployed and their defined workload.
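For example, a few key host-level counters can be sampled and logged with the Get-Counter cmdlet. The counter paths below are for an English-language system, and the set shown is illustrative:

    # Sample CPU, memory, and disk latency counters every 15 seconds
    Get-Counter -Counter @(
        '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time',
        '\Memory\Available MBytes',
        '\LogicalDisk(_Total)\Avg. Disk sec/Transfer'
    ) -SampleInterval 15 -MaxSamples 4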
Networking
Ensure that there is adequate bandwidth for networking communications. This includes monitoring network loads at the server and virtual machine level, at the fabric (switch) level, and, if network file or block protocols such as NFS, CIFS, SMB, and iSCSI are implemented, at the storage level. From the server and virtual machine level, the monitoring tools mentioned previously provide sufficient metrics to analyze flows into and out of the servers and guests. Key items to track include aggregate throughput or bandwidth, latencies, and I/O sizes. Capture additional data from network card or HBA utilities.
From the fabric perspective, tools that monitor switching infrastructure vary by vendor. Key items to monitor include port utilization, aggregate fabric utilization, processor utilization, queue depths, and inter-switch link (ISL) utilization. Networking storage protocols are discussed in the following section. For detailed monitoring documentation, refer to your hypervisor or operating system vendor.
EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to
200 Virtual Machines Enabled by EMC VNXe3200 and EMC Data Protection
Proven Infrastructure Guide
119
System Monitoring
Storage
Monitoring the storage aspect of a VSPEX implementation is crucial to maintaining the overall health and performance of the system. Fortunately, the tools provided with the VNXe storage arrays provide an easy yet powerful way to gain insight into how the underlying storage components are operating. For both block and file protocols, there are several key areas to focus on, including:
 Capacity
 IOPS
 Latency
 SP utilization
 CPU
 Memory
 Fabric/network interfaces: throughput in, throughput out
Additional considerations (though primarily from a tuning perspective) include:
 I/O size
 Workload characteristics
 Cache utilization
These factors are outside the scope of this document; however, storage tuning is an essential component of performance optimization. EMC offers additional guidance on the subject through EMC Online Support in the EMC VNX Unified Best Practices for Performance-Applied Best Practices Guide.
VNXe resources monitoring guidelines
Monitor the VNXe with the Unisphere GUI, which is accessible by opening an HTTPS session to the SP IP address. The VNXe series is a unified storage platform that provides both block storage and file storage access through a single entity. Monitoring is divided into two parts:
 Monitoring block storage resources
 Monitoring file storage resources
Monitoring block storage resources
This section explains how to use Unisphere to monitor block storage resource usage, which includes capacity, IOPS, and latency.
Capacity
In Unisphere, two panels display capacity information. These panels provide a quick assessment of the overall free space available within the configured LUNs and underlying storage pools. For block, sufficient free storage should remain in the configured pools to allow for anticipated growth and activities such as snapshot creation. It is essential to have a free buffer, especially for thin LUNs, because out-of-space conditions usually lead to undesirable behaviors on affected host systems. As such, configure threshold alerts to warn storage administrators when capacity use rises above 80 percent. In that case, auto-expansion may need to be adjusted or additional space allocated to the pool. If LUN utilization is high, reclaim space or allocate additional space.
To set capacity threshold alerts for a specific pool, complete the following steps:
1. Select the pool and click the Details button.
2. In Storage Pool Utilization, choose a value for the Alert Threshold of this pool, as shown in Figure 40.
3. Examine the key metrics: Used Space, Available Space, and Subscription.
Figure 40. Storage Pool Alert settings
You can find additional settings relevant to space management under the Settings
tab as shown in Figure 41. Snapshot Auto-Delete settings should be enabled if this
feature is in use.
Figure 41. Storage Pool Snapshot settings
To drill down into capacity for block, complete the following steps:
1. In Unisphere, select the VNXe system to examine.
2. Select Storage > Storage Configuration > Storage Pools. This opens the Storage Pools panel.
3. Examine the columns titled Percent Used, Available Space, and Subscription, as shown in Figure 42.
Figure 42. Storage Pools panel
Monitor capacity at the storage pool and LUN levels:
1. Click Storage and select LUNs. This opens the LUN panel.
2. Select a LUN to examine and click Details. This displays the detailed LUN information, as shown in Figure 43.
3. Verify the LUN capacity details in the dialog box. LUN Size is the total virtual capacity available to the LUN; this may not all be available if the pool is oversubscribed. Allocated capacity is the total physical capacity currently used by the LUN.
Figure 43. LUN Properties dialog box
Examine capacity alerts, along with all other system events, by clicking on the Alerts
hot-link at the lower-left of the display. Alerts can also be accessed by clicking
System, and then selecting System Alerts, as shown in Figure 44.
Figure 44. System Panel
There are also several new features to help administrators monitor VNXe performance, capacity, and health, including the interactive System Health panel, which provides detailed component information simply by clicking it, as shown in Figure 45.
Figure 45. System Health panel
IOPS
The effects of an I/O workload serviced by an improperly configured storage system, or one whose resources are exhausted, can be felt system-wide. Monitoring the IOPS that the storage array services includes looking at metrics from the host ports in the SPs, along with requests serviced by the back-end disks. VSPEX solutions are carefully sized to deliver a certain performance level for a particular workload level. Ensure that IOPS do not exceed design parameters.
Statistical reporting for IOPS (along with other key metrics) can be examined by opening the System panel: select VNXe > System > System Performance. Monitor the statistics online, or offline using the Unisphere Analyzer, which requires a license.
Another metric to examine is Total Bandwidth (MB/s). An 8 Gbps front-end SP port can process 800 MB per second. The average bandwidth must not exceed 80 percent of the link bandwidth under normal operating conditions.
IOPS delivered to the LUNs are often more than those delivered by the hosts. This is
particularly true with thin LUNs, as there is additional metadata associated with
managing the I/O streams. Unisphere Analyzer shows the IOPS on each LUN, as
shown in Figure 46.
Figure 46. IOPS on the LUNs
Certain RAID levels also impart write penalties that create additional back-end IOPS. Examine the IOPS delivered to (and serviced from) the underlying physical disks, which can also be viewed in the Unisphere Analyzer, as shown in Figure 47. Table 28 shows the rules of thumb for drive performance.
Table 28. Rules of thumb for drive performance
 15k rpm SAS drives: 180 IOPS
 10k rpm SAS drives: 150 IOPS
 NL-SAS drives: 90 IOPS
Figure 47. IOPS on the drives
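These rules of thumb support a quick back-of-the-envelope check of a pool's capability. The sketch below uses Pool 1 from Table 21 and assumes a RAID 5 write penalty of 4 and a 30 percent write mix; both assumptions are illustrative and should be replaced with your measured workload:

    # Estimate host IOPS a pool can sustain before cache benefits
    $drives       = 40     # Pool 1: 10k rpm SAS drives
    $iopsPerDrive = 150    # rule of thumb from Table 28
    $writePenalty = 4      # assumed RAID 5 (RAID 1/0 = 2, RAID 6 = 6)
    $writeRatio   = 0.3    # assumed 30 percent writes
    $backEndIops  = $drives * $iopsPerDrive                  # 6,000 back-end IOPS
    $hostIops     = [math]::Floor($backEndIops / ((1 - $writeRatio) + $writeRatio * $writePenalty))
    $hostIops                                                # roughly 3,150 host IOPS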
Latency
Latency is the by-product of delays in processing I/O requests. This section focuses on monitoring storage latency, specifically block-level I/O. Using procedures similar to those in the previous section, view the latency at the LUN level, as shown in Figure 48. (Notice that the LUN filter has been applied.)
Figure 48. Latency on the LUNs
Latency can be introduced anywhere along the I/O stream, from the application layer,
through the transport, and out to the final storage devices. Determining precise
causes of excessive latency requires a methodical approach.
Excessive latency in an FC network is uncommon. Unless there is a defective
component such as an HBA or cable, delays introduced in the network fabric layer are
normally a result of misconfigured switching fabrics. An overburdened storage array
typically causes latency within an FC environment. Focus primarily on the LUNs and
the underlying disk pool’s ability to service I/O requests. Requests that cannot be
serviced are queued, which introduces latency.
The same paradigm applies to Ethernet-based protocols such as iSCSI. However, additional factors come into play because these storage protocols use Ethernet as the underlying transport. Isolating the storage network traffic (either physically or logically) is a best practice, preferably with some implementation of Quality of Service (QoS) in a shared/converged fabric. If network problems are not introducing excessive latency, examine the storage array. In addition to overburdened disks, excessive SP utilization can also introduce latency. SP utilization levels greater than 80 percent indicate a potential problem. Background processes such as deduplication, auto-expansion/restriping, data tiering movement, and snapshots all compete for SP resources. Monitor these processes to ensure they do not cause SP resource exhaustion. Possible mitigation techniques include staggering background jobs, scheduling tiering to occur during off-hours, adding more physical resources, and rebalancing the I/O workloads. Growth may also mandate moving to more powerful or additional hardware.
For SP metrics, examine data under the System Performance tab of the Unisphere
Analyzer, as shown in Figure 49. Review metrics such as Average CPU Utilization %
(shown), Average Disk Response Time, and Average Disk Queue Length.
Figure 49. SP CPU Utilization
High values for any of these metrics indicate the storage array is under stress and likely requires mitigation. Table 29 shows the thresholds recommended by EMC.
Table 29. Best practice for performance monitoring
 Utilization threshold: 80%
 Response time threshold: 20 ms
 Queue length threshold: 10
Monitoring file storage resources
File-based protocols such as NFS and CIFS/SMB involve additional management processes beyond those for block storage. Unlike VNX systems, the VNXe3200 features integrated file services and does not require Data Movers to provide that functionality. On the VNXe3200, the storage processors intercept file protocol requests from the client side and convert the requests to the appropriate SCSI block semantics on the array side. The additional protocols and translation introduce additional load and monitoring requirements, such as SP network link utilization, memory utilization, and SP processor utilization.
To examine file metrics in the System Performance panel, select the appropriate
metric to monitor. In this example, Total Network Bandwidth is selected, as shown in
Figure 50. Usage levels in excess of 80 percent indicate potential performance
concerns and likely require mitigation through SP reconfiguration, additional physical
resources such as additional network ports, and analysis of the current network
topology.
Figure 50. VNXe file statistics
Capacity
The System Capacity panel provides a quick analysis of overall space utilization, as
shown in Figure 51.
Figure 51. System Capacity panel
To monitor capacity at the pool and file system level, complete the following steps:
1. Select VNXe > Storage > File Systems. The File Systems panel appears, as shown in Figure 52.
Figure 52. File Systems panel
2. Select a file system to examine, click Details, and then select Capacity, which displays detailed file system information, as shown in Figure 53.
3. Similar to the Capacity tab for block, examine the key metrics such as File System Size, Thin status, Used, Free, Allocated space, and Pool Size Used.
Figure 53. File System Capacity panel
IOPS
In addition to block storage IOPS, Unisphere also provides the ability to monitor file
system IOPS. Select VNXe > System > System Performance. Then select Total File
System throughput/IOPS, as shown in Figure 54.
Figure 54. System Performance panel displaying file metrics
Summary
Consistent and thorough monitoring of the VSPEX Proven Infrastructure is a best practice. Having baseline performance data helps to identify problems, while monitoring key system metrics helps to ensure that the system functions optimally and within designed parameters. The monitoring process can extend through integration with automation and orchestration tools from key partners such as Microsoft with its System Center suite of products.
Appendix A
Bill of Materials
This appendix presents the following topic:
 Bill of materials
Bill of materials
Table 30 lists the hardware used in this solution.
Note: EMC recommends that you use 10 GbE, or an equivalent 1 GbE, network infrastructure for these solutions, as long as the underlying requirements around bandwidth and redundancy are fulfilled.
Table 30. List of components used in the VSPEX solution for 200 virtual machines

Windows servers

CPU:
• 1 vCPU per virtual machine
• 4 vCPUs per physical core
• 200 vCPUs
• Minimum of 50 physical processor cores

Memory:
• 2 GB RAM per virtual machine
• 2 GB RAM reservation per Hyper-V host
• Minimum of 400 GB RAM, plus 2 GB per host

Network:
• Block: 2 x 10 GbE NICs per server; 2 HBAs per server
• File: 4 x 10 GbE NICs per server

Note: To implement Microsoft Hyper-V HA functionality and to meet the listed
minimums, the infrastructure should include at least one server beyond the number
needed to meet the minimum requirements.

Network infrastructure

Minimum switching capacity:
• Block:
  • 2 physical switches
  • 2 x 10 GbE ports per Windows server
  • 1 x 1 GbE port per storage processor for management
  • 2 ports per Windows server for the storage network
  • 2 ports per storage processor for storage data
• File:
  • 2 physical switches
  • 4 x 10 GbE ports per Windows server
  • 1 x 1 GbE port per storage processor for management
  • 2 x 10 GbE ports per storage processor for data

EMC Data Protection

Avamar and Data Domain: refer to Data Protection for EMC VSPEX Proven
Infrastructure.

EMC VNXe series storage array

Block:
• VNXe3200
• 1 x 1 GbE interface per storage processor for management
• 2 x 8 Gb FC interfaces per storage processor (FC)
• 2 x 10 GbE interfaces per storage processor (iSCSI)
• 65 x 600 GB 10k rpm 2.5-inch SAS drives
• 3 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares

File:
• VNXe3200
• 2 x 10 GbE interfaces per storage processor (CIFS/SMB)
• 1 x 1 GbE interface per storage processor for management
• 65 x 600 GB 10k rpm 2.5-inch SAS drives
• 3 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares

Shared infrastructure

In most cases, a customer environment already has infrastructure services such as
Active Directory and DNS configured. The setup of these services is beyond the
scope of this document.

If the solution is implemented without existing infrastructure, the following
minimum of additional servers is required:
• 2 physical servers
• 16 GB RAM per server
• 4 processor cores per server
• 2 x 1 GbE ports per server

Note: These services can be migrated into VSPEX post-deployment; however, they
must exist before VSPEX can be deployed.
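The CPU and memory minimums in Table 30 follow directly from the per-virtual-machine
figures. The short Python sketch below reproduces the arithmetic; the eight-host
cluster size is an assumption for illustration, not a requirement of the solution.

```python
import math

# Sizing arithmetic behind Table 30. The per-VM figures come from the table;
# the eight-host cluster size is an illustrative assumption.
virtual_machines = 200
vcpus_per_vm = 1
vcpus_per_physical_core = 4
ram_per_vm_gb = 2
ram_reservation_per_host_gb = 2
hosts = 8  # hypothetical cluster size

total_vcpus = virtual_machines * vcpus_per_vm                        # 200 vCPUs
physical_cores = math.ceil(total_vcpus / vcpus_per_physical_core)    # 50 cores
total_ram_gb = (virtual_machines * ram_per_vm_gb
                + hosts * ram_reservation_per_host_gb)

print(f"Minimum physical cores: {physical_cores}")
print(f"Minimum RAM: {total_ram_gb} GB (400 GB for VMs plus 2 GB per host)")
```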
Appendix B
Customer Configuration Data Sheet
This appendix presents the following topic:
Customer configuration data sheet ........................................................................138
Customer configuration data sheet
Before you start the configuration, gather customer-specific network and host
configuration information. The following tables help you assemble the required
network and host address, numbering, and naming information. This worksheet can
also serve as a "leave behind" document for future reference.
Cross-reference the VNXe File and Unified Worksheets to confirm customer
information.
Table 31. Common server information

Server name    Purpose                                  Primary IP
               Domain Controller
               DNS Primary
               DNS Secondary
               DHCP
               NTP
               SMTP
               SNMP
               System Center Virtual Machine Manager
               SQL Server
Table 32. Hyper-V server information

Server name    Purpose           Primary IP    Private net (storage) addresses
               Hyper-V Host 1
               Hyper-V Host 2
               …
Table 33. Array information

Array name
Admin account
Management IP
Storage pool name
Datastore name
Block          FC WWPN
               FCoE WWPN
               iSCSI IQN
               iSCSI port IP
File           CIFS server IP

Table 34. Network infrastructure information

Name                 Purpose    IP    Subnet mask    Default gateway
Ethernet Switch 1
Ethernet Switch 2
…

Table 35. VLAN information

Name    Network purpose                           VLAN ID    Allowed subnets
        Virtual machine networking, management
        iSCSI storage network (block)
        CIFS storage network (file)
        Live Migration (optional)
        Public (client access)
Table 36. Service accounts

Account    Purpose                         Password (optional; secure appropriately)
           Windows Server administrator
           Array administrator
           SCVMM administrator
           SQL Server administrator
Appendix C
Server Resources Component Worksheet
This appendix presents the following topic:
Server resources component worksheet ................................................................142
Server resources component worksheet
Table 37. Blank worksheet for determining server resources

                                          Server resources       Storage resources
Application                               CPU        Memory      IOPS    Capacity    Reference
                                          (virtual   (GB)                (GB)        virtual
                                          CPUs)                                      machines

           Resource requirements                                                     N/A
           Equivalent reference
           virtual machines

           Resource requirements                                                     N/A
           Equivalent reference
           virtual machines

           Resource requirements                                                     N/A
           Equivalent reference
           virtual machines

           Resource requirements                                                     N/A
           Equivalent reference
           virtual machines

Total equivalent reference virtual machines

Server customization
           Server component totals                                                   NA
Storage customization
           Storage component totals                                                  NA
           Storage component equivalent
           reference virtual machines                                                NA
Total equivalent reference virtual machines - storage
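To fill in the worksheet, each application's requirements are converted into
equivalent reference virtual machines. The Python sketch below illustrates one way
to automate that conversion; the reference VM profile shown (1 vCPU, 2 GB RAM,
25 IOPS, 100 GB) is an assumed example and should be replaced with the reference
workload characteristics defined for this solution.

```python
import math

# Illustrative sketch of the worksheet arithmetic: convert an application's
# resource requirements into equivalent reference virtual machines. The
# reference VM profile below is an assumed example, not a value mandated by
# this guide; substitute the profile defined for your sizing exercise.
REFERENCE_VM = {"cpu": 1, "memory_gb": 2, "iops": 25, "capacity_gb": 100}

def equivalent_reference_vms(requirements: dict) -> int:
    # Each resource dimension is divided by the reference VM's capability
    # and rounded up; the largest result drives the equivalent VM count.
    return max(
        math.ceil(requirements[key] / REFERENCE_VM[key]) for key in REFERENCE_VM
    )

# Hypothetical application: 8 vCPUs, 24 GB RAM, 200 IOPS, 600 GB of capacity
app = {"cpu": 8, "memory_gb": 24, "iops": 200, "capacity_gb": 600}
print(f"Equivalent reference virtual machines: {equivalent_reference_vms(app)}")
# Memory dominates here: ceil(24 / 2) = 12
```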
Appendix D
References
This appendix presents the following topic:
References .............................................................................................................144
References
EMC documentation
The following documents, available on EMC Online Support, provide additional and
relevant information. If you do not have access to a document, contact your EMC
representative.
• EMC Storage Integrator (ESI) 2.1 for Windows Suite
• EMC VNX Virtual Provisioning Applied Technology
• VNX FAST Cache: A Detailed Review
• Introduction to EMC XtremCache
• VNXe3200 Unified Installation Guide
• Using EMC VNX Storage with Microsoft Windows Hyper-V
• EMC VNX Unified Best Practices for Performance - Applied Best Practices Guide
• EMC Host Connectivity Guide for Windows
• EMC VNX Series: Introduction to SMB 3.0 Support
• Configuring and Managing CIFS on VNX
Other documentation

The following documents, located on the Microsoft website, provide additional and
relevant information:
• Installing the VMM Server
• Adding and Managing Hyper-V Hosts and Scale-Out File Servers in VMM
• How to Create a Virtual Machine Template
• Configuring a Remote Instance of SQL Server for VMM
• Installing Virtual Machine Manager
• Installing the VMM Administrator Console
• Installing a VMM Agent Locally on a Host
• Adding Hyper-V Hosts and Host Clusters, and Scale-Out File Servers to VMM
• How to Create a Virtual Machine with a Blank Virtual Hard Disk
• How to Deploy a Virtual Machine
• Install and Deploy Windows Server 2012 R2 and Windows Server 2012
• Use Cluster Shared Volumes in a Failover Cluster
• Hardware and Software Requirements for Installing SQL Server 2014
• Install SQL Server 2014
• How to Install a VMM Management Server
Appendix E
About VSPEX
This appendix presents the following topic:
About VSPEX ..........................................................................................................146
About VSPEX
EMC has joined forces with industry-leading providers of IT infrastructure to create
a complete virtualization solution that accelerates the deployment of cloud
infrastructure. Built with best-of-breed technologies, VSPEX enables faster
deployment, more simplicity, greater choice, higher efficiency, and lower risk.
Validation by EMC ensures predictable performance and enables customers to select
technology that uses their existing IT infrastructure while eliminating planning, sizing,
and configuration burdens. VSPEX provides a proven infrastructure for customers
looking to gain the simplicity that is characteristic of truly converged infrastructures,
while at the same time gaining more choice in individual solution components.
VSPEX solutions are proven by EMC, and packaged and sold exclusively by EMC
channel partners. VSPEX provides channel partners with more opportunity, faster
sales cycles, and end-to-end enablement. EMC and channel partners are working
together to deliver simple, efficient, and flexible private cloud infrastructure for
business.