Zero Data Loss Disaster Recovery for
Microsoft Exchange 2010
Enabled by EMC Unified Storage, EMC Replication Enabler for
Exchange 2010, Brocade End-to-End Network, Dell Servers, and
Microsoft Hyper-V
A Detailed Review
EMC Information Infrastructure Solutions
Abstract
This document describes a virtualized Microsoft Exchange 2010 disaster recovery solution designed, built, and tested by EMC in partnership with Microsoft, Brocade, and Dell. It highlights the benefits of leveraging EMC® Replication Enabler for Exchange 2010 to provide zero data loss SAN-based block-level synchronous replication as an alternative to Exchange 2010's native DAG asynchronous network log shipping replication.
October 2010
Copyright © 2010 EMC Corporation. All rights reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN
THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.
Part number: H7410.1
Table of Contents

Chapter 1: Introduction
    Overview
    Exchange 2010 Tested Solutions
        Partnership
        Virtualized Exchange 2010 solutions
        Supported virtualization platforms
    Executive summary
        Overview
    Solution overview
        Purpose
        Scope
        Audience

Chapter 2: Technology and Key Components
    Overview
        Topics
        Components
    Microsoft Exchange Server 2010
    Windows 2008 R2 Hyper-V
    EMC Replication Enabler for Exchange Server 2010
        Overview
        REE components
        REE benefits
    EMC Replication Manager
        Overview
        How Replication Manager works with Exchange 2010
        Using RM with VSS technology
    EMC Unified Storage
        EMC CLARiiON family overview
        Why use CLARiiON with Microsoft Hyper-V?
    EMC MirrorView
        Overview
        Flexible choices for deploying MirrorView
        CLARiiON MirrorView/S configuration with REE
    EMC SnapView
    Brocade network load balancing
        Overview
        Brocade ServerIron ADX
    Brocade SAN and LAN/WAN network infrastructure
        Overview
        Brocade 300 SAN switch
        Brocade 825 Dual Port 8G FC HBA
    Brocade network infrastructure
        Brocade FastIron Ethernet switch
        Brocade NetIron routers
    Dell PowerEdge R910 Servers
        Dell PowerEdge R910 server
        Other key Dell technologies enabling robust virtualization
        Fail-safe virtualization
        Embedded system management
    Hardware used in this solution
        Storage
        Servers
        LAN and SAN Switches
    Software used in this solution

Chapter 3: Solution Design
    Overview
    Solution design methodology
    Key solution requirements
    Exchange 2010 design architecture with EMC synchronous replication by REE
    Database availability group (DAG) design with REE
        Planning your DAG deployment
        Understanding the concept of active and passive copies with REE
        Site resiliency considerations and DAG design with REE
        Leveraging virtualization for Exchange deployment
        Creating an effective HA DAG design with REE
        Virtualized Exchange roles
    Identifying Hyper-V host and VM requirements
        Overview
        Identifying Exchange user profile type requirements
        Identifying CPU requirements for Mailbox server failure contingency
        Calculate the CPU capacity of the Hyper-V Root server
        Determine the CPU capacity of the VMs

Chapter 4: Storage Design
    Overview
    Methodology for sizing Exchange storage
    Exchange storage design using EMC building block methodology
        What is a building block?
        Step 1. Identify user requirements
        Step 2. Identify Exchange VM requirements
        Step 3. Identify and calculate storage requirements based on IOPS and capacity
        Step 4. Finalize the Exchange VM building-block
    Storage design summary
        Total storage requirements summary
        LUN configurations
        Mailbox server configuration summary

Chapter 5: LAN and SAN Architecture
    Introduction
        Overview
        Topics
    SAN and LAN/WAN configuration
        SAN configuration
        LAN configuration
    Network load balancing
        Overview
        Exchange RPC Client Access and Address Book services
        Network traffic
        Configuring Layer 2 Active/Hot Standby Redundancy
        Network configuration for DAG
    Best practices planning for network load balancing
        Overview
        Affinity
        LB-created cookie
        Source IP port persistence
        Monitoring the Outlook client configuration

Chapter 6: Exchange 2010 backup with EMC Replication Manager
    Introduction
        Overview
        Topics
    Replication Manager design
        RM functionality overview with Exchange 2010
        Recommendations for best RM performance
    Preparing your Exchange 2010 environment for backups with Replication Manager
        RM design considerations
        Backup configuration in this solution
    Rapid restore using Replication Manager
        Roll-forward recovery
        Point-in-time recovery

Chapter 7: Best Practices Planning
    Overview
    Exchange 2010 best practices
    Optimizing SAN best practices
        Reliability considerations
        Performance considerations
        Additional considerations
    Exchange Mailbox server optimization for EMC storage

Chapter 8: Solution Validation
    Introduction
        Overview
        Topics
    Validation methodology and tools
        Overview
        Jetstress 2010
        Loadgen
    Exchange storage validation with Jetstress
        Overview
        Test configuration
        Jetstress test results and CX4-480 performance
        CX4-480 performance with Exchange 2010 Jetstress
    Database replication process for DAG in a third-party replication mode
        Overview
        Initial synchronization performance
    Environment validation with Loadgen
        Overview
        Loadgen test preparation
        Loadgen configuration for peak load
        How to validate test results
        Validation tests scenarios
    Test 1 – Normal operating condition – peak load
        Objectives
        Configuration
        Performance results and analysis
    Test 2 – Host failure within a site
        Objectives
        Configuration
        Performance results and analysis
    Test 3 – Site failure simulation
        Objectives
        Configuration
        Performance results and analysis
    In-site database switchover with EMC Replication Enabler for Exchange 2010
    Datacenter switchover validation
        Datacenter switchover process
        Step 1. Activating Mailbox servers
        Step 2. Activating Client Access servers
    Validating primary datacenter service restoration (failback)
        Overview
        Restoring storage
        Mailbox server role failback

Chapter 9: Conclusion

Appendixes
    Appendix A: References
        White papers
        Product documentation
        Other documentation
    Appendix B: REE PowerShell Cmdlets Reference
        Overview
        Parameters to REE Cmdlets
        List of REE cmdlets
List of Figures

Figure 1. Volume Shadow Copy Service components
Figure 2. Brocade ServerIron ADX load balancer with the Exchange Server 2010
Figure 3. Exchange 2010 DAG with EMC Replication Enabler
Figure 4. Shared storage with Exchange 2010 DAG in third-party replication mode shown in Windows Failover Cluster Manager
Figure 5. Exchange 2010 DAG members with third-party replication mode shown in Windows Failover Cluster Manager
Figure 6. Database activation preference order settings in the Exchange Management Console
Figure 7. Two DAGs deployment model (physical)
Figure 8. Two DAGs deployment model (virtualization)
Figure 9. Exchange 2010 HA and site resiliency with Hyper-V and REE
Figure 10. Fully virtualized Exchange 2010 environment
Figure 11. Virtualized datacenter environment with Exchange 2010 reference architecture
Figure 12. Database configuration with DAG in third-party replication mode
Figure 13. The solution's network zoning layout
Figure 14. Jetstress test results for Exchange 2010 on a CLARiiON CX4-480
List of Tables

Table 1. REE benefits compared to native DAG features
Table 2. CLARiiON CX4 storage systems features
Table 3. EMC Unified Storage CX4-480 (integrated CLARiiON CX4-480)
Table 4. Dell PowerEdge System
Table 5. LAN and SAN switches
Table 6. Software used in this solution
Table 8. Message mailbox requirements
Table 9. Mailbox CPU requirements
Table 10. VM CPU and memory configurations summary
Table 12. Exchange VM requirements
Table 13. Mailbox size on disk summary
Table 14. Database capacity requirements summary
Table 15. Database LUN size requirements
Table 16. Log size requirements
Table 17. Log LUN size requirements
Table 18. Building block summary
Table 19. Storage capacity requirements summary
Table 20. Disk requirements summary
Table 21. Exchange server configurations for this solution
Table 22. Jetstress test results summary
Table 23. Initial MirrorView/S synchronization performance summary
Table 24. Primary counters and validation criteria
Table 25. Loadgen validation - test scenarios
Table 26. Validation of expected load for Test 1
Table 27. Performance results for Loadgen in Test 1
Table 28. Validation of expected load for Test 2
Table 29. Performance results for Loadgen Test 2
Table 30. Validation of the expected load for Test 3
Table 31. Performance results for Loadgen Test 3
Table 32. Common REE cmdlet parameters
Table 33. REE cmdlets
Chapter 1: Introduction
Overview
This chapter introduces this white paper and its contents. It includes the following topics:
• Exchange 2010 Tested Solutions
• Executive summary
• Solution overview
Exchange 2010 Tested Solutions
Partnership
This paper provides a solution designed by EMC in partnership with Microsoft,
Brocade, and Dell as part of the Exchange 2010 Tested Solutions venture.
Exchange 2010 Tested Solutions is a joint venture between Microsoft and
participating server, storage, and network infrastructure partners to examine
common customer scenarios and key design decision points facing customers who
are planning to deploy Exchange 2010. Through a series of solution white papers, this initiative provides examples of well-designed, cost-effective Exchange 2010 solutions deployed on the latest available hardware configurations offered by server and storage partners.
Virtualized Exchange 2010 solutions

As part of this venture, EMC, in partnership with Microsoft, Brocade, and Dell, has designed, built, and validated Exchange 2010 solutions that can help customers make decisions about deploying virtualized Exchange 2010 in their environments. The solution described in this white paper demonstrates how deploying Exchange in a virtualized environment can help customers realize the long-term benefits of their server and storage infrastructure investment.
Supported virtualization platforms

Leveraging a Hyper-V virtualization platform with high-performance servers and shared Brocade Ethernet and SAN resources provides greater user resource consolidation with more flexible disaster recovery (DR) choices. Microsoft fully supports Exchange on both the Hyper-V platform and VMware technology, as well as on all virtualization products that comply with the Microsoft Server Virtualization Validation Program (SVVP). For Exchange Server 2010, Microsoft supports all server roles in virtualized environments, except for the Unified Messaging role.
For more details about the:
• SVVP program, visit Microsoft at http://www.windowsservercatalog.com/svvp.aspx
• Exchange 2010 requirements in virtualization deployments, visit http://technet.microsoft.com/en-us/library/aa996719.aspx
Executive summary
Overview
Exchange Server 2010 introduces a concept called database availability group
(DAG) to provide high availability (HA) for Exchange Mailbox databases. A DAG is a
set of mailbox servers that replicate to one another, providing database-level
protection from unplanned outages.
Native DAG uses host-based network replication and a subset of Windows failover
clustering technologies to provide HA and site resiliency. In Exchange 2010, DAGs
replaced earlier Exchange and Windows failover clustering based technologies used
for HA and site resiliency, such as single copy clusters (SCC), continuous cluster
replication (CCR), and standby continuous replication (SCR).
These continuous protection options require more copies of the Exchange database than storage-based solutions and can be challenging to set up and replicate. The removal of single copy clusters affects the native HA and DR solutions previously available with Microsoft Exchange Server.
To help address storage-based HA and DR technologies, Microsoft Exchange Server 2010 includes an application programming interface (API) for integrating third-party replication solutions into the DAG framework. When enabled, third-party replication support disables the native network-based log shipping mechanism used by DAGs. Storage-based replication technologies can then protect the Exchange database copies specified within the Exchange 2010 environment. EMC's implementation of the third-party replication API framework also allows local shared clustering functionality, which simplifies local failover/failback actions and reduces the amount of storage needed for server HA.
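To make this mode concrete, the sketch below shows how an administrator might create a DAG with third-party replication mode enabled from the Exchange Management Shell. This is a minimal illustration, not a step from this solution's runbook: the DAG, witness, and server names are hypothetical, and REE-specific setup is not shown.

    # Create a DAG with third-party replication mode enabled; this mode can
    # only be selected when the DAG is created. All names are hypothetical.
    New-DatabaseAvailabilityGroup -Name "DAG1" -WitnessServer "FSW01" `
        -WitnessDirectory "C:\DAG1FSW" -ThirdPartyReplicationMode Enabled

    # Add the Mailbox servers (for example, one per site) to the DAG.
    Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "MBX01"
    Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "MBX02"

    # Confirm that third-party replication mode is active.
    Get-DatabaseAvailabilityGroup -Identity "DAG1" |
        Format-List Name,ThirdPartyReplicationMode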
As an alternative to native Exchange 2010 network-based DAG replication, EMC developed a free software utility called EMC® Replication Enabler for Microsoft Exchange Server 2010 (REE). This tool uses block-level synchronous storage-based replication over the existing Fibre Channel (FC) storage area networks (SANs) that are already part of the network infrastructure in most customer datacenters. Because replication is synchronous, it is lossless: writes are committed at the target as they replicate.
REE integrates with the DAG third-party replication API to enable both shared local
storage as well as synchronously replicated storage with MirrorView™ and
RecoverPoint as database "copies" within a DAG. The ability to use shared local
storage as well as synchronously replicated remote copies helps to enable HA and
site resiliency functionality similar to SCC and geographically dispersed cluster
capabilities available with previous versions of Exchange. Using the array-based
replication solution eases the strain on the network bandwidth while preserving the
scalability, reliability, and performance benefits of the storage array.
Solution overview
Purpose
The purpose of this white paper is to provide customers with information about how to design, build, deploy, test, and validate a virtualized metropolitan Exchange 2010 synchronous replication solution on Microsoft's Hyper-V platform. The solution leverages SAN-based synchronous replication with EMC unified storage to achieve zero data loss and the lowest possible recovery point objective (RPO). The document includes details about the solution's design and provides performance results recorded during the validation phase.
This white paper also describes how to simplify Exchange virtual deployments by
leveraging the building-block approach, which helps to deploy virtualized Exchange
solutions more easily and effectively.
This document's objectives are to:
• Provide guidance and best practice methodologies for designing virtualized multisite Exchange 2010 solutions.
• Explain how to design an EMC Unified Storage array to support synchronous replication with EMC Replication Enabler for Exchange 2010 (REE).
• Describe how to design and configure Exchange 2010 DAGs across multiple sites using REE.
• Demonstrate how highly reliable Brocade LAN and SAN infrastructures support metropolitan Exchange 2010 deployments.
• Document how to use Replication Manager to back up Exchange databases and logs.
• Provide guidance and best practice methodologies for deploying Brocade load balancers in an Exchange 2010 solution.
• Validate the solution using Microsoft tools such as Jetstress and Loadgen.
• List the steps and procedures for performing a switchover and failover using REE.

Scope
The scope of this solution is to design an Exchange 2010 DR solution by leveraging
Exchange 2010 DAG in a third-party synchronous replication mode with EMC
Replication Enabler for Exchange 2010.
The scope also covers the solution design and validation process. Some of the steps outlined in each design and validation phase are high-level in nature. You should read this information in conjunction with the Exchange 2010 documentation referenced at the Microsoft TechNet website
(http://technet.microsoft.com/en-us/default.aspx).
Note that actual customer configurations will differ; use the information contained in this white paper only as a reference for building similar solutions.
Before implementing a solution in a production environment, consider the size and
complexity of that environment. EMC recommends that you consult EMC Consulting services or EMC Microsoft Solutions consultants (MSCs) for onsite assistance with planning, installation, and integration requirements.
The information contained in this document is not intended to replace existing,
detailed product implementation guides for deploying Exchange 2010, SAN, and
storage infrastructures.
Audience
This white paper is intended for Information Technology professionals who are
involved in the evaluation, architecture, deployment, and daily management of data
center computer systems, storage systems, and Microsoft Exchange infrastructure.
Chapter 2: Technology and Key Components
Overview
Topics

This chapter identifies and briefly describes the technologies and components used in this solution. It contains the following topics:
• Microsoft Exchange Server 2010
• Windows 2008 R2 Hyper-V
• EMC Replication Enabler for Exchange Server 2010
• EMC Replication Manager
• EMC Unified Storage
• EMC MirrorView
• EMC SnapView
• Brocade network load balancing
• Brocade SAN and LAN/WAN network infrastructure
• Dell PowerEdge R910 Servers
• Hardware used in this solution
• Software used in this solution

Components
This solution integrates the latest software and hardware technologies from Microsoft, Brocade, Dell, and EMC. The components for this solution include:
• Microsoft Exchange Server 2010
• Microsoft Hyper-V for virtualization and consolidation of the Exchange environment
• EMC Unified Storage
• EMC Replication Enabler for Exchange 2010 for synchronous storage-based replication
• EMC Replication Manager for Exchange backups
• Brocade load-balancing application delivery controllers
• Brocade SAN and LAN/WAN infrastructure for high-speed performance
• Dell PowerEdge servers for high performance and Exchange environment consolidation
This chapter describes each of these components in more detail.
Microsoft Exchange Server 2010
Microsoft Exchange Server 2010 is an enterprise e-mail and communication system
that allows businesses and customers to collaborate and share information.
With Exchange 2010, Microsoft presents a new, unified approach to high availability (HA) and disaster recovery (DR) by introducing features such as DAGs and online mailbox moves. Customers can now implement Mailbox servers in mailbox resiliency configurations with database-level replication and failover.
A DAG is the base component of the HA and site resiliency framework built into Exchange 2010. A DAG is a group of up to 16 Mailbox servers that host a set of databases and provide automatic database-level recovery from failures that affect individual servers or databases. Exchange 2010 replaces the on-site data replication (CCR) and off-site data replication (SCR) introduced in Exchange 2007, combining and integrating them into a single framework called the DAG. Once administrators add servers to a DAG, they can add replicated database copies incrementally, as shown in the example below. Exchange 2010 switches between these copies automatically as needed to maintain availability.
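As an illustration of this native DAG workflow (a minimal sketch; the database and server names are hypothetical), a replicated copy can be added and checked from the Exchange Management Shell:

    # Add a second copy of database DB01 on server MBX02, with activation
    # preference 2 (it activates only if the preferred copy is unavailable).
    Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "MBX02" -ActivationPreference 2

    # Review the health and replication status of all copies of DB01.
    Get-MailboxDatabaseCopyStatus -Identity "DB01"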
Major improvements to the application database structure, together with reduced I/O, allow support for a larger variety of disk and RAID configurations.
Windows 2008 R2 Hyper-V
Hyper-V is Microsoft’s hypervisor-based virtualization technology that is integrated
into all Windows Server 2008 x64 operating systems. As a virtualization solution,
Hyper-V enables users to take maximum advantage of the server hardware by providing the capability to run multiple operating systems, as virtual machines (VMs), on a single physical server.
Microsoft Hyper-V is virtualization software that allows you to consolidate your servers by running, as virtual machines, several instances of similar and dissimilar operating systems on one physical machine. This cost-effective, highly scalable VM platform offers advanced resource management capabilities. Hyper-V minimizes total cost of ownership (TCO) for your environment by:
• Increasing resource utilization
• Decreasing the number of servers and all associated costs
• Maximizing server manageability
For more details, see the following websites:
• For Microsoft Hyper-V, visit the Microsoft technical library at http://www.microsoft.com/windowsserver2008/en/us/hyperv.aspx
• For Microsoft's Exchange 2010 system requirements for hardware virtualization, visit http://technet.microsoft.com/en-us/library/aa996719.aspx
EMC Replication Enabler for Exchange Server 2010
Overview
The EMC Replication Enabler for Microsoft Exchange Server 2010 (REE) is a free software plug-in that provides HA and DR for Microsoft Exchange databases. The enabler leverages the Microsoft-provided API for synchronous replication of Exchange databases. The enabler integrates directly into the Exchange 2010 DAG, detects failover notifications from the Exchange Server, and automatically handles the failover of databases between Mailbox servers. REE integrates EMC RecoverPoint and RecoverPoint/SE synchronous remote replication and EMC MirrorView/Synchronous (MirrorView/S) replication with the Exchange Server 2010 DAG architecture.
REE components

The three Replication Enabler plug-in components are:
• Synchronous replication framework
• MirrorView/Synchronous plug-in
• RecoverPoint plug-in
The enabler installs all three components and invokes the appropriate replication software plug-in, depending on which replication software you have installed: MirrorView/Synchronous or RecoverPoint.
REE benefits

REE benefits customers by:
• Allowing them to leverage their existing SAN and storage infrastructure.
• Helping them to replace Microsoft's native application-based DAG replication with RecoverPoint® synchronous remote replication or CLARiiON array-based MirrorView/S remote replication. This provides a data center-wide data replication solution for all business-critical systems, not just for individual applications such as Exchange.
• Keeping replication traffic off of the IP network.
• Helping the network administrator to reseed corrupted or lost databases.
• Simplifying configurations and storage requirements (one database copy per site).
• Allowing administrators to take advantage of local array replication for fast backup and restore.
Table 1 summarizes the benefits of REE in comparison with Exchange 2010 native
DAG features.
Table 1. REE benefits compared to native DAG features

Comparison Points | Native Database Availability Group (DAG) | Replication Enabler for Exchange 2010 (REE)
Managed from within Exchange DAG functionality | Yes | Yes
Leverages host-based replication | Yes | No
Allows local shared clustering | No | Yes
Integrates with off-host replication | No | Yes
Can perform synchronous replication | No | Yes
Low-impact storage-based compression | No | Yes
Ability to share replication across apps | No | Yes
Integrates with hardware-based snapshot/clone products | No | Yes
For more details about EMC Replication Enabler for Exchange 2010, visit
www.emc.com.
EMC Replication Manager
Overview
EMC Replication Manager automates and simplifies the management of disk-based replicas. It orchestrates critical business applications, middleware, and underlying EMC replication technologies to create and manage application-level replicas for operational recovery, backup, restore, development, simulation, and repurposing.
EMC Replication Manager also helps customers safeguard their business-critical
applications such as Microsoft Exchange Server 2010, using either point-in-time
disk-based replicas or continuous data protection sets that you can restore to any
significant point in time that falls within the protection window.
At the same time, Replication Manager is deeply integrated with the Microsoft
Exchange Server 2010 application. Replicas (snaps or clones) are created by
coordinating with Microsoft Volume Shadow Copy Service (VSS) to ensure a
consistent copy of active Exchange databases with minimal impact to the production
Exchange environment.
How Replication Manager works with Exchange 2010
Replication Manager leverages the VSS functionality provided by Microsoft to facilitate the creation of application-integrated snapshot backups. Specifically, Replication Manager provides support for Exchange 2010 snapshots using VSS. Replication Manager supports Exchange 2010 in standalone or DAG environments, including support for third-party DAG mode with EMC Replication Enabler. VSS provides the framework for creating point-in-time transportable snapshots of Exchange 2010 data. The three VSS components are the:
• Requestor—The VSS requestor is typically a backup application. It requests the shadow copy set. Replication Manager is a VSS requestor.
• Writer—The VSS writer is the application-specific logic needed in the snapshot creation and restore/recovery process. Exchange 2010 provides the VSS writer.
• Provider—The VSS provider is third-party hardware control software that actually creates the shadow copy.
The VSS coordinates these components' functions, as shown in Figure 1.
Figure 1. Volume Shadow Copy Service components
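Because Replication Manager acts as the VSS requestor, a useful sanity check before running a backup job is to confirm that the Exchange VSS writer is registered and healthy on the Mailbox server. A minimal sketch using the standard Windows vssadmin utility (in Exchange 2010 the writer typically appears as "Microsoft Exchange Writer"):

    # List all registered VSS writers; the Exchange writer should report a
    # state of "Stable" with no last error before a backup is attempted.
    vssadmin list writers

    # From PowerShell, show just the Exchange writer and the lines after it.
    vssadmin list writers | Select-String -Context 0,4 "Microsoft Exchange Writer"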
Replication Manager leverages Microsoft VSS to provide point-in-time recovery and roll-forward recovery using Copy and Full backup modes for Exchange 2010. Both modes back up the databases and transaction logs, but only Full mode truncates the logs after a successful backup. Since these snapshots are transportable, you can also use them for repurposing. For example, if your server attaches to a SAN, you can mask the shadow copy from the production server and unmask it to another server, which can reuse it for backup or mailbox-level recovery.
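After a successful backup, Exchange records backup timestamps on each database. A minimal sketch for inspecting them from the Exchange Management Shell (SnapshotLastFullBackup indicates whether the last full backup was VSS-based):

    # Show the most recent full and copy backup times for each mailbox database.
    Get-MailboxDatabase -Status |
        Format-List Name,LastFullBackup,LastCopyBackup,SnapshotLastFullBackup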
On EMC Unified Storage arrays, Replication Manager can take advantage of both clone and snapshot functionality. When you create a replica using clone functionality, you make an exact copy of the data onto a separate logical unit (LUN) or disk. When you create a snap, the data is stored as a copy on first write to disk-based cache memory. Snaps store only the information from changed tracks, so they use a minimum of save device space on the storage array. You can use snapshot replicas effectively for short-term data storage and working copies of the data.
Using RM with VSS technology

Using Replication Manager integrated with VSS technology in the backup and DR design:
• Improves the recovery time objective (RTO) for Exchange database recovery. Recovery from a VSS integrated replica using Replication Manager is faster than a restore from tape or even a restore from a disk-based backup.
• Improves the recovery point objective (RPO) for Exchange database recovery. You can make multiple VSS integrated replicas using Replication Manager during a 24-hour period. For example, creating a replica every four hours limits data loss to at most four hours (the time between replicas) if a full database and log volume restore is required. A partial restore of just the database volume allows the Exchange transaction logs to replay and bring the database up to the current point in time.
• Helps meet backup-to-tape windows. You can mount VSS integrated replicas directly to a backup server using Replication Manager. This eases the complexity of meeting backup windows for large Exchange databases across the LAN or SAN.
EMC Unified Storage
EMC CLARiiON family overview

The EMC CLARiiON family of networked storage systems brings high performance to the mid-tier market with a wide range of storage solutions, all based on the proven CLARiiON architecture, now in its eighth generation. CLARiiON provides multiple tiers of storage (EFDs, FC, and SATA) in a single storage system. This significantly reduces acquisition and management costs by allowing users to manage multiple storage tiers with a single management interface.
The CX4 series CLARiiON systems with UltraFlex™ technology deliver storage systems that you can easily customize by populating the I/O slots with either FC or iSCSI I/O modules. Products with multiple back ends, such as the CX4-240, CX4-480, and CX4-960, can support disks operating at both 2 Gb/s and 4 Gb/s simultaneously.
CLARiiON storage systems address a wide range of storage requirements by
providing flexible levels of capacity, functionality, and performance. The CX4-120
supports up to 120 drives and connectivity for up to 128 HA hosts. The CX4-240
storage system expands the family, supporting up to 256 HA hosts and up to 240
drives. The CX4-480 further expands the CX4 family by supporting 256 HA hosts
and 480 drives. The high-end CX4-960 adds even more capability, supporting up to
512 HA hosts and up to 960 drives. Table 2 summarizes the basic features for the
CLARiiON CX4 storage system.
Table 2. CLARiiON CX4 storage systems features

Feature                          CX4-120   CX4-240    CX4-480   CX4-960
Maximum disks                    120       240        480       960
Storage processors (SP)          2         2          2         2
Physical memory per SP           3 GB      4 GB       8 GB      16 GB
Max write cache                  600 MB    1.264 GB   4.5 GB    10.764 GB
Max initiators per system        256       512        512       1024
High-availability hosts          128       256        256       512
Minimum form factor size         6U        6U         6U        9U
Maximum standard LUNs            1024      1024       4096      4096
SnapView™ snapshots and clones   Yes       Yes        Yes       Yes
SAN Copy™                        Yes       Yes        Yes       Yes
MirrorView/S and MirrorView/A    Yes       Yes        Yes       Yes
RecoverPoint                     Yes       Yes        Yes       Yes
Why use CLARiiON with Microsoft Hyper-V?
CLARiiON and Hyper-V work well together. Some of the reasons CLARiiON is an
ideal fit for Hyper-V in the midrange storage market include:
• CLARiiON storage systems provide several flexible levels of models with FC
  and iSCSI interfaces. This allows the user to make the optimal choice of a
  storage system based on capacity, performance, and cost.

• CLARiiON storage systems can scale quickly to manage anticipated data
  growth, especially as the storage needed for VMs increases on Microsoft
  Hyper-V Server.

• CLARiiON Virtual Provisioning™ (thin provisioning) improves storage
  capacity utilization and simplifies storage management by presenting a VM
  with sufficient capacity for an extended period of time. With CLARiiON
  Virtual Provisioning, customers can provision less storage up front,
  reducing or deferring initial capacity requirements. Furthermore, customers
  can save on acquisition and energy costs by running their systems at higher
  utilization rates and adding capacity as needed without disrupting
  applications.

• CLARiiON storage can be shared across multiple Microsoft Hyper-V Servers,
  allowing storage consolidation to provide efficient use of storage
  resources. This storage consolidation is valuable for clustering and quick
  migration.

• Running VM applications and data on CLARiiON storage systems, as opposed to
  internal server storage, enhances performance and therefore maximizes the
  functionality, reliability, and efficiency of Microsoft Hyper-V Server.

• The Unisphere™ Manager suite provides web-based centralized control of
  global disk space, availability, security, quality of service, and
  replication for VMs provisioned by the CLARiiON storage system.

• The redundant architecture of the CLARiiON storage system provides no
  single point of failure, thereby reducing application downtime and
  minimizing business impact for storage upgrades.

• The CLARiiON storage system's modular architecture allows a mixture of
  EFDs, FC, and SATA drives.
EMC MirrorView
Overview
EMC MirrorView provides highly available data storage across campus, across the
state, or across the globe. By maintaining synchronous or asynchronous data
mirroring between Dell/EMC CX™ arrays, MirrorView helps ensure data availability
for important business functions. MirrorView is an array-based application,
minimizing the impact to your server while maximizing data uptime. MirrorView also
integrates with EMC SnapView point-in-time snapshot software. Together,
MirrorView and SnapView provide a unique solution for online data availability and
disaster recovery.
You configure and manage MirrorView from within EMC’s Unisphere management
software. In a MirrorView/Synchronous (MirrorView/S) configuration, a server writes
to the source array, which records the data and synchronously writes the same data
to the target EMC array. The array sends back an acknowledgement to the server
once it writes the data to both the source and target arrays, ensuring a complete
transaction record on both arrays.
Flexible choices for deploying MirrorView
MirrorView enables long-distance remote mirroring through the same FC switch that
you use for your hosts. Depending on the distance between your two sites, you can
use several options for FC-based mirroring for distances up to 60 km: Short Wave
GBICs, Long Wave GBICs, Optical Link Extenders, or Dense Wave Division
Multiplexers (DWDM). DWDM extends MirrorView over FC synchronous or
asynchronous disaster restart capabilities up to 320 km (200 miles). DWDM enables
you to multiplex MirrorView sessions with other EMC arrays over a single redundant
FC path. In addition to the high throughput and low delays enabled by FC, this
configuration can reduce connectivity costs.
CLARiiON MirrorView/S configuration with REE
By creating a synchronous mirror between EMC arrays, MirrorView/S maintains an
exact byte-for-byte copy of your production data in a second location. You can use
the mirrored copy for failover, for online restore from backup, and for running
backups against a SnapView snapshot of the remote mirror. MirrorView/S helps
minimize exposure to internal issues and external disaster situations, and is
designed to provide fast recovery time if a disaster does strike.
MirrorView/S protects data throughout the entire mirroring process. Fracture logs
track changes and provide a source for restoring modifications to source data if the
source array loses contact with the target array during a failure. When the target
array becomes available, MirrorView/S captures the pending writes in the fracture
log and writes them to the target array, restoring its consistent state.
MirrorView/S also maintains a write-intent log to guard against the unlikely event of a source array
issue. Upon repair of the source array, MirrorView/S accesses the write intent log to
make any changes that were in process between the two arrays during the failure, to
the source data. Next, a partial re-sync with the target array takes place to obtain a
consistent state between the source and target arrays.
MirrorView/S offers a feature called Consistency Groups, which helps ensure
consistent remote copies of data from one or more applications for disaster recovery
purposes. Inter-related LUNs remain in sync and are recoverable in the event of an
outage at the primary site. All LUNs in the Consistency Group must reside in the
same array.
EMC SnapView
SnapView is a storage system-based software application that allows you to create a
copy of a LUN by using either clones or snapshots. A clone is an actual copy of a
LUN and takes time to create, depending on the size of the source LUN. A snapshot
is a virtual point-in-time copy of a LUN and takes only seconds to create.
SnapView has the following important benefits:
• It allows full access to a point-in-time copy of your production data with
  modest impact on performance and without modifying the actual production
  data.

• For decision support or revision testing, it provides a coherent, readable
  and writable copy of real production data.

• For backup, it practically eliminates the time that production data spends
  offline or in hot backup mode, and it offloads the backup overhead from the
  production server to another server.

• It provides a consistent replica across a set of LUNs. You can do this by
  performing a consistent fracture, which is a fracture of more than one
  clone at the same time, or a fracture that you create when starting a
  session in consistent mode.

• It provides instantaneous data recovery if the source LUN becomes corrupt.
  You can perform a recovery operation on a clone by initiating a reverse
  synchronization, and on a snapshot session by initiating a rollback
  operation.
Depending on your application needs, you can create clones, snapshots, or
snapshots of clones.
Brocade network load balancing

Overview

This section describes the network load balancers used in this solution:

• The Brocade ServerIron ADX
• Brocade ServerIron ADX with Microsoft Exchange Server 2010

Brocade ServerIron ADX
With the introduction of Exchange Server 2010, hardware network load balancing is becoming a requirement rather than just a "nice to have."

In this solution, we deployed the Brocade ServerIron ADX 1000 Application Delivery Controller/Load Balancer. This solution uses the Brocade® ServerIron ADX to ensure affinity and to load balance the traffic going to the server farm. The Brocade ServerIron ADX Series provides application load balancing/availability, affinity, performance, and security capabilities. The Brocade ServerIron ADX uses SSL proxy to decrypt incoming traffic, apply CSW rules, and re-encrypt traffic back to the server farm.
For more information on the need and requirements for network load balancing,
refer to the following article on Microsoft TechNet:
http://technet.microsoft.com/en-us/library/ff625247.aspx
Application availability
The application availability features of the Brocade ServerIron ADX include:
• Server and application health checks: Continuously monitors server and
  application health to ensure availability.

• Server load balancing: Efficiently routes end-user and web services
  requests to the best available server, depending on the load-balancing
  scheme.
Application affinity options
The application affinity options of the Brocade ServerIron ADX include:
• SSL Proxy: Secure Socket Layer (SSL) Proxy is the most secure configuration
  option, allowing for end-to-end SSL encryption. SSL Proxy allows the Brocade
ServerIron ADX to decrypt HTTPS traffic, run complex HTTP Content
Switching Rules (CSW rules), re-encrypt the traffic, and forward it to the
appropriate server.
The CSW feature makes sure that existing user sessions are forwarded to the
same server to which the session was initially connected. Affinity is handled by
the CSW rules, which look at the cookie and determine whether it is a new
cookie from a new session or an existing cookie generated by the Brocade
ServerIron ADX. The new cookie is stripped from the packet and replaced with
a load-balancer cookie to which a server ID is attached. The server ID ensures
that all traffic from that session forwards to the same server.
• Source IP Port Persistence: Source IP Port Persistence provides a persistent
hashing mechanism for virtual server ports, which evenly distributes hash
assignments and enables a client to always be redirected to the same real
server. Source IP Port Persistence, which functions at Layer 4, can be applied
to non-HTTP traffic for which cookies are not part of the protocol specification
and other HTTP traffic as long as no other persistence mechanism is applied
on the same port.
Application performance
The application performance features include:
• Server load balancing: The Brocade ServerIron ADX balances the traffic load
  between the real servers based on a predictor used for optimal resource
  utilization, maximum throughput, and minimum response time.

• Server health monitoring: The Brocade ServerIron ADX performs health checks
  on the real servers to ensure that traffic does not forward to a real
  server that has failed or is non-responsive.
Application security
The application security features include:
• End-user access control: Provides Access Control Lists (ACLs) to protect
  the client-to-server traffic from worms and intruders that attack
  vulnerable open server ports not being used by the application.

• SYN attack protection: Provides protection to back-end servers from SYN
  attacks, which can exhaust back-end server resources by opening a vast
  number of partial TCP connections.
Brocade ServerIron ADX with Microsoft Exchange Server 2010
Figure 2 shows the solution architecture used for the Brocade ServerIron ADX with
Microsoft Exchange Server 2010.
[Figure 2 diagram: Outlook clients, Brocade L2 switches, Brocade ServerIron ADX, Active Directory, CAS 1-3, and MBX 1-4]

Figure 2. Brocade ServerIron ADX load balancer with Exchange Server 2010
Brocade ServerIron ADX
One of the unique features of the Brocade ADX 1000 is the ability to scale the
capacity of the device as your infrastructure grows. The Brocade ADX 1000 Series
includes four models of varying processor and port capacity, all based on the full
platform hardware and operating software, including:
• ADX 1008-1: 1 application core and 8 x 1 GbE ports
• ADX 1016-2: 2 application cores and 16 x 1 GbE ports
• ADX 1016-4: 4 application cores and 16 x 1 GbE ports
• ADX 1216-4: 4 application cores, 16 x 1 GbE ports, and 2 x 10 GbE ports

Depending on the model selected, a specific number of application cores, interface ports, hardware acceleration features, and software capabilities are enabled. You can quickly unlock the remaining untapped capacity by applying simple license upgrade key codes.
Administrators benefit from the flexibility of selecting the features and performance
they need to meet current demand, and investing in performance upgrades as
requirements increase in the future. This allows their infrastructure to grow with their
business needs.
Features
The Brocade ServerIron ADX provides the following features:

• Because growth in client access to applications makes precise capacity
  planning difficult, the Brocade ServerIron ADX 1000 Series can be upgraded
  from entry level to high end with simple license activation.

• Upgrade options include performance upgrades by processor unlocking,
  interface port unlocking, SSL hardware acceleration, and premium software
  features.

• The Brocade ServerIron ADX 1000 is the only 1U application delivery
  controller with 10 Gigabit Ethernet (GbE) capability, leading the industry
  with 9 gigabits per second (Gbps) of Layer 4-7 throughput, which is more
  than twice that of any competitor in its class.

• Hardware acceleration boosts SSL traffic flows up to 28,672 SSL
  transactions per second.
For more information on this and other ADC platforms from Brocade, visit
http://www.brocade.com/products-solutions/products/application-delivery/serveriron-adx-series/index.page.
Brocade SAN and LAN/WAN network infrastructure
Overview

This section describes the following Brocade products:

• Brocade 300 SAN switch
• Brocade 825 Dual Port 8G FC HBA
• Brocade FastIron GS Ethernet switch
• Brocade NetIron routers

Brocade 300 SAN switch
The Brocade 300 SAN switch provides storage connectivity in this solution. The
Brocade 300 is an 8 Gb/s FC switch that provides an affordable entry-level solution
for small SANs or for the edge of larger SANs.
The Brocade 300 features a non-blocking architecture with as many as 24 ports
concurrently active at 8 Gb/s (full duplex) with no over-subscription—providing an
overall bandwidth of 192 Gb/s. It combines auto-sensing 1, 2, 4, and 8 Gbps FC
throughputs with features that greatly enhance fabric operation. In addition,
enhanced Brocade ISL Trunking enables a single logical high-speed trunk capable of
up to 64 Gbps of throughput.
The Brocade 300 is easy to deploy, manage, and integrate into both new and
existing IT environments. With capabilities such as Ports On Demand scalability from
8 to 16 or 24 ports, the Brocade 300 enables organizations to grow their storage
networks when necessary, in a non-disruptive manner. In addition, organizations can
initially deploy 4 Gb/s SFPs and upgrade to 8 Gbps SFP+ when desired.
For more information, refer to:

• Brocade HBAs:
  http://www.brocade.com/products-solutions/products/server-connectivity/product-details/index.page

• Brocade HBAs offered by EMC:
  http://www.emc.com/products/emc-select/host-bus-adapters-converged-network-adapters-switches.htm

Brocade 825 Dual Port 8G FC HBA
In this solution, the Brocade 825 Dual Port 8G HBA provides redundant server connectivity to the SAN. The Brocade® 815 and 825 8 Gb/s FC HBAs are a new class of server connectivity products with unmatched hardware capabilities and
class of server connectivity products with unmatched hardware capabilities and
unique software features. This new class of HBA is designed to help IT organizations
deploy and manage end-to-end SAN services across next-generation data centers.
While legacy HBA providers focus on simple connectivity, the Brocade 8 Gbps HBAs
leverage five generations of Application-Specific Integrated Circuit (ASIC) design.
The current ASIC is an evolution of the industry-leading Brocade FC switching ASIC.
It leverages the same technology and features that make Brocade the market leader
in storage networking, including frame-based trunking for additional performance and
availability, and Virtual Channels for Quality of Service (QoS) and isolation. This
enables organizations to extend essential Brocade Advanced Fabric Services into
the server. The Brocade 825 Dual Port 8G FC HBA offers the following advanced
• Centralizes management across the data center by leveraging Brocade Data
  Center Fabric Manager (DCFM)

• Provides fabric-based boot LUN discovery for simplified SAN boot
  configuration for diskless server deployments

• Maximizes bus throughput with an FC to PCIe Gen2 (x8) bus interface with
  intelligent lane negotiation

• Delivers unprecedented performance with I/O transfer rates of up to 500,000
  IOPS per port and 1,000,000 IOPS per dual-port adapter

• Provides throughput of up to 1,600 MB/s per port full duplex (800 MB/s per
  port on 4 Gb/s models)

• Supports NPIV with up to 255 virtual ports

• Extends advanced fabric services such as QoS to the application level
For more information on Brocade server connectivity options, refer to:
http://www.brocade.com/products-solutions/products/server-connectivity/product-details/index.page
Brocade network infrastructure

Brocade FastIron Ethernet switch
The Brocade FastIron access switch series provides enterprise organizations with high-performance, flexible, and feature-rich solutions for building secure, converged 10/100/1000 Mb/s and 10 GbE networks. Upgradeable with 10-Gigabit Ethernet, PoE, and stacking technology, the FastIron Ethernet switch series gives enterprises the cost and operational benefits of a "pay-as-you-grow" architecture.
Brocade NetIron routers

The Brocade NetIron MLX Series of advanced switching routers provides industry-leading 10 GbE and 1 GbE port density, wire-speed performance, and a rich set of
IPv4, IPv6, MPLS, VPLS, and Multi-VRF capabilities as well as advanced Layer 2
switching capabilities. These switching routers address the diverse needs of
environments such as data centers, large enterprises, government networks,
education networks, High-Performance Computing (HPC) networks, and Internet
Service Providers (ISPs).
The NetIron MLX Series includes the four-slot NetIron MLX-4, eight-slot NetIron
MLX-8, 16-slot NetIron MLX-16, and the 32-slot NetIron MLX-32. The series offers
industry-leading port capacity and density with up to 256 10 Gigabit Ethernet (GbE),
1536 1 GbE, 64 OC-192, or 256 OC-48 ports in a single system.
For more information on Brocade network offerings, visit:
http://www.brocade.com/dotcom/products-solutions/products/ethernet-switches-routers/enterprise-mobility/index.page
Dell PowerEdge R910 Servers

Dell PowerEdge R910 server

The Dell™ PowerEdge™ R910 is a high-performance four-socket 4U rack server that
features built-in reliability and scalability for mission-critical applications and includes
the following:
• Built-in reliability features at the CPU, memory, hardware, and hypervisor
  levels

• Integrated systems management, Lifecycle Controller, and embedded
  diagnostics to help maximize uptime

• An Internal Dual SD Module providing superior hypervisor redundancy
Dell focuses on delivering a processor, memory, and I/O combination to allow its
customers to get the most out of virtualization. The servers are designed with
features that enable customers to accelerate virtualization deployment, to integrate
their products easily, and to require very little maintenance. Customers can run more
workloads on the Dell PowerEdge R910 server with Intel 7500 processors,
embedded hypervisors, and balanced memory architectures.
Other key Dell technologies enabling robust virtualization
The Dell PowerEdge R910 provides reliability, incorporating features such as Intel® advanced reliability, availability, and serviceability (RAS) capabilities; redundant power supplies; remote iDRAC6 connectivity; and embedded diagnostics. The Internal Dual SD Module provides failover at the hypervisor level, a reliability feature designed with direct input from Dell customers.
Fail-safe virtualization

Dell introduced an embedded hypervisor to speed the deployment and operation of virtualization, and goes a step further by offering dual embedded hypervisors, giving customers the added security of a redundant hypervisor right on board. The larger number of VMs made possible by the increased, balanced memory gives customers additional confidence.
Embedded system management
The new Lifecycle Controller 1.3 ships pre-loaded on Dell's enterprise servers with drivers for server provisioning. Dell Lifecycle Controller technology delivers "Instant On" integrated manageability through a single access point. Rather than requiring customers to install software from CDs, Dell provides all of the virtualization functionality preloaded on each server. (Most competitors require customers to use optical media, DVD or CD, to install this functionality.)
Hardware used in this solution
Storage
Table 3 provides information about the EMC hardware used in this solution.
Table 3. EMC Unified Storage CX4-480 (integrated CLARiiON CX4-480)

Item                                               Description
Storage                                            2 CLARiiON CX4-480s (1 per site)
Storage connectivity to host (FC or iSCSI)         FC
Storage cache                                      16 GB
Number of storage controllers                      2 per storage frame
Number of storage ports available/used             8 (4 per SP) available per storage frame; 4 used (2 per SP)
Maximum bandwidth of storage connectivity to host  8 x 4 Gb/s (4 ports used in this solution)
Total number of disks tested in solution           80 disks per storage array (160 for both sites)
Maximum number of spindles the storage can host    480 disks in a single storage array
Servers
Table 4 provides information about the Dell R910 servers used in this solution.
Table 4. Dell PowerEdge System

Item        Description
Processors  Dell PowerEdge R910 with four eight-core Intel Xeon X7560 @ 2.26 GHz processors per server (2 servers per site)
Memory      192 GB RAM (16 x 8 GB DIMMs) per server
HBA         Brocade 825 dual-port 8 Gb/s HBA (1 per server)
Network     4 x 1 Gb/s Ethernet ports

LAN and SAN switches
Table 5 provides information about the LAN and SAN switches used in this solution.
Table 5. LAN and SAN switches

Item                      Description
Load balancer             Brocade ServerIron ADX 1000 running ASM12100c for Active-Standby and ASR12100c for Active-Active (1 per site)
Core Ethernet router      Brocade MLX-8 core router (1 per site)
Ethernet switch           Brocade FastIron GS Series Layer 2 switch (1 per site)
Fibre Channel switch      Brocade 300 SAN switch (Brocade FOS v6.3) (1 per site)
Host bus adapters (HBAs)  Brocade 825 dual-port 8 Gb HBA (Brocade HBA driver v2.2) (1 per host)
Software used in this solution
Table 6 provides information about software used in this solution.
Table 6. Software used in this solution

Item                                            Description
Hypervisor host servers                         Windows 2008 R2 Hyper-V Enterprise
Exchange Server VMs                             Windows 2008 R2 Enterprise
Exchange Server 2010 Mailbox role               Enterprise Edition RU3 or later
Exchange Server 2010 Hub Transport and
Client Access Server roles                      Standard Edition RU3 or later
DAG in third-party replication mode             EMC Replication Enabler for Exchange 2010 v1.0
Exchange backup software                        EMC Replication Manager v5.3
Multipath and I/O balancing                     EMC PowerPath 5.3 SP1
Antivirus software (on Hub Transport servers)   ForeFront® Protection 2010 for Exchange Server
Chapter 3: Solution Design
Overview
This chapter describes the methodology used to design this solution. Customer
requirements and design assumptions were used as the key decision points during
the Exchange 2010 environment implementation phase. This section contains the
following topics:
• Solution design methodology
• Key solution requirements
• Exchange 2010 design architecture with EMC synchronous replication by REE
• Database availability group (DAG) design with REE
• Identifying Hyper-V host and VM requirements
Solution design methodology
Every Exchange implementation requires careful planning and starts with identifying
key requirements. The range of these requirements can vary from customer to
customer and will depend on many different variables. To help you with the design
approach, we have listed some of the key requirements that are necessary to build
this solution.
Key solution requirements
Table 7 summarizes the key customer requirements and assumptions upon which this Exchange solution was designed, built, and validated.

Table 7. Key solution requirements

20,000 users across two datacenters:
• 10,000 active Exchange users per site
• 70-100% user concurrency during normal operation (global call center
  operation)
• 100 km distance between datacenters

Exchange user profile:
• 500 MB per mailbox
• 150 messages sent/received/user/day (0.15 IOPS)
• 100% Outlook MAPI clients (cached mode)
Virtualization and consolidation:
• Consolidation and virtualization of all servers and applications in the
  datacenter

HA requirements:
• Tolerance for an individual server failure
• Tolerance for a single site failure without service degradation
• In-site local RTO and RPO requirements: 5-minute RTO, zero RPO (no data
  loss)
• Site failure RTO and RPO: 1-hour RTO, zero RPO (no data loss)

Storage requirements:
• The existing SAN infrastructure must be leveraged; the customer already
  uses EMC Unified Storage (CX4-480) with 450 GB 15k rpm drives
Exchange 2010 design architecture with EMC synchronous replication by REE
This section describes how EMC Replication Enabler works with Exchange 2010,
storage, and Microsoft Windows failover clustering.
The native Exchange DAG configuration is built on top of Windows failover clusters. You will not see any Exchange 2010 cluster components (cluster storage groups, disk resources) in Failover Cluster Manager because Exchange 2010 does not operate as a clustered application, and the cluster resource management model is no longer used for high availability. The Exchange cluster resource DLL and all of the cluster resources it provided no longer exist. Instead, Exchange 2010 uses its own internal high-availability model. Although some components of Windows failover clustering are still used in this model, Exchange 2010 now manages them exclusively. With an Exchange 2010 DAG in third-party replication mode, the Windows failover cluster is again effectively used to allow shared storage.
Figure 3 shows how REE can manage an Exchange 2010 DAG in third-party
replication mode. Figure 4 and Figure 5 show disk resources and DAG node
members. You can configure any member of the DAG to have access to disk
resources in order to process the switchover or failover operations.
Figure 3. Exchange 2010 DAG with EMC Replication Enabler
Figure 4. Shared storage with Exchange 2010 DAG in third-party replication mode
shown in Windows Failover Cluster Manager
Figure 5. Exchange 2010 DAG members with third-party replication mode shown in
Windows Failover Cluster Manager
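Although REE provides the replication itself, the DAG is created with the native Exchange 2010 cmdlets. A minimal sketch follows; the witness server name and directory are illustrative assumptions, not values taken from this solution:

# Third-party replication mode must be selected when the DAG is created;
# once enabled, it cannot be reverted to native continuous replication.
New-DatabaseAvailabilityGroup -Name DAG1 -ThirdPartyReplicationMode Enabled `
    -WitnessServer HUBCAS1A -WitnessDirectory C:\DAG1FSW

# Add the site A Mailbox server VMs as members; the site B members
# (and all of DAG2) are added the same way.
"MBX1A","MBX2A","MBX3A","MBX4A" | ForEach-Object {
    Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer $_
}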
Database availability group (DAG) design with REE
Planning your DAG deployment
From a planning perspective, you should try to use the same best practices for DAG
deployments in the native configurations as you would use with REE. You should
always minimize the number of DAGs deployed. Consider going with more than one
DAG if you:
• Deploy more than 16 Mailbox servers
• Have active mailbox users in multiple sites (active/active site
  configuration)
• Require separate DAG-level administrative boundaries
• Have Mailbox servers in separate domains (a DAG is domain bound)
In order to address the server HA requirement, two copies of every mailbox
database must be available. One copy is active and the other is passive. Both the
active and passive copies of each mailbox database must be hosted on separate
Mailbox servers. If the Mailbox server that hosts the active database fails, or an
active mailbox database becomes inaccessible, the passive mailbox database
becomes active on a different server in the same site.
Before going into more details about the DAG design, you need to understand the
concept of active and passive copies when deploying a solution in third-party
replication mode, specifically with REE.
Understanding the concept of active and passive copies with REE
When deploying a DAG with EMC Replication Enabler, the function of active and passive copies differs from their function with native DAG replication. With REE, there is one primary shared storage (source images) system per Exchange database. This primary storage synchronously replicates to the secondary storage (target images) at the remote site. Source images are shared between multiple Mailbox servers within the same site, and the target images are shared between the Mailbox servers at the remote site. These shared images work similarly to a single copy cluster within the same site. When a switchover/failover is initiated, a best effort is made to move the database to one of the Mailbox servers present at the local site. An attempt to move the databases to the remote Mailbox servers happens only when REE is unable to move the database to a server within the local site. If there are multiple Mailbox servers within the same site, the "activation preference" setting determines which Mailbox server (within the site) is attempted first.

Figure 6 shows an activation preference order example. Based on the activation preference order in this example, database D1-DB1 on MBX1A will first be activated on MBX3A. If MBX3A is unavailable, REE will try to activate a shared image ("copy") on MBX1P.
Figure 6. Database activation preference order settings in the Exchange
Management console
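In a native DAG, this preference order is managed with the standard Exchange cmdlets, and REE reads the same setting when choosing the in-site server to try first. A small sketch using the names from the example above:

# Make MBX3A the second activation preference for database D1-DB1, so it is
# tried first when the server holding preference 1 (MBX1A) is unavailable.
Set-MailboxDatabaseCopy -Identity "D1-DB1\MBX3A" -ActivationPreference 2

# Review the resulting activation preference order for the database.
Get-MailboxDatabase -Identity "D1-DB1" | Format-List Name,ActivationPreference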
Site resiliency considerations and DAG design with REE
When planning a site resiliency strategy for Exchange mailbox databases, you need
to consider several conditions to create an effective DAG design. Exchange 2010
site resiliency requires an additional passive copy of every mailbox database to be
hosted by a Mailbox server at a remote site. When a site failure occurs, the passive
mailbox databases at the remote site become active, providing client access to the
mailbox content.
The DAG in third-party replication mode with REE accommodates this site-resiliency requirement by enabling remote images ("copies") on the secondary
storage at the remote site. This process is very similar to performing a site failover
with a native DAG and can be accomplished by using integrated PowerShell cmdlets
that are available after the REE installation.
In a solution where active mailbox users exist in more than one physical location
(datacenters or sites), it is not recommended to deploy a single DAG. With active
mailboxes in more than one site, you cannot use Datacenter Activation Coordination (DAC) mode to prevent unwanted database activations in an alternate site. Deploying two DAGs prevents this scenario. More details about DAC will be
provided later in the paper. Because the customer requirement specifies active
mailbox users in both sites (datacenters), we deploy two separate DAGs with each
DAG spanning two sites.
Figure 7 shows how this solution distributes two eight-member DAGs with four active
and four passive nodes each. DAG1 contains four active Mailbox servers in the primary site (Site A) and four passive Mailbox servers in the secondary site (Site B). DAG2 has a similar configuration, with four active Mailbox servers in Site B and four passive Mailbox servers in Site A.
Figure 7. Two DAGs deployment model (physical)
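Because each of the two DAGs has all of its active mailboxes in a single site, DAC mode can be enabled per DAG, as sketched below; whether to enable DAC in an REE configuration should be confirmed against EMC's guidance:

# Enable Datacenter Activation Coordination on each DAG to guard against
# unwanted (split-brain) database activations after a site failure.
Set-DatabaseAvailabilityGroup -Identity DAG1 -DatacenterActivationMode DagOnly
Set-DatabaseAvailabilityGroup -Identity DAG2 -DatacenterActivationMode DagOnly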
Leveraging virtualization for Exchange deployment
An Exchange 2010 Mailbox server role running on a physical server can be a
member of only one DAG. However, running the Mailbox server in a VM enables a
physical server to host multiple Mailbox server VMs with mailboxes that belong to
different DAGs. The Mailbox server in each VM can be a member of a different DAG.
By leveraging virtualization, we are able to consolidate the server roles and benefit
from the flexibility to design a very effective mailbox resiliency solution that without
virtualization would require several more servers in the physical configuration.
Figure 8 illustrates how this solution benefits from virtualizing the Exchange Mailbox
roles by distributing the databases across two Hyper-V hosts in each datacenter.
Eight Mailbox servers within the datacenter are virtualized across two physical
Hyper-V hosts. Each Hyper-V server hosts four mailbox VMs, where two of these
VMs are members of a different DAG. This configuration provides an effective
solution to satisfy the customer requirement for HA and site resiliency.
Figure 8. Two DAGs deployment model (virtualization)
Creating an effective HA DAG design with REE
An individual physical server is a single point of failure. Though the servers have
many redundant components, if the physical server experiences a catastrophic
failure, all VMs that run on that server will also fail. By placing on each host Mailbox servers whose active and passive databases are replicated from another server at the local site, alongside Mailbox servers whose passive databases are replicated from the remote site, none of your users will lose Exchange access if an individual server failure occurs.
Figure 9 details the HA configuration for Exchange VMs in this solution. It shows how
Exchange mailbox VMs will have access to a source-shared image (copy) to be
available if the VM or Hyper-V host fails or requires maintenance. For example, it
shows that if Exchange VM MBX1A on Hyper-V Host 1 fails, REE automatically re-enables the source image (copy) access for Exchange VM MBX3A on Hyper-V Host 2. The same is true in the opposite direction: if Exchange VM MBX3A on
Hyper-V Host 2 fails, the source image (copy) access automatically becomes
activated on MBX1A on Hyper-V Host 1.
The same process also occurs for a physical server failure. If the physical Hyper-V
(Host 1) fails, both VMs from Hyper-V Host 2 (MBX3A and MBX4A) will take over the
service for users from the VMs on Hyper-V Host 1 (MBX1A and MBX2A).
If there is a site failure, the target images (copies) are activated using REE
PowerShell cmdlets. Hyper-V hosts and VMs from the secondary site assume
control of the services from the VM at the primary site. This design allows us to make
optimum use of the physical hardware and provides for mailbox resiliency and HA
within each physical site.
Figure 9. Exchange 2010 HA and site resiliency with Hyper-V and REE
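The REE installation adds its own PowerShell cmdlets for activating target images during a site failover; their names are product-specific and are not reproduced here. For a planned in-site switchover, the native cmdlet applies, sketched below (verify its use in an REE configuration against the REE documentation):

# Move the active database from MBX1A to MBX3A within the local site,
# for example ahead of planned maintenance on Hyper-V Host 1.
Move-ActiveMailboxDatabase -Identity "D1-DB1" -ActivateOnServer MBX3A -Confirm:$false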
Virtualized Exchange roles
Exchange 2010 requires three Exchange Server roles in order to deliver functional
messaging and collaboration services:
• Mailbox server role
• Client Access server role
• Hub Transport server role
So far, this white paper has focused on the Mailbox server role because it is the core
Exchange role. We have discussed hosting four VMs running the Mailbox server role
on the same Hyper-V host where two of the four Mailbox servers are members of a
different DAG. By adding a VM that runs the HUB/CAS roles, we now have a fully
functioning Exchange environment. By adding an Active Directory Domain
Controller, we have essentially created a virtualized datacenter.
Figure 10 shows a fully virtualized datacenter environment with all Exchange roles
distributed among physical Hyper-V hosts.
Figure 10. Fully virtualized Exchange 2010 environment
By adding all SAN, storage, and networking components, we have designed a
solution that satisfies all customer requirements. Figure 11 displays the final
reference architecture for our virtualized datacenter environment with Exchange
2010.
Figure 11. Virtualized datacenter environment with Exchange 2010 reference architecture
Identifying Hyper-V host and VM requirements
Overview
To ensure that all Exchange Mailbox server roles are functioning correctly, we identify
the CPU and memory resources that each Exchange role requires to run efficiently.
We also need to determine the amount of resources that the physical Hyper-V host
requires to support the design. This section outlines the process of identifying these
requirements for our virtual datacenter infrastructure.
To become more familiar with planning and deploying Microsoft Exchange 2010, we recommend that you review the following documents found at Microsoft's TechNet Library website:

• Understanding Processor Configurations and Exchange Performance
• Understanding Memory Configurations and Exchange Performance
• Understanding the Mailbox Database Cache

Important!
Using megacycle capacity to determine the number of mailbox users that an Exchange mailbox server can support is not an exact science. Several factors can contribute to unexpected megacycle results in both test and production environments. You should only use megacycles to approximate the number of mailbox users that an Exchange mailbox server can support. Remember that it is always better to be more conservative rather than overly aggressive during the capacity-planning portion of the design process.

Identifying Exchange user profile type requirements
To start identifying the CPU and memory requirements for this solution, we need to know the Exchange user profile type and the host server specifications (CPU and memory). Customer requirements direct us to size for a 150-message profile. The 150-message mailbox user action profile has the following characteristics and requirements.
Table 8. 150-message mailbox profile requirements

Parameter                                                Value
Messages sent per 8-hour day                             30
Messages received per 8-hour day                         120
Required megacycles per active mailbox                   3
Required megacycles per passive mailbox                  N/A
Mailbox replication megacycles factor per database copy  N/A
Required database cache memory per mailbox (MB)          9
The DAG design specifies four Exchange Mailbox server role VMs and three
HUB/CAS VMs per physical Hyper-V host with 32 logical processors. In this design,
two Exchange Mailbox server role VMs host active and passive images (copies) at the local site, and the other two VMs host passive images (copies) from the remote site.
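The 9 MB database cache figure from Table 8 offers a rough cross-check on the 32 GB of memory assigned to each Mailbox VM later in this chapter (a sketch only; the Microsoft mailbox calculator models memory requirements in more detail):

# Database cache needed by one Mailbox VM for its normal-operation user count.
$cachePerMailboxMB = 9       # required database cache per mailbox (Table 8)
$usersPerVM        = 2500    # users per Mailbox VM during normal operation

$cacheGB = ($cachePerMailboxMB * $usersPerVM) / 1024
"Database cache for {0} mailboxes: about {1:N0} GB" -f $usersPerVM, $cacheGB   # ~22 GB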
Identifying CPU requirements for Mailbox server failure contingency

CPU and memory requirements must support server and site failure contingencies. The hosts and VMs must be sized to handle the following conditions:

• A single Exchange mailbox VM failure/maintenance: If a single Exchange
  mailbox VM fails, make sure that the other VM in the same site is able to
  service users without degradation and without failover to the other site.

• A single Hyper-V host failure/maintenance: If a single server fails, make
  sure that there is no service degradation or failover to another site. The
  failed server can be a Mailbox server, a Client Access server, or a Hub
  Transport server.

• A site failure: Accommodating a site failure requires that the surviving
  site support the entire workload of the failed site.
CPU and memory requirement calculations start with the Mailbox server role. Based
on the customer requirements, each datacenter will service its 10,000 users at 70 percent concurrency during normal operation. In the event of a datacenter failure, the surviving site must support another 10,000 users, also at 70 percent mailbox access concurrency. Additionally, we need to provision sufficient megacycles so that Mailbox server CPU utilization does not exceed 80 percent. Table 9 lists the Mailbox server megacycle requirement calculations.
Table 9. Mailbox CPU requirements

Parameter                                                    Value
Active megacycles                                            10,000 mailboxes x 3 megacycles per mailbox = 30,000
Passive megacycles                                           0 (no passive mailboxes in this solution)
Replication megacycles                                       0 (no replication megacycles in this solution)
Maximum mailbox access concurrency                           0.7 (70%)
Total required megacycles per site during normal operation   30,000 x 0.7 = 21,000
Total required megacycles per site to support site failover  21,000 x 2 = 42,000
Total environment required Mailbox server megacycles         42,000 / 0.80 = 52,500

Calculate the CPU capacity of the Hyper-V root server
Based on Microsoft's guidelines and server vendor specifications, we can now determine the CPU and memory requirements for each VM role. For more details on planning additional Mailbox server CPU capacity, see the topic entitled "Mailbox Server Processor Capacity" at http://technet.microsoft.com/en-us/library/ee712771.aspx.

Based on the SPECint_rate2006 results (http://www.spec.org/), a Dell PowerEdge R910 server with four eight-core Intel Xeon X7560 @ 2.26 GHz (2,260 MHz) processors achieves a rating of 759, or 23.72 per core (known in the
formula as the new platform per core value). The baseline per core value is 18.75. To determine the total megacycles of the Dell R910 server, use the following formula:

((New platform per core value) x (Hertz per core of the baseline platform)) / (Baseline per core value) = Adjusted megacycles per core

23.72 x 3,333 / 18.75 = 4,216 adjusted megacycles per core, or 134,926 megacycles per server (4,216 x 32 cores)
Now we need to adjust for hypervisor overhead, estimated at about 10 percent for typical deployments. The following formula identifies the total number of megacycles per server core available to the VMs after the hypervisor overhead adjustment:

((Adjusted megacycles per core) x (1 - Hypervisor overhead)) x (Max CPU utilization target) = Hypervisor-adjusted megacycles per core

(4,216 x (1 - 0.1)) x 0.8 = 3,794 x 0.8 = 3,035 megacycles per core, or 97,120 megacycles per server (3,035 x 32 cores)

These calculations determined that a single Dell PowerEdge R910 server with 32 cores and 97,120 usable megacycles can handle the entire Mailbox server requirement of 52,500 megacycles. This provides extra megacycle capacity to deploy other VMs on the same physical server.
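A short PowerShell sketch reproduces the capacity arithmetic above so the figures can be re-checked for a different server model; the inputs are the SPECint_rate2006 values quoted in this section:

$perCoreScore   = 23.72   # new platform per core value
$baselineScore  = 18.75   # baseline per core value
$baselineMHz    = 3333    # megahertz per core of the baseline platform
$cores          = 32
$overhead       = 0.10    # estimated hypervisor overhead
$maxUtilization = 0.80    # target CPU utilization ceiling

$adjustedPerCore = $perCoreScore * $baselineMHz / $baselineScore           # ~4,216
$usablePerCore   = $adjustedPerCore * (1 - $overhead) * $maxUtilization    # ~3,035
$usablePerServer = $usablePerCore * $cores                                 # ~97,120

"{0:N0} usable megacycles per server vs. {1:N0} required" -f $usablePerServer, 52500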
Determine the CPU capacity of the VMs

Based on the version of Windows Server 2008 R2 with Hyper-V used in this solution, we can allocate a maximum of four virtual CPUs per VM. The following formula determines how many users with a 150-message profile (3 megacycles per user) a Mailbox server VM with four vCPUs can support:

((Hypervisor-adjusted megacycles per core) x (Number of vCPUs per VM)) / (Megacycles per user profile) = Users per VM

(3,035 x 4) / 3 = 4,047 users per Exchange Mailbox VM with four vCPUs, which comfortably covers the 3,500 concurrent users (5,000 mailboxes at 70 percent concurrency) that each VM must handle during a switchover.
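The same arithmetic for a single four-vCPU Mailbox VM, as a sketch:

$vCPUs             = 4
$megacyclesPerUser = 3       # 150-message profile
$usablePerCore     = 3035    # from the root server calculation above

$usersPerVM = ($usablePerCore * $vCPUs) / $megacyclesPerUser    # ~4,047
"A four-vCPU Mailbox VM supports about {0:N0} concurrent users" -f $usersPerVM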
Table 10 provides a summary of the VM CPU and memory configurations in this solution.
Table 10. VM CPU and memory configurations summary

VM role                                                      vCPUs per VM  Memory per VM
Mailbox (to support 5,000 users during switchover/failover   4             32 GB
at 70% concurrency)
HUB/CAS (to support 20,000 users for site failover)          4             8 GB
Domain controller (DC)                                       2             4 GB
Chapter 4: Storage Design
Overview
This chapter contains the following topics:

• Methodology for sizing Exchange storage
• Exchange storage design using EMC building block methodology
• Storage design summary
Methodology for sizing Exchange storage
Sizing and configuring storage for use with Microsoft Exchange Server 2010 can be
a complicated process, driven by many factors, which vary from organization to
organization. Properly configured Exchange storage, combined with a properly sized
server and network infrastructure, can guarantee smooth Exchange operations and a
positive user experience. This solution uses the building-block approach to simplify
sizing and configuration of storage used with Microsoft Exchange Server 2010. This approach helps Exchange storage administrators deploy large amounts of Exchange storage on EMC CLARiiON more easily and efficiently.
Make sure to consult with your server and storage vendors for additional guidelines during the design and deployment phases. You can download the sizing tools from the following locations:
• For access to the Exchange 2010 Mailbox Server Role Requirements Calculator
  from Microsoft, visit the Microsoft Exchange Team Blog at
  http://msexchangeteam.com/archive/2009/11/09/453117.aspx

• For the EMC Exchange 2010 storage calculator, visit Powerlink at
  http://powerlink.emc.com.
Exchange storage design using EMC building block methodology
What is a building block?

A building block represents the amount of resources required to support a specific number of Exchange 2010 users on a single VM. You derive the required resources from a specific user profile type, mailbox size, and disk requirement. Using the building-block approach removes the guesswork and simplifies the implementation of Exchange VMs.
After designing the initial Exchange Mailbox server VM building block, you can easily
reproduce it to support all of the users in your organization that share similar user
profile characteristics. By using this approach, Exchange administrators can create
their own building blocks based on their company’s Exchange environment
requirements. This approach is very helpful when a customer expects future growth,
as it makes Exchange environment additions much easier and straightforward. You
can apply this methodology when deploying Exchange in either a physical or a virtual
environment.
EMC's best practices involving the building-block approach for Exchange Server design have been successful in many customer implementations. To create a building block for a Mailbox server role VM, you need to:

1. Identify user requirements
2. Identify Exchange VM requirements
3. Identify and calculate storage requirements based on both IOPS and capacity
4. Finalize the Exchange VM building block

The following sections detail these four steps.
Step 1. Identify user requirements
Exchange administrators can create building blocks based on their organization's user requirements. To obtain user profile information for your existing Exchange environment, use the Microsoft Exchange Server Profile Analyzer tool.
Table 11 summarizes key customer requirements for this solution. This information is
required to perform the Exchange storage design.
Table 11. Exchange environment requirements

Parameter                                                    Value
Target message profile (messages sent/received/user/day)    150 messages (0.15 IOPS)
Target average message size                                 75 KB
Outlook mode                                                 100% MAPI
Mailbox size                                                 500 MB
Total number of mailboxes in the environment                 20,000
Total number of users per site                               10,000
Number of users per Mailbox server during normal operation   2,500
Number of users per Mailbox server during switchover         5,000
Number of sites                                              2
Deleted items retention window ("dumpster")                  7 days
Logs protection buffer                                       3 days
Database read/write ratio                                    3:2 in mailbox resiliency configurations
Step 2. Identify Exchange VM requirements
Based on the DAG design with REE and the allocation of Exchange Mailbox role
VMs per Hyper-V host, each Exchange mailbox VM will host 2,500 users during
normal conditions. If a Mailbox server fails or enters maintenance, its storage LUNs are redirected to another local or remote target Mailbox server. This means that, from the storage perspective, we need to size each Mailbox server for 2,500 users. However, we need to size CPU and memory based on 5,000 users at 70 percent concurrency (per customer requirements) in order to handle switchover conditions.
Earlier in the design process, we identified the CPU and memory requirements for
the Exchange Mailbox role VMs. Table 12 summarizes these requirements.
Table 12.
Step 3. Identify
and calculate
storage
requirements
based on IOPS
and capacity
Exchange VM requirements
VM Role
vCPUs per VM
Memory
Mailbox (to support 5,000 users during
switchover/failover at 70% concurrency)
4
32 GB
Based on our DAG design for this solution, we have eight Exchange Mailbox server role VMs at each site. During normal conditions, only four mailbox VMs are active and the other four are passive. We size the storage to provide the necessary IOPS and capacity to sustain a site failover, in which case the four passive VMs become active. In this condition, all 20,000 users require access to the storage resources at one site. When using the building-block approach for storage design, EMC recommends that you identify the smallest "common denominator," which in our case is 2,500 users. This is our storage building block.
As a best practice for calculating Exchange storage, always calculate both IOPS and
capacity requirements. These procedures show the basic calculations for a targeted
user profile. The customer for which this solution was designed requires that their
current infrastructure, which includes an EMC Unified CX4-480 array with 450 GB
15k rpm drives, be incorporated into the design.
Based on the storage design building-block methodology, each Exchange mailbox VM with 2,500 users requires one building block. Eight Exchange mailbox VMs per
site (and per storage array) require eight building blocks:
• One VM with 2,500 users requires one building block
• Eight VMs per site with 20,000 users require eight building blocks
Calculating the mailbox I/O requirements
It is important to understand the amount of database IOPS consumed by each
mailbox user because it is one of the key transactional I/O metrics needed to
adequately size your storage. Pure sequential I/O operations are not factored in the
IOPS per Mailbox server calculation because storage subsystems can handle
sequential I/O much more efficiently than random I/O. These operations include
BDM, log transactional I/O, and log replication I/O.
This step describes how to calculate the total IOPS required to support all mailbox
users using the building block methodology.
Note: To determine the IOPS for different message profiles, refer to the table in the Microsoft TechNet topic Understanding Database and Log Performance Factors.

Total transactional IOPS = 0.15 IOPS per mailbox x 2,500 mailboxes x 1.20 = 450 IOPS per Exchange VM

Note: The factor of 1.20 adds 20 percent overhead for log IOPS and BDM IOPS.
In the above procedure, we determined the IOPS requirement to support one building block of 2,500 users. To support 20,000 users during a site failover, we need eight of these building blocks. Therefore, the total IOPS required for 20,000 users per site is 3,600 (450 IOPS × 8 building blocks).
To calculate the number of disks required to provide the desired user performance
based on the IOPS, use the following formula.
Disk requirements based on IOPS =
((450 × 0.6) + 4 × (450 × 0.4)) / 155 = 990 / 155 = 6.4 (round up to 7 disks)
Supporting 3,600 IOPS for 20,000 users using 15k rpm drives in a RAID 5 configuration therefore requires 56 disks (7 disks × 8 building blocks).
Notes:
• IOPS capacity per disk can vary depending on the disk type, storage array model, and available cache capacity. Contact your EMC representative to obtain the latest guidelines for disk types and speeds.
• The database read/write ratio in a mailbox resiliency configuration is 3:2.
• The RAID write penalty is: RAID 1/0 = 2, RAID 5 = 4, RAID 6 = 6.
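For readers who want to reuse this math for other user counts or profiles, the following is a minimal Python sketch of the IOPS-side sizing above. It uses only the figures quoted in this section; as noted, the 155 IOPS per 15k rpm drive figure is an assumption to confirm with your EMC representative.

    import math

    def iops_disks(users=2500, iops_per_user=0.15, overhead=0.20,
                   read_ratio=0.6, raid_write_penalty=4, iops_per_disk=155):
        # Transactional IOPS per building block, including the 20% overhead
        total_iops = users * iops_per_user * (1 + overhead)                # 450
        # Back-end IOPS: reads pass through, writes pay the RAID 5 penalty
        backend = (total_iops * read_ratio
                   + raid_write_penalty * total_iops * (1 - read_ratio))   # 990
        return total_iops, math.ceil(backend / iops_per_disk)              # (450, 7)

    iops, disks = iops_disks()
    print(round(iops), disks, disks * 8)   # 450 IOPS, 7 disks, 56 disks per site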
The IOPS calculations concluded that:
• To support 450 IOPS for 2,500 users per server, we require seven disks.
• To support 3,600 IOPS for 20,000 users per site, we require 56 disks.
Calculating the capacity requirements

After performing the IOPS calculations, the next step is to calculate the capacity requirements. Per customer requirements, the solution must support a 150-message user profile at 0.15 IOPS and provide each user with a 500 MB mailbox. The following steps calculate how much storage is required to support these requirements. The log and database capacity calculations follow Microsoft's guidelines; for additional detail, review the Mailbox Server Storage Design section on Microsoft's website.
Calculating the mailbox size on disk

It is important to determine what the mailbox size on disk will be before attempting to determine your total storage requirements. A full mailbox with a 500 MB quota requires more than 500 MB of disk space because we have to account for the:
• Prohibit send/receive limit
• Number of messages the user sends/receives per day
• Deleted item retention window (with or without calendar version logging and single item recovery enabled)
• Average database daily variations per mailbox
You can use the Microsoft Mailbox Server Role Requirements Calculator to calculate this number, but we have provided the raw calculations below if you prefer to do them manually.
Use the following calculations to determine the mailbox size on disk for this solution, based on a 500 MB mailbox quota:

Mailbox Size On Disk = (Mailbox Limit) + (Whitespace) + (Dumpster)
Whitespace = 150 messages/day × 75/1024 MB = 11 MB
Dumpster for a 500 MB mailbox = (150 messages/day × 75/1024 MB × 7 days) + (500 MB × 0.012) + (500 MB × 0.058) = 112 MB
Table 13 summarizes the mailbox size on disk requirements.

Table 13. Mailbox size on disk summary

Mailbox quota:              500 MB
Whitespace:                 11 MB
Dumpster size (one week):   112 MB
Total size on disk:         623 MB
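The same calculation can be sketched in a few lines of Python. This is a minimal sketch using the 75 KB average message size and the quota-based dumpster factors (1.2 percent and 5.8 percent) from the formulas above.

    def mailbox_size_on_disk(quota_mb=500, msgs_per_day=150, avg_msg_kb=75,
                             retention_days=7):
        whitespace = msgs_per_day * avg_msg_kb / 1024        # ~11 MB
        dumpster = (whitespace * retention_days              # deleted item retention
                    + quota_mb * 0.012                       # quota-based factor (1.2%)
                    + quota_mb * 0.058)                      # quota-based factor (5.8%)
        return quota_mb + whitespace + dumpster              # ~623 MB

    print(round(mailbox_size_on_disk()))   # 623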
Database Capacity Requirements

To determine the actual database size, use the following formula:

Database Size = <Number of Mailboxes> × <Mailbox Size on Disk> × <Database Overhead Growth Factor>

Based on the number of mailboxes, the actual size of the mailboxes, and the database growth overhead factor of 20 percent, the database size requirement is 1,869 GB, as shown in Table 14.

Table 14. Database capacity requirements summary

Mailboxes per server:         2,500
Database size requirements:   1,869 GB (2,500 users × 623 MB + 20%)
Database LUN Sizes

To determine the total database LUN size requirements for 2,500 users, use the following formula:

Database LUN size = ((Database Size) + (Content Index Catalog)) / (1 - Free Space Percentage Requirement)

Note: The content index is 10% of the database size.
Table 15 details the calculation summary for determining the required database LUN size.

Table 15. Database LUN size requirements

Database size requirements:   1,869 GB
Databases per server:         4
Content index size:           19 GB (1,869 × 0.1)
Total database LUN size:      2,360 GB ((1,869 + 19) / .8)
LUN size per database:        590 GB (2,360 / 4)
Supporting 20,000 users per site (during failover) requires eight building blocks of 2,360 GB, for a total of 18,880 GB of database storage capacity.
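A sketch of the database capacity chain (mailbox size on disk to database size to LUN size) follows. The 19 GB content index figure is taken directly from Table 15, and the MB-to-GB conversion uses the factor of 1,000 implied by the document's numbers.

    def database_luns(users=2500, mailbox_mb=623, growth=0.20,
                      content_index_gb=19, free_space=0.20, dbs_per_server=4):
        db_size_gb = users * mailbox_mb * (1 + growth) / 1000               # 1,869 GB
        lun_total_gb = (db_size_gb + content_index_gb) / (1 - free_space)   # 2,360 GB
        return db_size_gb, lun_total_gb, lun_total_gb / dbs_per_server      # 590 GB each

    print(tuple(round(x) for x in database_luns()))   # (1869, 2360, 590)
    print(round(database_luns()[1] * 8))              # 18880 GB of database LUNs per site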
Log Capacity Requirements

To ensure that the Mailbox server does not sustain an outage because of space allocation issues, size the transaction log LUNs to accommodate all of the logs generated between backup sets. If the architecture leverages the mailbox resiliency and single item recovery features as the backup architecture, allocate enough log capacity to handle three times the daily log generation rate, in case a failed copy is not repaired for three days. (Any failed copy prevents log truncation from occurring.)
A mailbox with a 150-message per day profile generates 30 transaction logs per day on average, so 2,500 users generate 75,000 transaction logs each day. With four databases per server and 625 users per database, each database generates 18,750 logs per day. One percent of the mailboxes are moved per week, on one day during the weekend. This solution leverages EMC Replication Manager for Exchange backups and is sized to tolerate three days without log truncation.
Calculate the log capacity using the following formula:
Log capacity = ((Daily Log size) x (Truncation failure days)) + (Mailbox Move %)
Table 16. Log size requirements

Number of databases per server:        4
Logs per database:                     18,750
Log file size:                         1 MB
Daily log size per database:           19 GB
Move mailbox size per database (1%):   4 GB (25 × 623 MB / 4)
Truncation failure tolerance:          57 GB (3 days × 19 GB)
Log size requirements per database:    61 GB (4 GB + 57 GB)
Determining the required log LUN size

Earlier, we determined that 244 GB of log capacity is required to support 2,500 users per server. Now we need to calculate the total log LUN size requirements using the following formula. Table 17 details the summary of the calculations.
Log LUN size = (Log capacity) / (1 - Free Space Percentage Requirement)
Table 17. Log LUN size requirements

Log size requirements per server:        244 GB (61 GB × 4)
Number of log LUNs per server:           4
Free LUN space percentage requirement:   20%
Total log LUN size:                      305 GB (244 GB / .8)
Log LUN size per database:               80 GB (305 GB / 4, rounded up)
To support 20,000 users per site (during failover), we require eight building blocks of 2,360 GB for databases and 305 GB for logs, for a total of 21,320 GB of storage capacity.
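The log sizing chain can be sketched the same way. In this minimal sketch, intermediate values are rounded up to match Tables 16 and 17, and the per-LUN value of 80 GB reflects the rounding used in this solution.

    import math

    def log_sizing(users=2500, logs_per_mailbox=30, log_mb=1, dbs_per_server=4,
                   mailbox_mb=623, move_pct=0.01, truncation_days=3,
                   free_space=0.20):
        daily_gb = math.ceil(users * logs_per_mailbox * log_mb / 1024
                             / dbs_per_server)                       # 19 GB per database
        move_gb = math.ceil(users * move_pct * mailbox_mb / 1024
                            / dbs_per_server)                        # 4 GB per database
        per_db_gb = daily_gb * truncation_days + move_gb             # 61 GB
        total_lun_gb = per_db_gb * dbs_per_server / (1 - free_space) # 305 GB
        return per_db_gb, total_lun_gb

    print(log_sizing())   # (61, 305.0); provisioned as four 80 GB log LUNs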
Building block disk requirements for 2,500 users

We have determined that the 2,500-mailbox building block for provisioning 500 MB mailboxes requires a storage capacity of 2,360 GB for database LUNs and 305 GB for log LUNs, a total of 2,665 GB. Using 450 GB drives in RAID 5 (4+1) configurations on a CLARiiON storage system provides the required capacity to support both the database and log requirements. Each RAID 5 (4+1) set provides approximately 1,608 GB of usable capacity. To calculate the number of required disks, use the following formula:

Disks Required = (Database <or log> capacity requirements) / (RAID group usable capacity) × (Number of disks in a RAID set)

Note: Round the calculations up to a number divisible by the number of spindles in a RAID set.

Total disks (in a RAID 5 storage pool) = 2,665 / 1,608 × 5 = 8.3 disks (round up to 10)
To support 20,000 users on a single CX4-480, we require eight building blocks of 10 disks each, for a total of 80 disks.
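Because disks are added in whole RAID 5 (4+1) sets, the capacity-side disk count rounds up to a multiple of five. A minimal sketch, assuming the roughly 1,608 GB usable capacity per 4+1 set of 450 GB drives quoted above:

    import math

    def disks_for_capacity(required_gb=2665, usable_per_set_gb=1608,
                           disks_per_set=5):
        raid_sets = math.ceil(required_gb / usable_per_set_gb)   # 2 sets
        return raid_sets * disks_per_set                         # 10 disks

    print(disks_for_capacity())        # 10 disks per 2,500-user building block
    print(disks_for_capacity() * 8)    # 80 disks for 20,000 users per site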
Step 4. Finalize the Exchange VM building block

The calculation steps performed earlier indicate that the capacity requirements supersede the IOPS requirements. The IOPS calculations show that only seven disks are required to support 2,500 users, but the capacity calculations show that 10 disks are required. Table 18 summarizes the configuration for a building block of 2,500 users based on the 150-message profile with a 500 MB mailbox quota.
Table 18. Building block summary

Users per Exchange VM:   2,500
Disks per VM:            10 (for DBs and logs)
vCPUs per VM:            4
Memory per VM:           32 GB
Storage design summary

Total storage requirements summary

Table 19 summarizes the total required mailbox storage capacity for a single site in this solution.

Table 19. Storage capacity requirements summary

Database total space requirements per site:   18,880 GB (2,360 × 8)
Log total space requirements per site:        2,440 GB (305 × 8)
Total capacity required per site:             21,320 GB
Our capacity calculations demonstrate that the building block methodology for calculating Exchange Mailbox storage is straightforward and effective. Table 20 provides a disk requirements summary based on the 2,500-user building block. We determined that the capacity requirements supersede the IOPS requirements: we need 80 disks for capacity versus 56 disks for IOPS to support 20,000 users in a site failover condition.
Adding more users with the same profile involves allocating the same amount of
storage, memory, and CPU resources to achieve the necessary performance. This
flexible design offers customers the capability of keeping pace with an increasing
user population. You can easily add users that share the same profile to the
environment, as shown in Table 20.
Table 20. Disk requirements summary

Supporting…     Requires…
2,500 users     10 disks (for DBs and logs)
5,000 users     20 disks (for DBs and logs)
7,500 users     30 disks (for DBs and logs)
10,000 users    40 disks (for DBs and logs)
20,000 users    80 disks (for DBs and logs)
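Table 20 is simply the building block repeated. A one-line hypothetical helper makes the scaling rule explicit:

    import math

    def disks_for_users(users, block_users=2500, block_disks=10):
        # Whole building blocks only: round the user count up to the block size
        return math.ceil(users / block_users) * block_disks

    print([disks_for_users(u) for u in (2500, 5000, 7500, 10000, 20000)])
    # [10, 20, 30, 40, 80]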
LUN configurations

Now that we have determined the number of spindles required to support the IOPS and capacity requirements of the building block, we need to determine the best way to provision LUNs on the array for that building block.
Based on the DAG design and storage requirements identified earlier, we have four mailbox databases per Mailbox server. To achieve the best restore granularity, EMC recommends that you place each database and its corresponding logs on their own LUNs. With VSS-type backup solutions, this shortens the restore window and provides better performance. Based on these best practices, we separated each database and its logs onto individual LUNs and configured four database LUNs and four log LUNs from a single RAID 5 storage pool.
In this solution, we created eight separate RAID 5 storage pools with 10 disks per pool, one for each 2,500-user building block, on each CX4-480 storage array. We configured four storage pools to support 10,000 active users from the primary site and the other four pools to support 10,000 users from the secondary site in case of a site failover. With this configuration, you have more flexibility when adding new users and provisioning new storage when necessary.
From each 10-disk RAID 5 storage pool containing 450 GB drives, we created four
590 GB database LUNs and four 80 GB log LUNs.
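As a quick sanity check, the eight LUNs carved from one 10-disk pool fit within the pool's usable capacity (two 4+1 RAID 5 sets of roughly 1,608 GB each, per the figures above):

    db_luns = 4 * [590]     # four database LUNs, GB
    log_luns = 4 * [80]     # four log LUNs, GB
    provisioned_gb = sum(db_luns) + sum(log_luns)    # 2,680 GB
    usable_gb = 2 * 1608                             # 3,216 GB
    print(provisioned_gb, usable_gb, provisioned_gb <= usable_gb)   # 2680 3216 True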
Mailbox server configuration summary

Table 21 provides a summary of a single Exchange Mailbox server configuration in this solution.
Table 21. Exchange server configuration for this solution

Component   Mailbox servers
vCPU        4
Memory      32 GB
Disks       9 LUNs (1 OS, 4 DB, 4 logs):
            • 1 VHD for the OS
            • 8 disks in physical pass-through mode (4 DB, 4 logs)
Figure 12 summarizes the configuration of Exchange databases for each Mailbox server VM and Hyper-V host.

Figure 12. Database configuration with DAG in third-party replication mode
Chapter 5: LAN and SAN Architecture
Introduction

Overview

The network in this solution comprises end-to-end Brocade components, including:
• Server access to the storage, enabled through Brocade HBAs connecting to Brocade SAN switches
• Ethernet access, enabled via internal 1 Gb Ethernet NICs in the Dell servers connecting to Brocade access layer FastIron Series Ethernet switches, which connected to Brocade NetIron MLX Series switches in the network core
• Load balancing of the Exchange environment, provided by Brocade ServerIron ADX 1000 Series Application Delivery Controllers
Topics

This chapter describes the solution's network architecture and contains the following topics:

Topic                                                 See Page
SAN and LAN/WAN configuration                         56
Network load balancing                                59
Best practices planning for network load balancing    61
SAN and LAN/WAN configuration
SAN configuration
The SAN configuration uses a redundant fabric configuration. Each fabric consists of
a Brocade 300 8Gb SAN switch. We installed the Brocade 825 dual-port 8 Gb HBAs
in each server (hosts 1-4) and then connected each host server through each of the
Brocade 300 switches to the EMC CX4-480 array.
We implemented soft zoning (also known as initiator-based zoning) on the FC fabric
for SAN traffic isolation and configured zone groups to isolate storage I/O between
each initiator port on the host and two target ports, where each target port is located
on a different storage processor for redundancy reasons.
For the storage replication between arrays, the solution uses a direct FC connection
between the two fabrics.
LAN configuration
The overall test network consists of:
• Brocade ServerIron ADX 1000s, which provide Ethernet traffic load balancing
• Brocade NetIron MLX platforms, which provide 10 Gb aggregation and routing
• Brocade FastIron Series switches, which provide access layer connectivity
Load balancing

In our test environment, we used the Brocade ServerIron ADX 1000s to provide hardware load balancing. The ADX 1000s were set up in an active/hot standby architecture, configured to balance the incoming load from the Loadgen servers and distribute it to the Exchange CAS server farm based on the preset affinity rules.
Network

The Brocade NetIron MLX provides 10 Gb aggregation and routing between the Exchange setup in the data center and the test load servers running Microsoft Loadgen. The Brocade FastIron Series switches provide 1 Gb access to the servers.
The load servers, running Microsoft LoadGen, were placed into a single VLAN and
were directed toward the Exchange setup. As the traffic passed to the Exchange
environment, the load balancers front-ended the Exchange server setup and
provided a virtual IP address for the load servers. The load balancers then directed
the traffic, based on the pre-configured affinity rules, to the CAS servers within the
Exchange back-end environment.
Figure 13 shows the solution’s overall network and SAN layout.
Figure 13. The solution's network zoning layout
Network load balancing

Overview

Compared to previous Exchange Server releases, some architectural changes in Exchange 2010 have made network load balancing increasingly important, for both large-scale and small-scale deployments. Microsoft's TechNet article "Understanding Load Balancing in Exchange 2010" (http://technet.microsoft.com/en-us/library/ff625247.aspx) outlines the new requirements and explains why hardware-based load balancing is important for Exchange 2010 environments. This section discusses these changes.
Exchange RPC Client Access and Address Book services

The addition of the RPC Client Access service and the Exchange Address Book service improves the user's experience during Mailbox role failovers by moving the connection endpoints for Outlook (and other MAPI clients) to the CAS role rather than the Mailbox role.

In previous Exchange versions, Outlook connected directly to the Mailbox server responsible for the data being accessed, and directory connections were either proxied through the Mailbox role or referred directly to a particular Active Directory Global Catalog (GC). Now that these connections are handled by the CAS role, you must load balance the Outlook connections (both internal and external) across the CAS servers in the deployment. In Exchange Server 2010, this is known as a CAS array.
Network traffic

The solution uses the Brocade ServerIron ADX to load balance traffic to the server farm using the round robin, least local connections, and dynamic weighted predictors. For more information about these methods, see the Brocade ServerIron ADX product documentation.
Since the solution uses source IP port persistence on the Virtual IP (VIP), the round
robin predictor for the VIP is automatically enabled and used to evenly distribute
hash assignments. After you enter the port <port> persist-hash command,
the predictor round-robin command automatically appears under the virtual server
entry in the configuration file.
Active/hot standby redundancy uses a dedicated link between the Brocade
ServerIron ADX switches, which transmits flow-state information, configuration
synchronization information, and the redundancy heartbeat.
The ports used by Exchange services are HTTP (port 80) and HTTPS (port 443). HTTP and HTTPS are used by several Microsoft Exchange services, including:
• Outlook Web Access (OWA)
• Outlook Anywhere (OA) (MAPI tunneled over HTTPS)
• Exchange ActiveSync for mobile devices (EAS)
• Exchange Control Panel (ECP)
• Exchange Web Services (EWS)
• Autodiscover
• Offline address book distribution
Microsoft recommends using HTTPS rather than HTTP. Exchange also uses the following ports:
• RPC/MAPI (port 135)—Microsoft remote procedure call (RPC) defines a powerful technology for creating distributed client/server programs. The RPC run-time stubs and libraries manage most of the processes relating to network protocols and communication.
• TCP ports 60000 and 60001—As with other RPC applications, the RPC endpoint mapper allocates the endpoint TCP port numbers for these services when the services start. This means that you may need to configure a large range of destination ports for load balancing, without the ability to specifically target traffic for these services based on port number.

It is possible to statically map these services to specific port numbers in order to simplify load balancing. If the ports for these services are statically mapped, then the traffic for these services is restricted to port 135 (used by the RPC port mapper) and the two specific ports selected for these services. In this configuration, TCP ports 60000 and 60001 are statically mapped. You can manually change these values. Refer to Microsoft TechNet at http://technet.microsoft.com/en-us/library/ff625248.aspx for details on mapping the static ports.
Configuring Layer 2 active/hot standby redundancy

In this deployment, we configured the Brocade ServerIron ADX switches as active/hot standby. Both ServerIron ADX switches share the same VIP and configuration (except for the management IP address). In a typical hot standby configuration, one ServerIron ADX is the active device and performs all of the Layer 2 switching and Layer 4 server load balancing (SLB) switching, while the other ServerIron ADX monitors switching activities and remains in a hot standby role.
If the active ServerIron ADX becomes unavailable, the standby ServerIron ADX
immediately assumes the unavailable ServerIron ADX’s responsibilities. The failover
from the unavailable ServerIron ADX to the standby device is transparent to users. In
addition to the same Virtual IP address, both ServerIron ADX switches share a
common MAC address known to the clients. Therefore, if a failover occurs, the
clients still recognize the ServerIron ADX by the same MAC address. The active
sessions running on the clients continue, and the clients and routers do not need to perform address resolution (ARP) again for the ServerIron ADX MAC address.
In this solution, we configured the ServerIron ADX switches for active/hot standby with Microsoft Exchange: both ServerIron switches share the same VIP address and configuration (with the exception of the management IP address), but one ServerIron ADX is active and the other is in hot standby mode.
Network configuration for DAG

A DAG is the base component of the HA and site resiliency framework built into Microsoft Exchange Server 2010. A DAG is a group of up to 16 Mailbox servers that hosts a set of databases and provides automatic database-level recovery from failures that affect individual servers or databases. The DAG is invisible from a load balancer perspective.
Best practices planning for network load balancing
Overview

One of the most interesting and important aspects of the network design for Exchange 2010 is implementing load balancing, which Microsoft highly recommends for any Exchange 2010 environment containing two or more CAS servers. Understanding the key terms and best practices for load balancing is critical to the success of any project that uses this solution. This section presents concepts that you should understand if you are deploying a load balancer within your Exchange environment.
Affinity

Affinity is the process of associating a client with a specific CAS to ensure that all requests sent from that client go to the same CAS. The following load balancing methods support affinity on the Brocade ServerIron ADX:
• Load balancer (LB)-created cookie
• Secure socket layer (SSL) session ID
• Source IP
Because the incoming traffic uses SSL, you must use an SSL proxy. You can use an LB-created cookie or SSL session ID together with source IP, but you cannot use all three load-balancing methods together. You must use one of the methods because Microsoft recommends using HTTPS rather than HTTP with Exchange. Brocade recommends that you use LB-created cookies with source IP port persistence. Brocade does not recommend SSL session ID affinity, because some browsers negotiate new SSL session IDs every few minutes, thus breaking the affinity condition.

LB-created cookie
The LB-created cookie method is very reliable for tying a client session to a CAS.
The load balancer inserts a cookie into the client-server protocol that is associated
with a CAS server. The session continues to forward traffic to the same CAS server
until the session is over.
The Outlook Web Access (OWA) and Exchange Control Panel (ECP) protocols support the LB-created cookie method that runs with HTTP in Microsoft Exchange 2010, with the following limitations:
• It does not support RPC, the Exchange Address Book service, POP, or IMAP.
• The load balancer must be able to read and interpret the HTTP stream.
• When using SSL, the load balancer must decrypt the traffic to examine content.

To use this method, the client must be able to receive arbitrary cookies from the server and then include them in all future requests to the server. The following services do not support this capability:
• Exchange ActiveSync clients
• Outlook Anywhere (OA) (not supported in any version of Outlook up to and including Outlook 2010)
• Some Exchange Web Services clients

However, the OWA, ECP, and remote PowerShell protocols in Exchange 2010 do support the LB-created cookie method.
Source IP port persistence

In the source IP port persistence method, the load balancer looks at a client IP address and sends all traffic from a given source/client IP to the same CAS. Brocade recommends that you use cookie persistence for workloads that support it, and source IP persistence for workloads that require it. The source IP method has two limitations:
• Whenever the client's IP address changes, the affinity is lost. However, the user impact is acceptable as long as this occurs infrequently.
• Having many clients behind the same IP address leads to uneven distribution. Distribution of traffic among the CAS machines then depends on the number of clients that originate from a given IP address.
Two things that can cause several clients to originate from the same IP address are:
• Network Address Translators (NATs) or outgoing proxy servers (for example, Microsoft Forefront Threat Management Gateway, or TMG). In this case, the original client IP addresses are masked by the NAT or outgoing proxy server IP addresses.
• CAS-to-CAS proxy traffic. One CAS machine can proxy traffic to another CAS machine, typically between Active Directory (AD) sites, as most Exchange 2010 traffic needs to be handled by either a CAS in the same AD site as the mailbox being accessed or a CAS with the same major version as the mailbox being accessed.

Monitoring the Outlook client configuration
Since all users connect with an Outlook client from the internal network, we are primarily concerned with load balancing TCP socket-oriented traffic, and we need to ensure that this traffic maintains client IP-based persistence. Outlook initiates a connection to the RPC Client Access service and the Exchange Address Book service using the RPC endpoint mapper on port 135.
We set the RPC Client Access service to use static port 60000 and the Exchange Address Book service to use static port 60001. The latter port handles connections for both the Address Book Referral (RFR) interface and the Name Service Provider Interface (NSPI). If we do not set static ports, a random port is used, which means that we may need to configure a large range of destination ports for load balancing without the ability to specifically target traffic for these services based on port number.

Outlook also uses several HTTP-based services, including Autodiscover, Exchange Web Services (EWS), and Offline Address Book (OAB).
Chapter 6: Exchange 2010 backup with EMC Replication Manager
Introduction

Overview

Exchange 2010 allows databases of up to 2 TB. Having multiple copies of these large databases can present a challenge to many customers, as their nightly backup windows may not be able to accommodate the Exchange backup.

With Exchange 2010 configured in third-party synchronous replication mode, there is essentially only one copy of the data. Therefore, it is critical to schedule backups to run on a regular basis. EMC Replication Manager provides an effective Exchange 2010 backup solution by leveraging VSS functionality and integrating the backups with CLARiiON SnapView clones or SnapView snaps technology.
Topics

This chapter contains the following topics:

Topic                                                        See Page
Replication Manager design                                   63
Preparing your Exchange 2010 environment for backups with
Replication Manager                                          64
Rapid restore using Replication Manager                      65
Replication Manager design

RM functionality overview with Exchange 2010

Replication Manager is a robust enterprise-level application that you can use in conjunction with CLARiiON SnapView replication technologies and Microsoft Exchange 2010 to:
• Create Exchange database application sets
• Create replicas of Exchange databases protected as part of a DAG, whether it is an active or passive copy of the database, or databases in a DAG that is configured in a third-party replication mode
• Check the consistency of replicated data
• Start on-demand mount and dismount operations
• Perform point-in-time or roll-forward restores to the production databases in the event of a corruption
Recommendations for best RM performance

Some best practice considerations when designing a Replication Manager environment for Exchange 2010 include:
• Installing Replication Manager on all of the servers in a DAG with REE. This is because in a DAG with third-party replication (in this case, REE), you can only replicate active databases.
• Using clones, snaps, or a combination of the two to achieve the best possible protection and backup granularity for Exchange data.
• Separating the database and logs onto different LUNs to take advantage of Replication Manager's rapid restore capability.
• Avoiding nested mount points to prevent VSS errors.
• Using a physical mount host rather than a VM, as this reduces the time it takes to mount the volumes.
• Running consistency checks once a week. You no longer need to run consistency checks on the mounted snap copy.
• Using separate dedicated spindles for the save pool for better performance.
• Zoning the RM mount hosts' HBAs to different array ports than the production Exchange servers' HBAs for best performance.
Preparing your Exchange 2010 environment for backups with Replication Manager

RM design considerations

The Replication Manager Product Guide has the latest information on how to prepare your Exchange environment for Replication Manager. In addition to providing information about the placement of databases and logs on volumes, it discusses security settings and requirements.
There are a couple of important things to consider when planning for Replication Manager in an Exchange 2010 environment. One is that Replication Manager restores data at the LUN level, not the file level. Place the database and log files on volumes with a restore in mind. If the database and logs are on the same volume, restoring the database file also restores the logs, and logs that are not part of the replica are deleted. Similarly, if two databases share a volume, you cannot restore just one database file; you would have to restore and recover both.

Another consideration is the use of mount points. VSS has trouble importing replicas that contain nested mount points. If your logs are on a volume (such as L:) and you create a mount point on that volume called "DB_MP" and put the .edb file there, you have a nested mount point: the volume L: and the volume "DB_MP", which is on L:, would be in the same replica, and Replication Manager experiences errors from VSS when importing the replica.
There are two ways to avoid this problem. One is to create a small LUN and create the L: volume on that LUN, and then create two mount points on that volume, one for the database file and the other for the transaction logs. The other is to use mount points on volumes that are local to the server.
Backup configuration in this solution

The local replication technology used for this solution is SnapView snaps. Microsoft recommends that you host multiple full copies of Exchange databases on separate servers to provide local recoverability. However, in most environments with large mailboxes, Exchange 2010 is driven more by capacity than by performance. For this reason, using snaps provides significant space savings. In most deployments, you need to allocate an average of 20 percent of the Exchange storage capacity to snaps.
SnapView snaps are pointer-based copies of the source LUN that store just the changed tracks; hence, they use minimal space on the CLARiiON array compared to clones. In addition, by taking snaps of the data images on the target storage array, we minimize any potential performance impact on the production Exchange databases. Replication Manager also integrates well with the SnapView snap technology and provides instant restore capabilities, both point-in-time and roll-forward.
Four databases were included in every application set to minimize the number of RM backup jobs; a single mount host is required for this. In this solution, we scheduled snap jobs to run four times a day, which lowered the RPO from 24 hours to six hours. Since the databases and logs were located on separate LUNs, this also provided a roll-forward restore capability and a no-data-loss restore when the logs were intact.
Rapid restore using Replication Manager

Roll-forward recovery

The most likely recovery scenario involves restoring a single corrupt database. RM uses the latest successful VSS snapshot of a database to perform its recovery. It takes an average of three to five minutes to execute the recovery, during which RM executes the roll-forward database-only recovery and brings the database online.
Point-in-time recovery

It is also possible to execute a point-in-time recovery, in which RM restores both the database and log LUNs to the time that it created the snapshot. The amount of time required for this action is about the same, but it requires that the mirrors be resynchronized, since the logs are overwritten too.

In an Exchange 2010 DAG environment (including a DAG in third-party replication mode), you can only restore replicas to the same server on which the replica was originally created. Microsoft restricts restores of database copies to the active copy only. To perform a single database recovery using Replication Manager, perform the following steps:
1. In the Replication Manager console, right-click the replica that you want to restore back to the source and click Next.
2. Choose only the database files from the list of objects.
3. If the server is not hosting the active copy of the database, choose to activate the database before selecting the Restore checkbox on the Restore Wizard's Restore Options screen, and click Finish.
4. Once the restore is complete, go to the Exchange Management Console and resume the database copy using the REE PowerShell cmdlets.
Note: This procedure depends on the customer's specific configuration. These high-level steps are provided as a reference point.
Chapter 7: Best Practices Planning
Overview

This chapter contains the following topics:

Topic                                                  See Page
Exchange 2010 best practices                           67
Optimizing SAN best practices                          67
Exchange Mailbox server optimization for EMC storage   69
Exchange 2010 best practices

In comparison to earlier versions of Microsoft Exchange, Exchange Server 2010 makes significant improvements in the areas of I/O and storage. For example, there have been many changes to the core schema and to Exchange's Extensible Storage Engine (ESE) to reduce the I/O usage profile. Due to this I/O reduction, Exchange 2010 now supports more drive types, such as SATA and SAS disks, as well as FC and EFD drives.

For information on Exchange 2010 Mailbox server design best practices, go to http://technet.microsoft.com/en-us/library/dd346703.aspx. In addition to Microsoft's recommendations, EMC recommends the best practices described in this section when planning an EMC unified storage implementation for the best performance results with Exchange 2010.
Optimizing SAN best practices

Reliability considerations

Although SANs provide excellent storage architectures for Exchange implementations, it is important that you optimize your SAN for reliability and performance. The following best practices are important to consider when implementing Exchange in a SAN environment.

To optimize a SAN for reliability, you should:
• Configure redundant controllers and SAN switches, and use RAID.
• Use redundant HBAs connected to different fabrics.
Performance considerations

To optimize a SAN for performance:
• Install EMC PowerPath on the physical hypervisor hosts for optimal path management and maximum I/O performance. For more information on installing and configuring PowerPath, visit http://www.emc.com/products/detail/software/powerpath-multipathing.htm.
Additional considerations

• Dedicate physical spindles within your SAN to Exchange databases to isolate the Microsoft Exchange database workload from other I/O-intensive applications or workloads. This ensures the highest performance level for Exchange and simplifies troubleshooting in the event of disk-related issues.
• Make sure you plan for performance even in a failover situation. Balance LUNs across the array SPs to take advantage of the CLARiiON's performance and HA features.
• Plan so that expected peak utilization does not exceed 80 percent saturation of the system.
• Configure the storage to support the expected IOPS value that you calculated as instructed earlier in this white paper. Always size the Exchange environment for IOPS first, then capacity.

After calculating the IOPS requirements, always apply a 20 percent I/O overhead factor to your calculations to account for additional IOPS, such as logs, log replication, and BDM, that are not included in the IOPS per user profile.
You should also:
• Verify that your SAN switch can support your IOPS requirements, even in a failover situation. The SAN switch has to process each incoming I/O request and forward it out the appropriate port, which limits the amount of I/O that can be handled.
• Verify that the HBA installed in the server can support your IOPS requirements, even in a failover situation. To avoid throttling, ensure that the queue depth is set according to EMC recommendations.

For more information on this topic, see Microsoft's TechNet article "Understanding Database and Log Performance Factors" at http://technet.microsoft.com/en-us/library/ee832791.aspx.
Exchange Mailbox server optimization for EMC storage

Follow these recommendations to ensure the best possible Mailbox server performance:
• Partition alignment is no longer required when running Microsoft Windows Server 2008, as partitions are automatically aligned to a 1 MB offset.
• Exchange Server 2010 requires Windows Server 2008 or Windows Server 2008 R2.
• When formatting new NTFS volumes for Exchange databases and logs, set the allocation unit size (ALU) to 64 KB using the drop-down list in Disk Manager or through the CLI using the diskpart command (see the example after this list).
• Microsoft recommends a maximum database size of 100 GB in environments that do not use DAGs. When DAGs are used with a minimum of two database copies, the maximum database size can be up to 2 TB. Consider backup (if applicable) and restore times when calculating the database size.
• Enable BDM on large databases (greater than 500 GB).
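For reference, a 64 KB allocation unit can be set from the diskpart CLI as sketched below; the volume number and label are illustrative only and will differ in your environment:

    DISKPART> list volume
    DISKPART> select volume 3
    DISKPART> format fs=ntfs unit=64K label="ExchDB01" quick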
For more information on EMC solutions for Microsoft Exchange Server, visit EMC's website at http://www.emc.com/solutions/application-environment/microsoft/solutions-for-microsoft-exchange-unified-communications.htm.
Chapter 8: Solution Validation
Introduction

Overview

The sections in this chapter describe the approach and methodology used to validate this solution, which involves both functional and performance tests.

The performance tests include:
• Storage performance validation using Jetstress
• Database seeding performance
• Server and Exchange environment (end-to-end) performance validation using Loadgen

The functional test included a site (datacenter) failover/failback validation.
Topics

This chapter contains the following topics:

Topic                                                           See Page
Validation methodology and tools                                71
Exchange storage validation with Jetstress                      72
Database replication process for DAG in a third-party
replication mode                                                74
Environment validation with Loadgen                             75
Test 1 – Normal operating condition – peak load                 77
Test 2 – Host failure within a site                             79
Test 3 – Site failure simulation                                81
In-site database switchover with EMC Replication Enabler for
Exchange 2010                                                   83
Datacenter switchover validation                                83
Validating primary datacenter service restoration (failback)    85
Validation methodology and tools
Overview
You can use a variety of tools to measure the performance of Exchange 2010,
including Jetstress and Load Generator (LoadGen). The Windows Server 2008
operating system also includes some general performance tools including Windows
Performance Monitor. You can perform storage analysis using Unisphere.
In addition to these tools, you should analyze your current user loads to establish a
minimum server requirements baseline. Understanding how your users use the
system is one of your biggest challenges. The Exchange Server Profile Analyzer can
help provide useful data when analyzing your current user loads. After you determine
your hardware requirements, you should conduct a pilot test to make sure
performance levels are acceptable.
For more information, see Tools for Performance and Scalability Evaluation available
on Microsoft’s TechNet website.
Jetstress 2010
The best tool for validating the Exchange storage design is Jetstress. Jetstress
simulates Exchange I/O at the database level by interacting directly with the
database technology of the ESE, also known as the Jet, on which Exchange is built.
You can configure Jetstress to test the maximum I/O throughput available to your
disk subsystem within the required performance constraints of Exchange. You can
also configure it to accept a user count, profile, and I/O per second per user to
validate that the disk subsystem is capable of maintaining an acceptable
performance level with that profile. We strongly recommend that you use Jetstress to validate storage reliability and performance prior to deploying your Exchange servers into the production environment.
You can download Jetstress from Microsoft Exchange Server Jetstress 2010 (64 bit)
at http://go.microsoft.com/fwlink/?LinkId=178616. The Jetstress documentation
describes how to configure and execute an I/O validation or evaluation on your
server hardware.
While Jetstress tests the performance of the Exchange storage subsystem before you place it in the production environment, it does not test the impact of MAPI user activity on the server CPU and memory configuration. Use the Microsoft Loadgen tool for this purpose.
Loadgen
You use Exchange Load Generator (Loadgen) to perform a full end-to-end
assessment of the Exchange 2010 environment. You can use Loadgen to perform
benchmarking, pre-deployment validation, and stress testing tasks that introduce
various workload types into a test (non-production) Exchange messaging system.
This test simulates the delivery of multiple MAPI, Outlook Web Access, IMAP, POP, and SMTP client messaging requests to an Exchange server.
Important! You should use Loadgen only in a test lab configuration and in non-production Exchange environments. For more information on Loadgen, go to the Exchange Load Generator 2010 (64 bit) website at http://www.microsoft.com/downloads/details.aspx?familyid=CF464BE7-7E52-48CD-B852-CCFC915B29EF&displaylang=en.
Exchange storage validation with Jetstress
Overview
Before implementing a storage solution in a production environment, it is important to validate that the Exchange storage is sized and configured properly. This section describes the approach and methodology used to validate the storage design. The testing performed is similar to that required for Microsoft's Exchange Solution Reviewed Program (ESRP), which is designed for storage vendors like EMC to submit their Exchange solutions.

Test results listed in this white paper are provided according to ESRP guidelines for easy comprehension and comparison to other ESRP submissions published on Microsoft's website (including submissions from EMC and other storage providers) at http://technet.microsoft.com/en-us/exchange/ff182054.aspx.
Test configuration

To validate the CX4-480 performance with 20,000 Exchange users, we configured Jetstress to run against all disks in a storage array configured for Exchange. These tests ran simultaneously from all eight Exchange Mailbox servers in one site. We configured the eight Mailbox servers so that each produced a load for 2,500 users at 0.18 IOPS per user.

The following Jetstress tests helped to determine the performance of Exchange 2010 with REE (based on MirrorView/S):
• Test 1 – Baseline test for 20,000 users without MirrorView/S enabled on the CX4-480
• Test 2 – Test for 20,000 users with MirrorView/S enabled on the CX4-480 and the data being replicated to a secondary CX4-480
• Test 3 – Test for peak IOPS with 20,000 users with MirrorView/S enabled on the CX4-480 and the data being replicated to a secondary CX4-480
Jetstress test results and CX4-480 performance

Figure 14 displays the Jetstress test results for a single CX4-480 array on all disks configured for 20,000 users.

Testing against all disks on a single storage frame shows that the CX4-480 achieved 8,120 Exchange 2010 transactional IOPS across eight Exchange VMs. This is 2,288 IOPS over the designed target baseline of 5,832 IOPS for a single site.

This additional headroom provides a comfortable buffer against unexpected DAG failovers, large mailbox moves, I/O spikes, peak loads, and potential malware attacks that might otherwise take a server down.

Disk latencies were all within the acceptable parameters according to Microsoft's best practices for Exchange 2010 performance.
Figure 14. Jetstress test results for Exchange 2010 on a CLARiiON CX4-480
CX4-480 performance with Exchange 2010 Jetstress

Table 22 shows the aggregate performance across all servers, which is the sum of the I/Os and the average latency across all servers in the solution.
Table 22. Jetstress test results summary

                                        Target values    20,000 users,      20,000 users,
                                                         no MirrorView/S    with MirrorView/S
                                                         (8 servers)        (8 servers)
Database I/O
Achieved Transactional IOPS
(I/O Database Reads/sec +
I/O Database Writes/sec)                3,600 IOPS       5,259 IOPS         5,108 IOPS
                                        (at 0.18 IOPS)
I/O Database Reads/sec                  N/A              2,908              2,873
I/O Database Writes/sec                 N/A              2,351              2,235
I/O Database Reads Average
Latency (msec)                          < 20 ms          11 ms              10 ms
I/O Database Writes Average
Latency (msec)                          < 20 ms          9 ms               12 ms

Transaction Log I/O
I/O Log Writes/sec                      N/A
I/O Log Reads Average Latency
(msec)                                  < 10 ms          3 ms               6 ms
Database replication process for DAG in a third-party replication mode

Overview

With an Exchange 2010 DAG in third-party replication mode with REE, the native network-based database replication process is replaced by synchronizing primary source images (copies) from one storage array to another. Replication is performed over the FC SAN using MirrorView/S, which is configured between the two storage arrays in both sites.

MirrorView replication is block-level and LUN-based. This means that any changes made to a database occupying a source LUN are synchronized to the target mirror at the block level. Initially, the entire LUN is synchronized; all subsequent updates require only incremental re-syncs. For example, when you perform the initial synchronization of a 500 GB database that occupies a 1 TB LUN, the entire LUN is synchronized. After that, when changes are written to the database, only the changed disk tracks are synchronized to the target mirror.
When performing a replication operation with DAGs in a third-party replication mode,
you must use the REE PowerShell cmdlets to perform any action on the Exchange databases. The REE PowerShell cmdlets are very similar to the native Exchange PowerShell cmdlets. A full set of REE PowerShell cmdlets is provided in the section entitled Appendix B: REE PowerShell Cmdlets Reference on page 90.
Initial synchronization performance

Table 23 provides seeding performance information from our test environment. These results are based on the test environment configuration and can differ depending on the customer's environment. In our tests, we performed the initial synchronization over a 4 Gbps FC SAN connection between the two storage arrays. Replication was performed from Site A to Site B. The source array had 15,704 GB of storage to be replicated (based on LUN sizes).
Table 23. Initial MirrorView/S synchronization performance summary

What is synchronized:           32 LUNs (4 VMs, 8 LUNs per VM)
Size:                           15,704 GB
Average synchronization time:   17 hrs
Throughput:                     923 GB/hr
Environment validation with Loadgen
Overview
After completing the storage validation with Jetstress and determining that the
storage is sized and performs as expected, the next step in the validation process is
to use the Loadgen tool to simulate the MAPI workload against the entire Exchange
infrastructure. Loadgen testing is necessary to determine how each Exchange
component performs under a real, close-to-production user load.
Loadgen requires full deployment of the Exchange environment for validation testing.
You should perform all Loadgen validation testing in an isolated lab environment
where there is no connectivity to your production data. Loadgen generates the users
and workloads against the entire Exchange environment, including both the network
and storage components.
Loadgen simulates the entire mail flow, helping to determine any bottlenecks in the
solution. It is the only tool that helps you determine CPU and memory resources that
are necessary to sustain the load for which the Exchange environment was
designed.
Loadgen test preparation

In our tests, we used the Exchange Server Load Generator 2010 (Loadgen) to simulate Outlook 2007 online mode mailboxes with the following characteristics:
• The action profile is 150 messages per mailbox per day.
• Each mailbox is 500 MB.
• Each database contains 625 mailboxes.

To simulate normal operation, the workday duration is set to eight hours and each
simulation runs for eight hours.
The 150-message profile sends 30 messages and receives 120 messages per mailbox per day. We expected that during an eight-hour simulated day, a Mailbox server with 2,500 active users would log approximately 2.60 sent messages per second and 10.42 delivered messages per second. We used the following formulas to calculate the expected sent and delivered message rates:

Messages sent per second = (2,500 users × 30 messages) / (8 hours × 3,600 seconds) = 2.60
Messages delivered per second = (2,500 users × 120 messages) / (8 hours × 3,600 seconds) = 10.42
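The same expectation can be computed for any user count or profile. A minimal sketch using the figures above:

    def expected_rates(users=2500, sent_per_day=30, delivered_per_day=120,
                       workday_hours=8):
        # Messages per second over the simulated workday
        secs = workday_hours * 3600
        return users * sent_per_day / secs, users * delivered_per_day / secs

    sent, delivered = expected_rates()
    print(round(sent, 2), round(delivered, 2))   # 2.6 10.42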
Loadgen configuration for peak load

Peak load is used in this simulation for the 150-message Outlook 2007 online mode action profile. We enabled peak load for the action profile by setting the simulated workday to four hours rather than eight hours. This Loadgen configuration simulates peak load by doubling the sent and delivered message rates per second.
How to validate test results

Use the following performance monitor counters on the Mailbox server to monitor the message sent and delivered rates:

MSExchangeIS Mailbox(_Total)\Messages Sent/sec
MSExchangeIS Mailbox(_Total)\Messages Delivered/sec

We tracked the Mailbox server response time for client requests to determine the amount of time it takes the Mailbox server to respond to a client request. The average response time per request should not exceed 10 milliseconds. We use the following performance monitor counter on the Mailbox server to monitor response time:

MSExchangeIS\RPC Averaged Latency
Note: As a best practice, disable Hyper-Threading on the root server for all Exchange deployment simulations by rebooting the server, entering the BIOS configuration, and disabling the Hyper-Threading option.

Use the following formula to determine the achieved megacycles per mailbox:

Megacycles per mailbox = (Adjusted megacycles per core) × (Hypervisor overhead %) × (Number of cores) × (Mailbox server CPU utilization) / (Number of users per server)
Note: In a Hyper-V configuration, use the "Hyper-V Hypervisor Logical Processor\% Guest Run Time" performance counter value instead of the Mailbox server CPU utilization value.
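A sketch of the megacycles calculation as applied in Test 1 later in this chapter, where the Hyper-V guest run time counter (43 percent) stands in for the CPU utilization term per the note above, and 1,750 is the concurrent user count per server:

    def megacycles_per_mailbox(adjusted_megacycles_per_core=3794, cores=4,
                               guest_run_time=0.43, users_per_server=1750):
        # Guest run time replaces CPU utilization in a Hyper-V configuration
        return (adjusted_megacycles_per_core * cores
                * guest_run_time / users_per_server)

    print(round(megacycles_per_mailbox(), 1))   # 3.7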
The validity of each test run is determined by comparing the results of selected
performance counters to Microsoft-specified criteria. We collected performance
counter data at 10-second intervals for the duration of each test run, discarded the
results of the first and last hours, and averaged the results over the remaining
duration of the test.
Table 24 lists the primary counters and their validation criteria.

Table 24. Primary counters and validation criteria

Performance monitor counter | Criteria
Processor(_Total)\% Processor Time | Not to exceed 80% during peak load
MSExchangeIS\RPC Averaged Latency | Not to exceed 10 msec
MSExchangeIS Mailbox(_Total)\Messages Sent/sec | Approximately 0.002083 messages/second/mailbox
MSExchangeIS Mailbox(_Total)\Messages Delivered/sec | Approximately 0.008333 messages/second/mailbox
Logical Disk\Disk sec/Read | Not to exceed 20 msec
Logical Disk\Disk sec/Write | Not to exceed 20 msec
Validation test scenarios
For this solution, we performed the following Loadgen tests to measure the
performance of the Exchange infrastructure.

Table 25. Loadgen validation – test scenarios

Test | Description
1 | Normal operation: we simulated a 100% concurrency load for 10,000 users at one site, with each Mailbox server handling 2,500 users.
2 | Within-site switchover: we simulated the failure of a single Hyper-V host server per site and ran a 70% concurrency load against a single Hyper-V host with two Exchange Mailbox VMs, each handling 5,000 users. In this test, only three CAS/HUB servers handled the load.
3 | Site failure: we simulated a site failure and activated the secondary images on stand-by Mailbox servers. We ran a 70% concurrency load against 20,000 users.
Test 1 – Normal operating condition – peak load
Objectives
In this test, the objective was to validate the entire Exchange environment under
normal operating conditions with a peak load. The performance of each Hyper-V
host and VM was measured against Microsoft's recommended performance targets
and thresholds.
Configuration
In this test, all of the Exchange VMs were operating under normal conditions. We
configured Loadgen to simulate peak load and expected the 150-message action
profile running in peak mode to generate double the sent and delivered messages
per second.
Performance results and analysis
The results in Table 26 and Table 27 show that all Hyper-V servers and Exchange
VMs handled the peak load, and the achieved results were within the target metrics.
During peak load, average CPU utilization on the primary Mailbox VMs was
approximately 26 percent and Hyper-V host utilization was about 20 percent. On
the HUB/CAS VMs, CPU utilization was approximately 26 percent. Achieved
megacycles per mailbox were 3.7 (3,794 × 4 × 0.43 / 1,750 ≈ 3.7).
Table 26. Validation of expected load for Test 1

Parameter | Target | Test 1 results
Message delivery rate per mailbox | 0.0056 | 0.0056
IOPS per mailbox | 0.12 | 0.22
Megacycles per mailbox | 3.0 | 3.7
Table 27. Performance results for Loadgen in Test 1

Performance counter | Target | Test 1 results

Hyper-V root servers:
Hyper-V Hypervisor Logical Processor(_total)\% Guest Run Time | <75% | 20%
Hyper-V Hypervisor Logical Processor(_total)\% Hypervisor Run Time | <5% | 2%
Hyper-V Hypervisor Logical Processor(_total)\% Total Run Time | <80% | 22%

Mailbox servers:
Hyper-V Hypervisor Logical Processor(VP0-3)\% Guest Run Time | <80% | 43%
MSExchange database\I/O database reads (attached) average latency | <20 ms | 9 ms
MSExchange database\I/O database writes (attached) average latency | <20 ms (< reads avg.) | 7 ms
MSExchange database\IO log writes average latency | <20 ms | 5 ms
MSExchangeIS\RPC requests | <70 | 3
MSExchangeIS\RPC averaged latency | <10 ms | 2 ms

CAS/HUB servers (combined):
Hyper-V Hypervisor Logical Processor(VP0-3)\% Guest Run Time | <80% | 26%
MSExchange RpcClientAccess\RPC Averaged Latency | <250 ms | 5 ms
MSExchange RpcClientAccess\RPC Requests | <40 | 3
MSExchangeTransport Queues(_total)\Aggregate Delivery Queue Length (All Queues) | <3000 | 2
MSExchangeTransport Queues(_total)\Active Remote Delivery Queue Length | <250 | 0
MSExchangeTransport Queues(_total)\Active Mailbox Delivery Queue Length | <250 | 2.5
Test 2 – Host failure within a site
Objectives
In this test, the objective was to validate the entire Exchange environment under a
physical Hyper-V host failure/maintenance operating condition with a peak load.
The performance of each Hyper-V host and VM was measured against Microsoft's
recommended performance targets and thresholds.
Configuration
During this test, all VMs running on one of the Hyper-V hosts within the site were
shut down to simulate a host maintenance condition. As a result, the database
images (copies) were moved to the other Mailbox servers, creating an operating
condition of 5,000 users per Mailbox server. In this test, only half of the HUB/CAS
servers processed client access and mail delivery.
Performance results and analysis
The results in Table 28 and Table 29 show that all Hyper-V servers and Exchange
VMs handled the peak load, and the achieved results were mostly within the target
metrics. During peak load, average CPU utilization on the primary Mailbox VMs was
approximately 80 percent and Hyper-V host utilization was about 36 percent. On the
HUB/CAS VMs, CPU utilization was approximately 48 percent. Achieved
megacycles per mailbox were 3.5 (3,794 × 4 × 0.80 / 3,500 ≈ 3.5).
Table 28. Validation of expected load for Test 2

Parameter | Target | Test 2 results
Message delivery rate per mailbox | 0.0056 | 0.0056
IOPS per mailbox | 0.12 | 0.24
Megacycles per mailbox | 3.0 | 3.5
Table 29. Performance results for Loadgen Test 2

Performance counter | Target | Test 2 results

Hyper-V root servers:
Hyper-V Hypervisor Logical Processor(_total)\% Guest Run Time | <75% | 36%
Hyper-V Hypervisor Logical Processor(_total)\% Hypervisor Run Time | <5% | 2%
Hyper-V Hypervisor Logical Processor(_total)\% Total Run Time | <80% | 38%

Primary Mailbox servers:
Hyper-V Hypervisor Logical Processor(VP0-3)\% Guest Run Time | <80% | 80%
MSExchange database\I/O database reads (attached) average latency | <20 ms | 20.5 ms
MSExchange database\I/O database writes (attached) average latency | <20 ms (< reads avg.) | 27 ms
MSExchange database\IO log writes average latency | <20 ms | 5 ms
MSExchangeIS\RPC requests | <70 | 6
MSExchangeIS\RPC averaged latency | <10 ms | 2 ms

CAS/HUB servers (combined):
Hyper-V Hypervisor Logical Processor(VP0-3)\% Guest Run Time | <80% | 48%
MSExchange RpcClientAccess\RPC Averaged Latency | <250 ms | 12
MSExchange RpcClientAccess\RPC Requests | <40 | 43
MSExchangeTransport Queues(_total)\Aggregate Delivery Queue Length (All Queues) | <3000 | 12
MSExchangeTransport Queues(_total)\Active Remote Delivery Queue Length | <250 | 0
MSExchangeTransport Queues(_total)\Active Mailbox Delivery Queue Length | <250 | 12
Test 3 – Site failure simulation
Objectives
The objective of this test was to validate the performance of the Exchange
environment during a site failure condition. The performance of each Hyper-V host
and VM was measured against Microsoft's recommended performance targets and
thresholds.
Configuration
During this test, we performed a site failure scenario in which the secondary images
were activated on the stand-by servers. This resulted in 20,000 mailboxes running in
the second site (at 70% concurrency).
Performance results and analysis
The results in Table 30 and Table 31 show that all Hyper-V servers and Exchange
VMs handled the peak load, and the majority of the results were within the target
metrics. A slight increase in logical processor (LP) utilization, up to 85 percent, was
observed on the Hyper-V root servers. This was due to the additional load placed on
the CAS and HUB servers, which were serving double the load of user mail activity.
Achieved megacycles per mailbox were 3.7 (3,794 × 4 × 0.43 / 1,750 ≈ 3.7).
Table 30. Validation of the expected load for Test 3

Parameter | Target | Test 3 results
Message delivery rate per mailbox | 0.0056 | 0.0056
IOPS per mailbox | 0.12 | 0.13
Megacycles per mailbox | 3.0 | 3.7
Table 31. Performance results for Loadgen Test 3

Performance counter | Target | Test 3 results

Hyper-V root servers:
Hyper-V Hypervisor Logical Processor(_total)\% Guest Run Time | <75% | 41%
Hyper-V Hypervisor Logical Processor(_total)\% Hypervisor Run Time | <5% | 2%
Hyper-V Hypervisor Logical Processor(_total)\% Total Run Time | <80% | 43%

Primary Mailbox servers:
Hyper-V Hypervisor Logical Processor(VP0-3)\% Guest Run Time | <80% | 43%
MSExchange database\I/O database reads (attached) average latency | <20 ms | 9 ms
MSExchange database\I/O database writes (attached) average latency | <20 ms (< reads avg.) | 6 ms
MSExchange database\IO log writes average latency | <20 ms | 5 ms
MSExchangeIS\RPC requests | <70 | 3
MSExchangeIS\RPC averaged latency | <10 ms | 2 ms

CAS/HUB servers (combined):
Hyper-V Hypervisor Logical Processor(VP0-3)\% Guest Run Time | <80% | 41%
MSExchange RpcClientAccess\RPC Averaged Latency | <250 ms | 12 ms
MSExchange RpcClientAccess\RPC Requests | <40 | 8
MSExchangeTransport Queues(_total)\Aggregate Delivery Queue Length (All Queues) | <3000 | 13
MSExchangeTransport Queues(_total)\Active Remote Delivery Queue Length | <250 | 0
MSExchangeTransport Queues(_total)\Active Mailbox Delivery Queue Length | <250 | 12
In-site database switchover with EMC Replication Enabler for Exchange 2010
One of the EMC Replication Enabler components is the REE Exchange Listener
service. This service responds to Exchange notifications and automatically moves
Exchange databases to an alternate Mailbox server if it finds any problems with the
primary Mailbox server. To start the REE Exchange Listener service, run the
following command:

[PS] C:\Windows\system32>Start-REEExchangeListener

EMC Replication Enabler includes a set of PowerShell cmdlets to perform various
maintenance operations. A full set of REE cmdlets is provided in Appendix B and in
the EMC Replication Enabler for Microsoft Exchange Server 2010 Installation and
Configuration Guide, available on powerlink.emc.com.
To perform a database switchover manually, use the following command to move a
database from one server to another:

Move-REEActiveMailboxDatabase -Identity <Database name> -MailboxServer <DAG Member in Primary Site> -Mount
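For example, assuming a database named DB01 and a DAG member named PE-MB01 (both names are illustrative), a manual switchover followed by a status check might look like this:

# Example: move DB01 to PE-MB01 and mount it after the move completes
Move-REEActiveMailboxDatabase -Identity DB01 -MailboxServer PE-MB01 -Mount
# Confirm which copy is now active (REE replacement for Get-MailboxDatabaseCopyStatus)
Get-REEMailboxDatabaseCopyStatus -Identity DB01 -Active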
Datacenter switchover validation
Datacenter switchover process
Managing a datacenter or site failure is different from managing the types of failures
that can cause a server or database failover. In an HA configuration, the system
initiates the automatic recovery, and the failure typically leaves the messaging
system in a fully functional state.

By contrast, a datacenter failure is considered a disaster recovery event; recovery
must be performed manually and completed for client service to be restored and
the outage to end. This process is called a datacenter switchover.
As with many disaster recovery scenarios, prior planning and preparation for a
datacenter switchover can simplify your recovery process and reduce the duration of
your outage.
After making the initial decision to activate the second datacenter, there are two
basic steps to complete a datacenter switchover:
1. Activate the Mailbox servers.
2. Activate the other server roles.
The following sections describe each of these steps.
Step 1. Activating Mailbox servers
Before activating the DAG members in the second datacenter, we recommend that
you validate that the infrastructure services in the second datacenter are ready for
messaging service activation.

If the DAG cluster loses quorum due to a disaster at the primary datacenter, the
cluster service and the EMC REE service on the surviving DAG member servers at
the secondary site will be in a stopped state. Perform the following steps:

1. Start these services by running the following commands from an elevated
command prompt on each surviving DAG member server at the secondary site:

net start clussvc /forcequorum
net start "EMC Replication Enabler for Exchange 2010"

2. Activate the databases by running the following PowerShell cmdlet:

Move-REEActiveMailboxDatabase -Identity <Database name> -MailboxServer <DAGMemberInSecondSite>

If the replication link is broken between the primary and secondary sites and the
secondary images are not in sync with the primary images, retry the command
with the force switch:

Move-REEActiveMailboxDatabase -Identity <Database name> -MailboxServer <DAGMemberInSecondSite> -Force

Note: When you run Move-REEActiveMailboxDatabase, REE automatically handles
the storage failover (that is, mirror promotion).

3. Check the event logs and review all error and warning messages to ensure that
the secondary site is healthy. Follow up on and correct all issues prior to
mounting the databases.

4. Mount the databases using the following PowerShell cmdlet:

Get-MailboxDatabase <DAGMemberInSecondSite> | Mount-Database
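The per-database commands in steps 2 and 4 can be wrapped in a short loop. The following is a minimal sketch only, assuming the services in step 1 are already started; the stand-by server name is illustrative, and the move should be attempted without -Force first, as described above.

# Sketch: activate and mount every database on a surviving DAG member
Import-Module REECli.Base
$target = "PE-MB03"    # illustrative stand-by DAG member in the secondary site
Get-MailboxDatabase | ForEach-Object {
    # Add -Force only if the replication link is broken and the images are out of sync
    Move-REEActiveMailboxDatabase -Identity $_.Name -MailboxServer $target
    Mount-Database -Identity $_.Name
}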
Step 2. Activating Client Access servers
Clients connect to service endpoints to access Microsoft Exchange services and
data. Activating Internet-facing Client Access servers therefore involves changing
DNS records to point to the new IP addresses configured for the new service
endpoints. Clients then automatically connect to the new service endpoints in one of
two ways:
• Clients continue attempting to connect, and connect automatically after the TTL
for the original DNS entry has expired and the entry has expired from the client's
DNS cache. Users can also run the ipconfig /flushdns command from a command
prompt to clear their DNS cache manually.
• As Outlook clients start or restart, they perform a DNS lookup and obtain the new
IP address for the service endpoint, which is a Client Access server or array in the
second datacenter.
To validate this scenario with Loadgen, perform the following steps:
1. Change the DNS entry for the Client Access array to point to the VIP of the
HWLB in the secondary site.
2. Run the ipconfig /flushdns command on all Loadgen servers.
3. Restart the Loadgen load.
4. Verify that the CAS servers in the secondary site are servicing the load.
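As a rough sketch of steps 2 and 4 only (the DNS change in step 1 is made on the DNS server), the following PowerShell assumes PowerShell remoting is enabled on the Loadgen servers; the server names and Client Access array FQDN are placeholders:

# Sketch: flush the DNS cache on each Loadgen server, then verify that the
# CAS array name now resolves to the secondary-site VIP (names are placeholders)
$loadgenServers = "LG-01", "LG-02"
Invoke-Command -ComputerName $loadgenServers -ScriptBlock { ipconfig /flushdns }
[System.Net.Dns]::GetHostAddresses("outlook.contoso.com") |
    ForEach-Object { $_.IPAddressToString }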
Validating primary datacenter service restoration (failback)
Overview
Failback is the process of restoring service to a previously failed datacenter. The
steps used to perform a datacenter failback are similar to the steps used to perform
a datacenter switchover. A significant distinction is that datacenter failbacks are
scheduled, and the duration of the outage is often much shorter.
It is important that you do not perform the failback until the infrastructure
dependencies for Exchange are reactivated, functioning, stable, and validated. If
these dependencies are not available or healthy, the failback process is likely to
cause a longer than necessary outage, and it is possible that the process could fail
altogether.
Restoring storage
To restore your CLARiiON storage replication after a site failure, perform the
following steps:
1. Power on the storage at the failed site.
2. Restore the MirrorView and IP links.
3. All consistency groups that are not locally promoted are marked as "Waiting
on Admin" in Navisphere. For each consistency group that is marked "Waiting
on Admin", do the following:
a. From Navisphere, right-click the consistency group and choose
Synchronize from the drop-down menu.
b. Wait for the consistency groups to synchronize.
4. The process of restoring consistency groups that are locally promoted at the
secondary site is more detailed. For each consistency group that is locally
promoted, perform the following sequence of steps:
a. From Navisphere, destroy the consistency groups on both CLARiiON
arrays. Open CG Properties and click the Force Destroy button.
b. Destroy the remote mirrors on the CLARiiON array at the failed site. Open
Mirror Properties, select the Primary Image tab and click the Force
Destroy button.
c. Remove the corresponding LUNs from the storage group on the
CLARiiON array at the failed site.
d. Right-click each remote mirror on the CLARiiON array at the surviving
site, and choose Add Secondary Storage.
e. Choose the LUN from the CLARiiON array at the failed site.
f. Create a new consistency group using the same name.
g. Add all remote mirrors that were part of the original consistency group.
h. Add the corresponding LUNs to the storage group on the CLARiiON array
at the failed site.
Mailbox server role failback
The Mailbox server role should be the first role to fail back to the primary
datacenter. The following steps detail the Mailbox server role failback:

1. Start the Mailbox servers at the primary site and verify that the cluster service
and the "EMC Replication Enabler for Exchange 2010" service are started.

2. Update the REE configuration by running the following PowerShell cmdlet:

Update-REEDatabaseInfo

3. Dismount the databases being reactivated in the primary datacenter from the
second datacenter using the following PowerShell cmdlet:

Dismount-Database -Identity <Database name>

4. After dismounting the databases, move the Client Access server URLs from
the second datacenter to the primary datacenter by changing the DNS record
for the URLs to point to the Client Access server or array in the primary
datacenter.

Important: Do not proceed to the next step until the Client Access server URLs have
moved and the DNS TTL and cache entries have expired. Activating the databases in
the primary datacenter before moving the Client Access server URLs results in an
invalid configuration (for example, a mounted database that has no Client Access
servers in its Active Directory site).

5. You can now activate or move the databases by running the following
PowerShell cmdlet:

Move-REEActiveMailboxDatabase -Identity <Database name> -MailboxServer <DAGMemberInPrimarySite>

6. Mount the databases using the following PowerShell cmdlet:

Mount-Database -Identity <Database name>
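The ordering constraint in steps 2 through 6 can be captured in a short script. The following is a sketch only; the primary-site server name is illustrative, and the comment stands in for re-pointing the Client Access URLs and waiting out the DNS TTL, per the Important note above.

# Sketch: fail the databases back to the primary site in the required order
Import-Module REECli.Base
$primary = "PE-MB01"    # illustrative DAG member in the primary site
Update-REEDatabaseInfo
Get-MailboxDatabase | ForEach-Object {
    Dismount-Database -Identity $_.Name -Confirm:$false
}
# Re-point the Client Access URLs in DNS here, then wait for the TTL and
# client caches to expire before continuing (see the Important note above)
Get-MailboxDatabase | ForEach-Object {
    Move-REEActiveMailboxDatabase -Identity $_.Name -MailboxServer $primary
    Mount-Database -Identity $_.Name
}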
Chapter 9: Conclusion
This document provides an overview of a zero data loss disaster recovery solution
designed, built, and tested by EMC in partnership with Microsoft, Brocade, and Dell.
It highlights the benefits of leveraging EMC Replication Enabler for Exchange 2010
to provide SAN-based block-level synchronous replication as an alternative to native
Exchange 2010 DAG network log shipping asynchronous replication. The paper also
highlights a fully virtualized Exchange 2010 deployment on EMC Unified Storage
with Microsoft Hyper-V, Brocade load balancers, LAN and SAN solutions, and Dell
PowerEdge servers. Testing was conducted at the Enterprise Engineering Center
(EEC), Microsoft's state-of-the-art enterprise solutions validation laboratory on its
main campus in Redmond, Washington.
This white paper documents the value of leveraging SAN-based synchronous
replication with EMC Unified Storage to achieve zero data loss and the lowest
possible RPO, which provides tremendous value to Exchange 2010 environments. It
allows customers to implement Mailbox servers in mailbox resiliency configurations
with database-level replication and failover. EMC's REE integrates directly into
Microsoft Exchange 2010 and provides replication and disaster recovery for the
Exchange 2010 databases.
Brocade provides a reliable infrastructure for deploying a highly available virtualized
Exchange solution in a metropolitan data environment. It is easy to deploy, manage,
and integrate into both new and existing IT environments. For Exchange 2010, the
Brocade ServerIron ADX ensures affinity and load balancing of the traffic going to
the Exchange servers.
Dell’s PowerEdge R910 helps you get the most out of virtualization. Its features
enable you to accelerate virtualization deployment and attain optimal performance
with its embedded hypervisor shipped preloaded on the servers.
The solution also demonstrated the value of deploying Exchange 2010 in a
virtualized environment: consolidating over 20,000 users onto four Dell PowerEdge
R910 servers between two datacenters, a deployment that would require up to 32
servers in a physical Exchange configuration.
Appendixes
Appendix A: References
White papers
For additional information, see the white papers listed below.
• EMC CLARiiON Virtual Provisioning
http://www.emc.com/collateral/hardware/white-papers/h5512-emc-clariion-virtual-provisioning-wp.pdf
• Deploying the Brocade ServerIron ADX with Microsoft Exchange Server 2010
http://www.brocade.com/downloads/documents/solution_guides/BrocadeADX_MSExchange2010_GA-DG-303-00.pdf

Product documentation
For additional information, see the documentation for the products listed below.
• EMC CLARiiON CX4-480
• Microsoft Exchange Server 2010
• Dell PowerEdge R910:
  • R910 Product Details Page:
  http://www.dell.com/us/en/enterprise/servers/poweredge-r910/pd.aspx?refid=poweredge-r910&s=biz&cs=555
  • R910 Product Spec Sheet:
  http://www.dell.com/downloads/global/products/pedge/en/poweredge-r910-specs-en.pdf
  • R910 Cabling White Paper:
  http://www.dell.com/downloads/global/products/pedge/en/R910-Cabling-White-Paper.pdf
• Brocade networking solutions:
  • Brocade ServerIron ADX Family of Application Delivery Controllers
  http://www.brocade.com/products-solutions/products/application-delivery/serveriron-adx-series/index.page
  • Brocade ServerIron ADX Security Guide (Chapter 6 – SSL Acceleration)
  http://www.brocade.com/downloads/documents/product_manuals/B_ServerIron/ServerIron_1221_SecurityGuide.pdf
  • Brocade NetIron MLX and FastIron Ethernet Switches and Routers
  http://www.brocade.com/dotcom/products-solutions/products/ethernet-switches-routers/enterprise-mobility/index.page
  • Brocade SAN Switches (Brocade 300 SAN Switch)
  http://www.brocade.com/products-solutions/products/switches/product-details/index.page
  • Brocade Server Connectivity (Brocade 825 Dual Port 8G HBA)
  http://www.brocade.com/products-solutions/products/server-connectivity/product-details/index.page
  • Brocade – Microsoft Exchange Partner Site
  http://www.brocade.com/sites/dotcom/partnerships/technology-alliance-partners/technology-alliances/Microsoft/Microsoft-Exchange-Server/index.page

Other documentation
For additional information, see the documents listed below.
• Dell's Exchange 2010 Advisor Tool
http://advisors.dell.com/advisorweb/Advisor.aspx?advisor=b6372fc5-7556-4340-8328-b8a88e2e64b2-001ebc&c=us&l=en&cs=g_5
• Dell's Exchange 2010 Page
http://www.dell.com/exchange2010
• Dell's Exchange 2010 Architecture Models white paper
http://www.dell.com/downloads/global/solutions/security/exchange_2010.pdf
• Microsoft Exchange 2010 Mailbox Server Processor Capacity Planning
http://technet.microsoft.com/en-us/library/ee712771.aspx
• Microsoft Exchange 2010 – Understanding Mailbox Database Cache
http://technet.microsoft.com/en-us/library/ee832793.aspx
• Microsoft Exchange 2010 Mailbox Server Role Requirements Calculator
http://msexchangeteam.com/archive/2009/11/09/453117.aspx
• Exchange 2010 Performance and Scalability Counters and Thresholds
http://technet.microsoft.com/en-us/library/dd335215.aspx
• Microsoft Exchange 2010 Overview of Mailbox Server Role
http://technet.microsoft.com/en-us/library/bb124699.aspx
• SPEC CPU2006
http://www.spec.org/cpu2006/
• Intel Xeon Processor 7500 Series Overview
http://www.intel.com/cd/channel/reseller/asmo-na/eng/products/server/processors/7500/feature/index.htm?prn
Appendix B: REE PowerShell Cmdlets Reference
Overview
A full reference to the REE PowerShell cmdlets, along with detailed information
about REE installation and configuration, can be found in the Replication Enabler for
Microsoft Exchange Server 2010 Installation and Configuration Guide, available at:
http://powerlink.emc.com/km/live1/en_US/Offering_Technical/Technical_Documentation/300-010-675.pdf

To use the REE PowerShell cmdlets, open the Exchange Management Shell and
import the REECli.Base module. Use Get-Command to display the list of available
REE cmdlets, for example:

[PS] C:\Windows\system32>import-module REECli.Base
[PS] C:\Windows\system32>get-command -module REECli.Base
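For instance, a quick health check before any manual switchover might combine two of the cmdlets listed below (the output formatting shown is illustrative):

# Example: review REE configuration and copy status before a switchover
Get-REEDatabaseInfo | Format-List             # look for configuration errors
Get-REEMailboxDatabaseCopyStatus -Active      # confirm the currently active copies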
Parameters to REE cmdlets
Cmdlet parameters vary with the command. Table 32 lists the most common
parameters.

Table 32. Common REE cmdlet parameters

Parameter | Description
-MailboxServer <String> | Specifies the Mailbox server where the cmdlet will be executed. If not specified, the cmdlet executes on the server where the PowerShell console is running. When used with a move database request, the database is moved to the specified Mailbox server.
-Single | Specifies that the command should be executed only on the specified Mailbox server, as opposed to all the servers in the DAG. If you do not specify the Mailbox server, the cmdlet executes on the server where the PowerShell console is running.

For example, the following cmdlet executes on the server PE-MB01 and displays
information about the REE plug-ins detected on the Mailbox server PE-MB01:

Get-REEPluginInfo -Single -MailboxServer PE-MB01

The following cmdlet executes on the server PE-MB01 and displays information
about the REE plug-ins detected on all Mailbox servers in the DAG:

Get-REEPluginInfo -MailboxServer PE-MB01
List of REE cmdlets
Table 33 lists and describes the REE cmdlets.

Table 33. REE cmdlets

Cmdlet | Description
Get-REEPluginInfo [-Single] [-MailboxServer <String>] | Lists the installed REE plug-ins, their versions, and their state (Enabled/Disabled).
Enable-REEPlugin -Identity <String> [-Single] [-MailboxServer <String>] | Enables the specified plug-in. The plug-in name or ID must be specified using the Identity parameter.
Disable-REEPlugin -Identity <String> [-Single] [-MailboxServer <String>] | Disables the specified plug-in. The plug-in name or ID must be specified using the Identity parameter.
Set-REEPluginConfig -Plugin <String> -ConfigFile <String> [-Single] [-MailboxServer <String>] | Sets or updates the plug-in configuration with information such as IP addresses or credentials. This information is encrypted and stored by the REE framework.
Update-REEDatabaseInfo [-Single] [-MailboxServer <String>] | Triggers the discovery of replicated LUNs and their relationships to the mailbox database copies. REE flags any inconsistencies as configuration errors. To see the up-to-date information, including configuration errors (if any), run the Get-REEDatabaseInfo cmdlet.
Get-REEDatabaseInfo [-MailboxServer <String>] | Lists the discovered databases, the local and remote copies, and the configuration errors (if any) for each database.
Get-REEMailboxDatabaseCopyStatus [-Identity <String>] [-Server <String>] [-Active] | Replaces the native Exchange Server cmdlet Get-MailboxDatabaseCopyStatus.
Enable-REEDatabaseTargetImageAccess -Identity <String> [-Single] [-MailboxServer <String>] | By default, the database target images are not accessible for read. In configurations that require this access, you can use this cmdlet to make the database target images read-enabled. Specify the database name or ID using the Identity parameter.
Disable-REEDatabaseTargetImageAccess -Identity <String> [-Single] [-MailboxServer <String>] | Disables read access to the database target images. Specify the database name or ID using the Identity parameter.
Get-REEExchangeListenerStatus [-Single] [-MailboxServer <String>] | Displays the status of the REE Exchange Listener, which processes the move database (failover) notifications from the Exchange server.
Start-REEExchangeListener [-Single] [-MailboxServer <String>] | Starts the REE Exchange Listener.
Stop-REEExchangeListener [-Single] [-MailboxServer <String>] | Stops the REE Exchange Listener.
Get-REEPrimaryActiveManager [-MailboxServer <String>] | Lists the current Primary Active Manager. Active Manager runs on all Mailbox servers that are members of a database availability group (DAG). There are two Active Manager roles: Primary Active Manager (PAM) and Standby Active Manager (SAM). Only one of the Mailbox servers in a DAG can be the PAM at any given time. The PAM is responsible for getting topology change notifications and reacting to server failures. REE coordinates with and utilizes the services of the PAM during a move database operation.
Move-REEActiveMailboxDatabase -Identity <String> -MailboxServer <String> [-Mount] [-Force] | Moves a database from one Mailbox server to another. Specify the database name or ID using the Identity parameter. Use the Mount switch to mount the database after the move completes. Use the Force switch to fail over to a remote copy whose database target image status is other than consistent or synchronized (for MirrorView) or replicating (for RecoverPoint). Note: Requiring the Force switch prevents data loss from accidentally failing over to a database copy that is out of sync.