Horizon® DaaS® Platform 7.0
Service Provider Administration
March 2017
Revision History

Date | Version | Description
03/02/2017 | 1.0 | Initial release
03/13/2017 | 1.1 | 1st revision
04/07/2017 | 1.2 | 2nd revision
04/24/2017 | 1.3 | 3rd revision
05/03/2017 | 1.4 | 4th revision
05/09/2017 | 1.5 | 5th revision
© 2017 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents.
VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
Contents
1 Upgrading to Horizon DaaS Platform 7.0.0
IMPORTANT: Snapshot Required Prior to Upgrade
1.1 Pre-Upgrade Tasks
1.1.1 Confirm that System is Ready for Upgrade
1.1.2 Verify Port Availability
1.1.3 Create Snapshot of All Service Provider Appliances
1.1.4 Increase CPU Count and RAM Size for Service Provider Appliances
1.1.5 Obtain the Upgrade Files
1.2 Upgrade Appliances to DaaS Platform 7.0.0
1.2.1 Upgrade Primary Service Provider Appliance
1.2.2 Upgrade Org1000 Appliances
1.2.3 Upgrade All Other Appliances
1.2.4 Check Desktop Functionality
1.3 Update DaaS Agent
1.4 Snapshots and Rollback
1.4.1 Rescheduling the Snapshot Deletion
1.4.2 Snapshot
2 System Blueprint
2.1 Introduction
2.1.1 Intended Audience
2.1.2 Organization of this Section
2.1.3 Terms
2.2 Platform Overview
2.2.1 Key Components of the Horizon DaaS Platform
2.2.2 Horizon DaaS Management Appliances
2.2.2.1 Management Appliance High Availability
2.2.3 Network Services
2.2.4 Agent Compatibility
2.3 Compute Resources
2.3.1 Hardware Requirements for Horizon DaaS Management Hosts
2.3.1.1 Horizon DaaS Appliance Sizing Requirements
2.3.1.2 Sizing a Management Host for a Specific Number of Tenants
2.3.2 Hardware Requirements for Virtual Desktop Hosts
2.3.2.1 Sizing a Desktop Host for a Specific Number of Desktops
2.3.2.2 Host Sizing Calculations
2.3.2.3 CPU Speed Considerations
2.4 Network Resources
2.4.1 Virtual LANs in the Horizon DaaS Platform
2.4.1.1 Distributed Virtual Networking
2.4.2 Layer 2 segregation using VLANs
2.4.3 Layer 3 segregation using VRFs
2.4.4 Networking Resources
2.4.4.1 Switches
2.4.4.2 Routers
2.4.4.3 Load Balancing
2.4.4.4 Gateways
2.4.4.5 Cross Data Center HA
2.4.5 Managing External IPs and global DNS entries
2.4.6 Wiring the Data Center
2.5 Storage Resources
2.5.1 Storage Options
2.5.1.1 SAN Storage
2.5.1.2 NFS Storage
2.5.2 Tenant Data Storage
2.6 Networking between Data Centers
2.6.1 Traffic between Data Centers
2.7 Sample Deployment
2.7.1 Horizon DaaS Management Appliances Required for Tenant A
2.7.2 Horizon DaaS Management Appliances Required for Tenant B
2.7.3 Desktop Host Requirements for Tenant A and Tenant B
2.7.4 Additional Data Centers and Tenant C
2.7.5 Storage: Three Tenants in Two Data Centers
2.8 Planning a Production Environment
2.9 Data Protection
2.9.1 Horizon DaaS Appliance Backup
2.9.2 Desktop Backup
2.9.3 User Data Backup / Replication
2.10 Security
2.10.1 Platform Security
2.10.1.1 Appliance Services
2.10.1.2 DaaS Agent Services
2.10.2 Security Best Practices
2.10.2.1 Unique DNS Record for Each Portal
2.11 Role Separation and Administration
2.11.1 Service Center
2.11.2 Administration console
2.11.3 Desktop Portal
2.12 NetApp Information
2.12.1 Best Practices
2.12.2 Configuring Your vFiler
2.12.3 Desktop DR Deployment Strategies
2.13 Cisco Virtualized Multi-Tenant Data Center (VMDC)
2.13.1 VMDC 2.2 Solution Components
2.13.2 Suggested components for trial and POC environments
Platform Install Checklist – Install Using vCenter
Service Provider Installation Worksheet
Tenant 1 – Installation Worksheet
Tenant 2 – Installation Worksheet
3 Tenant Installation – vCenter
3.1 Overview
3.2 Tenant Installation Prerequisites
3.2.1 Discovery and Assignment
3.2.1.1 (Optional) Configure NetApp VSC Plug-in
3.2.2 Enterprise Network Connectivity
3.2.3 Tenant Network Configuration
3.2.4 DNS Configuration
3.2.5 Allocate Tenant IP Addresses
3.2.6 Define or install DHCP service for the tenant.
3.2.7 Active Directory Configuration
3.2.8 SSL Certificate
3.2.9 About File Shares
3.3 Tenant Installation
3.3.1 Overview
3.3.2 Create Tenant Appliances
3.3.3 Add Desktop Compute Resources
3.3.4 (Optional) Configuring the Netapp VSC Plug-in
3.3.5 Assign Resources to Tenant
3.3.6 Assign Networks to Desktop Manager(s)
3.3.7 Configure Datastores
3.3.8 Assign Desktop Model Quotas
3.3.9 Assign Protocol Quotas
3.3.10 Set Up Desktop Connection Via Access Point
3.3.11 Enter the Tenant’s AD Information
3.3.12 Apply Tenant Certificates to Tenant Appliances
3.3.12.1 Generate Tenant Certificates
3.3.12.2 Apply Tenant Certificates
3.3.13 Extending a Tenant Across Datacenters
3.3.13.1 Adding a Network Component
3.3.13.2 Adding Appliances
3.4 Creating a Windows 7 Gold Template
Tenant Installation Worksheet
4 Service Provider Installation – vCenter
4.1 Overview
4.2 Service Provider Prerequisites
4.2.1 Required Files
4.3 Bootstrap Primary Service Provider (SP1) Appliance
4.3.1 Prepare Storage Configuration on Both Management Hosts
4.3.2 Deploy the DaaS OVA File
4.3.3 Run the Bootstrap Script to Configure Network on SP1 Appliance
4.3.4 Copy the DaaS Software to the Service Provider Appliance
4.3.5 Run bootstrap Script to Install the DaaS Software
4.4 Configure Service Center
4.4.1 Start the Service Center
4.4.2 Register the Service Provider Domain
4.4.3 Discover the DaaS Management Server
4.4.4 (Optional) Rename Resource Manager
4.4.5 (Optional) Turn off Local Disk Provisioning
4.5 (Optional) Configuring the Netapp VSC Plugin
4.6 Create the Remaining Service Provider Appliances
4.6.1 Create HA Service Provider (SP2) Appliance
4.6.2 Apply Service Provider Certificate Files to Service Provider Appliances
4.7 Add Tenant Resource Manager
4.7.1 Create a Tenant Resource Manager Appliance
4.7.2 (Optional) Give the Tenant Resource Manager a Friendly Name
4.7.3 (Optional) Configuring Netapp VSC plugin for Tenants
4.8 Define Desktop Models
4.9 Installing Multiple DaaS Datacenters
4.9.1 Prepare Storage Configuration on Both Management Hosts
4.9.2 Deploy OVA File
4.9.3 Run the Bootstrap Script
4.9.4 Rerun the Bootstrap Script
4.9.5 Start the Service Center and Discover Management Host
4.9.6 Complete the Datacenter Build Out
5 Access Point Setup
5.1 Overview
5.1.1 High-Level Architecture
5.1.2 Basic Functionality
5.1.3 Access Point vs. dtRAM
5.1.4 Performance
5.2 Set Up Access Point
5.3 Example of Load Balancer Configuration
6 HTML Access (Blast) Setup
6.1 Overview
6.1.1 About Desktop Protocols
6.1.2 About HTML Access (Blast)
6.1.3 System Requirements
6.1.4 HTML Access (Blast) Support for RDSH Applications
6.2 Setup Procedure
6.2.1 Install Correct Browser
6.2.2 Prepare Desktops to Support Protocol
6.2.3 Install DaaS Agent
6.2.4 Install Horizon Agent
6.2.4.1 Install on Desktop (Windows 7, Windows 8, Windows 8.1)
6.2.4.2 Install on Windows Server 2008 R2 or 2012 R2 as Personal Desktop (Non-RDSH)
6.2.4.3 Install on Windows Server 2008R2/2012 as RDSH Role
6.2.5 Add the HTML Access (Blast) Group Policy Settings to the Local Computer Policy Environment
6.2.6 Automate SSL Installation
6.2.6.1 Import Certificate and Record Certificate Thumbprint
6.2.6.2 Create Post Sysprep Script/Batch File on Gold Pattern Image and Copy Certificate
6.2.6.3 Windows 7 and Later
6.2.6.4 Windows XP
6.2.6.5 Convert Image to Gold Pattern or Reseal
6.3 Troubleshoot Connection Problems
6.4 Known Limitations and Workarounds
7 Tenant Customization
7.1 Custom Branding
7.2 Configure NetApp Storage
7.2.1 Summary
7.2.2 Hardware and Software Requirements
7.2.3 NFS Exports
7.2.4 Local Mount Point Structure
7.2.5 Permissions and Security
7.2.6 Add a New NetApp Service Account
7.3 Super Tenant
7.3.1 Overview
7.3.2 Super Tenant Prerequisites
7.3.2.1 Networking Requirements
7.3.2.2 Tenant Active Directory and DNS Configuration
7.3.2.3 Tenant DHCP Configuration
7.3.2.4 Gold Pattern and DaaS Agent
7.3.2.5 Gold Pattern
7.3.2.6 DaaS Agent
7.3.2.7 Tenant Infrastructure Overview Diagram
7.3.3 Service Center
7.3.3.1 Create a Super Tenant
7.3.3.2 Enable an Existing Tenant as a Super Tenant
7.3.3.3 Add Networks for a Super Tenant
7.3.3.4 Disabling the Super Tenant Option
7.3.4 Billing
8 Reports
8.1 Billing Summary Reports
8.1.1 Overview
8.1.2 Override Report Intervals
8.1.3 Description of Record Layout
9 System Maintenance
9.1 Backing Up and Restoring Databases
9.1.1 Back Up a Database
9.1.2 Restore a Database
9.2 Slony Reinitialization
9.3 Database Failover
9.3.1 Overview
9.3.2 Enable Write Operations on the Secondary Database
9.3.3 Promote the Secondary Appliance to Primary Appliance
9.4 Datacenters
9.4.1 Failover Master
9.4.2 Failback a Datacenter
9.4.3 Rename a Datacenter
9.4.3.1 Overview
9.4.3.2 Edit final_config.txt
9.4.3.3 Rerun bootstrap.sh
9.4.4 Decommission a Datacenter
9.4.4.1 Execute Initial Shutdown Steps
9.4.4.2 Perform Initial Tenant Maintenance
9.4.4.3 Promote the Primary Service Provider and Tenant to be the Primary Across Datacenters
9.4.4.4 Perform Initial Service Provider Maintenance on Remaining Datacenter
9.4.4.5 Clean Up Proxychains Configuration
9.4.4.6 Clean Up FDB
9.4.4.7 Re-initialize Slony on Affected Nodes
9.4.4.8 Bring the System Up
9.4.4.9 Final Tasks
9.5 Monitoring
9.5.1 Introduction
9.5.1.1 Critical Nodes
9.5.1.2 Basic System Functions
9.5.2 Web Application Monitoring
9.5.2.1 Port Response
9.5.2.2 Monitoring CIM Classes
9.5.3 CIM Providers on DaaS Management Nodes
9.5.3.1 Operating Environment CIM Providers for DaaS Nodes
9.5.3.1.1 Linux_OperatingSystem
9.5.3.1.2 Linux_EthernetPort
9.5.3.1.3 Linux_ComputerSystem
9.5.3.1.4 CIM_FileSystem
9.5.3.2 Application-Specific CIM Providers for DaaS Management Appliances
9.5.3.2.1 Service Provider Appliances
9.5.3.2.2 Resource Manager Appliances
9.5.3.2.3 Tenant Appliances
9.5.3.2.4 Desktop Manager Appliances
9.5.3.3 Description of DaaS CIM Providers
9.5.3.3.1 Desktone_ActiveDirectoryStatus
9.5.3.3.2 Desktone_ApplicationServer
9.5.3.3.3 Desktone_ApplicationServerStatistics
9.5.3.3.4 Desktone_CommonDatabase
9.5.3.3.5 Desktone_DatabaseService
9.5.3.3.6 Desktone_DatabaseReplicationService
9.5.3.3.7 Desktone_HypervisorManagerStatus
9.5.3.3.8 Desktone_InstalledProduct
9.5.3.3.9 Desktone_NTPService
9.5.3.3.10 Desktone_XMPService
9.5.4 WBEM and CIM
9.5.4.1 Connecting to the WBEM/CIM Server of a DaaS Management Appliance
9.5.4.2 Using WBEM/CIM to Monitor ESX Hosts
Appendix A Connection Matrix
Appendix B Guest OS Support
1 Upgrading to Horizon DaaS Platform 7.0.0
IMPORTANT: Snapshot Required Prior to Upgrade
Before you begin the upgrade procedure, you must take a snapshot of all
service provider appliances. This is required to protect system data in case you
encounter issues during upgrade. See Create Snapshot of All Service Provider
Appliances below for more information.
1.1 Pre-Upgrade Tasks
1.1.1 Confirm that System is Ready for Upgrade
● All appliances across all datacenters must be at version 6.1.6. To verify the version of your appliances,
log in to the Service Center and select appliances > browse appliances. The version number is
displayed in the Version column of the table.
● All appliances across all datacenters must be up and running.
● The Virtual Center server must have proper, valid certificates.
○ The CN of the certificate must match the hostname.
○ The SSL certificate must have the Virtual Center IP addresses in the DNS entry (if it is being accessed by
IP address).
● All appliances across datacenters must be synchronized with the NTP server. There should not be any
time differences between appliances.
● The Administrator group should not be part of any Organizational Unit in the domain.
● All appliances should be recognized by FQDN.
● All appliances across datacenters should be accessible over SSH (as desktone and as root) without a
password from the primary Service Provider appliance of that datacenter (see the sketch after this list).
If this is not true, contact VMware support for the workaround/resolution.
Note: DO NOT use the floating IP address when executing SSH commands. Instead, go directly to the
SP1 appliance.
● The Service Provider appliance password must not have been changed after the bootstrap process. The
SHA2 certificates generation process during the upgrade will not work if the password has been
changed post-bootstrap. Contact VMware support for the workaround/resolution in such a case.
● All active reservations must have been completed successfully. If any reservation appears with an
'unknown' state in the reservations section of the Service Center portal, execute the following SQL
statement on the fdb database of the master Service Provider appliance only:
update reservation set life_state = 'completed' where life_state = 'started' ;
Before executing the upgrade, query the reservation table and confirm that the remaining reservations are
not valid ones. Confirm that there are no scheduled or active reservations before starting the upgrade
process; a verification sketch follows this list.
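The following is a minimal sketch of how a few of the readiness checks above could be run from the primary Service Provider (SP1) appliance of each datacenter. The fdb database name and the life_state values come from the statement above; the example hostname, the local psql invocation, and the use of ntpq are assumptions about your environment and may need adjusting.

# Passwordless SSH as desktone (repeat as root); this should print the remote hostname without prompting for a password.
ssh -o BatchMode=yes desktone@sp2.example.local hostname

# NTP synchronization; offsets should be near zero on every appliance.
ntpq -p

# List reservations that are not yet completed in the fdb database (run on the master Service Provider appliance only).
psql fdb -c "select * from reservation where life_state <> 'completed';"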
1.1.2 Verify Port Availability
All appliances now use port 8443 to communicate with each other, and the DaaS Agent also uses port
8443 to communicate with the appliances. You must confirm that port 8443 is open, and may also need
to open an additional firewall port to allow traffic using port 8443.
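As a quick connectivity check before the upgrade, you can test whether another appliance accepts connections on port 8443. This is only a sketch: the hostname is a placeholder for one of your own appliances, and it assumes the netcat (nc) utility is available.

nc -zv sp2.example.local 8443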
In addition to port 8443, confirm that the following ports are open.
Tenant appliances:
● Inbound TCP 3443
● Inbound TCP 4001 – Listens on localhost only for messages from Desktop Managers.
● Inbound TCP 6443
● Outbound to AD servers: TCP 3268/3269 – Global catalog ports on AD servers for LDAP/LDAPS (respectively).
● Outbound to AD servers: TCP 88 – Kerberos (for new, more secure LDAP communication & password change functionality).
● Outbound TCP 6443

Desktop Manager appliances:
● Inbound TCP 3443
● Inbound TCP 4002 – Handles connections from agents. When agents start up they connect to the message bus on this port on one of the Desktop Managers so that they can receive messages from them.
● Inbound TCP 4101 – Used for router clustering. JMS routers on HA pairs connect to each other on this port so that they can route messages between Desktop Managers and ensure messages reach the agent, regardless of which Desktop Manager the agent is connected to.
● Inbound TCP 6443
● Outbound TCP 4101 – Used for router clustering (as described for inbound TCP 4101 above).
● Outbound TCP 6443

Resource Manager appliances:
● Inbound TCP 6443
Refer to the following lists of commands to open the inbound and outbound ports.
● Commands to execute in tenant and desktop manager appliances across datacenters:
iptables -A INPUT -p tcp --dport 3443 -j ACCEPT
iptables -A INPUT -p tcp --dport 4002 -j ACCEPT
iptables -A INPUT -p tcp --dport 4101 -j ACCEPT
iptables -A INPUT -p tcp --dport 6443 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 4101 -j ACCEPT
● Commands to execute in resource manager appliances across datacenters:
iptables -A INPUT -p tcp --dport 6443 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 6443 -j ACCEPT
1.1.3 Create Snapshot of All Service Provider Appliances
During upgrade, the DaaS platform automatically creates a snapshot of all existing Tenants and Resource
Managers, but does not create snapshots of Service Providers. Before starting the upgrade, you must create
manual snapshots of all Service Provider appliances.
Procedure
1. Shut down all Service Provider appliances, making sure you shut down the primary Service Provider
appliance last.
2. Using the VMware vSphere Client, create a snapshot of all Service Provider appliances.
3. Beginning with the primary Service Provider appliance, power up all Service Provider appliances.
4. Wait until all appliances are online before proceeding with the upgrade. To check the status in the
Service Center, select appliances > browse appliances. The appliance is online if the status line
contains the green arrow icon.
1.1.4 Increase CPU Count and RAM Size for Service Provider Appliances
To increase CPU count and RAM size, perform the following steps for each Service Provider appliance.
Procedure
1. In the vCenter interface, right-click the Service Provider VM.
2. Select Edit Settings.
3. Increase the CPU count to 2.
4. Increase the RAM (Memory) to 4096 MB.
5. Click OK to save the settings.
1.1.5 Obtain the Upgrade Files
Procedure
1. Log in to the primary Service Provider appliance and run all commands as root:
sudo -i
2. Ensure that the upgrade directory exists and is empty by recreating it:
rm -rf /usr/local/desktone/upgrade
mkdir /usr/local/desktone/upgrade
3. Ensure that the /data/tmp directory exists and is empty by recreating it:
rm -rf /data/tmp
mkdir /data/tmp
4. Copy the upgrade package (upgrade7.0.0.tgz) to the /data/tmp directory on the primary Service
Provider appliance.
5. Verify the package’s integrity by computing sha256sum and comparing it with the expected value of
“7ae1781a40c6f1e5bc89694a847579909434ea7076faa125b219141ee0158dfe":
sha256sum /data/tmp/upgrade7.0.0.tgz
The output should appear exactly as below:
7ae1781a40c6f1e5bc89694a847579909434ea7076faa125b219141ee0158dfe
upgrade7.0.0.tgz
6. Untar the upgrade package in the /data/tmp directory:
cd /data/tmp
tar --no-same-owner -xzvf ./upgrade7.0.0.tgz
7. On SP1 of the primary datacenter only, run “./preUpgradeTask-7.0.0.sh” with no argument.
Running this script permanently changes the APT repo cache directory to /data/apt, fixes a bug so that any
debian package can be downloaded from the APT repo, and provides other fixes.
Note: The script attempts to restart dtService on all non-local SP appliances across DCs. If for some
environmental reason it fails to stop dtService, manually stop dtService and rerun the script.
Note: This script is idempotent, so it can be run multiple times if necessary.
Important: You must wait up to three minutes for the changes introduced by the preUpgradeTask-7.0.0
script to complete before proceeding to the next steps.
8. On SP1 of each datacenter, run the following:
preUpgradeTask-7.0.0.sh checkspace
This will check that the necessary space is available on each appliance and inform you if it is not. The
install requires approximately 1.4 GB in /usr and 150 MB in /var.
9. Download the following debian files to the /data/tmp directory on the primary Service Provider
appliance.
○ dt-platform-7_0_0.deb
○ dt-aux-7_0_0.deb
To avoid errors due to space limitations, download the debian files only after running the
preUpgradeTask-7.0.0 script.
10. Move the debian files to /data/repo/ :
mv /data/tmp/*.deb /data/repo/
11. Delete the upgrade tarball and move the rest of the files to /usr/local/desktone/upgrade:
rm -f /data/tmp/*.tgz
mv /data/tmp/* /usr/local/desktone/upgrade
The /usr/local/desktone/upgrade directory size must not exceed 2 MB.
To verify directory size, execute the following command.
du -sh /usr/local/desktone/upgrade
If the directory size exceeds 2 MB, delete the unwanted files before proceeding to the next step.
12. Wait for approximately five minutes to allow file replication to complete.
13. Verify that the right packages are on all of the Service Provider appliances of each Datacenter:
/usr/local/desktone/upgrade/checkPackages.sh
The expected output of this command is similar to the following:
Found 167 out of 167 packages at xxx.xx.xxx.xx.
Found 167 out of 167 packages at xxx.xx.xxx.xx.
All packages found on specified IPs. Please confirm all SP Nodes are listed and
proceed with upgrade.
Note: If any SP Node entry in the list is missing, do not proceed with the upgrade.
1.2 Upgrade Appliances to DaaS Platform 7.0.0
1.2.1 Upgrade Primary Service Provider Appliance
Upgrade primary SP1 to DaaS platform 7.0.0 using the following procedure.
Procedure
1. Ensure that all commands are run as root (needed only if you are not already root):
sudo -i
2. Execute the upgrade script; specifying the full path is required:
/usr/local/desktone/upgrade/runUpgrade.sh
Note: If the upgrade fails and you see a debian package update/install error in
/var/log/desktone/upgrade.log, then perform the following steps:
a. Run apt-get command to install/update the package in question.
b. Run commands:
mv /usr/local/pgsql.old /usr/local/pgsql
mv /data/db.old /data/db
service postgres start
c. Run /usr/local/desktone/upgrade/runUpgrade.sh again.
The system prompts you to accept the End-User License Agreement (EULA).
3. If you accept the terms, enter Yes.
Upon successful completion, the Primary SP appliance reboots.
Upon a failed completion, the dtService and CIM service stop and information regarding the failure is
contained in this file: /var/log/desktone/upgrade.log
4. Once the reboot completes and the dtService has started, navigate to the Service Center using the Primary
Service Provider's IP address:
http://IP_Address_Primary_Service_Provider/service
5. Review the Appliances page (appliances > browse appliances) and verify that the Primary Service
Provider appliance is at version 7.0.0.
1.2.2 Upgrade Org1000 Appliances
Note: Before upgrading appliances for an organization, navigate to the Reservations page in the Service
Center (appliances > reservations) and reschedule any pending reservations until after the upgrade.
Procedure
1. In the Service Center, select appliances > maintenance.
2. Under Upgrade Operation, select organization ID of 1000 and click Invoke Upgrade.
3. All appliances for organization 1000 are updated. Allow a minimum of 40-50 minutes for each data
center to upgrade. Services on affected appliances are stopped and started automatically. Watch
desktone.log for progress (located in the /var/log/desktone directory on the Primary Service
Provider; see the example after this procedure). While the upgrade is in progress, the status of “running” is displayed in the table
under the “Upgrade Operation” section. When it completes, the status of “successful” or “failed” is
shown. You may need to refresh the page to get the latest status.
4. Stay logged in to the Service Center if you are upgrading other appliances.
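While an upgrade operation is running, one convenient way to follow its progress is to tail the log file mentioned above on the Primary Service Provider appliance. This is only a convenience sketch; the log path is the one given in the steps above.

tail -f /var/log/desktone/desktone.log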
1.2.3 Upgrade All Other Appliances
Note: Before upgrading appliances for an organization, navigate to the Reservations page in the Service
Center (appliances > reservations) and reschedule any pending reservations until after the upgrade.
1. In the Service Center, select tenants > browse tenants to find the organization IDs for each Tenant.
2. Select appliances > maintenance.
3. Under Upgrade Operation, select the organization ID of your choice and click Invoke Upgrade.
4. Tenant appliances for the organization are updated. Allow a minimum of 40-50 minutes for each data
center to upgrade. Services on affected appliances are stopped and started automatically. Watch
desktone.log for progress (located in the /var/log/desktone directory on the Primary Service
Provider). While the upgrade is in progress, the status of “running” will be displayed on the table
under the “Upgrade Operation” section. When it completes, the status of “successful” or “failed”
is shown. You may need to refresh the page to get the latest status.
5. In the Service Center, select appliances > browse appliances to verify that all tenant appliances are at
version 7.0.0. If some appliances are not at version 7.0.0, you did not update all of the organization IDs
shown on the Tenants screen.
6. Stay logged in to the Service Center if you are upgrading other organizations, and repeat this section as
needed.
1.2.4 Check Desktop Functionality
When you have finished the upgrade, perform the steps below to check that desktops are running on each
of the desktop managers.
Procedure
1. In the Service Center, open the Service Grid page and click on Desktop Managers.
2. In the tree in the left window, select each desktop manager that had one or more desktops running
before the upgrade and confirm that the desktops are still running.
3. If any of the desktop managers that had VMs online before the upgrade now show zero VMs online,
initiate a full inventory by performing the following steps.
a. Display the policy settings for the tenant in question in the Service Center under Tenants ->
Policy
b. Note the current setting for the element.monitor.element.fullInventory.interval policy (default
value is 43200000 milliseconds).
c. Change the setting to a number less than 100 and save changes.
d. Change the setting back to the previous value.
e. This will cause the full inventory to run and the desktops will reappear within minutes.
1.3 Update DaaS Agent
When the Tenant appliances have been upgraded, update the DaaS Agent using a Group Policy Object (GPO)
on a domain controller.
1.4 Snapshots and Rollback
During upgrade, the DaaS platform automatically creates snapshots of Tenant appliances and resource
managers (and creates a reservation task to remove snapshots from the previous upgrade). Use these and
the Service Provider snapshots you created previously if you need to roll back to the previous version after
completing the upgrade.
Note: If you roll back to DaaS platform 6.1.x, any changes made to the environment after the DaaS
platform 7.0.0 upgrade will be lost, for example changes to pools and mappings.
1.4.1 Rescheduling the Snapshot Deletion
A reservation exists to delete Tenant and Resource Manager snapshots. Once the snapshots are deleted, you
cannot roll back. To change the reservation and keep the snapshots for a longer time period:
1. In the Service Center, select appliances > reservations.
2. Locate the reservation named “Delete snapshots for <X> - <Y>”, where X is the org ID and Y is the
datacenter, and click the Reschedule link for this reservation.
1.4.2 Snapshot
You can see the list of current snapshots using the vSphere client. For example, if you are upgrading from
6.1.6 to 7.0.0, the snapshot name and description in vSphere will look like this:
Name: Automated Snapshot at 6.1.6
Description: AUTO|6.1.6|7.0.0|<Timestamp>
2 System Blueprint
NOTE: THIS SECTION IS IN DRAFT STATUS, AND SOME INFORMATION MAY NOT BE UP-TO-DATE FOR THE
CURRENT RELEASE. FOR MORE INFORMATION REGARDING THE TOPICS DISCUSSED BELOW, CONTACT YOUR
VMWARE REPRESENTATIVE.
2.1 Introduction
This section provides an overview of the data center infrastructure requirements of the Horizon DaaS
Platform. The view presented here is from the service provider (SP) perspective: that is, data center
infrastructure primarily refers to the organization of resources within the SP site. Topics such as tenant user
storage requirements are not addressed in detail in this section.
The examples in this section are hypothetical and are presented to introduce the methods of estimation only.
The equipment needed in any installation is specific to a data center, tenants, and network. For example, the
typical storage requirements for a tenant are difficult to generalize since virtual machine (VM) image sizes
can vary from as low as 8 or 10 GB in a student environment to more than 50 GB in a business environment.
Estimates in this section are based on past experience and cannot accurately reflect every environment.
VMware strongly recommends performing an analysis to determine the requirements of your specific
environment. Contact VMware Technical Support for assistance with this analysis.
2.1.1 Intended Audience
This section assumes that you are familiar with:
● Basic networking concepts such as layer 2 separation
● Microsoft Active Directory
● Virtualization software, such as VMware vSphere
● Storage concepts such as I/O, protocols like NFS and iSCSI, and replication
● Linux and Windows operating systems
● Data center operations
2.1.2 Organization of this Section
Platform Overview introduces the individual components of a Horizon DaaS installation, describing each
appliance’s function and its place in the system.
Compute Resources describes how to size the servers needed to host the Horizon DaaS Platform.
Network Resources describes the build-out of more tenants within a data center. The emphasis in this
section is on the core networking in the Horizon DaaS environment between the SP and tenants, particularly
on maintaining clear and secure separation between tenants.
Storage Resources focuses on the storage resource requirements for a data center.
Networking between Data Centers demonstrates the basic architecture of a multiple data center SP that
includes multiple tenants across multiple geographic locations.
Role Separation and Administration describes the three browser-based graphical user interface portals
provided by the Horizon DaaS Platform: the Service Center (for SPs), Administration console (for Tenant
Administrators), and the Desktop Portal (for end-users).
2.1.3 Terms
The table below lists some common terms that are specific to the Horizon DaaS Platform. Other terms are
defined as necessary within the text.
Table 2–1 Common Terms

Appliance – An appliance is a virtual machine (VM) combined with a functional unit of software in the
Horizon DaaS Platform. The term node is used interchangeably with appliance.

Data Center – Data center is a label used to logically group lower-level virtualization resources. A data
center corresponds to a physical location managed by an SP.

Hypervisor – A hypervisor allows multiple operating systems to run concurrently on a host computer
(hardware virtualization).

Hypervisor Manager – A hypervisor manager refers to the management layer that communicates directly
with the hypervisor. vCenter is an external hypervisor management layer that aggregates multiple
hypervisors.

Tenant – A customer that is consuming hosted virtual desktops from a Service Provider.

Virtual Desktop – A virtual desktop is a virtual machine that is running remotely (relative to the end user),
usually on a virtual desktop host. The virtual desktop has input devices (keyboard and mouse) and a display
device (monitor) to view the desktop display.

Virtual Machine Pool – A pool is a named and managed collection of virtual desktops. There are two types
of pools, static and dynamic. A typical data center will have a mix of static and dynamic pools.
● A static pool consists of virtual desktops that are assigned to individuals. The first time a persistent
user logs in, they are allocated an available VM from the pool. After that time, that VM is assigned to
that user and not available to other users. The number of VMs in the pool should equal the number of
users assigned to the pool if all users have 1:1 mappings.
● A dynamic pool consists of virtual desktops that are assigned on an as-needed basis. An end user
receives any appropriate virtual desktop from the pool. The state of the virtual desktop in a dynamic
pool can be recycled to a predefined state between sessions. The non-persistent pool defines the
number of users that can be connected concurrently. Therefore, the number of users assigned to the
pool can exceed the number of VMs in the pool.
2.2 Platform Overview
Your DaaS environment can be broken down into four key elements: compute, storage, network, and the
Horizon DaaS Platform. Through our patented assembly of these resources managed by the Horizon DaaS
Platform, your DaaS solution can scale to hundreds of thousands of virtual desktops across hundreds of
tenants in multiple data centers around the globe.
2.2.1 Key Components of the Horizon DaaS Platform
Horizon DaaS Management Hosts – an HA pair of physical machines that run a hypervisor and host
multiple Horizon DaaS management virtual appliances for the service provider and tenants.
Virtual Desktop Hosts - physical machines that run a hypervisor and host tenant desktop VMs. The Horizon
DaaS Platform allows for sharing of a Virtual Desktop Host between multiple tenants; however, Microsoft
licensing for desktop operating systems prohibits this configuration. If tenants are running a Linux OS or
using Windows Server (for which there is SPLA licensing available) then sharing of the Virtual Desktop
Host across tenants is permitted.
Storage System – A dedicated storage system supporting block or file based storage access for persistence of
the virtual machine’s virtual disk.
Networking – The network must support VLAN tagging; alternatively, distributed virtual networking (also
referred to as DVS) can be used in conjunction with VMware vCenter. A unique network should be defined
for the management network (containing the hosts and storage systems), the Service Provider network
(literally, an extension of the Service Provider’s network into the VMware datacenter), the Horizon DaaS
management network (referred to as the backbone link local network), and 1 or more isolated networks for
each tenant.
Horizon DaaS Appliances – Virtual servers that live on the Horizon DaaS Management Hosts and support
the Horizon DaaS solution.
Figure 2–1 Relationship between Horizon DaaS Management Hosts and Horizon DaaS Appliances
(Multi-Tenant)
Figure 2–2 Horizon DaaS Logical View (One Tenant)
2.2.2 Horizon DaaS Management Appliances
The following Horizon DaaS Management Appliances are virtual machines that are used to control and run
the Horizon DaaS Platform:
● Service Provider Appliance – Provides two types of access to the system: (a) via the Service Center web
based UI; (b) as a transit point for enabling ssh access to all the management appliances in the data
center. The Service Provider Appliance is the first appliance installed in a data center and, once
bootstrapped, provides the foundation to install the remainder of the Horizon DaaS Platform.
● Resource Manager Appliance – A Resource Manager appliance integrates with the physical and virtual
infrastructure in a given data center. A single Resource Manager appliance can be shared across
multiple tenants. The Resource Manager abstracts all specifics of the infrastructure from the tenant
appliances.
● Tenant Appliance – Provides the tenant with both end user and administrative access to their virtual
desktops. End users can access and manage their individual virtual desktops via the Desktop Portal.
Administrators have the ability to create and manage their virtual desktops via the Administration
Console.
● Desktop Manager Appliances – A desktop manager appliance is a tenant appliance that does not
include the components providing brokering and user access (end user or admin). Desktop manager
appliances serve 2 key purposes:
○ Desktop capacity scale out: The initial tenant appliance is designed to provide capacity of up to
5,000 desktops per data center. When an individual tenant needs to scale beyond 5,000 desktops
in a particular data center, additional desktop manager appliances can be added to provide this
capacity. Additional desktop manager appliance pairs can support up to another 5,000 desktops
each.
○ Compute resource optimization: A desktop manager is designed to treat the individually
assigned compute equally. If there is a requirement for a specialized desktop workload, that
workload can be optimized by creating a new desktop manager pair with only the compute for
that workload assigned to it. A few examples of specialized workloads are standard VDI, VDI with
GPU, and RDS. In these cases the compute for these workloads would be separate and distinct from
each other.
All management appliances are connected to the Backbone and either the SP network (Service Provider
Appliance and Resource Manager Appliance) or the Tenant’s own network, as shown in the figure above.
Horizon DaaS requires that all management appliances be installed as HA pairs. To ensure physical
hardware high availability, all Horizon DaaS Management appliance pairs are automatically distributed
across separate physical Horizon DaaS Management Hosts.
The Horizon DaaS Management Appliances allow monitoring via the standard CIM (common information
model) WBEM (web-based enterprise management) interface. You can use any monitoring tool capable of
understanding the CIM data model (for example, Tivoli). Information related to the types of CIM classes
and recommended thresholds are available in the Horizon DaaS Platform Monitoring technote in the
Partner Portal.
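As an illustration only, a generic CIM/WBEM client can be pointed at a management appliance to enumerate one of the DaaS CIM classes, such as Linux_OperatingSystem. The wbemcli tool, the HTTPS port 5989, the root/cimv2 namespace, and the credentials shown here are assumptions about your monitoring setup rather than documented values; consult the monitoring technote for the supported connection details.

# Enumerate instances of the Linux_OperatingSystem class (sblim wbemcli syntax; -noverify skips certificate validation).
wbemcli -noverify ei 'https://cimuser:password@appliance.example.local:5989/root/cimv2:Linux_OperatingSystem'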
2.2.2.1 Management Appliance High Availability
High availability of management appliances is inherently built-in to the platform itself. How high
availability is accomplished varies depending on the appliance type. In all cases management appliances
are deployed in pairs to protect against the failure of any single appliance. Appliances are placed on
separate physical management servers to protect against hardware failure. In some cases, paired appliances
could end up on the same host. This would normally happen when vCenter clustering is used with DRS.
DRS could vMotion an appliance to a host where the appliance pair already exists. It is possible to ensure
that DRS doesn’t move a VM, but it has been a conscious decision to let DRS appropriately balance the
environment and determine where a VM should run. If a host were to fail that has both paired appliances
the appliances would automatically and immediately be started on another host in the cluster resulting in
very minimal downtime. There are several technologies used to enable high availability that require
different handling when an appliance goes down. They are:
● Database Replication: All management appliances that contain a database utilize a master/slave
replication scheme. That means at any point in time there is only a single master database for a pair of
appliances. The master database is the only database that can be written to. The master database will
always be on the appliance that is marked as primary in the appliances screen in the service center. If
an appliance that is not the primary appliance (i.e., one running a slave database instance) goes down,
there is no effect on the environment. If an appliance that is the primary appliance and therefore
contains the master database instance of a database cluster goes down, there are 2 courses of action
depending on the type of appliance:
1) Manual Failover: In general it’s not desired to promote a slave instance in a database cluster
to be the new master if it’s not required. When failover occurs, the old master database has
been evicted from the cluster. When the failed appliance comes back online, the database
cluster must be reinitialized so that the new instance can be added. This requires a small
amount of downtime. In many cases it’s not required to automatically promote a slave
instance in the database cluster to be the new master. This is possible because critical
activities can still be accomplished without writing to the database. So, the approach here is
to not automatically fail a master database instance but rather wait for the master to come
back online. If the primary appliance has truly failed, a manual promotion of a slave
instance should be performed.
2) Automatic Failover: There are cases where it’s not possible for the system to perform critical
functions when the primary appliance has gone down. In these cases automatic action is
taken to promote a slave instance to be the new master of the database cluster which evicts
the old master instance from the cluster. There is care taken when doing this though. If a
primary appliance were to reboot or go down for a short period of time it would not be
desirable to evict it from the database cluster because manual cluster reinitialization would
be required which implies downtime. So a grace period is allowed before action is taken.
By default, this grace period is 5 minutes and can be configured to a different value.
● Floating IP Address: For any appliances that have services accessed by a user (end user or admin), a
floating IP address is used to ensure all traffic is routed to an active appliance. This includes SP and
tenant appliances. When creating SP and tenant appliances, individual static IP addresses are
requested as well as a 3rd shared address called a floating IP address. The appliance pair will share a
floating IP address. The floating IP address is designed to move between the pair of appliances and
always be bound to an appliance that is online. The technology used to handle failover of the floating
IP address exchanges heartbeats between the paired appliances; when an appliance that was the
owner of the floating IP address goes down, the address is moved to the active appliance, which then
takes ownership of any traffic destined for the floating IP address. This process is near instantaneous,
as the heartbeats occur at very frequent intervals and an ARP request is sent to the switch, ensuring
network routing tables are immediately updated. In order to ensure consistent user access, any DNS resolution
should be configured to point at the floating IP address and not an individual appliance address.
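A quick way to see which appliance of a pair currently owns the floating IP address is to check whether the address is bound on each appliance's interfaces. This is only a sketch; the address below is a placeholder for your own floating IP.

# Run on each appliance of the pair; the owner shows the floating IP on one of its interfaces.
ip addr show | grep 192.0.2.50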
In general all appliances are considered disposable except for the database. It is highly recommended to
back up the databases on all primary appliances on a nightly basis. Please see System Maintenance for
further information on how to configure backup. Scripts are provided in order to do proper database
backup and restore. It is not required to back up the appliances themselves as they can very easily be
recreated by using the appliance restore feature available in the Service Center. In fact appliance restore is
almost always faster than manually restoring a VM from backup. Please refer to the online help on
appliance restore in the Service Center. There are specific limitations to understand especially when
restoring a pair of appliances at the same time. Given the approaches to HA as described above, the HA
characteristics of the individual management appliances are as follows:
● Service Provider Appliances: SP appliances contain a replicated database and employ a manual
failover approach. In a multi-DC install the primary appliance in the primary DC has the master DB
instance and all other appliances run a slave instance. This even includes the primary SP appliance in
a secondary DC. So, for example, in a 2 DC install there would be 2 pairs of appliances deployed (1
pair per DC) but 1 master DB instance and 3 slave instances.
● Resource Manager Appliance: Resource Manager appliances are completely stateless and therefore
there are no special HA considerations for these appliances.
● Tenant Appliances: Tenant appliances contain 2 databases that make up separate database clusters.
Inside a tenant appliance are 2 core services, referred to as the Access Manager and the Desktop
Manager. Each of the core services manages a separate database cluster and employs a different
approach to replication. The Access Manager employs a manual failover approach. This approach is
taken because users can still establish connections to the desktops and applications even when the
master database instance is down. Operations such as provisioning are not available when the
primary tenant appliance is down. It’s possible to manually promote a secondary tenant appliance if
provisioning or other Access Manager operations are required. In a multi-DC tenant install there is
only a single Access Manager master database just as there was for the SP appliances. The Desktop
Manager service is used to manage the desktops themselves and employs an automatic failover
approach because it must write to the database in order to perform basic operations such as desktop
allocation. Automatic failover occurs as it has been described above.
● Desktop Manager Appliances: Desktop manager appliances run the Desktop Manager service as
described previously in the Tenant Appliances section.
2.2.3 Network Services
DNS, DHCP, NTP and Active Directory are necessary components as part of a Horizon DaaS installation.
There are two ways to implement these services. They can be implemented locally within the data center in
the Tenant Network, or the Tenant Network can be extended via a site to site connection (VPN or MPLS) to
the Tenant’s own data center.
The site to site connection back to the customer’s network works well for DNS, DHCP, and Active Directory
as long as:
● The latency back to the customer’s network is less than 200ms.
● The customer’s data center has the bandwidth required for this connectivity.
If the site to site connection is being used for only the AD, DNS, and DHCP traffic, then there will be
minimal bandwidth requirements (1Mbps or less). If the customer is using the site to site connection back to
their network for other applications, then it is necessary to determine the bandwidth requirements of those
applications. For example, if users will be accessing network file shares frequently using the CIFS protocol,
then the latency needs to be kept to a minimum as CIFS has very poor performance once there is any latency
or packet loss.
If the bandwidth requirements are too high, or the latency is not within a reasonable range, then it is
possible to have a local copy of the AD running in the data center as a replicated AD.
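A simple way to gauge whether a site to site link meets the latency guidance above is to measure the round-trip time from the tenant network in the data center to a host on the customer side, such as a domain controller. The hostname below is a placeholder.

# Average round-trip time should stay comfortably under 200 ms.
ping -c 20 dc1.customer.example.com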
2.2.4 Agent Compatibility
It is recommended that the Horizon Agent and DaaS Agent components be updated in all gold patterns and
virtual desktops with each new release of the Horizon DaaS Platform. The DaaS Agent is backwards
compatible to one minor version of the Horizon DaaS Platform. However, to take advantage of new
functionality in the Horizon DaaS Platform, the DaaS Agent would need to be upgraded. Please refer to the
product support notices section of the release notes for each Horizon DaaS Platform release for specifics on
the functionality that affects these components.
2.3 Compute Resources
Compute resources refers to the physical servers necessary to support the Horizon DaaS Platform and the
software required on those hosts. Horizon DaaS Management Appliances and desktop virtual machines
cannot reside on the same physical server. Separate servers must be used for the following:
● Horizon DaaS Management Host (service provider)
● Virtual Desktop Host (tenant)
Although both types of hosts support virtualized servers or desktops, the optimization of each of these hosts
is slightly different. As such the process for sizing each server is defined separately below.
2.3.1 Hardware Requirements for Horizon DaaS Management Hosts
The Management hosts are a pair of physical machines that contain the Horizon DaaS Management
Appliances (both service provider and tenant appliances). Several sample profiles are defined below; once a
server is full you can simply add additional Management hosts to the platform.
2.3.1.1 Horizon DaaS Appliance Sizing Requirements
The following are the prescribed sizing requirements for Horizon DaaS appliances.
Table 2–2 Horizon DaaS Appliance Sizing Requirements
Appliance | Template (Memory/Disk Space) | Sizing
Service Provider Appliance | Standard (3GB/20GB) | 1 pair / dc
Resource Manager Appliance | Standard (3GB/20GB) | 1 pair / dc / 20,000 VMs
Tenant Appliance | Standard (3GB/20GB) | 1 pair / dc / tenant / 5,000 users
The smallest environment begins with two management hosts, each with one service provider appliance,
one resource manager and one tenant appliance. From there, additional tenants are added to the datacenter
by adding an additional tenant appliance to each management host. The size of the management host is
generally referred to by the number of tenants it can support.
2.3.1.2 Sizing a Management Host for a Specific Number of Tenants
There are three variables to consider when determining the hardware configuration for a Horizon DaaS
Management Host:
● CPU – Each core supports 10 tenants. For example, four cores are required to support 40 tenants.
● Memory – Each tenant requires 3 GB of RAM on each Horizon DaaS management host. For example,
120 GB of RAM on each host is required to support 40 tenants.
● Storage – Each Tenant requires 40 GB of storage, 20 GB allocated to each Horizon DaaS management
host. For example, a pair of management hosts that can scale to 50 tenants requires 1000 GB (1 TB) of
storage each (2 TB total).1
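Putting the three variables together, a worked example for a 40-tenant configuration (using the per-tenant figures above) for each management host looks like this:

CPU: 40 tenants / 10 tenants per core = 4 cores
Memory: 40 tenants x 3 GB = 120 GB of RAM
Storage: 40 tenants x 20 GB = 800 GB (1.6 TB across the pair of hosts)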
The table below defines the server hardware used for two sample Horizon DaaS Management hosts.
Table 2–3 Sample Hardware Requirements for each Horizon DaaS management host
Component | Trial Environment | Production Recommendation
CPU | 1 CPU | 1 CPU
CPU Architecture | 4 Cores | 8 Cores
Minimum RAM | 48 GB | 128 GB
Data disk configuration¹ | 500 GB | 1 TB of Storage
Supported Tenants | 15 | 35

¹ If you are using external storage that has deduplication for the Horizon DaaS appliances, the storage capacity requirements would be less.
2.3.2 Hardware Requirements for Virtual Desktop Hosts
Virtual desktop hosts are sized based on the number of CPU cores and amount of memory installed in the
server. All tenant virtual desktops reside on shared storage. As such, Desktop Hosts only require a pair of
small disks for the hypervisor O/S installation.
2.3.2.1 Sizing a Desktop Host for a Specific Number of Desktops
The guidelines for sizing the Memory and CPU for a desktop host are:
● CPU – VMware recommends allocating about 300 MHz of CPU for a standard 1 vCPU desktop. Based on the
total speed of the CPU you can determine the virtual to physical ratio. For example, a 3.0 GHz CPU
core will support 10 standard desktops, or a 10:1 virtual to physical ratio. Virtual to physical CPU
ratios will vary based on use case and type of processor. Please refer to CPU Speed Considerations
below for further information.
● Memory – VMware recommends setting a 30% over commit ratio for memory; that is 32GB of physical
memory yields approximately 40GB of virtual memory allocated for desktops. Memory over commit
ratios can vary based on the workload on a host. The more similar the workload the higher the
memory over commit ratio can be. For example, if a host has all Windows 7 SP1 VMs then page
sharing will be optimal and therefore the memory over commit can be high.
2.3.2.2 Host Sizing Calculations
Two formulas are helpful when sizing your hosts. There are four variables in each formula; you can solve
for any of the four variables, so long as you know three. The variables are the number of VMs on the host,
the amount of memory or CPU cores assigned to each virtual desktop, the amount of physical memory or
CPU cores installed in the host, and the over commit ratios for either the memory or CPU cores.
Memory:
[Number of VMs] x [Virtual memory per VM] <= [Physical memory] x [Memory over commit ratio]
CPU:
[Number of VMs] x [Virtual CPUs per VM] <= [Number of physical cores] x [CPU over commit ratio]
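For example, using the sample desktop configuration from the table below (2 GB of RAM and one vCPU per desktop, a 10x CPU over commit ratio, and a 1.5x memory over commit ratio), a host with 8 physical cores and 128 GB of RAM supports 80 desktops:

Memory: 80 VMs x 2 GB = 160 GB <= 128 GB x 1.5 = 192 GB
CPU: 80 VMs x 1 vCPU = 80 <= 8 cores x 10 = 80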
The table below defines the server hardware used for the sample Virtual Desktop hosts. The table assumes a
virtual desktop that consists of 2 GB of RAM and one virtual CPU. This example uses a 10 x CPU over
commit ratio and a 1.5 x memory over commit ratio.
Table 2–4 Minimum Hardware Requirements for Virtual Desktop Host
Number of Desktops | Cores Required | RAM Required (GB)
20 | 2 | 32
40 | 4 | 64
60 | 6 | 96
80 | 8 | 128
120 | 12 | 192
2.3.2.3 CPU Speed Considerations
As the diversity of server processors continues to expand, a simple core ratio is not appropriate in most
situations – particularly in the case of very fast or very slow processors. It has become standard practice to
use MHz based sizing. To size desktops to a server using MHz based sizing simply multiply the clock
speed of the processor by the number of cores then divide by a per user allocation. VMware recommends
allocating 300 MHz per virtual CPU.
As with the host sizing equations above, the equation has four variables; you can solve for any of the four
variables, so long as you know three.
[Number of VMs] x [MHz allocated per desktop] <= [Number of physical cores] x [CPU clock speed]
Example – how many desktops can I host with a single 1.9 GHz six core CPU?
Number of VMs = 6 x 1900 / 300 = 38
2.4 Network Resources
Note: IPv6 connectivity is required for all appliance installations over the Link Local Backbone network.
There are two key components of the network that assure tenant separation when assembling the Horizon
DaaS Platform: VLAN tagging and VRF support. In the Horizon DaaS environment, a tenant network is not
a subnet within the SP network; each tenant network is a logical extension of the tenant network existing in
the SP data center.
The figure below shows the distinct networks within the data center:
● The Link Local Backbone network is fully controlled by the SP. This network is a link local, non-routable subnet (169.254.0.0/16) that is logically separated from all tenant networks. The backbone
network connects all Horizon DaaS Management Appliances. For example, the Tenant A Appliance
connects to the SP Resource Manager via the link local backbone network.
● The Management Network is used to segregate all the physical hosts and storage systems.
● The SP network is an extension of the Service Provider’s network into the data center. The Service
Center is accessed through the Service Provider appliances on the Service Provider network. The SP
VLAN must have access to the management VLAN for access to Virtual Desktop hosts and the storage
systems.
● The Tenant networks are fully controlled by the tenants, again as a discrete VLAN that is separate
from the SP and other tenant networks. The tenant network connects the tenant appliances to a
tenant's virtual desktops. The tenant VLAN is not accessible to the SP.
The figure below emphasizes the clean separation of SP and tenant networks. The area labeled Horizon
DaaS service grid provides the core of the Horizon DaaS Platform. The portion of the diagram directly
connected to a Tenant network represents the components of the system that are duplicated for multiple
tenants.
Note the following network architecture:
● The Tenant networks are not a subnet of the SP network. They are a logical extension of the Tenant A or
Tenant B network existing in the SP data center.
● High availability indicates redundant pairs to ensure failover integrity. Most of the computing
components are pairs of virtual machines (shown in light blue).
● The number of physical servers in a tenant network largely depends on the size and number of virtual
desktops the tenant is hosting.
Figure 2–3 Horizon DaaS Logical View (Additional Tenant)
2.4.1 Virtual LANs in the Horizon DaaS Platform
A VLAN is an emulation of a standard LAN that allows data transfer to take place without the traditional
physical restraints placed on a network. An understanding of the use of VLANs is important to planning
and implementing the Horizon DaaS Platform due to their role in ensuring separation of tenants and SP,
optimizing the performance of data and management information flows, and in increasing the scalability of
the Horizon DaaS Platform.
2.4.1.1 Distributed Virtual Networking
An alternate network configuration option is available by using distributed virtual networking as opposed
to using the local vSwitch with VLANs. Distributed virtual networking allows the network configuration to
be done in a software appliance based controller rather than in the physical hardware device (i.e. the router).
There are two distributed virtual networking options available with the Horizon DaaS Platform. They are
VMware vSphere Distributed Switch (VDS) and Cisco Nexus 1000V. Please refer to the product
documentation for further information.
2.4.2 Layer 2 segregation using VLANs
VLANs provide segregation of traffic at layer 2 to prevent two tenants from seeing each other’s traffic while
still sharing the same physical network path. However, in order to reach the end-user, this traffic must pass
through a router. Without additional segregation at layer 3, customers would be able to route to each other,
or could have a non-resolvable conflict of IP addresses.
Figure 2–4 Layer 2 Segregation Using VLANs
The figure above illustrates layer 2 segregation with two tenants. In a typical environment, a multilayer
switch is required. Note that while four physical switches and two routers are shown in the figure above,
only one of each might be needed; virtual switches and routers can be used. The VLANs are trunked. In
addition, each of the Virtual Desktop Hosts and the Horizon DaaS management host support multiple
virtual machines.
2.4.3 Layer 3 segregation using VRFs
Virtual Routing and Forwarding (VRF) allows multiple instances of a routing table to co-exist within the
same router at the same time. Because the routing instances are independent, the same or overlapping IP
addresses can be used without conflicting with each other.
VLANs in a Horizon DaaS installation are aggregated into VRFs. Once aggregated into a VRF, segregation
is handled at layer 3 and the VLAN IDs are effectively discarded. This means that VLANs require
uniqueness only under one VRF, and the same VLAN IDs can be reused under multiple VRFs within a
single data center.
2.4.4 Networking Resources
Switches, routers, load balancers and gateways play a role in any Horizon DaaS installation. This section
lists the characteristics required of each of these components. Horizon DaaS is hardware agnostic, only
requiring that equipment supports the characteristics listed in this section.
For example, any layer 3 network devices – including firewalls or routers – installed between the customer
site and the customer’s virtual desktops at the SP’s data center must meet one of the following requirements,
in order of preference:
● Support multiple independent routing tables (VRFs)
● Be dedicated to the customer and support out-of-band management
● Be dedicated to the customer and managed in-band from the customer’s network
Not meeting at least one of these requirements can result in IP address conflicts between the SP and the
tenant.
2.4.4.1 Switches
Switches must support trunking of VLANs. Here are some guidelines regarding the connectivity
requirements for the switches:
● Connectivity between the desktop hosts and the storage should be a 10 Gb Ethernet network. For
additional bandwidth, you can aggregate two links to achieve 20 Gb.
● Connectivity for tenant/protocol traffic can be 1 Gb connections.
2.4.4.2 Routers
Tenant networks are VLAN tagged; for uniformity of management, SP networks are also VLAN tagged.
Routers must support VRFs. If there will be customers with VPN access back into their corporate network
for access to network services like DNS/AD/DHCP, or for access to other applications, then the router must
have the ability to tie that VPN tunnel to the VRF for a tenant.
2.4.4.3 Load Balancing
Load balancing is only required in front of the tenant appliances for a tenant with 25,000 or more desktops
in a single data center.
2.4.4.4 Gateways
Gateways must support VRFs.
2.4.4.5 Cross Data Center HA
If you configure a tenant in two data centers, there should be a plan for failover to a backup data center in
the event of a failure of one data center. The simplest solution is to redirect users to the backup data center
by changing the DNS record for the portals. There are more advanced solutions available, such as
configuring the F5 BIG-IP Global Traffic Manager to perform the failover automatically. The requirements
for HA have to be determined before choosing the best solution.
2.4.5 Managing External IPs and global DNS entries
In order for a customer to access their cloud hosted desktops from anywhere, the service provider must
allocate external IP addresses to each tenant. The edge router must be able to direct traffic destined for the
tenant portals to the tenant appliance shared IP. Depending on the desired DNS name, a global DNS entry
should be created for the portal IP on the Service Provider’s DNS (such as tenant.SvcProsDesktops.com) or
in the tenant’s global DNS (such as desktops.tenant.com). The domain where the name is hosted matters
because this also defines who is responsible for supplying the SSL certificates for the tenant.
Many routers support port-based NAT. If your router is capable of redirecting traffic based on the
incoming port, you can condense the number of external IPs needed to one per tenant. In such a
configuration, configure traffic destined for port 80 or 443 to redirect to the Tenant Appliance Shared IP.
2.4.6 Wiring the Data Center
Your management hosts will need to have the default tenant VLANs available since the tenant appliances
reside there. Your desktop hosts need to have only the specific tenant VLAN that is assigned to that host.
These need to be configured manually on the ESXi hosts before you add the host to the Horizon DaaS
Platform.
Figure 2–5 Sample ESX Network configuration for both tenant and management host.
2.5 Storage Resources
Storage and storage planning are critical parts of any virtualization environment. There are two types of
storage you must plan for: local storage on the host and networked storage. All hosts (both the Horizon
DaaS management hosts and the Virtual Desktop Hosts) require local storage or SAN storage for the
installation of the hypervisor; this is typically accomplished with a relatively small pair of RAID 1 disks
installed in the host or a boot LUN from the SAN. Horizon DaaS management hosts require additional local
storage or shared storage for all of the Horizon DaaS Management Appliances. For a Horizon DaaS
management host, as described above, one could install four additional 600 GB high-speed SAS drives in a
RAID 5 configuration to support the management appliances for 50 tenants. Alternatively, a 1.4 TB
datastore could be used on each management host.
Horizon DaaS uses shared storage to store the virtual desktops. The shared storage options available
depend on how the hypervisors are being discovered by the Horizon DaaS Platform. More information
regarding the storage options follows.
2.5.1 Storage Options
2.5.1.1 SAN Storage
Fibre Channel and iSCSI storage can be used; local storage is not an option for storing desktop VMs.
All storage configuration changes are made outside of the Horizon DaaS Platform. Typically this would be
done using the vSphere client connected to vCenter. One or more LUNs must be mapped to all ESXi hosts
assigned to a Desktop Manager. The datastore(s) associated with the LUN(s) must be created on all hosts for
that Desktop Manager and have exactly the same name (case-sensitive). The Horizon DaaS Platform will
clone desktop VMs to the same datastore that the gold pattern resides on using the native vCenter cloning
API. Please refer to VMware documentation for further information regarding the use of shared storage
across ESXi hosts.
2.5.1.2 NFS Storage
NFS is an option when using vCenter. The NFS share where the desktops are stored is mounted to each of
the tenant’s Virtual Desktop Hosts in a given data center as an NFS datastore. Because the virtual desktops
are on shared storage, it is possible to move a virtual desktop from one tenant host to another. Make sure
your NFS datastore has sufficient I/O capacity. For planning purposes, assume an average of 12–20 IOPS
per virtual desktop for standard knowledge worker workloads. The figure below presents an overview of
the Horizon DaaS storage architecture.
Figure 2–6 Horizon DaaS Storage Architecture for NFS based configurations
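As a quick illustration of the 12–20 IOPS-per-desktop planning guideline mentioned above, the short Python sketch below estimates the aggregate I/O load an NFS datastore would need to sustain. The desktop count used here is an arbitrary example value, not a platform requirement.
# Rough NFS datastore I/O planning based on the 12-20 IOPS per desktop guideline.
def required_iops(desktops: int, iops_low: int = 12, iops_high: int = 20) -> tuple[int, int]:
    """Return the low/high aggregate IOPS estimate for a pool of desktops."""
    return desktops * iops_low, desktops * iops_high
low, high = required_iops(1000)                   # e.g. a 1,000-desktop tenant
print(f"Plan for roughly {low:,}-{high:,} IOPS")  # -> 12,000-20,000 IOPS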
Notable characteristics of this architecture are:
● NFS shares presented and mounted on corresponding host servers
● Isolation is accomplished by presentation of NFS shares to specific hosts
● Failover and VM restart done at the Tenant Appliance layer
● De-duplication done at NFS storage layer with up to 95% space savings
NAS is used for storage of virtual desktops and virtual desktop patterns. This shared location allows
starting a desktop on any of the tenant hosts.
NAS hardware is capable of fast cloning, thin provisioning, and de-duplication.
VMware recommends clustered storage for redundancy. In addition, each NFS storage head should connect
to a resilient switching layer with bonded Ethernet between each NFS storage head and switch. This
increases the available bandwidth and resiliency.
Some of the reasons for choosing NAS over SAN are:
● Each NAS mount point can be shared to the entire data center.
● There is no built-in limit to the number of hosts that can mount a single NAS mount point.
● Capacity management can be done at data center granularity.
● Failure of any individual head can be handled automatically by software. No manual intervention is
required.
● A single NAS head can host multiple tenants, using NFS export controls to provide firm tenant
separation. Certain NAS heads can also be configured into virtual NAS heads to further improve
tenant separation.
The four primary goals of any storage solution for VDI space are:
● Rapid manufacturing of virtual desktops
● Data de-duplication to reduce overall storage footprint
● Increased IOPS capacity
● Reduced cost
2.5.2 Tenant Data Storage
VMware recommends that users do not store data inside the virtual desktop. Instead, user data
directories, such as the My Documents folder, should be redirected to a separate storage location. There are
several reasons for this:
● The storage used for desktop images is highly optimized for performance: high speed FC or SAS disk
is required; other optimizations are potentially in place to improve I/O. The storage requirements and
performance profile for user data is significantly different from the desktop images and thus a
different class of storage can be used.
● Protecting data that is stored inside of a virtual desktop requires that each of the virtual desktops is
backed up individually. If user data is redirected to a CIFS share, the external storage can be better
protected and maintained independently of the desktop image.
There are several options for tenant data storage. A tenant's user data can be:
● Collocated with the Desktops in the Service Provider’s Data Center — the service provider can offer an
add-on service that provides storage space for user data. Typically this would be CIFS storage as a
service where the service provider offers secure tenant specific user data containers based on CIFS
shares with integration into the tenant’s own Active Directory.
● In the Tenant’s Own Data Center — Access to file shares and other data from the Virtual Desktops
would be over the site-to-site VPN or MPLS connection between the tenant and service provider data
centers. This connection is often referred to as the backhaul connection. Performance of the backhaul
connection depends on the latency between the tenant and service provider data centers. In certain
cases, the backhaul connection can be optimized with a WAN accelerator.
● In the Cloud — the tenant can use a cloud storage service for storing user data.
2.6 Networking between Data Centers
As noted above, a tenant network can be separated into functional pools rather than only into geographic
locations. For example, Tenant C in the example can be divided into Sales, Manufacturing and Finance
through the creation of pools since a pool is a logical unit that can span multiple geographic locations.
Virtual desktops within a pool have no awareness of where their VM is located; end users should only be
concerned with their user experience. Alternatively, pools could be aligned to a specific SLA or use case (i.e.
stateless pool versus stateful pool).
The figure below shows a segment of the example network in which Tenant C is divided into organizational
functions by pools rather than geographic locations. Finance and Development each have staff located in
both New York and London.
Figure 2–7 Multiple Data Centers
2.6.1 Traffic between Data Centers
SPs that maintain multiple data centers require network connectivity between the data centers for data sync
operations and application traffic. The SP nodes in each data center need to be able to communicate with
each other using their IP addresses (NAT will not work). This can be accomplished via a VPN tunnel.
The traffic between data centers tends to be bursty in nature, but experience suggests that a T1 connection
between data centers is sufficient to handle all cases without a disruption of service.
Note that the Horizon DaaS management resource overhead scales independently of the number of tenant
virtual desktops. Increasing the number of virtual desktops reduces the percentage of the cost of an
installation that is required by Horizon DaaS management alone. The Horizon DaaS management overhead
is an increasingly smaller portion of the cost of operation as data centers are scaled out with the addition of
virtual desktops.
It is important to realize that the example used in this section is a set of calculation methods based on a
hypothetical case in order to demonstrate the methods. Optimization of tenant virtual desktops could
reduce the number of required desktops resulting in decreases in the number of physical servers.
2.7 Sample Deployment
In order to practice the concepts in this section, we have created a hypothetical Horizon DaaS network to
accommodate 30,000 virtual desktops. Examples of calculations concerning resources are based on the
architecture of this network (see figure below).
The sample network consists of two SP data centers located in New York and London. There are three
tenants:
● Tenant A is located in New York and consists of 5,000 virtual desktops.
● Tenant B is also located in New York and consists of 5,000 virtual desktops.
● Tenant C has two geographical locations and consists of a total of 20,000 virtual desktops:
● 5,000 in New York
● 15,000 in London
Figure 2–8 Sample Network used in this Section
2.7.1 Horizon DaaS Management Appliances Required for Tenant A
The following appliances are necessary for the initial Tenant A deployment. We will require a minimum of
two physical servers to provide physical server-level HA. The table below describes how the Horizon DaaS
appliances are allocated on the two Horizon DaaS management hosts.
Table 2–5 : Sample Appliance Estimate (Single Tenant) – Number of Horizon DaaS Appliances Required
Horizon DaaS Management Appliance | Mgmt Host A | Mgmt Host B
SP – Service Provider Appliance | 1 | 1
SP – Resource Manager Appliance | 1 | 1
Tenant A – Tenant Appliance | 1 | 1
Remember that virtual desktops cannot be deployed on the Horizon DaaS management hosts.
2.7.2 Horizon DaaS Management Appliances Required for Tenant B
To add Tenant B to the New York data center we simply need to add the additional tenant appliances
required to support Tenant B (the Tenant B row in the table below).
Table 2–6 : Sample Appliance Estimate (Two Tenants) – Number of Horizon DaaS Appliances Required
Horizon DaaS Management Appliance | Mgmt Host A | Mgmt Host B
SP – Service Provider Appliance | 1 | 1
SP – Resource Manager Appliance | 1 | 1
Tenant A – Tenant Appliance | 1 | 1
Tenant B – Tenant Appliance | 1 | 1
After adding Tenant B we now have four Horizon DaaS Management Appliances running on each Horizon
DaaS management host.
2.7.3 Desktop Host Requirements for Tenant A and Tenant B
Considering the size of these tenants, we have the advantage of being able to use very large servers. For this
example we will use the largest servers specified in this section, although in reality we may use this
opportunity to explore even higher density compute options.
Desktops per server | Cores | RAM (GB)
120 | 12 | 192
Using this server, each tenant will require 42 servers to reach 5000 desktops.
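The server count follows directly from dividing the desktop quota by the per-server density and rounding up, as in this small Python sketch (the figures are the ones from the table above):
import math
# Desktop hosts needed for a tenant, given the density from the table above.
def hosts_needed(desktops: int, desktops_per_server: int = 120) -> int:
    return math.ceil(desktops / desktops_per_server)
print(hosts_needed(5000))   # -> 42 servers per 5,000-desktop tenant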
2.7.4 Additional Data Centers and Tenant C
When adding an additional data center, it is necessary to localize some of the SP appliances which results in
some additional overhead cost. However, we also gain efficiencies with the ability to provide data center
level HA. For instance, the example includes Tenant C headquartered in London - with 5,000 desktops in
New York and 15,000 desktops hosted at a new data center in London. (The additions relative to the
previous table are the appliances on the two new London management hosts and the Tenant C appliances
in New York.)
Table 2–7 Additional Data Center Appliance Estimates – Number of Horizon DaaS Appliances Required
Horizon DaaS Management Appliance | Mgt Host A (NY) | Mgt Host B (NY) | Mgt Host C (London) | Mgt Host D (London)
SP – Service Provider Appliance | 1 | 1 | 1 | 1
SP – Resource Manager Appliance | 1 | 1 | 1 | 1
Tenant A – Tenant Appliance | 1 | 1 | – | –
Tenant B – Tenant Appliance | 1 | 1 | – | –
Tenant C – Tenant Appliance | 1 | 1 | 3 | 3
After adding Tenant C there are a total of five management appliances on each Horizon DaaS management
host in NY and five management appliances on each Horizon DaaS management host in London.
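The per-host totals quoted above can be checked by summing the columns of Table 2–7; a trivial Python sketch of that tally (values copied from the table, two hosts shown for brevity):
# Tally management appliances per host from Table 2-7 (illustrative check).
table = {
    "Mgt Host A (NY)":     {"SP": 1, "RM": 1, "Tenant A": 1, "Tenant B": 1, "Tenant C": 1},
    "Mgt Host C (London)": {"SP": 1, "RM": 1, "Tenant C": 3},
}
for host, appliances in table.items():
    print(host, "->", sum(appliances.values()), "appliances")   # both print 5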
2.7.5 Storage: Three Tenants in Two Data Centers
Published figures for de-duplication suggest a conservative 75% reduction in storage required. Note that
since most virtual desktops can consist largely of duplicated operating system components, the deduplication savings are typically larger.
For the two data centers described above, the storage requirements under these assumptions are
summarized in the table below.
Table 2–8 Sample Storage Estimate (Two Data Centers)
Data Center | NY | London
Total Desktop VM disk | 450.0 TB | 450.0 TB
Total Desktop VM disk assuming 75% de-duplication factor | 112.5 TB | 112.5 TB
Note the significant savings realized by current de-duplication technology using the fairly conservative
estimate of 75%. To meet these targets, de-duplication functionality must exist on the primary storage
system.
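The storage figures in Table 2–8 reduce to a simple calculation: raw desktop disk multiplied by (1 − de-duplication factor). The Python sketch below reproduces the table's numbers; the ~30 GB of disk per desktop is only an implied value, derived here from the 450 TB / 15,000-desktop figures for illustration.
# Reproduce the storage estimate in Table 2-8 (values are illustrative).
DESKTOPS_PER_DC = 15000          # NY and London each host 15,000 desktops
DISK_PER_DESKTOP_TB = 0.030      # ~30 GB per desktop, implied by 450 TB / 15,000
DEDUP_FACTOR = 0.75              # conservative 75% de-duplication
raw_tb = DESKTOPS_PER_DC * DISK_PER_DESKTOP_TB
deduped_tb = raw_tb * (1 - DEDUP_FACTOR)
print(f"Raw: {raw_tb:.1f} TB, after de-dup: {deduped_tb:.1f} TB")  # 450.0 TB -> 112.5 TB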
2.8 Planning a Production Environment
Through VMware’s own experiences operating the Horizon DaaS Cloud service, we developed a successful
sales model that we offer as part of the Horizon DaaS Blueprint. The process makes use of several dedicated
environments to allow prospective customers to experience how simple and productive a DaaS option can
be. When initially deploying your production environment we will deploy four tenants. Each of these has a
specific purpose in the sales cycle.
The initial tenant we typically configure is the Demo environment. This will become your showcase
environment to enable your sales team to show off what DaaS has to offer. The goal is to configure this
tenant with static and dynamic pools consisting of Windows 7, Windows server desktops, Windows XP and
Ubuntu desktops. This tenant also affords the install team the best opportunity to test the environment
infrastructure.
Typically the next tenant we deploy is an internal tenant for use by the service provider. The purpose of this
environment is to enable your sales and technical teams to begin using hosted desktops themselves. There is
no better way to evangelize the service than having your teams use it themselves, so that they better
understand customer concerns and the many additional opportunities that come from hosted desktops.
The final two tenants are directly tied to sales enablement – i.e. let customers see firsthand how simple DaaS
can be. The Free Trial environment allows potential customers access to a non-persistent Windows server
desktop after a quick enrollment process. This is a key sales lead driver and introduces customers to your
service offering. Once your sales team has qualified a lead, you can provide a full static Windows 7 POC
desktop for the customer to trial for a short period of time. The goal of the POC environment is to quickly
engage prospective customers so they can see how quick and easy DaaS can be and test their own
applications in the cloud for themselves.
Figure 2–9 Sample Data Center Wiring Diagram
2.9 Data Protection
2.9.1 Horizon DaaS Appliance Backup
With the exception of the database embedded in the Horizon DaaS appliance, the appliance is disposable: it
is not required that an entire appliance be backed up. VMware provides backup and restore scripts for the
embedded databases with the platform. The backup scripts should be executed daily. The most recent
backups should be kept on site for fast retrieval access. Additionally, a rotation should be identified to send
backups offsite. If the Horizon DaaS appliances reside on external storage that is snapshot- and replication-capable, that could be considered an alternate backup strategy.
2.9.2 Desktop Backup
If user data is properly redirected to external file shares, backup of individual desktops should not be
required because the desktop can easily be recreated. However, the gold images used to create the desktops
should be backed up on a regular basis. Because a gold image is a powered off virtual machine, the backup
should be performed by the service provider. The service provider could use snapshots and replication
from the storage system as a strategy to protect the gold image(s). This has the benefit of being able to
restore quickly and the ability to provide disaster recovery. A more traditional backup scheme could also be
implemented if desired.
2.9.3 User Data Backup / Replication
The user data redirected from the desktop should be backed up on a regular basis. Using storage system
snapshots and replication can be an effective strategy with the ability to restore individual files to a specific
point in time. A traditional backup could also be used to provide a similar backup capability.
If user data is stored either at the service provider or tenant data center, file shares should be replicated if
high availability and/or disaster recovery is desired. If user data is stored in the cloud, coordination will
need to take place with the cloud storage provider in order to understand how to implement HA.
2.10 Security
2.10.1 Platform Security
2.10.1.1 Appliance Services
The service provider, resource manager, tenant and desktop manager appliances act as both clients and
servers of several internal platform services. SSL is employed to ensure communication for these services is
secure. This involves encrypting the communication channel, and verifying that a certificate identifying the
appliance running the service exists and is valid for that appliance.
Every Horizon DaaS deployment generates its own unique Certificate Authority (CA) key pair and
certificate during the bootstrap phase of the primary service provider appliance in the first data center. This
CA is then used to sign the certificates for the appliances, including the primary service provider itself. The
certificate for an appliance is created during its installation step. A key pair and Certificate Signing Request
(CSR) are generated initially. The CSR is then copied to the primary service provider appliance for signing,
and the signed certificate copied back to the appliance being installed. At no time is the private key for any
appliance ever transmitted. The CA certificate is also copied to the appliance and is installed in a trusted
certificate store. This process is repeated whenever an appliance is restored, and will generate a new
certificate each time. Unlike other appliances, the primary service provider containing the CA cannot be
restored. It is important to maintain the appropriate backups for it, and to follow the procedures for
promoting another service provider appliance to be the primary in the event of an unrecoverable appliance
failure.
When an appliance makes a request to a service (i.e., is acting as a client), it establishes an SSL connection to
port 8443 of the appliance running the service (i.e., the server). The server responds with its certificate and
the client verifies that it trusts the server by checking that its trusted certificate store contains a certificate for
the CA that signed the server certificate. It also validates the properties of the certificate, including the
expiration date, and whether the IP address in the certificate matches the IP address it used to make the
request.
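The client-side checks described above (a trusted CA, certificate validity dates, and an address match) can be illustrated with Python's standard ssl module. This is only a minimal sketch of the same kind of verification, assuming the platform CA certificate has been exported to a local file; the file path and appliance IP are placeholders, and this is not the platform's actual implementation.
import socket, ssl
# Minimal illustration of verifying an appliance service endpoint against a private CA.
APPLIANCE_IP = "169.254.10.20"   # placeholder appliance address
SERVICE_PORT = 8443
context = ssl.create_default_context(cafile="/path/to/platform-ca.pem")
context.check_hostname = True    # require the certificate to match the address used
with socket.create_connection((APPLIANCE_IP, SERVICE_PORT), timeout=10) as sock:
    # wrap_socket verifies the CA signature and validity dates; passing the IP as
    # server_hostname also enforces the address match (IP SAN in the certificate).
    with context.wrap_socket(sock, server_hostname=APPLIANCE_IP) as tls:
        print("Server certificate accepted:", tls.getpeercert()["subject"])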
2.10.1.2 DaaS Agent Services
The DaaS agent acts as a client of services running on the tenant and desktop manager appliances. SSL is
employed to ensure communication for these services is secure. This involves encrypting the
communication channel, and verifying that a certificate identifying the appliance running the service exists
and is valid for that appliance.
The DaaS agent must be provided with the CA certificate generated by the Horizon DaaS platform. The CA
certificate is used to verify that the tenant and desktop manager appliances are trusted. Otherwise, it will
refuse to connect to the appliances. When configuring a new gold pattern, the certificate file should be
copied to the certificate folder in the DaaS agent installation folder before the gold pattern is sealed.
When the DaaS agent makes a request to a service (i.e., is acting as a client), it establishes an SSL connection
to port 8443 of the appliance running the service (i.e., the server). The server responds with its certificate and
the DaaS agent verifies that it trusts the server by checking that its certificate folder contains a certificate for
the CA that signed the server certificate. It also validates the properties of the certificate, including the
expiration date, and whether the IP address in the certificate matches the IP address it used to make the
request.
2.10.2 Security Best Practices
2.10.2.1 Unique DNS Record for Each Portal
It is recommended that you create a separate DNS record for each of the three system portals:
● Desktop Portal (user portal)
● Administration console (admin portal)
● REST API portal
They may all point to the IP of the tenant appliance. When distributing links to users, make sure to provide
the appropriate hostname.
2.11 Role Separation and Administration
The Horizon DaaS Platform presents three browser-based graphical user interface portals:
● Service Center
● Administration console
● Desktop Portal
2.11.1 Service Center
The Service Center is used by the Service Provider administrators to manage the data center resources, such
as hosts, storage and the Horizon DaaS Management Appliances. The service center also enables the
management of tenant contracts defining tenant models and quotas as well as the configuration of tenant
appliances and networks.
The Service Center supports creating and assigning additional roles and permissions among the Service
provider administrators to securely distribute management tasks among larger organizations.
2.11.2 Administration console
The Administration Console is used by the enterprise administrators (Tenant Administrators) to manage
their virtual infrastructure. Enterprise administrators can provision both static and dynamic pools (or
“assignments”) of desktops based on templates (or “images”) that they have customized or new templates
that they might upload. Enterprise administrators can also add additional domains and map groups or
individuals to either specific virtual desktops or pools.
The Administration Console supports creating and assigning additional roles and permissions among the
enterprise administrators to securely distribute management tasks among larger organizations.
2.11.3 Desktop Portal
The Desktop Portal enables individual users to connect to their virtual desktops. Every tenant has their own
customizable portal. Users log in to the portal and have the option of being directly connected to a desktop
they have defined as their default, or presented with a list of available desktops and enabled to choose
which virtual desktop to connect to. Users can also set default protocols per VM and additional protocol
customizations. Users can connect to the Desktop Portal from a variety of clients including thin clients (both
WTOS-based WYSE clients and any thin clients running Windows Embedded), thick clients (such as PCs
running Windows, Mac OS or Linux) as well as iOS and Android based mobile devices.
The Desktop Portal facilitates connections using a wide variety of remoting protocols.
● RDP (Microsoft) – Microsoft’s Remote Desktop Protocol is a very strong protocol with broad support.
The protocol supports a good multimedia experience when latency is less than 20 ms, and a good user
experience with office productivity apps when latency is under 50 ms.
Note: The RDP protocol is not supported in Horizon DaaS Platform 7.0.0 for new pools. However, the
existing pools which were created prior to upgrade will continue to work with the RDP protocol.
● PCoIP (VMware) – The PCoIP experience provides a very good multi-media user experience in
situations with both high latency and constrained bandwidth.
● RGS (HP) – Remote Graphics Software developed by HP primarily for WAN deployments provides
good multimedia support with latency as high as 100ms when provided ample bandwidth. RGS is
particularly popular for graphics-intensive use cases such as CAD.
● HTML5 (Ericom) – This client allows you to access your desktop via any HTML5-compatible web
browser. It does not require any additional plug-ins, add-ons or installation of any kind on the end
user device which makes this suitable for devices such as Chrome OS Netbooks.
● NX (Linux) – Enables access to Linux desktops.
2.12 NetApp Information
This section lists Horizon DaaS Platform information specific to the NetApp environment.
Supported Hardware – NetApp FAS Series, NetApp V-Series, and IBM N-Series.
NFS Permissions – NFS permissions must be manually configured such that the required hypervisors have root access to the appropriate NFS exports.
Access Credentials – In order for the Horizon DaaS Resource Manager to properly access the NetApp API, a service account must be created. This account must have special privileges in order to make the API calls. The specific privileges required are included in a tech note in the Customer Knowledge Base.
Data ONTAP® – The Horizon DaaS Platform is qualified to work with the following versions of Data ONTAP:
● Data ONTAP 7-mode: V7 (7.3.1 or greater), V8 (8.0.x, 8.1.x, 8.2.x)
● Clustered Data ONTAP (supported with vCenter using VSC): V8 (8.1.x, 8.2.x)
Virtual Storage Console – The NetApp Virtual Storage Console (VSC) is a plugin for VMware vCenter leveraged by the Horizon DaaS Platform for rapid cloning of virtual machines. The following versions of VSC are supported: 4.1-P1 and 4.2.1.
Licenses:
● FlexClone – Provides the ability for the Horizon DaaS Platform to clone gold images in a matter of seconds (optional for Horizon DaaS but recommended)
● FlexScale – The tunable software component of Flash Cache. FlexScale allows different caching modes to be used based on the type of workload (optional for Horizon DaaS but recommended)
● Multistore – Provides the ability to use vFiler for added tenant security and data mobility (optional for Horizon DaaS but recommended)
● NFS – The base license required to use the NetApp filer for NFS (required by Horizon DaaS)
● A-SIS – A-SIS is the deduplication engine in ONTAP (optional for Horizon DaaS but recommended)
● NearStore – A NearStore license is required when using A-SIS deduplication (optional for Horizon DaaS but recommended)
● SnapMirror – A SnapMirror license is required for thin replication (optional for Horizon DaaS but recommended if site HA/DR is required)
NetApp & Horizon DaaS Best Practices – See the document Guidelines for Virtual Desktop Storage Profiling and Sizing on the NetApp website.
2.12.1 Best Practices
VMware recommends the following best practices in the NetApp environment:
● File System Alignment - File system misalignment is a known issue in virtual environments and can
cause performance issues for virtual machines (VMs) and therefore could impact the performance of a
Horizon DaaS virtual desktop deployment. It is therefore critically important that the NetApp file
system alignment practices are followed as per the NetApp document Best Practices for File System
Alignment in Virtual Environments. Note that this issue is not unique to NetApp storage arrays and can
occur with any storage array from any vendor.
● Separate Aggregate for Management – It is recommended to create a separate aggregate at a smaller
size to support Horizon DaaS management functions where they are hosted on shared NFS storage.
The reason for this is to separate the management I/O load from the virtual desktop I/O load.
● Maximize Aggregate Size for Desktops – It is recommended to create an aggregate at the maximum
size supported for the particular model of NetApp filer. The reason for this is to include as many
spindles as possible to spread data across to have optimum I/O performance.
● Minimize the number of Volumes – It is recommended to minimize the number of volumes that are
deployed to host virtual desktops. The reason for this is that data deduplication is done within the
context of a volume. So, the more data in a single volume the greater chance of data blocks being
deduplicated.
● Separate Volume and NFS Shares per Tenant – For maintainability, separation, protection and
portability purposes, it is recommended that a separate volume hosting NFS shares is created for each
tenant’s virtual desktops. For smaller tenant deployments, it might not be appropriate to follow the
separate volume rule due to reduced efficiencies.
● vFiler – It is recommended for an extra layer of security that each tenant have a Multistore vFiler
instance created to host the virtual desktops. A vFiler container could also be used to secure and
segregate tenant user data if hosted at the SP. vFiler functionality creates logical access separation
within the same NetApp filer and would allow integration into the tenant’s own Active Directory.
● Separate Volume to host swap, temp and winpage files – This is required to separate redundant data
from the volumes hosting the base desktop image. This approach reduces the redundant data that
would otherwise be locked into snapshots and also replicated for DR. These volumes should have
ASIS (de-duplication) disabled.
● Enable Deduplication – It is recommended to enable ASIS deduplication for the virtual desktop and
any user data host volumes. Note that this needs to be enabled on a per volume basis (disabled by
default). Important: when setting up deduplication on the Storage Efficiency tab for a volume, make
sure you check the "Scheduled" radio-button, not “On-demand” or "Automated".
● Use NetApp FlexClone – It is highly recommended that NetApp FlexClone be utilized for desktop
provisioning operations. The ability to provision with FlexClone is built into the Horizon DaaS
Platform. FlexClone technology is hardware-assisted rapid creation of space-efficient, writable, point
in time images of individual files. FlexClone provides the ability to clone hundreds and possibly
thousands of desktop images from a base desktop image providing significant cost, space and time
savings.
● Use SecureAdmin – The Horizon DaaS Platform supports SSL and non-SSL access to the NetApp filer.
By default, the Horizon DaaS Resource Manager will attempt to communicate with the NetApp filer
via SSL. There is a configuration parameter to use non-SSL communication. Please refer to Horizon
DaaS documentation regarding the specific setting. In order to use SSL to communicate with the
NetApp filer, SecureAdmin must be installed and configured.
● Use FlashCache – It is highly recommended that FlashCache be used to provide a large front end read
cache. This helps lighten the I/O load on the disks and thus helps deal with high read I/O load such
as boot storms. The size of the FlashCache varies depending on the FAS model. With a clustered
NetApp, each filer head should have its own FlashCache card.
● Use Snapshots – Snapshots can provide an almost instantaneous way to protect and recover appliance,
user data, and gold images. Snapshots do not affect performance and provide the ability for very fast
restores.
● SnapMirror for DR – SnapMirror can be used to replicate desktop and/or user data offsite in the case
of a site failure. SnapMirror complements NetApp FlexClone and deduplication providing a ‘thin
replication’ capability where the data reduction persists during the replication process.
2.12.2 Configuring Your vFiler
To enable the Horizon DaaS Platform to see your NetApp vFiler, you need to enable httpd and create a
service account. Use the following commands as an example; they need to be run in the context of the
appropriate vFiler (not on the root vFiler):
options httpd.admin.enable on
useradmin role add desktone_role -c "Role for Desktone API Support" -a login-http-admin,api-license-list-info,api-system-get-info,api-system-get-version,api-system-get-ontapi-version,api-nfs-status,api-nfs-exportfs-list-rules-2,api-nfs-exportfs-modify-rule-2,api-clone-start,api-clone-stop,api-clone-list-status,api-vfiler-list-info
useradmin group add desktone_group -c "Group for Desktone" -r desktone_role
useradmin user add desktone -c "Service account for Desktone" -n "Desktone SA" -g desktone_group
2.12.3 Desktop DR Deployment Strategies
Multiple Data Center Desktop Images: SnapMirror can be used to replicate VM images between multiple
data centers. Should a user’s primary data center become inaccessible, replicated images would be available
in an alternate data center to provide that user’s primary desktop.
Multiple Data Center User Data: SnapMirror can be used to replicate data between multiple data centers.
Should any data center become inaccessible, replicated user data would be available in an alternate data
center.
Backup: Snapshot and SnapRestore can be used to backup and restore VM images and user data.
2.13 Cisco Virtualized Multi-Tenant Data Center (VMDC)
The Cisco® Virtualized Multi-Tenant Data Center (VMDC) architecture is a set of specifications and
guidelines for creating and deploying a scalable, secure, and resilient infrastructure that addresses the needs
of cloud computing. To develop a trusted approach to cloud computing, Cisco VMDC combines the latest
routing and switching technologies, advancements in cloud security and automation, and leading edge
offerings from cloud ecosystem partners. Cisco VMDC enables service providers (SPs) to build secure public
clouds and enterprises to build private clouds with the following benefits:
● Reduced time to deployment - Provides a fully tested and validated architecture that enables
technology adoption and rapid deployment.
● Reduced risk - Enables enterprises and service providers to deploy new architectures and technologies
with confidence.
● Increased flexibility - Enables rapid, on-demand workload deployment in a multi-tenant environment
using a comprehensive automation framework with portal-based resource provisioning and
management capabilities.
● Improved operational efficiency - Integrates automation with multi-tenant resource pools (compute,
network, and storage) to improve asset use, reduce operational overhead, and mitigate operational
configuration errors.
For more information about the Cisco VMDC Framework go to http://cisco.com/go/vmdc
2.13.1 VMDC 2.2 Solution Components
The following table lists the component versions that were tested. Contact your Cisco representative for the
latest version of the VMDC architecture.
Features
Components
Network
Cisco Nexus® 7010, 7018, NXOS 5.2.1
Data center services node—Cisco Catalyst® 6509-E Switch (with Virtual
Switching System [VSS]), IOS 12.2(33)SXJ
Cisco ASR 9000, XR 4.1.0
Cisco ASR 1006, XE 3.4.0 15.1(3)S
Services
Cisco Virtual Security Gateway, 4.2(1)SV1(2)
Cisco Virtual Network Management Center: 1.2(1b)
Cisco Adaptive Security Appliance 5585-60X, 8.4.2
Cisco ACE30 Application Control Engine Module, A 4.2.1
Compute
Cisco Unified Computing System™ (UCS™), 1.4(2b)
Cisco UCS 5108 Blade Server Chassis
Cisco UCS 6248 Fabric Interconnect
Cisco UCS B230 M2 Blade Server
Cisco UCS M71KR-E Emulex Converged Network Adapter (CNA)
Cisco UCS M81KR Virtual Interface Card (VIC)
Virtualization
VMware® vSphere™ 4.1 U1
VMware ESXi 4.1U1 Hypervisor
Nexus N1010-x
Storage
NetApp FAS3170 and NetApp FAS6080 with ONTAP 8.0.2
2.13.2 Suggested components for trial and POC environments
When deploying an initial trial or Proof of Concept environment you might need to augment your
environment in order to test key aspects of the Horizon DaaS Multi-Tenant platform. A basic environment
can be constructed using the following Cisco components:
Component | Use
ASR 1001 | Routing, VRF and VLAN tagging
ASA 5505 | VPN termination and firewall
Catalyst Switch | VLAN
UCS | Compute
NetApp | Storage
Platform Install Checklist – Install Using vCenter
Before you can install the Horizon DaaS Platform, you first need to complete the tasks listed in the System
Blueprint section of this document. Contact your VMware customer service representative for help with
any of these prerequisites. A Service Provider Installation Worksheet and two Tenant Installation
worksheets are included after this appendix to help organize and collect all the information needed to start
the install.
1. Build the network infrastructure required to support multi-tenancy, typically accomplished with
VLAN tagging for network separation at layer 2 and VRFs to isolate tenants and allow for a separate
routing table for each tenant. See Network Resources in the System Blueprint for more detail.
Confirm the management network is reachable from the service provider network.
2. Install and configure your storage system. See Storage Resources in the System Blueprint for more
details.
3. Install three appropriately sized ESXi hosts running a version of ESXi supported for use with vCenter
(see the vCenter/ESXi support matrix in the Tenant Installation section of this document). Two hosts will
be used for management with the third host for the Test Tenant. See Compute Resources in the System
Blueprint for more sizing details. ESX hosts must be configured with a minimum of an ESXi Standard
license (do not use the free version; the free license will not work).
On each of the management hosts add all five of the required networks. Sample configuration
can be found in the System Blueprint.
● Management Network – For ESXi / vCenter
● Service Provider Network – For the Desktone SP Appliances
● Link Local Network – Not Routed – For all the Desktone Appliances
● Tenant 1 Network – For the 1st Tenant Appliances and Desktops
● Tenant 2 Network – For the 2nd Tenant Appliances and Desktops
On each of the management hosts add the Service Provider mount/LUN to the ESXi hosts.
On each tenant host add just the Management, Link Local, and Tenant networks.
4. Install and configure required network services. The Service Provider requires NTP, Active Directory
and DNS services. The Test and POC Tenants will require their own Active Directory, DNS and
DHCP services.
5. Assign IPs to Service Provider Nodes from the Service Provider network.
6. Service Provider AD Setup (Confirm settings with ADExplorer)
7. Assign IPs to Tenant Appliances from the Test Tenant network and POC Tenant network.
Service Provider Installation Worksheet
Appliance “SP1” Network Bootstrap
Field | Notes
Addition to Multi-DC Setup? | Are you building a new environment or joining an existing one? Usually the answer is No for a new environment.
Datacenter name | Unique to each physical site
Linklocal backbone VLAN ID | VLAN ID number or DVS Port Profile Name (as seen in vCenter)
Linklocal backbone IP for eth1 | 169.254.x.x – Host IP for appliance SP1 – ALL linklocal IPs start from this one IP
dt linklocal backbone mask in CIDR format (0-32) | 16 (recommended for production) or 24 (ok to use in POC environments)
SP VLAN ID | VLAN ID number or DVS Port Profile Name (as seen in vCenter)
"SP1" Host IP for eth0 | x.x.x.x – Actual host IP for the SP1 appliance (not the floating IP)
SP Network mask in CIDR format (0-32) |
SP Network Gateway | Gateway of the SP network
Hostname of SP1 appliance | FQDN needed, e.g. name.domain.suffix
DNS Server IP | IP of DNS server – just one
NTP server (one per line) | IPs of NTP servers – more than one if needed
Is this an HA Setup? | Usually Yes, even if only one management host is currently available
SP Floating IP Address | Floating IP of the SP pair – only needed in an HA setup
psql Database Password | (You do not have to write it on this sheet, but have one in mind for the install)
Password for user "desktone" – will be used for all appliances | (You do not have to write it on this sheet, but have one in mind for the install)
SP Domain Bootstrap
Field | Notes
NETBIOS name | Name of the domain – what you put before the \ when you log in – VMWARE (for example)
Domain Suffix | vmware.com
AD Protocol | ldap or ldaps
AD Protocol Port | 389 or 636
Primary DNS Server IP | Usually the same as in the table above
Context | DC=VMWARE,DC=COM
Domain Bind Account* | CN=Joe Smith,CN=Users
Domain Bind Account Pwd | Password for the above account
Super Admin (SC Access) | Distinguished Name without Context – Admin group for SP appliance GUI access (Service Center). For example, CN=serviceadmins,CN=groups
* To find domain bind acct, go to server manager, then “View -> Advanced Features” to turn on the “Attribute Editor”
tab. Find the user, then user properties, then go to “Attribute Editor” and find the “distinguishedName” field. That is
the account name needed, minus the end DC sections that make up the domain suffix.
Management Hosts
Host | Name or IP | Management Account / Pswd | Memory Over-Allocation Ratio | CPU Over-Allocation Ratio
vCenter | IP or FQDN | Login/Pwd of the user that appliances use to connect to vCenter | NA | NA
MgtHost1 | Name as seen in vCenter | | 1.0 | 10
MgtHost2 | Name as seen in vCenter | | 1.0 | 10
Service Provider NetApp NFS Storage with VSC
(Only needed if storage is a NetApp and is using the VSC vCenter plugin)
Use | NetApp Name or IP * | Account / Password
Service Provider | IP or FQDN |
Tenant 1 | IP or FQDN |
Tenant 2 | IP or FQDN |
Tenant etc… | IP or FQDN |
* Important: When adding a storage system to the Desktone Platform, the Address field must match how the VSC plug-in was discovered for vCenter. If the plug-in was discovered as an IP address, you must enter the IP address in the Address field. If it was discovered as an FQDN, then you must enter the complete domain name in the Address field.
Service Provider Appliance Information
Node Use | VM / Appliance Hostname | Service Provider IP
Service Provider 2 | Name of VM | SP Host IP – NOT LL IP – NOT FQDN
Tenant Resource Manager 1 | Name of VM | SP Host IP – NOT LL IP – NOT FQDN
Tenant Resource Manager 2 | Name of VM | SP Host IP – NOT LL IP – NOT FQDN
Tenant 1 – Installation Worksheet
The following table lists the fields you will need to specify in the Service Center when installing a tenant.
Field | Values | Sample Value / Notes
Tenant VLAN ID or DVS Name | | 115 or Tenant-1-Net
Tenant Gateway | | 172.16.115.1
Tenant DNS Name | | 172.16.115.2 (AD server)
Tenant Subnet mask | | 255.255.255.0
Primary Tenant Appliance / VM Name | | TenantA-Node1
Primary Tenant Appliance IP Address | | 172.16.115.21
Secondary Tenant Appliance / VM Name | | TenantA-Node2
Secondary Tenant Appliance IP Address | | 172.16.115.22
Floating IP Address | | 172.16.115.20
Tenant vCenter | | DNS name or IP of host
Tenant vCenter User name | | HostMgtAcct
Tenant vCenter Password | | hostPsswd
Tenant Management Hosts
(vCenter info is only needed if an additional Tenant vCenter is used)
Host | Name or IP | Management Account / Pswd | Memory Over-Allocation Ratio | CPU Over-Allocation Ratio
Tenant vCenter | IP or FQDN | Login/Pwd of the user that Horizon DaaS Appliances use to connect to vCenter | NA | NA
MgtHost1 | Name as seen in vCenter | | 1.5 | 10
MgtHost2 | Name as seen in vCenter | | 1.5 | 10
Optionally, if you are configuring the Tenant Active Directory, you also need the following tenant info.
Field | Description
NETBIOS Name | Active Directory domain name
DNS Domain Name | Fully qualified Active Directory domain name
Protocol | LDAP or LDAPS, depending on your Active Directory setup
Bind Username | Domain administrator
Bind Password | Domain administrator password
Port | The defaults for this field are LDAP -> 389 and LDAPS -> 636. You should not need to modify this field unless you are using a non-standard port.
Domain Controller IP | (Optional) Specify a single preferred domain controller IP address if you want AD traffic to hit a specific domain controller.
Context | This field is auto-populated based on the DNS Domain Name information provided earlier.
Tenant 2 – Installation Worksheet
The following table lists the fields you will need to specify in the Service Center when installing a second tenant.
Field | Values | Sample Value / Notes
Tenant VLAN ID or DVS Name | | 115 or Tenant-2-Net
Tenant Gateway | | 172.18.115.1
Tenant DNS Name | | 172.18.115.2 (AD server)
Tenant Subnet mask | | 255.255.255.0
Primary Tenant Appliance / VM Name | | TenantB-Node1
Primary Tenant Appliance IP Address | | 172.18.115.21
Secondary Tenant Appliance / VM Name | | TenantB-Node2
Secondary Tenant Appliance IP Address | | 172.18.115.22
Floating IP Address | | 172.18.115.20
Tenant vCenter | | DNS name or IP of host
Tenant vCenter User name | | HostMgtAcct
Tenant vCenter Password | | hostPsswd
Tenant Management Hosts
(vCenter info is only needed if an additional Tenant vCenter is used)
Host | Name or IP | Management Account / Pswd | Memory Over-Allocation Ratio | CPU Over-Allocation Ratio
Tenant vCenter | IP or FQDN | Login/Pwd of the user that Horizon DaaS Appliances use to connect to vCenter | NA | NA
MgtHost1 | Name as seen in vCenter | | 1.5 | 10
MgtHost2 | Name as seen in vCenter | | 1.5 | 10
Optionally, if you are configuring the Tenant Active Directory, you also need the following tenant info.
Field | Description
NETBIOS Name | Active Directory domain name
DNS Domain Name | Fully qualified Active Directory domain name
Protocol | LDAP or LDAPS, depending on your Active Directory setup
Bind Username | Domain administrator
Bind Password | Domain administrator password
Port | The defaults for this field are LDAP -> 389 and LDAPS -> 636. You should not need to modify this field unless you are using a non-standard port.
Domain Controller IP | (Optional) Specify a single preferred domain controller IP address if you want AD traffic to hit a specific domain controller.
Context | This field is auto-populated based on the DNS Domain Name information provided earlier.
3 Tenant Installation – vCenter
3.1 Overview
The DaaS platform software allows you to manage your tenant desktops using VMware vCenter hypervisor
management software. This guide provides you with information that is specific to installing and
configuring a Tenant appliance in a datacenter using vCenter after you have installed or upgraded and
configured the Service Provider appliance and Resource Manager.
A Tenant Installation Worksheet is included at the end of this document to help you collect and organize all
the information needed to complete the install. Note that the installation process explained in this section
covers only standing up a tenant in the DaaS platform; a successful tenant launch also needs to take into
consideration items such as VDA licensing and image requirements and preparation.
Please contact VMware support for further guidance with developing your own tenant on-boarding
processes.
3.2 Tenant Installation Prerequisites
The prerequisites are slightly different, depending on whether the tenant will have VPN backhaul to the
customer network for services or applications.
The supported combinations for managing your tenant desktops with vCenter are the following:
Table 3–1 vCenter used as Hypervisor Manager – Supported (Y) and Non-supported (N) Configurations
vCenter version | ESXi 4.0 | ESXi 4.1 | ESXi 5.0 | ESXi 5.1 | ESXi 5.5 | ESXi 6.0
vCenter v 6.0 U3 | N | N | N | N | Y | Y
vCenter v 6.0 U2 | N | N | Y | Y | Y | Y
vCenter v 5.5 U3* | Y | Y | Y | Y | Y | N
vCenter v 5.5 U2 | Y | Y | Y | Y | Y** | N
*Recommended: upgrade to ESX 5.5 Update 3b
**Strongly recommended: upgrade to vCenter 5.5 U3 to manage ESXi 5.5 U3 hosts.
3.2.1 Discovery and Assignment
Configure an account in the vCenter for the DaaS platform to manage the virtual resources via the vSphere
API.
Discover one or more vCenter servers for Tenant desktops. Assign one of these vCenters to the Tenant
Desktop Manager via the Service Center Service Grid. There is a limit of 1 vCenter per Desktop Manager.
Note: You can use the same vCenter for both your management appliances and tenant desktops or you can
use separate vCenters for each. If you are using the same vCenter, all the hosts required for management
appliances and tenant desktops must be in the same vCenter DataCenter.
Important: The datastores configured on each vSphere ESXi or cluster within a vCenter Datacenter must be
the same. Shared storage is required for desktop VMs. In order for this to work properly, datastores must
be created and mapped to the same LUNs on all the desktop hosts for a particular tenant with the same
datastore name (case sensitive).
3.2.1.1 (Optional) Configure NetApp VSC Plug-in
If you are using the NetApp VSC plugin for vSphere, it must be installed on the same machine as the
vCenter server. Supported versions of the NetApp VSC plugin are 4.1.P1 and 4.2.1. When registering the
VSC plugin with vCenter, use the vCenter administrator account credentials, which will also be used to
discover the desktop hypervisor later in this section of the document. For further information please refer to
the NetApp VSC installation and configuration guide.
3.2.2 Enterprise Network Connectivity
VPN/MPLS. If the tenant requires backhaul then configure VPN access (IPSEC Tunnel, MPLS Circuit) from
the tenant network back to the customers network that houses, for example, their AD, DNS, and DHCP as
well as any other applications required by the virtual desktop users.
3.2.3 Tenant Network Configuration
Define the tenant network. If the tenant has backhaul, work with the tenant to identify an internal subnet
that is not in use in their infrastructure to be used for the virtual desktops. Otherwise assign an appropriate
subnet to the tenant network.
Add VLAN(s), VXLAN(s), or a Distributed Virtual Port Group (DVPG) to the tenant. At least one of these
must be the Tenant Network.
Assign at least one of the added network(s) to the Desktop Manager via the Service Grid in the Service
Center. These networks will be used to ensure desktop isolation and may be shared across multiple
Desktop Managers.
Important: DVPG must be configured to use ephemeral port binding.
3.2.4 DNS Configuration
Define or install a DNS server for the tenant. There must be a DNS server available from the tenant network
which can be used to resolve the name of the domain so that the tenant can authenticate.
3.2.5 Allocate Tenant IP Addresses
A minimum of 3 IP addresses should be allocated on the tenant network. Additional IPs will need to be
allocated for scenarios where multiple Desktop Managers are required.
● 2 IPs for the Management Appliances themselves
● 1 IP to be shared between the Appliances
● (Optional) 1 IP if the tenant has backhaul to a DHCP server. This will be used for the DHCP relay service.
3.2.6 Define or install DHCP service for the tenant.
● A DHCP helper/relay is required to deliver the DHCP requests over the VPN tunnel to the tenant
network. This can be done directly on the switches to which the hosts are attached or if not possible, a
small Linux appliance can be configured in the tenant to perform this function.
● Configure the DHCP scope for the desktop subnet, starting at x.x.x.30.
● Configure DHCP option code 74 (IRC Chat) to point to the two IPs allocated for the tenant appliances.
For example, if you are using a Windows server to provide DHCP service:
a. Open the DHCP configuration client from Control Panel > Administrative Tools.
b. Right-click Server Options and select Configure Options from the pop-up menu
c. If you have defined limited address scopes, you can confine the options configuration to a
particular scope. Click on the scope and right-click on Scope Options to configure the 074 option
code for that scope only. Configuration is the same as for the whole DHCP server.
d. Scroll down to the 074 option for Internet Relay Chat (IRC) and check the box.
e. Add IP addresses for tenant appliances
3.2.7 Active Directory Configuration
Define or install tenant Active Directory. The tenant must configure their Active Directory as shown below
and have the information ready to be used during the installation. It is highly recommended that you
confirm the values using an AD tool such as AD Explorer:
http://technet.microsoft.com/en-us/sysinternals/bb963907
Table 3–2 Network Information for DaaS Management Host
Field                 Description
NETBIOS Name          Active Directory domain name
DNS Domain Name       Fully qualified Active Directory domain name
Protocol              LDAP or LDAPS, depending on your Active Directory setup
Bind Username         Domain administrator
Bind Password         Domain administrator password
Port                  The defaults for this field are LDAP -> 389 and LDAPS -> 636. You should not need to modify this field unless you are using a non-standard port.
Domain Controller IP  (Optional) Specify a single preferred domain controller IP address if you want AD traffic to hit a specific domain controller.
Context               This field is auto-populated based on the DNS Domain Name information provided earlier.
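One way to confirm these values before entering them is a bind-and-search test from any Linux host that can reach the tenant network; the following is a minimal sketch using ldapsearch, with placeholder values taken from the table above:
ldapsearch -H ldap://172.16.115.2:389 -x \
  -D "CN=Administrator,CN=Users,DC=tenant,DC=com" -W \
  -b "dc=tenant,dc=com" -s base
A successful base-level search confirms that the protocol, port, bind account, password, and context are all usable; for LDAPS, change the URI to ldaps:// and port 636.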
3.2.8 SSL Certificate
If the tenant requires a certificate, the customer must provide the service provider with the necessary
certificate files in Apache SSL format. For more details, see Apply Tenant Certificates to Tenant Appliances.
3.2.9 About File Shares
File shares are used to allow the import of AppStacks and customization information.
There are two types of file shares:
● Application file shares can be used to import AppStacks. AppStacks are created using AppCapture.
● A Customizations file share contains the configuration files for a user's custom experience (for
example, mounting a network drive).
Some things to note:
● File shares can be in the same domain as the Active Directory that is added to Horizon DaaS Platform.
They can also be part of a CIFS share.
● Horizon DaaS Platform must have read permissions on your file shares.
● AppStacks which are already present in the file share are imported automatically when the file share is
added. For more information, see the Administration console help.
Note: Once you create a file share, you cannot remove it. If you entered the wrong file path, edit the file share in the Administration console and import it again.
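To confirm read access before adding a file share, you can list it with the same credentials the platform will use; this is a minimal sketch using smbclient from a Linux host, where the server, share, and account names are hypothetical:
smbclient //fileserver.tenant.com/AppStacks -U 'TENANT\daas-svc' -c 'ls'
If the listing succeeds, the read permission requirement noted above is satisfied.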
3.3 Tenant Installation
3.3.1 Overview
To install the tenant, you complete the following tasks:
1. Create tenant appliances.
2. Add desktop host(s).
3. Assign tenant appliances to resource managers.
4. Assign quota.
5. (Optional) Enter the Tenant’s Active Directory information.
3.3.2 Create Tenant Appliances
1. In Service Center, select tenants > register a tenant.
The Register a tenant page displays.
2. On the General Info tab, the only required fields are the Tenant Name, Administrator Name, and
Database Password. Enter this information and any of the non-required field data you want to
maintain.
3. On the Networks tab, next to the data center drop-down, click Add and enter values for the fields
listed in the table below (for both primary and secondary). The default networking option is to use
VLANs mapped to virtual networks on each of the vSphere hosts. When using vCenter there is an
option of using distributed virtual switches (DVS). This is an install time decision and cannot be
changed after the tenant has been installed.
Table 3–3 Data Centers
Field            Sample Value    Notes
Network ID       115             VLAN ID, VXLAN ID, or DVPG Name
Network ID Type  VLAN            VLAN, VXLAN, or DVS
Network Label    My Network      Free-form text field
Gateway          172.16.115.1
DNS Name         172.16.115.2    Directory Name server
Subnet mask      255.255.255.0
4. On the Custom Fields tab, enter any site-specific information you want to maintain. These are free-form text fields with no data validation; the content is entirely up to you.
5. After entering your information on the General Info, Networks, and Custom Fields tabs, select Save
and Create Appliances.
6. On the Tenant Install page, enter values for the fields listed in the table below (primary and
secondary).
Table 3–4 Schedule Tenant Creation
Field                 Sample Value   Notes
Primary Name          TenantNode1    User-friendly name
Primary IP
Secondary Name        TenantNode2    User-friendly name
Secondary IP
Floating IP Address
Start Date/Time
7. Wait for the system to spawn your tenant appliances (time will vary depending on infrastructure). If
you want to check the status of a reservation, select appliances > reservations.
8. Verify that the appliances were created.
3.3.3 Add Desktop Compute Resources
Note: If you are using the same vCenter to host your management appliances and tenant desktops, you
do not need to complete this section. This is because you have already discovered the vCenter when
setting up your management appliances. Instead, continue to Assign Resources to Tenant below.
If you are using separate vCenters for management appliances and desktop hosts, you must complete this
section to add a physical desktop host for the tenant desktops. This host will be used for:
● Importing your initial starter desktop
● Hosting your pools
1. In Service Center, select service grid > resources to display the Resources screen. The left side of the
screen displays three panels: Resource Managers, Desktop Managers, and Compute Resources.
2. Select the Compute Resources panel.
The page redisplays with the Add Host Manager tab next to the General tab.
3. Click the Add Host Manager tab and enter values for the fields listed in the table below.
Table 3–5 HA Server
Field                 Sample Value
IP Address/Hostname   Enter the DNS name or IP address of the Desktop vCenter.
User name             Administrator
Password              vCenterPsswd
Resource Manager      Select the tenant resource manager from the drop-down
4. Click the Add button.
The system prompts you to accept the certificate for the vCenter.
5. Click Accept.
6. If you have multiple vCenter Datacenters configured, select the one that will be used for the desktop
hosts. If you have only one vCenter Datacenter, you will not be prompted to select it.
7. When prompted, accept the certificate.
Note: The assignment of individual ESXi or Cluster resources within this vCenter Datacenter to a
particular Desktop Manager will be made after this step.
3.3.4 (Optional) Configuring the NetApp VSC Plug-in
If you are not using the NetApp VSC plugin for storage, skip this section and proceed to Assign Resources to Tenant below to continue the vCenter install. There is no need to define storage when managing with vCenter in the DaaS platform, as this is already configured through the vCenter client.
If you are using the Netapp VSC plugin for storage perform the following steps:
1. In Service Center, select service grid > resources.
2. Select the “Service Provider RMGR” in the pane on the left, then Storage Systems in the pane on the
right.
3. On the Storage Systems tab, select the Add Storage System link.
4. Enter values for the fields below.
Important: When adding a storage system to the DaaS platform, the Address field must match how
the VSC plug-in was discovered for vCenter. If the plug-in was discovered as an IP address you
must enter the IP address in the Address field. If it was discovered as a FQDN, then you must enter
the complete domain name in the Address field.
Table 3–6 Storage System
Field      Sample Value
Address    storage.desktone.com or 172.16.10.21
Username   root
Password   storagePswd
5. Click the Add Storage System button.
The system adds the name of the storage system to the Storage Systems tab.
3.3.5 Assign Resources to Tenant
1. In Service Center, select service grid > resources.
2. Select the Desktop Managers pane.
3. Select the appropriate Desktop manager listed in the Desktop Managers panel by clicking on the name
in the tree.
Note: You may need to click refresh if the expected Desktop Manager is not present.
4. Click on the Compute Resources tab, and click assign on the vCenter you have set up for the Tenant Desktops.
5. A list of both clusters and ESXi hosts will be displayed. Select one or more of the desired compute resources to be assigned to the Desktop Manager. Click OK when done.
6. For each selected compute resource, a capacity popup will be displayed. Review the overallocation settings and click Save if satisfactory.
Note: If the server capacity is not enough to meet the current usage based on overallocation, you will need to increase overallocation or decrease the number of VMs on the compute resource.
3.3.6 Assign Networks to Desktop Manager(s)
1. In Service Center, select service grid > resources to display the Resources screen and select the Desktop Managers tab.
2. In the tab, you will find the desktop managers listed as <tenant name>_<desktop manager name>.
Click on the Desktop Manager in question and select the Networks tab.
Note: This tab will only be displayed once Compute Resources have been assigned and will only
display networks that are available across all assigned compute resources. If you do not see a
network that you expect, validate that the network is available and labelled correctly across all
compute resources.
3. Click assign on at least one network. The assigned networks affect which VMs are detected as belonging to the Desktop Manager.
3.3.7 Configure Datastores
The Datastores tab is used to specify datastores for your system to use.
Procedure
1. In the Service Center, select service grid > resources to display the Resources screen and select the Desktop Managers tab.
2. In the tab, you will find the desktop managers listed as <tenant name>_<desktop manager name>.
Select a desktop manager and then select the Datastores tab.
3. Double-click in the appropriate field (Desktop Storage, Appstack Regex, Writable Regex).
4. Enter a regular expression for the name(s) of the datastore(s).
Note: In order for a datastore to be added, it must first be configured on a Compute Resource.
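For example, if the desktop datastores were named tenant-ds01 through tenant-ds08 and the AppStack datastore were named tenant-appstacks (hypothetical names), a Desktop Storage pattern of tenant-ds[0-9]+ would match all eight desktop datastores, while an Appstack Regex of tenant-appstacks would match only that one datastore; a plain literal name is itself a valid regular expression.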
3.3.8 Assign Desktop Model Quotas
1. In Service Center, select tenants > browse tenants.
2. In the table, click Edit for the tenant you wish to alter. The Editing Tenant page displays.
3. Select the Quotas tab. This panel displays all of the controlling Quotas which regulate what a Tenant
is allowed to use at provisioning and desktop connection time. Included are the Protocol Quotas, Gold
Pattern Quotas, RDS Session Quotas, and Desktop Model Quotas.
Desktop Model Quotas are assigned by Desktop Manager and will be calculated based on assigned
compute and its associated population density (# of VMs).
4. Select the Datacenter and Desktop Manager to which you wish to assign quota from the drop-downs. Add the desired amount to the VM Quota column. Click Update; a status window appears after quota assignment.
Note: You must enter a number that is at least as large as the In Use amount.
3.3.9 Assign Protocol Quotas
1. In Service Center, select tenants > browse tenants.
2. In the table, click Edit for the tenant for which you are applying the tenant model.
The Editing Tenant page appears.
3. Select the Quotas tab.
The Quotas tab displays a table listing all the protocol quotas available.
4. If there is more than one datacenter for this Tenant, select the Data Center to which you wish to assign quota from the drop-down.
5. Under the Protocol Quota section, enter a value in the Quota column or select the Unlimited checkbox.
This Quota will be applied across the Tenant in a given Datacenter.
6. Click Update and a status window will appear after Quota assignment.
3.3.10 Set Up Desktop Connection Via Access Point
Note: You cannot deploy an Access Point VM from a vSphere Windows client. You must deploy it from
the vSphere web client.
1. Download the latest version of the Access Point OVA file.
2. Determine the IP addresses (DNS/Netmask/Gateway) for the required networks, as described below.
Configuration   Networks
3 NIC           Internet: Any network with internet access
                Management: This can be your 169 network. Since this does not have its own DNS or Gateway, you can enter any numbers for DNS and set the netmask to 255.255.255.0
                Backend: Network that the Tenant uses for desktops
2 NIC           Internet: Network the Tenant is on
                Management: This can be your 169 network. Since this does not have its own DNS or Gateway, you can enter any numbers for DNS and set the netmask to 255.255.255.0
1 NIC           Internet: Network that the Tenant is on
3. In the vSphere web client, follow the normal method for deploying a template. On the Properties page,
enter information as shown below.
Field                              Value
Root Password                      Enter initial password for root user. Password must be at least eight characters long and must contain:
                                   ● At least one upper case letter
                                   ● At least one lower case letter
                                   ● At least one number
                                   ● At least one special character (!, @, #, etc.)
Admin Password                     Enter password to be used for REST API Admin user
Locale                             en_us
Settings JSON                      Leave blank
View Destination URL               Leave blank
View Destination URL Thumbprints   Leave blank
View Proxy Pattern                 Leave blank
DNS                                Enter DNS of Internet network
Internet IP Address                Enter Internet Network IP address from the previous step
Management Network IP Address      If configuration is 3 NIC or 2 NIC, enter Management Network IP from the previous step. If configuration is 1 NIC, this item does not display.
Backend Network IP Address         If configuration is 3 NIC, enter Backend Network IP from the previous step. If configuration is 1 NIC or 2 NIC, this item does not display.
4. Power on the VM and wait for the login screen to appear on the console.
5. On the tenant appliance, run the following command:
sudo /usr/local/desktone/scripts/apsetup.sh
6. Enter the requested information for the Access Point appliance. The response status returned will
indicate whether the configuration was successful.
Response status   Result
200               Configuration successful
400               Invalid input
401               Password incorrect. Confirm that password matches admin password configured during OVA deployment
000               Network connection failure. Confirm that IP address matches management IP address configured during OVA deployment
7. Configure NAT and firewall rules to allow access to the Access Point appliance through Internet
network.
Note: When using an edge gateway load balancer, the NAT rules for ports 80 and 443 are not required; these ports are forwarded automatically.
Port                 Usage
4172/tcp, 4172/udp   PCoIP desktop access protocol
8443/tcp             HTML desktop access protocol
443/tcp              Secure web portal access
80/tcp               Insecure web portal access (will be redirected to 443)
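Once the NAT and firewall rules are in place, a quick reachability check of these ports from outside the datacenter can save troubleshooting later; the following is a minimal sketch using nc, with the external hostname as a placeholder:
nc -vz desktops.tenant.com 443
nc -vz desktops.tenant.com 8443
nc -vz desktops.tenant.com 4172
nc -vzu desktops.tenant.com 4172
The last command probes UDP 4172; because UDP is connectionless the result is only indicative, so also confirm PCoIP connectivity with an actual client session.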
3.3.11 Enter the Tenant’s AD Information
Omit this step if the tenant will be entering their own AD information.
Tenant AD information is entered in the administration console. For more information see the tenant setup
and administration documentation.
If you do not use a domain admin account for the Service Account then you must set the tenant policy
fabric.ad.validateSysPrepUserPrivs to false.
3.3.12 Apply Tenant Certificates to Tenant Appliances
The DaaS platform allows you to upload custom SSL certificates for each tenant.
● If the tenant does not already have a certificate, you may generate it following the instructions in
section 3.3.12.1 below.
● If it already has a certificate, proceed directly to section 3.3.12.2 below.
3.3.12.1 Generate Tenant Certificates
You can generate the tenant’s CSR file (certificate signing request) either on the Service Provider appliance
or the tenant nodes.
● If generating on the Service Provider appliance, be sure to create the files in a tenant-specific directory so files are not confused among tenants.
● Always name the file using the domain for which the cert is being generated.
Procedure
1. Collect the following information for the tenant:
○ Country Code
○ State and Locality
○ Full Legal Company Name
○ Organizational Unit
2. At the command line run
openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr
where server is the domain you want to create a cert for, such as desktops.tenant.com.
This generates two files with OpenSSL: the private key file for the decryption of your SSL Certificate, and a certificate signing request (CSR) file used to apply for your SSL Certificate.
3. When you are prompted for the Common Name (domain name), enter the fully qualified domain
name for the site you are securing. If you are generating an Apache CSR for a Wildcard SSL Certificate
your common name should start with an asterisk (such as *.example.com).
4. Once the .key and .csr files are created, zip them up and send them to the customer so they can request
a cert from a certificate authority.
5. Copy the files to /usr/local/desktone/cert on the tenant node so they are backed up by the automated
backup process.
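Before sending the files, it is worth confirming that the CSR carries the expected subject and that the key and CSR belong together; a minimal check with the same openssl tool, assuming the file names used above:
openssl req -noout -text -in server.csr | grep Subject
openssl rsa -noout -modulus -in server.key | openssl md5
openssl req -noout -modulus -in server.csr | openssl md5
The two modulus digests should be identical; if they differ, the key and CSR were not generated together.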
3.3.12.2 Apply Tenant Certificates
The DaaS platform allows you to upload custom SSL certificates for each tenant.
To enable a custom certificate, you upload three certificate files in Apache format: SSL Certificate, SSL Key,
and CA Certificate. The tenant might provide you with all three files. Or, to ensure the files are generated
properly, you can generate the public and private keys yourself, forward these keys to the tenant, and then
the tenant can request the signed certificate from the signing authority.
Note: To upload the three certificate files, you navigate to the Certificates tab under tenants (this is a
different Certificates tab than the one used for service providers).
Procedure
1. In the Service Center, select tenants > browse tenants.
2. On the Tenants screen, click Edit for the tenant.
3. Click the Certificates tab.
4. On the Certificates tab browse for and select the following three files:
○ CA Certificate: The public certificate from a certificate authority that was used to sign the tenant
certificate. This file will have a .pem or .crt extension.
○ SSL Certificate: The tenant’s public certificate, which was signed by the CA. This file has a .crt
extension, which indicates that it is a certificate file.
○ SSL Key: The private key used to decrypt the tenant’s SSL certificate. This is needed in order to
be able to respond to certificate requests. This file has a .key file extension.
5. Click Submit to upload the files.
You can upload the files before or after installing appliances:
● Before: The certificate is automatically installed on all the tenant appliances when you click the Submit
button.
● After: Click the link on the Certificates tab to install the certificate on the tenant appliances.
Note: If the IP address or URL for the tenant’s desktop portal does not resolve to the tenant’s CN in their certificate, the tenant administrator may wish to include in their certificate a Subject Alternative Name so that the desktop portal’s URL accessed by web clients can be matched to the uploaded tenant certificate. For more details on how to add a Subject Alternative Name to the certificate, contact the certificate authority.
Backing Up: Copy the files to /usr/local/desktone/cert/temp on the primary Tenant appliance so they are
backed up by the automated backup process.
Upgrading: If you are upgrading to the latest release of the DaaS platform, your existing certificates are not
automatically imported. Make sure you have a backup copy of the three files so that you can upload them
again.
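If an upload is rejected or the desktop portal presents the wrong certificate, a quick consistency check of the three files can help; the following is a minimal sketch with openssl, assuming the tenant supplied server.crt, server.key, and ca.pem (placeholder names):
openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5
openssl verify -CAfile ca.pem server.crt
The first two digests must match (the certificate and key belong together), and the verify command should report server.crt: OK against the CA certificate.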
3.3.13 Extending a Tenant Across Datacenters
Note: You must have already created another Datacenter before you can extend a tenant to that
Datacenter. See Installing Multiple DaaS Datacenters for instructions on creating a second datacenter.
3.3.13.1 Adding a Network Component
1. In Service Center, go to the Tenants page and click the Edit button for the Tenant to be extended.
2. On the Networks tab, select the required datacenter in the Data Center dropdown.
3. Click the Add Network Component link.
4. Enter values for the fields listed in the table below.
Note: The extended tenant must be placed on a different network than the tenant in the first
datacenter.
Table 3–7 Network Component
Field            Sample Value    Notes
Network ID Type  VLAN            VLAN, VXLAN, or DVS
Network ID       118             VLAN ID, VXLAN ID, or DVPG Name
Network Label    My Network      Free-form text field
Gateway          172.16.118.1
Subnet mask      255.255.255.0
DNS Name         172.16.115.2    Directory Name server (usually the same as the DC1 tenant)
5. After entering your information, select Add Network Component.
3.3.13.2 Adding Appliances
1. On the Appliances tab, select the Add Appliances link.
2. Select the required datacenter in the Data Center dropdown.
3. Enter values for the fields listed in the table below.
Table 3–8 Tenant Install
Field                  Sample Value   Notes
Primary Name           TenantNode1    User-friendly name
Primary IP Address
Secondary Name         TenantNode2    User-friendly name
Secondary IP Address
Floating IP Address
Start Date/Time
4. Wait for the system to spawn your tenant appliances (time will vary depending on infrastructure). If
you want to check the status of a reservation, select appliances > reservations.
5. Verify that the appliances were created.
6. Return to Add Desktop Compute Resources above and follow the steps to complete the extended
tenant setup.
3.4 Creating a Windows 7 Gold Template
Before defining a VM as your gold template, you need to create the template. We strongly recommend against using a P2V (physical-to-virtual) conversion tool. Instead, a new OS install should be customized according to VDI best practices.
There are numerous online publications on Windows 7 VDI best practices, such as
http://www.vmware.com/files/pdf/VMware-View-OptimizationGuideWindows7-EN.pdf
The following steps must be part of the VM preparation:
1. Install VMware Tools and verify NIC settings:
a. From the vCenter UI, right-click the VM.
b. Select Guest > Install/Upgrade VMware Tools.
c. Select Edit Settings > Network Adapter.
d. Confirm adapter type is VMXNET3.
2. Enable Administrator account & confirm RDP access:
a. Right-click Computer and select Manage.
b. Select Local Users and Groups.
c. Select Users.
d. Right-click the Administrator user and select Properties.
e. On the General tab, uncheck “Account is disabled”.
f. On the Member Of tab, confirm Administrator is a member of “Remote Desktop Users”.
g. Set the Administrator password.
3. Install DaaS Agent by copying the DaaS Agent msi onto your VM and running the install.
The DaaS Agent must be configured to point at the tenant appliances. This can be done in one of two ways:
○ DaaS Agent Discovery: The tenant appliance addresses can be automatically discovered by the DaaS Agent via DHCP by utilizing option code 74. The configuration of this option is described in the DHCP configuration section earlier in this chapter.
○ Update of DaaS Agent configuration file: The tenant appliance addresses can be manually updated in the DaaS Agent configuration file. Open the file C:\Program Files (x86)\DaaS Agent\service\MonitorAgent.ini with a text editor such as Notepad (note: on 32-bit systems the path excludes the “(x86)”). Remove the semi-colon on the line containing the parameter “standby_address” and provide a comma-separated list of the tenant appliance IP addresses (a sample edited line is shown after this procedure). A restart of the DaaS Agent Windows service is required after making this change.
4. Install the PCoIP protocol:
a. Install VMware Horizon Agent (see Install Horizon Agent above).
b. Apply the PCoIP GPO (.adm file) and configure protocol settings as appropriate.
5. Join the VM to the domain and add the appropriate domain groups to the “Remote Desktop Users” group.
6. Set power options for the VM to HIGH Performance:
a. Select Control Panel > System and Security > Power Options.
b. Select High Performance.
7. Confirm Windows firewall is disabled, or at least the necessary ports are configured.
8. Confirm Windows Updates are current.
9. Optional: Log in as Administrator and remove all other accounts on the VM.
10. (Optional if users connect through other browsers) Confirm Remote settings are not using NLA. NLA may interfere with IE users trying to connect to their desktops:
a. Right click Computer and select Properties.
b. Select Remote Settings.
c. Select “Allow connections from computers running any version of Remote Desktop”.
11. Optional: Disable Ctrl-Alt-Del secure logon. Some protocols (and users) struggle with entering CTRL-ALT-DEL to log into their VM. To disable this:
a. Run “netplwiz”.
b. Select the Advanced tab.
c. Uncheck “Require Users to press CTRL-ALT-DEL”.
12. Install the appropriate protocol drivers if you choose additional protocols beyond RDP.
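For reference, after the manual edit described in step 3, the relevant line of MonitorAgent.ini would look roughly like the following, where the IPs are placeholders for your two tenant appliance addresses:
standby_address=172.16.115.21,172.16.115.22
Leave the rest of the file unchanged and restart the DaaS Agent Windows service so that the new addresses take effect.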
Once your VM is configured as you wish with the appropriate software, you can begin the conversion process. Note the following:
○ The VM must exist on the same host, storage, and VLAN mapped to the tenant.
○ Confirm the VM has been imported into the Administration console.
Tenant Installation Worksheet
The following table lists the fields you will need to specify in the Service Center when installing a tenant.
The fields are listed in the order they must be provided during the installation.
Tenant Installation Fields in Service Center
Field                                    Values   Sample Value / Notes
Tenant VLAN/VXLAN ID                              115
Tenant Gateway                                    172.16.115.1
Tenant DNS Name                                   172.16.115.2 (AD server)
Tenant Subnet mask                                255.255.255.0
Primary Tenant Appliance Name                     TenantA-Node1
Primary Tenant Appliance IP Address               172.16.115.21
Secondary Tenant Appliance Name                   TenantA-Node2
Secondary Tenant Appliance IP Address             172.16.115.22
Floating IP Address                               172.16.115.20
Tenant Host/Server IP Address                     DNS name or IP of host
Tenant Host/Server User name                      HostMgtAcct
Tenant Host/Server Password                       hostPsswd
Tenant Storage System Address                     storage.sp.com
Tenant Storage System Username                    rootAccessAct
Tenant Storage System Password                    storagePsswd
Name of Tenant Directory to Mount                 tenantAnfs
Tenant Remote Mount Point                         /vol/tenanta
Optionally, if you are configuring the Tenant Active Directory, you will also need to collect the following:
Tenant Installation Fields in Service Center
Field                        Values   Sample Value / Notes
Active Directory (AD) Name            TENANT
Domain                                tenant.com
AD Protocol                           ldaps
AD Port                               636
Nameserver                            172.16.115.2
Context                               dc=tenant,dc=com
Service Account                       CN=Administrator,CN=Users
Password                              TenantAPsswd
Admin Group                           cn=enterprisecenteradmin,ou=groups
User Group                            cn=portalusers,ou=groups
Admin User                            Administrator
Password                              AdminPsswd
Primary DNS                           172.16.115.2
4 Service Provider Installation – vCenter
4.1 Overview
This guide provides you with information on how to install and configure the DaaS platform Service
Provider appliances using vCenter discovery of the management compute resources.
In this installation, you complete the following:
1. Install the first Service Provider appliance in the first datacenter.
2. Bootstrap the Service Provider appliance. Once bootstrapped, the Service Provider Appliance provides
the foundation to install the remainder of the DaaS application.
3. Start the Service Center on the DaaS Management Appliance and configure the Service Provider
environment. Create a second Service Provider appliance for high availability (HA).
The Service Center provides a web-based UI for managing data center resources (hosts, storage, and the
DaaS management appliances). You also use the Service Center to manage tenant contracts, configure tenant
appliances and networks, and create and assign roles and permissions.
4.2 Service Provider Prerequisites
Before you can install the DaaS platform, you first need to complete the tasks listed in this section. Contact
your customer service representative for help with any of these prerequisites. You will also need the Service
Provider Installation Worksheet and Platform Install Checklist found in the System Blueprint to help
organize and collect all the information needed to complete the install.
Note: A DaaS appliance is a virtual machine combined with a functional unit of software in the DaaS platform. The Service Provider appliance provides two types of access to the system: via the Service Center web-based UI, and as a transit point for enabling SSH access to all the management appliances in the data center.
The supported combinations for managing your management appliances and tenant desktops using
VMware vCenter hypervisor management software are the following:
Table 4–1 vCenter used as Hypervisor Manager – Supported (Y) and Non-supported (N) Configurations
                    ESXi 4.0   ESXi 4.1   ESXi 5.0   ESXi 5.1   ESXi 5.5   ESXi 6.0
vCenter v 6.0 U3    N          N          N          N          Y          Y
vCenter v 6.0 U2    N          N          Y          Y          Y          Y
vCenter v 5.5 U3*   Y          Y          Y          Y          Y          N
vCenter v 5.5 U2    Y          Y          Y          Y          Y**        N
*Recommended: upgrade to ESX 5.5 Update 3b
**Strongly recommended: upgrade to vCenter 5.5 U3 to manage ESXi 5.5 U3 hosts.
3. Build the network infrastructure required to support multi-tenancy, typically accomplished with:
○ VLAN tagging for network separation at layer 2.
○ VRFs to isolate tenants and allow for a separate routing table per tenant.
The configured VLANs must be the same across all management hosts. In vCenter, there is an
additional option of using distributed virtual switches (DVS). By integrating either the VMware
vSphere Distributed Switch or the Cisco Nexus 1000V with vCenter, separation can be accomplished
using distributed switch port groups. The port group must be configured to use ephemeral port
binding. See System Blueprint for more details.
4. Allocate at least two ESXi hosts or at least one cluster for DaaS management appliances.
Install a vCenter management server meeting the version requirements. The ESXi hosts or cluster
allocated for DaaS management appliances should be in the same vCenter Datacenter. All ESXi hosts
should have the compute (RAM, CPU, local disk) required to meet expected Tenant Appliance density.
Please reference the DaaS Blueprint for more details.
Note: Single and Multiple vCenter(s) may be used to satisfy both desktop and appliance needs. If you
are using the same vCenter, all the compute resources required for management appliances and tenant
desktops must be in the same vCenter Datacenter.
5. Provide an account to access the hypervisor manager API.
On the vCenter, configure an account which can be used for the DaaS platform to manage the virtual
resources via the vSphere API. This account must have appropriate privileges.
6. Assign one subnet to the service provider network. This subnet also needs to have access to the API of
the hypervisors.
7. Assign service provider network.
Assign a VLAN, a VXLAN, or a Distributed Virtual Port Group (DVPG) to the service provider
network. This VLAN or DVPG must map to a virtual network assigned to all management hosts.
8. Assign a network to be used for DaaS platform management traffic.
Assign one VLAN (non-routable subnet), one VXLAN, or one Distributed Virtual Port Group (DVPG)
as the Link Local Network.
9. Allocate link-local addresses.
For a typical data center, it is recommended that you use a /22 network (for example, 169.254.16.0/22).
However, a demo environment or small data center can use a /24 network. You should not use
anything smaller than /24.
Note: If you have more than one data center, the link-local address space must be unique (non-overlapping) across data centers.
A link-local address is an IP address used only for communications within a link (segment of a local network) or a point-to-point connection to which a host is connected. Routers do not forward packets with link-local addresses. The address block 169.254.1.0 through 169.254.254.255 is reserved for link-local addressing in Internet Protocol Version 4. You cannot choose addresses outside this range. Refer
10. Allocate storage for management appliances.
By default, the DaaS platform will clone out management appliances on local disk (via a local
datastore). This is considered a best practice. However, if desired, it is possible to use shared storage
for management appliances. If using shared storage for management appliances on vCenter there are
a few guidelines:
○ Any shared storage (NFS, iSCSI, or FC) can be used. Ensure that the appropriate configuration
has been done (e.g. LUN masking, zoning, etc.) to allow one or more LUNs to be mapped to all of
the management hosts. The DaaS platform will use the native vCenter cloning APIs to provision
management appliances using FC or iSCSI. In order for this to work properly, datastore(s) must
be created and mapped to the same LUN(s) on all the management hosts with the same exact
name (case sensitive).
○ Datastores must be manually created on each of the management hosts.
○ The datastore name must be identical (case sensitive) on each management host.
NFS Storage Requirements
If you are using NetApp, Isilon, or Nexenta storage for DaaS management appliances, create an
account on the storage system for access to the storage system APIs.
If you are using the NetApp VSC plugin for vSphere, it must be installed on the same machine as the
vCenter server. Supported versions of the NetApp VSC plugin are 4.1.P1 and 4.2.1. When registering
the VSC plug-in with vCenter, use the vCenter administrator account credentials, which will also be
used to discover the management hypervisor later in this document. For further information please
refer to the NetApp VSC installation and configuration guide.
11. DNS Configuration
There must be a DNS server available from the Service Provider (SP) network which can be used to
resolve the name of the domain so that the Service Center can authenticate. Confirm all vSphere
servers are defined in the DNS and that the hosts and storage systems are configured locally with the
matching DNS name as well.
12. IP Address Allocation
Allocate five IP addresses in the SP network: two for the Service Provider appliances plus one for the
shared floating IP, and two for the Resource Manager appliances. If the Service Provider wants to access the Service Center using a hostname instead of an IP address, set up a DNS record to point to the floating IP address of the Service Provider appliance pair.
13. NTP Configuration
There must be at least one NTP server available from the SP network to allow for time
synchronization.
14. Active Directory Configuration
There must be an Active Directory accessible on the SP network for authentication. Have available the
information listed in the table below to configure the domain for the Service Provider. It is highly
recommended that you confirm the values using an AD tool such as AD Explorer. You can download
AD Explorer with the following link:
http://technet.microsoft.com/en-us/sysinternals/bb963907
Table 4–2 Network Information for DaaS Management Host
Configuration                          Example
NETBIOS                                SP
Domain Suffix                          sp.desktone.com
Protocol                               Ldap
Port                                   389
Context                                dc=sp,dc=desktone,dc=com
Primary DNS Server IP and name         172.16.109.2 (You only need to specify one Domain Name Server; the rest should be automatically identified)
Service Account (used to parse your AD structure through a standard LDAP query; may be read only)
                                       CN=Administrator,CN=Users (UserMustChangePassword = false, Password Never Expires) (Do not include the context in the service account name)
Service Account Password               ADPasswd
Super Admin (Service Center Access)    cn=serviceprovideradmin,ou=groups (do not include the context)
Admin Level1 (Optional)                cn=Admin-1,cn=admins,ou=groups
Admin Level2 (Optional)                cn=Admin-2,cn=admins,ou=groups
15. SSL Certificate
Provide an SSL Certificate in Apache2 format to install for a valid certificate. For more details, see
Apply Service Provider Certificate Files to Service Provider Appliances.
4.2.1 Required Files
Make sure you have all the files listed in the table below before you begin the installation. Contact your
support representative for the necessary files.
Table 4–3 Required Files
File Contents        File(s)
Appliance Template   CopperTemplate7_0_0_20170210.ova
Debians*             dt-platform-7_0_0.deb
                     cloud-connector-client_1.2.0.deb
                     av-manager-6.0.0-1071-dornFC.deb
                     dt-keybox_2.85.01.4205427.deb
                     dt-aux-7_0_0.deb
                     xmp-6.0.0-1109-dornFC.deb
                     wem-service-rbac-1.0.507.deb
DaaS Agent           VMware-DaaS-Agent-7.0.0-4883008.msi
Access Point         Latest version of .ova file
* Three of the .deb files listed—cloud-connector-client, av-manager, and wem-service-rbac—
are associated with products not supported in the current release of Horizon DaaS Platform.
The system requires that these files be installed for other reasons, not to support the
corresponding features.
4.3 Bootstrap Primary Service Provider (SP1) Appliance
Important: If you have already installed your first Datacenter and now need to install an additional
Datacenter, see Installing Multiple DaaS Datacenters.
4.3.1 Prepare Storage Configuration on Both Management Hosts
On both management hosts, add the Service Provider datastore. Be sure to use the same name on each host.
Warning: The name you use in vSphere for the storage needs to be exactly the same on each host and will
be entered into the platform as part of Section 5.
4.3.2 Deploy the DaaS OVA File
On a DaaS Management Host, using the vSphere client, deploy two copies of the DaaS ova file. The first
copy becomes the primary Service Provider (SP1) appliance upon completion of the bootstrap process. The
second copy becomes the template for all subsequent DaaS management appliances.
Note: Make sure that you have downloaded the DaaS ova file (Appliance Template) listed in the Required Files table above. You need to locate the file on a Windows drive accessible by the vSphere client in order to deploy it from the vSphere client. vSphere cannot natively mount a Linux partition or connect to an NFS share.
1. Start the vSphere client.
2. Select File > Deploy OVF Template to deploy the first copy of the ova file, which becomes the first
Service Provider appliance. The vSphere client launches the Deploy OVF Template wizard. The wizard
has six steps. After completing each section, click Next.
a. Source: Browse for the .ova file you downloaded.
b. OVF Template Details: Click Next to skip this step.
c. Name and Location: Rename the VM with the name of the Service Provider Appliance you
defined in the Service Provider Installation Worksheet, for example “DatacenterName-sp1”.
d. Storage: Deploy the SP1 Appliance to local/shared storage on one of the two management hosts.
e. Disk Format: Click Next to skip this step.
f. Network Mapping: The first column lists the two Source Networks. For each, select a Destination
Network. The first network (VM Network) should point to the Service Provider Network. The
second network (Dev Network) should point to the Link Local Backbone Network.
g. Ready to Complete: Click Finish. A dialog indicates the status of the deployment.
3. Select File > Deploy OVF Template again to deploy the second copy of the ova file, which becomes
the template for all subsequent DaaS management appliances. In the wizard, specify the source of the
ova file (the same as in Step 2a), a name that distinguishes the file as the DaaS management appliance
template, the service provider local/NFS storage, and the destination network (as defined in the
previous step). We recommend you prefix the appliance template name with the name of your data center, for example “DatacenterName-template”. The name of the template must be unique across all datacenters. Also make sure that the password you enter in the wizard is the same as the one you entered for the first copy of the file.
4.3.3 Run the Bootstrap Script to Configure Network on SP1 Appliance
Run the bootstrap script to configure the network on the Service Provider appliance.
1. From the vSphere client, power on the SP1 appliance and open the console window.
2. Login using User: desktone Password: Desktone1 (default template password)
Note: For greater security, you should specify a custom appliance password when prompted by the
bootstrap script. Each datacenter can have its own unique appliance password.
3. Begin the bootstrap process by executing the following command:
sudo /usr/local/desktone/scripts/bootstrap.sh
4. The bootstrap script prompts you to enter network information for the fields listed in the table below.
The values shown are sample values only; enter the values you noted in the Service Provider
Installation Worksheet and Platform Install Checklist found in the System Blueprint. After you finish
entering the network information, the host reboots. It might take five minutes for the appliance to
start after reboot. Because the node is not configured until the reboot completes, disregard any error
messages displayed on the console.
5. After the host reboots, you can login via putty or any other ssh terminal you choose.
Table 4–4 Installing the First Datacenter: Network Information for DaaS Management Host
Field                                             Sample Value        Notes
Accept EULA agreement?                            yes                 If the EULA agreement is not accepted, the bootstrap script will exit.
Existing multi-datacenter setup                   no                  This asks whether you are building a new environment or joining an existing one. Select “no” to install the first datacenter.
Datacenter name                                   CityOfFirstDC
IP for eth1 (backbone)                            169.254.4.20        For the Backbone network (must be a link-local address)
Netmask CIDR format (0-32)                        22                  For the Backbone network
IP for eth0 (SP)                                  172.16.109.20       For the SP network
Netmask CIDR format (0-32)                        24                  For the SP network
Gateway                                           172.16.109.1        For the SP network
Hostname (appliance FQDN)                         SP1.DESKTONE.COM    Match the name used for the IP on the SP datacenter.
Name server                                       172.16.109.2
NTP server                                        172.16.3.1          Enter a value only if you have a time server.
Is this an HA Service Provider appliance setup?   yes
Floating IP Address                               172.16.109.26
psql password                                     dtPasswd            Must have at least eight characters and contain at least one each of upper case letter, lower case letter, number, and special character. This alters the psql passwords for the admin, master, slave, and slony users. The password is not displayed on the screen.
Appliance password                                myPasswd            Must have at least eight characters and contain at least one each of upper case letter, lower case letter, number, and special character. This is the user-defined password for Service Provider appliances in this datacenter. Any Service Provider appliance accessible by ssh requires this custom password.
Does this configuration look correct?             yes or no           The information echoed back includes two internal values, Data Center UID and VMGR UID, which you can ignore.
4.3.4 Copy the DaaS Software to the Service Provider Appliance
1. Log into the SP1 appliance via putty (or equivalent), using the following credentials:
User: desktone
Password: the appliance password you set previously
2. Copy the following files to the /tmp directory on the SP1 appliance. (Do not copy the files to
/data/repo at this time.)
dt-platform-7_0_0.deb
cloud-connector-client_1.2.0.deb
av-manager-6.0.0-1071-dornFC.deb
dt-keybox_2.85.01.4205427.deb
dt-aux-7_0_0.deb
xmp-6.0.0-1109-dornFC.deb
wem-service-rbac-1.0.507.deb
3. At the appliance command prompt, move the files into the /data/repo directory on the appliance by
running the following for each debian file:
sudo mv /tmp/<name of debian file> /data/repo
Note: You can confirm that you have copied the correct files by doing the following:
a. Navigate to the repo directory:
sudo -i
cd /data/www/repo
b. Run the following for each debian file:
reprepro -A amd64 list precise <debian package name>
For example:
reprepro -A amd64 list precise dt-platform-7-0-0
precise|main|amd64: dt-platform-7-0-0 7.0.0
reprepro -A amd64 list precise dt-aux-7-0-0
precise|main|amd64: dt-aux-7-0-0 7.0.0
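If the seven .deb files are the only .deb files in /tmp (an assumption), the per-file move in step 3 can also be done in a single command:
sudo mv /tmp/*.deb /data/repo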
4.3.5 Run bootstrap Script to Install the DaaS Software
1. Run the bootstrap shell script a second time to install the DaaS software:
sudo /usr/local/desktone/scripts/bootstrap.sh
System reboots.
2. SSH into the system again, wait for the log file to appear, and watch the desktone.log file:
tail -f /var/log/desktone/desktone.log
The system is up when you see a message similar to this in the log:
Appliance deployed flag set to started
3. Open a browser and log into https://172.16.109.xxx/service, or whatever IP you supplied for “IP for
eth0 (SP)” above.
Note: It might take five minutes for the appliance to start after reboot. Because the node is not configured
until the reboot cycle completes, you can disregard any error messages displayed on the console.
4.4 Configure Service Center
4.4.1 Start the Service Center
1. Start the Service Center by entering the URL or IP address in a browser. For example:
https://<IP for eth0 (SP)>/service
Replace <IP for eth0 (SP)> with the IP address that you specified above.
2. You can safely ignore the warning about the website’s security certificate and proceed to the Service
Center page.
4.4.2 Register the Service Provider Domain
The first time you access the DaaS Service Center, the Register a domain page displays so that you can
provide Microsoft Active Directory domain information. You enter the information on two tabs: Domain
Bind and Group Info. This information is required to access Microsoft Active Directory and to authenticate
users. Make sure you have the DN information available to register the domains.
Important: You must enter all information on both tabs (Domain Bind and Group Info) without letting
your browser session expire. If you need to enter the information on the second tab (Group Info) at a
later date, make note of the URL of the first tab (Domain Bind) so that you can navigate back to the Register a Domain page.
1. On the Register a Domain page, Domain Bind tab, enter values for the fields listed in the table below.
Table 4–5 Domain Registration, Domain Bind Tab
Field                                Sample Value
Name                                 SP
Domain Suffix                        sp.desktone.com
Protocol                             Ldap
Directory Server Name                MicrosoftAD (leave this default; you do not need to select a Directory Server Name)
Port                                 389
Domain Controller IPs (DNS Server)   172.16.109.2
Context                              dc=sp,dc=desktone,dc=com
Domain Bind Account DN               CN=Administrator,CN=Users (do not include the context)
Password                             ADPasswd
Password Verify                      ADPasswd
2. Click Save.
3. On the Register a Domain page, Group Info tab, start typing a value for Admin Groups. The system
will offer suggestions for auto-complete. For example, cn=serviceadmins,ou=groups.
4. Click Save.
The Service Center login page is displayed.
5. Enter your username, password, and domain then click Login.
4.4.3 Discover the DaaS Management Server
The Discover Management Server page is displayed. Use this page to discover the vCenter Server which
holds the DaaS Management Appliance Template (.ova file) you imported. The template is used for creating
DaaS management appliances.
1. Enter values for the fields listed in the table below. Enter the IP address or FQDN of the vCenter
Server that is hosting the SP1 appliance.
Table 4–6 vCenter Management Discovery
Field                 Sample Value
IP Address/Hostname   mgVC1.domain.desktone.com
Username              root
Password              vCenterPasswd
2. Click Discover Server.
The system prompts you to accept the certificate for the vCenter.
3. Click Accept.
The system indicates it is discovering host and calculating capacity.
4. If prompted, select the vCenter Datacenter that contains the compute resources for DaaS. This vDC
should also contain the DaaS Appliance Template (.ova file) that you imported.
Note: If you only have 1 vDC, it will be automatically selected and a prompt will not appear.
5. Select the Compute Resource(s) (ESXi hosts or Cluster) you have designated for DaaS Appliances. These will be used to provision appliances.
Note: A minimum of 2 ESXi hosts is required for high availability (HA). A cluster is considered to
be HA on its own.
For each selected Compute, a dialog appears. If the server is too small to accommodate the ratios, you
may be prompted to re-configure them.
6. Make any desired changes to the ratios and/or the Usage setting (Service, Tenant, or Network), and
click Save to save the values.
7. After setting the ratios, a VM list will be displayed containing all the VMs from the compute selected
as part of step 5. Select the DaaS appliance template from this list.
Note: This should NOT be the Service Provider appliance itself, but should be the Appliance .ova
that was deployed earlier.
Once the system has discovered the appliance template, the Browse Tenants screen is displayed.
4.4.4 (Optional) Rename Resource Manager
1. In the Service Center, Select service grid > resources.
2. In the Resource Managers panel on the left, click on the IP address of the resource manager.
3. On the General tab, in the Name field, double-click on the IP address of the resource manager. A text
box opens in which you can change the name.
4. Change the name to a user-friendly name, for example “Service Provider RMGR”, and click OK.
4.4.5 (Optional) Turn off Local Disk Provisioning
If you are installing the DaaS management appliances on shared storage and you intend to deploy and run
your management appliances from the shared storage instead of local storage, then you must first change
the default behavior of the application:
1. Select tenants > policy to display the Policy Configuration screen.
2. Select Service Provider from the dropdown.
3. Scroll through the list of policies and double click the value to set the vmgr.appliance.local.disk policy
to false.
4.5 (Optional) Configuring the Netapp VSC Plugin
If you are not using the Netapp VSC plugin for storage then skip this section and proceed to the next
section entitled Create the Remaining Service Provider Appliances to continue the vCenter install. There is
no need to define storage when managing with vCenter in the DaaS platform as this is already configured
through the vCenter client.
If you are using the Netapp VSC plugin for storage perform the following steps:
1. In Service Center, select service grid > resources.
2. Select the “Service Provider RMGR” in the pane on the left, then Storage Systems in the pane on the
right.
3. On the Storage Systems tab, select the Add Storage System link.
4. Enter values for the fields listed in the table below.
Important: When adding a storage system to the DaaS platform, the Address field must match how
the VSC plug-in was discovered for vCenter. If the plug-in was discovered as an IP address you
must enter the IP address in the Address field. If it was discovered as a FQDN, then you must enter
the complete domain name in the Address field.
Table 4–7 Storage System
Field      Sample Value
Address    storage.desktone.com or 172.16.10.21
Username   root
Password   storagePswd
5. Click the Add Storage System button.
The system adds the name of the storage system to the Storage Systems tab.
4.6 Create the Remaining Service Provider Appliances
DaaS requires that Service Provider management appliances be installed as High Availability (HA) pairs. To
ensure physical hardware high availability, HA DaaS Management appliance pairs are distributed across
two physical DaaS Management Hosts. With vCenter, the appliances are automatically distributed to the
management hosts you selected in Section 4.3.
4.6.1 Create HA Service Provider (SP2) Appliance
1. Select service grid > data centers.
2. Click the Edit button (at end of line for new data center). The system displays the Edit Data Center
popup.
3. Verify the displayed information and click Add Appliances.
The Appliance Install screen displays.
4. Select Service Provider Appliance from the Appliance Type drop-down and enter values for the fields listed in the table below.
Table 4–8 Secondary Server
Field        Sample Value    Notes
Name         SP2             Do not use fully qualified domain name.
IP Address   172.16.109.21
5. Enter values for the New Reservation fields listed in the table below.
Table 4–9 New Reservation
Field           Sample Value / Notes
Friendly Name   Create SP2
Start Date      Select Today from the drop-down or enter the month, day, and year.
Start Time      Enter 00:00 to indicate now, or the actual time in UT format.
6. Click Create Appliance.
If you want to check the status of a reservation, select appliances > reservations.
4.6.2 Apply Service Provider Certificate Files to Service Provider Appliances
The DaaS platform allows you to upload custom SSL certificates for each service provider appliance. To
enable a custom certificate, you upload three certificate files in Apache format: SSL Certificate, SSL Key, and
CA Certificate.
Note: To upload the three certificate files, you navigate to the Certificates tab under configuration (this is
a different Certificates tab than the one used for tenants).
Procedure
1. In Service Center, select configuration > general.
2. Select the Click here link.
3. Click the Certificates tab.
4. On the Certificates tab, browse for and select the following three files:
○ CA Certificate: The public certificate from a certificate authority that was used to sign the service
provider certificate. This file will have a .pem or .crt extension.
○ SSL Certificate: The service provider’s public certificate, which was signed by the CA. This file
has a .crt extension, which indicates that it is a certificate file.
○ SSL Key: The private key used to decrypt the service provider’s SSL certificate. This is needed in
order to be able to respond to certificate requests. This file has a .key file extension.
5. Click Submit to upload the files.
6. Select the Click here link to install the certificate on the service provider appliances.
To get the SSL Certificate file the service provider administrator should submit a certificate sign request to
their certificate authority. Their certificate authority will provide the administrator with a certificate file
(.crt) which can be provided to the DaaS service provider to be uploaded. For more information on how to
get a signed certificate, contact the certificate authority.
Note: If the IP address or URL for the Service Center does not resolve to the service provider CN in their
certificate, the service provider administrator may wish to include in their certificate a Subject Alternative
Name so that the desktop portal’s URL accessed by web clients can be matched to the uploaded service
provider certificate. For more details on how to add a Subject Alternative Name to the certificate, contact the
certificate authority.
4.7 Add Tenant Resource Manager
4.7.1 Create a Tenant Resource Manager Appliance
1. In the Service Center, select service grid > data centers. The Data Centers page appears. The page
contains a table of the available data centers.
2. Find the line for your data center and click Edit. The Edit Data Center popup appears.
3. Click Add Appliances.
The Appliance Install page appears.
4. In the Appliance Type drop-down, select Resource Manager.
The page displays the data entry fields for the Primary and Secondary resource managers.
5. Enter values for the fields listed in the table below. The IPs belong in the Service Provider network (not
the link-local network)
Table 4–10 Resource Manager
Field            Sample Value
Primary Name     RMGR1
Primary IP       172.16.109.22
Secondary Name   RMGR2
Secondary IP     172.16.109.23
6. Enter values for the New Reservation fields listed in the table below.
Table 4–11 New Reservation
Field           Sample Value / Notes
Friendly Name   Create RSMGR
Start Date      Select Today from the drop-down or enter the month, day, and year.
Start Time      Enter 00:00 to indicate now, or the actual time in UT format.
7. Click Create Appliance.
If you want to check the status of a reservation, select appliances > reservations.
8. Select service grid > resources to see the tenant resource manager once it is up and running.
4.7.2 (Optional) Give the Tenant Resource Manager a Friendly Name
1. In the Resource Managers panel on the left side of the page, click on the IP address of the new resource
manager.
2. On the General tab, in the Name field, double-click on the IP address of the resource manager. A text
box opens in which you can change the name.
3. Change the name to a user-friendly name, for example “Tenant RMGR”, and click OK.
4.7.3 (Optional) Configuring Netapp VSC plugin for Tenants
If you are not using the Netapp VSC plugin for storage then skip this section and proceed to the next
section entitled Define Tenant Models to continue the vCenter install. There is no need to define storage
when managing with vCenter in the DaaS platform as this is already configured through the vCenter client.
See Service Provider Prerequisites for more details on allocating storage for management appliances.
If you are using the NetApp VSC plugin for storage, perform the steps below:
1. In the Service Center, select service grid > resources.
2. Select the “Tenant RMGR” in the pane on the left, then Storage Systems in the pane on the right.
3. On the Storage Systems tab, select the Add Storage System link.
4. Enter values for the fields listed in the table below.
Important: When adding a storage system to the DaaS platform, the Address field must match how
the VSC plug-in was discovered for vCenter. If the plug-in was discovered as an IP address you
must enter the IP address in the Address field. If it was discovered as a FQDN, then you must enter
the complete domain name in the Address field.
Table 4–12 Storage System
Field       Sample Value
Address     storage.desktone.com or 172.16.10.21
Username    root
Password    storagePswd
5. Click the Add Storage System button.
The system adds the name of the storage system to the Storage Systems tab.
4.8 Define Desktop Models
1. In the Service Center, on the main menu, select configuration > desktop models.
The Desktop Models page appears.
2. Click the Add desktop model link.
3. Enter values for the fields listed below.
Table 4–13 Add Desktop Model
Name: Use a generic name that describes the hardware and that you can reuse between tenants. The name you enter appears in the Administration console Desktop Models page.
Session Based: Choose Yes to provision remote desktop connections using Microsoft RDSH (Remote Desktop Services). Selecting Yes automatically sets the Desktop Type to dynamic.
Reference ID: An optional text field that can be used for a customer-specific tracking ID.
Desktop Type: Choose one of the types from the dropdown: Selectable, Static, or Dynamic.
Memory: Enter the memory allocated to each virtual desktop, specified in megabytes. For a session-based model (RDSH), the memory allocated to each desktop is typically higher because each desktop supports many sessions.
Number of CPUs: Enter the number of virtual CPUs allocated to each virtual desktop.
GPU Video Memory: Enter the amount of GPU video memory to be allocated to each virtual desktop.
Important: Carefully review your model settings; desktop models cannot be deleted once they are created.
4. Click Add desktop model.
A summary line for the new desktop model appears in the table.
4.9 Installing Multiple DaaS Datacenters
This section walks you through setting up an additional datacenter (DC2) after you have successfully installed your first datacenter (DC1). You must have completed the Service Provider Prerequisites for DC2, just as you did for DC1.
Note: Each DaaS Datacenter must be in its own vCenter. Therefore DC2 will require a separate vCenter from the one hosting DC1.
Within that DC2 vCenter, you can host both your management appliances and tenant desktops or you can
use separate vCenters for each. If you are using the same vCenter, all the hosts required for management
appliances and tenant desktops must be in the same vCenter Datacenter.
4.9.1 Prepare Storage Configuration on Both Management Hosts
On both DC2 management hosts, add the Service Provider datastore. Be sure to use the same name on each
host.
4.9.2 Deploy OVA File
On the DaaS Management Host for DC2, using the vSphere client, deploy two copies of the DaaS
Management Appliance Template OVA file. The first copy becomes the SP1 appliance for DC2 upon
completion of the bootstrap process. The second copy becomes the template for all subsequent DaaS
management appliances in DC2.
Prerequisite: Make sure that you have downloaded the DaaS ova file (appliance template), as outlined in
Service Provider Prerequisites on page 2. You need to locate the DaaS ova file on a Windows drive
accessible by the vSphere client in order to deploy it from vSphere (vSphere cannot natively mount a Linux
partition or connect to the NFS share).
1. In the vSphere client, select Deploy OVF Template to deploy the first copy of the ova file, which
becomes the first Service Provider appliance in the new datacenter.
2. In the vSphere client, select Deploy OVF Template again to deploy the second copy of the ova file,
which becomes the template for all subsequent DaaS management appliances in this new datacenter.
We recommend that you prefix the appliance template name with the name of your data center. The name of the template must be unique across all datacenters.
4.9.3 Run the Bootstrap Script
Run the bootstrap script to configure the network on the Service Provider appliance. You run the bootstrap
script twice: once to configure the network on the appliance, and then a second time to complete the
bootstrap process.
1. From the vSphere client, power on the primary Service Provider appliance in DC2 and open the
console window.
2. Log in using the following credentials:
User: desktone
Password: Desktone1
Note: For greater security, you should specify a custom appliance password when prompted by the
bootstrap script. Each datacenter can have its own unique appliance password.
3. Begin the bootstrap process by executing the following command:
sudo /usr/local/desktone/scripts/bootstrap.sh
4. The bootstrap script prompts you to enter network information for the new Datacenter needed by the
DaaS Management Host. The network information requested by the script is listed in the table below.
The values shown are sample values only; the values you enter are the values you collected in the
Service Provider Installation Worksheet and Platform Install Checklist found in the System Blueprint.
Installing Subsequent Datacenter: Network Information for DaaS Management Host

Field | Sample Value | Notes
Accept EULA agreement? | Yes | If the EULA agreement is not accepted, the bootstrap script exits.
Existing multi-datacenter setup? | Yes | Because this is not the first datacenter but is instead an additional datacenter, answer yes.
Is this the master datacenter? | No | The master datacenter is always the first datacenter.

Master Datacenter Information
Enter the eth0 IP address of any SP appliance in the other Datacenter | 172.16.110.234 |

Local Datacenter Information
Datacenter name | CityofSecondDC |
IP for eth1 (backbone) | 169.254.4.20 | For the DC2 Backbone network (must be a link-local address).
Netmask CIDR format (0-32) | 22 | For the DC2 Backbone network.
IP for eth0 (SP) | 172.16.109.20 | For the DC2 SP network.
Netmask CIDR format (0-32) | 24 | For the DC2 SP network.
Gateway | 172.16.109.1 | For the DC2 SP network.
Hostname (appliance FQDN) | SP1.DESKTONE.COM | Match the name used for the IP on the SP network of the data center.
Name server | 172.16.109.2 |
NTP server | 172.16.3.1 | Enter a value only if you have a time server.
Is this an HA Service Provider appliance setup? | yes |
Floating IP Address | 172.16.109.26 |
desktone@<IP address>'s password | myPasswd | <IP address> is the IP address of the primary Service Provider appliance in the primary data center.
Appliance password | myPasswd | The user-defined password for Service Provider appliances in this datacenter. Any Service Provider appliance accessed by ssh requires this custom password.
Does this configuration look correct? | yes or no | The information echoed back includes two internal values, Data Center UID and VMGR UID, which you can ignore.
5. After the host reboots, you can log in to the SP1 appliance via PuTTY or any other SSH client using the following credentials:
User: desktone
Password: the appliance password you established during bootstrap for DC2
4.9.4 Rerun the Bootstrap Script
You run the bootstrap script a second time to install the DaaS software. Note that you do not need to copy
the software to the primary Service Provider appliance in DC2. It will be automatically downloaded from
the first data center.
In the new Datacenter, rerun the bootstrap shell script:
sudo /usr/local/desktone/scripts/bootstrap.sh
Note: During the second bootstrap you will be asked to provide the root appliance password for the primary service provider appliance in the primary DC.
Note: It might take five minutes for the appliance to start after reboot. Because the node is not configured
until the reboot cycle completes, you can disregard any error messages displayed on the console.
4.9.5 Start the Service Center and Discover Management Host
1. Open a browser window and start the Service Center on the new Service Provider Appliance you just
bootstrapped. For example:
https://<IP for eth0 (SP)>/service
Replace <IP for eth0 (SP)> with the IP address that you specified in the table above.
2. You can safely ignore the warning about the website’s security certificate and proceed to the Service
Center page.
3. Enter your username, password, and domain then click Login.
The Discover Management Server page is displayed. Use this page to discover the vCenter Server
which holds the virtual machine template (.ova file) you imported as a prerequisite to this installation.
The template is used for creating DaaS management appliances.
Enter values for the fields listed in the table below, including the IP address/fully qualified domain
name of the vCenter Server that is hosting the SP1 Service Provider appliance of DC2.
vCenter Management Discovery
Field                  Sample Value
IP Address/Hostname    mgVC2.domain.desktone.com
Username               root
Password               vCenterPasswd
Note: Each DaaS Datacenter must be in its own vCenter. Therefore DC2 will require a separate vCenter from the one hosting DC1.
4. Click Discover Server.
The system prompts you to accept the certificate for the vCenter.
5. Click Accept.
The system indicates that it is discovering the host and calculating capacity.
6. If prompted, select the vCenter Datacenter that contains the compute resources for DaaS. This vDC
should also contain the DaaS Appliance Template (.ova file) that you imported.
Note: If you only have one vDC, it is automatically selected and a prompt does not appear.
7. Select the Compute Resource(s) (ESXi hosts or cluster) that you have assigned for DaaS appliances. These will be used to provision appliances.
Note: A minimum of 2 ESXi hosts is required for high availability (HA). A cluster is considered to
be HA on its own.
For each selected Compute, a dialog appears. If the server is too small to accommodate the ratios, you
may be prompted to re-configure them.
8. Make any desired changes to the ratios and/or the Usage setting (Service, Tenant, or Network), and
click Save to save the values.
9. After you set the ratios, a VM list is displayed containing all the VMs from the compute resources selected in step 7. Select the DaaS appliance template from this list.
Note: This should NOT be the Service Provider appliance itself, but should be the Appliance .ova
that was deployed earlier.
Once the system has discovered the appliance template, the Browse Tenants screen is displayed.
4.9.6 Complete the Datacenter Build Out
The remainder of the process for the additional Datacenter is the same as for the first Datacenter, beginning
in Section 4.4. You need to complete the following tasks:
● (Optional) Rename Resource Manager
● (Optional) Turn off Local Disk Provisioning
● (Optional) Add Storage for Service Provider Appliances
● Create the Remaining Service Provider Appliances
● Add Tenant Resource Manager
● Define Tenant Models
5 Access Point Setup
5.1 Overview
This section describes the process for setting up Access Point, which has replaced Remote Access Manager
(dtRAM) in DaaS deployments. Access Point is a VMware developed End-User Computing (EUC) appliance
that acts as a specialized gateway (or reverse proxy) that manages access to enterprise EUC products
deployed in a private or public cloud. It consolidates functionality that was previously implemented in
various enterprise EUC products, and simplifies deployments for customers who use multiple EUC
products within their environments.
The following are advantages of migrating to Access Point:
● Customers who migrate to Access Point can reduce their firewall open ports to 443, 4172 and 8443.
● Access Point properly handles SSL certificates for HTML Access (Blast) so that a certificate will no
longer be required on the virtual desktop.
Note: For internal access not via Access Point, desktops will still need to have SSL certificates.
5.1.1 High-Level Architecture
The diagram below shows the high-level architecture of Access Point configured in a DaaS deployment.
5.1.2 Basic Functionality
The basic functionality of Access Point is as follows.
● The client makes a connection to the reverse proxy, and when the response comes back, the client
intercepts it.
● The connection can be established by either a browser or the Horizon client.
● Once a virtual desktop session is established, the PCoIP SG, Blast SG, or View Tunnel may be used for
the virtual desktop traffic, depending on what protocol the user has selected. The tunnel is used for the
RDP protocol as well as USB connections.
Access Point used in a Horizon DaaS deployment has the following characteristics:
● There will be no authentication (at least for the first release). This responsibility will remain within the
Tenant Appliance.
● All communication will be proxied through Access Point if the end-user is accessing the solution from
outside of the corporate network. This includes:
○ All View specific protocol handling (XMLAPI, PCoIP, etc)
○ Any Tenant Appliance communication
5.1.3 Access Point vs. dtRAM
The main differences between dtRAM and Access Point are outlined in the table below.
dtRAM (no longer supported) | Access Point
Tenant appliance sits in front of the dtRAM and controls its operations | Access Point appliance sits in front of the tenant appliance so that the tenant does not know it exists. The tenant requires software changes to accommodate this new architectural shift.
Does not make use of a PSG (or BSG or Tunnel) gateway that is installed | Makes use of a PSG (or BSG or Tunnel) gateway that is installed
Needs to use a wide range of ports for PCoIP etc. from the client and requires customers to open all of these ports to allow access | All PCoIP traffic can come in on the standard port (4172). Other single ports are used for BSG and Tunnel.
BSD-based and uses "pf" to forward traffic | Linux appliance with built-in proxying capabilities
Supports HA clustering | HA clustering is possible if you choose to configure load balancers (see example in Appendix A)
Supports geographically dispersed datacenters | Does not support geographically dispersed datacenters in the first release
Has security weaknesses because it can only validate traffic based on source IP address | Uses deep protocol inspection techniques to ensure that traffic from the client is properly validated before it is passed on to the virtual desktops
5.1.4 Performance
The following are some considerations regarding Access Point performance.
● Capacity – Access Point has been tested with as many as 2,000 concurrent sessions, but the number of
sessions your system can handle depends on the amount of data being sent and received (for example,
video content).
● Monitoring – Access Point does not currently have an internal monitoring tool, but you can obtain usage information using vCloud Director monitoring of the tenant appliance.
● Rebooting – Rebooting Access Point disconnects all active users. Each user's desktop session remains active, but the user must reestablish the connection to regain access to the desktop. If multiple Access Points are deployed behind a load balancer, active and new users can immediately reconnect via the load balancer, and their connections are handled by another Access Point while one is rebooting.
● High Availability / Failover – HA clustering is possible if you choose to configure load balancers (see
example in Appendix A).
5.2 Set Up Access Point
For more information about Access Point configuration, see VMware Access Point Documentation.
Note: You cannot deploy an Access Point VM from a vSphere Windows client. You must deploy it from
the vSphere web client.
Note: Default tenant appliance certificates should not be used for configuring Access Point 2.5 or above.
Custom certificates for Tenant should be uploaded from the Service Center user interface and those
certificates should be used for configuring Access Point.
1. Download the latest version of the Access Point OVA file.
2. Determine the IP addresses (DNS/Netmask/Gateway) for the required networks, as described below.
Configuration | Networks | Description
3 NIC (Recommended configuration) | Internet (NIC 1) | Any network with internet access
3 NIC (Recommended configuration) | Management (NIC 2) | This can be your 169 network. Since this does not have its own DNS or Gateway, you can enter any numbers for DNS and set the netmask to 255.255.255.0
3 NIC (Recommended configuration) | Backend (NIC 3) | Network that the Tenant uses for desktops
2 NIC | Internet (NIC 1) | Network that the Tenant is on
2 NIC | Management (NIC 2) | This can be your 169 network. Since this does not have its own DNS or Gateway, you can enter any numbers for DNS and set the netmask to 255.255.255.0
1 NIC | Internet (NIC 1) | Network that the Tenant is on
Note: If NIC 2 is present, then the administration server (port 9443) that provides the REST APIs
will only listen on that NIC. This server is accessed by the "apsetup.sh" script used in Step 5 below.
If NIC 2 is not present, then that administration server listens on all of the interfaces.
3. In the vSphere web client, follow the normal method for deploying a template. In the “Customize template” step, enter information as shown below.
Note: The fields below may not all appear, depending on your configuration, and may also appear
in a different order than that shown below.
Heading | Field | Value
Networking Properties | External IP Address | Enter the physical IP address of NIC 1. Note: If user access is via a NAT address, do not enter that address here.
Networking Properties | DNS server addresses | Enter the IP of the DNS server that the Access Point will use to resolve hostnames.
Networking Properties | Management network IP Address | If the configuration is 3 NIC or 2 NIC, enter the Management Network IP from the previous step.
Networking Properties | Backend network IP Address | If the configuration is 3 NIC, enter the Backend Network IP from the previous step.
Password Options | Password for the root user of this VM | Enter the initial password for the root user. This must be a valid Linux password.
Password Options | Password for the admin user, which enables REST API access | Enter the password to be used for the REST API admin user. The password must be at least eight characters long and must contain at least one upper case letter, at least one lower case letter, at least one number, and at least one special character (!, @, #, etc.).
System Properties | Locale to use for localized messages | en_us
System Properties | Syslog server URL | Leave blank
Horizon Properties | Horizon server URL | Leave blank
Horizon Properties | Horizon server thumbprints | Leave blank
4. When you have finished the deployment process, power on the VM and wait for the login screen to appear on the console.
5. On the tenant appliance, run the following command:
sudo /usr/local/desktone/scripts/apsetup.sh
6. Enter yes or no to the initial two prompts, as described below.
Prompt | Value
Do you want to setup this access point for internal access . . . : | The default value is no. If you enter anything other than y or yes, it defaults to no and the access point is configured for external connections in the DMZ network. In most cases you will use the external configuration. Enter yes to make this an internal access point so that the PCoIP traffic goes directly to the desktops, bypassing the access point.
Do you want to allow Horizon Air Helpdesk Console access . . . : | Enter yes to allow Horizon Air Helpdesk Console access through the access point, or no to not allow access. Horizon Air Helpdesk Console is a console access tool that allows you to run health scans, provide remote assistance, and view history and audit information for each VM in your system. Note: This is a beta feature and is not supported at this time. It is available only for vCloud deployments, not for vCenter deployments. For more information about trying this tool, please contact your deployment representative.
The system now proceeds to the Access Point Configuration prompts.
7. Enter the requested information for the Access Point appliance:
Prompt | Value
Admin Password: | Password for the admin user of the Access Point.
Management IP: | The same address you entered above for Management network IP Address.
External IP: | The IP address for NIC 1 or the NAT IP address of NIC 1.
External Hostname [xx.xx.xx.xx]: | Default hostname shown in brackets.
External PCoIP Port [4172]: | Default PCoIP port shown in brackets: 4172
External HTML Access Port [8443]: | Default HTML Access port shown in brackets: 8443
External Tunnel Port [443]: | Default tunnel port shown in brackets: 443
8. The response status returned indicates whether the configuration was successful.
Response status | Result
200 | Configuration successful
400 | Invalid input
401 | Password incorrect. Confirm that the password matches the admin password configured during OVA deployment.
000 | One of the following: a network connection failure (confirm that the IP address matches the management IP address configured during OVA deployment), or the REST API password does not meet the password criteria.
9. If dtRAM was previously in use in this environment, set the element.allocator.ram.use policy to false and remove the associated NAT and firewall rules.
10. Configure NAT and firewall rules to allow access to the Access Point appliance through the Internet network.
Note: When using an edge gateway load balancer, the NAT rules for ports 80 and 443 are not required. These ports are forwarded automatically.
Port | Usage
4172/tcp, 4172/udp | PCoIP desktop access protocol
8443/tcp | HTML desktop access protocol
443/tcp | Secure web portal access
80/tcp | Insecure web portal access (will be redirected to 443)
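To spot-check the NAT and firewall rules from outside the firewall, you can probe the published ports with standard tools. This is an optional, illustrative check only: replace ap.example.com with your Access Point's external (or NAT) address, and note that UDP 4172 cannot be fully verified with a simple TCP probe.
nc -zv ap.example.com 443       # secure web portal / tunnel
nc -zv ap.example.com 4172      # PCoIP (TCP side only)
nc -zv ap.example.com 8443      # HTML Access (Blast)
curl -kIL http://ap.example.com/    # port 80 should redirect to 443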
5.3 Example of Load Balancer Configuration
Note: The following is an example of the process for configuring a load balancer. The settings you use
will be different.
1. Choose an external IP to use for NAT (for example, 1.2.3.4).
2. Choose three external ports per Access Point for NAT (for example, [41721, 8443, 4431], [41722, 8444,
4432]).
3. Log in to the vCloud Director interface as an Organization Administrator.
4. Navigate to Edge Gateway Services:
a. Click Administration in the top menu.
b. Click Virtual Datacenters in the Administration pane to the left.
c. Click the Virtual Datacenter name in the pane on the right.
d. The pane on the right has a row of tabs along the top. Click the Edge Gateways tab.
e. In the list of Edge Gateways, click one to select it.
f. Right-click the Edge Gateway and click Edge Gateway Services.
5. Configure DNAT:
a. On the Edge Gateway Services page, click the NAT tab.
b. Configure as shown below.
Applied On | Type | Original IP | Original Port | Translated IP | Translated Port | Protocol
external | DNAT | 1.2.3.4 | 41721 | 192.168.0.10 | 4172 | TCP & UDP
external | DNAT | 1.2.3.4 | 8443 | 192.168.0.10 | 8443 | TCP
external | DNAT | 1.2.3.4 | 4431 | 192.168.0.10 | 443 | TCP
external | DNAT | 1.2.3.4 | 41722 | 192.168.0.11 | 4172 | TCP & UDP
external | DNAT | 1.2.3.4 | 8444 | 192.168.0.11 | 8443 | TCP
external | DNAT | 1.2.3.4 | 4432 | 192.168.0.11 | 443 | TCP
6. Configure Firewall:
a. On the Edge Gateway Services page, click the Firewall tab.
b. Configure as shown below.
Name | Source | Destination | Protocol | Action
ap1-pcoip | any:any | 1.2.3.4:41721 | TCP & UDP | Allow
ap1-blast | any:any | 1.2.3.4:8443 | TCP | Allow
ap1-tunnel | any:any | 1.2.3.4:4431 | TCP | Allow
ap2-pcoip | any:any | 1.2.3.4:41722 | TCP & UDP | Allow
ap2-blast | any:any | 1.2.3.4:8444 | TCP | Allow
ap2-tunnel | any:any | 1.2.3.4:4432 | TCP | Allow
7. Configure load balancer pool servers:
a. On the Load Balancer tab, click Pool Servers and click Add.
b. On the Name & Description tab, type a name and optionally a description for the pool server.
c. Click Next.
d. On the Configure Service tab:
● Click Enable for HTTP and HTTPS services.
● Select IP Hash for the balancing method for both services.
● For default ports, enter the following:
○ HTTP – Port 80
○ HTTPS – Port 443
e. Click Next.
f. On the Configure Health-Check tab:
● For HTTP and HTTPS, enter Monitor Ports.
● For HTTPS, change Mode to TCP.
● In the URI for HTTP service field, enter /favicon.ico
g. Click Next.
h. On the Manage Members tab, add each Access Point as a member, as described below.
1) Click Add.
2) In the Add Member dialog:
○ Enter the IP address of the Internet AP interface, as defined when you deployed the
OVA.
○ For both HTTP and HTTPS, enter 80 for Port and 443 for Monitor Port.
3) Click OK.
8. Configure load balancer virtual server:
a. On the Load Balancer tab, click Virtual Servers and click Add.
b. Enter a name and description for the virtual server.
c. Select an external network from the Applied on drop-down menu.
d. Enter the external IP address of the virtual server.
e. From the drop-down menu, select the pool you created earlier.
f. In Services, select Enable for HTTP and HTTPS.
g. For Persistence Method, enter No persistence for HTTP and HTTPS.
h. Click Enabled to enable the virtual server.
i. Click OK.
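As a quick sanity check after saving the configuration, you can confirm that the virtual server and the individual Access Points respond on their addresses. This is only a sketch using the example addresses from this section (1.2.3.4, 192.168.0.10, 192.168.0.11); the exact HTTP response depends on how the tenant portal is configured.
curl -k -o /dev/null -s -w "%{http_code}\n" https://1.2.3.4/    # load-balanced virtual server
nc -zv 192.168.0.10 443                                         # first Access Point member
nc -zv 192.168.0.11 443                                         # second Access Point member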
6 HTML Access (Blast) Setup
6.1 Overview
6.1.1 About Desktop Protocols
The Horizon Agent has a very small footprint (90Kb) and supports the full Horizon Client capabilities:
PCoIP, RDP, HTTPS, SSL, SSO, USB Redirection, printer support, and session management.
6.1.2 About HTML Access (Blast)
HTML Access (Blast) enables access to a desktop via any HTML5 compliant web browser.
To use HTML Access:
● Each virtual desktop must be running the latest Horizon Agent.
● Each virtual desktop must be running the latest Horizon DaaS Agent.
● SSL certificate install automation must be configured as described under Automate SSL Installation
below.
6.1.3 System Requirements
Browser on client system:
● Chrome 41, 42, and 43
● Internet Explorer 10 and 11
● Safari 7 and 8 (Mobile Safari is not supported for this release.)
● Firefox 36, 37, and 38
Client operating systems:
● Windows 7 SP1 (32- or 64-bit)
● Windows 8.x desktop (32- or 64-bit)
● Windows 10 desktop (32- or 64-bit)
● Mac OS X Mavericks (10.9)
● Mac OS X Yosemite (10.10)
● Chrome OS 28.x or later
6.1.4 HTML Access (Blast) Support for RDSH Applications
Launching RDSH applications is supported in HTML Access 3.4.
6.2 Setup Procedure
6.2.1 Install Correct Browser
See list of supported browsers in System Requirements.
6.2.2 Prepare Desktops to Support Protocol
Before installing the software required to connect to desktops, complete the following pre-installation steps.
1. Uninstall all software components related to all other protocols.
Important: You must uninstall all software components related to all other protocols (e.g. HDX, RGS). If you do not uninstall these other protocol components, your template will be corrupted and will no longer boot into Windows successfully. This warning does not apply to RDP; the presence of RDP components does not cause problems.
2. Update VMware Tools.
3. Make sure that port 443 is not being used by any other software.
4. Enable the Windows Firewall if not already enabled.
5. Make sure that the following ports are open to TCP and/or UDP traffic as indicated (an example set of firewall commands follows the table):
Port(s) | Source | Destination | Protocol
4172 | Access Point | VM | TCP and UDP
443 | Tenant Appliance | VM | TCP
22443 | Access Point | VM | TCP
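One way to open these ports on the desktop image is with the built-in Windows Firewall command line. The commands below are only a sketch of the inbound rules implied by the table; the rule names are arbitrary placeholders, and your organization may prefer to manage the same rules through Group Policy instead.
rem Illustrative inbound firewall rules; run from an elevated command prompt.
netsh advfirewall firewall add rule name="DaaS PCoIP TCP" dir=in action=allow protocol=TCP localport=4172
netsh advfirewall firewall add rule name="DaaS PCoIP UDP" dir=in action=allow protocol=UDP localport=4172
netsh advfirewall firewall add rule name="DaaS Tenant 443" dir=in action=allow protocol=TCP localport=443
netsh advfirewall firewall add rule name="DaaS Blast 22443" dir=in action=allow protocol=TCP localport=22443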
6.2.3 Install DaaS Agent
1. Copy the most recent DaaS Agent executable file (VMware-DaaS-Agent-x.x.x.msi) to each VM.
2. Run the DaaS Agent executable file.
6.2.4 Install Horizon Agent
There are three possible scenarios when installing the Horizon Agent:
● Install on desktop (Windows 7, Windows 8, Windows 8.1)
● Install on server (Windows Server 2008 R2, Windows Server 2012 R2) as Personal Desktop (Non-RDSH)
● Install on server (Windows Server 2008 R2, Windows Server 2012 R2) as RDSH Role
For more information on the standard installation procedure, see Install Horizon Agent on a Virtual
Machine. For information on silent installation procedure, see Install Horizon Agent Silently. Note that if
you use the silent installation procedure, you still need to select/de-select custom options as detailed in the
steps below.
Note: If you have not installed the most recent version of the Horizon Agent, this can cause problems
with creating RDS pools. In this case, when you create a new RDS pool, the system can allow you to
select HTML Access (Blast) as a protocol, but this selection will not be applied to the pool even though it
appears to have been applied successfully.
6.2.4.1 Install on Desktop (Windows 7, Windows 8, Windows 8.1)
Procedure
1. Download the latest Horizon Agent from VMware's website (https://my.vmware.com). Note that there are separate downloads for 32-bit and 64-bit operating systems.
2. Double-click the Horizon Agent installation file (the file name is VMware-viewagent-x86_64-x.y.z-nnnnnnn.exe for the 64-bit installer).
3. Select custom setup.
○ De-select the option to install the View Composer component.
○ Accept other default settings.
4. Restart the virtual machine when prompted.
6.2.4.2 Install on Windows Server 2008 R2 or 2012 R2 as Personal Desktop (Non-RDSH)
Procedure
1. Download the latest Horizon Agent from VMware’s website (https://my.vmware.com).
2. Double-click the Horizon Agent installation file (the file name is VMware-viewagent-x86_64-x.y.z-nnnnnnn.exe for the 64-bit installer).
3. Select custom setup.
○ Select the option to install the Agent in ‘desktop mode’.
○ De-select the option to install the View Composer component.
○ Accept other default settings.
4. Restart the virtual machine when prompted.
6.2.4.3 Install on Windows Server 2008 R2 or 2012 R2 as RDSH Role
Note: To install the Horizon Agent in this scenario, you MUST run the command line install and cannot
use the default “double click” GUI.
Procedure
1. Download the latest Horizon Agent from VMware’s website (https://my.vmware.com).
2. Run the following on the command line as an administrator user:
VMware-viewagent-x86_64-x.y.z-nnnnnnn.exe /v "VDM_SKIP_BROKER_REGISTRATION=1"
3. Select custom setup.
○ De-select the option to install the View Composer component.
○ Accept other default settings.
4. Restart the virtual machine when prompted.
6.2.5 Add the HTML Access (Blast) Group Policy Settings to the Local Computer
Policy Environment
1. Download the View GPO Bundle .zip file from the VMware Horizon 6 download site at:
http://www.vmware.com/go/downloadview
The file is named VMware-Horizon-View-Extras-Bundle-x.x.x-yyyyyyy.zip, where x.x.x is the version
and yyyyyyy is the build number. All ADM and ADMX files that provide group policy settings for
View are available in this file.
2. Copy the file to your Active Directory server and unzip the file.
The HTML Access GPOs are included in the Blast-enUS.adm ADM Template file.
3. On the Active Directory server, edit the GPO.
Windows 2008 or 2012:
a) Select Start > Administrative Tools > Group Policy Management.
b) Expand your domain, right-click the GPO that you created for the group policy settings, and select Edit.
Windows 2003:
a) Select Start > All Programs > Administrative Tools > Active Directory Users and Computers.
b) Right-click the OU that contains your View desktops and select Properties.
c) On the Group Policy tab, click Open to open the Group Policy Management plug-in.
d) In the right pane, right-click the GPO that you created for the group policy settings and select Edit.
The Group Policy Object Editor window appears.
4. In the Group Policy Object Editor, right-click Administrative Templates under Computer
Configuration and then select Add/Remove Templates.
5. Click Add, browse to the Blast-enUS.adm file, and click Open.
6. Click Close to apply the policy settings in the ADM Template file to the GPO.
The VMware Blast folder appears in the left pane under Administrative Templates > Classic
Administrative Templates.
7. Configure the HTML Access group policy settings.
8. Make sure your policy settings are applied to the remote desktops.
9. Run the gpupdate.exe command on the desktops (see the example command after this list).
10. Restart the desktops.
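For example, to force an immediate policy refresh on a desktop before restarting it, you might run the standard Windows command below from an elevated command prompt:
gpupdate /force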
6.2.6 Automate SSL Installation
The process described in this section is needed to facilitate internal access that is not via Access Point. If you
do not have users requiring this type of access, you do not need to perform this procedure.
Note the following:
● You must follow this process on the gold pattern before converting the VM to a gold pattern or resealing it.
● You must repeat this process each time you open and re-seal a gold pattern.
You can install the certificate using post-sysprep script execution in order to avoid sysprep issues and duplicate certificate problems. You can also use your own standard practice (for example, Active Directory GPO and scripts). Please read the Horizon View feature pack documentation for SSL certificate requirements.
Follow the steps below to configure post sysprep commands/scripts in the Horizon DaaS environment.
● Import certificate on test machine and note certificate thumbprint.
● Create post sysprep script/batch file on gold pattern image and copy certificate.
● Convert image to gold pattern or reseal.
6.2.6.1 Import Certificate and Record Certificate Thumbprint
Procedure
1. Add the certificate snap-in to MMC by performing the steps below.
In order to add certificates to the Windows certificate store, you must first add the certificate snap-in to
the Microsoft Management Console (MMC). Before you begin, verify that the MMC and certificate
snap-in are available on the Windows guest operating system.
a. On the desktop, click Start and type mmc.exe
b. In the MMC window, select File > Add/Remove Snap-in.
c. In the Add or Remove Snap-ins window, select Certificates and click Add.
d. In the Certificates snap-in window, select Computer account, click Next, select local computer,
and click Finish.
e. In the Add or Remove snap-in window, click OK.
2. Import a certificate for the HTML Access Agent into the Windows Certificate Store by performing the
steps below.
To replace a default HTML Access Agent certificate with a CA-signed certificate, you must import the
CA-signed certificate into the Windows local computer certificate store. Before you begin, verify that
the HTML Access Agent is installed, the CA-signed certificate was copied to the desktop, and the
certificate snap-in was added to MMC (see Step 1 above).
a. In the MMC window, expand the Certificates (Local Computer) node and select the Personal
folder.
b. In the Actions pane, select More Actions > All Tasks > Import.
c. In the Certificate Import wizard, click Next and browse to the location where the certificate is
stored.
d. Select the certificate file and click Open.
To display your certificate file type, you can select its file format from the File name drop-down
menu.
e. Type the password for the private key that is included in the certificate file.
f. Select Mark this key as exportable.
g. Select Include all extendable properties.
h. Click Next and click Finish.
The new certificate appears in the Certificates (Local Computer) > Personal > Certificates folder.
i. Verify that the new certificate contains a private key.
1) In the Certificates (Local Computer) > Personal > Certificates folder, double-click the new
certificate.
2) In the General tab of the Certificate Information dialog box, verify that the following
statement appears: You have a private key that corresponds to this certificate.
3. Import the root and intermediate certificates for the HTML Access Agent by performing the steps below.
If the root certificate and intermediate certificates in the certificate chain are not imported with the SSL
certificate that you imported for the HTML Access Agent, you must import these certificates into the
Windows local computer certificate store.
a. In the MMC console, expand the Certificates (Local Computer) node and go to the Trusted Root
Certification Authorities > Certificates folder.
● If your root certificate is in this folder, and there are no intermediate certificates in your
certificate chain, skip this procedure.
● If your root certificate is not in this folder, proceed to step b.
b. Right-click the Trusted Root Certification Authorities > Certificates folder and click All Tasks >
Import.
c. In the Certificate Import wizard, click Next and browse to the location where the root CA
certificate is stored.
d. Select the root CA certificate file and click Open.
e. Click Next, click Next, and click Finish.
f. If your server certificate was signed by an intermediate CA, import all intermediate certificates in
the certificate chain into the Windows local computer certificate store.
1) Go to the Certificates (Local Computer) > Intermediate Certification Authorities >
Certificates folder.
2) Repeat steps c through f for each intermediate certificate that must be imported.
4. In the certificate MMC window, navigate to the Certificates (Local Computer) > Personal > Certificates
folder.
5. Double-click the CA-signed certificate that you imported into the Windows certificate store.
6. In the Certificates dialog box, click the Details tab, scroll down, and select the Thumbprint icon.
7. Copy the selected thumbprint to a text file.
For example:
31 2a 32 50 1a 0b 34 b1 65 46 13 a8 0a 5e f7 43 6e a9 2c 3e
Note: When you copy the thumbprint, do not include the leading space. If you inadvertently paste the leading space with the thumbprint into the registry value (in the script described below), the certificate might not be configured successfully. This problem can occur even though the leading space is not displayed in the registry value text box.
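As an alternative to copying the thumbprint from the certificate dialog, you can list it from the command line. This is just an optional sketch using the built-in certutil tool; its output shows the value on the "Cert Hash(sha1)" line, and you should still paste it without a leading space.
certutil -store My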
6.2.6.2 Create Post Sysprep Script/Batch File on Gold Pattern Image and Copy Certificate
6.2.6.3 Windows 7 and Later
Use the post-build configuration script "SetupComplete.cmd" to import the SSL certificate and configure the VMware HTML Access registry.
http://technet.microsoft.com/en-us/library/dd744268%28v=ws.10%29.aspx
For example:
● Copy the SSL certificate file to the C: drive. For this example, the file is C:\desktone_ca_cert.pfx.
● Create a file SetupComplete.cmd under the "%WINDIR%\Setup\Scripts\" folder. Create the "Scripts" folder if it does not exist.
● Add the following commands in the SetupComplete.cmd file. The thumbprint value is the one you recorded earlier.
● Note that if you have a root certificate and intermediate certificates in the certificate chain, you also need to add the appropriate CertUtil commands in the batch file.
CertUtil -importPFX -f -p "<password>" "C:\desktone_ca_cert.pfx"
reg add "HKLM\SOFTWARE\VMware, Inc.\VMware Blast\Config" /f /v "SslHash" /t REG_SZ /d "31 2a 32 50 1a 0b 34 b1 65 46 13 a8 0a 5e f7 43 6e a9 2c 3e"
del /F /Q "C:\desktone_ca_cert.pfx"
del /F /Q "%systemroot%\setup\scripts\SetupComplete.cmd"
● Save the SetupComplete.cmd file. You can test the SetupComplete.cmd file on a test machine.
6.2.6.4 Windows XP
● Follow the Desktone post sysprep command execution approach to import the SSL certificate and
configure the VMware HTML Access registry.
● Install the Administration Tools Pack for Windows XP as the CertUtil tool is not available with the OS
install.
http://www.microsoft.com/en-us/download/details.aspx?id=16770
For example:
○ Copy the SSL certificate file under C: drive. For this example, the C:\desktone_ca_cert.pfx
file.
○ Create a folder path C:\Sysprep\i386\$OEM$\
○ Now create a postprep-extra.bat file under C:\Sysprep\i386\$OEM$\ and add the following commands in the batch file. The thumbprint value is the one you recorded above after importing the certificate.
○ Note that if you have a root certificate and intermediate certificates in the certificate chain, you also need to add the appropriate CertUtil commands in the batch file.
CertUtil -importPFX -f -p "<password>" "C:\desktone_ca_cert.pfx"
del /F /Q "C:\desktone_ca_cert.pfx"
reg add "HKLM\SOFTWARE\VMware, Inc.\VMware Blast\Config" /f /v "SslHash" /t REG_SZ /d "31 2a 32 50 1a 0b 34 b1 65 46 13 a8 0a 5e f7 43 6e a9 2c 3e"
○ Save the postprep-extra.bat file. You do not need a command to delete the postprep-extra.bat file because sysprep deletes the C:\Sysprep folder after successful deployment.
○ You can test the postprep-extra.bat file on a test machine.
6.2.6.5 Convert Image to Gold Pattern or Reseal
Procedure
1. Convert the image as a gold pattern or reseal, and create a pool.
2. Verify the HTML Access connection for the certificate, or check certificates and HTML Access registry
on desktops.
Note: If the HTML Access (Blast) service generates the self-signed certificate even after you set the valid
CA certificate as described above, then you can troubleshoot this issue by looking at the logs located
here: %ProgramData%\VMWare\Vmware Blast\Blast-worker.txt
6.3 Troubleshoot Connection Problems
There are several configuration/setup problems that can prevent an HTML Access (Blast) connection from launching successfully:
● Browser is not HTML5 compliant. Check that the browser version is one cited in the requirements.
● Pop-up blocker enabled. The browser's pop-up blocker could prevent opening the new window for an HTML Access connection. Make sure that the user disables the pop-up blocker for the Desktop Portal.
● Windows firewall disabled. Make sure that the Windows Firewall is installed and running on the
user’s desktop. A disabled Windows Firewall will result in errors reported in the HTML Access logs.
● Certificate errors. If you receive an error that indicates a missing or non-matching certificate, review
the instructions above under Import Certificate and Record Certificate Thumbprint and confirm that
you have performed the necessary steps.
Note: You must repeat this process each time you open and re-seal a gold pattern.
6.4 Known Limitations and Workarounds
● An SSL certificate warning is displayed upon connecting to the desktop. This occurs when the SSL certificate process was not performed correctly on the tenant gold pattern.
● Changing resolution to 2560x1920 ends the HTML Access session. This happens due to lack of vRAM
allocation. For more information see Estimating Memory Requirements for Virtual Desktops in the
View documentation.
● If your client system uses a super high resolution monitor (such as 2560 x 1600), HTML Access fails to
display the desktop.
Workaround: Lower the resolution on your monitor and connect. The resolution on the client monitor
must be less than 2560 x 1600 if the remote desktop resolution is 1920 x 1200.
● Sound playback quality is best on browsers that have Web Audio API support, such as Chrome, Safari,
and Firefox 25. Browsers that do not have this support include Internet Explorer (up to and including
Internet Explorer 11) and Firefox 24 and earlier.
● Black artifacts appear on the screen on ESXi 5.1 or 5.0 hosts. This is a known HTML Access issue when
the desktop HW version is 9 (ESX 5.0/5.1) with 3D disabled and the Windows 7 basic theme is used.
This is not an issue when Aero is turned on or when the VM uses HW version 10 (ESX 5.5).
● Horizon Agent session timeout may occur before the Desktop Portal session timeout, resulting in an “Authentication error” when connecting to the desktop via HTML Access. The workaround for this is to log out of the Desktop Portal and log in again.
For additional known limitations, see Known Issues in the HTML Access Release Notes.
7 Tenant Customization
7.1 Custom Branding
If you have a custom branding scheme for Desktop Portal, you will need to check whether everything
appears as expected after upgrading a tenant. There are a few areas to which you should pay particular
attention due to VMware branding changes.
● Login page:
CSS selector: #productNameInner
You may need to adjust the margin-left property and/or decrease the font-size, for example:
font-size: 14px;
● Other pages:
You will likely need to make the same changes as for the login page. Additionally, you may need to
adjust the background-position of the #banner selector:
background-position: 0px 0px;
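Putting these together, a minimal custom stylesheet override might look like the sketch below. The selectors come from this section; the specific values are placeholders that you would tune to your own branding assets.
/* Illustrative values only; adjust to fit your branding. */
#productNameInner {
    margin-left: 0px;    /* shift the product name as needed */
    font-size: 14px;     /* decrease if the name no longer fits */
}
#banner {
    background-position: 0px 0px;    /* realign the banner image */
}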
7.2 Configure NetApp Storage
7.2.1 Summary
This section describes the configuration for NetApp storage with the DaaS platform.
7.2.2 Hardware and Software Requirements
VMware recommends NetApp storage with FlexClone for the VM image storage to take advantage of
deduplication. For the other storage volumes, generic NFS will suffice.
NetApp hardware and software:
● FAS3140C e/w 3 shelves of 1TB drives (42)
● PAM card
● NFS
● NearStore A-SIS
● FlexVol
● FlexScale
7.2.3 NFS Exports
VMware recommends NetApp storage with FlexClone for the VM image storage (/vol/vol_tenanta and
/vol/vol_tenantb in this example) to take advantage of deduplication. For the other storage volumes,
generic NFS will suffice.
7.2.4 Local Mount Point Structure
On the transit server node, create a directory structure identical to the NFS exports that you will be mounting. While this is not strictly required, it helps in troubleshooting mount-related issues and in remembering what is mounted where. For example, for the exports defined above, create the following directory structure on the local management node (sample commands follow the listing):
/vol
|-- db
|-- vol_sp
|-- vol_tenanta
|-- vol_tenantb
|-- vol_dbbackup
`-- vol_upload
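A minimal sketch of creating and mounting this structure is shown below. The filer host name (netapp2) and the choice of which exports to mount are assumptions based on the examples in this section; substitute your own filer address and export names.
mkdir -p /vol/db /vol/vol_sp /vol/vol_tenanta /vol/vol_tenantb /vol/vol_dbbackup /vol/vol_upload
mount -t nfs netapp2:/vol/vol_tenanta /vol/vol_tenanta
mount -t nfs netapp2:/vol/vol_tenantb /vol/vol_tenantb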
7.2.5 Permissions and Security
All mounted file systems should have the following export options set:
● Read-write access: Set for all hosts (you can choose to limit to specific hosts if desired, but you should
check this as you add management nodes).
● Root access: Specify a range of host addresses which should have root access (using CIDR notation).
This range must include all the management nodes.
● Security: Select Unix style security for the exports.
When you view the export options for the NFS exports, it should look like this:
Read-Write Access (All Hosts)
Root Access (10.155.0.0/24)
Security (sys)
7.2.6 Add a New NetApp Service Account
From your NetApp system, you need to add a new role, group, and user:
● Add a New Role
netapp2> useradmin role add desktone_role -c "Role for Desktone API Support" -a login-http-admin,api-license-list-info,api-system-get-info,api-system-get-version,api-system-get-ontapi-version,api-nfs-status,api-nfs-exportfs-list-rules-2,api-nfs-exportfs-modify-rule-2,api-clone-start,api-clone-stop,api-clone-list-status,api-vfiler-list-info
Wed Nov 25 19:34:57 GMT [useradmin.added.deleted:info]: The role 'desktone_role' has been added.
Role added.
● Add a New Group
netapp2> useradmin group add desktone_group -c "Group for Desktone" -r
desktone_role
Wed Nov 25 19:41:35 GMT [useradmin.added.deleted:info]: The group 'desktone_group'
has been added.
Group added.
● Add a New User
netapp2> useradmin user add desktone -c "Service account for Desktone" -n
"Desktone SA" -g desktone_group
New password:
Retype new password:
User added.
netapp2> Wed Nov 25 19:57:50 GMT [useradmin.added.deleted:info]: The user
'desktone' has been added.
7.3 Super Tenant
7.3.1 Overview
DaaS service providers and Managed service providers (MSPs) are increasingly targeting the SMB market.
Creating a separate tenant for each customer will consume significantly more resources than might be
necessary for customers who typically need no more than 20 desktops or sessions. MSPs prefer a shared
tenant where they can provision each customer into its own pool, but maintain logical separation between
the pools. This type of shared tenant will be referred to in this context as a Super Tenant.
It is assumed that the MSP will manage Administration Console, Active Directory, and user/group
mappings to the pool on behalf of the individual customers. Some MSPs will also need to separate
customers across hosts either for security reasons or for Microsoft licensing compliance.
Note: Please refer to the regular tenant install guide for further details.
7.3.2 Super Tenant Prerequisites
7.3.2.1 Networking Requirements
A Super Tenant must have a perimeter network (e.g. DMZ in figure above) where DaaS tenant appliances
and other external facing components like Access Point are behind a firewall. The tenant administrator
must create subnets for each customer to isolate. Each subnet must be labeled (typically with a customer id
or name) to easily identify customers on hypervisors and the DaaS platform. The administrator must
configure these subnets so that traffic is allowed between the DMZ and customer subnets (e.g. N1C1 and
N2C2 in diagram 2.5) and vice versa. The Administrator can use either DVS (Distributed Virtual Switch) or
standard vSwitch networks within vCenter. It may be advantageous to use distributed virtual networking
since the number of VLANs per datacenter is limited according to network specifications (4096 VLANs or fewer).
7.3.2.2 Tenant Active Directory and DNS Configuration
The tenant administrator can use a single Active Directory to serve all customers by creating customer
specific security groups. The domain controllers and DNS server must be in the DMZ network so that all
customer assigned subnets can connect. Please check Microsoft recommended best practices for use of
domain controllers across subnets.
The administrator should create security groups for each customer to ease the user management and
configuration in the DaaS platform such as user mapping. Customers can be separated by creating
Organizational Units with security groups under the OUs in Active Directory.
Example:
The figure below shows that the tenant administrator has created an OU named ‘DesktoneSuperTenant’ in
Active Directory, and created security groups for each customer such as Customer1_Grp,
FordDesktopUsers, etc.
The tenant administrator must then add each security group to the User Groups field on the Group Info tab
of the Domain configuration.
Sample Active Directory Structure
7.3.2.3 Tenant DHCP Configuration
The tenant administrator should configure DHCP considering the network topology of subnets. A single
DHCP server can be used to serve all subnet desktop clients by utilizing BOOTP-relay agent capability of a
network router, or having another computer that can function as a relay agent on each subnet.
Each DHCP scope should be verified to ensure the correct domain controller and DNS configuration for
each network subnet.
Please refer to Microsoft recommendations and best practices to configure the DHCP server:
http://technet.microsoft.com/en-us/library/cc771390.aspx
7.3.2.4 Gold Pattern and DaaS Agent
7.3.2.5 Gold Pattern
The tenant admin should create a gold pattern and customize it per the customer’s requirement. The admin
can create individual gold patterns for each customer if required.
Important: Using a Windows client operating system image, such as Windows 7, in a Super Tenant may not
be advisable due to Microsoft licensing restrictions. Specifically, the license associated with Windows 7
(also Windows XP and Windows 8) requires that virtual instances run on isolated hardware per customer
(e.g. Windows 7 instances for customer A and customer B cannot be on the same server or blade). A
popular approach due to the licensing restrictions is to use individual Windows Server instances skinned as
Windows 7. This approach gives the end user a familiar desktop look and feel and also allows sharing of
infrastructure with SPLA licensing. For further information please refer to Microsoft Windows licensing for
virtualization at microsoft.com.
7.3.2.6 DaaS Agent
The recommended way to deploy the DaaS Agent is to use tenant appliance auto discovery where a specific
DHCP option code is used so that the DaaS Agent can automatically discover the IP addresses of the tenant
appliances. This process is detailed in the DaaS platform tenant installation guide. In our testing we have found that auto discovery does not always work on subnets other than the subnet where DHCP resides. As an alternative, the standby address can be set manually in the MonitorAgent.ini file, as sketched below.
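The fragment below is a hypothetical illustration only: the key name is an assumption, and the address stands in for your tenant appliance IP. Confirm the exact MonitorAgent.ini syntax against the DaaS platform tenant installation guide for your agent version.
; Hypothetical MonitorAgent.ini fragment (key name assumed; verify in the tenant install guide)
standby_address=172.16.20.10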
7.3.2.7 Tenant Infrastructure Overview Diagram
The figure below represents an overview of the Super Tenant network infrastructure. DaaS platform
communication between the tenant appliances and service provider appliances is done via backbone
connectivity. This is standard for a non-Super Tenant as well. The Super Tenant has separate subnets (e.g.
N1C1 and N2C2) for each customer. The subnet networks are isolated so that N1C1 network cannot access
N2C2 network resources directly, but they can still connect to the DMZ network for communication with
the tenant appliances and other tenant infrastructure components such as the domain controllers and DNS.
Tenant Infrastructure Overview
7.3.3 Service Center
This section describes the actions to be performed by the service provider admin in order to enable the
Super Tenant.
7.3.3.1 Create a Super Tenant
The service provider administrator should follow the steps below to enable Super Tenant capabilities at the
time of tenant registration. Please make sure that you have prepared the required infrastructure for the
Super Tenant as described above.
To enable the Super Tenant during initial registration, perform the steps below.
Procedure
1. Log in to the Service Center and click on the Tenants tab.
2. Click on the Register a tenant link and select the Super Tenant checkbox as shown in the figure below.
3. Follow the normal tenant registration steps.
Enabling a Super Tenant during Registration
7.3.3.2 Enable an Existing Tenant as a Super Tenant
A service provider administrator can also enable super tenant capabilities for an existing tenant. To enable
the Super Tenant after initial registration, perform the steps below.
Procedure
1. Log in to the Service Center and click on the Tenants tab.
2. Click the Edit button for the tenant which you want to promote.
3. On the General tab, select the Super Tenant checkbox and click Update as shown in the figure below.
Promoting an Existing Tenant to Super Tenant
7.3.3.3 Add Networks for a Super Tenant
A service provider administrator must add networks to be used for Super Tenant customers in the DaaS
platform. To add networks, follow the procedure below.
Procedure
1. Log in to the Service Center and click on the Tenants tab.
2. Click the Edit button for the appropriate tenant.
3. Select the Networks tab and then click the Add Network Component link.
4. Enter the network details and click Add Network Component as shown in the figure below.
Note: Fill in the Network Label field on the Networks tab with a user-friendly name associated
with the tenant network. This field appears at pool creation time to allow you to associate a pool
with a network.
Adding Super Tenant Networks
7.3.3.4 Disabling the Super Tenant Option
A service provider admin cannot disable the Super Tenant option for a tenant. This is by design.
7.3.4 Billing
The DtReportingManager now has an additional method, SuperTenantBillingReports, which returns a collection of DtSuperTenantBillingReport records as described below. Please refer to the DaaS platform SDK for further information.
DtReportingManager
Name | Description | Method Relationship
SuperTenantBillingReports | Retrieves a list of super tenant billing reports based on the given DtBillingReportFilter | POST association

DtSuperTenantBillingReport
Collects billing data for super tenants by their customer IDs.
Links
There are no links in this object.
Properties
Name | Description | Data Type
customerId | Sub-tenant customer ID pertaining to this record | String
desktopCount | List of desktop model to the in-use count of those desktop models by this customer in a super tenant. The count for each desktop model is wrapped within DtDesktopCountWrapper instances. | Collection of DtDesktopCountWrapper
organizationId | Organization ID of the super tenant | Long
sessionCount | Count of the number of sessions allocated to this customer | Long
Sample script output:
SUPER TENANT BILLING SUMMARY
-------------------------------------------------------------------------------
ORG  | CUSTOMER | TYPE    | COUNT | DESKTOPMODELID
-------------------------------------------------------------------------------
1001 | Audi001  | SESSION | 4     |
1001 | Audi001  | DESKTOP | 2     | 1cb3f348-f987-4834-a33c-742ef30d356b
1001 | Ford001  | SESSION | 4     |
1001 | BMW001   | SESSION | 0     |
1001 | BMW001   | DESKTOP | 1     | 1cb3f348-f987-4834-a33c-742ef30d356b
8 Reports
8.1 Billing Summary Reports
8.1.1 Overview
DaaS captures Quota information along with usage for each tenant. This data is sorted by Datacenter, and is
intended to be used by the service provider for billing information.
All billing information retrieval must be done via REST APIs. The retrieval of billing information via scripts
is not supported.
Note: By default, the Billing Summary Report contains information about both disabled and enabled tenants, but there is a policy ('billing.summary.skip.disabled.tenants') that can alter this behavior to collect information for enabled tenants only. To activate this option, set the policy value to 'true'.
8.1.2 Override Report Intervals
By default, the platform captures billing summary information daily just after midnight (UTC) and purges
previous summaries older than 180 days. To override these default intervals, follow the procedure below.
Procedure
1. In the Service Center, select tenants > policy.
2. On the Policy Configuration page, set the following policies:
○ billing.summary.collection.interval
The interval between billing collections in milliseconds (ms). The default is 86400000 ms (24
hours).
○ billing.summary.purge.interval
The number of days to retain billing summary records in the database. The default is 180 days.
Records older than the interval are purged from the database. Set to 0 to retain all billing history
in the database.
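Since the policy values are entered in milliseconds, a quick shell calculation can help avoid conversion mistakes; the 12-hour interval below is only an illustrative choice.
# 12 hours expressed in milliseconds for billing.summary.collection.interval
echo $((12 * 60 * 60 * 1000))   # prints 43200000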
8.1.3 Description of Record Layout
The table below describes the values returned in each column.
Column Name | Sample Value | Description
snapshot | 201203220856 | Date and time of the record (format: yyyyMMddhhmm)
org_id | 1001 | The unique ID of the Organization/Tenant
org_name | Tenant A | The name for the tenant identified by org_id
datacenter_id | 5925d361-4c1c-490e-9616-c5041d067b8e | A unique ID that identifies the Datacenter location
status | enabled | The status of the tenant: Enabled, Disabled, Error
type_id | 1cb3f348-f987-4834-a33c-742ef30d356b | Unique Id for the given Type: "template" for a Template quota; "desktop" if there are no quotas (all 0) and no in_use_count numbers. In this case, quota and in_use_count are set to -1 to indicate nothing was found for the tenant.
desktop_model_name | Pro | This value is blank for a Template quota.
model_protocols | 31 | If the type is PROTOCOL, this column indicates the bitmask value for that protocol: RDP = bit 0, RGS = bit 1, HDX = bit 2, VNC = bit 3, NX = bit 4, PCoIP = bit 5. For example, the bitmask 31 means RDP, RGS, HDX, VNC, and NX are available and the bitmask 1 means only RDP. If the type is DESKTOPMODEL, this column is ignored and contains 0.
quota | 17 | The quota value; a value of -1 indicates an error.
in_use_count | 17 | When the type is SESSION, the number of sessions provisioned for; otherwise, -1 indicates an error while the summary was being taken or that the system could not communicate with the tenant. The in_use_count can be higher than the quota only if the VMs exist on the hypervisor or are created outside the DaaS environment. This occurs only for Imported or Utility Desktop Model Quota mapped to the Imported, Recycle and Utility pools.
date_updated | 2012-03-22 08:56:30.784 | The last time this row was modified
type | PROTOCOL | Indicates the type of quota (DESKTOPMODEL, SESSION, PROTOCOL, TEMPLATE)
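As a convenience, a model_protocols bitmask can be decoded into protocol names with a small shell loop; this sketch uses the bit assignments listed above and defaults to the sample value 31.
#!/bin/bash
# Decode a model_protocols bitmask (bit 0 = RDP ... bit 5 = PCoIP) into names.
mask=${1:-31}
protocols=(RDP RGS HDX VNC NX PCoIP)
for bit in "${!protocols[@]}"; do
  if (( (mask >> bit) & 1 )); then
    echo "${protocols[$bit]}"
  fi
done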
9 System Maintenance
9.1 Backing Up and Restoring Databases
9.1.1 Back Up a Database
Procedure
 Run the following command in the appliance:
/usr/local/desktone/scripts/backup_db.sh -P '<postgres_db_password>'
This command extracts a PostgreSQL database into an archive file, creating a backup file of the form
<hostname>.<timestamp>.tar.gz in the /usr/local/desktone/backup folder.
Optional Commands
backup_db.sh accepts the following optional command line arguments.
Argument | Description
-P password | Password for database user admin
-V true | Enable verbose mode
-U username | PostgreSQL username (default is postgres)
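For example, a verbose backup run as the default postgres user might be invoked as follows; the password shown is a placeholder.
/usr/local/desktone/scripts/backup_db.sh -P '<postgres_db_password>' -U postgres -V true
ls /usr/local/desktone/backup/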
9.1.2 Restore a Database
The procedure below restores one database.
Note the following:
● You must perform all restores on the primary appliance, and then re-initialize slony to populate the
database to the secondary appliance.
● If you need to restore a tenant appliance, you might need to restore both the edb and fdb databases.
Procedure
1. Run sudo bash and authenticate.
2. Stop dtService for both service provider appliances or for both tenant appliances:
service dtService stop
3. Stop slony:
killall slon
4. On the primary appliance, complete these steps.
a. Copy the backup file to a directory in /tmp (the file has the form <hostname>.<timestamp>.tar.gz):
mkdir /tmp/backup_working
cp /usr/local/desktone/backup/<filename> /tmp/backup_working
b. Extract the backup file:
cd /tmp/backup_working
tar zxvf <filename>
c. Move to the directory where the .bak file exists and perform the restore. For example:
cd usr/local/desktone/backup
env PGPASSWORD=<pswd> /usr/local/pgsql/bin/pg_restore -i -w -U admin -d <type>
-v --clean <filename>
where:
● <pswd> is the postgres database password
● <type> is the database type (either edb, fdb, or avdb)
● <filename> is the name of the extracted backup file
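As an illustration, restoring an extracted fabric database backup might look like the following; the .bak file name is a placeholder for whatever the extracted archive actually contains.
cd /tmp/backup_working/usr/local/desktone/backup
env PGPASSWORD='<pswd>' /usr/local/pgsql/bin/pg_restore -i -w -U admin -d fdb -v --clean <hostname>.fdb.bak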
5. On both appliances, re-initialize slony. For instructions, see Slony Reinitialization below.
9.2 Slony Reinitialization
On each appliance in an Organization run these commands as root.
Procedure
1. Stop dtService on all nodes:
service dtService stop
2. Stop slon daemons (kill daemons on target nodes):
killall slon
3. Run this command on the target db (FDB or EDB):
drop schema _slony cascade;
Note: Drop the schema only for the affected database pair.
4. If you stopped dtService on the Primary service provider node for re-initialization of the FDB on the
service provider appliances, then start the service again on the primary service provider node:
service dtService start
5. Start slon daemons as follows.
○ For the service provider org, start the daemon for the FDB:
/usr/local/desktone/scripts/start_slon_fdb.sh
○ For the tenant org, start the daemons for both the FDB and the EDB:
/usr/local/desktone/scripts/start_slon_fdb.sh
/usr/local/desktone/scripts/start_slon_edb.sh
6. In the Service Center, select appliances > maintenance.
7. In the Slony Operations section of the page, enter the following information:
○ Organization id: Org ID of the appliance for which init slony will be performed.
○ DB instance name: Name of the database instance for init slony.
○ Element ID: ID of the Desktop Manager to list the New Master IP for the init slony operation. This option is visible when the DB instance name is 'edb'.
8. Click Init Slony.
9.3 Database Failover
9.3.1 Overview
When the primary appliance fails, the secondary database is read-only. There are two procedures outlined
below to address this situation:
● Enabling write operations on the secondary database
● Permanently promoting the secondary appliance to become the primary appliance
9.3.2 Enable Write Operations on the Secondary Database
This procedure switches the master database to the secondary appliance (tenant or service provider) when the primary appliance is not available, enabling write operations so that the secondary appliance's database becomes the master datasource.
Procedure
1. Stop the dtService on both the primary (if accessible) and the secondary appliance:
sudo service dtService stop
2. Stop all slony daemons on both the primary (if accessible) and the secondary appliances:
sudo killall slon
3. In the secondary appliance, connect to the fdb database and execute the following SQL command:
fdb=# drop schema _slony cascade;
4. Repeat step 3 for the EDB if the appliance belongs to a tenant organization.
5. If the database on the primary appliance is still accessible, back up the database, copy the database files, and restore the database into the secondary appliance (for procedures, see Backing Up and Restoring Databases).
6. Open the file /usr/local/desktone/release/active/conf/fdb.properties for edit and remove
the IP address of the primary appliance.
7. Repeat step 6 for /usr/local/desktone/release/active/edb.properties if the appliance belongs to a
tenant organization.
8. Start dtService in the secondary appliance.
9.3.3 Promote the Secondary Appliance to Primary Appliance
To permanently promote the secondary appliance to be the primary appliance, you need to reinitialize slony
as described in the Slony Reinitialization section of this document.
9.4 Datacenters
9.4.1 Failover Master
Note: Wait for all appliances to come online before beginning this procedure.
To failover a failed data center's primary node to a healthy data center's primary node, follow the procedure
below.
Procedure
1. In the Service Center, select appliances > maintenance.
2. In the Fail Over section of the page, enter the following information.
Field | Description
Organization id | Org ID of the appliance to which the failover will be done.
Data Center name | Name of the Data Center where the appliance resides to which the failover will be done.
DB instance name | Name of the database instance for failover.
Element ID | ID of the Desktop Manager to list the New Master IP for failover. This option is visible when the DB instance name is 'edb'.
New master IP | eth0 IP of the appliance to which the failover will be done.
3. Click the FailOverMaster button.
4. Restart the service provider appliances.
9.4.2 Failback a Datacenter
Note: The Service Center Portal may be unavailable during some steps of this process, so make sure that you schedule the work at an appropriate time.
To failback to a restored data center's primary node from a failover, follow the procedure below.
Procedure
1. Stop the dtService on all appliances that belong to the organization, across all data centers:
service dtService stop
2. Back up the fabric database from the current master node:
/usr/local/desktone/scripts/backup_db.sh -P '<database password>'
This creates a file called <hostname>.<timestamp>.tar.gz in the /usr/local/desktone/backup folder.
3. SCP the backup file to the original master/primary node.
4. Extract the backup file:
tar -zxvf <hostname>.<timestamp>.tar.gz
5. Restore the backup on this original master node.
Note: Do this once for each database type. This means you will do it twice for tenant appliances: the first time for the fdb, and the second time for the edb.
env PGPASSWORD=<pswd> /usr/local/pgsql/bin/pg_restore -i -w -U admin -d <ft> -v --clean <fn>
where:
<pswd> = database password
<ft> = EDB or FDB (done for each for tenant appliances, or just once for service provider appliances)
<fn> = the path to the extracted file corresponding to the <ft> parameter
6. Open a psql session to the fabric database on all service provider appliances:
psql -U admin fdb
7. Purge the _slony schema for all databases (master and slave):
drop schema _slony cascade;
8. Exit from the psql session:
\q
9. If you are restoring service provider appliances, start the dtService on the original master database
appliance (do not do this for tenant appliances):
service dtService start
10. In the Service Center, select appliances > maintenance.
11. In the Slony Operations section of the page, enter the following information:
○ Organization id: Org ID of the appliance for which init slony will be performed.
○ DB instance name: Name of the database instance for init slony.
○ Element ID: ID of the Desktop Manager to list the New Master IP for the init slony operation. This option is visible when the DB instance name is 'edb'.
12. Click Init Slony.
13. Start dtService on all remaining appliances (including the master for a tenant restore):
service dtService start
9.4.3 Rename a Datacenter
9.4.3.1 Overview
When you bootstrap the primary service provider appliance, the bootstrap script (bootstrap.sh) prompts
you to enter the name of the datacenter that hosts the service provider appliance.
If you subsequently add an additional datacenter, do not enter the name of the first datacenter when
bootstrapping the primary service provider appliance for the new datacenter.
If you do inadvertently enter the wrong datacenter name, you will receive a FATAL message with a stack
trace when the second stage of bootstrap.sh is run on the primary service provider appliance in the new
datacenter. If this occurs, use the procedure described in this section to correct the datacenter name.
9.4.3.2 Edit final_config.txt
The bootstrap script saves the values you enter to a file named final_config.txt in the
/usr/local/desktone/scripts directory. To correct a misnamed Datacenter, you need to edit
/usr/local/desktone/scripts/final_config.txt on the new primary service provider appliance, using an
editor such as nano or vi.
Here is an example of final_config.txt, with the DataCenter Name line highlighted:
DataCenter Name: JAPAN
Backbone VLAN ID: 3127
Hostname: bob-dc2-sp1
Eth1 IP: 169.254.215.8
Eth1 Netmask: 255.255.255.0
Backbone IP Block: 169.254.215.0/24
SP VLAN ID: 2109
Eth0 IP: 172.18.109.8
Eth0 Netmask: 255.255.255.0
Eth0 CIDR: 24
Gateway: 172.18.109.1
HA Transit Server IP: 169.254.215.9
Floating IP: 172.18.109.7
DataCenter UID: 60cb3d08-ddc5-4a5a-9102-d35a7fbf6f73
PSQL pass:
Nameserver: 172.18.109.2
DataCenter Master: False
Multi-DataCenter: True
NTP Server 1: ntp.ubuntu.com
Other Service Provider IP: 172.16.109.17
To change the name from JAPAN to India:
DataCenter Name: INDIA
9.4.3.3 Rerun bootstrap.sh
After updating the datacenter name and saving final_config.txt, rerun stage 2 of bootstrap.sh and
continue with the rest of the installation.
9.4.4 Decommission a Datacenter
Note: All commands should be run with root credentials.
9.4.4.1 Execute Initial Shutdown Steps
Procedure
1. Take snapshots of all service provider and resource manager appliances.
2. Take snapshots of all tenant appliances for any Multi-DC system.
3. Shut down service provider, resource manager and tenant appliances in DC2 (target datacenter to be
decommissioned).
9.4.4.2 Perform Initial Tenant Maintenance
Complete the following steps on the remaining datacenter for all affected tenants.
Procedure
1. Stop dtService on all tenant appliances:
service dtService stop
2. Delete this file on all tenant appliances:
/usr/local/desktone/release/active/conf/proxy.conf
3. Terminate Slony Daemon Process on all tenant appliances:
killall slon
4. Remove Slony Schema on all tenant appliances (both FDB and EDB):
drop schema _slony cascade;
5. Remove DC2 IP addresses from this file, on the line starting "host=":
/usr/local/desktone/release/active/conf/fdb.properties
9.4.4.3 Promote the Primary Service Provider and Tenant to be the Primary Across Datacenters
Procedure
1. Go to the psql prompt.
2. Execute the following commands:
○ update appliance set capabilities = 199 where name='<primarysp>';
○ update appliance set capabilities = 240 where name='<primarytenant>';
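For example, from a psql session on the primary node the updates might be applied as follows; the appliance names are placeholders for the actual primary service provider and tenant appliance names.
psql -U admin fdb
fdb=# update appliance set capabilities = 199 where name='<primarysp>';
fdb=# update appliance set capabilities = 240 where name='<primarytenant>';
fdb=# \q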
9.4.4.4 Perform Initial Service Provider Maintenance on Remaining Datacenter
Perform the following steps on the remaining datacenter.
Procedure
1. Stop dtService on all service provider appliances:
service dtService stop
2. Stop dtService on all resource manager appliances:
service dtService stop
3. Delete this file on all resource manager appliances if it exists:
/usr/local/desktone/release/active/conf/proxy.conf
4. Terminate Slony Daemon Process on all service provider appliances:
killall slon
5. Remove Slony Schema on all service provider appliances (FDB only):
drop schema _slony cascade;
6. Remove DC2 IP addresses from this file on the service provider appliances, on the line starting "host=":
/usr/local/desktone/release/active/conf/fdb.properties
9.4.4.5 Clean Up Proxychains Configuration
Procedure
 Replace /etc/proxychains.conf with the clean version on all service provider, resource manager, and
Multi-DC tenant appliances.
9.4.4.6 Clean Up FDB
All commands should be run on the primary node.
Procedure
1. On the service provider appliance:
select * from datacenter;
2. From the previous query results, select the ID associated with the datacenter to be decommissioned
and run the following commands on service provider FDB:
delete from billing_summary where datacenter_id='<prev_query_id>';
delete from datacenter where id='<prev_query_id>';
3. Run the same query from step 2 on the tenant FDB that is being decommissioned.
9.4.4.7 Re-initialize Slony on Affected Nodes
Procedure
1. Start slony daemons on service provider appliances:
/usr/local/desktone/scripts/start_slon_fdb.sh
2. Start slony daemons on all affected tenant appliances:
/usr/local/desktone/scripts/start_slon_fdb.sh
/usr/local/desktone/scripts/start_slon_edb.sh
3. Restart memcached on service provider appliance:
service memcached restart
4. Start dtService on Primary service provider node:
service dtService start
5. Initialize FDB for service providers:
initSlonyForOrg(1000,<blank>,"fabric")
6. Initialize FDB for all affected tenants:
initSlonyForOrg(orgId,<blank>,"fabric")
7. Initialize EDB for all affected tenants:
initSlonyForOrg(orgId,remainingDCId,"element")
8. Confirm slony table replication set is limited to 2 nodes on both tenant and service provider appliances
(query should return 2 rows):
select * from _slony.sl_node;
Slony should now be initialized correctly and the socks proxy configurations should be removed.
9.4.4.8 Bring the System Up
Procedure
1. Restart memcached on other service provider appliance (not primary):
service memcached restart
2. Start dtService on other service provider appliance (not primary):
service dtService start
3. Reboot the resource manager appliances:
reboot now
4. Start dtService on tenant appliances:
service dtService start
5. Confirm that customers can access their desktops on the affected tenant.
6. [optional] Attempt to expand a pool on the affected tenant.
7. Review Quota and Hypervisor Host Assignment on affected tenant.
9.4.4.9 Final Tasks
Once all systems appear to be functioning correctly:
● Delete the decommissioned datacenter’s appliances.
● Delete the existing datacenter’s appliance snapshots.
9.5 Monitoring
9.5.1 Introduction
This section describes basic monitoring of the DaaS environment. It also provides links to more detailed
information about DaaS CIM providers and information about connectivity and ports.
The intent of this section is to provide information on the major items that should be monitored in the DaaS environment. At this time VMware does not have a preference for the monitoring tool to be used, and the choice is left to the provider. Therefore, the methods of implementation will depend upon the monitoring tool selected.
9.5.1.1 Critical Nodes
There are several nodes that are critical to the proper functioning of a DaaS environment. In many cases the DaaS software is able to "self-heal". However, any impairment to these nodes should still be noted and potential action taken regardless of the DaaS software's capability to "self-heal". Providing feedback on these occurrences is also important to improving the quality of the DaaS software. The nodes (whether physical or virtual) that should be actively monitored are listed below. Some of these are DaaS appliances and some are not. More details of the items that can be monitored are outlined later in this section.
Service provider nodes:
● Active Directory
● ESX hosts
● Load balancer
● NFS server
● Network routers
● Time server
DaaS nodes:
● Service Provider
● Tenant
● Resource Manager
9.5.1.2 Basic System Functions
For each of the nodes listed under "Critical Nodes", these basic functions should be monitored:
● File system space
● CPU usage
● Memory usage
The method of monitoring this information will vary depending upon the OS being monitored and the
monitoring software itself. Please consult your monitoring software documentation for details.
9.5.2 Web Application Monitoring
Basic verification of a DaaS installation includes connecting to the following web pages (both through a load
balancer, if applicable, and directly to each node):
● Desktop Portal
● Administration console
● Service Center
9.5.2.1 Port Response
In addition to using ping, monitoring software can check response of specific ports - that is, if they respond
to an "open socket" request. DNS and DHCP are exceptions which use UDP, and may require more
intelligent monitoring.
9.5.2.2 Monitoring CIM Classes
DaaS management nodes run a variety of CIM classes that provide information about system operation. See
CIM Providers on DaaS Management Nodes below for more details.
9.5.3 CIM Providers on DaaS Management Nodes
This section describes the CIM providers that monitor a DaaS installation. Key properties for monitoring are highlighted in the descriptions below.
9.5.3.1 Operating Environment CIM Providers for DaaS Nodes
These CIM providers report on the operating environment for DaaS management nodes. They should be
monitored on all DaaS nodes:
● Linux_OperatingSystem
● Linux_EthernetPort
● Linux_ComputerSystem
● CIM_FileSystem
9.5.3.1.1 Linux_OperatingSystem
Description
There will only be a single instance of this class per appliance.
Properties
● FreePhysicalMemory: If this reaches 0, that is a critical fault and needs to be resolved immediately (see the calculation below).
● FreeVirtualMemory: If this reaches 0, that is a critical fault and needs to be resolved immediately (see the calculation below).
● HealthState: Anything but a value of 5 indicates a problem.
● OperationalStatus: Anything but a value of 2 (OK) indicates a problem. However, an occasional value
of 4 (stressed) may appear. If repeated samplings indicate a value other than 2, you should raise an
alert.
● TotalVirtualMemorySize: The total amount of swap space available to the system.
Calculations
● PercentSwapUsed:
100 * ( TotalSwapSpaceSize – FreeSpaceInPagingFiles ) / TotalSwapSpaceSize
● It is useful to monitor for swap space usage. Once the system begins using swap space, performance
will degrade. The free memory alert should be triggered prior to the system using swap space so the
use of swap should be considered a serious problem.
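A quick spot check of the same ratio can be made directly on an appliance with the standard free utility; this is only a convenience sketch, and the CIM properties above remain the values to monitor.
free -b | awk '/^Swap:/ { if ($2 > 0) printf "PercentSwapUsed: %.1f%%\n", 100 * $3 / $2 }'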
Mitigation
Recommendation is to warn if PercentSwapUsed > 5% and alert if PercentSwapUsed > 20%.
If the memory used reaches high levels, you should check to see if there are any memory-intensive processes
that need to be restarted using top and shift-M on the node in question:
$ top
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
 6816 root  20   0 2069m 389m  13m S  0.0 19.6  3:36.97 java
 6634 root  20   0  755m  84m 9.8m S  0.0  4.2  1:21.70 java
...
If no single application appears to be the culprit, restart the node.
9.5.3.1.2 Linux_EthernetPort
Description
There will typically be two instances of this class, one for the eth0 interface (tenant or service-provider
network) and one for the eth1 (management backbone) interface.
Properties
● EnabledState: Anything but the value 2 is a problem.
● Status: Anything but OK is a problem.
Mitigation
If the eth0 status is not OK, then use ifconfig to check that the interfaces are up and have an IP address.
You should also be able to ping the IPv4 gateway for each node.
If the eth1 status is not OK, then try to connect to that appliance via ssh from the transit server. If this
works, then the eth1 interface is OK.
9.5.3.1.3 Linux_ComputerSystem
Description
There will only be a single instance of this class per appliance.
Properties
● EnabledState: Anything but a value of 2 indicates an issue.
Mitigation
If EnabledState is anything but 2, attempt to ping the node, ssh to the node, and check the status of the
dtService (service dtService status) on the node.
9.5.3.1.4 CIM_FileSystem
Description
There are several subclasses of this. (You can also check the CIM_LocalFileSystem class if you don't want to view remote file systems.) The most important to focus on are the Linux_Ext4FileSystem instances. In addition to the root file system, there may be others that should be checked to verify they are not in ReadOnly mode. Currently you should check these file systems:
● /(root)
● /boot
● /data
● /tmp
● /usr/local
● /var
Additionally on the resource manager nodes and the DB nodes there will be some number of Linux_NFS
instances. These are remotely mounted file systems. You can choose to monitor these mounts via our
appliances or an alternate mechanism based on the storage system.
Properties
● EnabledState: Any value other than 2 (enabled) on a remotely mounted NFS file system is cause for
alarm. However, local file systems in management nodes may show up with an EnabledState of 3.
● ReadOnly: This value should be FALSE. A value of TRUE is cause for alarm. If the CIM_FileSystem
class does not respond for a particular file system, the file system may be read-only and you should
restart the node. Contact DaaS support if the restart fails.
● Status: Any value other than OK is cause for alarm.
Go to the node and use mount to check that the file system is mounted. If the file system is mounted, try to create a file (see the example following the Mitigation below).
● PercentageSpaceUsed: Displays percent of available disk space that is used. Recommendation is to
warn at 70% and then increase the alert priority in 10% increments (that is, 70, 80, 90).
Mitigation
If any of the file systems report high usage, please contact DaaS support for corrective action.
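As referenced for the Status and ReadOnly properties above, a suspect mount can be verified manually on the node; /data is used here only as an example of one of the monitored file systems.
mount | grep ' /data '                                      # confirm the file system is mounted
touch /data/.rw_test && rm /data/.rw_test && echo writable  # fails if the mount is read-only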
9.5.3.2 Application-Specific CIM Providers for DaaS Management Appliances
Note: For non-DaaS-specific CIM provider classes, see Operating Environment CIM Providers for DaaS
Nodes.
9.5.3.2.1 Service Provider Appliances
The CIM providers for service provider appliances are as follows:
● Desktone_ApplicationServer
● Desktone_ApplicationServerStatistics
● Desktone_InstalledProduct
● Desktone_CommonDatabase
● Desktone_DatabaseService
● Desktone_DatabaseReplicationService
● Desktone_ActiveDirectoryStatus
● Desktone_HypervisorManagerStatus
● Desktone_NTPService
9.5.3.2.2 Resource Manager Appliances
The CIM providers for DaaS resource manager nodes are as follows:
● Desktone_ApplicationServer
● Desktone_ApplicationServerStatistics
● Desktone_InstalledProduct
● Desktone_NTPService
9.5.3.2.3 Tenant Appliances
The CIM providers for tenant appliances are as follows:
● Desktone_ApplicationServer
● Desktone_ApplicationServerStatistics
● Desktone_InstalledProduct
● Desktone_CommonDatabase
● Desktone_DatabaseService
● Desktone_DatabaseReplicationService
● Desktone_RemoteAccessManagerStatistics
● Desktone_ActiveDirectoryStatus
● Desktone_NTPService
● Desktone_XMPService
9.5.3.2.4 Desktop Manager Appliances
The CIM providers for Desktop Manager appliances are as follows:
● Desktone_ApplicationServer
● Desktone_ApplicationServerStatistics
● Desktone_InstalledProduct
● Desktone_CommonDatabase
● Desktone_DatabaseService
● Desktone_DatabaseReplicationService
● Desktone_NTPService
9.5.3.3 Description of DaaS CIM Providers
9.5.3.3.1 Desktone_ActiveDirectoryStatus
Description
The ActiveDirectoryStatus provider is derived from CIM_LogicalElement, and it provides information and status for the domain controllers that are added to the DaaS platform. This provider runs on service provider and tenant appliances.
Properties
● CSCreationClassName [key]: Name of the class used to create the database instance.
● SystemName [key]: Name of the system on which the provider instance is running. Set to host name in
our case.
● CreationClassName [key]: Name of the class used to create the provider instance.
● DcAddress [key]: describes the unique domain controller address.
● DomainName: describes the domain name associated with the domain controller.
● LdapUri: describes the LDAP URL of the current domain controller.
● ResponseTime: describes the response time in milliseconds for an LDAP query from the DaaS appliance. The administrator should monitor this property and alert as required if it is the preferred domain controller. Example: 0-15 seconds response time is OK, 15-30 seconds is WARN, and >30 seconds is CRITICAL.
● LastUpdated: describes the last updated time for this controller.
● IsPreferred: indicates whether the domain controller is the preferred domain controller in the DaaS platform.
● CommunicationStatus [derived]: indicates the ability of the DaaS platform to communicate with the domain controller. 2 – OK, 4 – Lost Communication.
● OperationalStatus [derived]: indicates the status of the domain controller in the DaaS platform. 2 – OK, 13 – Lost communication.
● Status [derived, deprecated]: indicates the current state of the domain controller in the DaaS platform (OK, Lost Comm).
Mitigation
Make sure that preferred domain controllers are up and running, and verify the latency between appliance
and domain controller if response time is high.
Check the required communication ports are open between domain controller and DaaS appliances.
9.5.3.3.2 Desktone_ApplicationServer
Description
Provides information about the application server used by the DaaS software.
Properties
● Name: Name by which the application server is identified. Set to "Jboss" for Element manager and
Resource manager.
● SoftwareElementID: Identifier for software element to be used in conjunction with other keys to
uniquely identify the element. Set to host name on which the application server is running.
● Version: Version of the application server.
● SoftwareElementState: This property defines the various states of a software element's life cycle, for example Running, Executable, Deployable, etc. A SoftwareElementState of 3 indicates that the application server is running.
● TargetOperatingSystem: Specifies the node's operating system environment. Set to 36 (LINUX).
Mitigation
If the application server is not running, go to the node in question and check the status:
$ service dtService status
Desktone Service is running under PID 6761
If the Desktone Service is not running, start it (and watch the log file):
$ service dtService start
9.5.3.3.3 Desktone_ApplicationServerStatistics
Description
There will be a single instance of this class for all of the application appliances (that is, this will not be
present in DB appliances).
Properties
These properties report on operations of the JVM (Java virtual machine) used for the DaaS application.
● InstanceID: Key to uniquely identify the instance of this class. Set to DesktonehostName_Jboss.
● ThreadCount: Total number of threads running during the monitoring sample.
● ThreadGroupCount: Total number of thread groups that exist during the sample time.
● HeapSize: Current size of heap memory
● MaxHeapSize: Maximum heap memory allowed on the application server.
● Uptime: The length of time the application server has been running in milliseconds.
Calculations
● Heap size used: 100*HeapSize/MaxHeapSize. Recommendation is to warn at 85% and then increase
the alert priority in 5% increments (that is, 90, 95, 100).
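Given sampled HeapSize and MaxHeapSize values, the percentage and the recommended action can be computed with a one-line awk check; the numbers below are illustrative only.
HEAP=913000000; MAX=1073741824   # illustrative sampled values from Desktone_ApplicationServerStatistics
awk -v h="$HEAP" -v m="$MAX" 'BEGIN {
  p = 100 * h / m; printf "Heap used: %.1f%%\n", p;
  if (p >= 90) print "restart dtService now"; else if (p >= 85) print "schedule a dtService restart"
}'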
Mitigation
At 85%, schedule a restart of the dtService. At 90% or higher, restart the dtService immediately:
$ service dtService restart
If the heap memory used increases to high levels often (more than once per week), you should analyze your
environment in concert with DaaS support.
9.5.3.3.4 Desktone_CommonDatabase
Description
Describes the PostgreSQL server running on database nodes.
Properties
● InstanceID: Key to uniquely identify the instance of this class. Set to Desktone_hostName_postgreSQL.
● HomeDirectory: Home directory of the PostgreSQL service.
● DataDirectory: Data directory of the PostgreSQL service.
● DatabaseVersion: Version number of the database.
● MaxConnections: Maximum number of connections that the PostgreSQL server can manage
concurrently. The value is extracted from the PostgreSQL configuration file from the parameter
"max_connections".
● Status: Indicates the current status of the PostgreSQL server. OK indicates PostgreSQL is running.
STOPPED indicates that the database is stopped. If the database is down (status STOPPED), any other
data provided should be ignored.
● ListenAddress: The port and ip address on which postmaster process is listening for new connections.
Calculations
● Percent maximum connections used: You should total up the ActiveConnections used by each
database instance on the server (see Desktone_DatabaseService provider) and divide by the
MaxConnections from this class to determine the load on the database server. That is:
100*(Sum(ActiveConnections)/MaxConnections).
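The same inputs can be pulled directly from PostgreSQL on the database node using standard views; this is a cross-check sketch, not a replacement for the CIM-based monitoring.
psql -U admin -d postgres -c "show max_connections;"
psql -U admin -d postgres -c "select count(*) as active_connections from pg_stat_activity;"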
Mitigation
If the database is stopped, check the database server:
$ service postgresql status
If PostgreSQL is not running, start the service, then run the status command again:
$ service postgresql start
$ service postgresql status
If the database will not start, examine the PostgreSQL logs and contact DaaS support.
The recommendation is to warn at 80%, critical at 90% of Percent maximum connections used.
If the percent maximum connections reaches the critical level, you should examine the database server to
determine which cache node or nodes is consuming a large number of connections (5-10 connections is the
normal range for a cache node):
$ netstat -an | grep 5432
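To see how those connections break down per database instance (complementing the netstat check above), a grouped count from pg_stat_activity can be used:
psql -U admin -d postgres -c "select datname, count(*) from pg_stat_activity group by datname order by 2 desc;"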
9.5.3.3.5 Desktone_DatabaseService
Description
Specifies the details of database instances running on DaaS appliances. In the DaaS platform, appliances
have one or more database instances running, as follows:
● Service provider appliances – Fabric Database (FDB) only
● Tenant appliances - Fabric Database (FDB) and Element Database (EDB)
● Desktop manager appliances - Element Database (EDB) only
Properties
● Name: Unique identification of the service. Set to hostName_DBInstanceName. For rollback purposes,
upgrades will create a db name_version instance. You do not need to monitor the database instances
that have the version appended.
● ActiveConnections: Specifies the number of active connections to this database instance at the time of
sampling/monitoring. See the calculation for Desktone_CommonDatabase using this number totaled
across all database instances on a server compared to the maximum connections permitted on a single
database server.
9.5.3.3.6 Desktone_DatabaseReplicationService
Description
Provides information about database instances that are replicated. This provider runs on all Fabric database
servers. In the DaaS platform, appliances have one or more database instances running, as follows:
● Service provider appliances – Fabric Database (FDB) only
● Tenant appliances - Fabric Database (FDB) and Element Database (EDB)
● Desktop manager appliances - Element Database (EDB) only
Properties
● SystemCreationClassName: Name of the class used to create the database instance.
● SystemName: Name of the system on which the database instance is running. Set to host name in our
case.
● CreationClassName: Name of the class used to create the database instance.
● Name: Unique identification of the service. Set to hostName_databaseInstanceName.
● NodeID: Represents the UID of the node in the context of the replication system.
● Role: Indication of whether the database instance is master or slave instance.
● SyncStatus: Synchronization status applies to the slave instance only. This property does not have any
significance in case of master instance. For a slave instance, the SyncStatus value will be the number of
milliseconds since the last synchronization. For example, SyncStatus = 1200 means that the last
successful sync was 1.2 seconds before. Warn if the SyncStatus is more than 40 seconds old. Critical if
SyncStatus is more than 2 minutes old.
● Status: Indicates the current status of the replication service. OK indicates the replication service is
running. STOPPED indicates that the replication service is stopped. The replication service should be
running for all database instances in use.
Mitigation
If replication is stopped (or if the SyncStatus is out of date), you should check that the replication daemon
(slony) is running properly on the database server:
$ ps -ef | grep db.conf
root  1062     1  0 Sep17 ?  00:00:00 /usr/local/pgsql/bin/slon -f /usr/local/desktone/release/static/conf/slon_edb.conf
root  1121     1  0 Sep17 ?  00:00:00 /usr/local/pgsql/bin/slon -f /usr/local/desktone/release/static/conf/slon_fdb.conf
root  1443  1062  0 Sep17 ?  00:07:39 /usr/local/pgsql/bin/slon -f /usr/local/desktone/release/static/conf/slon_edb.conf
root  1446  1121  0 Sep17 ?  00:06:01 /usr/local/pgsql/bin/slon -f /usr/local/desktone/release/static/conf/slon_fdb.conf
There should be 2 processes for each database instance. If replication is not running properly for any of the instances, you can restart replication:
$ nohup /usr/local/pgsql/bin/slon -f /usr/local/desktone/release/static/conf/slon_fdb.conf >/dev/null 2>&1 &
$ nohup /usr/local/pgsql/bin/slon -f /usr/local/desktone/release/static/conf/slon_edb.conf >/dev/null 2>&1 &
9.5.3.3.7 Desktone_HypervisorManagerStatus
Description
The HypervisorManagerStatus provider is derived from CIM_LogicalElement, and it provides information and status for the Hypervisor Managers in the DaaS platform. The Hypervisor Manager is a DaaS entity that manages the hypervisor hosts. This provider runs on service provider appliances only.
Properties
● CSCreationClassName [key]: Name of the class used to create the database instance.
● SystemName [key]: Name of the system on which the provider instance is running. Set to host name in
our case.
● CreationClassName [key]: Name of the class used to create the provider instance.
● HostAddress [key]: describes the hypervisor manager host address and version. It is an address of
vCenter, ESX, or vCloud host.
● Type: describes the type of hypervisor manager (vCenter, ESX, or vCloud) and its product version. Example: "ESX, 5.1.0"
● CommunicationStatus [derived]: indicates the ability of the DaaS Hypervisor Manager to communicate with the Hypervisor Host. 2 – OK, 4 – Lost Communication.
● OperationalStatus [derived]: indicates the current status of the DaaS Hypervisor Manager in the DaaS platform. 2 – OK, 13 – Lost communication.
● Status [derived, deprecated]: indicates the current status of the DaaS Hypervisor Manager in the DaaS platform (OK, Lost Comm).
Mitigation
● Make sure that discovered host is assigned to resource manager.
● Make sure that Hypervisor host is running and reachable from service provider appliance.
● Verify whether there are any API compatibility errors in the service provider or resource manager desktone logs.
● Check the required communication ports are open between DaaS appliances and hypervisor hosts.
9.5.3.3.8 Desktone_InstalledProduct
Description
Provides information about the DaaS software, including the version and build number.
Properties
● ProductIdentifyingNumber: Product identification. This property contains build information.
● ProductName: Product's commonly used name. Set to "Virtual-D."
● ProductVendor: Vendor's name: Desktone.
● ProductVersion: Product version information
● SystemID: Host name where the product is installed.
9.5.3.3.9 Desktone_NTPService
Description
The NTPService provider is derived from CIM_Service, and it provides information about the NTP daemon that runs on the DaaS appliance. It also reports time synchronization status.
Properties
● CSCreationClassName [key, derived]: Name of the class used to create the database instance.
● SystemName [key, derived]: Name of the system on which the NTP daemon is running. Set to host
name in our case.
● CreationClassName [key, derived]: Name of the class used to create the provider instance.
● name [key, derived]: describes the name of the service. It is “NTPD” in our case.
● Started[derived]: Started is a Boolean that indicates whether the NTP Service has been started (TRUE),
or stopped (FALSE).
● ServerAddresses: describes the NTP server addresses configured in /etc/ntp.conf. It is a comma
separated string of addresses.
● PrimarySource: describes the current NTP source in use for time synchronization.
● SyncState: indicates NTP synchronization status. TRUE, if NTP is in sync with time source, otherwise
FALSE. The SyncState depends on jitter, condition of peer and reach status.
● Jitter: describes the jitter value in milliseconds of the selected time source. If there is any problem getting the jitter, or no primary source is selected by NTP, it returns 60000 milliseconds in order to raise an alert. The provider marks the SyncState property FALSE if jitter is higher than 1000 milliseconds.
● OperationalStatus [derived]: indicates the current status of the NTP daemon and time synchronization.
OperationalStatus=2 (OK) indicates NTP time is in sync (SyncState=TRUE) and all configured time sources are reachable.
OperationalStatus=5 (Predictive Failure) indicates NTP time is in sync, but one or more configured time servers are not reachable or rejected.
OperationalStatus=6 (ERROR) indicates the time source is not in sync or the NTP service is down.
● StatusDescriptions [derived]: describes the OperationalStatus in detail, which helps the administrator troubleshoot NTP time synchronization.
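To cross-check the SyncState and Jitter values reported by this provider, the standard NTP query tool on the appliance shows the same peer, reach, and jitter information:
ntpq -p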
Mitigation
● Make sure that the NTP daemon is running. Troubleshoot NTP for time synchronization.
● Make sure that there is connectivity between the service provider nodes and the NTP source.
9.5.3.3.10 Desktone_XMPService
Description
Desktone_XMPService provides information about the XMP service.
Properties
PrimaryStatus: 0 indicates XMP service status Unknown; 1 indicates XMP service status is OK.
9.5.4 WBEM and CIM
The DaaS management appliances allow monitoring via the standard WBEM (web-based enterprise
management) CIM (common information model) interface. You can use any monitoring tool capable of
understanding the CIM data model (for example, Tivoli).
9.5.4.1 Connecting to the WBEM/CIM Server of a DaaS Management Appliance
To log in to the WBEM/CIM interface of one of the DaaS management appliances, you need the following
information:
● Host name: The DNS name or IP address of the management appliance
● Port number: 5989
● Namespace: /root/cimv2
For example, CIMSurfer is a basic browser for CIM information. In practice, you would use a different tool, such as Tivoli, that automatically monitors a number of management appliances and provides alerts based on conditions in the CIM classes of interest. This example also accesses the CIM server without a certificate.
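As an alternative to a graphical browser, a command-line WBEM client can enumerate the same classes; this sketch assumes the sblim wbemcli package is available on the monitoring host.
wbemcli -noverify ei 'https://<appliance-address>:5989/root/cimv2:Linux_OperatingSystem'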
CIM Server Login
The following figure shows the type of information available from the Linux_OperatingSystem class. Using
the properties, you can determine the amount or percentage of free memory that is still available.
CIM Classes
9.5.4.2 Using WBEM/CIM to Monitor ESX Hosts
Since the ESX hosts also expose a WBEM/CIM interface, you can also monitor the ESX hosts. Logging in to
the ESX is the same as logging into a management appliance, except that user credentials are required.
The login credentials should be the same ones you use to access the host using the VI client:
Logging in to the ESX
The following figure illustrates how the ESX hosts will expose different classes than the DaaS management
appliances. We recommend that you consult VMware to determine which classes are important to monitor.
Classes Exposed by ESX Host
Appendix A Connection Matrix
This appendix provides connection information for the Horizon DaaS platform release.
Legend
Management Appliances
Abbrev | Name
SP | Service Provider
DM | Desktop Manager
RM | Resource Manager
T | Tenant
UP | Upload Server
AP | Access Point
Other
Abbrev | Name
HYP | Hypervisor
SS | Storage System
NFS | NFS Server
VM | Virtual Desktop VM
EP | End Point Device
AD | Active Directory
MON | Monitoring System
RSA | RSA Authentication Manager
Networks
Abbrev | Name
T | Tenant Network
SP | Service Provider Network
BB | Backbone Network
I | Public Internet
Source | Destination | Ports In Use | Networks | Description | Connectivity Type
SP | SP | tcp/1098, tcp/1099, tcp/3873 | BB, SP | Used for invoking remote APIs via Java RMI. Ports 1098 and 1099 are used for the naming service lookup and port 3873 is used for the actual remote method invocation. Authentication is done via username/password. | Local and Remote
SP | SP | tcp/11211 | BB | Used for accessing memcached | Local Only
SP | SP | udp/694 | SP | Periodic heartbeat between paired SP appliances (floating IP) | Local Only
SP | SP | tcp/5432 | SP | Used to access the DB from the application, also replication | Local and Remote
SP | SP | tcp/22 | BB, SP | Provides SSH and SCP capabilities to management appliances for purposes of installation and configuration. Authentication is done using a private/public ssh key registered to the appliance at installation time. | Local and Remote
SP | SP | tcp/20677 | BB, SP | Used for proxying traffic between DCs | Local and Remote
SP | RM | tcp/8443 | BB, SP | Used for invoking remote APIs via web services. Authentication is done via username/password and SSL certificate validation. | Local and Remote
SP | RM | tcp/22 | BB | Provides SSH and SCP capabilities to management appliances for purposes of installation and configuration. Authentication is done using a private/public ssh key registered to the appliance at installation time. | Local Only
SP | T | tcp/1098, tcp/1099, tcp/3873 | BB | Used for invoking remote APIs via Java RMI. Ports 1098 and 1099 are used for the naming service lookup and port 3873 is used for the actual remote method invocation. Authentication is done via username/password. | Local Only
SP | T | tcp/8443 | BB | Used for invoking remote APIs via web services. Authentication is done via username/password and SSL certificate validation. | Local Only
SP | T | tcp/22 | BB | Provides SSH and SCP capabilities to management appliances for purposes of installation and configuration. Authentication is done using a private/public ssh key registered to the appliance at installation time. | Local Only
SP | HYP | tcp/443 | SP | Needed for access to the hypervisor management APIs. Authentication is done via username/password. | Local Only
SP | SS | tcp/22, tcp/80, tcp/443 | SP | Used to invoke APIs on a storage system. The specific ports will vary depending on the type of storage system being used. Authentication is done via username/password. | Local Only
SP | NFS | tcp/2049 | SP | Used to communicate with the NFS server. The SP mounts the NFS shares used to store the appliance template VM images for purposes of manufacturing and configuration. Authentication is done via network identity. | Local Only
SP | AD | tcp/389, tcp/636 | SP | Used to authenticate users to the Service Center. The protocol choice of ldap or ldaps is decided by the SP administrator at the time of domain registration. | Local and Remote
RM | SP | tcp/8443 | BB | Used for invoking remote APIs via web services. Authentication is done via username/password and SSL certificate validation. | Local Only
RM | SP | tcp/20677 | BB | Used for proxying traffic between DCs | Local Only
RM | RM | tcp/11211 | BB | Used for accessing memcached | Local Only
RM | T | tcp/8443 | BB | Used for invoking remote APIs (state monitoring) via web services. Authentication is done via username/password. | Local Only
RM | HYP | tcp/443 | SP | Needed for access to the hypervisor management APIs. Authentication is done via username/password. | Local Only
RM | SS | tcp/22, tcp/80, tcp/443 | SP | Used to invoke APIs on a storage system. The specific ports will vary depending on the type of storage system being used. Authentication is done via username/password. | Local Only
RM | NFS | tcp/2049 | SP | Used to communicate with the NFS server. The RMgr mounts the NFS shares used to store the tenant VM images for purposes of manufacturing and configuration. Authentication is done via network identity. | Local Only
T/SP | AD | AD servers: tcp/3268/3269 | T/SP | Global catalog port on AD servers for LDAP/LDAPS (respectively). | (Remote for T) and (Local or Remote for SP)
T/SP | AD | AD servers: tcp/88 | T/SP | Kerberos (for new, more secure LDAP communication & password change functionality) | (Remote for T) and (Local or Remote for SP)
T | RM | tcp/6443 | BB | | Local Only
T | T | tcp/4002 | T | Handles connections from agents. When agents startup they connect to the message bus on this port on one of the Desktop Managers so that they can receive messages from them. | Local Only
T | T | tcp/4101 | BB | Used for router clustering. JMS routers on HA pairs connect to each other on this port so that they can route messages between Desktop Managers and ensure messages reach the agent, regardless of which Desktop Manager the agent is connected to. | Local Only
T | T | tcp/6443 | Localhost only | | Local Only
T | T | tcp/4001 | Localhost only | Listens on localhost only for messages from Desktop Managers. | Local Only
T | T | tcp/6443 | Localhost only | | Local Only
T | SP | tcp/1098, tcp/1099, tcp/3873 | BB | Used for invoking remote APIs via Java RMI. Ports 1098 and 1099 are used for the naming service lookup and port 3873 is used for the actual remote method invocation. Authentication is done via username/password. | Local Only
T | SP | tcp/8443 | BB | Used for invoking remote APIs via web services. Authentication is done via username/password and SSL certificate validation. | Local Only
T | SP | tcp/20677 | BB | Used for proxying traffic between DCs | Local Only
T | RM | tcp/1098, tcp/1099, tcp/3873 | BB | Used for invoking remote APIs via Java RMI. Ports 1098 and 1099 are used for the naming service lookup and port 3873 is used for the actual remote method invocation. | Local Only
T | RM | tcp/8443 | BB | Used for invoking remote APIs via web services. Authentication is done via username/password and SSL certificate validation. | Local Only
T | T | udp/694 | BB | Periodic heartbeat between paired tenant appliances (floating IP) | Local Only
T | T | tcp/5432 | BB | Used to access the DB from the application, also replication | Local Only
T | T | tcp/11211 | BB | Used for accessing memcached | Local Only
T | VM | tcp/49152-65535 | T | Used for downstream communication between the DaaS Agent (on a Windows 7 and later virtual desktop) and the tenant appliance. A dynamically determined port in the range of 49152-65535 is determined at the time the agent logs on. Authentication is done via a session key exchange between the agent and tenant appliance. | Local Only
T | VM | tcp/1025-5000 | T | Used for downstream communication between the DaaS Agent (on a Windows XP virtual desktop) and the tenant appliance. A dynamically determined port in the range of 1025-5000 is determined at the time the agent logs on. Authentication is done via a session key exchange between the agent and tenant appliance. | Local Only
T | VM | tcp/3389 | T | Tenant appliance tests that the desktop is listening on port 3389 for RDP connections. | Local Only
T | VM | tcp/8443, tcp/443 | T | For connection between the DaaS tenant appliance to the VMware View connection agent that runs in the desktop. | Local Only
T | AD | tcp/389, tcp/636 | T | Used to authenticate users to the User Portal. Additionally the configured user groups and their members are cached in the tenant fabric for performance purposes. The protocol choice of ldap or ldaps is decided by the SP administrator at the time of domain registration. | Local and Remote
T | RSA | udp/5500 | T | Used for communicating with the RSA Authentication Manager when SecurID is in use by the tenant. It is possible for the Authentication Manager to not be located in the same data center as the tenant appliances. An HA authentication manager used for failover could be located remotely as well. | Local and Remote
T | AP | tcp/443 | T | Used for Blast | Remote
T | AP | tcp/8443 | T | Used for Blast | Remote
T | AP | tcp/4172, udp/4172 | T | Used for PCoIP | Remote
T | AP | tcp/80 | T | Redirects to 443 | Remote
VM | T | tcp/3443 | T | | Local Only
VM | T | tcp/3443 | T | | Local Only
VM | T | tcp/8443, tcp/443 | T | Used for upstream web services communication between the DaaS Agent and the tenant appliance. Authentication is done via username/password and SSL certificate validation. | Local Only
VM | T | udp/5678 | T | Used for upstream communication between the DaaS Agent and the tenant appliance. Authentication is done via a session key exchange between the agent and tenant appliance. | Local Only
VM | KMS | tcp/1688 | T | Access to the KMS server for purposes of licensing the version of Windows on the virtual desktop | Local Only
UP | NFS | tcp/2049 | SP | Used for storing the desktop images uploaded from tenants to NAS. Authentication is done via network identity. | Local Only
MON | SP | tcp/5989 | BB | Provides access to monitoring information via CIM-XML over https. This is available on all appliances and binds to all network interfaces. Best practice is to limit access on the backbone only. The interface is unauthenticated. | Local Only
MON | RM | tcp/5989 | BB | Provides access to monitoring information via CIM-XML over https. This is available on all appliances and binds to all network interfaces. Best practice is to limit access on the backbone only. The interface is unauthenticated. | Local Only
MON | T | tcp/5989 | BB | Provides access to monitoring information via CIM-XML over https. This is available on all appliances and binds to all network interfaces. Best practice is to limit access on the backbone only. The interface is unauthenticated. | Local Only
EP | VM | tcp/3389, udp/3389 | T | Provides access to the virtual desktop via RDP | Local Only
EP | VM | tcp/1494 | T | Provides access to the virtual desktop via HDX | Local Only
EP | VM | tcp/22 | T | Provides access to the virtual desktop via NX | Local Only
EP | VM | tcp/4172, udp/4172, tcp/32111 | T | Provides access to the virtual desktop via PCoIP | Local Only
EP | VM | tcp/22443 | T | Provides access to the virtual desktop via HTML Access (Blast) | Local Only
EP | VM | tcp/42966 | T | Provides access to the virtual desktop via RGS | Local Only
EP | VM | tcp/5900 | T | Provides access to the virtual desktop via VNC | Local Only
Appendix B Guest OS Support
SUPPORTED
Operating System | Patch/SP | 32/64 Bit | VDI/RDS | Additional Variants/Specs | Remote Apps | Comments and Caveats
WinXP | SP3 | 32 only | VDI | Professional | NS |
Win7 | Base/SP1 | Both | VDI | Professional/Enterprise | NS | Remote apps are supported only with Enterprise edition with RDP, which should be configured before upgrading to 7.0.
Win8.0 | | Both | VDI | Enterprise only | NS | Remote apps are supported only with Enterprise edition with RDP, which should be configured before upgrading to 7.0.
Win8.1 | | Both | VDI | Professional/Enterprise | NS | Remote apps are supported only with Enterprise edition with RDP, which should be configured before upgrading to 7.0.
Win10 | | 64 only | VDI | Enterprise | NS |
WinServer 2008r2 | SP1 | 64 only | VDI/RDS | Data Center Edition Only | Supported for RDSH PCoIP | WinServer 2008r2 supported for both VDI and RDSH
WinServer 2012 | | 64 only | RDS | Standard (Tested), DataCenter (Supported/Not Tested) | Supported for RDSH PCoIP |
WinServer 2012r2 | | 64 only | RDS | Standard (Tested), DataCenter (Supported/Not Tested) | Supported for RDSH PCoIP |
Note: Existing pools with VDI remote apps will continue to function, but expanding existing pools and adding new pools with VDI remote apps are no longer supported.
NOT SUPPORTED
Operating System | Patch/SP | 32/64 Bit | VDI/RDS | Additional Variants/Specs | Remote Apps | Comments and Caveats
WinXP | SP3 | 64 only | | | | Support no versions of XP other than SP3 - 32B
WinXP | SP2 | Both | | | | Support no versions of XP other than SP3 - 32B
Win2003 | All | Both | | Std/Enterprise | |
Win8.0 | | Both | | Non-Enterprise | | Only support Win8 Enterprise
WinServer 2008 | Non R2 | Both | | | | Only support DataCenter, R2 version
Windows Vista | All | Both | | Any variant | |