Virtualization
Applies To: Windows Server 2016
Virtualization in Windows Server 2016 is one of the foundational technologies required to
create your software-defined infrastructure. Along with networking and storage,
virtualization features deliver the flexibility you need to power workloads for your
customers.
Windows Server 2016 Virtualization technologies include updates to Hyper-V, Hyper-V
Virtual Switch, and Guarded Fabric and Shielded Virtual Machines (VMs) that improve
security, scalability, and reliability. Updates to failover clustering, networking, and storage make it even easier to
deploy and manage these technologies when used with Hyper-V.
Windows Containers is a new technology that offers you another way to deploy flexible, software-based computing
power.
NOTE
To download Windows Server 2016, see Windows Server Evaluations.
The following sections contain brief technology overviews and links to Virtualization documentation.
Guarded Fabric and Shielded VMs
As a cloud service provider or enterprise private cloud administrator, you can use a guarded fabric to provide a
more secure environment for VMs. A guarded fabric consists of one Host Guardian Service (HGS) - typically, a
cluster of three nodes - plus one or more guarded hosts, and a set of shielded VMs.
For more information, see Guarded Fabric and Shielded VMs.
Hyper-V
The Hyper-V technology provides computing resources through hardware virtualization. Hyper-V creates a
software version of a computer, called a virtual machine, which you use to run an operating system and applications.
You can run multiple virtual machines at the same time, and can create and delete them as needed.
Hyper-V requires specific hardware to create the virtualization environment. For details, see System requirements
for Hyper-V on Windows Server 2016.
Hyper-V on Windows Server 2016
Hyper-V is a server role in both Windows Server 2016 Datacenter and Standard editions.
Learn more about Hyper-V, the hardware you need, the operating systems you can run in your virtual machines,
and more. If you're new to Hyper-V, start with the Hyper-V Technology Overview.
For more information, see Hyper-V on Windows Server 2016
Hyper-V on Windows 10
Hyper-V is available in some versions of Windows 10, Windows 8.1, and Windows 8.
Hyper-V on Windows is geared toward development and test activities and gives you a quick and easy way to run
different operating systems without deploying more hardware.
For more information, see Hyper-V on Windows 10.
Microsoft Hyper-V Server 2016
The Hyper-V technology is also available separately from Windows and Windows Server, as a free, standalone
product. Hyper-V Server is commonly used as the host in a virtualized desktop infrastructure (VDI) environment.
For more information, see Microsoft Hyper-V Server 2016.
Hyper-V Virtual Switch
The Hyper-V Virtual Switch is a software-based layer-2 Ethernet network switch that is included in all versions of
Hyper-V.
Hyper-V Virtual Switch is available in Hyper-V Manager after you install the Hyper-V server role.
Included in Hyper-V Virtual Switch are programmatically managed and extensible capabilities that allow you to
connect virtual machines to both virtual networks and the physical network.
In addition, Hyper-V Virtual Switch provides policy enforcement for security, isolation, and service levels.
Additional Virtualization Technologies for Windows Server 2016 and
Windows 10
Following are links to documentation for other Microsoft Windows virtualization technologies.
Windows Containers
You can use Windows Server and Hyper-V containers to provide standardized runtime environments for
development, test, and production teams.
Windows Containers provide operating system-level virtualization that allows multiple isolated applications to be
run on a single system. Two different types of container runtimes are included with the feature, each with a
different degree of application isolation.
Windows Server Containers achieve isolation through namespace and process isolation.
Hyper-V Containers encapsulate each container in a light-weight virtual machine.
For more information, see Windows Containers Documentation on the Microsoft Developer Network (MSDN).
Guarded fabric and shielded VMs
Applies To: Windows Server 2016
One of the most important goals of providing a hosted environment is to guarantee the security of the virtual
machines running in the environment. As a cloud service provider or enterprise private cloud administrator, you
can use a guarded fabric to provide a more secure environment for VMs. A guarded fabric consists of one Host
Guardian Service (HGS) - typically, a cluster of three nodes - plus one or more guarded hosts, and a set of shielded
virtual machines (VMs).
IMPORTANT
Ensure that you have installed the latest cumulative update before you deploy shielded virtual machines in production.
Videos, blog, and overview topic about guarded fabrics and shielded
VMs
Video: Introduction to Shielded Virtual Machines in Windows Server 2016
Video: Dive into Shielded VMs with Windows Server 2016 Hyper-V
Video: Deploying Shielded VMs and a Guarded Fabric with Windows Server 2016
Blog: Datacenter and Private Cloud Security Blog
Overview: Guarded fabric and shielded VMs overview
Planning topics
Planning guide for hosters
Planning guide for tenants
Deployment topics
Deployment Guide
Quick start
Deploy HGS
Deploy guarded hosts
Configuring the fabric DNS for hosts that will become guarded hosts
Deploy a guarded host using AD mode
Deploy a guarded host using TPM mode
Confirm guarded hosts can attest
Shielded VMs - Hosting service provider deploys guarded hosts in VMM
Deploy shielded VMs
Create a shielded VM template
Prepare a VM Shielding helper VHD
Set up Windows Azure Pack
Create a shielding data file
Deploy a shielded VM by using Windows Azure Pack
Deploy a shielded VM by using Virtual Machine Manager
Operations and management topic
Managing the Host Guardian Service
Guarded fabric and shielded VMs overview
Applies To: Windows Server 2016
Overview of the guarded fabric
Virtualization security is a major investment area in Windows Server 2016 Hyper-V. In addition to protecting hosts
or other virtual machines from a virtual machine running malicious software, we also need to protect virtual
machines from a compromised host. Because a virtual machine is just a file, it needs to be protected from attacks
on the storage system, the network, and while it is backed up. This is a fundamental danger for every virtualization
platform today, whether it's Hyper-V, VMware or any other. Quite simply, if a virtual machine gets out of an
organization (either maliciously or accidentally), that virtual machine can be run on any other system. Protecting
high value assets in your organization, such as domain controllers, sensitive file servers, and HR systems, is a top
priority.
To help protect against compromised fabric, Windows Server 2016 Hyper-V introduces shielded VMs. A shielded
VM is a generation 2 VM (supported on Windows Server 2012 and later) that has a virtual TPM, is encrypted using
BitLocker and can only run on healthy and approved hosts in the fabric. Shielded VMs and guarded fabric enable
cloud service providers or enterprise private cloud administrators to provide a more secure environment for tenant
VMs.
A guarded fabric consists of:
1 Host Guardian Service (HGS) (typically, a cluster of 3 nodes)
1 or more guarded hosts
A set of shielded virtual machines (VMs)
The diagram below shows how the Host Guardian Service uses attestation to ensure that only known, valid hosts
can start the shielded VMs, and key protection to securely release the keys for shielded VMs.
When a tenant creates shielded VMs that run on a guarded fabric, the Hyper-V hosts and the shielded VMs
themselves are protected by the HGS. The HGS provides two distinct services: attestation and key protection. The
Attestation service ensures only trusted Hyper-V hosts can run shielded VMs while the Key Protection Service
provides the keys necessary to power them on and to live migrate them to other guarded hosts.
Video: Introduction to shielded virtual machines in Windows Server
2016
Attestation modes in the Guarded Fabric solution
The HGS supports two different attestation modes for a guarded fabric:
TPM-trusted attestation (Hardware based)
Admin-trusted attestation (Active Directory based)
TPM-trusted attestation is recommended because it offers stronger assurances, as explained in the following table,
but it requires that your Hyper-V hosts have TPM 2.0. If you currently do not have TPM 2.0, you can use
admin-trusted attestation. If you decide to move to TPM-trusted attestation when you acquire new hardware, you
can switch the attestation mode on the Host Guardian Service with little or no interruption to your fabric (a sketch
of the switch follows the table below).
TPM-trusted attestation: Offers the strongest possible protections but also requires more configuration steps. Host
hardware and firmware must include TPM 2.0 and UEFI 2.3.1 with secure boot enabled.
Host assurances: Guarded hosts that can run shielded VMs are approved based on their TPM identity, measured
boot sequence and code integrity policies, so that you can ensure that these hosts are only running approved code.

Admin-trusted attestation: Intended to support existing host hardware where TPM 2.0 is not available. Requires
fewer configuration steps and is compatible with commonplace server hardware.
Host assurances: Guarded hosts that can run shielded VMs are approved by the Host Guardian Service based on
membership in a designated Active Directory Domain Services (AD DS) security group.
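For example, switching the attestation mode on an existing HGS is a single cmdlet call. The following is a minimal
sketch; it assumes the TPM identifiers, baselines, and code integrity policies for your hosts have already been
registered with HGS:

# Run on an HGS node after the TPM artifacts for all hosts have been registered.
Set-HgsServer -TrustTpm

# To fall back to admin-trusted attestation:
Set-HgsServer -TrustActiveDirectory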
Assurances provided by the Host Guardian Service
HGS, together with the methods for creating shielded VMs, helps provide the following assurances.
BitLocker encrypted disks (OS disks and data disks): Shielded VMs use BitLocker to protect their disks. The
BitLocker keys needed to boot the VM and decrypt the disks are protected by the shielded VM's virtual TPM using
industry-proven technologies such as secure measured boot. While shielded VMs only automatically encrypt and
protect the operating system disk, you can encrypt data drives attached to the shielded VM as well.

Deployment of new shielded VMs from "trusted" template disks/images: When deploying new shielded VMs, tenants
are able to specify which template disks they trust. Shielded template disks have signatures that are computed at a
point in time when their content is deemed trustworthy. The disk signatures are then stored in a signature catalog,
which tenants securely provide to the fabric when creating shielded VMs. During provisioning of shielded VMs, the
signature of the disk is computed again and compared to the trusted signatures in the catalog. If the signatures
match, the shielded VM is deployed. If the signatures do not match, the shielded template disk is deemed
untrustworthy and deployment fails.

Protection of passwords and other secrets when a shielded VM is created: When creating VMs, it is necessary to
ensure that VM secrets, such as the trusted disk signatures, RDP certificates, and the password of the VM's local
Administrator account, are not divulged to the fabric. These secrets are stored in an encrypted file called a shielding
data file (a .PDK file), which is protected by tenant keys and uploaded to the fabric by the tenant. When a shielded
VM is created, the tenant selects the shielding data to use, which securely provides these secrets only to the trusted
components within the guarded fabric.

Tenant control of where the VM can be started: Shielding data also contains a list of the guarded fabrics on which a
particular shielded VM is permitted to run. This is useful, for example, in cases where a shielded VM typically
resides in an on-premises private cloud but may need to be migrated to another (public or private) cloud for
disaster recovery purposes. The target cloud or fabric must support shielded VMs and the shielded VM must permit
that fabric to run it.
What is shielding data and why is it necessary?
A shielding data file (also called a provisioning data file or PDK file) is an encrypted file that a tenant or VM owner
creates to protect important VM configuration information, such as the administrator password, RDP and other
identity-related certificates, domain-join credentials, and so on. A fabric administrator uses the shielding data file
when creating a shielded VM, but is unable to view or use the information contained in the file.
Among other things, a shielding data file contains secrets such as:
Administrator credentials
An answer file (unattend.xml)
A security policy that determines whether VMs created using this shielding data are configured as shielded or
encryption supported
Remember, VMs configured as shielded are protected from fabric admins whereas encryption supported
VMs are not
An RDP certificate to secure remote desktop communication with the VM
A volume signature catalog that contains a list of trusted, signed template-disk signatures that a new VM is
allowed to be created from
A Key Protector (or KP) that defines which guarded fabrics a shielded VM is authorized to run on
The shielding data file (PDK file) provides assurances that the VM will be created in the way the tenant intended.
For example, when the tenant places an answer file (unattend.xml) in the shielding data file and delivers it to the
hosting provider, the hosting provider cannot view or make changes to that answer file. Similarly, the hosting
provider cannot substitute a different VHDX when creating the shielded VM, because the shielding data file
contains the signatures of the trusted disks that shielded VMs can be created from.
The following figure shows the shielding data file and related configuration elements.
What are the types of virtual machines that a guarded fabric can run?
Guarded fabrics are capable of running VMs in one of three possible ways:
1. A normal VM offering no protections above and beyond previous versions of Hyper-V
2. An encryption-supported VM whose protections can be configured by a fabric admin
3. A shielded VM whose protections are all switched on and cannot be disabled by a fabric admin
Encryption-supported VMs are intended for use where the fabric administrators are fully trusted. For example, an
enterprise might deploy a guarded fabric in order to ensure VM disks are encrypted at-rest for compliance
purposes. Fabric administrators can continue to use convenient management features, such as VM console
connections, PowerShell Direct, and other day-to-day management and troubleshooting tools.
Shielded VMs are intended for use in fabrics where the data and state of the VM must be protected from both
fabric administrators and untrusted software that might be running on the Hyper-V hosts. For example, shielded
VMs will never permit a VM console connection whereas a fabric administrator can turn this protection on or off
for encryption supported VMs.
The following table summarizes the differences between encryption-supported and shielded VMs.
Capability (generation 2 encryption supported / generation 2 shielded):
Secure Boot: Yes, required but configurable / Yes, required and enforced
vTPM: Yes, required but configurable / Yes, required and enforced
Encrypt VM state and live migration traffic: Yes, required but configurable / Yes, required and enforced
Integration components: Configurable by fabric admin / Certain integration components blocked (e.g. data
exchange, PowerShell Direct)
Virtual Machine Connection (Console), HID devices (e.g. keyboard, mouse): On, cannot be disabled / Disabled
(cannot be enabled)
COM/Serial ports: Supported / Disabled (cannot be enabled)
Attach a debugger (to the VM process): Supported / Disabled (cannot be enabled)
Both shielded VMs and encryption-supported VMs continue to support commonplace fabric management
capabilities, such as Live Migration, Hyper-V replica, VM checkpoints, and so on.
The Host Guardian Service in action: How a shielded VM is powered on
1. VM01 is powered on.
Before a guarded host can power on a shielded VM, the host must first be affirmatively attested as healthy.
To prove that it is healthy, the host must present a certificate of health to the Key Protection Service (KPS). The certificate
of health is obtained through the attestation process.
2. Host requests attestation.
The guarded host requests attestation. The mode of attestation is dictated by the Host Guardian Service:
Admin-trusted attestation: Hyper-V host sends a Kerberos ticket, which identifies the security groups that
the host is in. HGS validates that the host belongs to a security group that was configured earlier by the
trusted HGS admin.
TPM-trusted attestation: Host sends information that includes:
TPM-identifying information (its endorsement key)
Information about processes that were started during the most recent boot sequence (the TCG log)
Information about the Code Integrity (CI) policy that was applied on the host
With admin-trusted attestation, the health of the host is determined exclusively by its membership in
a trusted security group.
With TPM-trusted attestation, the host's boot measurements and code integrity policy determine its
health.
Attestation happens when the host starts and every 8 hours thereafter. If for some reason a host
doesn't have an attestation certificate when a VM tries to start, this also triggers attestation.
3. Attestation succeeds (or fails).
The attestation service uses the attestation mode to determine which checks it needs to make (group
membership vs. boot measurements, and so on) in order to affirmatively (successfully) attest the host.
4. Attestation certificate sent to host.
Assuming attestation was successful, a health certificate is sent to the host and the host is considered
"guarded" (authorized to run shielded VMs). The host uses the health certificate to authorize the Key
Protection Service to securely release the keys needed to work with shielded VMs.
5. Host requests VM key.
Guarded hosts do not have the keys needed to power on a shielded VM (VM01 in this case). To obtain the
necessary keys, the guarded host must provide the following to KPS:
The current health certificate
An encrypted secret (a Key Protector or KP) that contains the keys necessary to power on VM01. The
secret is encrypted using other keys that only KPS knows.
6. Release of key.
KPS examines the health certificate to determine its validity. The certificate must not have expired and KPS
must trust the attestation service that issued it.
7. Key is returned to host.
If the health certificate is valid, KPS attempts to decrypt the secret and securely return the keys needed to
power on the VM. Note that the keys are encrypted to the guarded host's VBS.
8. Host powers on VM01.
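On a guarded host, you can quickly check whether attestation succeeded and which HGS URLs the host is using. A
minimal sketch (output fields can vary by build):

# Run on the Hyper-V host; IsHostGuarded and AttestationStatus show whether the host passed attestation.
Get-HgsClientConfiguration

# If attestation fails, the built-in diagnostics can help pinpoint the reason.
Get-HgsTrace -RunDiagnostics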
Guarded fabric and shielded VM glossary
Host Guardian Service (HGS): A Windows Server role that is installed on a secured cluster of bare-metal servers
and that is able to measure the health of a Hyper-V host and release keys to healthy Hyper-V hosts when
powering on or live migrating shielded VMs. These two capabilities are fundamental to a shielded VM solution and
are referred to as the Attestation service and Key Protection Service, respectively.

guarded host: A Hyper-V host on which shielded VMs can run. A host can only be considered guarded when it has
been deemed healthy by the HGS Attestation service. Shielded VMs cannot be powered on or live migrated to a
Hyper-V host that has not yet attested or that failed attestation.

guarded fabric: The collective term used to describe a fabric of Hyper-V hosts and their Host Guardian Service that
has the ability to manage and run shielded VMs.

shielded virtual machine (VM): A virtual machine that can only run on guarded hosts and is protected from
inspection, tampering and theft by malicious fabric admins and host malware.

fabric administrator: A public or private cloud administrator who can manage virtual machines. In the context of a
guarded fabric, a fabric administrator does not have access to shielded VMs, or to the policies that determine which
hosts shielded VMs can run on.

HGS administrator: A trusted administrator in the public or private cloud who has the authority to manage the
policies and cryptographic material for guarded hosts, that is, hosts on which a shielded VM can run.

provisioning data file or shielding data file (PDK file): An encrypted file that a tenant or user creates to hold
important VM configuration information and to protect that information from access by others. For example, a
shielding data file can contain the password that will be assigned to the local Administrator account when the VM
is created.

Virtualization-based Security (VBS): A Hyper-V based processing and storage environment on Windows Server
2016 that is protected from administrators. Virtual Secure Mode provides the system with the ability to store
operating system keys that are not visible to an operating system administrator.

virtual TPM: A virtualized version of a Trusted Platform Module (TPM). In Windows Server 2016, with the Hyper-V
role, you can provide a virtual TPM 2.0 device so that virtual machines can be encrypted, just as a physical TPM
allows a physical machine to be encrypted.
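To illustrate the virtual TPM concept, the sketch below adds a vTPM to a generation 2 VM using a local,
self-signed guardian. This is a lab-only shortcut (the guardian and VM names are placeholders), not a substitute for
a guarded fabric, where the key protector would instead reference the HGS guardian:

# Lab-only sketch: create a local guardian and key protector, then enable the vTPM on a generation 2 VM.
$guardian = New-HgsGuardian -Name 'LocalTestGuardian' -GenerateCertificates
$keyProtector = New-HgsKeyProtector -Owner $guardian -AllowUntrustedRoot
Set-VMKeyProtector -VMName 'VM01' -KeyProtector $keyProtector.RawData
Enable-VMTPM -VMName 'VM01'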
See also
Guarded fabric and shielded VMs
Blog: Datacenter and Private Cloud Security Blog
Video: Introduction to Shielded Virtual Machines in Windows Server 2016
Video: Dive into Shielded VMs with Windows Server 2016 Hyper-V
Planning a Guarded Fabric
The following topics cover planning for the deployment of a guarded fabric and shielded virtual machines (VMs):
Guarded Fabric and Shielded VM Planning Guide for Hosters
Compatible hardware with Windows Server 2016 Virtualization-based protection of Code Integrity
Guarded Fabric and Shielded VM Planning Guide for Tenants
Guarded Fabric and Shielded VM Planning Guide for
Hosters
Applies To: Windows Server 2016
This topic covers planning decisions that will need to be made to enable shielded virtual machines to run on your
fabric. Whether you upgrade an existing Hyper-V fabric or create a new fabric, running shielded VMs consists of
two main components:
The Host Guardian Service (HGS) provides attestation and key protection so that you can make sure that
shielded VMs will run only on approved and healthy Hyper-V hosts.
Approved and healthy Windows Server 2016 Hyper-V hosts on which shielded VMs (and regular VMs) can run
— these are known as guarded hosts.
Decision #1: Trust level in the fabric
How you implement the Host Guardian Service and guarded Hyper-V hosts will depend mainly on the strength of
trust that you are looking to achieve in your fabric. The strength of trust is governed by the attestation mode. There
are two mutually-exclusive options:
1. Admin-trusted attestation
If your requirements are primarily driven by compliance that requires virtual machines be encrypted both at
rest as well as in-flight, then you will use admin-trusted attestation. This option works well for general
purpose datacenters where you are comfortable with Hyper-V host and fabric administrators having access
to the guest operating systems of virtual machines for day-to-day maintenance and operations.
2. TPM-trusted attestation
If your goal is to help protect virtual machines from malicious admins or a compromised fabric, then you
will use TPM-trusted attestation. This option works well for multi-tenant hosting scenarios as well as for
high-value assets in enterprise environments, such as domain controllers or content servers like SQL or
SharePoint.
The trust level you choose will dictate the hardware requirements for your Hyper-V hosts as well as the policies
that you apply on the fabric. If necessary, you can deploy your guarded fabric using existing hardware and
admin-trusted attestation, and then convert it to TPM-trusted attestation when the hardware has been upgraded
and you need to strengthen fabric security.
Decision #2: Existing Hyper-V fabric versus a new separate Hyper-V
fabric
If you have an existing fabric (Hyper-V or otherwise), it is very likely that you can use it to run shielded VMs along
with regular VMs. Some customers choose to integrate shielded VMs into their existing tools and fabrics while
others separate the fabric for business reasons.
HGS admin planning for the Host Guardian Service
The Host Guardian Service (HGS) is at the center of guarded fabric security so it must be installed and managed
separately from the Hyper-V fabric.
Recommended installation:
Three-node cluster for high availability
Physical machines with a TPM (both 1.2 and 2.0 are supported)
Windows Server 2016 Server Core
Network line of sight to the fabric allowing HTTPS
HTTPS certificate for access validation

Sizing: Each mid-size (8 core/4 GB) HGS server node can handle 1,000 Hyper-V hosts.

Management: Designate specific people who will manage HGS. They should be separate from fabric administrators.
For comparison, HGS clusters can be thought of in the same manner as a Certificate Authority (CA) in terms of
administrative isolation, physical deployment and overall level of security sensitivity.

Host Guardian Service Active Directory: By default, HGS installs its own internal Active Directory for management.
This is a self-contained, self-managed forest and is the recommended configuration to help isolate HGS from your
fabric. If you already have a highly privileged Active Directory forest that you use for isolation, you can use that
forest instead of the HGS default forest. It is important that HGS is not joined to a domain in the same forest as the
Hyper-V hosts or your fabric management tools. Doing so could allow a fabric admin to gain control over HGS.

Disaster recovery: There are two options:
1. Install a separate HGS cluster in each datacenter and authorize shielded VMs to run in both the primary and the
backup datacenters. This avoids the need to stretch the cluster across a WAN and allows you to isolate virtual
machines such that they run only in their designated site.
2. Install HGS on a stretch cluster between two (or more) datacenters. This provides resiliency if the WAN goes
down, but pushes the limits of failover clustering. You cannot isolate workloads to one site; a VM authorized to run
in one site can run on any other.
You should also back up every HGS cluster by exporting its configuration so that you can always recover locally,
as shown in the sketch after this list. For more information, see Export-HgsServerState and Import-HgsServerState.

Host Guardian Service keys: A Host Guardian Service uses two asymmetric key pairs — an encryption key and a
signing key — each represented by an SSL certificate. There are two options to generate these keys:
1. Internal certificate authority – you can generate these keys using your internal PKI infrastructure. This is suitable
for a datacenter environment.
2. Publicly trusted certificate authorities – use a set of keys obtained from a publicly trusted certificate authority.
This is the option that hosters should use.
Note that while it is possible to use self-signed certificates, it is not recommended for deployment scenarios other
than proof-of-concept labs.
In addition to having HGS keys, a hoster can use "bring your own key," where tenants can provide their own keys
so that some (or all) tenants can have their own specific HGS key. This option is suitable for hosters that can
provide an out-of-band process for tenants to upload their keys.

Host Guardian Service key storage: For the strongest possible security, we recommend that HGS keys are created
and stored exclusively in a Hardware Security Module (HSM). If you are not using HSMs, applying BitLocker on the
HGS servers is strongly recommended.
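A minimal sketch of the backup mentioned under Disaster recovery (the output path is a placeholder; the exported
state should itself be stored securely, and certificates or HSM-backed keys must be backed up separately):

# Run on an HGS node; exports HGS configuration, including attestation policies, to a file for local recovery.
Export-HgsServerState -Path 'C:\HgsBackup\HgsServerState.xml'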
Fabric admin planning for guarded hosts
Hardware:
Admin-trusted attestation: You can use any existing hardware as your guarded hosts. There are a few exceptions;
to make sure that your hosts can use the new Windows Server 2016 security mechanisms, see Compatible
hardware with Windows Server 2016 Virtualization-based protection of Code Integrity.
TPM-trusted attestation: You can use any hardware that has the Hardware Assurance Additional Qualification as
long as it is configured appropriately (see Server configurations that are compliant with Shielded VMs and
Virtualization-based protection of code integrity for the specific configuration). This includes TPM 2.0, and UEFI
version 2.3.1c and above.

OS: We recommend using Windows Server 2016 Server Core for the Hyper-V host OS.

Performance implications: Based on performance testing, we anticipate a roughly 5% density difference between
running shielded VMs and non-shielded VMs. This means that if a given Hyper-V host can run 20 non-shielded
VMs, we expect that it can run 19 shielded VMs. Make sure to verify sizing with your typical workloads. For
example, there might be some outliers with intensive write-oriented IO workloads that will further affect the
density difference.

Branch office considerations: If your Hyper-V host is running in a branch office, it needs to have connectivity to the
Host Guardian Service to power on or to live migrate shielded VMs.
Compatible hardware with Windows Server 2016
Virtualization-based protection of Code Integrity
Windows Server 2016 introduces Virtualization-based protection of code integrity to help protect physical and
virtual machines from attacks that modify system code. To achieve this high protection level, Microsoft works in
tandem with computer hardware manufacturers (Original Equipment Manufacturers, or OEMs) to prevent malicious
writes to system execution code. This protection can be applied to any system and is used as one of the building
blocks for measuring Hyper-V host health for shielded virtual machines (VMs).
As with any hardware-based protection, some systems might not be compliant due to issues such as incorrectly
marking memory pages as executable or actually trying to modify code at run time, which may result in
unexpected failures, including data loss or a blue screen error (also called a stop error).
To be compatible with and fully support the new security feature in Windows Server 2016, OEMs need to
implement the Memory Attributes Table defined in UEFI 2.6, which was published in January 2016. The adoption of
the new UEFI standard takes time; meanwhile, to prevent customers from encountering issues, we want to provide
information about systems and configurations that we have tested this feature set with, as well as systems that are
known to be incompatible.
Non-compatible systems
The following configurations are known to be non-compatible with Virtualization-based protection of code
integrity and cannot be used as a host for Shielded VMs:
Dell PowerEdge servers running PERC H330 RAID controllers. For more information, see the following article
from Dell Support: H330 – Enabling “Host Guardian Hyper-V Support” or “Device Guard” on Win 2016 OS
causes OS boot failure.
Compatible systems
These are the systems that we and our partners have tested in our environments. Make sure to verify that the
system works as expected in your environment:
Virtual Machines – You can enable Virtualization-based protection of code integrity on virtual machines that
run on a Windows Server 2016 Hyper-V host.
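One common way to check whether virtualization-based security and HVCI are actually running on a host is to
query the Device Guard WMI class. This is a general diagnostic sketch, not something specific to guarded fabric:

# Query VBS/Device Guard status on the local host.
# In SecurityServicesRunning, 1 = Credential Guard and 2 = HVCI (virtualization-based protection of code integrity).
Get-CimInstance -Namespace 'root\Microsoft\Windows\DeviceGuard' -ClassName 'Win32_DeviceGuard' |
    Select-Object VirtualizationBasedSecurityStatus, SecurityServicesConfigured, SecurityServicesRunning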
Guarded Fabric and Shielded VM Planning Guide for
Tenants
Applies To: Windows Server 2016
This topic focuses on VM owners who would like to protect their virtual machines (VMs) for compliance and
security purposes. Regardless of whether the VMs run on a hosting provider’s guarded fabric or a private guarded
fabric, VM owners need to control the security level of their shielded VMs, which includes maintaining the ability to
decrypt them if needed.
There are three areas to consider when using shielded VMs:
The security level for the VMs
The cryptographic keys used to protect them
Shielding data—sensitive information used to create shielded VMs
Security level for the VMs
When deploying shielded VMs, one of two security levels must be selected:
Shielded
Encryption Supported
Both shielded and encryption-supported VMs have a virtual TPM attached to them and those that run Windows are
protected by BitLocker. The primary difference is that shielded VMs block access by fabric administrators while
encryption-supported VMs permit fabric administrators the same level of access as they would have to a regular
VM. For more details about these differences, see Guarded fabric and shielded VMs overview.
Choose Shielded VMs if you are looking to protect the VM from a compromised fabric (including compromised
administrators). They should be used in environments where fabric administrators and the fabric itself are not
trusted. Choose Encryption Supported VMs if you are looking to meet a compliance bar that might require both
encryption at-rest and encryption of the VM on the wire (e.g., during live migration).
Encryption-supported VMs are ideal in environments where fabric administrators are fully trusted but encryption
remains a requirement.
You can run a mixture of regular VMs, shielded VMs, and encryption-supported VMs on a guarded fabric and even
on the same Hyper-V host.
Whether a VM is shielded or encryption-supported is determined by the shielding data that is selected when
creating the VM. VM owners configure the security level when creating the shielding data (see the Shielding data
section). Note that once this choice has been made, it cannot be changed while the VM remains on the
virtualization fabric.
Cryptographic keys used for shielded VMs
Shielded VMs are protected from virtualization fabric attack vectors using encrypted disks and various other
encrypted elements which can only be decrypted by:
An Owner key – this is a cryptographic key maintained by the VM-owner that is typically used for last-resort
recovery or troubleshooting. VM owners are responsible for maintaining owner keys in a secure location.
One or more Guardians (Host Guardian keys) – each Guardian represents a virtualization fabric on which an
owner authorizes shielded VMs to run. Enterprises often have both a primary and a disaster recovery (DR)
virtualization fabric and would typically authorize their shielded VMs to run on both. In some cases, the
secondary (DR) fabric might be hosted by a public cloud provider. The private keys for any guarded fabric are
maintained only on the virtualization fabric, while its public keys can be downloaded and are contained within
its Guardian.
How do I create an owner key? An owner key is represented by two certificates: one for encryption and one for
signing. You can create these two certificates using your own PKI infrastructure or obtain SSL
certificates from a public certificate authority (CA). For test purposes, you can also create a self-signed certificate on
any device that runs Windows 10 or Windows Server 2016.
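For such a test scenario, a quick way to produce a matching pair of self-signed owner certificates is to create an
owner guardian with the Shielded VM tools. This is a sketch for lab use; 'Owner' is a placeholder name, and
production deployments should use CA-issued certificates instead:

# Creates an owner guardian backed by self-signed signing and encryption certificates (test/lab use only).
$owner = New-HgsGuardian -Name 'Owner' -GenerateCertificates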
How many owner keys should you have? You can use a single owner key or multiple owner keys. Best practices
recommend a single owner key for a group of VMs that share the same security, trust or risk level, and for
administrative control. You can share a single owner key for all your domain-joined shielded VMs and escrow that
owner key to be managed by the domain administrators.
Can I use my own keys for the Host Guardian? Yes, you can “Bring Your Own” key to the hosting provider and
use that key for your shielded VMs. This enables you to use your specific keys (vs. using the hosting provider key)
and can be used when you have specific security or regulations that you need to abide by. For key hygiene
purposes, the Host Guardian keys should be different than the Owner key.
Shielding data
Shielding data contains the secrets necessary to deploy shielded or encryption-supported VMs. It is also used when
converting regular VMs to shielded VMs.
Shielding data is created using the Shielding Data File Wizard and is stored in PDK files which VM owners upload
to the guarded fabric.
Shielded VMs help protect against attacks from a compromised virtualization fabric, so we need a safe mechanism
to pass sensitive initialization data, such as the administrator’s password, domain join credentials, or RDP
certificates, without revealing these to the virtualization fabric itself or to its administrators. In addition, shielding
data contains the following:
1. Security level – Shielded or encryption-supported
2. Owner and list of trusted Host Guardians where the VM can run
3. Virtual machine initialization data (unattend.xml, RDP certificate)
4. List of trusted signed template disks for creating the VM in the virtualization environment
When creating a shielded or encryption-supported VM or converting an existing VM, you will be asked to select the
shielding data instead of being prompted for the sensitive information.
How many shielding data files do I need? A single shielding data file can be used to create every shielded VM.
If, however, a given shielded VM requires that any of the four items be different, then an additional shielding data
file is necessary. For example, you might have one shielding data file for your IT department and a different
shielding data file for the HR department because their initial administrator passwords and RDP certificates differ.
While using separate shielding data files for each shielded VM is possible, it is not necessarily the optimal choice
and should be done for the right reasons. For example, if every shielded VM needs to have a different administrator
password, consider instead using a password management service or tool such as Microsoft’s Local Administrator
Password Solution (LAPS).
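As a rough sketch of how shielding data is assembled with the Shielded VM tools (the guardian metadata file,
signature catalog, unattend file, and names below are placeholders, and parameters may differ in your environment):

# Import the hoster's guardian metadata downloaded from the guarded fabric, and get your own owner guardian.
# -AllowUntrustedRoot is only needed if the fabric uses self-signed certificates.
$hosterGuardian = Import-HgsGuardian -Path '.\HosterGuardian.xml' -Name 'HosterFabric' -AllowUntrustedRoot
$owner = Get-HgsGuardian -Name 'Owner'

# Reference the trusted, signed template disk through its volume signature catalog (.vsc).
$diskRule = New-VolumeIDQualifier -VolumeSignatureCatalogFilePath '.\ServerOS.vsc' -VersionRule Equals

# Build the .pdk file with the answer file, security level, and trusted fabrics baked in.
New-ShieldingDataFile -ShieldingDataFilePath '.\Contoso.pdk' -Owner $owner -Guardian $hosterGuardian `
    -VolumeIDQualifier $diskRule -WindowsUnattendFile '.\unattend.xml' -Policy Shielded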
Creating a shielded VM on a virtualization fabric
There are several options for creating a shielded VM on a virtualization fabric (the following is relevant for both
shielded and encryption-supported VMs):
1. Create a shielded VM in your environment and upload it to the virtualization fabric
2. Create a new shielded VM from a signed template on the virtualization fabric
3. Shield an existing VM (the existing VM must be generation 2 and must be running Windows Server 2012 or
later)
Creating new VMs from a template is normal practice. However, since the template disk that is used to create new
shielded VMs resides on the virtualization fabric, additional measures are necessary to ensure that it has not been
tampered with by a malicious fabric administrator or by malware running on the fabric. This problem is solved
using signed template disks—signed template disks and their disk signatures are created by trusted administrators
or the VM owner. When a shielded VM is created, the template disk’s signature is compared with the signatures
contained within the specified shielding data file. If any of the shielding data file’s signatures match the template
disk’s signature, the deployment process continues. If no match can be found, the deployment process is aborted,
ensuring that VM secrets will not be compromised because of an untrustworthy template disk.
When using signed template disks to create shielded VMs, two options are available:
1. Use an existing signed template disk that is provided by your virtualization provider. In this case, the
virtualization provider maintains signed template disks.
2. Upload a signed template disk to the virtualization fabric. The VM owner is responsible for maintaining signed
template disks.
Deploying the Host Guardian Service for guarded
hosts and shielded VMs
Applies To: Windows Server 2016
One of the most important goals of providing a hosted environment is to guarantee the security of the virtual
machines running in the environment. As a cloud service provider or enterprise private cloud administrator, you
can use a guarded fabric to provide a more secure environment for VMs. A guarded fabric consists of one Host
Guardian Service (HGS) - typically, a cluster of three nodes - plus one or more guarded hosts, and a set of shielded
virtual machines (VMs).
The following topics tell how to set up a guarded fabric.
Deploy HGS: Setting up the Host Guardian Service (HGS)
Prepare for the Host Guardian Service deployment
Configure the first HGS node
Configure additional HGS nodes
Verify the HGS configuration
Deploy guarded hosts: Configuration steps for Hyper-V hosts that will become guarded hosts
Configure the fabric DNS
Add a guarded host in AD mode
Add a guarded host in TPM mode
Add host info to HGS
Add host info for AD mode
Add host info for TPM mode
Confirm attestation
Using System Center VMM to deploy guarded hosts
Deployment tasks for guarded fabrics and shielded VMs
The following tasks are involved in deploying a guarded fabric and creating shielded VMs. They are divided among
different administrator roles. Note that when the HGS admin configures HGS with authorized Hyper-V hosts, a
fabric admin will collect and provide identifying information about the hosts at the same time.
Verify HGS prerequisites
Configure first HGS node
Configure additional HGS nodes
Verify HGS configuration
Configure fabric DNS
Verify host prerequisites (Admin)
Verify host prerequisites (TPM)
Configure HGS with host information
Collect host information (Admin)
Collect host information (TPM)
Confirm hosts can attest
Configure VMM (optional)
Create template disks
Create a VM shielding helper disk for VMM (optional)
Set up Windows Azure Pack (optional)
Create shielding data file
Create shielded VMs using Windows Azure Pack
Create shielded VMs using VMM
See also
Guarded fabric and shielded VMs
Quick start for guarded fabric deployment
This topic explains what a guarded fabric is, its requirements, and a summary of the deployment process. For
detailed deployment steps, see Deploying the Host Guardian Service for guarded hosts and shielded VMs.
Prefer video? See the Microsoft Virtual Academy course Deploying Shielded VMs and a Guarded Fabric with
Windows Server 2016.
What is a guarded fabric
A guarded fabric is a Windows Server 2016 Hyper-V fabric capable of protecting tenant workloads against
inspection, theft, and tampering from malware running on the host, as well as from system administrators. These
virtualized tenant workloads—protected both at rest and in-flight—are called shielded VMs.
What are the requirements for a guarded fabric
The requirements for a guarded fabric include:
A place to run shielded VMs that is free from malicious software.
These are called guarded hosts. Guarded hosts are Windows Server 2016 Datacenter edition Hyper-V hosts
that can run shielded VMs only if they can prove they are running in a known, trusted state to an external
authority called the Host Guardian Service (HGS). The HGS is a new server role in Windows Server 2016, and
is typically deployed as a three-node cluster.
A way to verify a host is in a healthy state.
The HGS performs attestation, where it measures the health of guarded hosts.
A process to securely release keys to healthy hosts.
The HGS performs key protection and key release, where it releases the keys back to healthy hosts.
Management tools to automate the secure provisioning and hosting of shielded VMs.
Optionally, you can add these management tools to a guarded fabric:
System Center 2016 Virtual Machine Manager (VMM). VMM is recommended because it provides
additional management tooling beyond what you get from using just the PowerShell cmdlets that come
with Hyper-V and the guarded fabric workloads.
System Center 2016 Service Provider Foundation (SPF). This is an API layer between Windows Azure
Pack and VMM, and a prerequisite for using Windows Azure Pack.
Windows Azure Pack provides a good graphical web interface to manage a guarded fabric and shielded
VMs.
In practice, one decision must be made up front: the mode of attestation used by the guarded fabric. There are two
means—two mutually exclusive modes—by which HGS can measure that a Hyper-V host is healthy. When you
initialize HGS, you need to choose the mode:
Admin-based attestation, or AD mode, is less secure but easier to adopt
TPM-based attestation, or TPM mode, is more secure but requires more configuration and specific hardware
If necessary, you can deploy in AD mode using existing Hyper-V hosts that have been upgraded to Windows Server
2016 Datacenter edition, and then convert to the more secure TPM mode when supporting server hardware
(including TPM 2.0) is available.
Now that you know what the pieces are, let’s walk through an example of the deployment model.
How to get from a current Hyper-V fabric to a guarded fabric
Let's imagine this scenario—you have an existing Hyper-V fabric, like Contoso.com in the following picture, and
you want to build a Windows Server 2016 guarded fabric.
Step 1: Deploy the Hyper-V hosts running Windows Server 2016
The Hyper-V hosts need to run Windows Server 2016 Datacenter edition. If you are upgrading hosts, you can
upgrade from Standard edition to Datacenter edition.
Step 2: Deploy the Host Guardian Service (HGS)
Then install the HGS server role and deploy it as a three-node cluster, like the Relecloud.com example in the
following picture. This requires three PowerShell cmdlets:
To add the HGS role, use Install-WindowsFeature
To install the HGS, use Install-HgsServer
To initialize the HGS with your chosen mode of attestation, use Initialize-HgsServer
If your existing Hyper-V servers don’t meet the prerequisites for TPM mode (for example, they do not have TPM
2.0), you can initialize HGS using Admin-based attestation (AD mode), which requires an Active Directory trust with
the fabric domain.
In our example, let’s say Contoso initially deploys in AD mode in order to immediately meet compliance
requirements, and plans to convert to the more secure TPM-based attestation after suitable server hardware can be
purchased.
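As a rough sketch of those three steps on the first HGS node (relecloud.com, the service name, and the certificate
paths are placeholders, and parameters will vary with your environment and attestation mode):

# Add the Host Guardian Service role.
Install-WindowsFeature -Name HostGuardianServiceRole -IncludeManagementTools -Restart

# Create the new, self-contained HGS forest (run after the reboot; relecloud.com is a placeholder domain).
$adminPassword = Read-Host -AsSecureString -Prompt "Password for Directory Services restore mode"
Install-HgsServer -HgsDomainName 'relecloud.com' -SafeModeAdministratorPassword $adminPassword -Restart

# After the next reboot, initialize HGS in AD (admin-trusted) mode with previously obtained signing and encryption PFX files.
$pfxPassword = Read-Host -AsSecureString -Prompt "Password for the PFX files"
Initialize-HgsServer -HgsServiceName 'hgs' `
    -SigningCertificatePath 'C:\certs\signCert.pfx' -SigningCertificatePassword $pfxPassword `
    -EncryptionCertificatePath 'C:\certs\encCert.pfx' -EncryptionCertificatePassword $pfxPassword `
    -TrustActiveDirectory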
Step 3: Extract identities, hardware baselines, and code integrity policies
The process to extract identities from Hyper-V hosts depends on the attestation mode being used.
For AD mode, the ID of the host is its domain-joined computer account, which must be a member of a designated
security group in the fabric domain. Membership in the designated group is the only determination of whether the
host is healthy or not. Stated another way, none of the following rigorous validation steps used for TPM mode are
used for AD mode.
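For example (a hedged sketch; the group name and SID are placeholders for your own fabric security group), the
HGS administrator registers that group with HGS by its SID:

# Run on an HGS node in AD mode: authorize the fabric security group whose members may run shielded VMs.
Add-HgsAttestationHostGroup -Name 'GuardedHostGroup' -Identifier 'S-1-5-21-3623811015-3361044348-30300820-1013'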
For TPM mode, three things are required:
1. A public endorsement key (or EKpub) from the TPM 2.0 on each and every Hyper-V host. To capture the EKpub,
use Get-PlatformIdentifier.
2. A hardware baseline. If each of your Hyper-V hosts are identical, then a single baseline is all you need. If they are
not, then you'll need one for each class of hardware. The baseline is in the form of a Trusted Computing
Group log file, or TCGlog. The TCGlog contains everything that the host did, from the UEFI firmware, through the
kernel, right up to where the host is entirely booted. To capture the hardware baseline, install the Hyper-V role
and the Host Guardian Hyper-V Support feature, and use Get-HgsAttestationBaselinePolicy.
3. A Code Integrity policy. If each of your Hyper-V hosts are identical, then a single CI policy is all you need. If they
are not, then you’ll need one for each class of hardware. Windows Server 2016 and Windows 10 both have a
new form of enforcement for CI policies, called Hypervisor Code Integrity (HVCI). HVCI provides strong
enforcement and ensures that a host is only allowed to run binaries that a trusted admin has allowed it to run.
Those instructions are wrapped in a CI policy that is added to HGS. HGS measures each host’s CI policy before
they're permitted to run shielded VMs. To capture a CI policy, use New-CIPolicy. The policy must then be
converted to its binary form using ConvertFrom-CIPolicy.
That’s all—the guarded fabric is built, in terms of the infrastructure to run it.
Now you can create a shielded VM template disk and a shielding data file so shielded VMs can be provisioned
simply and securely.
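As a hedged sketch of capturing and registering the TPM-mode artifacts (host names, file paths, and policy names
are placeholders; the exact New-CIPolicy options depend on how tightly you want to lock down the hosts):

# On each Hyper-V host: capture the TPM 2.0 endorsement key (EKpub).
(Get-PlatformIdentifier -Name 'Host01').InnerXml | Out-File 'Host01.xml' -Encoding UTF8

# On a reference host (Hyper-V role plus the Host Guardian Hyper-V Support feature): capture the hardware baseline.
Get-HgsAttestationBaselinePolicy -Path 'HWConfig1.tcglog'

# Create a code integrity policy for this hardware/software class and convert it to binary form.
New-CIPolicy -Level Publisher -Fallback Hash -UserPEs -FilePath 'HW1CodeIntegrity.xml'
ConvertFrom-CIPolicy -XmlFilePath 'HW1CodeIntegrity.xml' -BinaryFilePath 'HW1CodeIntegrity.p7b'

# On an HGS node: register the host identity, baseline, and CI policy with the Attestation service.
Add-HgsAttestationTpmHost -Path 'Host01.xml' -Name 'Host01'
Add-HgsAttestationTpmPolicy -Path 'HWConfig1.tcglog' -Name 'HWConfig1'
Add-HgsAttestationCIPolicy -Path 'HW1CodeIntegrity.p7b' -Name 'HW1CodeIntegrity'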
Step 4: Create a template for shielded VMs
A shielded VM template protects template disks by creating a signature of the disk at a known trustworthy point in
time. If the template disk is later infected by malware, its signature will differ from the original template, which will
be detected by the secure shielded VM provisioning process. Shielded template disks are created by running the
Shielded Template Disk Creation Wizard against a regular template disk.
This wizard is included with the Shielded VM Tools feature in the Remote Server Administration Tools for
Windows 10. After you download RSAT, run this command to install the Shielded VM Tools feature:
Install-WindowsFeature RSAT-Shielded-VM-Tools -Restart
A trustworthy administrator, such as the fabric administrator or the VM owner, will need a certificate (often
provided by a Hosting Service Provider) to sign the VHDX template disk.
The disk signature is computed over the OS partition of the virtual disk. If anything changes on the OS partition, the
signature will also change. This allows users to strongly identify which disks they trust by specifying the
appropriate signature.
Review the template disk requirements before you get started.
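If you prefer PowerShell to the wizard, signing the disk and exporting its signature catalog looks roughly like the
following sketch (the certificate lookup, paths, template name, and version are placeholders):

# Sign a prepared template VHDX with a template-signing certificate and record its trusted signature.
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object Subject -like '*Template Signing*' | Select-Object -First 1
Protect-TemplateDisk -Path 'C:\Templates\ServerOS.vhdx' -TemplateName 'ServerOSTemplate' -Version '1.0.0.0' -Certificate $cert

# Export the volume signature catalog (.vsc) that tenants reference in their shielding data files.
Save-VolumeSignatureCatalog -TemplateDiskPath 'C:\Templates\ServerOS.vhdx' -VolumeSignatureCatalogPath 'C:\Templates\ServerOS.vsc'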
Step 5: Create a shielding data file
A shielding data file, also known as a .pdk file, captures sensitive information about the virtual machine, such as the
Administrator password.
The shielding data file also includes the security policy setting for the shielded VM. You must choose one of two
security policies when you create a shielding data file:
Shielded
The most secure option, which eliminates many administrative attack vectors.
Encryption supported
A lesser level of protection that still provides the compliance benefits of being able to encrypt a VM, but
allows Hyper-V admins to do things like use VM console connection and PowerShell Direct.
You can add optional management pieces like VMM or Windows Azure Pack. If you’d like to create a VM without
installing those pieces, see Step by step – Creating Shielded VMs without VMM.
Step 6: Create a shielded VM
Creating shielded virtual machines differs very little from regular virtual machines. In Windows Azure Pack, the
experience is even easier than creating a regular VM because you only need to supply a name, shielding data file
(containing the rest of the specialization information), and the VM network.
Deploy the Host Guardian Service (HGS)
Applies To: Windows Server 2016
These subtopics cover HGS prerequisites, setting up the HGS nodes, and verifying the HGS configuration.
Prepare for the Host Guardian Service deployment
Configure the first HGS node
Configure additional HGS nodes
Verify the HGS configuration
See also
Deploying the Host Guardian Service for guarded hosts and shielded VMs
Configuration steps for Hyper-V hosts that will become guarded hosts
Prepare for the Host Guardian Service deployment
Applies To: Windows Server 2016
This topic covers HGS prerequisites and initial steps to prepare for the HGS deployment.
Prerequisites for the Host Guardian Service
Hardware: HGS can be run on physical or virtual machines, but physical machines are recommended.
If you want to run HGS as a three-node physical cluster (for availability), you must have three physical
servers. (As a best practice for clustering, the three servers should have very similar hardware.)
Operating system: Windows Server 2016, Standard or Datacenter edition.
Server Roles: Host Guardian Service and supporting server roles.
Configuration permissions/privileges for the fabric (host) domain: You will need to configure DNS
forwarding between the fabric (host) domain and the HGS domain. If you are using Admin-trusted
attestation (AD mode), you will need to configure an Active Directory trust between the fabric domain and
the HGS domain.
Supported upgrade scenarios
Before you deploy a guarded fabric, make sure the servers have the latest cumulative update installed. If you
deployed a guarded fabric before the release of the October 27, 2016 Cumulative Update, the servers need to be
upgraded:
Guarded hosts can be upgraded in-place by installing the latest Cumulative Update.
HGS servers need to be rebuilt, including configuring certificates and information about the hosts, as explained
in this topic.
Shielded VMs that ran on a guarded host with an earlier operating system version, such as TP5, can still run after
the host is upgraded to Windows Server 2016. New shielded VMs cannot be created from template disks that were
prepared using the template disk wizard from a Technical Preview build.
Obtain certificates for HGS
When you deploy HGS, you will be asked to provide signing and encryption certificates that are used to protect the
sensitive information needed to start up a shielded VM. These certificates never leave HGS, and are only used to
decrypt shielded VM keys when the host on which they're running has proven it is healthy. Tenants (VM owners)
use the public half of the certificates to authorize your datacenter to run their shielded VMs. This section covers the
steps required to obtain compatible signing and encryption certificates for HGS.
Request certificates from your certificate authority
While not required, it is strongly recommended that you obtain your certificates from a trusted certificate authority.
Doing so helps VM owners verify that they are authorizing the correct HGS server (i.e. service provider or
datacenter) to run their shielded VMs. In an enterprise scenario, you may choose to use your own enterprise CA to
issue these certs. Hosters and service providers should consider using a well-known, public CA instead.
Both the signing and encryption certificates must be issued with the following certificate properties (unless
marked "recommended"):
Crypto provider: Any Key Storage Provider (KSP). Legacy Cryptographic Service Providers (CSPs) are not supported.
Key algorithm: RSA
Minimum key size: 2048 bits
Signature algorithm: SHA256 (recommended)
Key usage: Digital signature and data encipherment
Enhanced key usage: Server authentication
Key renewal policy: Renew with the same key. Renewing HGS certificates with different keys will prevent shielded
VMs from starting up.
Subject name: Your company's name or web address (recommended). This information will be shown to VM owners
in the shielding data file wizard.
These requirements apply whether you are using certificates backed by hardware or software. For security reasons,
it is recommended that you create your HGS keys in a Hardware Security Module (HSM) to prevent the private
keys from being copied off the system. Follow the guidance from your HSM vendor to request certificates with the
above attributes and be sure to install and authorize the HSM KSP on every HGS node.
Every HGS node will require access to the same signing and encryption certificates. If you are using
software-backed certificates, you can export your certificates to a PFX file with a password and allow HGS to manage the
certificates for you. You can also choose to install the certs into the local machine's certificate store on each HGS
node and provide the thumbprint to HGS. Both options are explained in the Initialize the HGS Cluster topic.
Create self signed certificates for test scenarios
If you are creating an HGS lab environment and do not have or want to use a certificate authority, you can create
self-signed certificates. You will receive a warning when importing the certificate information in the shielding data
file wizard, but all functionality will remain the same.
To create self-signed certificates and export them to a PFX file, run the following commands in PowerShell:
$certificatePassword = Read-Host -AsSecureString -Prompt "Enter a password for the PFX file"
$signCert = New-SelfSignedCertificate -Subject "CN=HGS Signing Certificate"
Export-PfxCertificate -FilePath .\signCert.pfx -Password $certificatePassword -Cert $signCert
Remove-Item $signCert.PSPath
$encCert = New-SelfSignedCertificate -Subject "CN=HGS Encryption Certificate"
Export-PfxCertificate -FilePath .\encCert.pfx -Password $certificatePassword -Cert $encCert
Remove-Item $encCert.PSPath
Request an SSL certificate
All keys and sensitive information transmitted between Hyper-V hosts and HGS are encrypted at the message level
-- that is, the information is encrypted with keys known either to HGS or Hyper-V, preventing someone from
sniffing your network traffic and stealing keys to your VMs. However, if you have compliance requirements or
simply prefer to encrypt all communications between Hyper-V and HGS, you can configure HGS with an SSL
certificate, which encrypts all data at the transport level.
Both the Hyper-V hosts and HGS nodes will need to trust the SSL certificate you provide, so it is recommended that
you request the SSL certificate from your enterprise certificate authority. When requesting the certificate, be sure to
specify the following:
Subject name: The name of your HGS cluster (its distributed network name). This is the concatenation of the HGS service name provided to Initialize-HgsServer and your HGS domain name.
Subject alternative name: If you will be using a different DNS name to reach your HGS cluster (for example, if it is behind a load balancer), be sure to include those DNS names in the SAN field of your certificate request.
The options for specifying this certificate when initializing the HGS server are covered in Configure the first HGS
node. You can also add or change the SSL certificate at a later time using the Set-HgsServer cmdlet.
Choose whether to install HGS in its own new forest or in an existing
bastion forest
The Active Directory forest for HGS is sensitive because its administrators have access to the keys that control
shielded VMs. The default installation will set up a new forest for HGS and configure other dependencies. This
option is recommended because the environment is self-contained and known to be secure when it is created. For
the PowerShell syntax and an example, see install HGS in its own new forest.
The only technical requirement for installing HGS in an existing forest is that it be added to the root domain; non-root domains are not supported. But there are also operational requirements and security-related best practices for
using an existing forest. Suitable forests are purposely built to serve one sensitive function, such as the forest used
by Privileged Access Management for AD DS or an Enhanced Security Administrative Environment (ESAE) forest.
Such forests usually exhibit the following characteristics:
They have few admins (separate from fabric admins)
They have a low number of logons
They are not general-purpose in nature
General purpose forests such as production forests are not suitable for use by HGS. Fabric forests are also
unsuitable because HGS needs to be isolated from fabric administrators.
Requirements for adding HGS to an existing forest
To add HGS to an existing bastion forest, it must be added to the root domain. Before you initialize HGS, you will
need to join each target server of the HGS cluster to the root domain, and then add these objects:
A Group Managed Service Account (gMSA) that is configured for use on the machine(s) that host HGS.
Two Active Directory groups that you will use for Just Enough Administration (JEA). One group is for users
who can perform HGS administration through JEA, and the other is for users who can only view HGS
through JEA.
For setting up the cluster, either prestaged cluster objects or, for the user who runs Initialize-HgsServer,
permissions to prestage the cluster objects.
For the PowerShell syntax to add HGS to a forest, see initialize HGS in an existing bastion forest.
Configure name resolution
The fabric DNS needs to be configured so that guarded hosts can resolve the name of the HGS cluster. For an
example, see Configure the fabric DNS.
If you're using Active Directory-based attestation, the HGS domain needs DNS forwarding set up and a one-way
forest trust so HGS can validate the group membership of guarded hosts. For an example, see configuring DNS
forwarding and domain trust.
See also
Deploying the Host Guardian Service for guarded hosts and shielded VMs
Configuration steps for Hyper-V hosts that will become guarded hosts
Configure the first Host Guardian Service (HGS)
node
6/14/2017 • 12 min to read • Edit Online
Applies To: Windows Server 2016
The steps in this section guide you through setting up your first HGS node. You should perform these steps on a
physical server with Windows Server 2016 installed.
Add the HGS Server Role
Add the Host Guardian Service role by using Server Manager or by running the following command in an elevated
Windows PowerShell console:
Install-WindowsFeature -Name HostGuardianServiceRole -IncludeManagementTools -Restart
Install HGS in a new forest
The Host Guardian Service should be installed in a separate Active Directory forest from the Hyper-V hosts and
fabric managers. If a secure bastion forest is not already available in your environment, follow the steps in this
section to have HGS set up one for you. To install HGS in an existing bastion forest, skip to Install HGS in an
existing bastion forest.
Create a new forest for HGS
Ensure that the HGS machine is not joined to a domain before performing these steps.
1. In an elevated Windows PowerShell console, run the following commands to install the Host Guardian
Service and configure its domain. The password you specify here will only apply to the Directory Services
Restore Mode password for Active Directory; it will not change your admin account's login password. You
may provide any domain name of your choosing to the -HgsDomainName parameter.
$adminPassword = ConvertTo-SecureString -AsPlainText '<password>' -Force
Install-HgsServer -HgsDomainName 'relecloud.com' -SafeModeAdministratorPassword $adminPassword -Restart
2. After the computer restarts, log in as the domain administrator using the same password you previously
used as the local administrator (regardless of the password you specified in the previous step).
Initialize the HGS cluster
The following commands will complete the configuration of the first HGS node.
1. Determine a suitable distributed network name (DNN) for the HGS cluster. This name will be registered in
the HGS DNS service to make it easy for Hyper-V hosts to contact any node in the HGS cluster. As an
example, if you have 3 HGS nodes with hostnames HGS01, HGS02, and HGS03, respectively, you might
decide to choose "HGS" or "HgsCluster" for the DNN. Do not provide a fully qualified domain name to the
Initialize-HgsServer cmdlet (e.g. use "hgs" not "hgs.relecloud.com").
2. Locate your HGS guardian certificates, as detailed in Prepare for HGS. You will need one signing certificate and one
encryption certificate to initialize the HGS cluster. The easiest way to provide certificates to HGS is to create
a password-protected PFX file for each certificate which contains both the public and private keys. If you are
using HSM-backed keys or other non-exportable certificates, you must ensure the certificate is installed into
the local machine's certificate store before continuing.
3. Select an attestation mode for HGS: AD-trusted or TPM-trusted. If you do not have compatible hardware for
TPM attestation, you can start with AD-trusted attestation and switch to TPM attestation in the future when you obtain supported hardware.
4. Run the Initialize-HgsServer cmdlet in an elevated PowerShell window on the first HGS node. The syntax of
this cmdlet supports many different inputs, but the 2 most common invocations are below:
If you are using PFX files for your signing and encryption certificates, run the following commands:
$signingCertPass = Read-Host -AsSecureString -Prompt "Signing certificate password"
$encryptionCertPass = Read-Host -AsSecureString -Prompt "Encryption certificate password"
Initialize-HgsServer -HgsServiceName 'MyHgsDNN' -SigningCertificatePath '.\signCert.pfx' -SigningCertificatePassword $signingCertPass -EncryptionCertificatePath '.\encCert.pfx' -EncryptionCertificatePassword $encryptionCertPass -TrustTpm
If you are using non-exportable certificates that are installed in the local certificate store, run the
following command. If you do not know the thumbprints of your certificates, you can list available
certificates by running Get-ChildItem Cert:\LocalMachine\My .
Initialize-HgsServer -HgsServiceName 'MyHgsDNN' -SigningCertificateThumbprint '1A2B3C4D5E6F...' -EncryptionCertificateThumbprint '0F9E8D7C6B5A...' -TrustTpm
NOTE
Provide -TrustActiveDirectory to the command instead of -TrustTpm if you are using AD-trusted attestation.
5. If you provided any certificates to HGS using thumbprints, you will be instructed to grant HGS read access
to the private key of those certificates. On a server with the full Windows user interface, complete the
following steps:
a. Open the local computer certificate manager (certlm.msc)
b. Find the certificate(s) > right click > all tasks > manage private keys
c. Click Add
d. In the object picker window, click Object types and enable service accounts
e. Enter the name of the service account mentioned in the warning text from Initialize-HgsServer
f. Ensure the gMSA has "Read" access to the private key.
On Server Core, you will need to download the GuardedFabricTools PowerShell module to assist in setting the private key
permissions.
a. Run Install-Module GuardedFabricTools on the HGS server if it has Internet connectivity, or run
Save-Module GuardedFabricTools on another computer and copy the module over to the HGS server.
b. Run Import-Module GuardedFabricTools. This will add additional properties to certificate objects found in
PowerShell.
c. Find your certificate thumbprint in PowerShell with Get-ChildItem Cert:\LocalMachine\My
d. Update the ACL, replacing the thumbprint with your own and the gMSA account in the code below
with the account listed in the warning text of Initialize-HgsServer.
$certificate = Get-Item "Cert:\LocalMachine\My\1A2B3C..."
$certificate.Acl = $certificate.Acl | Add-AccessRule "HgsSvc_1A2B3C" Read Allow
If you are using HSM-backed certificates, or certificates stored in a third party key storage provider, these
steps may not apply to you. Consult your key storage provider's documentation to learn how to manage
permissions on your private key. In some cases, there is no authorization, or authorization is provided to
the entire computer when the certificate is installed.
6. If you initialized HGS in TPM mode, or intend to migrate to TPM mode in the future, follow the steps at the
end of this article to install trusted TPM root certificates on this and all additional HGS servers.
7. That's it! In a production environment, you should continue to add additional HGS nodes to your cluster. In
a test environment, you can skip to validating your HGS configuration.
Install HGS in an existing bastion forest
If your datacenter has a secure bastion forest where you want to join HGS nodes, follow the steps in this section.
These steps also apply if you want to configure 2 or more independent HGS clusters that are joined to the same
domain.
Join the HGS server to the existing domain
First, use Server Manager or Add-Computer to join your HGS servers to the desired domain.
Prepare Active Directory objects
To initialize HGS in an existing domain, you need to create a group managed service account and 2 security
groups. You can also pre-stage the cluster objects if the account you are initializing HGS with does not have
permission to create computer objects in the domain.
Group managed service account
To create the group managed service account, the identity used by HGS to retrieve and use its certificates, use
New-ADServiceAccount. If this is the first gMSA in the domain, you will need to add a Key Distribution Service root
key.
Each HGS node will need to be permitted to access the group managed service account password. The easiest way
to configure this is to create a security group that contains all of your HGS nodes and grant that security group
access to retrieve the gMSA password.
# Check if the KDS root key has been set up
if (-not (Get-KdsRootKey)) {
# Adds a KDS root key effective immediately (ignores normal 10 hour waiting period)
Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))
}
# Create a security group for HGS nodes
$hgsNodes = New-ADGroup -Name 'HgsServers' -GroupScope DomainLocal -PassThru
# Add your HGS nodes to this group
# If your HGS server object is under an organizational unit, provide the full distinguished name instead of "HGS01"
Add-ADGroupMember -Identity $hgsNodes -Members "HGS01"
# Create the gMSA
New-ADServiceAccount -Name 'HGSgMSA' -DnsHostName 'HGSgMSA.yourdomain.com' -PrincipalsAllowedToRetrieveManagedPassword $hgsNodes
NOTE
Group managed service accounts are available beginning with the Windows Server 2012 Active Directory schema. Consult
the Active Directory documentation for information on group managed service account requirements.
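As an optional sanity check (not part of the required steps), after an HGS node has been added to the group and rebooted, you can verify from that node that it is allowed to retrieve the gMSA password:
# Run on an HGS node that is a member of the HgsServers group (after a reboot)
Install-ADServiceAccount -Identity 'HGSgMSA'
Test-ADServiceAccount -Identity 'HGSgMSA'   # returns True when the node can use the gMSA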
JEA security groups
When you set up HGS, a Just Enough Administration (JEA) PowerShell endpoint is configured to allow admins to
manage HGS without needing full local administrator privileges. You are not required to use JEA to manage HGS,
but it still must be configured when running Initialize-HgsServer. The configuration of the JEA endpoint consists of
designating 2 security groups that contain your HGS admins and HGS reviewers. Users who belong to the admin
group can add, change, or remove policies on HGS; reviewers can only view the current configuration.
Create 2 security groups for these JEA groups using Active Directory admin tools or New-ADGroup.
New-ADGroup -Name 'HgsJeaReviewers' -GroupScope DomainLocal
New-ADGroup -Name 'HgsJeaAdmins' -GroupScope DomainLocal
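The groups are created empty; add the user accounts that should manage or review HGS through JEA. The account names below are placeholders:
# Placeholder user names; substitute your own HGS admins and reviewers
Add-ADGroupMember -Identity 'HgsJeaAdmins' -Members 'hgsAdmin01'
Add-ADGroupMember -Identity 'HgsJeaReviewers' -Members 'hgsAuditor01'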
Cluster objects
Lastly, if the account you are using to set up HGS does not have permission to create new computer objects in the
domain, you will need to pre-stage the cluster objects. These steps are explained in the Prestage Cluster Computer
Objects in Active Directory Domain Services article.
To set up your first HGS node, you will need to create one Cluster Name Object (CNO) and one Virtual Computer
Object (VCO). The CNO represents the name of the cluster, and is primarily used internally by Failover Clustering.
The VCO represents the HGS service that resides on top of the cluster and will be the name registered with the
DNS server.
To quickly prestage your CNO and VCO, have an Active Directory admin run the following PowerShell commands:
# Create the CNO
$cno = New-ADComputer -Name 'HgsCluster' -Description 'HGS CNO' -Enabled $false -Passthru
# Create the VCO
$vco = New-ADComputer -Name 'HgsService' -Description 'HGS VCO' -Passthru
# Give the CNO full control over the VCO
$vcoPath = Join-Path "AD:\" $vco.DistinguishedName
$acl = Get-Acl $vcoPath
$ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $cno.SID, "GenericAll", "Allow"
$acl.AddAccessRule($ace)
Set-Acl -Path $vcoPath -AclObject $acl
Initialize HGS in the bastion forest
When setting up HGS in an existing forest, you should not run Install-HgsServer or promote the HGS server to a
domain controller for that domain. Active Directory Domain Services will be installed on the machine, but should
remain unconfigured.
Now you are ready to initialize the HGS server.
If you are using PFX-based certificates, run the following commands on the HGS server:
$signingCertPass = Read-Host -AsSecureString -Prompt "Signing certificate password"
$encryptionCertPass = Read-Host -AsSecureString -Prompt "Encryption certificate password"
Install-ADServiceAccount -Identity 'HGSgMSA'
Initialize-HgsServer -UseExistingDomain -ServiceAccount 'HGSgMSA' -JeaReviewersGroup 'HgsJeaReviewers' -JeaAdministratorsGroup 'HgsJeaAdmins' -HgsServiceName 'HgsService' -SigningCertificatePath 'C:\temp\SigningCert.pfx' -SigningCertificatePassword $signingCertPass -EncryptionCertificatePath 'C:\temp\EncryptionCert.pfx' -EncryptionCertificatePassword $encryptionCertPass -TrustTpm
NOTE
Provide -TrustActiveDirectory instead of -TrustTpm if you are using AD-trusted attestation. Additionally, if you prestaged
cluster objects, ensure that the virtual computer object name matches the value provided to -HgsServiceName .
If you are using certificates installed on the local machine (such as HSM-backed certificates and non-exportable
certificates), use the -SigningCertificateThumbprint and -EncryptionCertificateThumbprint parameters instead, as
documented in the cmdlet help.
Install trusted TPM root certificates
If you chose TPM mode, or expect to migrate to TPM mode in the future, you need to install the root certificates used to
issue the endorsement key in each host's TPM module. These root certificates are different from those installed by
default in Windows and represent the specific root and intermediate certificates used by TPM vendors. The
following steps help you install certificates for TPMs produced by Microsoft's partners. Some TPMs do not use
endorsement key certificates or use certificates not included in this package. Consult your TPM vendor or server
OEM for further assistance in these cases.
1. Download the latest package from http://tpmsec.microsoft.com/OnPremisesDHA/TrustedTPM.cab.
2. Validate that the package is digitally signed by Microsoft. Do not expand the CAB file if it does not have a
valid signature.
Get-AuthenticodeSignature -FilePath <Path-To-TrustedTpm.cab>
3. Expand the cab file.
mkdir .\TrustedTPM
expand.exe -F:* <Path-To-TrustedTpm.cab> .\TrustedTPM
4. By default, the configuration script will install certificates for every TPM vendor. If you only want to import
certificates for your specific TPM vendor, delete the folders for TPM vendors not trusted by your
organization.
5. Install the trusted certificate package by running the setup script in the expanded folder.
cd .\TrustedTPM
.\setup.cmd
The TrustedTpm.cab file is updated regularly with new vendor certificates as they become available. To add new
certificates or ones intentionally skipped during an earlier installation, simply repeat the above steps on every
node in your HGS cluster. Existing certificates will remain trusted but new certificates found in the expanded cab
file will be added to the trusted TPM stores.
Configuring HGS for HTTPS communications
By default, when you initialize the HGS server, it configures the IIS websites for HTTP-only communications. All
sensitive material transmitted to and from HGS (including the encryption keys for the VMs) is always
encrypted using message-level encryption. However, if you want a higher level of security, you can also enable
HTTPS by configuring HGS with an SSL certificate.
First, obtain an SSL certificate for HGS from your certificate authority. Each Hyper-V host will need to trust the SSL
certificate, so it is recommended that you issue the SSL certificate from your company's public key infrastructure
or a third party CA. Any SSL certificate supported by IIS is supported by HGS, however the subject name on the
certificate must match the fully qualified HGS service name (cluster distributed network name). For instance,
if the HGS domain is "secure.contoso.com" and your HGS service name is "hgs", your SSL certificate should be
issued for "hgs.secure.contoso.com". You can add additional DNS names to the certificate's subject alternative
name field if necessary.
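For a lab environment only, you could create a self-signed SSL certificate that matches the HGS service name. The sketch below assumes the example name hgs.secure.contoso.com used above; in production, request the certificate from a CA instead.
# Lab-only sketch: self-signed SSL certificate for the example name hgs.secure.contoso.com
$sslPassword = Read-Host -AsSecureString -Prompt "Enter a password for the PFX file"
$sslCert = New-SelfSignedCertificate -DnsName 'hgs.secure.contoso.com' -CertStoreLocation Cert:\LocalMachine\My
Export-PfxCertificate -FilePath .\HgsSSLCertificate.pfx -Password $sslPassword -Cert $sslCert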
Once you have the SSL certificate, you can either provide the certificate to the Initialize-HgsServer cmdlet if you
haven't already run it, or use Set-HgsServer if you've already initialized HGS.
If you haven't already initialized HGS
Append the following SSL-related parameters to the Initialize-HgsServer command from the Initialize the HGS
cluster or Initialize HGS in the bastion forest sections.
$sslPassword = Read-Host -AsSecureString -Prompt "SSL Certificate Password"
Initialize-HgsServer <OtherParametersHere> -Http -Https -HttpsCertificatePath 'C:\temp\HgsSSLCertificate.pfx' -HttpsCertificatePassword $sslPassword
If your certificate is installed in the local certificate store and cannot be exported to a PFX file with the private key
intact, you can provide the SSL certificate by its thumbprint instead:
Initialize-HgsServer <OtherParametersHere> -Http -Https -HttpsCertificateThumbprint 'A1B2C3D4E5F6...'
If you've already initialized HGS
Run Set-HgsServer to configure the new SSL certificate. This step must be repeated on every HGS node in your
cluster.
$sslPassword = Read-Host -AsSecureString -Prompt "SSL Certificate Password"
Set-HgsServer -Http -Https -HttpsCertificatePath 'C:\temp\HgsSSLCertificate.pfx' -HttpsCertificatePassword $sslPassword
Or, if you have already installed the certificate into the local certificate store and want to reference it by
thumbprint:
Set-HgsServer -Http -Https -HttpsCertificateThumbprint 'A1B2C3D4E5F6...'
IMPORTANT
Configuring HGS with an SSL certificate does not disable the HTTP endpoint. If you wish to only allow use of the HTTPS
endpoint, configure Windows Firewall to block inbound connections to port 80. Do not modify the IIS bindings for HGS
websites to remove the HTTP endpoint; it is unsupported to do so.
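If you decide to block the HTTP endpoint at the firewall, a rule along these lines (the display name is arbitrary; adjust the scope to your environment) can be created on each HGS node:
# Block inbound HTTP to HGS at the host firewall; leave the IIS binding itself in place
New-NetFirewallRule -DisplayName 'Block HGS HTTP (port 80)' -Direction Inbound -Protocol TCP -LocalPort 80 -Action Block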
Next steps
Configure additional HGS nodes
Validate the HGS configuration
Configure additional HGS nodes
6/7/2017 • 5 min to read • Edit Online
Applies To: Windows Server 2016
In production environments, HGS should be set up in a high availability cluster to ensure that shielded VMs can be
powered on even if an HGS node goes down. For test environments, secondary HGS nodes are not required.
The following steps will add a node to the HGS cluster.
Prerequisites
Before you can add an additional node to your HGS cluster, ensure the following prerequisites are met:
1. The additional node has the same hardware and software configuration as the primary node. This is a
failover clustering best practice.
2. The additional node is connected to the same network as the other HGS servers, and can resolve the other
HGS servers by their DNS names.
3. The Host Guardian Service role is installed on the machine. To install the role, run the following command
in an elevated PowerShell console:
Install-WindowsFeature -Name HostGuardianServiceRole -IncludeManagementTools -Restart
Join the node to the HGS domain
Your new HGS node must be joined to the same domain as your first HGS node. If you used the
Install-HgsServer command to set up a new domain for your first HGS node, follow the steps under Promote to
a domain controller in the HGS forest. If you joined the first HGS node to an existing bastion forest, skip to Join an
existing bastion forest.
Promote to a domain controller in the HGS forest
If you set up a new forest for your Host Guardian Service using Install-HgsServer on the primary node, you must
promote your additional HGS servers to domain controllers using the same command.
1. Ensure at least one NIC on the node is configured to use the DNS server on your first HGS server for name
resolution. This is necessary for the machine to resolve and join the domain.
2. Run Install-HgsServer to join the domain and promote the node to a domain controller.
$adSafeModePassword = ConvertTo-SecureString -AsPlainText '<password>' -Force
$cred = Get-Credential 'relecloud\Administrator'
Install-HgsServer -HgsDomainName 'relecloud.com' -HgsDomainCredential $cred -SafeModeAdministratorPassword $adSafeModePassword -Restart
3. When the server reboots, log in with a domain administrator account and follow the steps to Initialize HGS
on the node.
Join an existing bastion forest
If you joined your first Host Guardian Service node to an existing bastion forest, do not run Install-HgsServer . It
is unsupported to make HGS a domain controller when you are joining an existing bastion forest (note that the
Active Directory Domain Services role will be installed; it will just remain unconfigured).
1. Ensure at least one NIC on the node is configured to use the DNS server for your existing domain. This is
necessary for the machine to resolve and join the domain.
2. Join the computer to a domain using Server Manager or Add-Computer.
3. Reboot the computer to complete the domain join process.
4. Have a directory services admin add the computer account for your new node to the security group that
contains all of your HGS servers and is permitted to retrieve the HGS gMSA account password.
5. Reboot the new node to obtain a new Kerberos ticket that includes the computer's membership in that
security group. After the reboot completes, sign in with a domain identity that belongs to the local
administrators group on the computer.
6. Install the HGS group managed service account on the node.
Install-ADServiceAccount -Identity <HGSgMSAAccount>
7. Continue installation by following the steps to Initialize HGS on the node.
Initialize HGS on the node
With HGS joined to the domain, sign in with a domain account that has local administrator privileges. Open an
elevated PowerShell prompt and run the following command to join the existing HGS cluster.
Initialize-HgsServer -HgsServerIPAddress <IP address of first HGS Server>
It will take up to 10 minutes for the encryption and signing certificates from the first HGS server to replicate to this
node. If you did not provide a PFX file for either the encryption or signing certificates on the first HGS server, only
the public key will be replicated to this server. You will need to install the private key by importing a PFX file
containing the private key into the local certificate store or, in the case of HSM-backed keys, configuring the Key
Storage Provider and associating it with your certificates per your HSM manufacturer's instructions.
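If you need to supply the private keys on this node yourself, importing the PFX files into the local machine store looks roughly like the following; the file names are placeholders for wherever you stored the PFX files:
# Placeholder file names; repeat for both the signing and the encryption certificate
$pfxPassword = Read-Host -AsSecureString -Prompt "PFX password"
Import-PfxCertificate -FilePath '.\signCert.pfx' -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword
Import-PfxCertificate -FilePath '.\encCert.pfx' -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword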
Install trusted TPM root certificates
If you initialized your first HGS server in TPM mode, or expect to migrate to TPM mode in the future, you will need
to install root certificates used to issue the endorsement key in hosts' TPM modules. These root certificates are
different from those installed by default in Windows and represent the specific root and intermediate certificates
used by TPM vendors. The steps below will help you install certificates for TPMs produced by Microsoft's partners.
Some TPMs do not use endorsement key certificates or use certificates not included in this package. Consult your
TPM vendor or server OEM for further assistance in these cases.
1. Download the latest package from http://tpmsec.microsoft.com/OnPremisesDHA/TrustedTPM.cab.
2. Change directory to the Downloads folder (or saved location), and validate that the package is digitally
signed by Microsoft. Do not expand the CAB file if it does not have a valid signature.
Get-AuthenticodeSignature -FilePath <Path-To-TrustedTpm.cab>
3. Expand the cab file in the same folder.
mkdir .\TrustedTPM
expand.exe -F:* .\TrustedTPM.cab .\TrustedTPM
4. By default, the configuration script will install certificates for every TPM vendor. If you only want to import
certificates for your specific TPM vendor, delete the folders for TPM vendors not trusted by your
organization.
5. Install the trusted certificate package by running the setup script in the expanded folder.
cd .\TrustedTPM
.\setup.cmd
The TrustedTpm.cab file is updated regularly with new vendor certificates as they become available. To add new
certificates or ones intentionally skipped during an earlier installation, simply repeat the above steps on every
node in your HGS cluster. Existing certificates will remain trusted but new certificates found in the expanded cab
file will be added to the trusted TPM stores.
Configure HGS for HTTPS communications
If you want to secure HGS endpoints with an SSL certificate, you must configure the SSL certificate on this node,
as well as every other node in the HGS cluster. SSL certificates are not replicated by HGS and do not need to use
the same keys for every node (i.e. you can have different SSL certs for each node).
When requesting an SSL cert, ensure the cluster fully qualified domain name (as shown in the output of
Get-HgsServer ) is either the subject common name of the cert, or included as a subject alternative DNS name.
When you've obtained a certificate from your certificate authority, you can configure HGS to use it with Set-HgsServer.
$sslPassword = Read-Host -AsSecureString -Prompt "SSL Certificate Password"
Set-HgsServer -Http -Https -HttpsCertificatePath 'C:\temp\HgsSSLCertificate.pfx' -HttpsCertificatePassword $sslPassword
If you already installed the certificate into the local certificate store and want to reference it by thumbprint, run the
following command instead:
Set-HgsServer -Http -Https -HttpsCertificateThumbprint 'A1B2C3D4E5F6...'
HGS will always expose both the HTTP and HTTPS ports for communication. It is unsupported to remove the HTTP
binding in IIS, however you can use the Windows Firewall or other network firewall technologies to block
communications over port 80.
Next steps
Validate the HGS configuration
Verify the HGS configuration
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
Next, we need to validate that things are working as expected. To do so, run the following command in an elevated
Windows PowerShell console:
Get-HgsTrace -RunDiagnostics
Because the HGS configuration does not yet contain information about the hosts that will be in the guarded fabric,
the diagnostics will indicate that no hosts will be able to attest successfully yet. Ignore this result, and review the
other information provided by the diagnostics.
NOTE
When running the Guarded Fabric diagnostics tool (Get-HgsTrace -RunDiagnostics), incorrect status may be returned
claiming that the HTTPS configuration is broken when it is, in fact, not broken or not being used. This error can be returned
regardless of HGS’ attestation mode. The possible root-causes are as follows:
HTTPS is indeed improperly configured/broken
You’re using admin-trusted attestation and the trust relationship is broken
- This is irrespective of whether HTTPS is configured properly, improperly, or not in use at all.
Note that the diagnostics will only return this incorrect status when targeting a Hyper-V host. If the diagnostics are
targeting the Host Guardian Service, the status returned will be correct.
Run the diagnostics on each node in your HGS cluster.
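If PowerShell remoting is enabled between your nodes, one way to run the diagnostics across the whole cluster at once (the node names below are placeholders) is:
# Placeholder node names; requires PowerShell remoting to each HGS node
Invoke-Command -ComputerName HGS01, HGS02, HGS03 -ScriptBlock { Get-HgsTrace -RunDiagnostics }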
Deploy guarded hosts
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
The topics in this section describe the steps that a fabric administrator takes to configure Hyper-V hosts to work
with the Host Guardian Service (HGS). Before you can start these steps, at least one node in the HGS cluster must
be set up.
For Admin-trusted attestation:
1. Configure the fabric DNS: Tells how to set up a DNS forwarder from the fabric domain to the HGS domain.
2. Create a security group: Tells how to set up an Active Directory security group in the fabric domain, add
guarded hosts as members of that group, and provide that group identifier to the HGS administrator.
3. Confirm guarded hosts can attest
For TPM-trusted attestation:
1. Configure the fabric DNS: Tells how to set up a DNS forwarder from the fabric domain to the HGS domain.
2. Capture information required by HGS: Tells how to capture TPM identifiers (also called platform identifiers),
create a Code Integrity policy, and create a TPM baseline. Then you will provide this information to the HGS
administrator to configure attestation.
3. Confirm guarded hosts can attest
See also
Deployment tasks for guarded fabrics and shielded VMs
Configure the fabric DNS for guarded hosts
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
A fabric administrator needs to configure the fabric DNS so that guarded hosts can resolve the name of the HGS cluster.
The HGS cluster must already be set up by the HGS administrator.
There are many ways to configure name resolution for the fabric domain. One simple way is to set up a
conditional forwarder zone in DNS for the fabric. To set up this zone, run the following commands in an elevated
Windows PowerShell console on a fabric DNS server. Substitute the names and addresses in the Windows
PowerShell syntax below as needed for your environment. Add master servers for the additional HGS nodes.
Add-DnsServerConditionalForwarderZone -Name <HGS domain name> -ReplicationScope "Forest" -MasterServers <IP addresses of HGS server>
Next step
The next step is to capture information from the hosts and add it to HGS. How you do this depends on which
attestation mode you are using:
Admin-trusted attestation: Create an Active Directory security group in the fabric domain, add guarded hosts as members, and provide that group identifier to the HGS admin. See Create a security group for guarded hosts.
TPM-trusted attestation: Capture TPM identifiers (also called platform identifiers), create a TPM baseline, and create a Code Integrity policy. Provide those artifacts to the HGS admin. See Capturing TPM-mode information required by HGS.
See also
Configuration steps for Hyper-V hosts that will become guarded hosts
Deployment tasks for guarded fabrics and shielded VMs
Create a security group for guarded hosts
6/12/2017 • 4 min to read • Edit Online
This topic describes the intermediate steps that a fabric administrator takes to prepare Hyper-V hosts to become
guarded hosts using Admin-trusted attestation (AD mode). Before taking these steps, complete the steps in
Configuring the fabric DNS for hosts that will become guarded hosts.
For a video that illustrates the deployment process, see Guarded fabric deployment using AD mode.
Prerequisites
Hyper-V hosts must meet the following prerequisites for AD mode:
Hardware: Any server capable of running Hyper-V in Windows Server 2016. One host is required for initial
deployment. To test Hyper-V live migration for shielded VMs, you need at least two hosts.
Operating system: Windows Server 2016 Datacenter edition
IMPORTANT
Install the latest cumulative update.
Role and features: Hyper-V role and the Host Guardian Hyper-V Support feature, which is only available in
Windows Server 2016 Datacenter edition.
WARNING
The Host Guardian Hyper-V Support feature enables Virtualization-based protection of code integrity that may be
incompatible with some devices. We strongly recommend testing this configuration in your lab before enabling this feature.
Failure to do so may result in unexpected failures up to and including data loss or a blue screen error (also called a stop
error). For more information, see Compatible hardware with Windows Server 2016 Virtualization-based protection of Code
Integrity.
Create a security group and add hosts
1. Create a new GLOBAL security group in the fabric domain and add Hyper-V hosts that will run shielded
VMs. Restart the hosts to update their group membership.
2. Use Get-ADGroup to obtain the security identifier (SID) of the security group and provide it to the HGS
administrator.
Get-ADGroup "Guarded Hosts"
Configure Nano Server as a guarded host
There are a few different ways to deploy Nano Servers. This section follows the Nano Server Quick Start. In a
nutshell, you will create a VHDX file using the New-NanoServerImage cmdlet, and on a physical computer, modify
the boot entry to set the default boot option to the VHDX file. All the management of the Nano server is done
through Windows PowerShell remoting.
NOTE
Nano Server can only run shielded VMs in a guarded fabric; it cannot be used to run shielded VMs without the use of HGS
to unlock the virtual TPM devices. This means that you cannot use Nano Server to create a new shielded VM on-premises
and move it to a guarded fabric, because Nano Server does not support the use of local key material to start up shielded
VMs.
Creating a Nano Server image
These steps are from Nano Server Quick Start:
1. Copy the NanoServerImageGenerator folder from the \NanoServer folder in the Windows Server 2016
ISO to a folder on your hard drive.
2. Start Windows PowerShell as an administrator, change directory to the folder where you have placed the
NanoServerImageGenerator folder (e.g. c:\NanoServer\NanoServerImageGenerator) and then run:
Import-Module .\NanoServerImageGenerator -Verbose
3. Create the Nano Server image using the following set of commands:
$mediapath = <location of the mounted ISO image, e.g. E:\>
New-NanoServerImage -MediaPath $mediapath -TargetPath <nanoVHD path> -ComputerName "<nanoserver name>" -OEMDrivers -Compute -DeploymentType Host -Edition Datacenter -Packages Microsoft-NanoServer-SecureStartup-Package, Microsoft-NanoServer-ShieldedVM-Package -EnableRemoteManagementPort -DomainName <Domain Name>
The command will complete in a few minutes.
When you specify the domain name, the Nano Server will be joined to the defined domain. To do this, the machine
you are running the cmdlet from should already be joined to the same domain. This allows the cmdlet to
harvest a domain blob from the local computer. If you are planning to use TPM mode, you are not required to join
the Nano Server to a domain. For AD mode, the Nano Server must be joined to a domain.
Configuring the boot menu
After the Nano Server image is created, you will need to set it as the default boot OS.
Copy the Nano Server VHDX file to the physical computer and configure it to boot from that computer:
1. In File Explorer, right-click the generated VHDX and select Mount. In this example, it is mounted under D:\.
2. Run:
bcdboot d:\windows
3. Right-click the VHDX file and select Eject to unmount it.
Restart the computer to boot into Nano Server. Now the Nano Server is ready.
Getting the IP address
You will need to get the Nano Server IP address or use the server name for management.
Log on to the Recovery Console, using the administrator account and password you supplied to build the
Nano image.
Obtain the IP address of the Nano Server computer and use Windows PowerShell remoting or other
remote management tools to connect to and remotely manage the Nano Server.
Remote management
To run commands on Nano Server such as Get-PlatformIdentifier, you need to connect to the Nano Server from a
separate management server running an operating system with a graphical user interface using Windows
PowerShell Remoting.
To enable the management server to start the Windows PowerShell remoting session:
Set-Item WSMan:\localhost\Client\TrustedHosts <nano server ip address or name>
This cmdlet just needs to be run once on the management server.
To start a remote session:
Enter-PSSession -ComputerName <nano server name or ip address> -Credential <nano server>\administrator
Occasionally, you might get an access denied error when running the preceding cmdlets. You can reset the WinRM
settings and firewall rules on the Nano Server by logging on to the Nano Server recovery console and selecting
the WinRM option.
Next step
Confirm hosts can attest successfully
See also
Deploying the Host Guardian Service for guarded hosts and shielded VMs
Capture TPM-mode information required by HGS
6/12/2017 • 10 min to read • Edit Online
Applies To: Windows Server 2016
This topic describes the intermediate steps that a fabric administrator takes to prepare Hyper-V hosts to become
guarded hosts using TPM-trusted attestation (TPM mode). Before taking these steps, complete the steps in
Configuring the fabric DNS for hosts that will become guarded hosts.
For a video that illustrates the deployment process, see Guarded fabric deployment using TPM mode.
Prerequisites
Guarded hosts using TPM mode must meet the following prerequisites:
Hardware: One host is required for initial deployment. To test Hyper-V live migration for shielded VMs, you
must have at least two hosts.
Hosts must have:
IOMMU and Second Level Address Translation (SLAT)
TPM 2.0
UEFI 2.3.1 or later
Configured to boot using UEFI (not BIOS or "legacy" mode)
Secure boot enabled
Operating system: Windows Server 2016 Datacenter edition
IMPORTANT
Make sure you install the latest cumulative update.
Role and features: Hyper-V role and the Host Guardian Hyper-V Support feature. The Host Guardian
Hyper-V Support feature is only available on Datacenter editions of Windows Server 2016. Special
instructions for Nano Server are near the end of this topic.
WARNING
The Host Guardian Hyper-V Support feature enables Virtualization-based protection of code integrity that may be
incompatible with some devices. We strongly recommend testing this configuration in your lab before enabling this feature.
Failure to do so may result in unexpected failures up to and including data loss or a blue screen error (also called a stop
error). For more information, see Compatible hardware with Windows Server 2016 Virtualization-based protection of Code
Integrity.
Capture hardware and software information
TPM mode uses a TPM identifier (also called a platform identifier or endorsement key [EKpub]) to begin
determining whether a particular host is authorized as "guarded." This mode of attestation uses secure boot and
code integrity measurements to ensure that a given Hyper-V host is in a healthy state and is running only trusted
code. In order for attestation to understand what is and is not healthy, you must capture the following information:
1. TPM identifier (EKpub)
This information is unique to each Hyper-V host
2. TPM baseline (boot measurements)
This is applicable to all Hyper-V hosts that run on the same class of hardware
3. Code Integrity policies (a white list of allowed binaries)
This is applicable to all Hyper-V hosts that share both common hardware and software
We recommend that you capture the baseline and CI policies from a "reference host" that is representative of each
unique class of Hyper-V hardware configuration within your datacenter.
Capture the TPM identifier (platform identifier or EKpub) for each host
1. Ensure the TPM on each host is ready for use - that is, the TPM is initialized and ownership obtained. You
can check the status of the TPM by opening the TPM Management Console (tpm.msc) or by running Get-Tpm in an elevated Windows PowerShell window. If your TPM is not in the Ready state, you will need to
initialize it and set its ownership. This can be done in the TPM Management Console or by running
Initialize-Tpm.
2. On each guarded host, run the following command in an elevated Windows PowerShell console to obtain
its EKpub. For <HostName>, substitute something suitable to identify this host; this can be its hostname or the name used by a fabric inventory service (if available). For convenience, name
the output file using the host's name.
(Get-PlatformIdentifier -Name '<HostName>').InnerXml | Out-file <Path><HostName>.xml -Encoding UTF8
3. Repeat the preceding steps for each host that will become a guarded host, being sure to give each XML file
a unique name.
4. Provide the resulting XML files to the HGS administrator, who can register each host's TPM identifier with
HGS.
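If you have many hosts and PowerShell remoting is available to them, you could collect the identifiers centrally with a sketch like the following; the host names and output folder are placeholders:
# Placeholder host names and output folder; assumes PowerShell remoting to each Hyper-V host
$hyperVHosts = 'HyperV01', 'HyperV02'
foreach ($hostName in $hyperVHosts) {
    $ekpub = Invoke-Command -ComputerName $hostName -ScriptBlock {
        (Get-PlatformIdentifier -Name $env:COMPUTERNAME).InnerXml
    }
    $ekpub | Out-File "C:\temp\$hostName.xml" -Encoding UTF8
}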
Create and apply a Code Integrity policy
A Code Integrity policy helps ensure that only the executables you trust to run on a host are allowed to run.
Malware and other executables outside the trusted executables are prevented from running.
Each guarded host must have a code integrity policy applied in order to run shielded VMs in TPM mode. You
specify the exact code integrity policies you trust by adding them to HGS. Code integrity policies can be configured
to enforce the policy, blocking any software that does not comply with the policy, or simply audit (log an event
when software not defined in the policy is executed). It is recommended that you first create the CI policy in audit
(logging) mode to see if it's missing anything, then enforce the policy for host production workloads. For more
information about generating CI policies and the enforcement mode, see:
Planning and getting started on the Device Guard deployment process
Deploy Device Guard: deploy code integrity policies
Before you can use the New-CIPolicy cmdlet to generate a Code Integrity policy, you will need to decide the rule
levels to use. For Server Core, we recommend a primary level of FilePublisher with fallback to Hash. This allows
files with publishers to be updated without changing the CI policy. Addition of new files or modifications to files
without publishers (which are measured with a hash) will require you to create a new CI policy matching the new
system requirements. For Server with Desktop Experience, we recommend a primary level of Publisher with
fallback to Hash. For more information about the available CI policy rule levels, see Deploy code integrity policies:
policy rules and file rules and cmdlet help.
1. On the reference host, generate a new code integrity policy. The first command below creates a policy at the
FilePublisher level with fallback to Hash; the second converts the XML file to the binary file format that Windows
and HGS need to apply and measure the CI policy, respectively.
New-CIPolicy -Level FilePublisher -Fallback Hash -FilePath 'C:\temp\HW1CodeIntegrity.xml' -UserPEs
ConvertFrom-CIPolicy -XmlFilePath 'C:\temp\HW1CodeIntegrity.xml' -BinaryFilePath 'C:\temp\HW1CodeIntegrity.p7b'
Note The above command creates a CI policy in audit mode only. It will not block unauthorized
binaries from running on the host. You should only use enforced policies in production.
2. Keep your Code Integrity policy file (XML file) where you can easily find it. You will need to edit this file later
to enforce the CI policy or merge in changes from future updates made to the system.
3. Apply the CI policy to your reference host:
a. Copy the binary CI policy file (HW1CodeIntegrity.p7b) to the following location on your reference
host (the filename must exactly match):
C:\Windows\System32\CodeIntegrity\SIPolicy.p7b
b. Restart the host to apply the policy.
4. Test the code integrity policy by running a typical workload. This may include running VMs, any fabric
management agents, backup agents, or troubleshooting tools on the machine. Check whether there are any code
integrity violations (see the event log sketch after this procedure) and update your CI policy if necessary.
5. Change your CI policy to enforced mode by running the following commands against your updated CI
policy XML file.
Set-RuleOption -FilePath 'C:\temp\HW1CodeIntegrity.xml' -Option 3 -Delete
ConvertFrom-CIPolicy -XmlFilePath 'C:\temp\HW1CodeIntegrity.xml' -BinaryFilePath 'C:\temp\HW1CodeIntegrity_enforced.p7b'
6. Apply the CI policy to all of your hosts (with identical hardware and software configuration) using the
following commands:
Copy-Item -Path '<Path to HW1CodeIntegrity_enforced.p7b>' -Destination 'C:\Windows\System32\CodeIntegrity\SIPolicy.p7b'
Restart-Computer
Note Be careful when applying CI policies to hosts and when updating any software on these
machines. Any kernel mode drivers that are non-compliant with the CI Policy may prevent the machine
from starting up. For best practices regarding CI policies, see Planning and getting started on the Device
Guard deployment process and Deploy Device Guard: deploy code integrity policies.
7. Provide the binary file (in this example, HW1CodeIntegrity_enforced.p7b) to the HGS administrator, who
can register the CI policy with HGS.
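Step 4 above suggests checking for code integrity violations while the policy is still in audit mode. One way to look for them is to query the CodeIntegrity operational event log, sketched below; the event IDs shown (3076 for audit events, 3077 for enforced blocks) are the commonly documented ones, so verify them against your environment.
# List recent code integrity events that indicate a binary was (or would have been) blocked
Get-WinEvent -LogName 'Microsoft-Windows-CodeIntegrity/Operational' |
    Where-Object { $_.Id -in 3076, 3077 } |
    Select-Object TimeCreated, Id, Message -First 20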
Capture the TPM baseline for each unique class of hardware
A TPM baseline is required for each unique class of hardware in your datacenter fabric. Use a "reference host"
again.
1. On the reference host, make sure that the Hyper-V role and the Host Guardian Hyper-V Support feature are
installed.
Warning The Host Guardian Hyper-V Support feature enables Virtualization-based protection of code
integrity that may be incompatible with some devices. We strongly recommend testing this
configuration in your lab before enabling this feature. Failure to do so may result in unexpected failures
up to and including data loss or a blue screen error (also called a stop error).
Install-WindowsFeature Hyper-V, HostGuardian -IncludeManagementTools -Restart
2. To capture the baseline policy, run the following command in an elevated Windows PowerShell console.
Get-HgsAttestationBaselinePolicy -Path 'HWConfig1.tcglog'
Note You will need to use the -SkipValidation flag if the reference host does not have Secure Boot
enabled, an IOMMU present, Virtualization Based Security enabled and running, or a code integrity
policy applied. These validations are designed to make you aware of the minimum requirements of
running a shielded VM on the host. Using the -SkipValidation flag does not change the output of the
cmdlet; it merely silences the errors.
3. Provide the TPM baseline (TCGlog file) to the HGS administrator so they can register it as a trusted baseline.
Configure Nano Server as a guarded host
There are a few different ways to deploy Nano Servers. This section follows the Nano Server Quick Start. In a
nutshell, you will create a VHDX file using the New-NanoServerImage cmdlet, and on a physical computer, modify
the boot entry to set the default boot option to the VHDX file. All the management of the Nano server is done
through Windows PowerShell remoting.
NOTE
Nano Server can only run shielded VMs in a guarded fabric; it cannot be used to run shielded VMs without the use of HGS
to unlock the virtual TPM devices. This means that you cannot use Nano Server to create a new shielded VM on-premises
and move it to a guarded fabric, because Nano Server does not support the use of local key material to start up shielded
VMs.
Creating a Nano Server image
These steps are from Nano Server Quick Start:
1. Copy the NanoServerImageGenerator folder from the \NanoServer folder in the Windows Server 2016
ISO to a folder on your hard drive.
2. Start Windows PowerShell as an administrator, change directory to the folder where you have placed the
NanoServerImageGenerator folder (e.g. c:\NanoServer\NanoServerImageGenerator) and then run:
Import-Module .\NanoServerImageGenerator -Verbose
3. Create the Nano Server image using the following set of commands:
$mediapath = <location of the mounted ISO image, e.g. E:\>
New-NanoServerImage -MediaPath $mediapath -TargetPath <nanoVHD path> -ComputerName "<nanoserver name>" -OEMDrivers -Compute -DeploymentType Host -Edition Datacenter -Packages Microsoft-NanoServer-SecureStartup-Package, Microsoft-NanoServer-ShieldedVM-Package -EnableRemoteManagementPort -DomainName <Domain Name>
The command will complete in a few minutes.
When you specify the domain name, the Nano Server will be joined to the defined domain. To do this, the machine
you are running the cmdlet from should already be joined to the same domain. This allows the cmdlet to
harvest a domain blob from the local computer. If you are planning to use TPM mode, you are not required to join
the Nano Server to a domain. For AD mode, the Nano Server must be joined to a domain.
Creating a code integrity policy
If you are using TPM mode, you can only create the code integrity policy against an offline Nano Server image.
Mount the Nano Server VHDX and run New-CIPolicy in a command like the following from the management
server. This example uses G: as the mounted drive.
New-CIPolicy -FilePath .\NanoCI.xml -Level FilePublisher -Fallback Hash -ScanPath G:\ -PathToCatroot G:\Windows\System32\CatRoot\ -UserPEs
Note The CI policy will need to be added to the HGS Server.
Configuring the boot menu
After the Nano Server image is created, you will need to set it as the default boot OS.
Copy the Nano Server VHDX file to the physical computer and configure it to boot from that computer:
1. In File Explorer, right-click the generated VHDX and select Mount. In this example, it is mounted under D:\.
2. Run:
bcdboot d:\windows
3. Right-click the VHDX file and select Eject to unmount it.
Restart the computer to boot into Nano Server. Now the Nano Server is ready.
Getting the IP address
You will need to get the Nano Server IP address or use the server name for management.
Log on to the Recovery Console, using the administrator account and password you supplied to build the
Nano image.
Obtain the IP address of the Nano Server computer and use Windows PowerShell remoting or other
remote management tools to connect to and remotely manage the Nano Server.
Remote management
To run commands on Nano Server such as Get-PlatformIdentifier, you need to connect to the Nano Server from a
separate management server running an operating system with a graphical user interface using Windows
PowerShell Remoting.
To enable the management server to start the Windows PowerShell remoting session:
Set-Item WSMan:\localhost\Client\TrustedHosts <nano server ip address or name>
This cmdlet just needs to be run once on the management server.
To start a remote session:
Enter-PSSession -ComputerName <nano server name or ip address> -Credential <nano server>\administrator
Occasionally, you might get an access denied error when running the preceding cmdlets. You can reset the WinRM
settings and firewall rules on the Nano Server by logging on to the Nano Server recovery console and selecting
the WinRM option.
Next step
Confirm hosts can attest successfully
See also
Deploying the Host Guardian Service for guarded hosts and shielded VMs
Add host information to the HGS configuration
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
Get information about the hosts from the fabric admin and add it to HGS. This step varies depending on the mode
of attestation:
Add host information for Admin-trusted attestation
Add host information for TPM-trusted attestation
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
Add host information for Admin-trusted attestation
The following two sections describe how to configure DNS forwarding, and add host information to HGS for AD
mode:
Configuring DNS forwarding and domain trust
Adding security group information to the HGS configuration
Configure DNS forwarding and domain trust
Use the following steps to set up necessary DNS forwarding from the HGS domain to the fabric domain, and to
establish a one-way forest trust to the fabric domain. These steps allow the HGS to locate the fabric domain's
domain controllers and validate group membership of the Hyper-V hosts.
1. To configure a DNS forwarder that allows HGS to resolve resources located in the fabric (host) domain, run
the following command in an elevated PowerShell session.
Replace fabrikam.com with the name of the fabric domain and type the IP addresses of DNS servers in the
fabric domain. For higher availability, point to more than one DNS server.
Add-DnsServerConditionalForwarderZone -Name "fabrikam.com" -ReplicationScope "Forest" -MasterServers
<DNSserverAddress1>, <DNSserverAddress2>
2. To create a one-way forest trust from the HGS domain to the fabric domain, run the following command in
an elevated Command Prompt:
Replace relecloud.com with the name of the HGS domain and fabrikam.com with the name of the fabric
domain. Provide the password for an admin of the fabric domain.
netdom trust relecloud.com /domain:fabrikam.com /userD:fabrikam.com\Administrator /passwordD:<password> /add
Add security group information to the HGS configuration
1. Obtain the SID of the security group for guarded hosts from the fabric administrator and run the following
command to register the security group with HGS. Re-run the command if necessary for additional groups.
Provide a friendly name for the group. It does not need to match the Active Directory security group name.
Add-HgsAttestationHostGroup -Name "<GuardedHostGroup>" -Identifier "<SID>"
2. To verify the group was added, run Get-HgsAttestationHostGroup.
This completes the process of configuring an HGS cluster for AD mode. The fabric administrator might need you to
provide two URLs from HGS before the configuration can be completed for the hosts. To obtain these URLs, on an
HGS server, run Get-HgsServer.
5/20/2017 • 2 min to read • Edit Online
Applies To: Windows Server 2016
Add host information for TPM-trusted attestation
For TPM mode, the fabric administrator captures three kinds of host information, each of which needs to be
added to the HGS configuration:
A TPM identifier (EKpub) for each Hyper-V host
Code Integrity policies, a white list of allowed binaries for the Hyper-V hosts
A TPM baseline (boot measurements) that represents a set of Hyper-V hosts that run on the same class of
hardware
The process that the fabric administrator uses is described in Capture TPM-mode information required by HGS.
After the fabric administrator captures the information, add it to the HGS configuration as described in the
following procedure.
1. Obtain the XML files that contain the EKpub information and copy them to an HGS server. There will be one
XML file per host. Then, in an elevated Windows PowerShell console on an HGS server, run the command
below. Repeat the command for each of the XML files, or use the loop sketched after this procedure.
Add-HgsAttestationTpmHost -Path <Path><Filename>.xml -Name <HostName>
NOTE
If you encounter an error when adding a TPM identifier regarding an untrusted Endorsement Key Certificate
(EKCert), ensure that the trusted TPM root certificates have been added to the HGS node. Additionally, some TPM
vendors do not use EKCerts. You can check if an EKCert is missing by opening the XML file in an editor such as
Notepad and checking for an error message indicating no EKCert was found. If this is the case, and you trust that
the TPM in your machine is authentic, you can use the -Force flag to override this safety check and add the host
identifier to HGS.
2. Obtain the code integrity policy that the fabric administrator created for the hosts, in binary format (*.p7b).
Copy it to an HGS server. Then run the following command.
For <PolicyName> , specify a name for the CI policy that describes the type of host it applies to. A best
practice is to name it after the make/model of your machine and any special software configuration
running on it.
For <Path> , specify the path and filename of the code integrity policy.
Add-HgsAttestationCIPolicy -Path <Path> -Name '<PolicyName>'
3. Obtain the TCGlog file that the fabric administrator captured from a reference host. Copy the file to an HGS
server. Then run the following command. Typically, you will name the policy after the class of hardware it
represents (for example, "Manufacturer Model Revision").
Add-HgsAttestationTpmPolicy -Path <Filename>.tcglog -Name '<PolicyName>'
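If step 1 involves many hosts, you can register all of the EKpub files in a loop instead of running the command by hand for each file. The following is a minimal sketch; it assumes the XML files have been collected into a single folder and that each file is named after the host it came from:
# Register every EKpub XML file in the folder, using the file name (without extension) as the host name
Get-ChildItem -Path 'C:\EKpub' -Filter '*.xml' | ForEach-Object {
    Add-HgsAttestationTpmHost -Path $_.FullName -Name $_.BaseName
}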
This completes the process of configuring an HGS cluster for TPM mode. The fabric administrator might need you
to provide two URLs from HGS before the configuration can be completed for the hosts. To obtain these URLs, on
an HGS server, run Get-HgsServer.
Confirm guarded hosts can attest
6/5/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
A fabric administrator needs to confirm that Hyper-V hosts can run as guarded hosts. Complete the following
steps on at least one guarded host:
1. If you have not already installed the Hyper-V role and Host Guardian Hyper-V Support feature, install them
with the following command:
Install-WindowsFeature Hyper-V, HostGuardian -IncludeManagementTools -Restart
2. Configure the host's Key Protection and Attestation URLs:
Through Windows PowerShell: You can configure the Key Protection and Attestation URLs by
executing the following command in an elevated Windows PowerShell console. For <FQDN>, use
the Fully Qualified Domain Name (FQDN) of your HGS cluster (for example, hgs.relecloud.com, or
ask the HGS administrator to run the Get-HgsServer cmdlet on the HGS server to retrieve the
URLs).
Set-HgsClientConfiguration -AttestationServerUrl 'http://<FQDN>/Attestation' -KeyProtectionServerUrl 'http://<FQDN>/KeyProtection'
Through VMM: If you are using System Center 2016 - Virtual Machine Manager (VMM), you can
configure Attestation and Key Protection URLs in VMM. For details, see Configure global HGS
settings in Provision guarded hosts in VMM.
Notes
If the HGS administrator enabled HTTPS on the HGS server, begin the URLs with https://.
If the HGS administrator enabled HTTPS on the HGS server and used a self-signed certificate, you
will need to import the certificate into the Trusted Root Certificate Authorities store on every host.
To do this, run the following command on each host:
Import-Certificate -FilePath "C:\temp\HttpsCertificate.cer" -CertStoreLocation
Cert:\LocalMachine\Root
3. To initiate an attestation attempt on the host and view the attestation status, run the following command:
Get-HgsClientConfiguration
The output of the command indicates whether the host passed attestation and is now guarded. If
IsHostGuarded does not return True, you can run the HGS diagnostics tool, Get-HgsTrace, to investigate.
To run diagnostics, enter the following command in an elevated Windows PowerShell prompt on the host:
Get-HgsTrace -RunDiagnostics -Detailed
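If you are checking several hosts, the minimal sketch below (run in an elevated Windows PowerShell prompt on each host) combines both commands and only runs diagnostics when the host does not report as guarded:
# Check whether the host is guarded; if not, run the HGS diagnostics to find out why
$hgsClient = Get-HgsClientConfiguration
if (-not $hgsClient.IsHostGuarded) {
    Get-HgsTrace -RunDiagnostics -Detailed
}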
Next steps
For hosting service providers, see Hosting service provider configuration steps for guarded hosts and shielded
VMs.
For a list of all tasks for configuring a guarded fabric, see Deployment tasks for guarded fabrics and shielded VMs.
See also
Deploy the Host Guardian Service (HGS)
Deploy shielded VMs
Deploy shielded VMs
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
The following topics describe how a tenant can work with shielded VMs.
1. (Optional) Create a template disk. The template disk can be created by either the tenant or the hosting
service provider.
2. (Optional) Convert an existing VM to a shielded VM.
3. Create shielding data to define a shielded VM.
For a description and diagram of a shielding data file, see What is shielding data and why is it necessary?
For information about creating an answer file to include in a shielding data file, see Generate an answer file by using the New-ShieldingDataAnswerFile function.
4. Create a shielded VM:
Using Windows Azure Pack: Deploy a shielded VM by using Windows Azure Pack
Using Virtual Machine Manager: Deploy a shielded VM by using Virtual Machine Manager
See also
Guarded fabric and shielded VMs
Shielded VMs - Hosting service provider creates a
shielded VM template
4/24/2017 • 8 min to read • Edit Online
As with regular VMs, you can create a VM template (for example, a VM template in Virtual Machine Manager
(VMM)) to make it easy for tenants and administrators to deploy new VMs on the fabric using a template disk.
Because shielded VMs are security-sensitive assets, there are additional steps to create a VM template that
supports shielding. This topic covers the steps to create a shielded template disk and a VM template in VMM.
To understand how this topic fits in the overall process of deploying shielded VMs, see Hosting service provider
configuration steps for guarded hosts and shielded VMs.
Prepare an operating system VHDX
First prepare an OS disk that you will then run through the Shielded Template Disk Creation Wizard. This disk will
be used as the OS disk in your tenant's VMs. You can use any existing tooling to create this disk, such as the Deployment Image Servicing and Management (DISM) tool, or manually set up a VM with a blank VHDX and install the OS onto that
disk. When setting up the disk, it must adhere to the following requirements that are specific to generation 2
and/or shielded VMs:
Requirement: Must be a GUID Partition Table (GPT) disk
Reason: Needed for generation 2 virtual machines to support UEFI

Requirement: Disk type must be Basic as opposed to Dynamic. (Note: This refers to the logical disk type, not the "dynamically expanding" VHDX feature supported by Hyper-V.)
Reason: BitLocker does not support dynamic disks.

Requirement: The disk has at least two partitions. One partition must include the drive on which Windows is installed; this is the drive that BitLocker will encrypt. The other partition is the active partition, which contains the bootloader and remains unencrypted so that the computer can be started.
Reason: Needed for BitLocker

Requirement: The file system is NTFS
Reason: Needed for BitLocker

Requirement: The operating system installed on the VHDX is Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows 10, Windows 8.1, or Windows 8
Reason: Needed to support generation 2 virtual machines and the Microsoft Secure Boot template

Requirement: The operating system must be generalized (run sysprep.exe)
Reason: Template provisioning involves specializing VMs for a specific tenant's workload
NOTE
If you use VMM, do not copy the template disk into the VMM library at this stage.
Required packages to create a Nano Server template disk
If you are planning to run Nano Server as your guest OS in shielded VMs, you must ensure your Nano Server
image includes the following packages:
Microsoft-NanoServer-Guest-Package
Microsoft-NanoServer-SecureStartup-Package
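For reference, a Nano Server guest VHDX that includes these packages can be built with the NanoServerImageGenerator module shipped on the Windows Server 2016 media. The sketch below is illustrative only; the media path, base path, target path, and edition are assumptions, and you should verify the parameter names against the NanoServerImageGenerator documentation before use.
# A sketch only: build a Nano Server guest VHDX that includes the Secure Startup package
# (-DeploymentType Guest adds the guest drivers; confirm the resulting image contains both packages listed above)
Import-Module 'D:\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psd1'
New-NanoServerImage -DeploymentType Guest -Edition Datacenter -MediaPath 'D:\' -BasePath 'C:\NanoBase' -TargetPath 'C:\NanoTemplate\NanoTemplate.vhdx' -Package 'Microsoft-NanoServer-SecureStartup-Package'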
Run Windows Update on the template operating system
On the template disk, verify that the operating system has all of the latest Windows updates installed. Recently
released updates improve the reliability of the end-to-end shielding process - a process that may fail to complete if
the template operating system is not up-to-date.
Prepare and protect the VHDX with the template disk wizard
To use a template disk with shielded VMs, the disk must be prepared and encrypted with BitLocker by using the
Shielded Template Disk Creation Wizard. This wizard will generate a hash for the disk and add it to a volume
signature catalog (VSC). The VSC is signed using a certificate you specify and is used during the provisioning
process to ensure the disk being deployed for a tenant has not been altered or replaced with a disk the tenant does
not trust. Finally, BitLocker is installed on the disk's operating system (if it is not already there) to prepare the disk
for encryption during VM provisioning.
NOTE
The template disk wizard modifies the template disk you specify in place. If you might need to update the disk later, make a copy of the unprotected VHDX before running the wizard; you will not be able to modify a disk after it has been protected with the template disk wizard.
Perform the following steps on a computer running Windows Server 2016 (does not need to be a guarded host or
a VMM server):
1. Copy the generalized VHDX created in Prepare an operating system VHDX to the server, if it is not already
there.
2. To administer the server locally, install the Shielded VM Tools feature from Remote Server
Administration Tools on the server.
Install-WindowsFeature RSAT-Shielded-VM-Tools -Restart
You can also administer the server from a client computer on which you have installed the Windows 10
Remote Server Administration Tools.
3. Obtain or create a certificate to sign the VSC for the VHDX that will become the template disk for new
shielded VMs. Details about this certificate will be shown to tenants when they create their shielding data
files and are authorizing disks they trust. Therefore, it is important to obtain this certificate from a certificate
authority mutually trusted by you and your tenants. In enterprise scenarios where you are both the hoster
and tenant, you might consider issuing this certificate from your PKI.
If you are setting up a test environment and just want to use a self-signed certificate to prepare your
template disk, run a command similar to the following:
New-SelfSignedCertificate -DnsName publisher.fabrikam.com
4. Start the Template Disk Wizard from the Administrative Tools folder on the Start menu or by typing
TemplateDiskWizard.exe into a command prompt.
5. On the Certificate page, click Browse to display a list of certificates. Select the certificate with which to
prepare the disk template. Click OK and then click Next.
6. On the Virtual Disk page, click Browse to select the VHDX that you have prepared, then click Next.
7. On the Signature Catalog page, provide a friendly disk name and version. These fields are present to help
you identify the disk once it has been prepared.
For example, for the disk name you could type WS2016 and for the version, 1.0.0.0.
8. Review your selections on the Review Settings page of the wizard. When you click Generate, the wizard will
enable BitLocker on the template disk, compute the hash of the disk, and create the Volume Signature
Catalog, which is stored in the VHDX metadata.
Wait until the prep process has finished before attempting to mount or move the template disk. This process
may take a while to complete, depending on the size of your disk.
9. On the Summary page, information about the disk template, the certificate used to sign the VSC, and the
certificate issuer is shown. Click Close to exit the wizard.
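If you prefer to script this preparation rather than use the wizard, the Shielded VM Tools also provide a Protect-TemplateDisk cmdlet that performs the same steps. The sketch below assumes the self-signed certificate from step 3 and example values for the disk path, name, and version; verify the parameters with Get-Help Protect-TemplateDisk before relying on it.
# A sketch: prepare and protect the template disk from the command line instead of the wizard
$certificate = New-SelfSignedCertificate -DnsName 'publisher.fabrikam.com'
Protect-TemplateDisk -Path 'C:\VHD\WS2016Template.vhdx' -TemplateName 'WS2016' -Version '1.0.0.0' -Certificate $certificate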
If you use VMM, follow the steps in the remaining sections in this topic to incorporate a template disk into a
shielded VM template in VMM.
Copy the template disk to the VMM Library
If you use VMM, after you create a template disk, you need to copy it to a VMM library share so hosts can
download and use the disk when provisioning new VMs. Use the following procedure to copy the template disk
into the VMM library and then refresh the library.
1. Copy the VHDX file to the VMM library share folder. If you used the default VMM configuration, copy the
template disk to \\MSSCVMMLibrary\VHDs.
2. Refresh the library server. Open the Library workspace, expand Library Servers, right-click on the library
server that you want to refresh, and click Refresh.
3. Next, provide VMM with information about the operating system installed on the template disk:
a. Find your newly imported template disk on your library server in the Library workspace.
b. Right-click the disk and then click Properties.
c. For operating system, expand the list and select the operating system installed on the disk. Selecting an
operating system indicates to VMM that the VHDX is not blank.
d. When you have updated the properties, click OK.
The small shield icon next to the disk's name denotes the disk as a prepared template disk for shielded VMs. You
can also right-click the column headers and toggle the Shielded column to see a textual representation indicating
whether a disk is intended for regular or shielded VM deployments.
Create the shielded VM template in VMM using the prepared template
disk
With a prepared template disk in your VMM library, you are ready to create a VM template for shielded VMs. VM
templates for shielded VMs differ slightly from traditional VM templates in that certain settings are fixed
(generation 2 VM, UEFI and Secure Boot enabled, and so on) and others are unavailable (tenant customization is
limited to a few select properties of the VM). To create the VM template, perform the following steps:
1. In the Library workspace, click Create VM Template on the home tab at the top.
2. On the Select Source page, click Use an existing VM template or a virtual hard disk stored in the
library, and then click Browse.
3. In the window that appears, select a prepared template disk from the VMM library. To more easily identify
which disks are prepared, right-click a column header and enable the Shielded column. Click OK then Next.
4. Specify a VM template name and optionally a description, and then click Next.
5. On the Configure Hardware page, specify the capabilities of VMs created from this template. Ensure that at
least one NIC is available and configured on the VM template. The only way for a tenant to connect to a
shielded VM is through Remote Desktop Connection, Windows Remote Management, or other preconfigured remote management tools that work over networking protocols.
If you choose to leverage static IP pools in VMM instead of running a DHCP server on the tenant network,
you will need to alert your tenants to this configuration. When a tenant supplies their shielding data file,
which contains the unattend file for the VM, they will need to provide special placeholder values for the
static IP pool information. For more information about VMM placeholders in tenant unattend files, see
Create an answer file.
6. On the Configure Operating System page, VMM will only show a few options for shielded VMs, including
the product key, time zone, and computer name. Some secure information, such as the administrator
password and domain name, is specified by the tenant through a shielding data file (.PDK file).
NOTE
If you choose to specify a product key on this page, ensure it is valid for the operating system on the template disk. If
an incorrect product key is used, the VM creation will fail.
After the template is created, tenants can use it to create new virtual machines. You will need to verify that the VM
template is one of the resources available to the Tenant Administrator user role (in VMM, user roles are in the
Settings workspace).
See also
Hosting service provider configuration steps for guarded hosts and shielded VMs
Guarded fabric and shielded VMs
Shielded VMs - Hosting service provider prepares a
VM Shielding Helper VHD
4/24/2017 • 2 min to read • Edit Online
Applies To: Windows Server 2016
IMPORTANT
Before beginning these procedures, ensure that you have installed the latest cumulative update for Windows Server 2016 or
are using the latest Windows 10 Remote Server Administration Tools. Otherwise, the procedures will not work.
This section outlines steps performed by a hosting service provider to enable support for converting existing VMs
to shielded VMs.
To understand how this topic fits in the overall process of deploying shielded VMs, see Hosting service provider
configuration steps for guarded hosts and shielded VMs.
Prepare Helper VHD
1. On a machine with Hyper-V and the Remote Server Administration Tools feature Shielded VM Tools
installed, create a new generation 2 VM with a blank VHDX and install Windows Server 2016 on it using the
Windows Server ISO installation media. This VM should not be shielded and cannot use the Nano Server
installation option (it must be Server Core or Server with Desktop Experience).
IMPORTANT
The VM Shielding Helper VHD must not be related to the template disks you created in Hosting service provider
creates a shielded VM template. If you re-use a template disk, there will be a disk signature collision during the
shielding process because both disks will have the same GPT disk identifier. You can avoid this by creating a new
(blank) VHD and installing Windows Server 2016 onto it using your ISO installation media.
2. Start the VM, complete any setup steps, and log into the desktop. Once you have verified the VM is in a
working state, shut down the VM.
3. In an elevated Windows PowerShell window, run the following command to prepare the VHDX created
earlier to become a VM shielding helper disk. Update the path with the correct path for your environment.
Initialize-VMShieldingHelperVHD -Path 'C:\VHD\shieldingHelper.vhdx'
4. Once the command has completed successfully, copy the VHDX to your VMM library share. Do not start up
the VM from step 1 again. Doing so will corrupt the helper disk.
5. You can now delete the VM from step 1 in Hyper-V.
Configure VMM Host Guardian Server Settings
In the VMM Console, open the settings pane and then Host Guardian Service Settings under General. At the
bottom of this window, there is a field to configure the location of your helper VHD. Use the browse button to
select the VHD from your library share. If you do not see your disk in the share, you may need to manually refresh
the library in VMM for it to show up.
See also
Hosting service provider configuration steps for guarded hosts and shielded VMs
Guarded fabric and shielded VMs
Shielded VMs - Hosting service provider sets up
Windows Azure Pack
4/24/2017 • 4 min to read • Edit Online
This topic describes how a hosting service provider can configure Windows Azure Pack so that tenants can use it to
deploy shielded VMs. Windows Azure Pack is a web portal that extends the functionality of System Center Virtual
Machine Manager to allow tenants to deploy and manage their own VMs through a simple web interface. Windows
Azure Pack fully supports shielded VMs and makes it even easier for your tenants to create and manage their
shielding data files.
To understand how this topic fits in the overall process of deploying shielded VMs, see Hosting service provider
configuration steps for guarded hosts and shielded VMs.
Setting up Windows Azure Pack
You will complete the following tasks to set up Windows Azure Pack in your environment:
1. Complete configuration of System Center 2016 - Virtual Machine Manager (VMM) for your hosting fabric.
This includes setting up VM templates and a VM cloud, which will be exposed through Windows Azure Pack:
Scenario - Deploy guarded hosts and shielded virtual machines in VMM
2. Install and configure System Center 2016 - Service Provider Foundation (SPF). This software enables
Windows Azure Pack to communicate with your VMM servers:
Deploying Service Provider Foundation - SPF
3. Install Windows Azure Pack and configure it to communicate with SPF:
Install Windows Azure Pack (in this topic)
Configure Windows Azure Pack (in this topic)
4. Create one or more hosting plans in Windows Azure Pack to allow tenants access to your VM clouds:
Create a plan in Windows Azure Pack (in this topic)
Install Windows Azure Pack
Install and configure Windows Azure Pack (WAP) on the machine where you wish to host the web portal for your
tenants. This machine will need to be able to reach the SPF server and be reachable by your tenants.
1. Review the WAP system requirements and install the prerequisite software.
2. Download and install the Web Platform Installer. If the machine is not connected to the Internet, follow the
offline installation instructions.
3. Open the Web Platform Installer and find Windows Azure Pack: Portal and API Express under the
Products tab. Click Add, then Install at the bottom of the window.
4. Proceed through the installation. After the installation completes, the configuration site
(https://<wapserver>:30101/) opens in your web browser. On this website, provide information about your
SQL server and finish configuring WAP.
For help setting up Windows Azure Pack, see Install an express deployment of Windows Azure Pack.
NOTE
If you already run Windows Azure Pack in your environment, you may use your existing installation. In order to work with the
latest shielded VM features, however, you will need to upgrade your installation to at least Update Rollup 10.
Configure Windows Azure Pack
Before you use Windows Azure Pack, you should already have it installed and configured for your infrastructure.
1. Navigate to the Windows Azure Pack admin portal at https://<wapserver>:30091, and then log in using your
administrator credentials.
2. In the left pane, click VM Clouds.
3. Connect Windows Azure Pack to the Service Provider Foundation instance by clicking Register System
Center Service Provider Foundation. You will need to specify the URL for Service Provider Foundation, as
well as a username and password.
4. Once completed, you should be able to see the VM clouds set up in your VMM environment. Ensure you
have at least one VM cloud that supports shielded VMs available to WAP before continuing.
Create a plan in Windows Azure Pack
In order to allow tenants to create VMs in WAP, you must first create a hosting plan to which tenants can subscribe.
Plans define the allowed VM clouds, templates, networks, and billing entities for your tenants.
1. On the lower pane of the portal, click +NEW > PLAN > CREATE PLAN.
2. In the first step of the wizard, choose a name for your Plan. This is the name your tenants will see when
subscribing.
3. In the second step, select VIRTUAL MACHINE CLOUDS as one of the services to offer in the plan.
4. Skip the step about selecting any add-ons for the plan.
5. Click OK (check mark) to create the plan. Although this creates the plan, it is not yet in a configured state.
6. To begin configuring the Plan, click its name.
7. On the next page, under plan services, click Virtual Machine Clouds. This opens the page where you can
configure quotas for this plan.
8. Under basic, select the VMM Management Server and Virtual Machine Cloud you wish to offer to your
tenants. Clouds that can offer shielded VMs will be displayed with (shielding supported) next to their
name.
9. Select the quotas you want to apply in this Plan. (For example, limits on CPU core and RAM usage). Make
sure to leave the Allow Virtual Machines To Be Shielded checkbox selected.
10. Scroll down to the section titled templates, and then select one or more templates to offer to your tenants.
You can offer both shielded and unshielded templates to tenants, but a shielded template must be offered to
give tenants end-to-end assurances about the integrity of the VM and their secrets.
11. In the networks section, add one or more networks for your tenants.
12. After setting any other settings or quotas for the Plan, click Save at the bottom.
13. At the top left of the screen, click on the arrow to take you back to the Plan page.
14. At the bottom of the screen, change the Plan from being Private to Public so that tenants can subscribe to
the Plan.
At this point, Windows Azure Pack is configured and tenants will be able to subscribe to the plan you just
created and deploy shielded VMs. For additional steps that tenants need to complete, see Shielded VMs for
tenants - Deploying a shielded VM by using Windows Azure Pack.
See also
Hosting service provider configuration steps for guarded hosts and shielded VMs
Guarded fabric and shielded VMs
Shielded VMs for tenants - Creating shielding data to
define a shielded VM
4/24/2017 • 11 min to read • Edit Online
Applies To: Windows Server 2016
A shielding data file (also called a provisioning data file or PDK file) is an encrypted file that a tenant or VM owner
creates to protect important VM configuration information, such as the administrator password, RDP and other
identity-related certificates, domain-join credentials, and so on. This topic provides information about how to
create a shielding data file. Before you can create the file, you must either obtain a template disk from your hosting
service provider, or create a template disk as described in Shielded VMs for tenants - Creating a template disk
(optional).
For a list and a diagram of the contents of a shielding data file, see What is shielding data and why is it necessary?.
IMPORTANT
The steps in this section should be completed on a tenant machine running Windows Server 2016. That machine must not
be part of a guarded fabric (that is, should not be configured to use an HGS cluster).
To prepare to create a shielding data file, take the following steps:
Obtain a certificate for Remote Desktop Connection
Create an answer file
Get the volume signature catalog file
Select trusted fabrics
Then you can create the shielding data file:
Create a shielding data file and add guardians
Obtain a certificate for Remote Desktop Connection
Since tenants are only able to connect to their shielded VMs using Remote Desktop Connection or other remote
management tools, it is important to ensure that tenants can verify they are connecting to the right endpoint (that
is, there is not a "man in the middle" intercepting the connection).
One way to verify you are connecting to the intended server is to install and configure a certificate for Remote
Desktop Services to present when you initiate a connection. The client machine connecting to the server will check
whether it trusts the certificate and show a warning if it does not. Generally, to ensure the connecting client trusts
the certificate, RDP certificates are issued from the tenant's PKI. More information about Using certificates in
Remote Desktop Services can be found on TechNet.
NOTE
When selecting an RDP certificate to include in your shielding data file, be sure to use a wildcard certificate. One shielding
data file may be used to create an unlimited number of VMs. Since each VM will share the same certificate, a wildcard
certificate ensures the certificate will be valid regardless of the VM's hostname.
If you are evaluating shielded VMs and are not yet ready to request a certificate from your certificate authority, you
can create a self-signed certificate on the tenant machine by running the following Windows PowerShell command
(where contoso.com is the tenant's domain):
$rdpCertificate = New-SelfSignedCertificate -DnsName '*.contoso.com'
$password = ConvertTo-SecureString -AsPlainText 'Password1' -Force
Export-PfxCertificate -Cert $RdpCertificate -FilePath .\rdpCert.pfx -Password $password
Create an answer file
Since the signed template disk in VMM is generalized, tenants are required to provide an answer file to specialize
their shielded VMs during the provisioning process. The answer file (often called the unattend file) can configure
the VM for its intended role - that is, it can install Windows features, register the RDP certificate created in the
previous step, and perform other custom actions. It will also supply required information for Windows setup,
including the default administrator's password and product key.
For information about obtaining and using the New-ShieldingDataAnswerFile function to generate an answer
file (Unattend.xml file) for creating shielded VMs, see Generate an answer file by using the New-ShieldingDataAnswerFile function. Using the function, you can more easily generate an answer file that reflects
choices such as the following:
Is the VM intended to be domain joined at the end of the initialization process?
Will you be using a volume license or specific product key per VM?
Are you using DHCP or static IP?
Will you use a Remote Desktop Protocol (RDP) certificate that will be used to prove that the VM belongs to your
organization?
Do you want to run a script at the end of the initialization?
Are you using a Desired State Configuration (DSC) server for further configuration?
Answer files used in shielding data files will be used on every VM created using that shielding data file. Therefore,
you should make sure that you do not hard code any VM-specific information into the answer file. VMM supports
some substitution strings (see the table below) in the unattend file to handle specialization values that may change
from VM to VM. You are not required to use these; however, if they are present VMM will take advantage of them.
When creating an unattend.xml file for shielded VMs, keep in mind the following restrictions:
The unattend file must result in the VM being turned off after it has been configured. This is to allow VMM
to know when it should report to the tenant that the VM finished provisioning and is ready for use. VMM
will automatically power the VM back on once it detects it has been turned off during provisioning.
It is strongly recommended that you configure an RDP certificate to ensure you are connecting to the right
VM and not another machine configured for a man-in-the-middle attack.
Be sure to enable RDP and the corresponding firewall rule so you can access the VM after it has been
configured. You cannot use the VMM console to access shielded VMs, so you will need RDP to connect to
your VM. If you prefer to manage your systems with Windows PowerShell remoting, ensure WinRM is
enabled, too.
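As an illustration of those last two points, the commands below are the kind of thing a setup script referenced by your unattend file might run. They are a sketch only, not part of the original guidance, and assume the English display group name for the Remote Desktop firewall rules:
# Enable Remote Desktop, its firewall rules, and PowerShell remoting, then shut down
# so VMM can detect that provisioning has finished
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' -Name 'fDenyTSConnections' -Value 0
Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'
Enable-PSRemoting -Force
Stop-Computer -Force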
The only substitution strings supported in shielded VM unattend files are the following:
REPLACEABLE ELEMENT | SUBSTITUTION STRING
ComputerName | @ComputerName@
TimeZone | @TimeZone@
ProductKey | @ProductKey@
IPAddr4-1 | @IP4Addr-1@
IPAddr6-1 | @IP6Addr-1@
MACAddr-1 | @MACAddr-1@
Prefix-1-1 | @Prefix-1-1@
NextHop-1-1 | @NextHop-1-1@
Prefix-1-2 | @Prefix-1-2@
NextHop-1-2 | @NextHop-1-2@
When using substitution strings, it is important to ensure that the strings will be populated during the VM
provisioning process. If a string such as @ProductKey@ is not supplied at deployment time, leaving the
<ProductKey> node in the unattend file blank, the specialization process will fail and you will be unable to connect
to your VM.
Also, note that the networking-related substitution strings towards the end of the table are only used if you are
leveraging VMM Static IP Address Pools. Your hosting service provider should be able to tell you if these
substitution strings are required. For more information about static IP addresses in VMM templates, see the
following in the VMM documentation:
Guidelines for IP address pools
Set up static IP address pools in the VMM fabric
Finally, it is important to note that the shielded VM deployment process will only encrypt the OS drive. If you
deploy a shielded VM with one or more data drives, it is strongly recommended that you add an unattend
command or Group Policy setting in the tenant domain to automatically encrypt the data drives.
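For example, the following sketch (not part of the original guidance; the drive letter and protector choice are assumptions) shows how a first-boot command or startup script inside the VM could encrypt a data volume and have the already-encrypted OS drive unlock it automatically at boot:
# Encrypt the data volume and let the encrypted OS volume auto-unlock it at startup
Enable-BitLocker -MountPoint 'D:' -EncryptionMethod XtsAes256 -UsedSpaceOnly -RecoveryPasswordProtector
Enable-BitLockerAutoUnlock -MountPoint 'D:'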
Get the volume signature catalog file
Shielding data files also contain information about the template disks a tenant trusts. Tenants acquire the disk
signatures from trusted template disks in the form of a volume signature catalog (VSC) file. These signatures are
then validated when a new VM is deployed. If none of the signatures in the shielding data file match the template
disk trying to be deployed with the VM (i.e. it was modified or swapped with a different, potentially malicious disk),
the provisioning process will fail.
IMPORTANT
While the VSC ensures that a disk has not been tampered with, it is still important for the tenant to trust the disk in the first
place. If you are the tenant and the template disk is provided by your hoster, deploy a test VM using that template disk and
run your own tools (antivirus, vulnerability scanners, and so on) to validate the disk is, in fact, in a state that you trust.
There are two ways to acquire the VSC of a template disk:
The hoster (or tenant, if the tenant has access to VMM) uses the VMM PowerShell cmdlets to save the VSC
and gives it to the tenant. This can be performed on any machine with the VMM console installed and
configured to manage the hosting fabric's VMM environment. The PowerShell cmdlets to save the VSC are:
$disk = Get-SCVirtualHardDisk -Name "templateDisk.vhdx"
$vsc = Get-SCVolumeSignatureCatalog -VirtualHardDisk $disk
$vsc.WriteToFile(".\templateDisk.vsc")
The tenant has access to the template disk file. This may be the case if the tenant creates a template disk to be
uploaded to a hosting service provider or if the tenant can download the hoster's template disk. In this case,
without VMM in the picture, the tenant would run the following cmdlet (installed with the Shielded VM
Tools feature, part of Remote Server Administration Tools):
Save-VolumeSignatureCatalog -TemplateDiskPath templateDisk.vhdx -VolumeSignatureCatalogPath
templateDisk.vsc
Select trusted fabrics
The last component in the shielding data file relates to the owner and guardians of a VM. Guardians are used to
designate both the owner of a shielded VM and the guarded fabrics on which it is authorized to run.
To authorize a hosting fabric to run a shielded VM, you must obtain the guardian metadata from the hosting
service provider's Host Guardian Service. Often, the hosting service provider will provide you with this metadata
through their management tools. In an enterprise scenario, you may have direct access to obtain the metadata
yourself.
You or your hosting service provider can obtain the guardian metadata from HGS by performing one of the
following actions:
Obtain the guardian metadata directly from HGS by running the following Windows PowerShell command,
or browsing to the website and saving the XML file that is displayed:
Invoke-WebRequest 'http://hgs.relecloud.com/KeyProtection/service/metadata/2014-07/metadata.xml' -OutFile .\RelecloudGuardian.xml
Obtain the guardian metadata from VMM using the VMM PowerShell cmdlets:
$relecloudmetadata = Get-SCGuardianConfiguration
$relecloudmetadata.InnerXml | Out-File .\RelecloudGuardian.xml -Encoding UTF8
Obtain the guardian metadata files for each guarded fabric you wish to authorize your shielded VMs to run on
before continuing.
Create a shielding data file and add guardians
Run the Shielding Data File wizard to create a shielding data (PDK) file. Here, you'll add the RDP certificate,
unattend file, volume signature catalogs, owner guardian and the downloaded guardian metadata obtained in the
preceding step.
1. Install Remote Server Administration Tools > Feature Administration Tools > Shielded VM Tools on
your machine using Server Manager or the following Windows PowerShell command:
Install-WindowsFeature RSAT-Shielded-VM-Tools
2. Open the Shielding Data File Wizard from the Administrative Tools section of your Start menu, or by running the following executable: C:\Windows\System32\ShieldingDataFileWizard.exe.
3. On the first page, use the second file selection box to choose a location and file name for your shielding data
file. Normally, you would name a shielding data file after the entity who owns any VMs created with that
shielding data (for example, HR, IT, Finance) and the workload role it is running (for example, file server, web
server, or anything else configured by the unattend file). Leave the radio button set to Shielding data for
Shielded templates.
NOTE
In the Shielding Data File Wizard you will notice the two options below:
Shielding data for Shielded templates
Shielding data for existing VMs and non-Shielded templates
The first option is used when creating new shielded VMs from shielded templates. The second option allows you
to create shielding data that can only be used when converting existing VMs or creating shielded VMs from non-shielded templates.
Additionally, you must choose whether VMs created using this shielding data file will be truly shielded or
configured in "encryption supported" mode. For more information about these two options, see What are
the types of virtual machines that a guarded fabric can run?.
IMPORTANT
Pay careful attention to the next step as it defines the owner of your shielded VMs and which fabrics your shielded
VMs will be authorized to run on.
Possession of the owner guardian is required in order to later change an existing shielded VM from Shielded to
Encryption Supported or vice-versa.
4. Your goal in this step is two-fold:
Create or select an owner guardian that represents you as the VM owner
Import the guardian that you downloaded from the hosting provider's (or your own) Host Guardian
Service in the preceding step
To designate an existing owner guardian, select the appropriate guardian from the drop down menu. Only
guardians installed on your local machine with the private keys intact will show up in this list. You can also
create your own owner guardian by selecting Manage Local Guardians in the lower right corner and
clicking Create and completing the wizard.
Next, import the guardian metadata that you downloaded earlier, again using the Owner and Guardians page.
Select Manage Local Guardians from the lower right corner. Use the Import feature to import the
guardian metadata file. Click OK once you have imported or added all of the necessary guardians. As a best
practice, name guardians after the hosting service provider or enterprise datacenter they represent. Finally,
select all the guardians that represent the datacenters in which your shielded VM is authorized to run. You
do not need to select the owner guardian again. Click Next once finished.
5. On the Volume ID Qualifiers page, click Add to authorize a signed template disk in your shielding data file.
When you select a VSC in the dialog box, it will show you information about that disk's name, version, and
the certificate that was used to sign it. Repeat this process for each template disk you wish to authorize.
6. On the Specialization Values page, click Browse to select your unattend.xml file that will be used to
specialize your VMs.
Use the Add button at the bottom to add any additional files to the PDK that are needed during the
specialization process. For example, if your unattend file is installing an RDP certificate onto the VM (as
described in Generate an answer file by using the New-ShieldingDataAnswerFile function), you should add
the RDPCert.pfx file referenced in the unattend file here. Note that any files you specify here will
automatically be copied to C:\temp\ on the VM that is created. Your unattend file should expect the files to
be in that folder when referencing them by path.
7. Review your selections on the next page, and then click Generate.
8. Close the wizard after it has completed.
See also
Deploy shielded VMs
Guarded fabric and shielded VMs
Shielded VMs for tenants - Deploying a shielded VM
by using Windows Azure Pack
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
If your hosting service provider supports it, you can use Windows Azure Pack to deploy a shielded VM.
Complete the following steps:
1. Subscribe to one or more plans offered in Windows Azure Pack.
2. Create a shielded VM by using Windows Azure Pack.
Use shielded virtual machines, which is described in the following topics:
Create shielding data (and upload the shielding data file, as described in the second procedure in the
topic).
NOTE
As part of creating shielding data, you will download your guardian key file, which will be an XML file in UTF-8 format. Do not change the file to UTF-16.
Create a shielded virtual machine - with Quick Create, through a shielded template, or through a
regular template.
WARNING
If you Create a shielded virtual machine by using a regular template, it is important to note that the VM is
provisioned unshielded. This means that the template disk is not verified against the list of trusted disks in
your shielding data file, nor are the secrets in your shielding data file used to provision the VM. If a shielded
template is available, it is preferable to deploy a shielded VM with a shielded template to provide end-to-end
protection of your secrets.
Convert a Generation 2 virtual machine to a shielded virtual machine
NOTE
If you convert a virtual machine to a shielded virtual machine, existing checkpoints and backups are not
encrypted. You should delete old checkpoints when possible to prevent access to your old, decrypted data.
See also
Hosting service provider configuration steps for guarded hosts and shielded VMs
Guarded fabric and shielded VMs
Shielded VMs for tenants - Deploying a shielded VM
by using Virtual Machine Manager
4/24/2017 • 1 min to read • Edit Online
If you have access to System Center 2016 - Virtual Machine Manager (VMM), you can deploy a shielded VM for
which a shielded VM template has already been created.
To deploy a shielded VM in VMM, use the instructions in Provision a new shielded VM.
See also
Deploy shielded VMs
Guarded fabric and shielded VMs
Managing a Guarded Fabric
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
The following topics describe how to manage a guarded fabric.
Manage the Host Guardian Service
See also
Deploying a guarded fabric
Managing the Host Guardian Service
4/24/2017 • 40 min to read • Edit Online
Applies To: Windows Server 2016
The Host Guardian Service (HGS) is the centerpiece of the guarded fabric solution. It is responsible for ensuring
that Hyper-V hosts in the fabric are known to the hoster or enterprise and are running trusted software, and for managing the keys used to start up shielded VMs. When a tenant decides to trust you to host their shielded VMs,
they are placing their trust in your configuration and management of the Host Guardian Service. Therefore, it is
very important to follow best practices when managing the Host Guardian Service to ensure the security,
availability and reliability of your guarded fabric. The guidance in the following sections addresses the most
common operational issues facing administrators of HGS.
Limiting admin access to HGS
Due to the security sensitive nature of HGS, it is important to ensure that its administrators are highly trusted
members of your organization and, ideally, separate from the administrators of your fabric resources. Additionally,
it is recommended that you only manage HGS from secure workstations using secure communication protocols,
such as WinRM over HTTPS.
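For example, a remote management session over WinRM with SSL can be opened as sketched below; the host name, and the assumption that an HTTPS WinRM listener is already configured on the HGS node, are illustrative:
# Connect to an HGS node over WinRM with SSL from a secure admin workstation
Enter-PSSession -ComputerName 'hgs01.relecloud.com' -UseSSL -Credential (Get-Credential)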
Separation of Duties
When setting up HGS, you are given the option of creating an isolated Active Directory forest just for HGS or of joining HGS to an existing, trusted domain. This decision, as well as the roles you assign the admins in your organization,
determine the trust boundary for HGS. Whoever has access to HGS, whether directly as an admin or indirectly as
an admin of something else (e.g. Active Directory) that can influence HGS, has control over your guarded fabric.
HGS admins choose which Hyper-V hosts are authorized to run shielded VMs and manage the certificates
necessary to start up shielded VMs. An attacker or malicious admin who has access to HGS can use this power to
authorize compromised hosts to run shielded VMs, initiate a denial-of-service attack by removing key material, and
more.
To avoid this risk, it is strongly recommended that you limit the overlap between the admins of your HGS
(including the domain to which HGS is joined) and Hyper-V environments. By ensuring no one admin has access to
both systems, an attacker would need to compromise two different accounts from two individuals in order to change the HGS policies. This also means that the domain and enterprise admins for the two Active
Directory environments should not be the same person, nor should HGS use the same Active Directory forest as
your Hyper-V hosts. Anyone who can grant themselves access to more resources poses a security risk.
Using Just Enough Administration
HGS comes with Just Enough Administration (JEA) roles built in to help you manage it more securely. JEA helps by
allowing you to delegate admin tasks to non-admin users, meaning the people who manage HGS policies need not
actually be admins of the entire machine or domain. JEA works by limiting what commands a user can run in a
PowerShell session and using a temporary local account behind the scenes (unique for each user session) to run
the commands which normally require elevation.
HGS ships with two JEA roles preconfigured:
HGS Administrators, which allows users to manage all HGS policies, including authorizing new hosts to run shielded VMs.
HGS Reviewers, which only allows users to audit existing policies. They cannot make any changes to the HGS configuration.
To use JEA, you first need to create a new standard user and make them a member of either the HGS admins or
HGS reviewers group. If you used Install-HgsServer to set up a new forest for HGS, these groups will be named
"servicenameAdministrators" and "servicenameReviewers", respectively, where servicename is the network name
of the HGS cluster. If you joined HGS to an existing domain, you should refer to the group names you specified in
Initialize-HgsServer .
Create standard users for the HGS administrator and reviewer roles
$hgsServiceName = (Get-ClusterResource HgsClusterResource | Get-ClusterParameter DnsName).Value
$adminGroup = $hgsServiceName + "Administrators"
$reviewerGroup = $hgsServiceName + "Reviewers"
New-ADUser -Name 'hgsadmin01' -AccountPassword (Read-Host -AsSecureString -Prompt 'HGS Admin Password') -ChangePasswordAtLogon $false -Enabled $true
Add-ADGroupMember -Identity $adminGroup -Members 'hgsadmin01'
New-ADUser -Name 'hgsreviewer01' -AccountPassword (Read-Host -AsSecureString -Prompt 'HGS Reviewer Password')
-ChangePasswordAtLogon $false -Enabled $true
Add-ADGroupMember -Identity $reviewerGroup -Members 'hgsreviewer01'
Audit policies with the reviewer role
On a remote machine that has network connectivity to HGS, run the following commands in PowerShell to enter
the JEA session with the reviewer credentials. It is important to note that since the reviewer account is just a
standard user, it cannot be used for regular Windows PowerShell remoting, Remote Desktop access to HGS, etc.
Enter-PSSession -ComputerName <hgsnode> -Credential '<hgsdomain>\hgsreviewer01' -ConfigurationName
'microsoft.windows.hgs'
You can then check which commands are allowed in the session with Get-Command and run any allowed commands
to audit the configuration. In the below example, we are checking which policies are enabled on HGS.
Get-Command
Get-HgsAttestationPolicy
Type the command Exit-PSSession, or its alias exit, when you are done working with the JEA session.
Add a new policy to HGS using the administrator role
To actually change a policy, you need to connect to the JEA endpoint with an identity that belongs to the
'hgsAdministrators' group. In the below example, we show how you can copy a new code integrity policy to HGS
and register it using JEA. The syntax may be different from what you are used to. This is to accommodate some of
the restrictions in JEA like not having access to the full file system.
$cipolicy = Get-Item "C:\temp\cipolicy.p7b"
$session = New-PSSession -ComputerName <hgsnode> -Credential '<hgsdomain>\hgsadmin01' -ConfigurationName
'microsoft.windows.hgs'
Copy-Item -Path $cipolicy -Destination 'User:' -ToSession $session
# Now that the file is copied, we enter the interactive session to register it with HGS
Enter-PSSession -Session $session
Add-HgsAttestationCiPolicy -Name 'New CI Policy via JEA' -Path 'User:\cipolicy.p7b'
# Confirm it was added successfully
Get-HgsAttestationPolicy -PolicyType CiPolicy
# Finally, remove the PSSession since it is no longer needed
Exit-PSSession
Remove-PSSession -Session $session
Monitoring HGS
Event sources and forwarding
Events from HGS will show up in the Windows event log under two sources:
HostGuardianService-Attestation
HostGuardianService-KeyProtection
You can view these events by opening Event Viewer and navigating to Microsoft-Windows-HostGuardianService-Attestation and Microsoft-Windows-HostGuardianService-KeyProtection.
In a large environment, it is often preferable to forward events to a central Windows Event Collector to make analysis of the events easier. For more information, see the Windows Event Forwarding documentation.
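To review these events locally with PowerShell, a sketch like the one below can be used; the exact channel names are an assumption, so discover them with the first command before relying on the second:
# Discover the HGS event channels, then read recent events from one of them
Get-WinEvent -ListLog '*HostGuardianService*' | Format-Table LogName, RecordCount
Get-WinEvent -LogName 'Microsoft-Windows-HostGuardianService-Attestation/Admin' -MaxEvents 50 | Format-Table TimeCreated, Id, LevelDisplayName -AutoSize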
Using System Center Operations Manager
You can also use System Center 2016 - Operations Manager to monitor HGS and your guarded hosts. The guarded
fabric management pack has event monitors to check for common misconfigurations that can lead to datacenter
downtime, including hosts not passing attestation and HGS servers reporting errors.
To get started, install and configure SCOM 2016 and download the guarded fabric management pack. The included
management pack guide explains how to configure the management pack and understand the scope of its
monitors.
Backing up and restoring HGS
Disaster recovery planning
When drafting your disaster recovery plans, it is important to consider the unique requirements of the Host
Guardian Service in your guarded fabric. Should you lose some or all of your HGS nodes, you may face immediate
availability problems that will prevent users from starting up their shielded VMs. In a scenario where you lose your
entire HGS cluster, you will need to have complete backups of the HGS configuration on hand to restore your HGS
cluster and resume normal operations. This section covers the steps necessary to prepare for such a scenario.
First, it's important to understand what about HGS is important to back up. HGS retains several pieces of
information that help it determine which hosts are authorized to run shielded VMs. This includes:
1. Active Directory security identifiers for the groups containing trusted hosts (when using Active Directory
attestation);
2. Unique TPM identifiers for each host in your environment;
3. TPM policies for each unique configuration of host; and
4. Code integrity policies that determine which software is allowed to run on your hosts.
These attestation artifacts require coordination with the admins of your hosting fabric to obtain, potentially making
it difficult to get this information again after a disaster.
Additionally, HGS requires access to two or more certificates used to encrypt and sign the information required to
start up a shielded VM (the key protector). These certificates are well known (used by the owners of shielded VMs
to authorize your fabric to run their VMs) and must be restored after a disaster for a seamless recovery experience.
Should you not restore HGS with the same certificates after a disaster, each VM would need to be updated to
authorize your new keys to decrypt their information. For security reasons, only the VM owner can update the VM
configuration to authorize these new keys, meaning failure to restore your keys after a disaster will result in each
VM owner needing to take action to get their VMs running again.
Preparing for the worst
To prepare for a complete loss of HGS, there are two steps you must take:
1. Back up the HGS attestation policies
2. Back up the HGS keys
Guidance on how to perform both of these steps is provided in the Backing up HGS section.
It is additionally recommended, but not required, that you back up the list of users authorized to manage HGS in its
Active Directory domain or Active Directory itself.
Backups should be taken regularly to ensure the information is up to date and stored securely to avoid tampering
or theft.
It is not recommended to back up or attempt to restore an entire system image of an HGS node. In the event you
have lost your entire cluster, the best practice is to set up a brand new HGS node and restore just the HGS state, not
the entire server OS.
Recovering from the loss of one node
If you lose one or more nodes (but not every node) in your HGS cluster, you can simply add nodes to your cluster
following the guidance in the deployment guide. The attestation policies will sync automatically, as will any
certificates which were provided to HGS as PFX files with accompanying passwords. For certificates added to HGS
using a thumbprint (non-exportable and hardware backed certificates, commonly), you will need to ensure each
new node has access to the private key of each certificate.
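If a certificate was previously exported to a PFX file, installing it on the new node might look like the following sketch; the file path and store location are assumptions, and HSM-backed keys follow your vendor's procedure instead:
# Install a backed-up signing or encryption certificate, including its private key, on the new HGS node
$pfxPassword = Read-Host -AsSecureString -Prompt 'PFX password'
Import-PfxCertificate -FilePath 'C:\temp\HgsSigningCert.pfx' -CertStoreLocation 'Cert:\LocalMachine\My' -Password $pfxPassword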
Recovering from the loss of the entire cluster
If your entire HGS cluster goes down and you are unable to bring it back online, you will need to restore HGS from
a backup. Restoring HGS from a backup involves first setting up a new HGS cluster per the guidance in the
deployment guide. It is highly recommended, but not required, to use the same cluster name when setting up the
recovery HGS environment to assist with name resolution from hosts. Using the same name avoids having to
reconfigure hosts with new attestation and key protection URLs. If you restored objects to the Active Directory
domain backing HGS, it is recommended that you remove the objects representing the HGS cluster, computers,
service account and JEA groups before initializing the HGS server.
Once you have set up your first HGS node (e.g. it has been installed and initialized), you will follow the procedures
under Restoring HGS from a backup to restore the attestation policies and public halves of the key protection
certificates. You will need to restore the private keys for your certificates manually according to the guidance of
your certificate provider (e.g. import the certificate in Windows, or configure access to HSM-backed certificates).
After the first node is set up, you can continue to install additional nodes to the cluster until you have reached the
capacity and resiliency you desire.
Backing up HGS
The HGS administrator should be responsible for backing up HGS on a regular basis. A complete backup will
contain sensitive key material that must be appropriately secured. Should an untrusted entity gain access to these
keys, they could use that material to set up a malicious HGS environment for the purpose of compromising
shielded VMs.
Backing up the attestation policies
To back up the HGS attestation policies, run the following command on any
working HGS server node. You will be prompted to provide a password. This password is used to encrypt any
certificates added to HGS using a PFX file (instead of a certificate thumbprint).
Export-HgsServerState -Path C:\temp\HGSBackup.xml
NOTE
If you are using admin-trusted attestation, you must separately back up membership in the security groups used by HGS to
authorize guarded hosts. HGS will only back up the SID of the security groups, not the membership within them. In the event
these groups are lost during a disaster, you will need to recreate the group(s) and add each guarded host to them again.
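A simple way to keep a record of that membership alongside your HGS backup is sketched below; the group name and output path are assumptions for your environment:
# Record the members of the guarded host security group so it can be recreated after a disaster
Get-ADGroupMember -Identity 'GuardedHosts' | Select-Object Name, SamAccountName, SID | Export-Csv -Path 'C:\temp\GuardedHostsMembers.csv' -NoTypeInformation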
Backing up certificates
The Export-HgsServerState command will back up any PFX-based certificates added to HGS at the time the
command is run. If you added certificates to HGS using a thumbprint (typical for non-exportable and hardware-backed certificates), you will need to manually back up the private keys for your certificates. To identify which
certificates are registered with HGS and need to be backed up manually, run the following PowerShell command
on any working HGS server node.
Get-HgsKeyProtectionCertificate | Where-Object { $_.CertificateData.GetType().Name -eq 'CertificateReference'
} | Format-Table Thumbprint, @{ Label = 'Subject'; Expression = { $_.CertificateData.Certificate.Subject } }
For each of the certificates listed, you will need to manually back up the private key. If you are using a software-based certificate that is non-exportable, you should contact your certificate authority to ensure they have a backup
of your certificate and/or can reissue it on demand. For certificates created and stored in hardware security
modules, you should consult the documentation for your device for guidance on disaster recovery planning.
You should store the certificate backups alongside your attestation policy backups in a secure location so that both
pieces can be restored together.
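If a certificate's private key happens to be exportable, one possible way to capture a backup is shown below; the thumbprint, path, and password are placeholders, and HSM-backed or non-exportable keys must instead follow the vendor or CA guidance described above.
# Export an exportable certificate (public and private key) to a password-protected PFX file
$pfxPassword = Read-Host -AsSecureString -Prompt 'Enter a password for the PFX backup'
Export-PfxCertificate -Cert 'Cert:\LocalMachine\My\AABBCCDDEEFF00112233445566778899' -FilePath 'C:\temp\HgsEncryptionCert.pfx' -Password $pfxPassword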
Additional configuration to back up
The backed up HGS server state will not include the name of your HGS cluster, any information from Active
Directory, or any SSL certificates used to secure communications with the HGS APIs. These settings are important
for consistency but not critical to get your HGS cluster back online after a disaster.
To capture the name of the HGS service, run Get-HgsServer and note the flat name in the Attestation and Key
Protection URLs. For example, if the Attestation URL is "http://hgs.contoso.com/Attestation", "hgs" is the HGS
service name.
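A possible way to record that name programmatically, assuming Get-HgsServer exposes the attestation URL as an AttestationUrl value:
# Capture the flat HGS service name from the attestation URL (the property name is an assumption)
$attestationUrl = [string](Get-HgsServer).AttestationUrl
$hgsServiceName = ([Uri]$attestationUrl).Host.Split('.')[0]
$hgsServiceName | Out-File 'C:\temp\HgsServiceName.txt'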
The Active Directory domain used by HGS should be managed like any other Active Directory domain. When
restoring HGS after a disaster, you will not necessarily need to recreate the exact objects that are present in the
current domain. However, it will make recovery easier if you back up Active Directory and keep a list of the JEA
users authorized to manage the system as well as the membership of any security groups used by admin-trusted
attestation to authorize guarded hosts.
To identify the thumbprint of the SSL certificates configured for HGS, run the following command in PowerShell.
You can then back up those SSL certificates according to your certificate provider's instructions.
Get-WebBinding -Protocol https | Select-Object certificateHash
Restoring HGS from a backup
The following steps describe how to restore HGS settings from a backup. The steps are relevant to both situations
where you are trying to undo changes made to your already-running HGS instances and when you are standing up
a brand new HGS cluster after a complete loss of your previous one.
Set up a replacement HGS cluster
Before you can restore HGS, you need to have an initialized HGS cluster to which you can restore the configuration.
If you are simply importing settings that were accidentally deleted back into an existing (running) cluster, you can skip this
step. If you are recovering from a complete loss of HGS, you will need to install and initialize at least one HGS node
following the guidance in the deployment guide.
Specifically, you will need to:
1. Set up the HGS domain or join HGS to an existing domain
2. Initialize the HGS server using your existing keys or a set of temporary keys. You can remove the temporary
keys after importing your actual keys from the HGS backup files.
3. Import HGS settings from your backup to restore the trusted host groups, code integrity policies, TPM baselines,
and TPM identifiers
TIP
The new HGS cluster does not need to use the same certificates, service name, or domain as the HGS instance from which
your backup file was exported.
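A rough sketch of steps 1 and 2 above, assuming a brand new HGS domain named bastion.local, TPM mode, and temporary self-signed certificates that you will replace after importing your real keys:
# Step 1: install the role and create a new HGS domain (domain name and passwords are placeholders)
Install-WindowsFeature HostGuardianServiceRole -IncludeManagementTools -Restart
$safeModePassword = Read-Host -AsSecureString -Prompt 'Directory Services Restore Mode password'
Install-HgsServer -HgsDomainName 'bastion.local' -SafeModeAdministratorPassword $safeModePassword -Restart
# (the server restarts after the command above; continue with step 2 once it is back online)
# Step 2: initialize HGS with temporary self-signed certificates exported to PFX files
$certPassword = Read-Host -AsSecureString -Prompt 'Temporary certificate password'
$signCert = New-SelfSignedCertificate -DnsName 'signing.hgs.bastion.local'
Export-PfxCertificate -Cert $signCert -FilePath 'C:\temp\signing.pfx' -Password $certPassword
$encCert = New-SelfSignedCertificate -DnsName 'encryption.hgs.bastion.local'
Export-PfxCertificate -Cert $encCert -FilePath 'C:\temp\encryption.pfx' -Password $certPassword
Initialize-HgsServer -HgsServiceName 'hgs' -SigningCertificatePath 'C:\temp\signing.pfx' -SigningCertificatePassword $certPassword -EncryptionCertificatePath 'C:\temp\encryption.pfx' -EncryptionCertificatePassword $certPassword -TrustTpm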
Import settings from a backup
To restore attestation policies, PFX-based certificates, and the public keys of non-PFX certificates to your HGS node
from a backup file, run the following command on an initialized HGS server node. You will be prompted to enter
the password you specified when creating the backup.
Import-HgsServerState -Path C:\Temp\HGSBackup.xml
If you only want to import admin-trusted attestation policies or TPM-trusted attestation policies, you can do so by
specifying the -ImportActiveDirectoryModeState or -ImportTpmModeState flags to Import-HgsServerState.
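For example, on a TPM-mode cluster you might restore only the TPM-related artifacts:
# Restore only the TPM-trusted attestation policies from the backup
Import-HgsServerState -Path C:\Temp\HGSBackup.xml -ImportTpmModeState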
Ensure the latest cumulative update for Windows Server 2016 is installed before running Import-HgsServerState.
Failure to do so may result in an import error.
NOTE
If you restore policies on an existing HGS node that already has one or more of those policies installed, the import command
will show an error for each duplicate policy. This is an expected behavior and can be safely ignored in most cases.
Reinstall private keys for certificates
If any of the certificates used on the HGS from which the backup was created were added using thumbprints, only
the public key of those certificates will be included in the backup file. This means that you will need to manually
install and/or grant access to the private keys for each of those certificates before HGS can service requests from
Hyper-V hosts. The actions necessary to complete that step vary depending on how your certificate was originally
issued. For software-backed certificates issued by a certificate authority, you will need to contact your CA to get the
private key and install it on each HGS node per their instructions. Similarly, if your certificates are hardware-backed, you will need to consult your hardware security module vendor's documentation to install the necessary
driver(s) on each HGS node to connect to the HSM and grant each machine access to the private key.
As a reminder, certificates added to HGS using thumbprints require manual replication of the private keys to each
node. You will need to repeat this step on each additional node you add to the restored HGS cluster.
Review imported attestation policies
After you've imported your settings from a backup, it is recommended to closely review all the imported policies
using Get-HgsAttestationPolicy to make sure only the hosts you trust to run shielded VMs will be able to
successfully attest. If you find any policies which no longer match your security posture, you can disable or remove
them.
Run diagnostics to check system state
After you have finished setting up and restoring the state of your HGS node, you should run the HGS diagnostics
tool to check the state of the system. To do this, run the following command on the HGS node where you restored
the configuration:
Get-HgsTrace -RunDiagnostics
If the "Overall Result" is not "Pass", additional steps are required to finish configuring the system. Check the
messages reported in the subtest(s) that failed for more information.
Patching HGS
It is important to keep your Host Guardian Service nodes up to date by installing the latest cumulative update when
it comes out. If you are setting up a brand new HGS node, it is highly recommended that you install any available
updates before installing the HGS role or configuring it. This will ensure any new or changed functionality will take
effect immediately.
When patching your guarded fabric, it is strongly recommended that you first upgrade all Hyper-V hosts before
upgrading HGS. This is to ensure that any changes to the attestation policies on HGS are made after the Hyper-V
hosts have been updated to provide the information needed for them. If an update changes the behavior of
policies, the changed policies will not be enabled automatically, to avoid disrupting your fabric. Such updates
require that you follow the guidance in the next section to activate the new or changed attestation policies. We
encourage you to read the release notes for Windows Server and any cumulative updates you install to check
whether such policy updates are required.
Updates requiring policy activation
If an update for HGS introduces or significantly changes the behavior of an attestation policy, an additional step is
required to activate the changed policy. Policy changes are only enacted after exporting and importing the HGS
state. You should only activate the new or changed policies after you have applied the cumulative update to all
hosts and all HGS nodes in your environment. Once every machine has been updated, run the following
commands on any HGS node to trigger the upgrade process:
$password = Read-Host -AsSecureString -Prompt "Enter a temporary password"
Export-HgsServerState -Path .\temporaryExport.xml -Password $password
Import-HgsServerState -Path .\temporaryExport.xml -Password $password
If a new policy was introduced, it will be disabled by default. To enable the new policy, first find it in the list of
Microsoft policies (prefixed with 'HGS_') and then enable it using the following commands:
Get-HgsAttestationPolicy
Enable-HgsAttestationPolicy -Name <Hgs_NewPolicyName>
Managing attestation policies
HGS maintains several attestation policies which define the minimum set of requirements a host must meet in
order to be deemed "healthy" and allowed to run shielded VMs. Some of these policies are defined by Microsoft,
others are added by you to define the allowable code integrity policies, TPM baselines, and hosts in your
environment. Regular maintenance of these policies is necessary to ensure hosts can continue attesting properly as
you update and replace them, and to ensure any untrusted hosts or configurations are blocked from successfully
attesting.
For admin-trusted attestation, there is only one policy which determines if a host is healthy: membership in a
known, trusted security group. TPM attestation is more complicated, and involves various policies to measure the
code and configuration of a system before determining if it is healthy.
A single HGS can be configured with both Active Directory and TPM policies at once, but when a host attempts to
attest, the service will only check the policies for the mode it is currently configured to use. To check the mode of
your HGS server, run Get-HgsServer.
Default policies
For TPM-trusted attestation, there are several built-in policies configured on HGS. Some of these policies are
"locked" -- meaning that they cannot be disabled for security reasons. The table below explains the purpose of each
default policy.
Hgs_SecureBootEnabled: Requires hosts to have Secure Boot enabled. This is necessary to measure the startup
binaries and other UEFI-locked settings.
Hgs_UefiDebugDisabled: Ensures hosts do not have a kernel debugger enabled. User-mode debuggers are blocked
with code integrity policies.
Hgs_SecureBootSettings: Negative policy to ensure hosts match at least one (admin-defined) TPM baseline.
Hgs_CiPolicy: Negative policy to ensure hosts are using one of the admin-defined CI policies.
Hgs_HypervisorEnforcedCiPolicy: Requires the code integrity policy to be enforced by the hypervisor. Disabling this
policy weakens your protections against kernel-mode code integrity policy attacks.
Hgs_FullBoot: Ensures the host did not resume from sleep or hibernation. Hosts must be properly restarted or shut
down to pass this policy.
Hgs_VsmIdkPresent: Requires virtualization based security to be running on the host. The IDK represents the key
necessary to encrypt information sent back to the host's secure memory space.
Hgs_PageFileEncryptionEnabled: Requires pagefiles to be encrypted on the host. Disabling this policy could result in
information exposure if an unencrypted pagefile is inspected for tenant secrets.
Hgs_BitLockerEnabled: Requires BitLocker to be enabled on the Hyper-V host. This policy is disabled by default for
performance reasons and is not recommended to be enabled. This policy has no bearing on the encryption of the
shielded VMs themselves.
Hgs_IommuEnabled: Requires that the host have an IOMMU device in use to prevent direct memory access attacks.
Disabling this policy and using hosts without an IOMMU enabled can expose tenant VM secrets to direct memory
attacks.
Hgs_NoHibernation: Requires hibernation to be disabled on the Hyper-V host. Disabling this policy could allow
hosts to save shielded VM memory to an unencrypted hibernation file.
Hgs_NoDumps: Requires memory dumps to be disabled on the Hyper-V host. If you disable this policy, it is
recommended that you configure dump encryption to prevent shielded VM memory from being saved to
unencrypted crash dump files.
Hgs_DumpEncryption: Requires memory dumps, if enabled on the Hyper-V host, to be encrypted with an encryption
key trusted by HGS. This policy does not apply if dumps are not enabled on the host. If this policy and Hgs_NoDumps
are both disabled, shielded VM memory could be saved to an unencrypted dump file.
Hgs_DumpEncryptionKey: Negative policy to ensure hosts configured to allow memory dumps are using an
admin-defined dump file encryption key known to HGS. This policy does not apply when Hgs_DumpEncryption is
disabled.
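A quick, hedged way to review these built-in policies and their current state on an HGS node, assuming the default policies keep the 'Hgs_' name prefix described above:
# List the Microsoft-defined policies; the default output indicates whether each policy is enabled
Get-HgsAttestationPolicy | Where-Object Name -like 'Hgs_*'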
Authorizing new guarded hosts
To authorize a new host to become a guarded host (that is, to attest successfully), HGS must trust the host and (when
configured to use TPM-trusted attestation) the software running on it. The steps to authorize a new host differ
based on the attestation mode for which HGS is currently configured. To check the attestation mode for your
guarded fabric, run Get-HgsServer on any HGS node.
Software configuration
On the new Hyper-V host, ensure that Windows Server 2016 Datacenter edition is installed. Windows Server 2016
Standard cannot run shielded VMs in a guarded fabric. The host may be installed with any of the installation
options (Server with Desktop Experience, Server Core, or Nano Server).
On Server with Desktop Experience and Server Core, you need to install the Hyper-V and Host Guardian Hyper-V
Support server roles:
Install-WindowsFeature Hyper-V, HostGuardian -IncludeManagementTools -Restart
For Nano Server, you should prepare the image with the Compute, Microsoft-NanoServer-SecureStartup-Package, and Microsoft-NanoServer-ShieldedVM-Package packages.
Admin-trusted attestation
To register a new host in HGS when using admin-trusted attestation, you must first add the host to a security
group in the domain to which it's joined. Typically, each domain will have one security group for guarded hosts. If
you have already registered that group with HGS, the only action you need to take is to restart the host to refresh
its group membership.
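As a hedged example, assuming the ActiveDirectory module is available and using placeholder group and host names, the host can be added to the already-registered group and then restarted:
# Add the new host's computer account to the trusted group, then restart it to refresh its Kerberos group membership
Add-ADGroupMember -Identity 'Contoso Guarded Hosts' -Members 'Host01$'
Restart-Computer -ComputerName 'Host01'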
You can check which security groups are trusted by HGS by running the following command:
Get-HgsAttestationHostGroup
To register a new security group with HGS, first capture the security identifier (SID) of the group in the host's
domain and register the SID with HGS.
Add-HgsAttestationHostGroup -Name "Contoso Guarded Hosts" -Identifier "S-1-5-21-3623811015-3361044348-30300820-1013"
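One possible way to look up the SID used in the command above, assuming the ActiveDirectory module is available on a machine joined to the host (fabric) domain:
# Capture the security identifier of the guarded host group in the fabric domain
(Get-ADGroup -Identity 'Contoso Guarded Hosts').SID.Value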
Instructions on how to set up the trust between the host domain and HGS are available in the deployment guide.
TPM-trusted attestation
When HGS is configured in TPM mode, hosts must pass all locked policies and "enabled" policies prefixed with
"Hgs_", as well as at least one TPM baseline, TPM identifier, and code integrity policy. Each time you add a new host,
you will need to register the new TPM identifier with HGS. As long as the host is running the same software (and
has the same code integrity policy applied) and TPM baseline as another host in your environment, you will not
need to add new CI policies or baselines.
Adding the TPM identifier for a new host
On the new host, run the following command to capture the TPM identifier. Be sure to specify a unique name for
the host that will help you look it up on HGS. You will need this information if you decommission the host or want
to prevent it from running shielded VMs in HGS.
(Get-PlatformIdentifier -Name "Host01").InnerXml | Out-File C:\temp\host01.xml -Encoding UTF8
Copy this file to your HGS server, then run the following command to register the host with HGS.
Add-HgsAttestationTpmHost -Name 'Host01' -Path C:\temp\host01.xml
Adding a new TPM baseline
If the new host is running a new hardware or firmware configuration for your environment, you may need to take
a new TPM baseline. To do this, run the following command on the host.
Get-HgsAttestationBaselinePolicy -Path 'C:\temp\hardwareConfig01.tcglog'
NOTE
If you receive an error saying your host failed validation and will not successfully attest, do not worry. This is a prerequisite
check to make sure your host can run shielded VMs, and likely means that you have not yet applied a code integrity policy or
other required setting. Read the error message, make any changes suggested by it, then try again. Alternatively, you can skip
the validation at this time by adding the -SkipValidation flag to the command.
Copy the TPM baseline to your HGS server, then register it with the following command. We encourage you to use
a naming convention that helps you understand the hardware and firmware configuration of this class of Hyper-V
host.
Add-HgsAttestationTpmPolicy -Name 'HardwareConfig01' -Path 'C:\temp\hardwareConfig01.tcglog'
Adding a new code integrity policy
If you have changed the code integrity policy running on your Hyper-V hosts, you will need to register the new
policy with HGS before those hosts can successfully attest. On a reference
host, which serves as a master image for the trusted Hyper-V machines in your environment, capture a new CI
policy using the New-CIPolicy command. We encourage you to use the FilePublisher level and Hash fallback for
Hyper-V host CI policies. You should first create a CI policy in audit mode to ensure that everything is working as
expected. After validating a sample workload on the system, you can enforce the policy and copy the enforced
version to HGS. For a complete list of code integrity policy configuration options, consult the Device Guard
documentation.
# Capture a new CI policy with the FilePublisher primary level and Hash fallback, and enable user mode code integrity protections
New-CIPolicy -FilePath 'C:\temp\ws2016-hardware01-ci.xml' -Level FilePublisher -Fallback Hash -UserPEs
# Apply the CI policy to the system (New-CIPolicy creates the policy in audit mode by default)
ConvertFrom-CIPolicy -XmlFilePath 'C:\temp\ws2016-hardware01-ci.xml' -BinaryFilePath 'C:\temp\ws2016-hardware01-ci.p7b'
Copy-Item 'C:\temp\ws2016-hardware01-ci.p7b' 'C:\Windows\System32\CodeIntegrity\SIPolicy.p7b'
Restart-Computer
# Check the event log for any untrusted binaries and update the policy if necessary
# Consult the Device Guard documentation for more details
# Change the policy to enforced mode by deleting the audit mode rule option
Set-RuleOption -FilePath 'C:\temp\ws2016-hardware01-ci.xml' -Option 3 -Delete
# Apply the enforced CI policy on the system
ConvertFrom-CIPolicy -XmlFilePath 'C:\temp\ws2016-hardware01-ci.xml' -BinaryFilePath 'C:\temp\ws2016-hardware01-ci.p7b'
Copy-Item 'C:\temp\ws2016-hardware01-ci.p7b' 'C:\Windows\System32\CodeIntegrity\SIPolicy.p7b'
Restart-Computer
Once you have your policy created, tested and enforced, copy the binary file (.p7b) to your HGS server and register
the policy.
Add-HgsAttestationCiPolicy -Name 'WS2016-Hardware01' -Path 'C:\temp\ws2016-hardware01-ci.p7b'
Adding a memory dump encryption key
When the Hgs_NoDumps policy is disabled and Hgs_DumpEncryption policy is enabled, guarded hosts are allowed
to have memory dumps (including crash dumps) enabled, as long as those dumps are encrypted. Guarded
hosts will only pass attestation if they either have memory dumps disabled or are encrypting them with a key
known to HGS. By default, no dump encryption keys are configured on HGS.
To add a dump encryption key to HGS, use the Add-HgsAttestationDumpPolicy cmdlet to provide HGS with the hash
of your dump encryption key. If you capture a TPM baseline on a Hyper-V host configured with dump encryption,
the hash is included in the tcglog and can be provided to the Add-HgsAttestationDumpPolicy cmdlet.
Add-HgsAttestationDumpPolicy -Name 'DumpEncryptionKey01' -Path 'C:\temp\TpmBaselineWithDumpEncryptionKey.tcglog'
Alternatively, you can directly provide the string representation of the hash to the cmdlet.
Add-HgsAttestationDumpPolicy -Name 'DumpEncryptionKey02' -PublicKeyHash '<paste your hash here>'
Be sure to add each unique dump encryption key to HGS if you choose to use different keys across your guarded
fabric. Hosts that are encrypting memory dumps with a key not known to HGS will not pass attestation.
Consult the Hyper-V documentation for more information about configuring dump encryption on hosts.
Check if the system passed attestation
After registering the necessary information with HGS, you should check if the host passes attestation. On the
newly-added Hyper-V host, run Set-HgsClientConfiguration and supply the correct URLs for your HGS cluster.
These URLs can be obtained by running Get-HgsServer on any HGS node.
Set-HgsClientConfiguration -KeyProtectionServerUrl 'http://hgs.relecloud.com/KeyProtection' -AttestationServerUrl 'http://hgs.relecloud.com/Attestation'
If the resulting status does not indicate "IsHostGuarded : True" you will need to troubleshoot the configuration. On
the host that failed attestation, run the following command to get a detailed report about issues that may help you
resolve the failed attestation.
Get-HgsTrace -RunDiagnostics -Detailed
Review attestation policies
To review the current state of the policies configured on HGS, run the following commands on any HGS node:
# List all trusted security groups for admin-trusted attestation
Get-HgsAttestationHostGroup
# List all policies configured for TPM-trusted attestation
Get-HgsAttestationPolicy
If you find a policy enabled that no longer meets your security requirement (e.g. an old code integrity policy which
is now deemed unsafe), you can disable it by replacing the name of the policy in the following command:
Disable-HgsAttestationPolicy -Name 'PolicyName'
Similarly, you can use Enable-HgsAttestationPolicy to re-enable a policy.
If you no longer need a policy and wish to remove it from all HGS nodes, run
Remove-HgsAttestationPolicy -Name 'PolicyName' to permanently delete the policy.
Changing attestation modes
If you started your guarded fabric using admin-trusted attestation, you will likely want to upgrade to the much stronger TPM attestation mode as soon as you have enough TPM 2.0-compatible hosts in your environment. When
you're ready to switch, you can pre-load all of the attestation artifacts (CI policies, TPM baselines and TPM
identifiers) in HGS while continuing to run HGS with admin-trusted attestation. To do this, simply follow the
instructions in the Authorizing new guarded hosts section.
Once you've added all of your policies to HGS, the next step is to run a synthetic attestation attempt on your hosts
to see if they would pass attestation in TPM mode. This does not affect the current operational state of HGS. The
commands below must be run on a machine that has access to all of the hosts in the environment and at least one
HGS node. If your firewall or other security policies prevent this, you can skip this step. When possible, we
recommend running the synthetic attestation to give you a good indication of whether "flipping" to TPM mode will
cause downtime for your VMs.
# Get information for each host in your environment
$hostNames = 'host01.contoso.com', 'host02.contoso.com', 'host03.contoso.com'
$credential = Get-Credential -Message 'Enter a credential with admin privileges on each host'
$targets = @()
$hostNames | ForEach-Object { $targets += New-HgsTraceTarget -Credential $credential -Role GuardedHost -HostName $_ }
$hgsCredential = Get-Credential -Message 'Enter an admin credential for HGS'
$targets += New-HgsTraceTarget -Credential $hgsCredential -Role HostGuardianService -HostName 'HGS01.relecloud.com'
# Initiate the synthetic attestation attempt
Get-HgsTrace -RunDiagnostics -Target $targets -Diagnostic GuardedFabricTpmMode
After the diagnostics complete, review the output to determine whether any hosts would have failed
attestation in TPM mode. Re-run the diagnostics until you get a "pass" from each host, then proceed to change HGS
to TPM mode.
Changing to TPM mode takes just a second to complete. Run the following command on any HGS node to
update the attestation mode.
Set-HgsServer -TrustTpm
If you run into problems and need to switch back to Active Directory mode, you can do so by running
Set-HgsServer -TrustActiveDirectory.
Once you have confirmed everything is working as expected, you should remove all trusted Active Directory host
groups from HGS and remove the trust between the HGS and fabric domains. If you leave the Active Directory trust
in place, you risk someone re-enabling the trust and switching HGS to Active Directory mode, which could allow
untrusted code to run unchecked on your guarded hosts.
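A hedged sketch of that cleanup, assuming the Remove-HgsAttestationHostGroup cmdlet is available on your HGS nodes and that the trusted groups are no longer needed by any other workflow:
# Remove every security group previously trusted for admin-trusted attestation
Get-HgsAttestationHostGroup | ForEach-Object { Remove-HgsAttestationHostGroup -Name $_.Name }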
Key management
The guarded fabric solution uses several public/private key pairs to validate the integrity of various components in
the solution and encrypt tenant secrets. The Host Guardian Service is configured with at least two certificates (with
public and private keys), which are used for signing and encrypting the keys used to start up shielded VMs. Those
keys must be carefully managed. If the private key is acquired by an adversary, they will be able to unshield any
VMs running on your fabric or set up an imposter HGS cluster that uses weaker attestation policies to bypass the
protections you put in place. Should you lose the private keys during a disaster and not find them in a backup, you
will need to set up a new pair of keys and have each VM re-keyed to authorize your new certificates.
This section covers general key management topics to help you configure your keys so they are functional and
secure.
Adding new keys
While HGS must be initialized with one set of keys, you can add more than one encryption and signing key to HGS.
The two most common reasons why you would add new keys to HGS are:
1. To support "bring your own key" scenarios where tenants copy their private keys to your hardware security
module and only authorize their keys to start up their shielded VMs.
2. To replace the existing keys for HGS by first adding the new keys and keeping both sets of keys until each VM
configuration has been updated to use the new keys.
The process to add your new keys differs based on the type of certificate you are using.
Option 1: Adding a certificate stored in an HSM
Our recommended approach for securing HGS keys is to use certificates created in a hardware security module
(HSM). HSMs ensure use of your keys is tied to physical access to a security-sensitive device in your datacenter.
Each HSM is different and has a unique process to create certificates and register them with HGS. The steps below
are intended to provide rough guidance for using HSM-backed certificates. Consult your HSM vendor's
documentation for exact steps and capabilities.
1. Install the HSM software on each HGS node in your cluster. Depending on whether you have a network or local
HSM device, you may need to configure the HSM to grant your machine access to its key store.
2. Create 2 certificates in the HSM with 2048 bit RSA keys for encryption and signing
a. Create an encryption certificate with the Data Encipherment key usage property in your HSM
b. Create a signing certificate with the Digital Signature key usage property in your HSM
3. Install the certificates in each HGS node's local certificate store per your HSM vendor's guidance.
4. If your HSM uses granular permissions to grant specific applications or users permission to use the private key,
you will need to grant your HGS group managed service account access to the certificate. You can find the name
of the HGS gMSA account by running (Get-IISAppPool -Name KeyProtection).ProcessModel.UserName
5. Add the signing and encryption certificates to HGS by replacing the thumbprints in the following commands
with those of your own certificates:
Add-HgsKeyProtectionCertificate -CertificateType Encryption -Thumbprint "AABBCCDDEEFF00112233445566778899"
Add-HgsKeyProtectionCertificate -CertificateType Signing -Thumbprint "99887766554433221100FFEEDDCCBBAA"
Option 2: Adding non-exportable software certificates
If you have a software-backed certificate with a non-exportable private key, issued by your company's certificate authority or a public certificate authority, you will need to add your certificate to HGS using its thumbprint.
1. Install the certificate on your machine according to your certificate authority's instructions.
2. Grant the HGS group managed service account read-access to the private key of the certificate. You can find the
name of the HGS gMSA account by running (Get-IISAppPool -Name KeyProtection).ProcessModel.UserName
3. Register the certificate with HGS using the following command and substituting in your certificate's
thumbprint (change Encryption to Signing for signing certificates):
Add-HgsKeyProtectionCertificate -CertificateType Encryption -Thumbprint "AABBCCDDEEFF00112233445566778899"
IMPORTANT
You will need to manually install the private key and grant read access to the gMSA account on each HGS node. HGS cannot
automatically replicate private keys for any certificate registered by its thumbprint.
Option 3: Adding certificates stored in PFX files
If you have a software-backed certificate with an exportable private key that can be stored in the PFX file format
and secured with a password, HGS can automatically manage your certificates for you. Certificates added with PFX
files are automatically replicated to every node of your HGS cluster and HGS secures access to the private keys. To
add a new certificate using a PFX file, run the following commands on any HGS node (change Encryption to Signing
for signing certificates):
$certPassword = Read-Host -AsSecureString -Prompt "Provide the PFX file password"
Add-HgsKeyProtectionCertificate -CertificateType Encryption -CertificatePath "C:\temp\encryptionCert.pfx" -CertificatePassword $certPassword
Identifying and changing the primary certificates
While HGS can support multiple signing and encryption certificates, it uses one pair as its "primary"
certificates. These are the certificates that will be used if someone
downloads the guardian metadata for that HGS cluster. To check which certificates are currently marked as your
primary certificates, run the following command:
Get-HgsKeyProtectionCertificate -IsPrimary $true
To set a new primary encryption or signing certificate, find the thumbprint of the desired certificate and mark it as
primary using the following commands:
Get-HgsKeyProtectionCertificate
Set-HgsKeyProtectionCertificate -CertificateType Encryption -Thumbprint "AABBCCDDEEFF00112233445566778899" -IsPrimary
Set-HgsKeyProtectionCertificate -CertificateType Signing -Thumbprint "99887766554433221100FFEEDDCCBBAA" -IsPrimary
Renewing or replacing keys
When you create the certificates used by HGS, the certificates will be assigned an expiration date according to your
certificate authority's policy and your request information. Normally, in scenarios where the validity of the
certificate is important such as securing HTTP communications, certificates must be renewed before they expire to
avoid a service disruption or worrisome error message. HGS does not use certificates in that sense. HGS is simply
using certificates as a convenient way to create and store an asymmetric key pair. An expired encryption or signing
certificate on HGS does not indicate a weakness or loss of protection for shielded VMs. Further, certificate
revocation checks are not performed by HGS. If an HGS certificate or issuing authority's certificate is revoked, it will
not impact HGS' use of the certificate.
The only time you need to worry about an HGS certificate is if you have reason to believe that its private key has
been stolen. In that case, the integrity of your shielded VMs is at risk because possession of the private half of the
HGS encryption and signing key pair is enough to remove the shielding protections on a VM or stand up a fake
HGS server that has weaker attestation policies.
If you find yourself in that situation, or are required by compliance standards to refresh certificate keys regularly,
the following steps outline the process to change the keys on an HGS server. Please note that the following
guidance represents a significant undertaking that will result in a disruption of service to each VM served by the
HGS cluster. Proper planning for changing HGS keys is required to minimize service disruption and ensure the
security of tenant VMs.
On an HGS node, perform the following steps to register a new pair of encryption and signing certificates. See the
section on adding new keys for detailed information on the various ways to add new keys to HGS.
1. Create a new pair of encryption and signing certificates for your HGS server. Ideally, these will be created in a
hardware security module.
2. Register the new encryption and signing certificates with Add-HgsKeyProtectionCertificate
Add-HgsKeyProtectionCertificate -CertificateType Signing -Thumbprint <Thumbprint>
Add-HgsKeyProtectionCertificate -CertificateType Encryption -Thumbprint <Thumbprint>
3. If you used thumbprints, you'll need to go to each node in the cluster to install the private key and grant the
HGS gMSA access to the key.
4. Make the new certificates the default certificates in HGS
Set-HgsKeyProtectionCertificate -CertificateType Signing -Thumbprint <Thumbprint> -IsPrimary
Set-HgsKeyProtectionCertificate -CertificateType Encryption -Thumbprint <Thumbprint> -IsPrimary
At this point, shielding data created with metadata obtained from the HGS node will use the new certificates, but
existing VMs will continue to work because the older certificates are still there. In order to ensure all existing VMs
will work with the new keys, you will need to update the key protector on each VM. This is an action that requires
the VM owner (person or entity in possession of the "owner" guardian) to be involved. For each shielded VM,
perform the following steps:
1. Shut down the VM. The VM cannot be turned back on until the remaining steps are complete or else you will
need to start the process over again.
2. Save the current key protector to a file: Get-VMKeyProtector -VMName 'VM001' | Out-File '.\VM001.kp'
3. Transfer the KP to the VM owner
4. Have the owner download the updated guardian info from HGS and import it on their local system
5. Read the current KP into memory, grant the new guardian access to the KP, and save it to a new file by
running the following commands:
$kpraw = Get-Content -Path .\VM001.kp
$kp = ConvertTo-HgsKeyProtector -Bytes $kpraw
$newGuardian = Get-HgsGuardian -Name 'UpdatedHgsGuardian'
$updatedKP = Grant-HgsKeyProtectorAccess -KeyProtector $kp -Guardian $newGuardian
$updatedKP.RawData | Out-File .\updatedVM001.kp
6. Copy the updated KP back to the hosting fabric
7. Apply the KP to the original VM:
$updatedKP = Get-Content -Path .\updatedVM001.kp
Set-VMKeyProtector -VMName VM001 -KeyProtector $updatedKP
8. Finally, start the VM and ensure it runs successfully.
NOTE
If the VM owner sets an incorrect key protector on the VM and does not authorize your fabric to run the VM, you will be
unable to start up the shielded VM. To return to the last known good key protector, run
Set-VMKeyProtector -VMName <VMName> -RestoreLastKnownGoodKeyProtector
Once all VMs have been updated to authorize the new guardian keys, you can disable and remove the old keys.
1. Get the thumbprints of the old certificates from Get-HgsKeyProtectionCertificate -IsPrimary $false
2. Disable each certificate by replacing the certificate type and thumbprint in the following command:
Set-HgsKeyProtectionCertificate -CertificateType Encryption -Thumbprint <Thumbprint> -IsEnabled $false
3. After ensuring VMs are still able to start with the certificates disabled, remove the certificates from HGS with
Remove-HgsKeyProtectionCertificate -CertificateType Encryption -Thumbprint <Thumbprint>
IMPORTANT
VM backups will contain old key protector information that allow the old certificates to be used to start up the VM. If you are
aware that your private key has been compromised, you should assume that the VM backups can be compromised, too, and
take appropriate action. Destroying the VM configuration from the backups (.vmcx) will remove the key protectors, at the
cost of needing to use the BitLocker recovery password to boot the VM the next time.
Key replication between nodes
Every node in the HGS cluster must be configured with the same encryption, signing, and (when configured) SSL
certificates. This is necessary to ensure Hyper-V hosts reaching out to any node in the cluster can have their
requests serviced successfully.
If you initialized HGS server with PFX-based certificates then HGS will automatically replicate both the public
and private key of those certificates across every node in the cluster. You only need to add the keys on one node.
If you initialized HGS server with certificate references or thumbprints, then HGS will only replicate the public
key in the certificate to each node. Additionally, HGS cannot grant itself access to the private key on any node in
this scenario. Therefore, it is your responsibility to:
1. Install the private key on each HGS node
2. Grant the HGS group managed service account (gMSA) access to the private key on each node
These tasks add extra operational burden; however, they are required for HSM-backed keys and certificates with
non-exportable private keys.
SSL Certificates are never replicated in any form. It is your responsibility to initialize each HGS server with the
same SSL certificate and update each server whenever you choose to renew or replace the SSL certificate. When
replacing the SSL certificate, it is recommended that you do so using the Set-HgsServer cmdlet.
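A brief sketch of replacing the SSL certificate on a node with Set-HgsServer, assuming a password-protected PFX file; the path and password are placeholders, and the command must be repeated on every node:
# Update the SSL certificate used for the HGS web endpoints on this node
$sslPassword = Read-Host -AsSecureString -Prompt 'SSL certificate PFX password'
Set-HgsServer -Http -Https -HttpsCertificatePath 'C:\temp\HgsSsl.pfx' -HttpsCertificatePassword $sslPassword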
Unconfiguring HGS
If you need to decommission or significantly reconfigure an HGS server, you can do so using the Clear-HgsServer
or Uninstall-HgsServer cmdlets.
Clearing the HGS configuration
To remove a node from the HGS cluster, use the Clear-HgsServer cmdlet. This cmdlet will make the following
changes on the server where it is run:
Unregisters the attestation and key protection services
Removes the "microsoft.windows.hgs" JEA management endpoint
Removes the local computer from the HGS failover cluster
If the server is the last HGS node in the cluster, the cluster and its corresponding Distributed Network Name
resource will also be destroyed.
# Removes the local computer from the HGS cluster
Clear-HgsServer
After the clear operation completes, the HGS server can be re-initialized with Initialize-HgsServer. If you used
Install-HgsServer to set up an Active Directory Domain Services domain, that domain will remain configured and
operational after the clear operation.
Uninstalling HGS
If you wish to remove a node from the HGS cluster and demote the Active Directory Domain Controller running on
it, use the Uninstall-HgsServer cmdlet. This cmdlet will make the following changes on the server where it is run:
Unregisters the attestation and key protection services
Removes the "microsoft.windows.hgs" JEA management endpoint
Removes the local computer from the HGS failover cluster
Demotes the Active Directory Domain Controller, if configured
If the server is the last HGS node in the cluster, the domain, failover cluster, and the cluster's Distributed Network
Name resource will also be destroyed.
# Removes the local computer from the HGS cluster and demotes the ADDC (restart required)
$newLocalAdminPassword = Read-Host -AsSecureString -Prompt "Enter a new password for the local administrator
account"
Uninstall-HgsServer -LocalAdministratorPassword $newLocalAdminPassword -Restart
After the uninstall operation is complete and the computer has been restarted, you can reinstall ADDC and HGS
using Install-HgsServer or join the computer to a domain and initialize the HGS server in that domain with
Initialize-HgsServer.
If you no longer intend to use the computer as an HGS node, you can remove the role from Windows.
Uninstall-WindowsFeature HostGuardianServiceRole
Troubleshooting a Guarded Fabric
4/24/2017 • 1 min to read • Edit Online
The following topics cover how to troubleshoot a guarded fabric:
Troubleshooting Using the Guarded Fabric Diagnostic Tool
Troubleshooting the Host Guardian Service
Troubleshooting Guarded Hosts
See also
Deploying the Host Guardian Service for guarded hosts and shielded VMs
Managing a guarded fabric
Troubleshooting Using the Guarded Fabric Diagnostic
Tool
4/24/2017 • 13 min to read • Edit Online
Applies To: Windows Server 2016
This topic describes the use of the Guarded Fabric Diagnostic Tool to identify and remediate common failures in
the deployment, configuration, and on-going operation of guarded fabric infrastructure. This includes the Host
Guardian Service (HGS), all guarded hosts, and supporting services such as DNS and Active Directory. The
diagnostic tool can be used to perform a first-pass at triaging a failing guarded fabric, providing administrators
with a starting point for resolving outages and identifying misconfigured assets. The tool is not a replacement for a
sound grasp of operating a guarded fabric and only serves to rapidly verify the most common issues encountered
during day-to-day operations.
Documentation of the cmdlets used in this topic can be found on TechNet.
NOTE
When running the Guarded Fabric diagnostics tool (Get-HgsTrace -RunDiagnostics), incorrect status may be returned
claiming that the HTTPS configuration is broken when it is, in fact, not broken or not being used. This error can be returned
regardless of HGS’ attestation mode. The possible root-causes are as follows:
HTTPS is indeed improperly configured/broken
You’re using admin-trusted attestation and the trust relationship is broken
- This is irrespective of whether HTTPS is configured properly, improperly, or not in use at all.
Note that the diagnostics will only return this incorrect status when targeting a Hyper-V host. If the diagnostics are targeting
the Host Guardian Service, the status returned will be correct.
Quick Start
You can diagnose either a guarded host or an HGS node by calling the following from a Windows PowerShell
session with local administrator privileges:
Get-HgsTrace -RunDiagnostics -Detailed
This will automatically detect the role of the current host and diagnose any relevant issues that can be
automatically detected. All of the results generated during this process are displayed due to the presence of the
-Detailed switch.
The remainder of this topic will provide a detailed walkthrough on the advanced usage of Get-HgsTrace for doing
things like diagnosing multiple hosts at once and detecting complex cross-node misconfiguration.
Diagnostics Overview
Guarded fabric diagnostics are available on any host with shielded virtual machine related tools and features
installed, including hosts running Windows Server Core and Nano editions. Presently, diagnostics are included with
the following features/packages:
1. Host Guardian Service Role
2. Host Guardian Hyper-V Support
3. VM Shielding Tools for Fabric Management
4. Remote Server Administration Tools (RSAT)
5. Nano Server Compute
This means that diagnostic tools will be available on all guarded hosts, HGS nodes, certain fabric management
servers, and any Windows 10 workstations with RSAT installed. Diagnostics can be invoked from any of the above
machines with the intent of diagnosing any guarded host or HGS node in a guarded fabric; using remote trace
targets, diagnostics can locate and connect to hosts other than the machine running diagnostics.
Every host targeted by diagnostics is referred to as a "trace target." Trace targets are identified by their hostnames
and roles. Roles describe the function a given trace target performs in a guarded fabric. Presently, trace targets
support HostGuardianService and GuardedHost roles. Note that it is possible for a host to occupy multiple roles at
once and this is also supported by diagnostics; however, this should not be done in production environments. The HGS
and Hyper-V hosts should be kept separate and distinct at all times.
Administrators can begin any diagnostic tasks by running Get-HgsTrace . This command performs two distinct
functions based on the switches provided at runtime: trace collection and diagnosis. These two combined make up
the entirety of the Guarded Fabric Diagnostic Tool. Though not explicitly required, most useful diagnostics require
traces that can only be collected with administrator credentials on the trace target. If insufficient privileges are held
by the user executing trace collection, traces requiring elevation will fail while all others will pass. This allows partial
diagnosis in the event an under-privileged operator is performing triage.
Trace collection
By default, Get-HgsTrace will only collect traces and save them to a temporary folder. Traces take the form of a
folder, named after the targeted host, filled with specially formatted files that describe how the host is configured.
The traces also contain metadata that describe how the diagnostics were invoked to collect the traces. This data is
used by diagnostics to rehydrate information about the host when performing manual diagnosis.
If necessary, traces can be manually reviewed. All formats are either human-readable (XML) or may be readily
inspected using standard tools (e.g. X509 certificates and the Windows Crypto Shell Extensions). Note however that
traces are not designed for manual diagnosis and it is always more effective to process the traces with the
diagnosis facilities of Get-HgsTrace .
The results of running trace collection do not make any indication as to the health of a given host. They simply
indicate that traces were collected successfully. It is necessary to use the diagnosis facilities of Get-HgsTrace to
determine if the traces indicate a failing environment.
Using the -Diagnostic parameter, you can restrict trace collection to only those traces required to operate the
specified diagnostics. This reduces the amount of data collected as well as the permissions required to invoke
diagnostics.
Diagnosis
Collected traces can be diagnosed by providing Get-HgsTrace with the location of the traces via the -Path parameter
and specifying the -RunDiagnostics switch. Additionally, Get-HgsTrace can perform collection and diagnosis in a
single pass by providing the -RunDiagnostics switch and a list of trace targets. If no trace targets are provided, the
current machine is used as an implicit target, with its role inferred by inspecting the installed Windows PowerShell
modules.
Diagnosis will provide results in a hierarchical format showing which trace targets, diagnostic sets, and individual
diagnostics are responsible for a particular failure. Failures include remediation and resolution recommendations if
a determination can be made as to what action should be taken next. By default, passing and irrelevant results are
hidden. To see everything tested by diagnostics, specify the -Detailed switch. This will cause all results to appear
regardless of their status.
It is possible to restrict the set of diagnostics that are run using the -Diagnostic parameter. This allows you to
specify which classes of diagnostic should be run against the trace targets, suppressing all others. Examples of
available diagnostic classes include networking, best practices, and client hardware. Consult the cmdlet
documentation to find an up-to-date list of available diagnostics.
WARNING
Diagnostics are not a replacement for a strong monitoring and incident response pipeline. There is a System Center
Operations Manager package available for monitoring guarded fabrics, as well as various event log channels that can be
monitored to detect issues early. Diagnostics can then be used to quickly triage these failures and establish a course of
action.
Targeting Diagnostics
Get-HgsTrace operates against trace targets. A trace target is an object that corresponds to an HGS node or a
guarded host inside a guarded fabric. It can be thought of as an extension to a PSSession which includes
information required only by diagnostics, such as the role of the host in the fabric. Targets can be generated
implicitly (e.g. local or manual diagnosis) or explicitly with the New-HgsTraceTarget command.
Local Diagnosis
By default, Get-HgsTrace will target the localhost (i.e. where the cmdlet is being invoked). This is referred to as the
implicit local target. The implicit local target is only used when no targets are provided in the -Target parameter
and no pre-existing traces are found in the -Path.
The implicit local target uses role inference to determine what role the current host plays in the guarded fabric. This
is based on the installed Windows PowerShell modules which roughly correspond to what features have been
installed on the system. The presence of the HgsServer module will cause the trace target to take the role
HostGuardianService and the presence of the HgsClient module will cause the trace target to take the role
GuardedHost. It is possible for a given host to have both modules present, in which case it will be treated as both a
HostGuardianService and a GuardedHost.
Therefore, the default invocation of diagnostics for collecting traces locally:
Get-HgsTrace
...is equivalent to the following:
New-HgsTraceTarget -Local | Get-HgsTrace
TIP
Get-HgsTrace can accept targets via the pipeline or directly via the -Target parameter. There is no difference
between the two operationally.
Remote Diagnosis Using Trace Targets
It is possible to remotely diagnose a host by generating trace targets with remote connection information. All that
is required is the hostname and a set of credentials capable of connecting using Windows PowerShell remoting.
$server = New-HgsTraceTarget -HostName "hgs-01.secure.contoso.com" -Role HostGuardianService -Credential (Get-Credential)
Get-HgsTrace -RunDiagnostics -Target $server
This example will generate a prompt to collect the remote user credentials, and then diagnostics will run using the
remote host at hgs-01.secure.contoso.com to complete trace collection. The resulting traces are downloaded to the
localhost and then diagnosed. The results of diagnosis are presented the same as when performing local diagnosis.
Similarly, it is not necessary to specify a role as it can be inferred based on the Windows PowerShell modules
installed on the remote system.
Remote diagnosis utilizes Windows PowerShell remoting for all accesses to the remote host. Therefore it is a
prerequisite that the trace target have Windows PowerShell remoting enabled (see Enable PSRemoting) and that
the localhost is properly configured for launching connections to the target.
NOTE
In most cases, it is only necessary that the localhost be a part of the same Active Directory forest and that a valid DNS
hostname is used. If your environment utilizes a more complicated federation model or you wish to use direct IP addresses
for connectivity, you may need to perform additional configuration such as setting the WinRM trusted hosts.
You can verify that a trace target is properly instantiated and configured for accepting connections by using the
Test-HgsTraceTarget cmdlet:
$server = New-HgsTraceTarget -HostName "hgs-01.secure.contoso.com" -Role HostGuardianService -Credential (Get-Credential)
$server | Test-HgsTraceTarget
This command will return $True if and only if Get-HgsTrace would be able to establish a remote diagnostic
session with the trace target. Upon failure, this cmdlet will return relevant status information for further
troubleshooting of the Windows PowerShell remoting connection.
Implicit Credentials
When performing remote diagnosis from a user with sufficient privileges to connect remotely to the trace target, it
is not necessary to supply credentials to New-HgsTraceTarget . The Get-HgsTrace cmdlet will automatically reuse the
credentials of the user that invoked the cmdlet when opening a connection.
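For example, when the invoking user is already an administrator on the remote HGS node:
# No -Credential needed; the current user's credentials are reused for the remote connection
$server = New-HgsTraceTarget -HostName "hgs-01.secure.contoso.com" -Role HostGuardianService
Get-HgsTrace -RunDiagnostics -Target $server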
WARNING
Some restrictions apply to reusing credentials, particularly when performing what is known as a "second hop." This occurs
when attempting to reuse credentials from inside a remote session to another machine. It is necessary to setup CredSSP to
support this scenario, but this is outside of the scope of guarded fabric management and troubleshooting.
Using Windows PowerShell Just Enough Administration (JEA) and Diagnostics
Remote diagnosis supports the use of JEA-constrained Windows PowerShell endpoints. By default, remote trace
targets will connect using the default microsoft.powershell endpoint. If the trace target has the
HostGuardianService role, it will also attempt to use the microsoft.windows.hgs endpoint which is configured when
HGS is installed.
If you want to use a custom endpoint, you must specify the session configuration name while constructing the
trace target using the -PSSessionConfigurationName parameter, such as below:
New-HgsTraceTarget -HostName "hgs-01.secure.contoso.com" -Role HostGuardianService -Credential (Get-Credential) -PSSessionConfigurationName "microsoft.windows.hgs"
Diagnosing Multiple Hosts
You can pass multiple trace targets to Get-HgsTrace at once. This includes a mix of local and remote targets. Each
target will be traced in turn and then traces from every target will be diagnosed simultaneously. The diagnostic tool
can use the increased knowledge of your deployment to identify complex cross-node misconfigurations that would
not otherwise be detectable. Using this feature only requires providing traces from multiple hosts simultaneously
(in the case of manual diagnosis) or by targeting multiple hosts when calling Get-HgsTrace (in the case of remote
diagnosis).
Here is an example of using remote diagnosis to triage a fabric composed of two HGS nodes and two guarded
hosts, where one of the guarded hosts is being used to launch Get-HgsTrace .
$hgs01 = New-HgsTraceTarget -HostName "hgs-01.secure.contoso.com" -Credential (Get-Credential)
$hgs02 = New-HgsTraceTarget -HostName "hgs-02.secure.contoso.com" -Credential (Get-Credential)
$gh01 = New-HgsTraceTarget -Local
$gh02 = New-HgsTraceTarget -HostName "guardedhost-02.contoso.com"
Get-HgsTrace -Target $hgs01,$hgs02,$gh01,$gh02 -RunDiagnostics
NOTE
You do not need to diagnose your entire guarded fabric when diagnosing multiple nodes. In many cases it is sufficient to
include all nodes that may be involved in a given failure condition. This is usually a subset of the guarded hosts, and some
number of nodes from the HGS cluster.
Manual Diagnosis Using Saved Traces
Sometimes you may want to re-run diagnostics without collecting traces again, or you may not have the necessary
credentials to remotely diagnose all of the hosts in your fabric simultaneously. Manual diagnosis is a mechanism
by which you can still perform a whole-fabric triage using Get-HgsTrace , but without using remote trace collection.
Before performing manual diagnosis, you will need to ensure the administrators of each host in the fabric that will
be triaged are ready and willing to execute commands on your behalf. Diagnostic trace output does not expose any
information that is generally viewed as sensitive, however it is incumbent on the user to determine if it is safe to
expose this information to others.
NOTE
Traces are not anonymized and reveal network configuration, PKI settings, and other configuration that is sometimes
considered private information. Therefore, traces should only be transmitted to trusted entities within an organization and
never posted publicly.
The steps to perform a manual diagnosis are as follows:
1. Request that each host administrator run Get-HgsTrace specifying a known -Path and the list of
diagnostics you intend to run against the resulting traces. For example:
Get-HgsTrace -Path C:\Traces -Diagnostic Networking,BestPractices
2. Request that each host administrator package the resulting traces folder and send it to you. This process can
be driven over e-mail, via file shares, or any other mechanism based on the operating policies and
procedures established by your organization.
3. Merge all received traces into a single folder, with no other contents or folders.
For example, assume you had your administrators send you traces collected from four machines
named HGS-01, HGS-02, RR1N2608-12, and RR1N2608-13. Each administrator would have sent you
a folder by the same name. You would assemble a directory structure that appears as follows:
FabricTraces
|- HGS-01
| |- TargetMetadata.xml
| |- Metadata.xml
| |- [any other trace files for this host]
|- HGS-02
| |- [...]
|- RR1N2608-12
| |- [...]
|- RR1N2608-13
|- [..]
4. Execute diagnostics, providing the path to the assembled trace folder on the -Path parameter and
specifying the -RunDiagnostics switch as well as those diagnostics for which you asked your administrators
to collect traces. Diagnostics will assume it cannot access the hosts found inside the path and will therefore
attempt to use only the pre-collected traces. If any traces are missing or damaged, diagnostics will fail only
the affected tests and proceed normally. For example:
Get-HgsTrace -RunDiagnostics -Diagnostic Networking,BestPractices -Path ".\FabricTraces"
Mixing Saved Traces with Additional Targets
In some cases, you may have a set of pre-collected traces that you wish to augment with additional host traces. It is
possible to mix pre-collected traces with additional targets that will be traced and diagnosed in a single call of
diagnostics.
Following the instructions to collect and assemble a trace folder specified above, call Get-HgsTrace with additional
trace targets not found in the pre-collected trace folder:
$hgs03 = New-HgsTraceTarget -HostName "hgs-03.secure.contoso.com" -Credential (Get-Credential)
Get-HgsTrace -RunDiagnostics -Target $hgs03 -Path .\FabricTraces
The diagnostic cmdlet will identify all pre-collected hosts, and the one additional host that still needs to be traced
and will perform the necessary tracing. The sum of all pre-collected and freshly gathered traces will then be
diagnosed. The resulting trace folder will contain both the old and new traces.
Troubleshooting the Host Guardian Service
4/24/2017 • 6 min to read • Edit Online
Applies To: Windows Server 2016
This topic describes resolutions to common problems encountered when deploying or operating a Host Guardian
Service (HGS) server in a guarded fabric. If you are unsure of the nature of your problem, first try running the
guarded fabric diagnostics on your HGS servers and Hyper-V hosts to narrow down the potential causes.
Certificates
HGS requires several certificates in order to operate, including the admin-configured encryption and signing
certificate as well as an attestation certificate managed by HGS itself. If these certificates are incorrectly configured,
HGS will be unable to serve requests from Hyper-V hosts wishing to attest or unlock key protectors for shielded
VMs. The following sections cover common problems related to certificates configured on HGS.
Certificate Permissions
HGS must be able to access both the public and private keys of the encryption and signing certificates added to
HGS by the certificate thumbprint. Specifically, the group managed service account (gMSA) that runs the HGS
service needs access to the keys. To find the gMSA used by HGS, run the following command in an elevated
PowerShell prompt on your HGS server:
(Get-IISAppPool -Name KeyProtection).ProcessModel.UserName
How you grant the gMSA account access to use the private key depends on where the key is stored: on the machine
as a local certificate file, on a hardware security module (HSM), or using a custom third-party key storage provider.
Grant access to software-backed private keys
If you are using a self-signed certificate or a certificate issued by a certificate authority that is not stored in a
hardware security module or custom key storage provider, you can change the private key permissions by
performing the following steps:
1. Open the local certificate manager (certlm.msc).
2. Expand Personal > Certificates and find the signing or encryption certificate that you want to update.
3. Right click the certificate and select All Tasks > Manage Private Keys.
4. Click Add to grant a new user access to the certificate's private key.
5. In the object picker, enter the gMSA account name for HGS found earlier, then click OK.
6. Ensure the gMSA has Read access to the certificate.
7. Click OK to close the permission window.
If you are running HGS on Server Core or are managing the server remotely, you will not be able to manage private
keys using the local certificate manager. Instead, you will need to download the Guarded Fabric Tools PowerShell
module which will allow you to manage the permissions in PowerShell.
1. Open an elevated PowerShell console on the Server Core machine or use PowerShell Remoting with an account
that has local administrator permissions on HGS.
2. Run the following commands to install the Guarded Fabric Tools PowerShell module and grant the gMSA
account access to the private key.
$certificateThumbprint = '<ENTER CERTIFICATE THUMBPRINT HERE>'
# Install the Guarded Fabric Tools module, if necessary
Install-Module -Name GuardedFabricTools -Repository PSGallery
# Import the module into the current session
Import-Module -Name GuardedFabricTools
# Get the certificate object
$cert = Get-Item "Cert:\LocalMachine\My\$certificateThumbprint"
# Get the gMSA account name
$gMSA = (Get-IISAppPool -Name KeyProtection).ProcessModel.UserName
# Grant the gMSA read access to the certificate
$cert.Acl = $cert.Acl | Add-AccessRule $gMSA Read Allow
Grant access to HSM or custom provider-backed private keys
If your certificate's private keys are backed by a hardware security module (HSM) or a custom key storage provider
(KSP), the permission model will depend on your specific software vendor. For the best results, consult your
vendor's documentation or support site for information on how private key permissions are handled for your
specific device/software.
Some hardware security modules do not support granting specific user accounts access to a private key; rather,
they allow the computer account access to all keys in a specific key set. For such devices, it is usually sufficient to
give the computer access to your keys and HGS will be able to leverage that connection.
Tips for HSMs
Below are suggested configuration options to help you successfully use HSM-backed keys with HGS based on
Microsoft and its partners' experiences. These tips are provided for your convenience and are not guaranteed to be
correct at the time of reading, nor are they endorsed by the HSM manufacturers. Contact your HSM manufacturer
for accurate information pertaining to your specific device if you have further questions.
Gemalto SafeNet: Ensure the Key Usage Property in the certificate request file is set to 0xa0, allowing the
certificate to be used for signing and encryption. Additionally, you must grant the gMSA account read access to
the private key using the local certificate manager tool (see steps above).

Thales nShield: Ensure each HGS node has access to the security world containing the signing and encryption
keys. You do not need to configure gMSA-specific permissions.

Utimaco CryptoServers: Ensure the Key Usage Property in the certificate request file is set to 0x13, allowing the
certificate to be used for encryption, decryption, and signing.
Certificate requests
If you are using a certificate authority to issue your certificates in a public key infrastructure (PKI) environment, you
will need to ensure your certificate request includes the minimum requirements for HGS' usage of those keys.
Signing Certificates
- Algorithm: RSA
- Key Size: At least 2048 bits
- Key Usage: Signature/Sign/DigitalSignature
Encryption Certificates
- Algorithm: RSA
- Key Size: At least 2048 bits
- Key Usage: Encryption/Encrypt/DataEncipherment
Active Directory Certificate Services Templates
If you are using Active Directory Certificate Services (ADCS) certificate templates to create the certificates it is
recommended you use a template with the following settings:
- Provider Category: Key Storage Provider
- Algorithm Name: RSA
- Minimum Key Size: 2048
- Purpose: Signature and Encryption
- Key Usage Extension: Digital Signature, Key Encipherment, Data Encipherment ("Allow encryption of user data")
Time Drift
If your server's time has drifted significantly from that of other HGS nodes or Hyper-V hosts in your guarded fabric,
you may encounter issues with the attestation signer certificate validity. The attestation signer certificate is created
and renewed behind the scenes on HGS and is used to sign health certificates issued to guarded hosts by the
Attestation Service.
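Before refreshing the certificate, it can help to confirm that the clocks actually disagree. A minimal check using
the built-in w32tm tool; the HGS node name is a placeholder for one of your own servers:
# Compare this machine's clock with an HGS node (replace the name with one of your own nodes)
w32tm /stripchart /computer:hgs-01.secure.contoso.com /samples:5 /dataonly
# Resynchronize against the configured time source if the offset is significant
w32tm /resync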
To refresh the attestation signer certificate, run the following command in an elevated PowerShell prompt.
Start-ScheduledTask -TaskPath \Microsoft\Windows\HGSServer -TaskName AttestationSignerCertRenewalTask
Alternatively, you can manually run the scheduled task by opening Task Scheduler (taskschd.msc), navigating to
Task Scheduler Library > Microsoft > Windows > HGSServer and running the task named
AttestationSignerCertRenewalTask.
Switching Attestation Modes
If you switch HGS from TPM mode to Active Directory mode or vice versa using the Set-HgsServer cmdlet, it may
take up to 10 minutes for every node in your HGS cluster to start enforcing the new attestation mode. This is
normal behavior. It is advised that you do not remove any policies allowing hosts from the previous attestation
mode until you have verified that all hosts are attesting successfully using the new attestation mode.
Known issue when switching from TPM to AD mode
If you initialized your HGS cluster in TPM mode and later switch to Active Directory mode, there is a known issue
which will prevent other nodes in your HGS cluster from switching to the new attestation mode. To ensure all HGS
servers are enforcing the correct attestation mode, run Set-HgsServer -TrustActiveDirectory on each node of
your HGS cluster. This issue does not apply if you are switching from AD mode to TPM mode or if the cluster was
originally set up in AD mode.
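As an illustration, the following sketch runs the cmdlet on every node from a single elevated session; the node
names are placeholders for your own HGS servers:
# Placeholder HGS node names - replace with your own
$hgsNodes = 'HGS-01', 'HGS-02', 'HGS-03'
# Switch every node to Active Directory-based attestation
Invoke-Command -ComputerName $hgsNodes -ScriptBlock { Set-HgsServer -TrustActiveDirectory }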
You can verify the attestation mode of your HGS server by running Get-HgsServer.
Memory dump encryption policies
If you are trying to configure memory dump encryption policies and do not see the default HGS dump policies
(Hgs_NoDumps, Hgs_DumpEncryption and Hgs_DumpEncryptionKey) or the dump policy cmdlet (Add-HgsAttestationDumpPolicy), it is likely that you do not have the latest cumulative update installed. To fix this,
update your HGS server to the latest cumulative Windows update and activate the new attestation policies. Ensure
you update your Hyper-V hosts to the same cumulative update before activating the new attestation policies, as
hosts that do not have the new dump encryption capabilities installed will likely fail attestation once the HGS policy
is activated.
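Once the update is installed, a quick sanity check is to list the dump-related policies and enable the one you
need. A minimal sketch; the policy name comes from the list above, and you should confirm the cmdlets are
available on your HGS server:
# List the attestation policies related to dumps
Get-HgsAttestationPolicy | Where-Object Name -like 'Hgs_*Dump*'
# Enable dump encryption enforcement once all guarded hosts are updated
Enable-HgsAttestationPolicy -Name Hgs_DumpEncryption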
Troubleshooting Guarded Hosts
4/24/2017 • 4 min to read • Edit Online
Applies To: Windows Server 2016
This topic describes resolutions to common problems encountered when deploying or operating a guarded Hyper-V host in your guarded fabric. If you are unsure of the nature of your problem, first try running the guarded fabric
diagnostics on your Hyper-V hosts to narrow down the potential causes.
Attestation failures
If a host does not pass attestation with the Host Guardian Service, it will be unable to run shielded VMs. The output
of Get-HgsClientConfiguration on that host will show you information about why that host failed attestation.
The table below explains the values that may appear in the AttestationStatus field and potential next steps, if
appropriate.
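For example, to view just the attestation fields on the host:
# Run on the guarded host to see its current attestation result
Get-HgsClientConfiguration | Select-Object AttestationStatus, AttestationSubStatus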
Expired: The host passed attestation previously, but the health certificate it was issued has expired. Ensure the
host and HGS time are in sync.

InsecureHostConfiguration: The host did not pass attestation because it did not comply with the attestation
policies configured on HGS. Consult the AttestationSubStatus table for more information.

NotConfigured: The host is not configured to use an HGS for attestation and key protection. It is configured for
local mode, instead. If this host is in a guarded fabric, use Set-HgsClientConfiguration to provide it with the
URLs for your HGS server.

Passed: The host passed attestation.

TransientError: The last attestation attempt failed due to a networking, service, or other temporary error. Retry
your last operation.

TpmError: The host could not complete its last attestation attempt due to an error with your TPM. Consult your
TPM logs for more information.

UnauthorizedHost: The host did not pass attestation because it was not authorized to run shielded VMs. Ensure
the host belongs to a security group trusted by HGS to run shielded VMs.

Unknown: The host has not attempted to attest with HGS yet.
When AttestationStatus is reported as InsecureHostConfiguration, one or more reasons will be populated in
the AttestationSubStatus field. The table below explains the possible values for AttestationSubStatus and tips on
how to resolve the problem.
BitLocker: The host's OS volume is not encrypted by BitLocker. To resolve this, enable BitLocker on the OS volume
or disable the BitLocker policy on HGS.

CodeIntegrityPolicy: The host is not configured to use a code integrity policy or is not using a policy trusted by
the HGS server. Ensure a code integrity policy has been configured, that the host has been restarted, and the
policy is registered with the HGS server. For more information, see Create and apply a code integrity policy.

DumpsEnabled: The host is configured to allow crash dumps or live memory dumps, which is not allowed by your
HGS policies. To resolve this, disable dumps on the host.

DumpEncryption: The host is configured to allow crash dumps or live memory dumps but does not encrypt those
dumps. Either disable dumps on the host or configure dump encryption.

DumpEncryptionKey: The host is configured to allow and encrypt dumps, but is not using a certificate known to
HGS to encrypt them. To resolve this, update the dump encryption key on the host or register the key with HGS.

FullBoot: The host resumed from a sleep state or hibernation. Restart the host to allow for a clean, full boot.

HibernationEnabled: The host is configured to allow hibernation without encrypting the hibernation file, which is
not allowed by your HGS policies. Disable hibernation and restart the host, or configure dump encryption.

HypervisorEnforcedCodeIntegrityPolicy: The host is not configured to use a hypervisor-enforced code integrity
policy. Verify that code integrity is enabled, configured, and enforced by the hypervisor. See the Device Guard
deployment guide for more information.

Iommu: The host's Virtualization Based Security features are not configured to require an IOMMU device for
protection against Direct Memory Access attacks, as required by your HGS policies. Verify the host has an IOMMU,
that it is enabled, and that Device Guard is configured to require DMA protections when launching VBS.

PagefileEncryption: Page file encryption is not enabled on the host. To resolve this, run
fsutil behavior set encryptpagingfile 1 to enable page file encryption. For more information, see fsutil behavior.

SecureBoot: Secure Boot is either not enabled on this host or not using the Microsoft Secure Boot template.
Enable Secure Boot with the Microsoft Secure Boot template to resolve this issue.

SecureBootSettings: The TPM baseline on this host does not match any of those trusted by HGS. This can occur
when your UEFI launch authorities, DBX variable, debug flag, or custom Secure Boot policies are changed by
installing new hardware or software. If you trust the current hardware, firmware, and software configuration of
this machine, you can capture a new TPM baseline and register it with HGS.

TcgLogVerification: The TCG log (TPM baseline) cannot be obtained or verified. This can indicate a problem with
the host's firmware, the TPM, or other hardware components. If your host is configured to attempt PXE boot before
booting Windows, an outdated Network Boot Program (NBP) can also cause this error. Ensure all NBPs are up to date
when PXE boot is enabled.

VirtualSecureMode: Virtualization Based Security features are not running on the host. Ensure VBS is enabled and
that your system meets the configured platform security features. Consult the Device Guard documentation for
more information about VBS requirements.
Hyper-V on Windows Server 2016
4/24/2017 • 2 min to read • Edit Online
Applies To: Windows Server 2016
The Hyper-V role in Windows Server lets you create a virtualized computing environment where you can create
and manage virtual machines. You can run multiple operating systems on one physical computer and isolate the
operating systems from each other. With this technology, you can improve the efficiency of your computing
resources and free up your hardware resources.
See the topics in the following table to learn more about Hyper-V on Windows Server 2016.
Hyper-V resources for IT Pros
Evaluate Hyper-V:
- Hyper-V Technology Overview
- What's new in Hyper-V on Windows Server 2016
- System requirements for Hyper-V on Windows Server 2016
- Supported Windows guest operating systems for Hyper-V
- Supported Linux and FreeBSD virtual machines
- Feature compatibility by generation and guest

Plan for Hyper-V:
- Choose a generation
- Hyper-V scalability
- Hyper-V networking
- Hyper-V security

Get started with Hyper-V:
- Download and install Windows Server 2016

Nano Server as virtual machine host:
- Deploy Nano Server
- Use Hyper-V on Nano Server

Server Core or GUI installation option of Windows Server 2016 as virtual machine host:
- Install the Hyper-V role on Windows Server
- Create a virtual switch for Hyper-V virtual machines
- Create a virtual machine in Hyper-V

Upgrade Hyper-V hosts and virtual machines:
- Upgrade Windows Server cluster nodes
- Upgrade virtual machine version

Configure and manage Hyper-V:
- Set up hosts for live migration without Failover Clustering
- Managing Nano Server remotely
- Choose between standard or production checkpoints
- Enable or disable checkpoints
- Manage Windows virtual machines with PowerShell Direct
- Set up Hyper-V Replica

Blogs: Check out the latest posts from Program Managers, Product Managers, Developers and Testers in the
Microsoft Virtualization and Hyper-V teams.
- Ben Armstrong's Virtualization Blog
- Virtualization Blog
- Windows Server Blog

Forum and newsgroups: Got questions? Talk to your peers, MVPs, and the Hyper-V product team.
- Windows Server Hyper-V

Stay connected: Keep connected with the latest happenings with Hyper-V. Follow us on Twitter.
Related technologies
The following table lists technologies that you might want to use in your virtualization computing environment.
Client Hyper-V: Virtualization technology included with Windows 8, Windows 8.1 and Windows 10 that you can
install through Programs and Features in the Control Panel.

Failover Clustering: Windows Server feature that provides high availability for Hyper-V hosts and virtual
machines.

Nano Server: New installation option for Windows Server 2016 that is a remotely administered server operating
system optimized for hosting in private clouds and datacenters.

Virtual Machine Manager: System Center component that provides a management solution for the virtualized
datacenter. You can configure and manage your virtualization hosts, networks, and storage resources so you can
create and deploy virtual machines and services to private clouds that you've created.

Windows Containers: Use Windows Server and Hyper-V containers to provide standardized environments for
development, test, and production teams.
Hyper-V Technology Overview
4/24/2017 • 5 min to read • Edit Online
Applies To: Windows Server 2016, Microsoft Hyper-V Server 2016
Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a software version of a computer,
called a virtual machine. Each virtual machine acts like a complete computer, running an operating system and
programs. When you need computing resources, virtual machines give you more flexibility, help save time and
money, and are a more efficient way to use hardware than just running one operating system on physical
hardware.
Hyper-V runs each virtual machine in its own isolated space, which means you can run more than one virtual
machine on the same hardware at the same time. You might want to do this to avoid problems such as a crash
affecting the other workloads, or to give different people, groups or services access to different systems.
Some ways Hyper-V can help you
Hyper-V can help you:
Establish or expand a private cloud environment. Provide more flexible, on-demand IT services by
moving to or expanding your use of shared resources and adjust utilization as demand changes.
Use your hardware more effectively. Consolidate servers and workloads onto fewer, more powerful
physical computers to use less power and physical space.
Improve business continuity. Minimize the impact of both scheduled and unscheduled downtime of your
workloads.
Establish or expand a virtual desktop infrastructure (VDI). Using a centralized desktop strategy with VDI
can help you increase business agility and data security, as well as simplify regulatory compliance and
manage desktop operating systems and applications. Deploy Hyper-V and Remote Desktop Virtualization
Host (RD Virtualization Host) on the same server to make personal virtual desktops or virtual desktop pools
available to your users.
Make development and test more efficient. Reproduce different computing environments without
having to buy or maintain all the hardware you'd need if you only used physical systems.
Hyper-V and other virtualization products
Hyper-V in Windows and Windows Server replaces older hardware virtualization products, such as Microsoft
Virtual PC, Microsoft Virtual Server, and Windows Virtual PC. Hyper-V offers networking, performance, storage and
security features not available in these older products.
Hyper-V and most third-party virtualization applications that require the same processor features aren't
compatible. That's because the processor features, known as hardware virtualization extensions, are designed to
not be shared. For details, see Virtualization applications do not work together with Hyper-V, Device Guard, and
Credential Guard.
What features does Hyper-V have?
Hyper-V offers many features. This is an overview, grouped by what the features provide or help you do.
Computing environment - A Hyper-V virtual machine includes the same basic parts as a physical computer, such
as memory, processor, storage, and networking. All these parts have features and options that you can configure
different ways to meet different needs. Storage and networking can each be considered categories of their own,
because of the many ways you can configure them.
Disaster recovery and backup - For disaster recovery, Hyper-V Replica creates copies of virtual machines,
intended to be stored in another physical location, so you can restore the virtual machine from the copy. For
backup, Hyper-V offers two types. One uses saved states and the other uses Volume Shadow Copy Service (VSS)
so you can make application-consistent backups for programs that support VSS.
Optimization - Each supported guest operating system has a customized set of services and drivers, called
integration services, that make it easier to use the operating system in a Hyper-V virtual machine.
Portability - Features such as live migration, storage migration, and import/export make it easier to move or
distribute a virtual machine.
Remote connectivity - Hyper-V includes Virtual Machine Connection, a remote connection tool for use with both
Windows and Linux. Unlike Remote Desktop, this tool gives you console access, so you can see what's happening
in the guest even when the operating system isn't booted yet.
Security - Secure boot and shielded virtual machines help protect against malware and other unauthorized access
to a virtual machine and its data.
For a summary of the features introduced in this version, see What's new in Hyper-V on Windows Server 2016.
Some features or parts have a limit to how many can be configured. For details, see Plan for Hyper-V scalability in
Windows Server 2016.
How to get Hyper-V
Hyper-V is available in Windows Server and Windows. On Windows Server, it is a server role available for x64
versions of Windows Server. For server instructions, see Install the Hyper-V role on Windows Server. On Windows,
it's available as a feature in some 64-bit versions of Windows. It's also available as a downloadable, standalone
server product, Microsoft Hyper-V Server.
Supported operating systems
Many operating systems will run on virtual machines. In general, an operating system that uses an x86 architecture
will run on a Hyper-V virtual machine. Not all operating systems that can be run are tested and supported by
Microsoft, however. For lists of what's supported, see:
Supported Linux and FreeBSD virtual machines for Hyper-V on Windows
Supported Windows guest operating systems for Hyper-V on Windows Server
How Hyper-V works
Hyper-V is a hypervisor-based virtualization technology. Hyper-V uses the Windows hypervisor, which requires a
physical processor with specific features. For hardware details, see System requirements for Hyper-V on Windows
Server 2016.
In most cases, the hypervisor manages the interactions between the hardware and the virtual machines. This
hypervisor-controlled access to the hardware gives virtual machines the isolated environment in which they run. In
some configurations, a virtual machine or the operating system running in the virtual machine has direct access to
graphics, networking, or storage hardware.
What does Hyper-V consist of?
Hyper-V has required parts that work together so you can create and run virtual machines. Together, these parts
are called the virtualization platform. They're installed as a set when you install the Hyper-V role. The required
parts include Windows hypervisor, Hyper-V Virtual Machine Management Service, the virtualization WMI provider,
the virtual machine bus (VMbus), virtualization service provider (VSP) and virtual infrastructure driver (VID).
Hyper-V also has tools for management and connectivity. You can install these on the same computer that Hyper-V
role is installed on, and on computers without the Hyper-V role installed. These tools are:
Hyper-V Manager
Hyper-V module for Windows PowerShell
Virtual Machine Connection (sometimes called VMConnect)
Windows PowerShell Direct
Related technologies
These are some technologies from Microsoft that are often used with Hyper-V:
Failover Clustering
Remote Desktop Services
Virtual Machine Manager
Various storage technologies: cluster shared volumes, SMB 3.0, storage spaces direct
Windows containers offer another approach to virtualization. See the Windows Containers library on MSDN.
What's new in Hyper-V on Windows Server 2016
4/24/2017 • 13 min to read • Edit Online
Applies To: Microsoft Hyper-V Server 2016, Windows Server 2016
This article explains the new and changed functionality of Hyper-V on Windows Server 2016 and Microsoft Hyper-V Server 2016. To use new features on virtual machines created with Windows Server 2012 R2 and moved or
imported to a server that runs Hyper-V on Windows Server 2016, you'll need to manually upgrade the virtual
machine configuration version. For instructions, see Upgrade virtual machine version.
Here's what's included in this article and whether the functionality is new or updated.
Compatible with Connected Standby (new)
When the Hyper-V role is installed on a computer that uses the Always On/Always Connected (AOAC) power
model, the Connected Standby power state is now available.
Discrete device assignment (new)
This feature lets you give a virtual machine direct and exclusive access to some PCIe hardware devices. Using a
device in this way bypasses the Hyper-V virtualization stack, which results in faster access. For details on
supported hardware, see "Discrete device assignment" in System requirements for Hyper-V on Windows Server
2016. For details, including how to use this feature and considerations, see the post "Discrete Device Assignment - Description and background" in the Virtualization blog.
Encryption support for the operating system disk in generation 1
virtual machines (new)
You can now protect the operating system disk using BitLocker drive encryption in generation 1 virtual machines.
A new feature, key storage, creates a small, dedicated drive to store the system drive’s BitLocker key. This is done
instead of using a virtual Trusted Platform Module (TPM), which is available only in generation 2 virtual machines.
To decrypt the disk and start the virtual machine, the Hyper-V host must either be part of an authorized guarded
fabric or have the private key from one of the virtual machine's guardians. Key storage requires a version 8 virtual
machine. For information on virtual machine version, see Upgrade virtual machine version in Hyper-V on
Windows 10 or Windows Server 2016.
Host resource protection (new)
This feature helps prevent a virtual machine from using more than its share of system resources by looking for
excessive levels of activity. This can help prevent a virtual machine's excessive activity from degrading the
performance of the host or other virtual machines. When monitoring detects a virtual machine with excessive
activity, the virtual machine is given fewer resources. This monitoring and enforcement is off by default. Use
Windows PowerShell to turn it on or off. To turn it on, run this command:
Set-VMProcessor -EnableHostResourceProtection $true
For details about this cmdlet, see Set-VMProcessor.
Hot add and remove for network adapters and memory (new)
You can now add or remove a network adapter while the virtual machine is running, without incurring downtime.
This works for generation 2 virtual machines that run either Windows or Linux operating systems.
You can also adjust the amount of memory assigned to a virtual machine while it's running, even if you haven't
enabled Dynamic Memory. This works for both generation 1 and generation 2 virtual machines, running Windows
Server 2016 or Windows 10.
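For example, a minimal sketch using placeholder VM and switch names:
# Hot add a network adapter to a running generation 2 virtual machine
Add-VMNetworkAdapter -VMName "VM01" -SwitchName "ExternalSwitch"
# Change the memory assigned to the running virtual machine
Set-VMMemory -VMName "VM01" -StartupBytes 8GB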
Hyper-V Manager improvements (updated)
Alternate credentials support - You can now use a different set of credentials in Hyper-V Manager when
you connect to another Windows Server 2016 or Windows 10 remote host. You can also save these
credentials to make it easier to log on again.
Manage earlier versions - With Hyper-V Manager in Windows Server 2016 and Windows 10, you can
manage computers running Hyper-V on Windows Server 2012, Windows 8, Windows Server 2012 R2 and
Windows 8.1.
Updated management protocol - Hyper-V Manager now communicates with remote Hyper-V hosts
using the WS-MAN protocol, which permits CredSSP, Kerberos or NTLM authentication. When you use
CredSSP to connect to a remote Hyper-V host, you can do a live migration without enabling constrained
delegation in Active Directory. The WS-MAN-based infrastructure also makes it easier to enable a host for
remote management. WS-MAN connects over port 80, which is open by default.
Integration services delivered through Windows Update (updated)
Updates to integration services for Windows guests are distributed through Windows Update. For service
providers and private cloud hosters, this puts the control of applying updates into the hands of the tenants who
own the virtual machines. Tenants can now update their Windows virtual machines with all updates, including the
integration services, using a single method. For details about integration services for Linux guests, see Linux and
FreeBSD Virtual Machines on Hyper-V.
IMPORTANT
The vmguest.iso image file is no longer needed, so it isn't included with Hyper-V on Windows Server 2016.
Linux Secure Boot (new)
Linux operating systems running on generation 2 virtual machines can now boot with the Secure Boot option
enabled. Ubuntu 14.04 and later, SUSE Linux Enterprise Server 12 and later, Red Hat Enterprise Linux 7.0 and later,
and CentOS 7.0 and later are enabled for Secure Boot on hosts that run Windows Server 2016. Before you boot
the virtual machine for the first time, you must configure the virtual machine to use the Microsoft UEFI Certificate
Authority. You can do this from Hyper-V Manager, Virtual Machine Manager, or an elevated Windows Powershell
session. For Windows PowerShell, run this command:
Set-VMFirmware vmname -SecureBootTemplate MicrosoftUEFICertificateAuthority
For more information about Linux virtual machines on Hyper-V, see Linux and FreeBSD Virtual Machines on
Hyper-V. For more information about the cmdlet, see Set-VMFirmware.
More memory and processors for generation 2 virtual machines and
Hyper-V hosts (updated)
Starting with version 8, generation 2 virtual machines can use significantly more memory and virtual processors.
Hosts also can be configured with significantly more memory and virtual processors than were previously
supported. These changes support new scenarios such as running e-commerce large in-memory databases for
online transaction processing (OLTP) and data warehousing (DW). The Windows Server blog recently published
the performance results of a virtual machine with 5.5 terabytes of memory and 128 virtual processors running a 4
TB in-memory database. Performance was greater than 95% of the performance of a physical server. For details,
see Windows Server 2016 Hyper-V large-scale VM performance for in-memory transaction processing. For details
about virtual machine versions, see Upgrade virtual machine version in Hyper-V on Windows 10 or Windows
Server 2016. For the full list of supported maximum configurations, see Plan for Hyper-V scalability in Windows
Server 2016.
Nested virtualization (new)
This feature lets you use a virtual machine as a Hyper-V host and create virtual machines within that virtualized
host. This can be especially useful for development and test environments. To use nested virtualization, you'll need:
To run at least Windows Server 2016 or Windows 10 on both the physical Hyper-V host and the virtualized
host.
A processor with Intel VT-x (nested virtualization is available only for Intel processors at this time).
For details and instructions, see Nested Virtualization.
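For example, before turning on the virtualized host for the first time, expose the virtualization extensions to
it; the VM name is a placeholder:
# The virtual machine must be turned off when you run this
Set-VMProcessor -VMName "NestedHost01" -ExposeVirtualizationExtensions $true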
Networking features (new)
New networking features include:
Remote direct memory access (RDMA) and switch embedded teaming (SET). You can set up RDMA
on network adapters bound to a Hyper-V virtual switch, regardless of whether SET is also used. SET
provides a virtual switch with some of same capabilities as NIC teaming. For details, see Remote Direct
Memory Access (RDMA) and Switch Embedded Teaming (SET).
Virtual machine multi queues (VMMQ). Improves on VMQ throughput by allocating multiple hardware
queues per virtual machine. The default queue becomes a set of queues for a virtual machine, and traffic is
spread between the queues.
Quality of service (QoS) for software-defined networks. Manages the default class of traffic through
the virtual switch within the default class bandwidth.
For more about new networking features, see What's new in Networking.
Production checkpoints (new)
Production checkpoints are "point-in-time" images of a virtual machine. These give you a way to apply a
checkpoint that complies with support policies when a virtual machine runs a production workload. Production
checkpoints are based on backup technology inside the guest instead of a saved state. For Windows virtual
machines, the Volume Snapshot Service (VSS) is used. For Linux virtual machines, the file system buffers are
flushed to create a checkpoint that's consistent with the file system. If you'd rather use checkpoints based on saved
states, choose standard checkpoints instead. For details, see Choose between standard or production checkpoints.
IMPORTANT
New virtual machines use production checkpoints as the default.
Rolling Hyper-V Cluster upgrade (new)
You can now add a node running Windows Server 2016 to a Hyper-V Cluster with nodes running Windows Server
2012 R2. This allows you to upgrade the cluster without downtime. The cluster runs at a Windows Server 2012 R2
feature level until you upgrade all nodes in the cluster and update the cluster functional level with the Windows
PowerShell cmdlet, Update-ClusterFunctionalLevel.
IMPORTANT
After you update the cluster functional level, you can't return it to Windows Server 2012 R2.
For a Hyper-V cluster with a functional level of Windows Server 2012 R2 with nodes running Windows Server
2012 R2 and Windows Server 2016, note the following:
Manage the cluster, Hyper-V, and virtual machines from a node running Windows Server 2016 or Windows
10.
You can move virtual machines between all of the nodes in the Hyper-V cluster.
To use new Hyper-V features, all nodes must run Windows Server 2016 and the cluster functional level
must be updated.
The virtual machine configuration version for existing virtual machines isn't upgraded. You can upgrade the
configuration version only after you upgrade the cluster functional level.
Virtual machines that you create are compatible with Windows Server 2012 R2, virtual machine
configuration level 5.
After you update the cluster functional level:
You can enable new Hyper-V features.
To make new virtual machine features available, use the Update-VmConfigurationVersion cmdlet to
manually update the virtual machine configuration level. For instructions, see Upgrade virtual machine
version.
You can't add a node to the Hyper-V Cluster that runs Windows Server 2012 R2.
NOTE
Hyper-V on Windows 10 doesn't support failover clustering.
For details and instructions, see the Cluster Operating System Rolling Upgrade.
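A minimal sketch of checking the current functional level and raising it once every node runs Windows Server 2016:
# 8 corresponds to Windows Server 2012 R2; 9 corresponds to Windows Server 2016
Get-Cluster | Select-Object Name, ClusterFunctionalLevel
# Raise the functional level after all nodes are upgraded; this cannot be reversed
Update-ClusterFunctionalLevel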
Shared virtual hard disks (updated)
You can now resize shared virtual hard disks (.vhdx files) used for guest clustering, without downtime. Shared
virtual hard disks can be grown or shrunk while the virtual machine is online. Guest clusters can now also protect
shared virtual hard disks by using Hyper-V Replica for disaster recovery.
Enable replication on the collection. Enabling replication on a collection is only exposed through the WMI
interface. See the documentation for Msvm_CollectionReplicationService class for more details. You cannot
manage replication of a collection through PowerShell cmdlets or the UI. The VMs should be on hosts that are
part of a Hyper-V cluster to access features that are specific to a collection. This includes Shared VHD - shared
VHDs on stand-alone hosts are not supported by Hyper-V Replica.
Follow the guidelines for shared VHDs in Virtual Hard Disk Sharing Overview, and be sure that your shared VHDs
are part of a guest cluster.
A collection with a shared VHD but no associated guest cluster cannot create reference points for the collection
(regardless of whether the shared VHD is included in the reference point creation or not).
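The class named above lives in the Hyper-V WMI v2 namespace. A minimal sketch for locating it on a cluster node,
for illustration only; the full replication workflow is described in the class documentation:
# Inspect the collection replication service in the Hyper-V WMI v2 namespace
Get-CimInstance -Namespace root\virtualization\v2 -ClassName Msvm_CollectionReplicationService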
Shielded virtual machines (new)
Shielded virtual machines use several features to make it harder for Hyper-V administrators and malware on the
host to inspect, tamper with, or steal data from the state of a shielded virtual machine. Data and state is encrypted,
Hyper-V administrators can't see the video output and disks, and the virtual machines can be restricted to run only
on known, healthy hosts, as determined by a Host Guardian Server. For details, see Guarded Fabric and Shielded
VMs.
NOTE
As of Technical Preview 5, shielded virtual machines are compatible with Hyper-V Replica. To replicate a shielded virtual
machine, the host you want to replicate to must be authorized to run that shielded virtual machine.
Start order priority for clustered virtual machines (new)
This feature gives you more control over which clustered virtual machines are started or restarted first. This makes
it easier to start virtual machines that provide services before virtual machines that use those services. Define sets,
place virtual machines in sets, and specify dependencies. Use Windows PowerShell cmdlets to manage the sets,
such as New-ClusterGroupSet, Get-ClusterGroupSet, and Add-ClusterGroupSetDependency.
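For example, a sketch that starts infrastructure virtual machines before application virtual machines; the set
and group names are placeholders, and you should verify the exact parameters in your environment:
# Create a set for the VMs that provide services and one for the VMs that consume them
New-ClusterGroupSet -Name "Infrastructure"
New-ClusterGroupSet -Name "Application"
# Place clustered VM groups into the sets
Add-ClusterGroupToSet -Name "Infrastructure" -Group "DC-VM"
Add-ClusterGroupToSet -Name "Application" -Group "App-VM"
# Start the application set only after the infrastructure set
Add-ClusterGroupSetDependency -Name "Application" -Provider "Infrastructure"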
Storage quality of service (QoS) (updated)
You can now create storage QoS policies on a Scale-Out File Server and assign them to one or more virtual disks
on Hyper-V virtual machines. Storage performance is automatically readjusted to meet policies as the storage load
fluctuates. For details, see Storage Quality of Service.
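For example, a sketch of creating a policy on the Scale-Out File Server cluster and applying it to a virtual
machine's disks; the names and limits are placeholders, and in a disaggregated deployment the two halves run on
different machines:
# On the Scale-Out File Server cluster: define a policy with minimum and maximum IOPS
New-StorageQosPolicy -Name "Gold" -MinimumIops 100 -MaximumIops 5000
$policyId = (Get-StorageQosPolicy -Name "Gold").PolicyId
# On the Hyper-V host: attach the policy to the VM's virtual hard disks using the policy's ID
Get-VM -Name "VM01" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policyId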
Virtual machine configuration file format (updated)
Virtual machine configuration files use a new format that makes reading and writing configuration data more
efficient. The format also makes data corruption less likely if a storage failure occurs. Virtual machine configuration
data files use a .vmcx file name extension and runtime state data files use a .vmrs file name extension.
IMPORTANT
The .vmcx file name extension indicates a binary file. Editing .vmcx or .vmrs files isn't supported.
Virtual machine configuration version (updated)
The version represents the compatibility of the virtual machine's configuration, saved state, and snapshot files with
the version of Hyper-V. Virtual machines with version 5 are compatible with Windows Server 2012 R2 and can run
on both Windows Server 2012 R2 and Windows Server 2016. Virtual machines with versions introduced in
Windows Server 2016 won't run in Hyper-V on Windows Server 2012 R2.
If you move or import a virtual machine to a server that runs Hyper-V on Windows Server 2016 from Windows
Server 2012 R2, the virtual machine's configuration isn't automatically updated. This means you can move the
virtual machine back to a server that runs Windows Server 2012 R2. But, this also means you can't use the new
virtual machine features until you manually update the version of the virtual machine configuration.
For instructions on checking and upgrading the version, see Upgrade virtual machine version. This article also lists
the version in which some features were introduced.
IMPORTANT
After you update the version, you can't move the virtual machine to a server that runs Windows Server 2012 R2.
You can't downgrade the configuration to a previous version.
The Update-VMVersion cmdlet is blocked on a Hyper-V Cluster when the cluster functional level is Windows Server 2012
R2.
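For example, to list configuration versions and update a single virtual machine; the VM name is a placeholder:
# Show the configuration version of every VM on this host
Get-VM | Select-Object Name, Version
# Update one VM after confirming it no longer needs to move back to Windows Server 2012 R2
Update-VMVersion -Name "VM01"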
Virtualization-based security for generation 2 virtual machines (new)
Virtualization-based security powers features such as Device Guard and Credential Guard, offering increased
protection of the operating system against exploits from malware. Virtualization based-security is available in
generation 2 guest virtual machines starting with version 8. For information on virtual machine version, see
Upgrade virtual machine version in Hyper-V on Windows 10 or Windows Server 2016.
Windows Containers (new)
Windows Containers allow many isolated applications to run on one computer system. They're fast to build and
are highly scalable and portable. Two types of container runtime are available, each with a different degree of
application isolation. Windows Server Containers use namespace and process isolation. Hyper-V Containers use a
light-weight virtual machine for each container.
Key features include:
Support for web sites and applications using HTTPS
Nano server can host both Windows Server and Hyper-V Containers
Ability to manage data through container shared folders
Ability to restrict container resources
For details, including quick start guides, see the Windows Container documentation.
Windows PowerShell Direct (new)
This gives you a way to run Windows PowerShell commands in a virtual machine from the host. Windows
PowerShell Direct runs between the host and the virtual machine. This means it has no networking or firewall
requirements, and it works regardless of your remote management configuration.
Windows PowerShell Direct is an alternative to the existing tools that Hyper-V administrators use to connect to a
virtual machine on a Hyper-V host:
Remote management tools such as PowerShell or Remote Desktop
Hyper-V Virtual Machine Connection (VMConnect)
Those tools work well, but have trade-offs: VMConnect is reliable, but can be hard to automate. Remote
PowerShell is powerful, but can be hard to set up and maintain. These trade-offs may become more important as
your Hyper-V deployment grows. Windows PowerShell Direct addresses this by providing a powerful scripting
and automation experience that's as simple as using VMConnect.
For requirements and instructions, see Manage Windows virtual machines with PowerShell Direct.
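For example, from the Hyper-V host; the VM name is a placeholder and you'll be prompted for guest credentials:
# Open an interactive session inside a running VM
Enter-PSSession -VMName "VM01" -Credential (Get-Credential)
# Or run a single command without an interactive session
Invoke-Command -VMName "VM01" -Credential (Get-Credential) -ScriptBlock { Get-Service }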
See also
What's new in Hyper-V on Windows 10
System requirements for Hyper-V on Windows
Server 2016
4/24/2017 • 3 min to read • Edit Online
Applies To: Windows Server 2016, Microsoft Hyper-V Server 2016
Hyper-V has specific hardware requirements, and some Hyper-V features have additional requirements. Use the
details in this article to decide what requirements your system must meet so you can use Hyper-V the way you
plan to. Then, review the Windows Server catalog. Keep in mind that requirements for Hyper-V exceed the general
minimum requirements for Windows Server 2016 because a virtualization environment requires more
computing resources.
If you're already using Hyper-V, it's likely that you can use your existing hardware. The general hardware
requirements haven't changed significantly from Windows Server 2012 R2. But you will need newer hardware to
use shielded virtual machines or discrete device assignment. Those features rely on specific hardware support, as
described below. Other than that, the main difference in hardware is that second-level address translation (SLAT)
is now required instead of recommended.
For details about maximum supported configurations for Hyper-V, such as the number of running virtual
machines, see Plan for Hyper-V scalability in Windows Server 2016. The list of operating systems you can run in
your virtual machines is covered in Supported Windows guest operating systems for Hyper-V on Windows
Server.
General requirements
Regardless of the Hyper-V features you want to use, you'll need:
A 64-bit processor with second-level address translation (SLAT). To install the Hyper-V virtualization
components such as Windows hypervisor, the processor must have SLAT. However, it's not required to
install Hyper-V management tools like Virtual Machine Connection (VMConnect), Hyper-V Manager, and
the Hyper-V cmdlets for Windows PowerShell. See "How to check for Hyper-V requirements," below, to
find out if your processor has SLAT.
VM Monitor Mode extensions
Enough memory - plan for at least 4 GB of RAM. More memory is better. You'll need enough memory for
the host and all virtual machines that you want to run at the same time.
Virtualization support turned on in the BIOS or UEFI:
Hardware-assisted virtualization. This is available in processors that include a virtualization option - specifically, processors with Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V)
technology.
Hardware-enforced Data Execution Prevention (DEP) must be available and enabled. For Intel
systems, this is the XD bit (execute disable bit). For AMD systems, this is the NX bit (no execute bit).
How to check for Hyper-V requirements
Open Windows PowerShell or a command prompt and type:
Systeminfo.exe
Scroll to the Hyper-V Requirements section to review the report.
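On Windows Server 2016 you can also query the same information from PowerShell; a minimal sketch using the
built-in Get-ComputerInfo cmdlet:
# List the Hyper-V requirement properties reported by the operating system
Get-ComputerInfo -Property "HyperV*"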
Requirements for specific features
Here are the requirements for discrete device assignment and shielded virtual machines. For descriptions of these
features, see What's new in Hyper-V on Windows Server 2016.
Discrete device assignment
Host requirements are similar to the existing requirements for the SR-IOV feature in Hyper-V.
The processor must have either Intel's Extended Page Table (EPT) or AMD's Nested Page Table (NPT).
The chipset must have:
Interrupt remapping - Intel's VT-d with the Interrupt Remapping capability (VT-d2) or any version of
AMD I/O Memory Management Unit (I/O MMU).
DMA remapping - Intel's VT-d with Queued Invalidations or any AMD I/O MMU.
Access control services (ACS) on PCI Express root ports.
The firmware tables must expose the I/O MMU to the Windows hypervisor. Note that this feature might be
turned off in the UEFI or BIOS. For instructions, see the hardware documentation or contact your hardware
manufacturer.
Devices need GPU or non-volatile memory express (NVMe). For GPU, only certain devices support discrete
device assignment. To verify, see the hardware documentation or contact your hardware manufacturer. For
details about this feature, including how to use it and considerations, see the post "Discrete Device Assignment - Description and background" in the Virtualization blog.
Shielded virtual machines
These virtual machines rely on virtualization-based security, which supports several new features in Windows
Server 2016.
Host requirements are:
UEFI 2.3.1c - supports secure, measured boot
The following two are optional for virtualization-based security in general, but required for the host if you
want the protection these features provide:
TPM v2.0 - protects platform security assets
IOMMU (Intel VT-D) - so the hypervisor can provide direct memory access (DMA) protection
Virtual machine requirements are:
Generation 2
Windows Server 2016, Windows Server 2012 R2, or Windows Server 2012 as the guest operating system
Supported Windows guest operating systems for
Hyper-V on Windows Server 2016
6/12/2017 • 2 min to read • Edit Online
Applies To: Windows Server 2016
Hyper-V supports several versions of Windows Server, Windows, and Linux distributions to run in virtual
machines, as guest operating systems. This article covers supported Windows Server and Windows guest
operating systems. For Linux and FreeBSD distributions, see Supported Linux and FreeBSD virtual machines for
Hyper-V on Windows.
Some operating systems have the integration services built-in. Others require that you install or upgrade
integration services as a separate step after you set up the operating system in the virtual machine. For more
information, see the sections below and Integration Services.
Supported Windows Server guest operating systems
Following are the versions of Windows Server that are supported as guest operating systems for Hyper-V in
Windows Server 2016.
Windows Server 2016 - Maximum number of virtual processors: 240 for generation 2; 64 for generation 1.
Integration services: Built-in.

Windows Server 2012 R2 - Maximum number of virtual processors: 64. Integration services: Built-in.

Windows Server 2012 - Maximum number of virtual processors: 64. Integration services: Built-in.

Windows Server 2008 R2 with Service Pack 1 (SP 1) - Maximum number of virtual processors: 64. Integration
services: Install all critical Windows updates after you set up the guest operating system. Notes: Datacenter,
Enterprise, Standard and Web editions.

Windows Server 2008 with Service Pack 2 (SP2) - Maximum number of virtual processors: 8. Integration services:
Install all critical Windows updates after you set up the guest operating system. Notes: Datacenter, Enterprise,
Standard and Web editions (32-bit and 64-bit).
Supported Windows client guest operating systems
Following are the versions of Windows that are supported as guest operating systems for Hyper-V in Windows
Server 2016.
Windows 10 - Maximum number of virtual processors: 32. Integration services: Built-in.

Windows 8.1 - Maximum number of virtual processors: 32. Integration services: Built-in.

Windows 7 with Service Pack 1 (SP 1) - Maximum number of virtual processors: 4. Integration services: Upgrade
the integration services after you set up the guest operating system. Notes: Ultimate, Enterprise, and
Professional editions (32-bit and 64-bit).
Guest operating system support on other versions of Windows
The following table gives links to information about guest operating systems supported for Hyper-V on other
versions of Windows.
Windows 10: Supported Guest Operating Systems for Client Hyper-V in Windows 10

Windows Server 2012 R2 and Windows 8.1:
- Supported Windows Guest Operating Systems for Hyper-V in Windows Server 2012 R2 and Windows 8.1
- Linux and FreeBSD Virtual Machines on Hyper-V

Windows Server 2012 and Windows 8: Supported Windows Guest Operating Systems for Hyper-V in Windows Server 2012
and Windows 8

Windows Server 2008 and Windows Server 2008 R2: About Virtual Machines and Guest Operating Systems
How Microsoft provides support for guest operating systems
Microsoft provides support for guest operating systems in the following manner:
Issues found in Microsoft operating systems and in integration services are supported by Microsoft
support.
For issues found in other operating systems that have been certified by the operating system vendor to run
on Hyper-V, support is provided by the vendor.
For issues found in other operating systems, Microsoft submits the issue to the multi-vendor support
community, TSANet.
See also
Linux and FreeBSD Virtual Machines on Hyper-V
Supported Guest Operating Systems for Client Hyper-V in Windows 10
Supported Linux and FreeBSD virtual machines for
Hyper-V on Windows
4/24/2017 • 2 min to read • Edit Online
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1,
Windows 8, Windows 7.1, Windows 7
Hyper-V supports both emulated and Hyper-V-specific devices for Linux and FreeBSD virtual machines. When
running with emulated devices, no additional software is required to be installed. However emulated devices do
not provide high performance and cannot leverage the rich virtual machine management infrastructure that the
Hyper-V technology offers. In order to make full use of all benefits that Hyper-V provides, it is best to use Hyper-V-specific devices for Linux and FreeBSD. The collection of drivers that are required to run Hyper-V-specific
devices are known as Linux Integration Services (LIS) or FreeBSD Integration Services (BIS).
LIS has been added to the Linux kernel and is updated for new releases. But Linux distributions based on older
kernels may not have the latest enhancements or fixes. Microsoft provides a download containing installable LIS
drivers for some Linux installations based on these older kernels. Because distribution vendors include versions
of Linux Integration Services, it is best to install the latest downloadable version of LIS, if applicable, for your
installation.
For other Linux distributions LIS changes are regularly integrated into the operating system kernel and
applications so no separate download or installation is required.
For older FreeBSD releases (before 10.0), Microsoft provides ports that contain the installable BIS drivers and
corresponding daemons for FreeBSD virtual machines. For newer FreeBSD releases, BIS is built in to the FreeBSD
operating system, and no separate download or installation is required except for a KVP ports download that is
needed for FreeBSD 10.0.
TIP
Download Windows Server 2016 from the Evaluation Center.
Download Microsoft Hyper-V Server 2016 from the Evaluation Center.
The goal of this content is to provide information that helps facilitate your Linux or FreeBSD deployment on
Hyper-V. Specific details include:
Linux distributions or FreeBSD releases that require the download and installation of LIS or BIS drivers.
Linux distributions or FreeBSD releases that contain built-in LIS or BIS drivers.
Feature distribution maps that indicate the features in major Linux distributions or FreeBSD releases.
Known issues and workarounds for each distribution or release.
Feature description for each LIS or BIS feature.
Want to make a suggestion about features and functionality? Is there something we could do better? You
can use the Windows Server User Voice site to suggest new features and capabilities for Linux and FreeBSD
Virtual Machines on Hyper-V, and to see what other people are saying.
In this section
Supported CentOS and Red Hat Enterprise Linux virtual machines on Hyper-V
Supported Debian virtual machines on Hyper-V
Supported Oracle Linux virtual machines on Hyper-V
Supported SUSE virtual machines on Hyper-V
Supported Ubuntu virtual machines on Hyper-V
Supported FreeBSD virtual machines on Hyper-V
Feature Descriptions for Linux and FreeBSD virtual machines on Hyper-V
Best Practices for running Linux on Hyper-V
Best practices for running FreeBSD on Hyper-V
Supported CentOS and Red Hat Enterprise Linux
virtual machines on Hyper-V
6/16/2017 • 11 min to read • Edit Online
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
The following feature distribution maps indicate the features that are present in built-in and downloadable
versions of Linux Integration Services. The known issues and workarounds for each distribution are listed after the
tables.
The built-in Red Hat Enterprise Linux Integration Services drivers for Hyper-V (available since Red Hat Enterprise
Linux 6.4) are sufficient for Red Hat Enterprise Linux guests to run using the high performance synthetic devices
on Hyper-V hosts. These built-in drivers are certified by Red Hat for this use. Certified configurations can be viewed
on this Red Hat web page: Red Hat Certification Catalog. It isn't necessary to download and install Linux
Integration Services packages from the Microsoft Download Center, and doing so may limit your Red Hat support
as described in Red Hat Knowledgebase article 1067: Red Hat Knowledgebase 1067.
Because of potential conflicts between the built-in LIS support and the downloadable LIS support when you
upgrade the kernel, disable automatic updates, uninstall the LIS downloadable packages, update the kernel, reboot,
and then install the latest LIS release, and reboot again.
In this section:
RHEL/CentOS 7.x Series
RHEL/CentOS 6.x Series
RHEL/CentOS 5.x Series
Notes
Table legend
Built in - LIS are included as part of this Linux distribution. The kernel module version numbers for the built
in LIS (as shown by lsmod, for example) are different from the version number on the Microsoft-provided
LIS download package. A mismatch does not indicate that the built in LIS is out of date.
✔ - Feature available
(blank) - Feature not available
RHEL/CentOS 7.x Series
This series only has 64-bit kernels.
FEATURE
WINDOWS
SERVER
VERSION
Availabilit
y
7.0-7.3
7.0-7.2
7.3
7.2
7.1
7.0
LIS 4.2
LIS 4.1
Built in
Built in
Built in
Built in
✔
✔
✔
✔
✔
Core
2016, 2012
R2, 2012,
2008 R2
✔
Windows
Server 2016
Accurate
Time
2016
✔
Jumbo
frames
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
✔
VLAN
tagging and
trunking
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
✔
Live
Migration
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
✔
Static IP
Injection
2016, 2012
R2, 2012
✔ Note 2
✔ Note 2
✔ Note 2
✔ Note 2
✔ Note 2
✔ Note 2
vRSS
2016, 2012
R2
✔
✔
✔
✔
✔
TCP
Segmentati
on and
Checksum
Offloads
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
SR-IOV
2016
✔
VHDX
resize
2016, 2012
R2
✔
✔
✔
✔
✔
✔
Virtual Fibre
Channel
2016, 2012
R2
✔ Note 3
✔ Note 3
✔ Note 3
✔ Note 3
✔ Note 3
✔ Note 3
Live virtual
machine
backup
2016, 2012
R2
✔ Note 5
✔ Note 5
✔ Note 4,
✔ Note 4,
✔ Note 4,
✔ Note 4,
5
5
5
5
Networkin
g
✔
Storage
FEATURE
WINDOWS
SERVER
VERSION
TRIM
support
SCSI WWN
7.0-7.3
7.0-7.2
7.3
7.2
7.1
7.0
2016, 2012
R2
✔
✔
✔
✔
2016, 2012
R2
✔
✔
PAE Kernel
Support
2016, 2012
R2, 2012,
2008 R2
N/A
N/A
N/A
N/A
N/A
N/A
Configurati
on of
MMIO gap
2016, 2012
R2
✔
✔
✔
✔
✔
✔
Dynamic
Memory Hot-Add
2016, 2012
R2, 2012
✔ Note 8,
9, 10
✔ Note 8,
✔ Note 9,
✔ Note 9,
✔ Note 9,
✔ Note 8,
9, 10
10
10
10
9, 10
Dynamic
Memory Ballooning
2016, 2012
R2, 2012
✔ Note 8,
✔ Note 8,
✔ Note 9,
✔ Note 9,
✔ Note 9,
✔ Note 8,
9, 10
9, 10
10
10
10
9, 10
Runtime
Memory
Resize
2016
✔
✔
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
✔
Key-Value
Pair
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
✔
NonMaskable
Interrupt
2016, 2012
R2
✔
✔
✔
✔
✔
✔
File copy
from host
to guest
2016, 2012
R2
✔
✔
✔
✔
✔
lsvmbus
command
2016, 2012
R2, 2012,
2008 R2
✔
✔
Memory
Video
Hyper-Vspecific
video
device
Miscellane
ous
WINDOWS
SERVER
VERSION
7.0-7.3
7.0-7.2
Hyper-V
Sockets
2016
✔
✔
PCI
Passthroug
h/DDA
2016
✔
Boot using
UEFI
2016, 2012
R2
✔ Note 14
✔ Note 14
Secure boot
2016
✔
✔
FEATURE
7.3
7.2
7.1
7.0
✔ Note 14
✔ Note 14
✔ Note 14
✔ Note 14
✔
✔
✔
✔
✔
Generation
2 virtual
machines
RHEL/CentOS 6.x Series
The 32-bit kernel for this series is PAE enabled. There is no built-in LIS support for RHEL/CentOS 6.0-6.3.
FEATURE
WINDOWS
SERVER
VERSION
Availabilit
y
Core
2016, 2012
R2, 2012,
2008 R2
Windows
Server 2016
Accurate
Time
2016
6.0-6.8
6.0-6.8
6.9, 6.8
6.6, 6.7
6.5
6.4
LIS 4.2
LIS 4.1
Built in
Built in
Built in
Built in
✔
✔
✔
✔
✔
✔
Networkin
g
Jumbo
frames
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
✔
VLAN
tagging and
trunking
2016, 2012
R2, 2012,
2008 R2
✔ Note 1
✔ Note 1
✔ Note 1
✔ Note 1
✔ Note 1
✔ Note 1
Live
Migration
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
✔
Static IP
Injection
2016, 2012
R2, 2012
✔ Note 2
✔ Note 2
✔ Note 2
✔ Note 2
✔ Note 2
✔ Note 2
FEATURE
WINDOWS
SERVER
VERSION
6.0-6.8
6.0-6.8
6.9, 6.8
6.6, 6.7
vRSS
2016, 2012
R2
✔
✔
✔
✔
TCP
Segmentati
on and
Checksum
Offloads
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
SR-IOV
2016
6.5
6.4
Storage
VHDX
resize
2016, 2012
R2
✔
✔
✔
✔
✔
Virtual Fibre
Channel
2016, 2012
R2
✔ Note 3
✔ Note 3
✔ Note 3
✔ Note 3
✔ Note 3
Live virtual
machine
backup
2016, 2012
R2
✔ Note 5
✔ Note 5
✔ Note 4,
✔ Note 4,
✔ Note 4,
✔ Note 4,
5
5
5, 6
5, 6
TRIM
support
2016, 2012
R2
✔
✔
SCSI WWN
2016, 2012
R2
✔
✔
PAE Kernel
Support
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
✔
Configurati
on of
MMIO gap
2016, 2012
R2
✔
✔
✔
✔
✔
✔
Dynamic
Memory Hot-Add
2016, 2012
R2, 2012
✔ Note 7,
✔ Note 7,
✔ Note 7,
✔ Note 7,
✔ Note 7,
9, 10
9, 10
9, 10
8, 9, 10
8, 9, 10
Dynamic
Memory Ballooning
2016, 2012
R2, 2012
✔ Note 7,
✔ Note 7,
✔ Note 7,
✔ Note 7,
✔ Note 7,
✔ Note 7,
9, 10
9, 10
9, 10
9, 10
9, 10
9, 10, 11
Runtime
Memory
Resize
2016
✔
Memory
Video
FEATURE
WINDOWS
SERVER
VERSION
6.0-6.8
6.0-6.8
6.9, 6.8
6.6, 6.7
6.5
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
Key-Value
Pair
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔ Note 12
✔ Note 12
✔ Note 12,
✔ Note 12,
13
13
NonMaskable
Interrupt
2016, 2012
R2
✔
✔
✔
✔
✔
✔
File copy
from host
to guest
2016, 2012
R2
✔
✔
✔
✔
lsvmbus
command
2016, 2012
R2, 2012,
2008 R2
✔
✔
Hyper-V
Sockets
2016
✔
✔
PCI
Passthroug
h/DDA
2016
✔ Note 14
✔ Note 14
Hyper-Vspecific
video
device
6.4
Miscellane
ous
Generation
2 virtual
machines
Boot using
UEFI
2016, 2012
R2
Secure boot
2016
✔ Note 14
RHEL/CentOS 5.x Series
This series has a supported 32-bit PAE kernel available. There is no built-in LIS support for RHEL/CentOS before
5.9.
FEATURE
WINDOWS SERVER
VERSION
Availability
Core
2016, 2012 R2, 2012,
2008 R2
5.2 -5.11
5.2-5.11
5.9 - 5.11
LIS 4.2
LIS 4.1
Built in
✔
✔
✔
FEATURE
Windows Server 2016
Accurate Time
WINDOWS SERVER
VERSION
5.2 -5.11
5.2-5.11
5.9 - 5.11
2016
Networking
Jumbo frames
2016, 2012 R2, 2012,
2008 R2
✔
✔
✔
VLAN tagging and
trunking
2016, 2012 R2, 2012,
2008 R2
✔ Note 1
✔ Note 1
✔ Note 1
Live Migration
2016, 2012 R2, 2012,
2008 R2
✔
✔
✔
Static IP Injection
2016, 2012 R2, 2012
✔ Note 2
✔ Note 2
✔ Note 2
vRSS
2016, 2012 R2
TCP Segmentation
and Checksum
Offloads
2016, 2012 R2, 2012,
2008 R2
✔
✔
SR-IOV
2016
Storage
VHDX resize
2016, 2012 R2
✔
✔
Virtual Fibre Channel
2016, 2012 R2
✔ Note 3
✔ Note 3
Live virtual machine
backup
2016, 2012 R2
✔ Note 5, 15
✔ Note 5
✔ Note 4, 5, 6
TRIM support
2016, 2012 R2
SCSI WWN
2016, 2012 R2
Memory
PAE Kernel Support
2016, 2012 R2, 2012,
2008 R2
✔
✔
✔
Configuration of
MMIO gap
2016, 2012 R2
✔
✔
✔
Dynamic Memory Hot-Add
2016, 2012 R2, 2012
Dynamic Memory Ballooning
2016, 2012 R2, 2012
✔ Note 7, 9, 10, 11
✔ Note 7, 9, 10, 11
FEATURE
WINDOWS SERVER
VERSION
5.2 -5.11
5.2-5.11
2016, 2012 R2, 2012,
2008 R2
✔
✔
Key-Value Pair
2016, 2012 R2, 2012,
2008 R2
✔
✔
Non-Maskable
Interrupt
2016, 2012 R2
✔
✔
File copy from host to
guest
2016, 2012 R2
✔
✔
lsvmbus command
2016, 2012 R2, 2012,
2008 R2
Hyper-V Sockets
2016
PCI
Passthrough/DDA
2016
Runtime Memory
Resize
5.9 - 5.11
2016
Video
Hyper-V-specific
video device
Miscellaneous
✔
Generation 2 virtual
machines
Boot using UEFI
2016, 2012 R2
Secure boot
2016
Notes
1. For this RHEL/CentOS release, VLAN tagging works but VLAN trunking does not.
2. Static IP injection may not work if Network Manager has been configured for a given synthetic network
adapter on the virtual machine. For static IP injection to work smoothly, make sure that Network Manager
is either turned off completely or turned off for the specific network adapter through its ifcfg-ethX file.
3. On Windows Server 2012 R2 while using virtual fibre channel devices, make sure that logical unit number 0
(LUN 0) has been populated. If LUN 0 has not been populated, a Linux virtual machine might not be able to
mount fibre channel devices natively.
4. For built-in LIS, the "hyperv-daemons" package must be installed for this functionality.
5. If there are open file handles during a live virtual machine backup operation, then in some corner cases, the
backed-up VHDs might have to undergo a file system consistency check (fsck) on restore. Live backup
operations can fail silently if the virtual machine has an attached iSCSI device or direct-attached storage
(also known as a pass-through disk).
6. While the Linux Integration Services download is preferred, live backup support for RHEL/CentOS 5.9-5.11/6.4/6.5 is also available through Hyper-V Backup Essentials for Linux.
7. Dynamic memory support is only available on 64-bit virtual machines.
8. Hot-Add support is not enabled by default in this distribution. To enable Hot-Add support you need to add a
udev rule under /etc/udev/rules.d/ as follows:
a. Create a file /etc/udev/rules.d/100-balloon.rules. You may use any other desired name for the
file.
b. Add the following content to the file:
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}="online"
c. Reboot the system to enable Hot-Add support.
While the Linux Integration Services download creates this rule on installation, the rule is also removed
when LIS is uninstalled, so the rule must be recreated if dynamic memory is needed after uninstallation (a
short sketch of recreating the rule appears after these notes).
9. Dynamic memory operations can fail if the guest operating system is running too low on memory. The
following are some best practices:
Startup memory and minimal memory should be equal to or greater than the amount of memory
that the distribution vendor recommends.
Applications that tend to consume the entire available memory on a system are limited to
consuming up to 80 percent of available RAM.
10. If you are using Dynamic Memory on a Windows Server 2016 or Windows Server 2012 R2 operating
system, specify Startup memory, Minimum memory, and Maximum memory parameters in multiples
of 128 megabytes (MB). Failure to do so can lead to hot-add failures, and you may not see any memory
increase in a guest operating system.
11. Certain distributions, including those using LIS 4.0 and 4.1, only provide Ballooning support and do not
provide Hot-Add support. In such a scenario, the dynamic memory feature can be used by setting the
Startup memory parameter to a value equal to the Maximum memory parameter. This results in all of the
requisite memory being hot-added to the virtual machine at boot time; later, depending on the memory
requirements of the host, Hyper-V can freely allocate or deallocate memory from the guest using
Ballooning. Configure Startup Memory and Minimum Memory at or above the recommended value for
the distribution.
12. To enable key/value pair (KVP) infrastructure, install the hypervkvpd or hyperv-daemons rpm package from
your RHEL ISO. Alternatively the package can be installed directly from RHEL repositories.
13. The key/value pair (KVP) infrastructure might not function correctly without a Linux software update.
Contact your distribution vendor to obtain the software update in case you see problems with this feature.
14. On Windows Server 2012 R2, Generation 2 virtual machines have secure boot enabled by default and some
Linux virtual machines will not boot unless the secure boot option is disabled. You can disable secure boot
in the Firmware section of the settings for the virtual machine in Hyper-V Manager, or you can disable it
using PowerShell:
Set-VMFirmware -VMName "VMname" -EnableSecureBoot Off
The Linux Integration Services download can be applied to existing Generation 2 VMs but does not impart
Generation 2 capability.
15. In Red Hat Enterprise Linux or CentOS 5.2, 5.3, and 5.4 the filesystem freeze functionality is not available, so
Live Virtual Machine Backup is also not available.
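As a convenience, the udev rule from Note 8 can be recreated in the guest with two root commands. This sketch
simply restates the steps given in that note:
# echo 'SUBSYSTEM=="memory", ACTION=="add", ATTR{state}="online"' > /etc/udev/rules.d/100-balloon.rules
# reboot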
See Also
Set-VMFirmware
Supported Debian virtual machines on Hyper-V
Supported Oracle Linux virtual machines on Hyper-V
Supported SUSE virtual machines on Hyper-V
Supported Ubuntu virtual machines on Hyper-V
Supported FreeBSD virtual machines on Hyper-V
Feature Descriptions for Linux and FreeBSD virtual machines on Hyper-V
Best Practices for running Linux on Hyper-V
Red Hat Hardware Certification
Supported Debian virtual machines on Hyper-V
6/1/2017 • 3 min to read • Edit Online
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
The following feature distribution map indicates the features that are present in each version. The known issues
and workarounds for each distribution are listed after the table.
Table legend
Built in - LIS are included as part of this Linux distribution. The Microsoft-provided LIS download package
doesn't work for this distribution so do not install it. The kernel module version numbers for the built in LIS
(as shown by lsmod, for example) are different from the version number on the Microsoft-provided LIS
download package. A mismatch does not indicate that the built in LIS is out of date.
✔ - Feature available
(blank) - Feature not available
FEATURE
WINDOWS SERVER
OPERATING SYSTEM VERSION
Availability
Core
2016, 2012 R2, 2012, 2008
R2
Windows Server 2016
Accurate Time
2016
8.0-8.8 (JESSIE)
7.0-7.11 (WHEEZY)
Built in
Built in (Note 6)
✔
✔
Networking
Jumbo frames
2016, 2012 R2, 2012, 2008
R2
✔
✔
VLAN tagging and trunking
2016, 2012 R2, 2012, 2008
R2
✔
✔
Live Migration
2016, 2012 R2, 2012, 2008
R2
✔
✔
Static IP Injection
2016, 2012 R2, 2012
vRSS
2016, 2012 R2
TCP Segmentation and
Checksum Offloads
2016, 2012 R2, 2012, 2008
R2
SR-IOV
2016
WINDOWS SERVER
OPERATING SYSTEM VERSION
8.0-8.8 (JESSIE)
7.0-7.11 (WHEEZY)
VHDX resize
2016, 2012 R2
✔ Note 1
✔ Note 1
Virtual Fibre Channel
2016, 2012 R2
Live virtual machine backup
2016, 2012 R2
✔ Note 4,5
✔ Note 4
TRIM support
2016, 2012 R2
SCSI WWN
2016, 2012 R2
FEATURE
Storage
Memory
PAE Kernel Support
2016, 2012 R2, 2012, 2008
R2
✔
✔
Configuration of MMIO gap
2016, 2012 R2
✔
✔
Dynamic Memory - HotAdd
2016, 2012 R2, 2012
Dynamic Memory Ballooning
2016, 2012 R2, 2012
Runtime Memory Resize
2016
Video
2016, 2012 R2, 2012, 2008
R2
✔
Key-Value Pair
2016, 2012 R2, 2012, 2008
R2
✔ Note 4
Non-Maskable Interrupt
2016, 2012 R2
✔
File copy from host to guest
2016, 2012 R2
✔ Note 4
lsvmbus command
2016, 2012 R2, 2012, 2008
R2
Hyper-V Sockets
2016
PCI Passthrough/DDA
2016
Hyper-V-specificvideo device
Miscellaneous
Generation 2 virtual
machines
✔
FEATURE
WINDOWS SERVER
OPERATING SYSTEM VERSION
8.0-8.8 (JESSIE)
Boot using UEFI
2016, 2012 R2
✔ Note 7
Secure boot
2016
7.0-7.11 (WHEEZY)
Notes
1. Creating file systems on VHDs larger than 2TB is not supported.
2. On Windows Server 2008 R2, SCSI disks create 8 different entries in /dev/sd*.
3. On Windows Server 2012 R2, a VM with 8 or more cores will have all interrupts routed to a single vCPU.
4. Starting with Debian 8.3, the manually installed Debian package "hyperv-daemons" contains the key-value
pair, fcopy, and VSS daemons. On Debian 7.x and 8.0-8.2, the hyperv-daemons package must come from
Debian backports (see the sketch after these notes).
5. Live virtual machine backup will not work with ext2 file systems. The default layout created by the Debian
installer includes ext2 file systems, so you must customize the layout to not create this file system type.
6. While Debian 7.x is out of support and uses an older kernel, the kernel included in Debian backports for
Debian 7.x has improved Hyper-V capabilities.
7. On Windows Server 2012 R2, Generation 2 virtual machines have secure boot enabled by default and some
Linux virtual machines will not boot unless the secure boot option is disabled. You can disable secure boot
in the Firmware section of the settings for the virtual machine in Hyper-V Manager or you can disable it
using PowerShell:
Set-VMFirmware -VMName "VMname" -EnableSecureBoot Off
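For Note 4, installing the daemons from backports on Debian 8.0-8.2 might look like the following sketch. It
assumes the jessie-backports suite has already been added to the APT sources; on Debian 7.x, the wheezy-backports
suite plays the same role:
# apt-get update
# apt-get -t jessie-backports install hyperv-daemons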
See Also
Supported CentOS and Red Hat Enterprise Linux virtual machines on Hyper-V
Supported Oracle Linux virtual machines on Hyper-V
Supported SUSE virtual machines on Hyper-V
Supported Ubuntu virtual machines on Hyper-V
Supported FreeBSD virtual machines on Hyper-V
Feature Descriptions for Linux and FreeBSD virtual machines on Hyper-V
Best Practices for running Linux on Hyper-V
Supported Oracle Linux virtual machines on Hyper-V
6/1/2017 • 8 min to read • Edit Online
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
The following feature distribution map indicates the features that are present in each version. The known issues
and workarounds for each distribution are listed after the table.
In this section:
Red Hat Compatible Kernel Series
Unbreakable Enterprise Kernel Series
Notes
Table legend
Built in - LIS are included as part of this Linux distribution. The kernel module version numbers for the
built in LIS (as shown by lsmod, for example) are different from the version number on the Microsoft-provided
LIS download package. A mismatch doesn't indicate that the built in LIS is out of date.
✔ - Feature available
(blank) - Feature not available
UEK R*x QU*y - Unbreakable Enterprise Kernel (UEK) where x is the release number and y is the quarterly
update.
Red Hat Compatible Kernel Series
The 32-bit kernel for the 6.x series is PAE enabled. There is no built-in LIS support for Oracle Linux RHCK 6.0-6.3.
Oracle Linux 7.x kernels are 64-bit only.
FEATURE
WINDOWS
SERVER
VERSION
Availabil
ity
Core
2016,
2012 R2,
2012,
2008 R2
Windows
Server
2016
Accurate
Time
2016
6.4-6.8
AND 7.07.3
6.4-6.8
AND 7.07.2
RHCK 7.07.2
RHCK 6.8
RHCK 6.6,
6.7
RHCK 6.5
RHCK6.4
LIS 4.2
LIS 4.1
Built in
Built in
Built in
Built in
Built in
✔
✔
✔
✔
✔
✔
WINDOWS
SERVER
VERSION
6.4-6.8
AND 7.07.3
6.4-6.8
AND 7.07.2
RHCK 7.07.2
RHCK 6.8
RHCK 6.6,
6.7
RHCK 6.5
RHCK6.4
Jumbo
frames
2016,
2012 R2,
2012,
2008 R2
✔
✔
✔
✔
✔
✔
✔
VLAN
tagging
and
trunking
2016,
2012 R2,
2012,
2008 R2
✔ (Note
✔ (Note
✔
✔ Note 1
✔ Note 1
✔ Note 1
✔ Note 1
1 for 6.46.8)
1 for 6.46.8)
Live
Migration
2016,
2012 R2,
2012,
2008 R2
✔
✔
✔
✔
✔
✔
✔
Static IP
Injection
2016,
2012 R2,
2012
✔
✔
✔
✔
✔
✔
✔
vRSS
2016,
2012 R2
✔
✔
✔
✔
✔
TCP
Segmenta
tion and
Checksu
m
Offloads
2016,
2012 R2,
2012,
2008 R2
✔
✔
✔
✔
✔
SR-IOV
2016
FEATURE
Networki
ng
Storage
VHDX
resize
2016,
2012 R2
✔
✔
✔
✔
✔
✔
Virtual
Fibre
Channel
2016,
2012 R2
✔ Note 2
✔ Note 2
✔ Note 2
✔ Note 2
✔ Note 2
✔ Note 2
Live
virtual
machine
backup
2016,
2012 R2
✔ Note
✔ Note
✔ Note 3,
✔ Note 3,
✔ Note 3,
✔ Note 3,
✔ Note 3,
3, 4
3, 4
4, 11
4, 11
4, 11
4, 5, 11
4, 5, 11
TRIM
support
2016,
2012 R2
✔
✔
✔
✔
SCSI
WWN
2016,
2012 R2
✔
✔
WINDOWS
SERVER
VERSION
6.4-6.8
AND 7.07.3
6.4-6.8
AND 7.07.2
RHCK 7.07.2
RHCK 6.8
RHCK 6.6,
6.7
RHCK 6.5
RHCK6.4
PAE
Kernel
Support
2016,
2012 R2,
2012,
2008 R2
✔ (6.x
✔ (6.x
N/A
✔
✔
✔
✔
only)
only)
Configura
tion of
MMIO
gap
2016,
2012 R2
✔
✔
✔
✔
✔
✔
✔
Dynamic
Memory Hot-Add
2016,
2012 R2,
2012
✔ Note
✔ Note
✔ Note 6,
✔ Note 6,
✔ Note 6,
✔ Note 6,
7, 8, 9, 10
(Note 6
for 6.46.7)
7, 8, 9, 10
(Note 6
for 6.46.7)
7, 8, 9
7, 8, 9
7, 8, 9
7, 8, 9
Dynamic
Memory Balloonin
g
2016,
2012 R2,
2012
✔ Note
✔ Note
✔ Note 6,
✔ Note 6,
✔ Note 6,
✔ Note 6,
✔ Note 6,
7, 9, 10
(Note 6
for 6.46.7)
7, 9, 10
(Note 6
for 6.46.7)
8, 9
8, 9
8, 9
8, 9
8, 9, 10
Runtime
Memory
Resize
2016
2016,201
2 R2,
2012,
2008 R2
✔
✔
✔
✔
✔
✔
Key-Value
Pair
2016,
2012 R2,
2012,
2008 R2
✔
✔
✔ Note
✔ Note
✔ Note
✔ Note
✔ Note
12
12
12
12
12
NonMaskable
Interrupt
2016,
2012 R2
✔
✔
✔
✔
✔
✔
✔
File copy
from host
to guest
2016,
2012 R2
✔
✔
FEATURE
Memory
Video
Hyper-Vspecific
video
device
Miscellan
eous
✔
WINDOWS
SERVER
VERSION
6.4-6.8
AND 7.07.3
6.4-6.8
AND 7.07.2
lsvmbus
command
2016,
2012 R2,
2012,
2008 R2
✔
✔
Hyper-V
Sockets
2016
✔
✔
PCI
Passthrou
gh/DDA
2016
✔ Note
13
FEATURE
RHCK 7.07.2
RHCK 6.8
✔ Note
✔ Note
✔ Note
13
13
13
RHCK 6.6,
6.7
RHCK 6.5
RHCK6.4
Generati
on 2
virtual
machines
Boot
using
UEFI
2016,
2012 R2
Secure
boot
2016
Unbreakable Enterprise Kernel Series
The Oracle Linux Unbreakable Enterprise Kernel (UEK) is 64-bit only and has LIS support built-in.
FEATURE
WINDOWS SERVER
VERSION
Availability
Core
2016, 2012 R2,
2012, 2008 R2
Windows Server
2016 Accurate
Time
2016
UEK R4
UEK R3 QU3
UEK R3 QU2
UEK R3 QU1
Built in
Built in
Built in
Built in
✔
✔
✔
✔
Networking
Jumbo frames
2016, 2012 R2,
2012, 2008 R2
✔
✔
✔
✔
VLAN tagging
and trunking
2016, 2012 R2,
2012, 2008 R2
✔
✔
✔
✔
Live Migration
2016, 2012 R2,
2012, 2008 R2
✔
✔
✔
✔
FEATURE
WINDOWS SERVER
VERSION
UEK R4
UEK R3 QU3
UEK R3 QU2
✔
✔
Static IP Injection
2016, 2012 R2,
2012
✔
vRSS
2016, 2012 R2
✔
TCP
Segmentation
and Checksum
Offloads
2016, 2012 R2,
2012, 2008 R2
✔
SR-IOV
2016
UEK R3 QU1
Storage
VHDX resize
2016, 2012 R2
✔
✔
✔
Virtual Fibre
Channel
2016, 2012 R2
✔
✔
✔
Live virtual
machine backup
2016, 2012 R2
✔ Note 3, 4, 5,
✔ Note 3, 4, 5,
✔ Note 3, 4, 5,
11
11
11
TRIM support
2016, 2012 R2
SCSI WWN
2016, 2012 R2
✔
Memory
PAE Kernel
Support
2016, 2012 R2,
2012, 2008 R2
N/A
N/A
N/A
N/A
Configuration of
MMIO gap
2016, 2012 R2
✔
✔
✔
✔
Dynamic
Memory - HotAdd
2016, 2012 R2,
2012
✔
Dynamic
Memory Ballooning
2016, 2012 R2,
2012
✔
Runtime Memory
Resize
2016
✔
✔
Video
Hyper-V-specific
video device
Miscellaneous
2016, 2012 R2,
2012, 2008 R2
✔
FEATURE
WINDOWS SERVER
VERSION
UEK R4
UEK R3 QU3
UEK R3 QU2
UEK R3 QU1
Key-Value Pair
2016, 2012 R2,
2012, 2008 R2
✔ Note 11, 12
✔ Note 11, 12
✔ Note 11, 12
✔ Note 11, 12
Non-Maskable
Interrupt
2016, 2012 R2
✔
✔
✔
✔
File copy from
host to guest
2016, 2012 R2
✔ Note 11
✔ Note 11
✔ Note 11
✔ Note 11
lsvmbus
command
2016, 2012 R2,
2012, 2008 R2
Hyper-V Sockets
2016
PCI
Passthrough/DD
A
2016
Generation 2
virtual
machines
Boot using UEFI
2016, 2012 R2
✔
Secure boot
2016
✔
Notes
1. For this Oracle Linux release, VLAN tagging works but VLAN trunking does not.
2. While using virtual fibre channel devices, ensure that logical unit number 0 (LUN 0) has been populated. If
LUN 0 has not been populated, a Linux virtual machine might not be able to mount fibre channel devices
natively.
3. If there are open file handles during a live virtual machine backup operation, then in some corner cases, the
backed-up VHDs might have to undergo a file system consistency check (fsck) on restore.
4. Live backup operations can fail silently if the virtual machine has an attached iSCSI device or direct-attached
storage (also known as a pass-through disk).
5. Live backup support for Oracle Linux 6.4/6.5/UEKR3 QU2 and QU3 is available through Hyper-V Backup
Essentials for Linux.
6. Dynamic memory support is only available on 64-bit virtual machines.
7. Hot-Add support is not enabled by default in this distribution. To enable Hot-Add support you need to add
a udev rule under /etc/udev/rules.d/ as follows:
a. Create a file /etc/udev/rules.d/100-balloon.rules. You may use any other desired name for the
file.
b. Add the following content to the file:
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}="online"
c. Reboot the system to enable Hot-Add support.
While the Linux Integration Services download creates this rule on installation, the rule is also removed
when LIS is uninstalled, so the rule must be recreated if dynamic memory is needed after uninstallation.
8. Dynamic memory operations can fail if the guest operating system is running too low on memory. The
following are some best practices:
Startup memory and minimal memory should be equal to or greater than the amount of memory
that the distribution vendor recommends.
Applications that tend to consume the entire available memory on a system are limited to
consuming up to 80 percent of available RAM.
9. If you are using Dynamic Memory on a Windows Server 2016 or Windows Server 2012 R2 operating
system, specify Startup memory, Minimum memory, and Maximum memory parameters in multiples
of 128 megabytes (MB). Failure to do so can lead to hot-add failures, and you may not see any memory
increase in a guest operating system.
10. Certain distributions, including those using LIS 3.5 or LIS 4.0, only provide Ballooning support and do not
provide Hot-Add support. In such a scenario, the dynamic memory feature can be used by setting the
Startup memory parameter to a value equal to the Maximum memory parameter. This results in all of the
requisite memory being hot-added to the virtual machine at boot time; later, depending on the memory
requirements of the host, Hyper-V can freely allocate or deallocate memory from the guest using
Ballooning. Configure Startup Memory and Minimum Memory at or above the recommended value for
the distribution.
11. Oracle Linux Hyper-V daemons are not installed by default. To use these daemons, install the hyperv-daemons
package. This package conflicts with the downloaded Linux Integration Services and should not be
installed on systems with downloaded LIS.
12. The key/value pair (KVP) infrastructure might not function correctly without a Linux software update.
Contact your distribution vendor to obtain the software update in case you see problems with this feature.
13. On Windows Server 2012 R2, Generation 2 virtual machines have secure boot enabled by default and some
Linux virtual machines will not boot unless the secure boot option is disabled. You can disable secure boot
in the Firmware section of the settings for the virtual machine in Hyper-V Manager or you can disable it
using PowerShell:
Set-VMFirmware -VMName "VMname" -EnableSecureBoot Off
The Linux Integration Services download can be applied to existing Generation 2 VMs but does not impart
Generation 2 capability.
See Also
Set-VMFirmware
Supported CentOS and Red Hat Enterprise Linux virtual machines on Hyper-V
Supported Debian virtual machines on Hyper-V
Supported SUSE virtual machines on Hyper-V
Supported Ubuntu virtual machines on Hyper-V
Supported FreeBSD virtual machines on Hyper-V
Feature Descriptions for Linux and FreeBSD virtual machines on Hyper-V
Best Practices for running Linux on Hyper-V
Supported SUSE virtual machines on Hyper-V
4/24/2017 • 5 min to read • Edit Online
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
The following is a feature distribution map that indicates the features in each version. The known issues and
workarounds for each distribution are listed after the table.
The built-in SUSE Linux Enterprise Server drivers for Hyper-V are certified by SUSE. An example configuration can
be viewed in this bulletin: SUSE YES Certification Bulletin.
Table legend
Built in - LIS are included as part of this Linux distribution. The Microsoft-provided LIS download package
does not work for this distribution, so do not install it. The kernel module version numbers for the built in
LIS (as shown by lsmod, for example) are different from the version number on the Microsoft-provided LIS
download package. A mismatch doesn't indicate that the built in LIS is out of date.
✔ - Feature available
(blank) - Feature not available
SLES 12 and SLES 12 SP1 are 64-bit only.
FEATURE
WINDOWS
SERVER
OPERATIN
G SYSTEM
VERSION
Availabil
ity
SLES 12
SP2
SLES 12
SP1
SLES 12
SLES 11
SP4
SLES 11
SP3
SLES 11
SP2
OPEN SUSE
12.3
Built-in
Built-in
Built-in
Built-in
Built-in
Built-in
Built-in
✔
✔
✔
✔
✔
✔
✔
✔
✔
✔
✔
✔
Core
2016,
2012 R2,
2012,
2008 R2
✔
Windows
Server
2016
Accurate
Time
2016
✔
2016,
2012 R2,
2012,
2008 R2
✔
Networki
ng
Jumbo
frames
FEATURE
WINDOWS
SERVER
OPERATIN
G SYSTEM
VERSION
VLAN
tagging
and
trunking
SLES 12
SP2
SLES 12
SP1
SLES 12
SLES 11
SP4
SLES 11
SP3
SLES 11
SP2
OPEN SUSE
12.3
2016,
2012 R2,
2012,
2008 R2
✔
✔
✔
✔
✔
✔
✔
Live
migration
2016,
2012 R2,
2012,
2008 R2
✔
✔
✔
✔
✔
✔
✔
Static IP
Injection
2016,
2012 R2,
2012
✔Note 1
✔Note 1
✔Note 1
✔Note 1
✔Note 1
✔Note 1
✔Note 1
vRSS
2016,
2012 R2
✔
✔
✔
TCP
Segmenta
tion and
Checksu
m
Offloads
2016,
2012 R2,
2012,
2008 R2
✔
✔
✔
✔
SR-IOV
2016
✔
VHDX
resize
2016,
2012 R2
✔
✔
✔
✔
✔
Virtual
Fibre
Channel
2016,
2012 R2
✔
✔
✔
✔
✔
Live
virtual
machine
backup
2016,
2012 R2
✔ Note
✔ Note
✔ Note
✔ Note 2,
✔ Note 2,
2, 3, 8
2, 3, 8
2, 3, 8
3, 8
3, 8
TRIM
support
2016,
2012 R2
✔
✔
✔
✔
SCSI
WWN
2016,
2012 R2
✔
2016,
2012 R2,
2012,
2008 R2
N/A
N/A
✔
✔
✔
✔
Storage
Memory
PAE
Kernel
Support
✔
FEATURE
WINDOWS
SERVER
OPERATIN
G SYSTEM
VERSION
SLES 12
SP2
SLES 12
SP1
SLES 12
SLES 11
SP4
SLES 11
SP3
SLES 11
SP2
OPEN SUSE
12.3
✔
✔
Configura
tion of
MMIO
gap
2016,
2012 R2
✔
✔
✔
✔
✔
Dynamic
Memory Hot-Add
2016,
2012 R2,
2012
✔ Note
✔ Note
✔ Note
✔ Note 4,
✔ Note 4,
5, 6
5, 6
5, 6
5, 6
5, 6
Dynamic
Memory Balloonin
g
2016,
2012 R2,
2012
✔ Note
✔ Note
✔ Note
✔ Note 4,
✔ Note 4,
✔ Note 4,
5, 6
5, 6
5, 6
5, 6
5, 6
5, 6
Runtime
Memory
Resize
2016
✔ Note
5, 6
Video
2016,
2012 R2,
2012,
2008 R2
✔
✔
✔
✔
✔
Key/value
pair
2016,
2012 R2,
2012,
2008 R2
✔
✔
✔
✔ Note 7
✔ Note 7
✔ Note 7
✔ Note 7
NonMaskable
Interrupt
2016,
2012 R2
✔
✔
✔
✔
✔
✔
✔
File copy
from host
to guest
2016,
2012 R2
✔
✔
✔
✔
lsvmbus
command
2016,
2012 R2,
2012,
2008 R2
✔
Hyper-V
Sockets
2016
PCI
Passthrou
gh/DDA
2016
Hyper-Vspecific
video
device
Miscellan
eous
✔
✔
FEATURE
WINDOWS
SERVER
OPERATIN
G SYSTEM
VERSION
SLES 12
SP2
SLES 12
SP1
SLES 12
SLES 11
SP4
✔ Note 9
SLES 11
SP3
SLES 11
SP2
OPEN SUSE
12.3
Generati
on 2
virtual
machines
Boot
using
UEFI
2016,
2012 R2
✔ Note 9
✔ Note 9
✔ Note 9
Secure
boot
2016
✔
✔
✔
Notes
1. Static IP injection may not work if Network Manager has been configured for a given Hyper-V-specific
network adapter on the virtual machine. To ensure smooth functioning of static IP injection please ensure
that Network Manager is turned off completely or has been turned off for a specific network adapter
through its ifcfg-ethX file.
2. If there are open file handles during a live virtual machine backup operation, then in some corner cases, the
backed-up VHDs might have to undergo a file system consistency check (fsck) on restore.
3. Live backup operations can fail silently if the virtual machine has an attached iSCSI device or direct-attached storage (also known as a pass-through disk).
4. Dynamic memory operations can fail if the guest operating system is running too low on memory. The
following are some best practices:
Startup memory and minimal memory should be equal to or greater than the amount of memory
that the distribution vendor recommends.
Applications that tend to consume the entire available memory on a system are limited to
consuming up to 80 percent of available RAM.
5. Dynamic memory support is only available on 64-bit virtual machines.
6. If you are using Dynamic Memory on Windows Server 2016 or Windows Server 2012 operating systems,
specify Startup memory, Minimum memory, and Maximum memory parameters in multiples of 128
megabytes (MB). Failure to do so can lead to Hot-Add failures, and you may not see any memory increase
in a guest operating system.
7. In Windows Server 2016 or Windows Server 2012 R2, the key/value pair infrastructure might not function
correctly without a Linux software update. Contact your distribution vendor to obtain the software update
in case you see problems with this feature.
8. VSS backup will fail if a single partition is mounted multiple times.
9. On Windows Server 2012 R2, Generation 2 virtual machines have secure boot enabled by default and
Generation 2 Linux virtual machines will not boot unless the secure boot option is disabled. You can disable
secure boot in the Firmware section of the settings for the virtual machine in Hyper-V Manager or you can
disable it using PowerShell:
Set-VMFirmware -VMName "VMname" -EnableSecureBoot Off
See Also
Set-VMFirmware
Supported CentOS and Red Hat Enterprise Linux virtual machines on Hyper-V
Supported Debian virtual machines on Hyper-V
Supported Oracle Linux virtual machines on Hyper-V
Supported Ubuntu virtual machines on Hyper-V
Supported FreeBSD virtual machines on Hyper-V
Feature Descriptions for Linux and FreeBSD virtual machines on Hyper-V
Best Practices for running Linux on Hyper-V
Supported Ubuntu virtual machines on Hyper-V
6/1/2017 • 7 min to read • Edit Online
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
Beginning with Ubuntu 12.04, the "linux-virtual" package installs a kernel suitable for use as a guest virtual
machine. This package always depends on the latest minimal generic kernel image and headers used for virtual
machines. While its use is optional, the linux-virtual kernel loads fewer drivers and may boot faster and have less
memory overhead than a generic image.
To get full use of Hyper-V, install the appropriate linux-tools and linux-cloud-tools packages, which provide the
tools and daemons used with virtual machines. When using the linux-virtual kernel, install linux-tools-virtual and
linux-cloud-tools-virtual.
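For example, on a guest that uses the virtual kernel flavor, the packages named above can be installed in one
step. This is a sketch; the exact package set varies by release, as described in the notes that follow the table:
# apt-get update
# apt-get install linux-virtual linux-tools-virtual linux-cloud-tools-virtual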
The following feature distribution map indicates the features in each version. The known issues and workarounds
for each distribution are listed after the table.
Table legend
Built in - LIS are included as part of this Linux distribution. The Microsoft-provided LIS download package
doesn't work for this distribution, so don't install it. The kernel module version numbers for the built in LIS
(as shown by lsmod, for example) are different from the version number on the Microsoft-provided LIS
download package. A mismatch doesn't indicate that the built in LIS is out of date.
✔ - Feature available
(blank) - Feature not available
FEATURE
WINDOWS
SERVER
OPERATING
SYSTEM
VERSION
Availability
17.04
16.10
16.04 LTS
14.04 LTS
12.04 LTS
Built-in
Built-in
Built-in
Built-in
Built-in
✔
✔
✔
✔
Core
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
Windows
Server 2016
Accurate Time
2016
✔
✔
✔
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
Networking
Jumbo frames
FEATURE
WINDOWS
SERVER
OPERATING
SYSTEM
VERSION
17.04
16.10
16.04 LTS
14.04 LTS
12.04 LTS
VLAN tagging
and trunking
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
Live migration
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
Static IP
Injection
2016, 2012
R2, 2012
✔ Note 1
✔ Note 1
✔ Note 1
✔ Note 1
✔ Note 1
vRSS
2016, 2012
R2
✔
✔
✔
✔
TCP
Segmentation
and
Checksum
Offloads
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
SR-IOV
2016
✔
✔
✔
VHDX resize
2016, 2012
R2
✔
✔
✔
✔
Virtual Fibre
Channel
2016, 2012
R2
✔ Note 2
✔ Note 2
✔ Note 2
✔ Note 2
Live virtual
machine
backup
2016, 2012
R2
✔ Note 3, 4,
✔ Note 3, 4,
✔ Note 3, 4,
✔ Note 3, 4,
6
6
5
5
TRIM support
2016, 2012
R2
✔
✔
✔
✔
SCSI WWN
2016, 2012
R2
✔
✔
✔
✔
PAE Kernel
Support
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
Configuration
of MMIO gap
2016, 2012
R2
✔
✔
✔
✔
✔
Storage
✔
Memory
FEATURE
WINDOWS
SERVER
OPERATING
SYSTEM
VERSION
17.04
16.10
16.04 LTS
14.04 LTS
12.04 LTS
Dynamic
Memory Hot-Add
2016, 2012
R2, 2012
✔ Note 7, 8,
✔ Note 7, 8,
✔ Note 7, 8,
✔ Note 7, 8,
9
9
9
9
Dynamic
Memory Ballooning
2016, 2012
R2, 2012
✔ Note 7, 8,
✔ Note 7, 8,
✔ Note 7, 8,
✔ Note 7, 8,
9
9
9
9
Runtime
Memory
Resize
2016
✔
✔
✔
✔
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
Key/value pair
2016, 2012
R2, 2012,
2008 R2
✔ Note 6, 10
✔ Note 6, 10
✔ Note 5, 10
✔ Note 5, 10
✔ Note 5, 10
NonMaskable
Interrupt
2016, 2012
R2
✔
✔
✔
✔
✔
File copy from
host to guest
2016, 2012
R2
✔
✔
✔
✔
lsvmbus
command
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
Hyper-V
Sockets
2016
PCI
Passthrough/
DDA
2016
✔
✔
✔
✔
Boot using
UEFI
2016, 2012
R2
✔ Note 11,
✔ Note 11,
✔ Note 11,
✔ Note 11,
12
12
12
12
Secure boot
2016
✔
✔
✔
✔
Video
Hyper-V
specific video
device
Miscellaneou
s
Generation 2
virtual
machines
Notes
1. Static IP injection may not work if Network Manager has been configured for a given Hyper-V-specific
network adapter on the virtual machine. To ensure smooth functioning of static IP injection please ensure
that Network Manager is turned off completely or has been turned off for a specific network adapter
through its ifcfg-ethX file.
2. While using virtual fiber channel devices, ensure that logical unit number 0 (LUN 0) has been populated. If
LUN 0 has not been populated, a Linux virtual machine might not be able to mount fiber channel devices
natively.
3. If there are open file handles during a live virtual machine backup operation, then in some corner cases, the
backed-up VHDs might have to undergo a file system consistency check ( fsck ) on restore.
4. Live backup operations can fail silently if the virtual machine has an attached iSCSI device or directattached storage (also known as a pass-through disk).
5. On long-term support (LTS) releases, use the latest virtual Hardware Enablement (HWE) kernel for up-to-date
Linux Integration Services.
To install the virtual HWE kernel on 16.04, run the following commands as root (or sudo):
# apt-get update
# apt-get install linux-virtual-lts-xenial
To install the virtual HWE kernel on 14.04, run the following commands as root (or sudo):
# apt-get update
# apt-get install linux-virtual-lts-xenial
12.04 does not have a separate virtual kernel. To install the generic HWE kernel on 12.04, run the following
commands as root (or sudo):
# apt-get update
# apt-get install linux-generic-lts-trusty
On Ubuntu 12.04, 14.04, and 16.04 the following Hyper-V daemons are in a separately installed package:
VSS Snapshot daemon - This daemon is required to create live Linux virtual machine backups.
KVP daemon - This daemon allows setting and querying intrinsic and extrinsic key value pairs.
fcopy daemon - This daemon implements a file copying service between the host and guest.
To install these Hyper-V daemons on 16.04, run the following commands as root (or sudo):
# apt-get install linux-tools-virtual-lts-xenial linux-cloud-tools-virtual-lts-xenial
To install these Hyper-V daemons on 14.04, run the following commands as root (or sudo).
# apt-get install hv-kvp-daemon-init linux-tools-virtual-lts-xenial linux-cloud-tools-virtual-lts-xenial
To install the KVP daemon on 12.04, run the following commands as root (or sudo).
# apt-get install hv-kvp-daemon-init linux-tools-lts-trusty linux-cloud-tools-generic-lts-trusty
Whenever the kernel is updated, the virtual machine must be rebooted to use it.
6. On Ubuntu 17.04 and 16.10, use the latest virtual kernel to have up-to-date Hyper-V capabilities.
To install the virtual kernel on 17.04 and 16.10, run the following commands as root (or sudo):
# apt-get update
# apt-get install linux-image-virtual
On Ubuntu 17.04 and 16.10 the following Hyper-V daemons are in a separately installed package:
VSS Snapshot daemon - This daemon is required to create live Linux virtual machine backups.
KVP daemon - This daemon allows setting and querying intrinsic and extrinsic key value pairs.
fcopy daemon - This daemon implements a file copying service between the host and guest.
To install these Hyper-V daemons on 17.04 and 16.10, run the following commands as root (or sudo):
# apt-get install linux-tools-virtual linux-cloud-tools-virtual
Whenever the kernel is updated, the virtual machine must be rebooted to use it.
7. Dynamic memory support is only available on 64-bit virtual machines.
8. Dynamic Memory operations can fail if the guest operating system is running too low on memory. The
following are some best practices:
Startup memory and minimal memory should be equal to or greater than the amount of memory
that the distribution vendor recommends.
Applications that tend to consume the entire available memory on a system are limited to
consuming up to 80 percent of available RAM.
9. If you are using Dynamic Memory on Windows Server 2016 or Windows Server 2012 operating systems,
specify Startup memory, Minimum memory, and Maximum memory parameters in multiples of 128
megabytes (MB). Failure to do so can lead to Hot-Add failures, and you might not see any memory increase
on a guest operating system.
10. In Windows Server 2016 or Windows Server 2012 R2, the key/value pair infrastructure might not function
correctly without a Linux software update. Contact your distribution vendor to obtain the software update
in case you see problems with this feature.
11. On Windows Server 2012 R2, Generation 2 virtual machines have secure boot enabled by default and
some Linux virtual machines will not boot unless the secure boot option is disabled. You can disable secure
boot in the Firmware section of the settings for the virtual machine in Hyper-V Manager or you can
disable it using PowerShell:
Set-VMFirmware -VMName "VMname" -EnableSecureBoot Off
12. Before attempting to copy the VHD of an existing Generation 2 VHD virtual machine to create new
Generation 2 virtual machines, follow these steps:
a. Log in to the existing Generation 2 virtual machine.
b. Change directory to the boot EFI directory:
# cd /boot/efi/EFI
c. Copy the ubuntu directory in to a new directory named boot:
# sudo cp -r ubuntu/ boot
d. Change directory to the newly created boot directory:
# cd boot
e. Rename the shimx64.efi file:
# sudo mv shimx64.efi bootx64.efi
See Also
Supported CentOS and Red Hat Enterprise Linux virtual machines on Hyper-V
Supported Debian virtual machines on Hyper-V
Supported Oracle Linux virtual machines on Hyper-V
Supported SUSE virtual machines on Hyper-V
Feature Descriptions for Linux and FreeBSD virtual machines on Hyper-V
Best Practices for running Linux on Hyper-V
Set-VMFirmware
Ubuntu 14.04 in a Generation 2 VM - Ben Armstrong's Virtualization Blog
Supported FreeBSD virtual machines on Hyper-V
4/24/2017 • 2 min to read • Edit Online
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
The following feature distribution map indicates the features in each version. The known issues and workarounds
for each distribution are listed after the table.
Table legend
Built in - BIS are included as part of this FreeBSD release.
✔ - Feature available
(blank) - Feature not available
FEATURE
WINDOWS
SERVER
OPERATING
SYSTEM
VERSION
Availabilit
y
Core
2016, 2012
R2, 2012,
2008 R2
Windows
Server
2016
Accurate
Time
2016
11
10.3
10.2
10 - 10.1
9.1 - 9.3
8.4
Built in
Built in
Built in
Built in
Ports
Ports
✔
✔
✔
✔
✔
✔
Networkin
g
Jumbo
frames
2016, 2012
R2, 2012,
2008 R2
✔ Note 3
✔ Note 3
✔ Note 3
✔ Note 3
✔ Note 3
✔ Note 3
VLAN
tagging
and
trunking
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
✔
Live
migration
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
✔
✔
✔
FEATURE
WINDOWS
SERVER
OPERATING
SYSTEM
VERSION
Static IP
Injection
11
10.3
10.2
10 - 10.1
9.1 - 9.3
8.4
2016, 2012
R2, 2012
✔ Note 4
✔ Note 4
✔ Note 4
✔ Note 4
✔
✔
vRSS
2016, 2012
R2
✔
TCP
Segmentati
on and
Checksum
Offloads
2016, 2012
R2, 2012,
2008 R2
✔
✔
✔
Large
Receive
Offload
(LRO)
2016, 2012
R2, 2012,
2008 R2
✔
✔
SR-IOV
2016
Storage
Note 1
Note 1
Note 1
Note 1
Note 1,2
Note 1,2
Note 1,2
VHDX
resize
2016, 2012
R2
✔ Note 7
Virtual
Fibre
Channel
2016, 2012
R2
Live virtual
machine
backup
2016, 2012
R2
TRIM
support
2016, 2012
R2
SCSI WWN
2016, 2012
R2
✔
✔
✔
✔
✔
Memory
PAE Kernel
Support
2016, 2012
R2, 2012,
2008 R2
Configurati
on of
MMIO gap
2016, 2012
R2
Dynamic
Memory Hot-Add
2016, 2012
R2, 2012
✔
FEATURE
WINDOWS
SERVER
OPERATING
SYSTEM
VERSION
Dynamic
Memory Ballooning
2016, 2012
R2, 2012
Runtime
Memory
Resize
2016
11
10.3
10.2
10 - 10.1
9.1 - 9.3
8.4
✔
✔ Note 6
✔ Note 5,
✔ Note 6
✔ Note 6
✔
✔
Video
Hyper-V
specific
video
device
2016, 2012
R2, 2012,
2008 R2
Miscellane
ous
Key/value
pair
2016, 2012
R2, 2012,
2008 R2
✔
NonMaskable
Interrupt
2016, 2012
R2
✔
File copy
from host
to guest
2016, 2012
R2
lsvmbus
command
2016, 2012
R2, 2012,
2008 R2
Hyper-V
Sockets
2016
PCI
Passthroug
h/DDA
2016
Generatio
n 2 virtual
machines
Boot using
UEFI
2016, 2012
R2
Secure boot
2016
Notes
1. We suggest labeling disk devices to avoid a ROOT MOUNT ERROR during startup.
2. The virtual DVD drive may not be recognized when BIS drivers are loaded on FreeBSD 8.x and 9.x unless
you enable the legacy ATA driver through the following command.
# set hw.ata.disk_enable=1
# boot
3. 9126 is the maximum supported MTU size.
4. In a failover scenario, you cannot set a static IPv6 address in the replica server. Use an IPv4 address instead.
5. KVP is provided by ports on FreeBSD 10. See the FreeBSD 10 ports on FreeBSD.org for more information.
6. KVP may not work on Windows Server 2008 R2.
7. To make VHDX online resizing work properly in FreeBSD 11, a special manual step is required to work
around a GEOM bug that is fixed in 11+. After the host resizes the VHDX disk, open the disk for write and
run "gpart recover" as follows.
# dd if=/dev/da1 of=/dev/da1 count=0
# gpart recover da1
See Also
Feature Descriptions for Linux and FreeBSD virtual machines on Hyper-V
Best practices for running FreeBSD on Hyper-V
Feature Descriptions for Linux and FreeBSD virtual
machines on Hyper-V
4/24/2017 • 8 min to read • Edit Online
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
This article describes features available in components such as core, networking, storage, and memory when
using Linux and FreeBSD on a virtual machine.
Core
FEATURE
DESCRIPTION
Integrated shutdown
With this feature, an administrator can shut down virtual
machines from the Hyper-V Manager. For more information,
see Operating system shutdown.
Time synchronization
This feature ensures that the maintained time inside a virtual
machine is kept synchronized with the maintained time on
the host. For more information, see Time synchronization.
Windows Server 2016 Accurate Time
This feature allows the guest to use the Windows Server 2016
Accurate Time capability, which improves time
synchronization with the host to 1ms accuracy. For more
information, see Windows Server 2016 Accurate Time
Multiprocessing support
With this feature, a virtual machine can use multiple
processors on the host by configuring multiple virtual CPUs.
Heartbeat
With this feature, the host can track the state of the virtual
machine. For more information, see Heartbeat.
Integrated mouse support
With this feature, you can use a mouse on a virtual machine's
desktop and also use the mouse seamlessly between the
Windows Server desktop and the Hyper-V console for the
virtual machine.
Hyper-V specific Storage device
This feature grants high-performance access to storage
devices that are attached to a virtual machine.
Hyper-V specific Network device
This feature grants high-performance access to network
adapters that are attached to a virtual machine.
Networking
FEATURE
DESCRIPTION
Jumbo frames
With this feature, an administrator can increase the size of
network frames beyond 1500 bytes, which leads to a
significant increase in network performance.
VLAN tagging and trunking
This feature allows you to configure virtual LAN (VLAN) traffic
for virtual machines.
Live Migration
With this feature, you can migrate a virtual machine from one
host to another host. For more information, see Virtual
Machine Live Migration Overview.
Static IP Injection
With this feature, you can replicate the static IP address of a
virtual machine after it has been failed over to its replica on a
different host. Such IP replication ensures that network
workloads continue to work seamlessly after a failover event.
vRSS (Virtual Receive Side Scaling)
Spreads the load from a virtual network adapter across
multiple virtual processors in a virtual machine. For more
information, see Virtual Receive-side Scaling in Windows
Server 2012 R2.
TCP Segmentation and Checksum Offloads
Transfers segmentation and checksum work from the guest
CPU to the host virtual switch or network adapter during
network data transfers.
Large Receive Offload (LRO)
Increases inbound throughput of high-bandwidth
connections by aggregating multiple packets into a larger
buffer, decreasing CPU overhead.
SR-IOV
Single Root I/O Virtualization (SR-IOV) devices use DDA to
give guests access to portions of specific NICs, allowing for
reduced latency and increased throughput. SR-IOV requires
up-to-date physical function (PF) drivers on the host and
virtual function (VF) drivers on the guest.
Storage
FEATURE
DESCRIPTION
VHDX resize
With this feature, an administrator can resize a fixed-size
.vhdx file that is attached to a virtual machine. For more
information, see Online Virtual Hard Disk Resizing Overview.
Virtual Fibre Channel
With this feature, virtual machines can recognize a fiber
channel device and mount it natively. For more information,
see Hyper-V Virtual Fibre Channel Overview.
Live virtual machine backup
This feature facilitates zero down time backup of live virtual
machines.
Note that the backup operation does not succeed if the
virtual machine has virtual hard disks (VHDs) that are hosted
on remote storage such as a Server Message Block (SMB)
share or an iSCSI volume. Additionally, ensure that the
backup target does not reside on the same volume as the
volume that you back up.
TRIM support
TRIM hints notify the drive that certain sectors that were
previously allocated are no longer required by the app and
can be purged. This process is typically used when an app
makes large space allocations via a file and then self-manages
the allocations to the file, for example, to virtual hard disk
files.
SCSI WWN
The storvsc driver extracts World Wide Name (WWN)
information from the port and node of devices attached to
the virtual machine and creates the appropriate sysfs files.
Memory
FEATURE
DESCRIPTION
PAE Kernel Support
Physical Address Extension (PAE) technology allows a 32-bit
kernel to access a physical address space that is larger than
4GB. Older Linux distributions such as RHEL 5.x used to ship
a separate kernel that was PAE enabled. Newer distributions
such as RHEL 6.x have pre-built PAE support.
Configuration of MMIO gap
With this feature, appliance manufacturers can configure the
location of the Memory Mapped I/O (MMIO) gap. The MMIO
gap is typically used to divide the available physical memory
between an appliance's Just Enough Operating Systems
(JeOS) and the actual software infrastructure that powers the
appliance.
Dynamic Memory - Hot-Add
The host can dynamically increase or decrease the amount of
memory available to a virtual machine while it is in operation.
Before provisioning, the administrator enables Dynamic
Memory in the Virtual Machine Settings panel and specifies the
Startup Memory, Minimum Memory, and Maximum Memory
for the virtual machine. When the virtual machine is in
operation Dynamic Memory cannot be disabled and only the
Minimum and Maximum settings can be changed. (It is a best
practice to specify these memory sizes as multiples of
128MB.)
When the virtual machine is first started available memory is
equal to the Startup Memory. As Memory Demand
increases due to application workloads Hyper-V may
dynamically allocate more memory to the virtual machine via
the Hot-Add mechanism, if supported by that version of the
kernel. The maximum amount of memory allocated is capped
by the value of the Maximum Memory parameter.
The Memory tab of Hyper-V manager will display the amount
of memory assigned to the virtual machine, but memory
statistics within the virtual machine will show the highest
amount of memory allocated.
For more information, see Hyper-V Dynamic Memory
Overview.
Dynamic Memory - Ballooning
The host can dynamically increase or decrease the amount of
memory available to a virtual machine while it is in operation.
Before provisioning, the administrator enables Dynamic
Memory in the Virtual Machine Settings panel and specifies the
Startup Memory, Minimum Memory, and Maximum Memory
for the virtual machine. When the virtual machine is in
operation Dynamic Memory cannot be disabled and only the
Minimum and Maximum settings can be changed. (It is a best
practice to specify these memory sizes as multiples of
128MB.)
When the virtual machine is first started available memory is
equal to the Startup Memory. As Memory Demand
increases due to application workloads Hyper-V may
dynamically allocate more memory to the virtual machine via
the Hot-Add mechanism (above). As Memory Demand
decreases Hyper-V may automatically deprovision memory
from the virtual machine via the Balloon mechanism. Hyper-V
will not deprovision memory below the Minimum Memory
parameter.
The Memory tab of Hyper-V manager will display the amount
of memory assigned to the virtual machine, but memory
statistics within the virtual machine will show the highest
amount of memory allocated.
For more information, see Hyper-V Dynamic Memory
Overview.
Runtime Memory Resize
An administrator can set the amount of memory available to
a virtual machine while it is in operation, either increasing
memory ("Hot Add") or decreasing it ("Hot Remove").
Memory is returned to Hyper-V via the balloon driver (see
"Dynamic Memory - Ballooning"). The balloon driver
maintains a minimum amount of free memory after
ballooning, called the "floor", so assigned memory cannot be
reduced below the current demand plus this floor amount.
The Memory tab of Hyper-V manager will display the amount
of memory assigned to the virtual machine, but memory
statistics within the virtual machine will show the highest
amount of memory allocated. (It is a best practice to specify
memory values as multiples of 128MB.)
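As a host-side illustration of the Dynamic Memory settings described above, the following PowerShell sketch
enables Dynamic Memory with Startup, Minimum, and Maximum values that are multiples of 128 MB. The virtual
machine name is a placeholder, and the virtual machine must be off when Dynamic Memory is enabled:
Set-VMMemory -VMName "LinuxVM" -DynamicMemoryEnabled $true -StartupBytes 1024MB -MinimumBytes 512MB -MaximumBytes 4096MB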
Video
FEATURE
DESCRIPTION
Hyper-V-specific video device
This feature provides high-performance graphics and superior
resolution for virtual machines. This device does not provide
Enhanced Session Mode or RemoteFX capabilities.
Miscellaneous
FEATURE
DESCRIPTION
KVP (Key-Value pair) exchange
This feature provides a key/value pair (KVP) exchange service
for virtual machines. Typically administrators use the KVP
mechanism to perform read and write custom data
operations on a virtual machine. For more information, see
Data Exchange: Using key-value pairs to share information
between the host and guest on Hyper-V.
Non-Maskable Interrupt
With this feature, an administrator can issue Non-Maskable
Interrupts (NMI) to a virtual machine. NMIs are useful in
obtaining crash dumps of operating systems that have
become unresponsive due to application bugs. These crash
dumps can be diagnosed after you restart.
File copy from host to guest
This feature allows files to be copied from the host physical
computer to the guest virtual machines without using the
network adaptor. For more information, see Guest services.
lsvmbus command
This command gets information about devices on the Hyper-V
virtual machine bus (VMBus), similar to information
commands like lspci.
Hyper-V Sockets
This is an additional communication channel between the
host and guest operating system. To load and use the Hyper-V Sockets kernel module, see Make your own integration
services.
PCI Passthrough/DDA
With Windows Server 2016 administrators can pass through
PCI Express devices via the Discrete Device Assignment
mechanism. Common devices are network cards, graphics
cards, and special storage devices. The virtual machine will
require the appropriate driver to use the exposed hardware.
The hardware must be assigned to the virtual machine for it
to be used.
For more information, see Discrete Device Assignment Description and Background.
DDA is a pre-requisite for SR-IOV networking. Virtual ports
will need to be assigned to the virtual machine and the virtual
machine must use the correct Virtual Function (VF) drivers for
device multiplexing.
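As a host-side example of the file copy feature described above, the following PowerShell sketch enables the
Guest Service Interface integration service and copies a file into the guest. The virtual machine name and the
paths are placeholders, and the fcopy daemon must be running in the guest:
Enable-VMIntegrationService -VMName "LinuxVM" -Name "Guest Service Interface"
Copy-VMFile "LinuxVM" -SourcePath "C:\host\demo.txt" -DestinationPath "/tmp/demo.txt" -CreateFullPath -FileSource Host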
Generation 2 virtual machines
FEATURE
DESCRIPTION
Boot using UEFI
This feature allows virtual machines to boot using Unified
Extensible Firmware Interface (UEFI).
For more information, see Generation 2 Virtual Machine
Overview.
Secure boot
This feature allows virtual machines to use UEFI based secure
boot mode. When a virtual machine is booted in secure
mode, various operating system components are verified
using signatures present in the UEFI data store.
For more information, see Secure Boot.
See Also
Supported CentOS and Red Hat Enterprise Linux virtual machines on Hyper-V
Supported Debian virtual machines on Hyper-V
Supported Oracle Linux virtual machines on Hyper-V
Supported SUSE virtual machines on Hyper-V
Supported Ubuntu virtual machines on Hyper-V
Supported FreeBSD virtual machines on Hyper-V
Best Practices for running Linux on Hyper-V
Best practices for running FreeBSD on Hyper-V
Best Practices for running Linux on Hyper-V
4/24/2017 • 4 min to read • Edit Online
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
This topic contains a list of recommendations for running Linux virtual machines on Hyper-V.
Tuning Linux File Systems on Dynamic VHDX Files
Some Linux file systems may consume significant amounts of real disk space even when the file system is mostly
empty. To reduce the amount of real disk space usage of dynamic VHDX files, consider the following
recommendations:
When creating the VHDX, use 1MB BlockSizeBytes (from the default 32MB) in PowerShell, for example:
PS > New-VHD -Path C:\MyVHDs\test.vhdx -SizeBytes 127GB -Dynamic -BlockSizeBytes 1MB
The ext4 format is preferred to ext3 because ext4 is more space efficient than ext3 when used with dynamic
VHDX files.
When creating the filesystem specify the number of groups to be 4096, for example:
# mkfs.ext4 -G 4096 /dev/sdX1
Grub Menu Timeout on Generation 2 Virtual Machines
Because of legacy hardware being removed from emulation in Generation 2 virtual machines, the grub menu
countdown timer counts down too quickly for the grub menu to be displayed, immediately loading the default
entry. Until grub is fixed to use the EFI-supported timer, modify /boot/grub/grub.conf, /etc/default/grub, or
equivalent to have "timeout=100000" instead of the default "timeout=5".
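For example, on a distribution that uses grub2 with /etc/default/grub, the equivalent setting is GRUB_TIMEOUT.
This sketch assumes that file and a RHEL/CentOS-style regeneration command; on Debian or Ubuntu, run
update-grub instead:
# sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=100000/' /etc/default/grub
# grub2-mkconfig -o /boot/grub2/grub.cfg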
PXE Boot on Generation 2 Virtual Machines
Because the PIT timer is not present in Generation 2 virtual machines, network connections to the PXE TFTP server
can be prematurely terminated and prevent the bootloader from reading the GRUB configuration and loading a
kernel from the server.
On RHEL 6.x, the legacy grub v0.97 EFI bootloader can be used instead of grub2 as described here:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/s1netboot-pxe-config-efi.html
On Linux distributions other than RHEL 6.x, similar steps can be followed to configure grub v0.97 to load Linux
kernels from a PxE server.
Additionally, on RHEL/CentOS 6.6 keyboard and mouse input will not work with the pre-install kernel which
prevents specifying installation options in the menu. A serial console must be configured to allow choosing
installation options.
In the efidefault file on the PxE server, add the following kernel parameter "console=ttyS1"
On the VM in Hyper-V, set up a COM port using this PowerShell cmdlet:
Set-VMComPort -VMName <Name> -Number 2 -Path \\.\pipe\dbg1
Specifying a kickstart file to the pre-install kernel would also avoid the need for keyboard and mouse input during
installation.
Use static MAC addresses with failover clustering
Linux virtual machines that will be deployed using failover clustering should be configured with a static media
access control (MAC) address for each virtual network adapter. In some versions of Linux, the networking
configuration may be lost after failover because a new MAC address is assigned to the virtual network adapter. To
avoid losing the network configuration, ensure that each virtual network adapter has a static MAC address. You
can configure the MAC address by editing the settings of the virtual machine in Hyper-V Manager or Failover
Cluster Manager.
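If you script host configuration, the same setting can be applied with the Hyper-V PowerShell module. A minimal sketch, assuming an example VM named LinuxVM01 and an example MAC value (both placeholders you would replace):

# Assign a static MAC address to the VM's network adapters (run on the Hyper-V host while the VM is off).
Set-VMNetworkAdapter -VMName "LinuxVM01" -StaticMacAddress "00155D010203"

# Verify that the adapter now reports a static MAC address.
Get-VMNetworkAdapter -VMName "LinuxVM01" | Format-Table Name, MacAddress, DynamicMacAddressEnabled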
Use Hyper-V-specific network adapters, not the legacy network
adapter
Configure and use the virtual Ethernet adapter, which is a Hyper-V-specific network card with enhanced
performance. If both legacy and Hyper-V-specific network adapters are attached to a virtual machine, the network
names in the output of ifconfig -a might show random values such as _tmp12000801310. To avoid this issue,
remove all legacy network adapters when using Hyper-V-specific network adapters in a Linux virtual machine.
Use I/O scheduler NOOP for better disk I/O performance
The Linux kernel has four different I/O schedulers to reorder requests with different algorithms. NOOP is a first-in
first-out queue that passes the scheduling decision to the hypervisor. It is recommended to use NOOP
as the scheduler when running Linux virtual machines on Hyper-V. To change the scheduler for a specific device, in
the boot loader's configuration (/etc/grub.conf, for example), add elevator=noop to the kernel parameters, and
then restart.
Add "numa=off" if the Linux virtual machine has more than 7 virtual
processors or more than 30 GB RAM
Linux virtual machines configured to use more than 7 virtual processors should add numa=off to the GRUB
boot.cfg to work around a known issue in the 2.6.x Linux kernels. Linux virtual machines configured to use more
than 30 GB RAM should also add numa=off to the GRUB boot.cfg.
Reserve more memory for kdump
In case the dump capture kernel ends up with a panic on boot, reserve more memory for the kernel. For example,
change the parameter crashkernel=384M-:128M to crashkernel=384M-:256M in the Ubuntu grub
configuration file.
Shrinking VHDX or expanding VHD and VHDX files can result in
erroneous GPT partition tables
Hyper-V allows shrinking virtual disk (VHDX) files without regard for any partition, volume, or file system data
structures that may exist on the disk. If the VHDX is shrunk to where the end of the VHDX comes before the end of
a partition, data can be lost, that partition can become corrupted, or invalid data can be returned when the
partition is read.
After resizing a VHD or VHDX, administrators should use a utility like fdisk or parted to update the partition,
volume, and file system structures to reflect the change in the size of the disk. Shrinking or expanding the size of a
VHD or VHDX that has a GUID Partition Table (GPT) will cause a warning when a partition management tool is
used to check the partition layout, and the administrator will be warned to fix the first and secondary GPT headers.
This manual step is safe to perform without data loss.
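For example, a hedged sketch of growing the dynamic VHDX created earlier, using the Hyper-V PowerShell module (the path and new size are placeholders; a VHD, or a VHDX attached to a running VM's IDE controller, must be taken offline first):

# Expand the virtual disk to 200 GB; this step does not touch the guest's partition table.
Resize-VHD -Path C:\MyVHDs\test.vhdx -SizeBytes 200GB

After the resize, run a partition tool such as parted or fdisk inside the guest to repair the GPT headers and extend the partition and file system, as described above.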
See also
Supported Linux and FreeBSD virtual machines for Hyper-V on Windows
Best practices for running FreeBSD on Hyper-V
Deploy a Hyper-V Cluster
Best practices for running FreeBSD on Hyper-V
4/24/2017 • 2 min to read • Edit Online
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
This topic contains a list of recommendations for running FreeBSD as a guest operating system on a Hyper-V
virtual machine.
Enable CARP in FreeBSD 10.2 on Hyper-V
The Common Address Redundancy Protocol (CARP) allows multiple hosts to share the same IP address and Virtual
Host ID (VHID) to help provide high availability for one or more services. If one or more hosts fail, the other hosts
transparently take over so users won't notice a service failure. To use CARP in FreeBSD 10.2, follow the instructions
in the FreeBSD handbook and do the following in Hyper-V Manager.
Verify the virtual machine has a Network Adapter and it's assigned a virtual switch. Select the virtual machine
and select Actions > Settings.
Enable MAC address spoofing. To do this,
1. Select the virtual machine and select Actions > Settings.
2. Expand Network Adapter and select Advanced Features.
3. Select Enable MAC Address spoofing.
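If you manage the host with PowerShell rather than Hyper-V Manager, a minimal sketch of the same change (the VM name is a placeholder; add -Name to target a single adapter):

# Enable MAC address spoofing on the network adapters of the example VM "FreeBSDVM01".
Set-VMNetworkAdapter -VMName "FreeBSDVM01" -MacAddressSpoofing On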
Create labels for disk devices
During startup, device nodes are created as new devices are discovered. This can mean that device names can
change when new devices are added. If you get a ROOT MOUNT ERROR during startup, you should create labels
for each IDE partition to avoid conflicts and changes. To learn how, see Labeling Disk Devices. Below are examples.
IMPORTANT
Make a backup copy of your fstab before making any changes.
1. Reboot the system into single user mode. This can be accomplished by selecting boot menu option 2 for
FreeBSD 10.3+ (option 4 for FreeBSD 8.x), or performing a 'boot -s' from the boot prompt.
2. In Single user mode, create GEOM labels for each of the IDE disk partitions listed in your fstab (both root
and swap). Below is an example of FreeBSD 10.3.
# cat /etc/fstab
# Device        Mountpoint      FStype  Options Dump    Pass#
/dev/da0p2      /               ufs     rw      1       1
/dev/da0p3      none            swap    sw      0       0

# glabel label rootfs /dev/da0p2
# glabel label swap /dev/da0p3
# exit
Additional information on GEOM labels can be found at: Labeling Disk Devices.
3. The system will continue with multi-user boot. After the boot completes, edit /etc/fstab and replace the
conventional device names, with their respective labels. The final /etc/fstab will look like this:
# Device                Mountpoint      FStype  Options Dump    Pass#
/dev/label/rootfs       /               ufs     rw      1       1
/dev/label/swap         none            swap    sw      0       0
4. The system can now be rebooted. If everything went well, it will come up normally and mount will show:
# mount
/dev/label/rootfs on / (ufs, local, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
Use a wireless network adapter as the virtual switch
If the virtual switch on the host is based on a wireless network adapter, reduce the ARP expiration time to 60
seconds by using the following command. Otherwise, the VM's networking may stop working after a while.
# sysctl net.link.ether.inet.max_age=60
See also
Supported FreeBSD virtual machines on Hyper-V
Hyper-V feature compatibility by generation and
guest
4/24/2017 • 2 min to read • Edit Online
Applies To: Windows Server 2016
The tables in this article show you the generations and operating systems that are compatible with some of the
Hyper-V features, grouped by categories. In general, you'll get the best availability of features with a generation 2
virtual machine that runs the newest operating system.
Keep in mind that some features rely on hardware or other infrastructure. For hardware details, see System
requirements for Hyper-V on Windows Server 2016. In some cases, a feature can be used with any supported
guest operating system. For details on which operating systems are supported, see:
Supported Linux and FreeBSD virtual machines
Supported Windows guest operating systems
Availability and backup

FEATURE | GENERATION | GUEST OPERATING SYSTEM
Checkpoints | 1 and 2 | Any supported guest
Guest clustering | 1 and 2 | Guests that run cluster-aware applications and have iSCSI target software installed
Replication | 1 and 2 | Any supported guest

Compute

FEATURE | GENERATION | GUEST OPERATING SYSTEM
Dynamic memory | 1 and 2 | Specific versions of supported guests. See Hyper-V Dynamic Memory Overview for versions older than Windows Server 2016 and Windows 10.
Hot add/removal of memory | 1 and 2 | Windows Server 2016, Windows 10
Virtual NUMA | 1 and 2 | Any supported guest

Development and test

FEATURE | GENERATION | GUEST OPERATING SYSTEM
COM/Serial ports | 1 and 2. Note: For generation 2, use Windows PowerShell to configure. For details, see Add a COM port for kernel debugging. | Any supported guest

Mobility

FEATURE | GENERATION | GUEST OPERATING SYSTEM
Live migration | 1 and 2 | Any supported guest
Import/export | 1 and 2 | Any supported guest

Networking

FEATURE | GENERATION | GUEST OPERATING SYSTEM
Hot add/removal of virtual network adapter | 2 | Any supported guest
Legacy virtual network adapter | 1 | Any supported guest
Single root input/output virtualization (SR-IOV) | 1 and 2 | 64-bit Windows guests, starting with Windows Server 2012 and Windows 8.
Virtual machine multi queue (VMMQ) | 1 and 2 | Any supported guest

Remote connection experience

FEATURE | GENERATION | GUEST OPERATING SYSTEM
Discrete device assignment (DDA) | 1 and 2 | Windows Server 2016, Windows Server 2012 R2 only with Update 3133690 installed, Windows 10. Note: For details on Update 3133690, see this support article.
Enhanced session mode | 1 and 2 | Windows Server 2016, Windows Server 2012 R2, Windows 10, and Windows 8.1, with Remote Desktop Services enabled. Note: You might need to also configure the host. For details, see Use local resources on Hyper-V virtual machine with VMConnect.
RemoteFx | 1 and 2 | Generation 1 on 32-bit and 64-bit Windows versions starting with Windows 8. Generation 2 on 64-bit Windows 10 versions.

Security

FEATURE | GENERATION | GUEST OPERATING SYSTEM
Secure boot | 2 | Linux: Ubuntu 14.04 and later, SUSE Linux Enterprise Server 12 and later, Red Hat Enterprise Linux 7.0 and later, and CentOS 7.0 and later. Windows: All supported versions that can run on a generation 2 virtual machine.
Shielded virtual machines | 2 | Windows: All supported versions that can run on a generation 2 virtual machine.

Storage

FEATURE | GENERATION | GUEST OPERATING SYSTEM
Shared virtual hard disks (VHDX only) | 1 and 2 | Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
SMB3 | 1 and 2 | All that support SMB3
Storage spaces direct | 2 | Windows Server 2016
Virtual Fibre Channel | 1 and 2 | Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
VHDX format | 1 and 2 | Any supported guest
Get started with Hyper-V on Windows Server 2016
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
Use the following resources to set up and try out Hyper-V on the Nano Server, Server Core or GUI installation
option of Windows Server 2016. But before you install anything, check the System Requirements for Windows
Server and the System Requirements for Hyper-V.
Download and install Windows Server
To use Nano Server as a virtual machine host:
Deploy Nano Server
Use Hyper-V on Nano Server
To use the Server Core or GUI installation option of Windows Server 2016 as a virtual machine host:
Install the Hyper-V role on Windows Server 2016
Create a virtual switch for Hyper-V virtual machines
Create a virtual machine in Hyper-V
Install the Hyper-V role on Windows Server 2016
4/24/2017 • 3 min to read • Edit Online
Applies To: Windows Server 2016
To create and run virtual machines, install the Hyper-V role on Windows Server 2016 by using Server Manager or
the Install-WindowsFeature cmdlet in Windows PowerShell. To install the Hyper-V role on a Nano Server, see
Getting Started with Nano Server. For Windows 10, see Install Hyper-V on Windows 10.
To learn more about Hyper-V, see the Hyper-V Technology Overview. To try out Windows Server 2016, you can
download and install an evaluation copy. See the Evaluation Center.
Before you install Windows Server 2016 or add the Hyper-V role, make sure that:
Your computer hardware is compatible. For details, see System Requirements for Windows Server and System
requirements for Hyper-V on Windows Server 2016.
You don't plan to use third-party virtualization apps that rely on the same processor features that Hyper-V
requires. Examples include VMWare Workstation and VirtualBox. You can install Hyper-V without uninstalling
these other apps. But, if you try to use them to manage virtual machines when the Hyper-V hypervisor is
running, the virtual machines might not start or might run unreliably. For details and instructions for turning
off the Hyper-V hypervisor if you need to use one of these apps, see Virtualization applications do not work
together with Hyper-V, Device Guard, and Credential Guard.
If you want to install only the management tools, such as Hyper-V Manager, see Remotely manage Hyper-V hosts
with Hyper-V Manager.
Install Hyper-V by using Server Manager
1. In Server Manager, on the Manage menu, click Add Roles and Features.
2. On the Before you begin page, verify that your destination server and network environment are prepared
for the role and feature you want to install. Click Next.
3. On the Select installation type page, select Role-based or feature-based installation and then click
Next.
4. On the Select destination server page, select a server from the server pool and then click Next.
5. On the Select server roles page, select Hyper-V.
6. To add the tools that you use to create and manage virtual machines, click Add Features. On the Features
page, click Next.
7. On the Create Virtual Switches page, Virtual Machine Migration page, and Default Stores page, select
the appropriate options.
8. On the Confirm installation selections page, select Restart the destination server automatically if
required, and then click Install.
9. When installation is finished, verify that Hyper-V installed correctly. Open the All Servers page in Server
Manager and select a server on which you installed Hyper-V. Check the Roles and Features tile on the
page for the selected server.
Install Hyper-V by using the Install-WindowsFeature cmdlet
1. On the Windows desktop, click the Start button and type any part of the name Windows PowerShell.
2. Right-click Windows PowerShell and select Run as Administrator.
3. To install Hyper-V on a server you're connected to remotely, run the following command and replace
<computer_name> with the name of the server.
Install-WindowsFeature -Name Hyper-V -ComputerName <computer_name> -IncludeManagementTools -Restart
If you're connected locally to the server, run the command without -ComputerName <computer_name>.
4. After the server restarts, you can see that the Hyper-V role is installed and see what other roles and features
are installed by running the following command:
Get-WindowsFeature -ComputerName <computer_name>
If you're connected locally to the server, run the command without -ComputerName <computer_name>.
NOTE
If you install this role on a server that runs the Server Core installation option of Windows Server 2016 and use the
parameter -IncludeManagementTools , only the Hyper-V Module for Windows PowerShell will be installed. You can use the
GUI management tool, Hyper-V Manager, on another computer to remotely manage a Hyper-V host that runs on a Server
Core installation. For instructions on connecting remotely, see Remotely manage Hyper-V hosts with Hyper-V Manager.
See also
Install-WindowsFeature
Create a virtual switch for Hyper-V virtual machines
4/24/2017 • 3 min to read • Edit Online
Applies To: Windows 10, Windows Server 2016
A virtual switch allows virtual machines created on Hyper-V hosts to communicate with other computers. You can
create a virtual switch when you first install the Hyper-V role on Windows Server 2016. To create additional virtual
switches, use Hyper-V Manager or Windows PowerShell. To learn more about virtual switches, see Hyper-V Virtual
Switch.
Virtual machine networking can be a complex subject. And there are several new virtual switch features that you
may want to use like Switch Embedded Teaming (SET). But basic networking is fairly easy to do. This topic covers
just enough so that you can create networked virtual machines in Hyper-V. To learn more about how you can set
up your networking infrastructure, review the Networking documentation.
Create a virtual switch by using Hyper-V Manager
1. Open Hyper-V Manager, select the Hyper-V host computer name.
2. Select Action > Virtual Switch Manager.
3. Choose the type of virtual switch you want.
CONNECTION TYPE
DESCRIPTION
External
Gives virtual machines access to a physical network to
communicate with servers and clients on an external
network. Allows virtual machines on the same Hyper-V
server to communicate with each other.
Internal
Allows communication between virtual machines on the
same Hyper-V server, and between the virtual machines
and the management host operating system.
Private
Only allows communication between virtual machines on
the same Hyper-V server. A private network is isolated
from all external network traffic on the Hyper-V server.
This type of network is useful when you must create an
isolated networking environment, like an isolated test
domain.
4. Select Create Virtual Switch.
5. Add a name for the virtual switch.
6. If you select External, choose the network adapter (NIC) that you want to use and any other options
described in the following table.
SETTING NAME
DESCRIPTION
Allow management operating system to share this
network adapter
Select this option if you want to allow the Hyper-V host to
share the use of the virtual switch and NIC or NIC team
with the virtual machine. With this enabled, the host can
use any of the settings that you configure for the virtual
switch like Quality of Service (QoS) settings, security
settings, or other features of the Hyper-V virtual switch.
Enable single-root I/O virtualization (SR-IOV)
Select this option only if you want to allow virtual machine
traffic to bypass the virtual machine switch and go directly
to the physical NIC. For more information, see Single-Root
I/O Virtualization in the Poster Companion Reference:
Hyper-V Networking.
7. If you want to isolate network traffic from the management Hyper-V host operating system or other virtual
machines that share the same virtual switch, select Enable virtual LAN identification for management
operating system. You can change the VLAN ID to any number or leave the default. This is the virtual LAN
identification number that the management operating system will use for all network communication
through this virtual switch.
8. Click OK.
9. Click Yes.
Create a virtual switch by using Windows PowerShell
1. On the Windows desktop, click the Start button and type any part of the name Windows PowerShell.
2. Right-click Windows PowerShell and select Run as Administrator.
3. Find existing network adapters by running the Get-NetAdapter cmdlet. Make a note of the network adapter
name that you want to use for the virtual switch.
Get-NetAdapter
4. Create a virtual switch by using the New-VMSwitch cmdlet. For example, to create an external virtual switch
named ExternalSwitch, using the ethernet network adapter, and with Allow management operating
system to share this network adapter turned on, run the following command.
New-VMSwitch -name ExternalSwitch -NetAdapterName Ethernet -AllowManagementOS $true
To create an internal switch, run the following command.
New-VMSwitch -name InternalSwitch -SwitchType Internal
To create a private switch, run the following command.
New-VMSwitch -name PrivateSwitch -SwitchType Private
For more advanced Windows PowerShell scripts that cover improved or new virtual switch features in Windows
Server 2016, see Remote Direct Memory Access and Switch Embedded Teaming.
Next step
Create a virtual machine in Hyper-V
Create a virtual machine in Hyper-V
4/24/2017 • 4 min to read • Edit Online
Applies To: Windows 10, Windows Server 2016
Learn how to create a virtual machine by using Hyper-V Manager and Windows PowerShell and what options you
have when you create a virtual machine in Hyper-V Manager.
Create a virtual machine by using Hyper-V Manager
1. Open Hyper-V Manager.
2. From the Action pane, click New, and then click Virtual Machine.
3. From the New Virtual Machine Wizard, click Next.
4. Make the appropriate choices for your virtual machine on each of the pages. For more information, see New
virtual machine options and defaults in Hyper-V Manager later in this topic.
5. After verifying your choices in the Summary page, click Finish.
6. In Hyper-V Manager, right-click the virtual machine and select connect.
7. In the Virtual Machine Connection window, select Action > Start.
Create a virtual machine by using Windows PowerShell
1. On the Windows desktop, click the Start button and type any part of the name Windows PowerShell.
2. Right-click Windows PowerShell and select Run as administrator.
3. Get the name of the virtual switch that you want the virtual machine to use by using Get-VMSwitch. For
example,
Get-VMSwitch * | Format-Table Name
4. Use the New-VM cmdlet to create the virtual machine. See the following examples.
NOTE
If you might move this virtual machine to a Hyper-V host that runs Windows Server 2012 R2, use the -Version
parameter with New-VM to set the virtual machine configuration version to 5. The default virtual machine
configuration version for Windows Server 2016 isn't supported by Windows Server 2012 R2 or earlier versions. You
can't change the virtual machine configuration version after the virtual machine is created. For more information, see
Supported virtual machine configuration versions.
Existing virtual hard disk - To create a virtual machine with an existing virtual hard disk, you can
use the following command where,
-Name is the name that you provide for the virtual machine that you're creating.
-MemoryStartupBytes is the amount of memory that is available to the virtual machine at start
up.
-BootDevice is the device that the virtual machine boots to when it starts like the network
adapter (NetworkAdapter) or virtual hard disk (VHD).
-VHDPath is the path to the virtual machine disk that you want to use.
-Path is the path to store the virtual machine configuration files.
-Generation is the virtual machine generation. Use generation 1 for VHD and generation 2 for
VHDX. See Should I create a generation 1 or 2 virtual machine in Hyper-V?.
-Switch is the name of the virtual switch that you want the virtual machine to use to connect
to other virtual machines or the network. See Create a virtual switch for Hyper-V virtual
machines.
New-VM -Name <Name> -MemoryStartupBytes <Memory> -BootDevice <BootDevice> -VHDPath
<VHDPath> -Path <Path> -Generation <Generation> -Switch <SwitchName>
For example:
New-VM -Name Win10VM -MemoryStartupBytes 4GB -BootDevice VHD -VHDPath .\VMs\Win10.vhdx -Path .\VMData -Generation 2 -Switch ExternalSwitch
This creates a generation 2 virtual machine named Win10VM with 4GB of memory. It boots
from the virtual hard disk VMs\Win10.vhdx in the current directory and uses the virtual switch named
ExternalSwitch. The virtual machine configuration files are stored in the folder VMData.
New virtual hard disk - To create a virtual machine with a new virtual hard disk, replace the -VHDPath parameter from the example above with -NewVHDPath and add the -NewVHDSizeBytes parameter. For example,
New-VM -Name Win10VM -MemoryStartupBytes 4GB -BootDevice VHD -NewVHDPath .\VMs\Win10.vhdx -Path
.\VMData -NewVHDSizeBytes 20GB -Generation 2 -Switch ExternalSwitch
New virtual hard disk that boots to operating system image - To create a virtual machine with a
new virtual disk that boots to an operating system image, see the PowerShell example in Create
virtual machine walkthrough for Hyper-V on Windows 10.
5. Start the virtual machine by using the Start-VM cmdlet. Run the following cmdlet where Name is the name
of the virtual machine you created.
Start-VM -Name <Name>
For example:
Start-VM -Name Win10VM
6. Connect to the virtual machine by using Virtual Machine Connection (VMConnect).
VMConnect.exe
Options in Hyper-V Manager New Virtual Machine Wizard
The following table lists the options you can pick when you create a virtual machine in Hyper-V Manager and the
defaults for each.
PAGE | DEFAULT FOR WINDOWS SERVER 2016 AND WINDOWS 10 | OTHER OPTIONS
Specify Name and Location | Name: New Virtual Machine. Location: C:\ProgramData\Microsoft\Windows\Hyper-V\. | You can also enter your own name and choose another location for the virtual machine. This is where the virtual machine configuration files will be stored.
Specify Generation | Generation 1 | You can also choose to create a Generation 2 virtual machine. For more information, see Should I create a generation 1 or 2 virtual machine in Hyper-V?.
Assign Memory | Startup memory: 1024 MB. Dynamic memory: not selected | You can set the startup memory from 32 MB to 5902 MB. You can also choose to use Dynamic Memory. For more information, see Hyper-V Dynamic Memory Overview.
Configure Networking | Not connected | You can select a network connection for the virtual machine to use from a list of existing virtual switches. See Create a virtual switch for Hyper-V virtual machines.
Connect Virtual Hard Disk | Create a virtual hard disk. Name: <vmname>.vhdx. Location: C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\. Size: 127GB | You can also choose to use an existing virtual hard disk or wait and attach a virtual hard disk later.
Installation Options | Install an operating system later | These options change the boot order of the virtual machine so that you can install from an .iso file, bootable floppy disk, or a network installation service, like Windows Deployment Services (WDS).
Summary | Displays the options that you have chosen, so that you can verify they are correct: Name, Generation, Memory, Network, Hard Disk, Operating System | Tip: You can copy the summary from the page and paste it into e-mail or somewhere else to help you keep track of your virtual machines.
See also
New-VM
Supported virtual machine configuration versions
Should I create a generation 1 or 2 virtual machine in Hyper-V?
Create a virtual switch for Hyper-V virtual machines
Plan for Hyper-V on Windows Server 2016
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
Use these resources to help you plan your Hyper-V deployment.
Hyper-V scalability in Windows Server 2016
Hyper-V security in Windows Server 2016
Networking in Windows Server 2016
Should I create a generation 1 or 2 virtual machine?
Plan for deploying devices using Discrete Device Assignment
Should I create a generation 1 or 2 virtual machine in
Hyper-V?
4/24/2017 • 9 min to read • Edit Online
Applies To: Microsoft Hyper-V Server 2016, Windows 10, Windows Server 2016
Your choice to create a generation 1 or generation 2 virtual machine depends on which guest operating system
you want to install and the boot method you want to use to deploy the virtual machine. We recommend that you
create a generation 2 virtual machine to take advantage of features like Secure Boot unless one of the following
statements is true:
The VHD you want to boot from is not UEFI-compatible.
You plan to move your virtual machine to Azure.
Generation 2 doesn't support the operating system you want to run on the virtual machine.
Generation 2 doesn't support the boot method you want to use.
For more information about what features are available with generation 2 virtual machines, see Hyper-V feature
compatibility by generation and guest.
You can't change a virtual machine's generation after you've created it. So, we recommend that you review the
considerations here, as well as choose the operating system, boot method, and features you want to use before
you choose a generation.
Which guest operating systems are supported?
Generation 1 virtual machines support most guest operating systems. Generation 2 virtual machines support
most 64-bit versions of Windows and more current versions of Linux and FreeBSD operating systems. Use the
following sections to see which generation of virtual machine supports the guest operating system you want to
install.
Windows guest operating system support
CentOS and Red Hat Enterprise Linux guest operating system support
Debian guest operating system support
FreeBSD guest operating system support
Oracle Linux guest operating system support
SUSE guest operating system support
Ubuntu guest operating system support
Windows guest operating system support
The following table shows which 64-bit versions of Windows you can use as a guest operating system for
generation 1 and generation 2 virtual machines.
64-BIT VERSIONS OF WINDOWS | GENERATION 1 | GENERATION 2
Windows Server 2012 R2 | ✔ | ✔
Windows Server 2012 | ✔ | ✔
Windows Server 2008 R2 | ✔ | ✖
Windows Server 2008 | ✔ | ✖
Windows 10 | ✔ | ✔
Windows 8.1 | ✔ | ✔
Windows 8 | ✔ | ✔
Windows 7 | ✔ | ✖
The following table shows which 32-bit versions of Windows you can use as a guest operating system for
generation 1 and generation 2 virtual machines.
32-BIT VERSIONS OF WINDOWS | GENERATION 1 | GENERATION 2
Windows 10 | ✔ | ✖
Windows 8.1 | ✔ | ✖
Windows 8 | ✔ | ✖
Windows 7 | ✔ | ✖
CentOS and Red Hat Enterprise Linux guest operating system support
The following table shows which versions of Red Hat Enterprise Linux (RHEL) and CentOS you can use as a guest
operating system for generation 1 and generation 2 virtual machines.
OPERATING SYSTEM VERSIONS | GENERATION 1 | GENERATION 2
RHEL/CentOS 7.x series | ✔ | ✔
RHEL/CentOS 6.x series | ✔ | ✖
RHEL/CentOS 5.x series | ✔ | ✖
For more information, see CentOS and Red Hat Enterprise Linux virtual machines on Hyper-V.
Debian guest operating system support
The following table shows which versions of Debian you can use as a guest operating system for generation 1 and
generation 2 virtual machines.
OPERATING SYSTEM VERSIONS | GENERATION 1 | GENERATION 2
Debian 7.x series | ✔ | ✖
Debian 8.x series | ✔ | ✔
For more information, see Debian virtual machines on Hyper-V.
FreeBSD guest operating system support
The following table shows which versions of FreeBSD you can use as a guest operating system for generation 1
and generation 2 virtual machines.
OPERATING SYSTEM VERSIONS | GENERATION 1 | GENERATION 2
FreeBSD 10 and 10.1 | ✔ | ✖
FreeBSD 9.1 and 9.3 | ✔ | ✖
FreeBSD 8.4 | ✔ | ✖
For more information, see FreeBSD virtual machines on Hyper-V.
Oracle Linux guest operating system support
The following table shows which versions of Red Hat Compatible Kernel Series you can use as a guest operating
system for generation 1 and generation 2 virtual machines.
RED HAT COMPATIBLE KERNEL SERIES VERSIONS | GENERATION 1 | GENERATION 2
Oracle Linux 7.x series | ✔ | ✔
Oracle Linux 6.x series | ✔ | ✖
The following table shows which versions of Unbreakable Enterprise Kernel you can use as a guest operating
system for generation 1 and generation 2 virtual machines.
UNBREAKABLE ENTERPRISE KERNEL (UEK) VERSIONS | GENERATION 1 | GENERATION 2
Oracle Linux UEK R3 QU3 | ✔ | ✖
Oracle Linux UEK R3 QU2 | ✔ | ✖
Oracle Linux UEK R3 QU1 | ✔ | ✖
For more information, see Oracle Linux virtual machines on Hyper-V.
SUSE guest operating system support
The following table shows which versions of SUSE you can use as a guest operating system for generation 1 and
generation 2 virtual machines.
OPERATING SYSTEM VERSIONS | GENERATION 1 | GENERATION 2
SUSE Linux Enterprise Server 12 series | ✔ | ✔
SUSE Linux Enterprise Server 11 series | ✔ | ✖
Open SUSE 12.3 | ✔ | ✖
For more information, see SUSE virtual machines on Hyper-V.
Ubuntu guest operating system support
The following table shows which versions of Ubuntu you can use as a guest operating system for generation 1
and generation 2 virtual machines.
OPERATING SYSTEM VERSIONS | GENERATION 1 | GENERATION 2
Ubuntu 14.04 and later versions | ✔ | ✔
Ubuntu 12.04 | ✔ | ✖
For more information, see Ubuntu virtual machines on Hyper-V.
How can I boot the virtual machine?
The following table shows which boot methods are supported by generation 1 and generation 2 virtual machines.
BOOT METHOD | GENERATION 1 | GENERATION 2
PXE boot by using a standard network adapter | ✖ | ✔
PXE boot by using a legacy network adapter | ✔ | ✖
Boot from a SCSI virtual hard disk (.VHDX) or virtual DVD (.ISO) | ✖ | ✔
Boot from IDE Controller virtual hard disk (.VHD) or virtual DVD (.ISO) | ✔ | ✖
Boot from floppy (.VFD) | ✔ | ✖
What are the advantages of using generation 2 virtual machines?
Here are some of the advantages you get when you use a generation 2 virtual machine:
Secure Boot - This is a feature that verifies the boot loader is signed by a trusted authority in the UEFI
database to help prevent unauthorized firmware, operating systems, or UEFI drivers from running at boot
time. Secure Boot is enabled by default for generation 2 virtual machines. If you need to run a guest
operating system that's not supported by Secure Boot, you can disable it after the virtual machine's created.
For more information, see Secure Boot.
To Secure Boot generation 2 Linux virtual machines, you need to choose the UEFI CA Secure Boot template
when you create the virtual machine (a PowerShell sketch of this setting follows this list).
Larger boot volume - The maximum boot volume for generation 2 virtual machines is 64 TB. This is the
maximum disk size supported by a .VHDX. For generation 1 virtual machines, the maximum boot volume is
2TB for a .VHDX and 2040GB for a .VHD. For more information, see Hyper-V Virtual Hard Disk Format
Overview.
You may also see a slight improvement in virtual machine boot and installation times with generation 2
virtual machines.
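A minimal PowerShell sketch of selecting the UEFI CA template mentioned in the Secure Boot item above, assuming an example VM named UbuntuVM01 (Set-VMFirmware applies to generation 2 virtual machines and is typically run while the VM is off):

# Keep Secure Boot on, but switch to the UEFI CA template so signed Linux boot loaders are trusted.
Set-VMFirmware -VMName "UbuntuVM01" -EnableSecureBoot On -SecureBootTemplate MicrosoftUEFICertificateAuthority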
What's the difference in device support?
The following table compares the devices available between generation 1 and generation 2 virtual machines.
GENERATION 1 DEVICE | GENERATION 2 REPLACEMENT | GENERATION 2 ENHANCEMENTS
IDE controller | Virtual SCSI controller | Boot from .vhdx (64 TB maximum size, and online resize capability)
IDE CD-ROM | Virtual SCSI CD-ROM | Support for up to 64 SCSI DVD devices per SCSI controller.
Legacy BIOS | UEFI firmware | Secure Boot
Legacy network adapter | Synthetic network adapter | Network boot with IPv4 and IPv6
Floppy controller and DMA controller | No floppy controller support | N/A
Universal asynchronous receiver/transmitter (UART) for COM ports | Optional UART for debugging | Faster and more reliable
i8042 keyboard controller | Software-based input | Uses fewer resources because there is no emulation. Also reduces the attack surface from the guest operating system.
PS/2 keyboard | Software-based keyboard | Uses fewer resources because there is no emulation. Also reduces the attack surface from the guest operating system.
PS/2 mouse | Software-based mouse | Uses fewer resources because there is no emulation. Also reduces the attack surface from the guest operating system.
S3 video | Software-based video | Uses fewer resources because there is no emulation. Also reduces the attack surface from the guest operating system.
PCI bus | No longer required | N/A
Programmable interrupt controller (PIC) | No longer required | N/A
Programmable interval timer (PIT) | No longer required | N/A
Super I/O device | No longer required | N/A
More about generation 2 virtual machines
Here are some additional tips about using generation 2 virtual machines.
Attach or add a DVD drive
You can't attach a physical CD or DVD drive to a generation 2 virtual machine. The virtual DVD drive in
generation 2 virtual machines only supports ISO image files. To create an ISO image file of a Windows
environment, you can use the Oscdimg command line tool. For more information, see Oscdimg Command-Line Options.
When you create a new virtual machine with the New-VM Windows PowerShell cmdlet, the generation 2
virtual machine doesn't have a DVD drive. You can add a DVD drive while the virtual machine is running.
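For example, a hedged sketch of adding a DVD drive backed by an ISO image with PowerShell (the VM name and ISO path are placeholders):

# Add a virtual DVD drive to the VM and attach an ISO image to it.
Add-VMDvdDrive -VMName "TestVM" -Path "C:\ISO\install_media.iso"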
Use UEFI firmware
Secure Boot or UEFI firmware isn't required on the physical Hyper-V host. Hyper-V provides virtual firmware to
virtual machines that is independent of what's on the Hyper-V host.
UEFI firmware in a generation 2 virtual machine doesn't support setup mode for Secure Boot.
We don't support running a UEFI shell or other UEFI applications in a generation 2 virtual machine. Using a
non-Microsoft UEFI shell or UEFI applications is technically possible if they are compiled directly from the
sources. If these applications are not appropriately digitally signed, you must disable Secure Boot for the virtual
machine.
Work with VHDX files
You can resize a VHDX file that contains the boot volume for a generation 2 virtual machine while the virtual
machine is running.
We don't support or recommend that you create a VHDX file that is bootable to both generation 1 and
generation 2 virtual machines.
The virtual machine generation is a property of the virtual machine, not a property of the virtual hard disk. So
you can't tell if a VHDX file was created by a generation 1 or a generation 2 virtual machine.
A VHDX file created with a generation 2 virtual machine can be attached to the IDE controller or the SCSI
controller of a generation 1 virtual machine. However, if this is a bootable VHDX file, the generation 1 virtual
machine won't boot.
Use IPv6 instead of IPv4
By default, generation 2 virtual machines use IPv4. To use IPv6 instead, run the Set-VMFirmware Windows
PowerShell cmdlet. For example, the following command sets the preferred protocol to IPv6 for a virtual machine
named TestVM:
Set-VMFirmware -VMName TestVM -IPProtocolPreference IPv6
Add a COM port for kernel debugging
COM ports aren't available in generation 2 virtual machines until you add them. You can do this with Windows
PowerShell or Windows Management Instrumentation (WMI). These steps show you how to do it with Windows
PowerShell.
To add a COM port:
1. Disable Secure Boot. Kernel debugging isn't compatible with Secure Boot. Make sure the virtual machine is
in an Off state, then use the Set-VMFirmware cmdlet. For example, the following command disables Secure
Boot on virtual machine TestVM:
Set-VMFirmware -Vmname TestVM -EnableSecureBoot Off
2. Add a COM port. Use the Set-VMComPort cmdlet to do this. For example, the following command
configures the first COM port on virtual machine, TestVM, to connect to the named pipe, TestPipe, on the
local computer:
Set-VMComPort -VMName TestVM 1 \\.\pipe\TestPipe
NOTE
Configured COM ports aren't listed in the settings of a virtual machine in Hyper-V Manager.
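Because Hyper-V Manager doesn't display them, you can confirm the configuration from PowerShell instead; for example, using the VM from the steps above:

# List the COM ports configured for the virtual machine, including their named-pipe paths.
Get-VMComPort -VMName TestVM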
See Also
Linux and FreeBSD Virtual Machines on Hyper-V
Use local resources on Hyper-V virtual machine with VMConnect
Plan for Hyper-V scalability in Windows Server 2016
Plan for Hyper-V networking in Windows Server 2016
4/24/2017 • 4 min to read • Edit Online
Applies To: Microsoft Hyper-V Server 2016, Windows Server 2016
A basic understanding of networking in Hyper-V helps you plan networking for virtual machines. This article also
covers some networking considerations when using live migration and when using Hyper-V with other server
features and roles.
Hyper-V networking basics
Basic networking in Hyper-V is fairly simple. It uses two parts - a virtual switch and a virtual networking adapter.
You'll need at least one of each to establish networking for a virtual machine. The virtual switch connects to any
Ethernet-based network. The virtual network adapter connects to a port on the virtual switch, which makes it
possible for a virtual machine to use a network.
The easiest way to establish basic networking is to create a virtual switch when you install Hyper-V. Then, when
you create a virtual machine, you can connect it to the switch. Connecting to the switch automatically adds a virtual
network adapter to the virtual machine. For instructions, see Create a virtual switch for Hyper-V virtual machines.
To handle different types of networking, you can add virtual switches and virtual network adapters. All switches are
part of the Hyper-V host, but each virtual network adapter belongs to only one virtual machine.
The virtual switch is a software-based layer-2 Ethernet network switch. It provides built-in features for monitoring,
controlling, and segmenting traffic, as well as security, and diagnostics. You can add to the set of built-in features
by installing plug-ins, also called extensions. These are available from independent software vendors. For more
information about the switch and extensions, see Hyper-V Virtual Switch.
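As a quick illustration, the built-in and installed extensions, and whether each is enabled, can be listed with the Hyper-V PowerShell module (the switch name is a placeholder):

# Show the extensions bound to a virtual switch.
Get-VMSwitchExtension -VMSwitchName "ExternalSwitch" | Format-Table Name, Vendor, Enabled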
Switch and network adapter choices
Hyper-V offers three types of virtual switches and two types of virtual network adapters. You'll choose which one
of each you want when you create it. You can use Hyper-V Manager or the Hyper-V module for Windows
PowerShell to create and manage virtual switches and virtual network adapters. Some advanced networking
capabilities, such as extended port access control lists (ACLs), can only be managed by using cmdlets in the Hyper-V module.
You can make some changes to a virtual switch or virtual network adapter after you create it. For example, it's
possible to change an existing switch to a different type, but doing that affects the networking capabilities of all the
virtual machines connected to that switch. So, you probably won't do this unless you made a mistake or need to
test something. As another example, you can connect a virtual network adapter to a different switch, which you
might do if you want to connect to a different network. But, you can't change a virtual network adapter from one
type to another. Instead of changing the type, you'd add another virtual network adapter and choose the
appropriate type.
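For example, a hedged sketch of reconnecting an existing adapter to a different switch (the VM, adapter, and switch names are placeholders; "Network Adapter" is the default adapter name):

# Move the VM's existing virtual network adapter to another virtual switch.
Connect-VMNetworkAdapter -VMName "TestVM" -Name "Network Adapter" -SwitchName "InternalSwitch"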
Virtual switch types are:
External virtual switch - Connects to a wired, physical network by binding to a physical network adapter.
Internal virtual switch - Connects to a network that can be used only by the virtual machines running on
the host that has the virtual switch, and between the host and the virtual machines.
Private virtual switch - Connects to a network that can be used only by the virtual machines running on
the host that has the virtual switch, but doesn't provide networking between the host and the virtual
machines.
Virtual network adapter types are:
Hyper-V specific network adapter - Available for both generation 1 and generation 2 virtual machines.
It's designed specifically for Hyper-V and requires a driver that's included in Hyper-V integration services.
This type of network adapter is faster and is the recommended choice unless you need to boot to the network
or are running an unsupported guest operating system. The required driver is provided only for supported
guest operating systems. Note that in Hyper-V Manager and the networking cmdlets, this type is just
referred to as a network adapter.
Legacy network adapter - Available only in generation 1 virtual machines. Emulates an Intel 21140-based
PCI Fast Ethernet Adapter and can be used to boot to a network so you can install an operating system from
a service such as Windows Deployment Services.
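If a generation 1 virtual machine needs a legacy adapter, for example to boot from the network, a minimal sketch (the VM and switch names are placeholders):

# Add a legacy (emulated) network adapter to a generation 1 VM so it can PXE boot.
Add-VMNetworkAdapter -VMName "Gen1VM" -IsLegacy $true -SwitchName "ExternalSwitch"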
Hyper-V networking and related technologies
Recent Windows Server releases introduced improvements that give you more options for configuring networking
for Hyper-V. For example, Windows Server 2012 introduced support for converged networking. This lets you route
network traffic through one external virtual switch. Windows Server 2016 builds on this by allowing Remote Direct
Memory Access (RDMA) on network adapters bound to a Hyper-V virtual switch. You can use this configuration
either with or without Switch Embedded Teaming (SET). For details, see Remote Direct Memory Access (RDMA)
and Switch Embedded Teaming (SET).
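As an illustration of the converged approach, a hedged sketch that creates a SET-enabled switch over two physical adapters (the switch and adapter names are placeholders; see the linked topic for full guidance):

# Create a virtual switch with Switch Embedded Teaming across two physical NICs.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true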
Some features rely on specific networking configurations or do better under certain configurations. Consider these
when planning or updating your network infrastructure.
Failover clustering - It's a best practice to isolate cluster traffic and use Hyper-V Quality of Service (QoS) on the
virtual switch. For details, see Network Recommendations for a Hyper-V Cluster
Live migration - Use performance options to reduce network and CPU usage and the time it takes to complete a
live migration. For instructions, see Set up hosts for live migration without Failover Clustering.
Storage Spaces Direct - This feature relies on the SMB3.0 network protocol and RDMA. For details, see Storage
Spaces Direct in Windows Server 2016.
Plan for Hyper-V scalability in Windows Server 2016
4/24/2017 • 4 min to read • Edit Online
Applies To: Windows Server 2016
This article gives you details about the maximum configuration for components you can add and remove on a
Hyper-V host or its virtual machines, such as virtual processors or checkpoints. As you plan your deployment,
consider the maximums that apply to each virtual machine, as well as those that apply to the Hyper-V host.
Maximums for memory and logical processors are the biggest increases from Windows Server 2012, in response
to requests to support newer scenarios such as machine learning and data analytics. The Windows Server blog
recently published the performance results of a virtual machine with 5.5 terabytes of memory and 128 virtual
processors running a 4 TB in-memory database. Performance was greater than 95% of the performance of a
physical server. For details, see Windows Server 2016 Hyper-V large-scale VM performance for in-memory
transaction processing. Other numbers are similar to those that apply to Windows Server 2012. (Maximums for
Windows Server 2012 R2 were the same as Windows Server 2012.)
NOTE
For information about System Center Virtual Machine Manager (VMM), see Virtual Machine Manager. VMM is a Microsoft
product for managing a virtualized data center that is sold separately.
Maximums for virtual machines
These maximums apply to each virtual machine. Not all components are available in both generations of virtual
machines. For a comparison of the generations, see Should I create a generation 1 or 2 virtual machine in Hyper-V?
COMPONENT | MAXIMUM | NOTES
Checkpoints | 50 | The actual number may be lower, depending on the available storage. Each checkpoint is stored as an .avhd file that uses physical storage.
Memory | 12 TB for generation 2; 1 TB for generation 1 | Review the requirements for the specific operating system to determine the minimum and recommended amounts.
Serial (COM) ports | 2 | None.
Size of physical disks attached directly to a virtual machine | Varies | Maximum size is determined by the guest operating system.
Virtual Fibre Channel adapters | 4 | As a best practice, we recommend that you connect each virtual Fibre Channel Adapter to a different virtual SAN.
Virtual floppy devices | 1 virtual floppy drive | None.
Virtual hard disk capacity | 64 TB for VHDX format; 2040 GB for VHD format | Each virtual hard disk is stored on physical media as either a .vhdx or a .vhd file, depending on the format used by the virtual hard disk.
Virtual IDE disks | 4 | The startup disk (sometimes called the boot disk) must be attached to one of the IDE devices. The startup disk can be either a virtual hard disk or a physical disk attached directly to a virtual machine.
Virtual processors | 240 for generation 2; 64 for generation 1; 320 available to the host OS (root partition) | The number of virtual processors supported by a guest operating system might be lower. For details, see the information published for the specific operating system.
Virtual SCSI controllers | 4 | Use of virtual SCSI devices requires integration services, which are available for supported guest operating systems. For details on which operating systems are supported, see Supported Linux and FreeBSD virtual machines and Supported Windows guest operating systems.
Virtual SCSI disks | 256 | Each SCSI controller supports up to 64 disks, which means that each virtual machine can be configured with as many as 256 virtual SCSI disks. (4 controllers x 64 disks per controller)
Virtual network adapters | 12 total: 8 Hyper-V specific network adapters and 4 legacy network adapters | The Hyper-V specific network adapter provides better performance and requires a driver included in integration services. For more information, see Plan for Hyper-V networking in Windows Server 2016.
Maximums for Hyper-V hosts
These maximums apply to each Hyper-V host.
COMPONENT | MAXIMUM | NOTES
Logical processors | 512 | Both of these must be enabled in the firmware: hardware-assisted virtualization and hardware-enforced Data Execution Prevention (DEP). The host OS (root partition) will only see a maximum of 320 logical processors.
Memory | 24 TB | None.
Network adapter teams (NIC Teaming) | No limits imposed by Hyper-V. | For details, see NIC Teaming.
Physical network adapters | No limits imposed by Hyper-V. | None.
Running virtual machines per server | 1024 | None.
Storage | Limited by what is supported by the host operating system. No limits imposed by Hyper-V. | Note: Microsoft supports network-attached storage (NAS) when using SMB 3.0. NFS-based storage is not supported.
Virtual network switch ports per server | Varies; no limits imposed by Hyper-V. | The practical limit depends on the available computing resources.
Virtual processors per logical processor | No ratio imposed by Hyper-V. | None.
Virtual processors per server | 2048 | None.
Virtual storage area networks (SANs) | No limits imposed by Hyper-V. | None.
Virtual switches | Varies; no limits imposed by Hyper-V. | The practical limit depends on the available computing resources.
Failover Clusters and Hyper-V
This table lists the maximums that apply when using Hyper-V and Failover Clustering. It's important to do capacity
planning to ensure that there will be enough hardware resources to run all the virtual machines in a clustered
environment.
To learn about updates to Failover Clustering, including new features for virtual machines, see What's New in
Failover Clustering in Windows Server 2016.
COMPONENT | MAXIMUM | NOTES
Nodes per cluster | 64 | Consider the number of nodes you want to reserve for failover, as well as maintenance tasks such as applying updates. We recommend that you plan for enough resources to allow for 1 node to be reserved for failover, which means it remains idle until another node is failed over to it. (This is sometimes referred to as a passive node.) You can increase this number if you want to reserve additional nodes. There is no recommended ratio or multiplier of reserved nodes to active nodes; the only requirement is that the total number of nodes in a cluster can't exceed the maximum of 64.
Running virtual machines per cluster and per node | 8,000 per cluster | Several factors can affect the real number of virtual machines you can run at the same time on one node, such as the amount of physical memory being used by each virtual machine, networking and storage bandwidth, and the number of disk spindles, which affects disk I/O performance.
Plan for Hyper-V security in Windows Server 2016
4/24/2017 • 3 min to read • Edit Online
Applies To: Windows Server 2016, Microsoft Hyper-V Server 2016
Secure the Hyper-V host operating system, the virtual machines, configuration files, and virtual machine data. Use
the following list of recommended practices as a checklist to help you secure your Hyper-V environment.
Secure the Hyper-V host
Keep the host OS secure.
Minimize the attack surface by using the minimum Windows Server installation option that you need for
the management operating system. For more information, see Installation Options for Windows Server
2016 and Getting started with Nano Server. We don't recommend that you run production workloads on
Hyper-V on Windows 10.
Keep the Hyper-V host operating system, firmware, and device drivers up to date with the latest security
updates. Check your vendor's recommendations to update firmware and drivers.
Don't use the Hyper-V host as a workstation or install any unnecessary software.
Remotely manage the Hyper-V host. If you must manage the Hyper-V host locally, use Credential Guard.
For more information, see Protect derived domain credentials with Credential Guard.
Enable code integrity policies. Use virtualization-based security protected Code Integrity services. For
more information, see Device Guard Deployment Guide.
Use a secure network.
Use a separate network with a dedicated network adapter for the physical Hyper-V computer.
Use a private or secure network to access VM configurations and virtual hard disk files.
Use a private/dedicated network for your live migration traffic. Consider enabling IPSec on this network
to use encryption and secure your VM's data going over the network during migration. For more
information, see Set up hosts for live migration without Failover Clustering.
Secure storage migration traffic. Use SMB 3.0 for end-to-end encryption of SMB data and to protect data from
tampering or eavesdropping on untrusted networks. Use a private network to access the SMB share contents to
prevent man-in-the-middle attacks. For more information, see SMB Security Enhancements.
Configure hosts to be part of a guarded fabric. For more information, see Guarded fabric.
Secure devices. Secure the storage devices where you keep virtual machine resource files.
Secure the hard drive. Use BitLocker Drive Encryption to protect resources.
Harden the Hyper-V host operating system. Use the baseline security setting recommendations described in
the Windows Server Security Baseline.
Grant appropriate permissions.
Add users that need to manage the Hyper-V host to the Hyper-V administrators group.
Don't grant virtual machine administrators permissions on the Hyper-V host operating system.
Configure anti-virus exclusions and options for Hyper-V. Windows Defender already has automatic
exclusions configured. For more information about exclusions, see Recommended antivirus exclusions for
Hyper-V hosts.
Don't mount unknown VHDs. This can expose the host to file system level attacks.
Don't enable nesting in your production environment unless it's required. If you enable nesting, don't
run unsupported hypervisors on a virtual machine.
For more secure environments:
Use hardware with a Trusted Platform Module (TPM) 2.0 chip to set up a guarded fabric. For more
information, see System requirements for Hyper-V on Windows Server 2016.
Secure virtual machines
Create generation 2 virtual machines for supported guest operating systems. For more information, see
Generation 2 security settings.
Enable Secure Boot. For more information, see Generation 2 security settings.
Keep the guest OS secure.
Install the latest security updates before you turn on a virtual machine in a production environment.
Install integration services for the supported guest operating systems that need it and keep it up to date.
Integration service updates for guests that run supported Windows versions are available through
Windows Update.
Harden the operating system that runs in each virtual machine based on the role it performs. Use the
baseline security setting recommendations that are described in the Windows Security Baseline.
Use a secure network. Make sure virtual network adapters connect to the correct virtual switch and have the
appropriate security setting and limits applied.
Store virtual hard disks and snapshot files in a secure location.
Secure devices. Configure only required devices for a virtual machine. Don't enable discrete device assignment
in your production environment unless you need it for a specific scenario. If you do enable it, make sure to only
expose devices from trusted vendors.
Configure antivirus, firewall, and intrusion detection software within virtual machines as appropriate
based on the virtual machine role.
Enable virtualization based security for guests that run Windows 10 or Windows Server 2016. For more
information, see the Device Guard Deployment Guide.
Only enable Discrete Device Assignment if needed for a specific workload. Due to the nature of passing
through a physical device, work with the device manufacturer to understand if it should be used in a secure
environment.
For more secure environments:
Deploy virtual machines with shielding enabled and deploy them to a guarded fabric. For more
information, see Generation 2 security settings and Guarded fabric.
Plan for Deploying Devices using Discrete Device
Assignment
4/24/2017 • 8 min to read • Edit Online
Applies To: Microsoft Hyper-V Server 2016, Windows Server 2016
Discrete Device Assignment allows physical PCIe hardware to be directly accessible from within a virtual machine.
This guide will discuss the type of devices that can use Discrete Device Assignment, host system requirements,
limitations imposed on the virtual machines as well as security implications of Discrete Device Assignment.
For Discrete Device Assignment's initial release, we have focused on two device classes to be formally supported
by Microsoft. These are Graphics Adapters and NVMe Storage devices. Other devices are likely to work and
hardware vendors are able to offer statements of support for those devices. For these other devices, please reach
out to those hardware vendors for support.
If you are ready to try out Discrete Device Assignment, you can jump over to Deploying Graphics Devices Using
Discrete Device Assignment or Deploying Storage Devices using Discrete Device Assignment to get started!
Supported Virtual Machines and Guest Operating Systems
Discrete Device Assignment is supported for Generation 1 or 2 VMs. The supported guests include Windows 10, Windows Server 2016, Windows Server 2012 R2 with KB 3133690 applied, and various distributions of the Linux OS.
System Requirements
In addition to the System Requirements for Windows Server and the System Requirements for Hyper-V, Discrete
Device Assignment requires server class hardware that is capable of granting the operating system control over
configuring the PCIe fabric (Native PCI Express Control). In addition, the PCIe Root Complex has to support "Access
Control Services" or ACS which enables Hyper-V to force all PCIe traffic through the I/O MMU.
These capabilities usually aren't exposed directly in the BIOS of the server and are often hidden behind other
settings. For example, the same capabilities are required for SR-IOV support and in the BIOS you may need to set
"Enable SR-IOV." Please reach out to your system vendor if you are unable to identify the correct setting in your
BIOS.
To help ensure that the hardware is capable of Discrete Device Assignment, our engineers have put together a Machine Profile Script that you can run on a Hyper-V enabled host to test whether your server is correctly set up and which devices are capable of Discrete Device Assignment.
Device Requirements
Not every PCIe device can be used with Discrete Device Assignment. For example, older devices that leverage legacy (INTx) PCI interrupts are not supported. Jake Oshins' blog posts go into more detail; for most users, running the Machine Profile Script will tell you which devices are capable of being used for Discrete Device Assignment.
Device manufacturers can reach out to their Microsoft representative for more details.
Device Driver
Because Discrete Device Assignment passes the entire PCIe device into the guest VM, a host driver is not required to be installed prior to the device being mounted within the VM. The only requirement on the host is that the device's PCIe Location Path can be determined. The device's driver can optionally be installed if this helps in identifying the device. For example, a GPU that doesn't have its device driver installed shows up as a Microsoft Basic Render Device; if the device driver is installed, its manufacturer and model will likely be displayed.
Once the device is mounted inside the guest, the manufacturer's device driver can be installed as normal inside the guest virtual machine.
Virtual Machine Limitations
Due to the nature of how Discrete Device Assignment is implemented, some features of a virtual machine are restricted while a device is attached. The following features are not available:
VM Save/Restore
Live Migration of a VM
The use of Dynamic Memory
Security
Discrete Device Assignment passes the entire device into the VM. This means all capabilities of that device are
accessible from the guest operating system. Some capabilities, like firmware updating, may adversely impact the
stability of the system. As such, numerous warnings are presented to the admin when dismounting the device
from the host. We highly recommend that Discrete Device Assignment is only used where the tenants of the VMs
are trusted.
If the admin desires to use a device with an untrusted tenant, we have provided device manufacturers with the ability to create a Device Mitigation driver that can be installed on the host. Please contact the device manufacturer for details on whether they provide a Device Mitigation Driver.
If you would like to bypass the security checks for a device that does not have a Device Mitigation Driver, you will have to pass the -Force parameter to the Dismount-VMHostAssignableDevice call. Understand that by doing so, you have changed the security profile of that system; this is recommended only for prototyping or trusted environments.
PCIe Location Path
The PCIe Location path is required to dismount and mount the device from the Host. An example location path
looks like the following: "PCIROOT(20)#PCI(0300)#PCI(0000)#PCI(0800)#PCI(0000)" . The Machine Profile Script will
also return the Location Path of the PCIe device.
Getting the Location Path by Using Device Manager
Open Device Manager and locate the device.
Right click and select “Properties.”
Navigate to the Details tab and select “Location Paths” in the Property drop down.
Right click the entry that begins with “PCIROOT” and select "Copy." You now have the location path for that
device.
MMIO Space
Some devices, especially GPUs, require additional MMIO space to be allocated to the VM for the memory of that
device to be accessible. By default, each VM starts off with 128 MB of low MMIO space and 512 MB of high MMIO space allocated to it, but a device might require more, or you might pass through multiple devices whose combined requirements exceed these values. Changing MMIO space is straightforward; you can do it in PowerShell using the following commands.
Set-VM -LowMemoryMappedIoSpace 3Gb -VMName $vm
Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName $vm
The easiest way to determine how much MMIO space to allocate is to use the Machine Profile Script. Alternatively, you can calculate it by using Device Manager; the blog post Discrete Device Assignment - GPUs has more details.
Machine Profile Script
To simplify identifying whether the server is configured correctly and which devices are available to be passed through using Discrete Device Assignment, one of our engineers put together the following PowerShell script:
SurveyDDA.ps1.
Before using the script, please ensure you have the Hyper-V role installed and you run the script from a
PowerShell command window that has Administrator privileges.
If the system is incorrectly configured to support Discrete Device Assignment, the tool will display an error
message as to what is wrong. If the tool finds the system configured correctly, it will enumerate all the devices it
can find on the PCIe Bus.
For each device it finds, the script displays whether it can be used with Discrete Device Assignment and, if not, the reason. When a device is successfully identified as being able to be used for Discrete Device Assignment, the device's Location Path will be displayed. Additionally, if that device requires MMIO space, that will also be listed.
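For example, assuming you have downloaded SurveyDDA.ps1 to the current folder on the Hyper-V host, you can run it like this:
#Run the survey script from an elevated PowerShell window on the Hyper-V host
.\SurveyDDA.ps1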
Frequently Asked Questions
How does Remote Desktop's RemoteFX vGPU technology relate to Discrete Device Assignment?
They are completely separate technologies. RemoteFX vGPU does not need to be installed for Discrete Device Assignment to work, and no additional roles are required. RemoteFX vGPU requires the RDVH role to be installed in order for the RemoteFX vGPU driver to be present in the VM. For Discrete Device Assignment, because you will be installing the hardware vendor's driver into the virtual machine, no additional roles need to be present.
I've passed a GPU into a VM but Remote Desktop or an application isn't recognizing the GPU
There are a number of reasons this could happen, but here are a few things to try.
Ensure the latest GPU vendor's driver is installed and is not reporting an error by checking the device state in
device manager.
Ensure that device has enough MMIO space allocated for it within the VM.
Ensure you're using a GPU that the vendor supports being used in this configuration, for example, some
vendors disallow their consumer cards from working when passed through to a VM.
Ensure the application that's being run supports being run in a VM or that the drivers and GPU are supported
by the application. Some applications have whitelists of GPUs and environments.
If you're using the Remote Desktop Session Host role or Windows Multipoint Service, you will need to ensure
that a specific group policy is set.
Inside the guest
Using the run dialog, open gpedit.msc
Navigate the tree
Computer Configuration
Administrator Templates
Windows Components
Remote Desktop Services
Remote Desktop Session Host
Remote Session Environment
Use the hardware default graphics adapter for all Remote Desktop Services sessions
Set to Enabled and reboot the VM
Can Discrete Device Assignment take advantage of Remote Desktop's AVC444 codec?
Yes, visit this blog post for more information: Remote Desktop Protocol (RDP) 10 AVC/H.264 improvements in
Windows 10 and Windows Server 2016 Technical Preview.
Can I use PowerShell to get the Location Path?
Yes, there are various ways to do this; here is one example.
#Enumerate all PNP Devices on the system
$pnpdevs = Get-PnpDevice -presentOnly
#Select only those devices that are Display devices manufactured by NVIDIA
$gpudevs = $pnpdevs |where-object {$_.Class -like "Display" -and $_.Manufacturer -like "NVIDIA"}
#Select the location path of the first device that's available to be dismounted by the host.
$locationPath = ($gpudevs | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]
Can Discrete Device Assignment be used to pass a USB device into a VM?
Although not officially supported, our customers have used Discrete Device Assignment to do this by passing the entire USB3 controller into a VM. Because the whole controller is being passed in, each USB device plugged into that controller will also be accessible in the VM. Note that only some USB3 controllers may work; USB2 controllers will not work.
Deploy Hyper-V on Windows Server 2016
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
Use these resources to help you deploy Hyper-V on Windows Server 2016.
Configure virtual local area networks for Hyper-V
Set up hosts for live migration without Failover Clustering
Upgrade virtual machine version in Hyper-V on Windows 10 or Windows Server 2016
Deploy graphics devices using Discrete Device Assignment
Deploy storage devices using Discrete Device Assignment
Export and Import virtual machines
4/24/2017 • 3 min to read • Edit Online
Applies To: Windows Server 2016, Windows 10
This article shows you how to export and import a virtual machine, which is a quick way to move or copy it. This article also discusses some of the choices to make when doing an export or import.
Export a Virtual Machine
An export gathers all required files into one unit: virtual hard disk files, virtual machine configuration files, and any checkpoint files. You can do this on a virtual machine that is in either a started or stopped state.
Using Hyper-V Manager
To create a virtual machine export:
1. In Hyper-V Manager, right-click the virtual machine and select Export.
2. Choose where to store the exported files, and click Export.
When the export is done, you can see all exported files under the export location.
Using PowerShell
Open a session as Administrator and run a command like the following, after replacing <vm name> and <path>:
Export-VM -Name <vm name> -Path <path>
For details, see Export-VM.
Import a Virtual Machine
Importing a virtual machine registers the virtual machine with the Hyper-V host. You can import back into the same host or into a new host. If you're importing to the same host, you don't need to export the virtual machine first, because Hyper-V tries to recreate the virtual machine from available files. Importing a virtual machine registers it so it can be used on the Hyper-V host.
The Import Virtual Machine wizard also helps you fix incompatibilities that can exist when moving from one host to another. These are commonly differences in physical hardware, such as memory, virtual switches, and virtual processors.
Import using Hyper-V Manager
To import a virtual machine:
1. From the Actions menu in Hyper-V Manager, click Import Virtual Machine.
2. Click Next.
3. Select the folder that contains the exported files, and click Next.
4. Select the virtual machine to import.
5. Choose the import type, and click Next. (For descriptions, see Import types, below.)
6. Click Finish.
Import using PowerShell
Use the Import-VM cmdlet, following the example for the type of import you want. For descriptions of the types,
see Import types, below.
Register in place
This type of import uses the files where they are stored at the time of import and retains the virtual machine's ID.
The following command shows an example of an in-place import. Run a similar command with your own values.
Import-VM -Path 'C:\<vm export path>\2B91FEB3-F1E0-4FFF-B8BE-29CED892A95A.vmcx'
Restore
To import the virtual machine specifying your own path for the virtual machine files, run a command like this,
replacing the examples with your values:
Import-VM -Path 'C:\<vm export path>\2B91FEB3-F1E0-4FFF-B8BE-29CED892A95A.vmcx' -Copy -VhdDestinationPath 'D:\Virtual Machines\WIN10DOC' -VirtualMachinePath 'D:\Virtual Machines\WIN10DOC'
Import as a copy
To complete a copy import and move the virtual machine files to the default Hyper-V location, run a command like
this, replacing the examples with your values:
Import-VM -Path 'C:\<vm export path>\2B91FEB3-F1E0-4FFF-B8BE-29CED892A95A.vmcx' -Copy -GenerateNewId
For details, see Import-VM.
Import types
Hyper-V offers three import types:
Register in-place – This type assumes export files are in the location where you'll store and run the virtual
machine. The imported virtual machine has the same ID as it did at the time of export. Because of this, if the
virtual machine is already registered with Hyper-V, it needs to be deleted before the import will work. When
the import has completed, the export files become the running state files and can't be removed.
Restore the virtual machine – Restore the virtual machine to a location you choose, or use the Hyper-V default location. This import type creates a copy of the exported files and moves them to the selected location. When
imported, the virtual machine has the same ID as it did at the time of export. Because of this, if the virtual
machine is already running in Hyper-V, it needs to be deleted before the import can be completed. When the
import has completed, the exported files remain intact and can be removed or imported again.
Copy the virtual machine – This is similar to the Restore type in that you select a location for the files. The
difference is that the imported virtual machine has a new unique ID, which means you can import the virtual
machine to the same host multiple times.
Set up hosts for live migration without Failover
Clustering
4/24/2017 • 7 min to read • Edit Online
Applies To: Windows Server 2016
This article shows you how to set up hosts that aren't clustered so you can do live migrations between them. Use
these instructions if you didn't set up live migration when you installed Hyper-V, or if you want to change the
settings. To set up clustered hosts, use tools for Failover Clustering.
Requirements for setting up live migration
To set up non-clustered hosts for live migration, you'll need:
A user account with permission to perform the various steps. Membership in the local Hyper-V
Administrators group or the Administrators group on both the source and destination computers meets
this requirement, unless you're configuring constrained delegation. Membership in the Domain
Administrators group is required to configure constrained delegation.
The Hyper-V role in Windows Server 2016 or Windows Server 2012 R2 installed on the source and destination servers. You can do a live migration between hosts running Windows Server 2016 and Windows Server 2012 R2 if the virtual machine is at least version 5 (a quick PowerShell check is sketched after this list).
For version upgrade instructions, see Upgrade virtual machine version in Hyper-V on Windows 10 or Windows Server 2016. For installation instructions, see Install the Hyper-V role on Windows Server.
Source and destination computers that either belong to the same Active Directory domain, or belong to
domains that trust each other.
The Hyper-V management tools installed on a computer running Windows Server 2016 or Windows 10, unless
the tools are installed on the source or destination server and you'll run the tools from the server.
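Before you start, here is a minimal PowerShell sketch for checking these prerequisites on each host; "DemoVM" is an illustrative name, and Get-WindowsFeature is available on Windows Server through the ServerManager module.
#Confirm the Hyper-V role is installed
Get-WindowsFeature -Name Hyper-V
#Confirm the virtual machine you plan to migrate is at least configuration version 5
Get-VM -Name "DemoVM" | Format-Table Name, Version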
Consider options for authentication and networking
Consider how you want to set up the following:
Authentication: Which protocol will be used to authenticate live migration traffic between the source and
destination servers? The choice determines whether you'll need to sign on to the source server before
starting a live migration:
Kerberos lets you avoid having to sign in to the server, but requires constrained delegation to be set up.
See below for instructions.
CredSSP lets you avoid configuring constrained delegation, but requires you sign in to the source
server. You can do this through a local console session, a Remote Desktop session, or a remote
Windows PowerShell session.
CredSSP requires signing in for situations that might not be obvious. For example, if you sign in to
TestServer01 to move a virtual machine to TestServer02, and then want to move the virtual machine
back to TestServer01, you'll need to sign in to TestServer02 before you try to move the virtual
machine back to TestServer01. If you don't do this, the authentication attempt fails, an error occurs,
and the following message is displayed:
"Virtual machine migration operation failed at migration Source.
Failed to establish a connection with host computer name: No credentials are available in the
security package 0x8009030E."
Performance: Does it make sense to configure performance options? These options can reduce network
and CPU usage, as well as make live migrations go faster. Consider your requirements and your
infrastructure, and test different configurations to help you decide. The options are described at the end of
step 2.
Network preference: Will you allow live migration traffic through any available network, or isolate the
traffic to specific networks? As a security best practice, we recommend that you isolate the traffic onto
trusted, private networks because live migration traffic is not encrypted when it is sent over the network.
Network isolation can be achieved through a physically isolated network or through another trusted
networking technology such as VLANs.
Step 1: Configure constrained delegation (optional)
If you have decided to use Kerberos to authenticate live migration traffic, configure constrained delegation using
an account that is a member of the Domain Administrators group.
Use the Users and Computers snap-in to configure constrained delegation
1. Open the Active Directory Users and Computers snap-in. (From Server Manager, select the server if it's not selected, then click Tools > Active Directory Users and Computers.)
2. From the navigation pane in Active Directory Users and Computers, select the domain and double-click
the Computers folder.
3. From the Computers folder, right-click the computer account of the source server and then click
Properties.
4. From Properties, click the Delegation tab.
5. On the Delegation tab, select Trust this computer for delegation to the specified services only and then select Use any authentication protocol.
6. Click Add.
7. From Add Services, click Users or Computers.
8. From Select Users or Computers, type the name of the destination server. Click Check Names to verify it,
and then click OK.
9. From Add Services, in the list of available services, do the following and then click OK:
To move virtual machine storage, select cifs. This is required if you want to move the storage along
with the virtual machine, as well as if you want to move only a virtual machine's storage. If the server
is configured to use SMB storage for Hyper-V, this should already be selected.
To move virtual machines, select Microsoft Virtual System Migration Service.
10. On the Delegation tab of the Properties dialog box, verify that the services you selected in the previous
step are listed as the services to which the destination computer can present delegated credentials. Click
OK.
11. From the Computers folder, select the computer account of the destination server and repeat the process.
In the Select Users or Computers dialog box, be sure to specify the name of the source server.
The configuration changes take effect after both of the following happen:
The changes are replicated to the domain controllers that the servers running Hyper-V are logged into.
The domain controller issues a new Kerberos ticket.
If you prefer to script this configuration, a PowerShell sketch follows.
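This is a minimal sketch of the same delegation, assuming a source host named HV01, a destination host named HV02.contoso.com, Domain Admin rights, and the ActiveDirectory RSAT module; repeat it with the names swapped to configure the destination server.
Import-Module ActiveDirectory
$sourceComputer = Get-ADComputer -Identity "HV01"
#Allow HV01 to delegate to the live migration and CIFS services on HV02
Set-ADComputer -Identity $sourceComputer -Add @{
    'msDS-AllowedToDelegateTo' = @(
        "Microsoft Virtual System Migration Service/HV02.contoso.com",
        "cifs/HV02.contoso.com"
    )
}
#Match the "Use any authentication protocol" option from the snap-in
Set-ADAccountControl -Identity $sourceComputer -TrustedToAuthForDelegation $true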
Step 2: Set up the source and destination computers for live migration
This step includes choosing options for authentication and networking. As a security best practice, we recommend
that you select specific networks to use for live migration traffic, as discussed above. This step also shows you how
to choose the performance option.
Use Hyper-V Manager to set up the source and destination computers for live migration
1. Open Hyper-V Manager. (From Server Manager, click Tools > Hyper-V Manager.)
2. In the navigation pane, select one of the servers. (If it isn't listed, right-click Hyper-V Manager, click
Connect to Server, type the server name, and click OK. Repeat to add more servers.)
3. In the Action pane, click Hyper-V Settings > Live Migrations.
4. In the Live Migrations pane, check Enable incoming and outgoing live migrations.
5. Under Simultaneous live migrations, specify a different number if you don't want to use the default of 2.
6. Under Incoming live migrations, if you want to use specific network connections to accept live migration
traffic, click Add to type the IP address information. Otherwise, click Use any available network for live
migration. Click OK.
7. To choose Kerberos and performance options, expand Live Migrations and then select Advanced
Features.
If you have configured constrained delegation, under Authentication protocol, select Kerberos.
Under Performance options, review the details and choose a different option if it's appropriate for
your environment.
8. Click OK.
9. Select the other server in Hyper-V Manager and repeat the steps.
Use Windows PowerShell to set up the source and destination computers for live migration
Three cmdlets are available for configuring live migration on non-clustered hosts: Enable-VMMigration, Set-VMMigrationNetwork, and Set-VMHost. This example uses all three and does the following:
Configures live migration on the local host
Allows incoming migration traffic only on a specific network
Chooses Kerberos as the authentication protocol
Each line represents a separate command.
PS C:\> Enable-VMMigration
PS C:\> Set-VMMigrationNetwork 192.168.10.1
PS C:\> Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
Set-VMHost also lets you choose a performance option (and many other host settings). For example, to choose
SMB but leave the authentication protocol set to the default of CredSSP, type:
PS C:\> Set-VMHost -VirtualMachineMigrationPerformanceOption SMBTransport
This table describes how the performance options work.
TCP/IP – Copies the memory of the virtual machine to the destination server over a TCP/IP connection.
Compression – Compresses the memory content of the virtual machine before copying it to the destination server over a TCP/IP connection. Note: This is the default setting.
SMB – Copies the memory of the virtual machine to the destination server over a SMB 3.0 connection. SMB Direct is used when the network adapters on the source and destination servers have Remote Direct Memory Access (RDMA) capabilities enabled. SMB Multichannel automatically detects and uses multiple connections when a proper SMB Multichannel configuration is identified. For more information, see Improve Performance of a File Server with SMB Direct.
Next steps
After you set up the hosts, you're ready to do a live migration. For instructions, see Use live migration without
Failover Clustering to move a virtual machine.
Upgrade virtual machine version in Hyper-V on
Windows 10 or Windows Server 2016
4/24/2017 • 4 min to read • Edit Online
Applies To: Windows 10, Windows Server 2016
Make the latest Hyper-V features available on your virtual machines by upgrading the configuration version.
Don't do this until:
You upgrade your Hyper-V hosts to the latest version of Windows or Windows Server.
You upgrade the cluster functional level.
You're sure that you won't need to move the virtual machine back to a Hyper-V host that runs a previous
version of Windows or Windows Server.
For more information, see Cluster Operating System Rolling Upgrade and Upgrading Windows Server 2012 R2
clusters to Windows Server 2016 in VMM.
Step 1: Check the virtual machine configuration versions
1. On the Windows desktop, click the Start button and type any part of the name Windows PowerShell.
2. Right-click Windows PowerShell and select Run as Administrator.
3. Use the Get-VM cmdlet. Run the following command to get the versions of your virtual machines.
Get-VM * | Format-Table Name, Version
You can also see the configuration version in Hyper-V Manager by selecting the virtual machine and looking
at the Summary tab.
Step 2: Upgrade the virtual machine configuration version
1. Shut down the virtual machine in Hyper-V Manager.
2. Select Action > Upgrade Configuration Version. If this option isn't available for the virtual machine, then it's
already at the highest configuration version supported by the Hyper-V host.
To upgrade the virtual machine configuration version by using Windows PowerShell, use the Update-VMVersion
cmdlet. Run the following command where vmname is the name of the virtual machine.
Update-VMVersion <vmname>
Supported virtual machine configuration versions
The following table shows which virtual machine configuration versions are supported by Hyper-V hosts that run
on specific versions of Windows operating systems.
HYPER-V HOST WINDOWS VERSION – SUPPORTED VIRTUAL MACHINE CONFIGURATION VERSIONS
Windows Server 2016 – 8.0, 7.1, 7.0, 6.2, 5.0
Windows 10 Anniversary Update – 8.0, 7.1, 7.0, 6.2, 5.0
Windows Server 2016 Technical Preview – 7.1, 7.0, 6.2, 5.0
Windows 10 build 10565 or later – 7.0, 6.2, 5.0
Windows 10 builds earlier than 10565 – 6.2, 5.0
Windows Server 2012 R2 – 5.0
Windows 8.1 – 5.0
Run the PowerShell cmdlet Get-VMHostSupportedVersion to see what virtual machine configuration versions
your Hyper-V Host supports. When you create a virtual machine, it's created with the default configuration
version. To see what the default is, run the following command.
Get-VMHostSupportedVersion -Default
If you need to create a virtual machine that you can move to a Hyper-V host that runs an older version of Windows, use the New-VM cmdlet with the -Version parameter. For example, to create a virtual machine that you can move to a Hyper-V host that runs Windows Server 2012 R2, run the following command. This command creates a virtual machine named "WindowsCV5" with configuration version 5.0.
New-VM -Name "WindowsCV5" -Version 5.0
Why should I upgrade the virtual machine configuration version?
When you move or import a virtual machine to a computer that runs Hyper-V on Windows Server 2016 or Windows 10, the virtual machine's configuration isn't automatically updated. This means that you can move the virtual machine back to a Hyper-V host that runs a previous version of Windows or Windows Server. But this also means that you can't use some of the new virtual machine features until you manually update the configuration version. You can't downgrade the virtual machine configuration version after you've upgraded it.
The virtual machine configuration version represents the compatibility of the virtual machine's configuration,
saved state, and snapshot files with the version of Hyper-V. When you update the configuration version, you
change the file structure that is used to store the virtual machine's configuration and the checkpoint files. You also
update the configuration version to the latest version supported by that Hyper-V host. Upgraded virtual
machines use a new configuration file format, which is designed to increase the efficiency of reading and writing
virtual machine configuration data. The upgrade also reduces the potential for data corruption in the event of a
storage failure.
The following table lists descriptions, file name extensions, and default locations for each type of file that's used
for new or upgraded virtual machines.
VIRTUAL MACHINE FILE TYPE – DESCRIPTION
Configuration – Virtual machine configuration information that is stored in binary file format. File name extension: .vmcx. Default location: C:\ProgramData\Microsoft\Windows\Hyper-V\Virtual Machines
Runtime state – Virtual machine runtime state information that is stored in binary file format. File name extension: .vmrs. Default location: C:\ProgramData\Microsoft\Windows\Hyper-V\Virtual Machines
Virtual hard disk – Stores virtual hard disks for the virtual machine. File name extension: .vhd or .vhdx. Default location: C:\ProgramData\Microsoft\Windows\Hyper-V\Virtual Hard Disks
Automatic virtual hard disk – Differencing disk files used for virtual machine checkpoints. File name extension: .avhdx. Default location: C:\ProgramData\Microsoft\Windows\Hyper-V\Virtual Hard Disks
Checkpoint – Checkpoints are stored in multiple checkpoint files. Each checkpoint creates a configuration file and runtime state file. File name extensions: .vmrs and .vmcx. Default location: C:\ProgramData\Microsoft\Windows\Snapshots
What happens if I don't upgrade the virtual machine configuration
version?
If you have virtual machines that you created with an earlier version of Hyper-V, some features may not work
with those virtual machines until you update the configuration version.
The following table shows the minimum virtual machine configuration version required to use new Hyper-V
features.
FEATURE – MINIMUM VM CONFIGURATION VERSION
Hot Add/Remove Memory – 6.2
Secure Boot for Linux VMs – 6.2
Production Checkpoints – 6.2
PowerShell Direct – 6.2
Virtual Machine Grouping – 6.2
Virtual Trusted Platform Module (vTPM) – 7.0
Virtual machine multi queues (VMMQ) – 7.1
XSAVE support – 8.0
Key storage drive – 8.0
Guest Virtualization Based Security support (VBS) – 8.0
Nested virtualization – 8.0
Virtual processor count – 8.0
Large memory VMs – 8.0
For more information about these features, see What's new in Hyper-V on Windows Server 2016.
Deploy graphics devices using Discrete Device
Assignment
4/24/2017 • 4 min to read • Edit Online
Applies To: Microsoft Hyper-V Server 2016, Windows Server 2016
This is preliminary content and subject to change.
Starting with Windows Server 2016, you can use Discrete Device Assignment, or DDA, to pass an entire PCIe device into a VM. This allows high performance access to devices like NVMe storage or graphics cards from within a VM while being able to leverage the device's native drivers. Please visit Plan for Deploying Devices using Discrete Device Assignment for more details on which devices work, the possible security implications, and so on.
There are three steps to using a device with Discrete Device Assignment:
Configure the VM for Discrete Device Assignment
Dismount the Device from the Host Partition
Assigning the Device to the Guest VM
All commands can be executed on the host in a Windows PowerShell console as Administrator.
Configure the VM for DDA
Discrete Device Assignment imposes some restrictions on the VMs, and the following step needs to be taken.
1. Configure the “Automatic Stop Action” of a VM to TurnOff by executing
Set-VM -Name VMName -AutomaticStopAction TurnOff
Some additional VM preparation is required for graphics devices
Some hardware performs better if the VM is configured in a certain way. For details on whether or not you need the following configurations for your hardware, please reach out to the hardware vendor. Additional details can be found in Plan for Deploying Devices using Discrete Device Assignment and on this blog post.
1. Enable Write-Combining on the CPU Set-VM -GuestControlledCacheTypes $true -VMName VMName
2. Configure the 32 bit MMIO space Set-VM -LowMemoryMappedIoSpace 3Gb -VMName VMName
3. Configure greater than 32 bit MMIO space Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName VMName
Note: The MMIO space values above are reasonable values to set for experimenting with a single GPU. If, after starting the VM, the device reports an error relating to not enough resources, you'll likely need to modify these values. Also, if you're assigning multiple GPUs, you'll need to increase these values as well.
Dismount the Device from the Host Partition
Optional - Install the Partitioning Driver
Discrete Device Assignment provides hardware vendors the ability to supply a security mitigation driver with their devices. Note that this driver is not the same as the device driver that will be installed in the guest VM. It's up to the hardware vendor's discretion to provide this driver; however, if they do provide it, please install it prior to dismounting the device from the host partition. Please reach out to the hardware vendor for more information on whether they provide a mitigation driver.
If no Partitioning driver is provided, during dismount you must use the -force option to bypass the security
warning. Please read more about the security implications of doing this on Plan for Deploying Devices using
Discrete Device Assignment.
Locating the Device’s Location Path
The PCIe Location Path is required to dismount and mount the device from the host. An example location path looks like the following: "PCIROOT(20)#PCI(0300)#PCI(0000)#PCI(0800)#PCI(0000)" . More details on locating the Location Path can be found in Plan for Deploying Devices using Discrete Device Assignment.
Disable the Device
Using Device Manager or PowerShell, ensure the device is “disabled.”
Dismount the Device
Depending on whether the vendor provided a mitigation driver, you'll either need to use the -Force option or not.
If a Mitigation Driver was installed Dismount-VMHostAssignableDevice -LocationPath $locationPath
If a Mitigation Driver was not installed Dismount-VMHostAssignableDevice -force -LocationPath $locationPath
Assigning the Device to the Guest VM
The final step is to tell Hyper-V that a VM should have access to the device. In addition to the location path found above, you'll need to know the name of the VM.
Add-VMAssignableDevice -LocationPath $locationPath -VMName VMName
What’s Next
After a device is successfully mounted in a VM, you're able to start that VM and interact with the device as you normally would if you were running on a bare metal system. This means that you're now able to install the hardware vendor's drivers in the VM, and applications will be able to see that the hardware is present. You can verify this by opening Device Manager in the guest VM and seeing that the hardware now shows up.
Removing a Device and Returning it to the Host
If you want to return the device to its original state, you will need to stop the VM and issue the following:
#Remove the device from the VM
Remove-VMAssignableDevice -LocationPath $locationPath -VMName VMName
#Mount the device back in the host
Mount-VMHostAssignableDevice -LocationPath $locationPath
You can then re-enable the device in device manager and the host operating system will be able to interact with the
device again.
Examples
Mounting a GPU to a VM
In this example we use PowerShell to configure a VM named "ddatest1" to take the first available GPU made by the manufacturer NVIDIA and assign it to the VM.
#Configure the VM for a Discrete Device Assignment
$vm = "ddatest1"
#Set automatic stop action to TurnOff
Set-VM -Name $vm -AutomaticStopAction TurnOff
#Enable Write-Combining on the CPU
Set-VM -GuestControlledCacheTypes $true -VMName $vm
#Configure 32 bit MMIO space
Set-VM -LowMemoryMappedIoSpace 3Gb -VMName $vm
#Configure Greater than 32 bit MMIO space
Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName $vm
#Find the Location Path and disable the Device
#Enumerate all PNP Devices on the system
$pnpdevs = Get-PnpDevice -presentOnly
#Select only those devices that are Display devices manufactured by NVIDIA
$gpudevs = $pnpdevs |where-object {$_.Class -like "Display" -and $_.Manufacturer -like "NVIDIA"}
#Select the location path of the first device that's available to be dismounted by the host.
$locationPath = ($gpudevs | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]
#Disable the PNP Device
Disable-PnpDevice -InstanceId $gpudevs[0].InstanceId
#Dismount the Device from the Host
Dismount-VMHostAssignableDevice -force -LocationPath $locationPath
#Assign the device to the guest VM.
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vm
Deploy NVMe Storage Devices using Discrete Device
Assignment
4/24/2017 • 2 min to read • Edit Online
Applies To: Microsoft Hyper-V Server 2016, Windows Server 2016
Starting with Windows Server 2016, you can use Discrete Device Assignment, or DDA, to pass an entire PCIe device into a VM. This allows high performance access to devices like NVMe storage or graphics cards from within a VM while being able to leverage the device's native drivers. Please visit Plan for Deploying Devices using Discrete Device Assignment for more details on which devices work, the possible security implications, and so on. There are three steps to using a device with DDA:
Configure the VM for DDA
Dismount the Device from the Host Partition
Assigning the Device to the Guest VM
All commands can be executed on the host in a Windows PowerShell console as Administrator.
Configure the VM for DDA
Discrete Device Assignment imposes some restrictions on the VMs, and the following step needs to be taken.
1. Configure the “Automatic Stop Action” of a VM to TurnOff by executing
Set-VM -Name VMName -AutomaticStopAction TurnOff
Dismount the Device from the Host Partition
Locating the Device’s Location Path
The PCIe Location Path is required to dismount and mount the device from the host. An example location path looks like the following: "PCIROOT(20)#PCI(0300)#PCI(0000)#PCI(0800)#PCI(0000)" . More details on locating the Location Path can be found in Plan for Deploying Devices using Discrete Device Assignment.
Disable the Device
Using Device Manager or PowerShell, ensure the device is “disabled.”
Dismount the Device
Dismount-VMHostAssignableDevice -LocationPath $locationPath
Assigning the Device to the Guest VM
The final step is to tell Hyper-V that a VM should have access to the device. In addition to the location path found above, you'll need to know the name of the VM.
Add-VMAssignableDevice -LocationPath $locationPath -VMName VMName
What’s Next
After a device is successfully mounted in a VM, you’re now able to start that VM and interact with the device as you
normally would if you were running on a bare metal system. You can verify this by opening device manager in the
Guest VM and seeing that the hardware now shows up.
Removing a Device and Returning it to the Host
If you want to return the device to its original state, you will need to stop the VM and issue the following:
#Remove the device from the VM
Remove-VMAssignableDevice -LocationPath $locationPath -VMName VMName
#Mount the device back in the host
Mount-VMHostAssignableDevice -LocationPath $locationPath
You can then re-enable the device in device manager and the host operating system will be able to interact with
the device again.
Manage Hyper-V on Windows Server 2016
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
Use the resources in this section to help you manage Hyper-V on Windows Server 2016.
Choose between standard or production checkpoints
Enable or disable checkpoints
Manage Windows virtual machines with PowerShell Direct
Set up Hyper-V Replica
Use live migration without Failover Clustering to move a virtual machine
Choose between standard or production checkpoints
in Hyper-V
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows 10, Windows Server 2016
Starting with Windows Server 2016 and Windows 10, you can choose between standard and production
checkpoints for each virtual machine. Production checkpoints are the default for new virtual machines.
Production checkpoints are "point in time" images of a virtual machine, which can be restored later on in a
way that is completely supported for all production workloads. This is achieved by using backup technology
inside the guest to create the checkpoint, instead of using saved state technology.
Standard checkpoints capture the state, data, and hardware configuration of a running virtual machine and
are intended for use in development and test scenarios. Standard checkpoints can be useful if you need to
recreate a specific state or condition of a running virtual machine so that you can troubleshoot a problem.
Change checkpoints to production or standard checkpoints
1. In Hyper-V Manager, right-click the virtual machine and click Settings.
2. Under the Management section, select Checkpoints.
3. Select either production checkpoints or standard checkpoints.
If you choose production checkpoints, you can also specify whether the host should take a standard
checkpoint if a production checkpoint can't be taken. If you clear this checkbox and a production checkpoint
can't be taken, then no checkpoint is taken.
4. If you want to store the checkpoint configuration files in a different place, change it in the Checkpoint File
Location section.
5. Click Apply to save your changes. If you're done, click OK to close the dialog box. (You can also set the checkpoint type with PowerShell; see the sketch below.)
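As a PowerShell alternative, here is a minimal sketch that assumes a virtual machine named "DemoVM"; the -CheckpointType parameter of Set-VM accepts Production, ProductionOnly, Standard, or Disabled.
#Use production checkpoints, falling back to standard checkpoints if a production checkpoint can't be taken
Set-VM -Name "DemoVM" -CheckpointType Production
#Verify the setting
Get-VM -Name "DemoVM" | Format-Table Name, CheckpointType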
See also
Production checkpoints
Enable or disable checkpoints
Create Hyper-V VHD Set files
4/24/2017 • 1 min to read • Edit Online
VHD Set files are a new shared Virtual Disk model for guest clusters in Windows Server 2016. VHD Set files support online resizing of shared virtual disks, support Hyper-V Replica, and can be included in application-consistent checkpoints.
VHD Set files use a new VHD file type, .VHDS. VHD Set files store checkpoint information about the group virtual
disk used in guest clusters, in the form of metadata.
Hyper-V handles all aspects of managing the checkpoint chains and merging the shared VHD set. Management
software can run disk operations like online resizing on VHD Set files in the same way it does for .VHDX files. This
means that management software doesn't need to know about the VHD Set file format.
Create a VHD Set file from Hyper-V Manager
1. Open Hyper-V Manager. Click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In the Action pane, click New, and then click Hard Disk.
3. On the Choose Disk Format page, select VHD Set as the format of the virtual hard disk.
4. Continue through the pages of the wizard to customize the virtual hard disk. You can click Next to move through each page of the wizard, or you can click the name of a page in the left pane to move directly to that page.
5. After you have finished configuring the virtual hard disk, click Finish.
Create a VHD Set file from Windows PowerShell
Use the New-VHD cmdlet, with the file type .VHDS in the file path. This example creates a VHD Set file named
base.vhds that is 10 Gigabytes in size.
PS c:\>New-VHD -Path c:\base.vhds -SizeBytes 10GB
Migrate a shared VHDX file to a VHD Set file
Migrating an existing shared VHDX to a VHDS requires taking the VM offline. This is the recommended process, using Windows PowerShell (replace <vm name> and the controller values with those for your VM):
1. Remove the shared VHDX from the VM. For example, run:
PS c:\>Remove-VMHardDiskDrive -VMName <vm name> -ControllerType SCSI -ControllerNumber <controller number> -ControllerLocation <controller location>
2. Convert the VHDX to a VHDS. For example, run:
PS c:\>Convert-VHD -Path existing.vhdx -DestinationPath new.vhds
3. Add the VHDS to the VM. For example, run:
PS c:\>Add-VMHardDiskDrive -VMName <vm name> -Path new.vhds
Enable or disable checkpoints in Hyper-V
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows 10, Windows Server 2016
You can choose to enable or disable checkpoints for each virtual machine.
1. In Hyper-V Manager, right-click the virtual machine and click Settings.
2. Under the Management section, select Checkpoints.
3. To allow checkpoints to be taken of this virtual machine, make sure Enable checkpoints is selected. To
disable checkpoints, clear the Enable checkpoints check box.
4. Click Apply to apply your changes. If you are done, click OK to close the dialog box.
See also
Choose between standard or production checkpoints in Hyper-V
Remotely manage Hyper-V hosts with Hyper-V
Manager
5/1/2017 • 6 min to read • Edit Online
Applies To: Windows Server 2016, Windows Server 2012 R2, Windows 10, Windows 8.1
This article lists the supported combinations of Hyper-V hosts and Hyper-V Manager versions and describes how
to connect to remote and local Hyper-V hosts so you can manage them.
Hyper-V Manager lets you manage a small number of Hyper-V hosts, both remote and local. It's installed when you
install the Hyper-V Management Tools, which you can do either through a full Hyper-V installation or a tools-only
installation. Doing a tools-only installation means you can use the tools on computers that don't meet the
hardware requirements to host Hyper-V. For details about hardware for Hyper-V hosts, see System requirements.
If Hyper-V Manager isn't installed, see the instructions below.
Supported combinations of Hyper-V Manager and Hyper-V host
versions
In some cases you can use a different version of Hyper-V Manager than the Hyper-V version on the host, as shown
in the table. When you do this, Hyper-V Manager provides the features available for the version of Hyper-V on the
host you're managing. For example, if you use the version of Hyper-V Manager in Windows Server 2012 R2 to
remotely manage a host running Hyper-V in Windows Server 2012, you won't be able to use features available in
Windows Server 2012 R2 on that Hyper-V host.
The following table shows which versions of a Hyper-V host you can manage from a particular version of Hyper-V
Manager. Only supported operating system versions are listed. For details about the support status of a particular
operating system version, use the Search product lifecycle button on the Microsoft Lifecycle Policy page. In
general, older versions of Hyper-V Manager can only manage a Hyper-V host running the same version or the
comparable Windows Server version.
HYPER-V MANAGER VERSION – HYPER-V HOST VERSION
Windows Server 2016, Windows 10:
- Windows Server 2016 (all editions and installation options, including Nano Server) and the corresponding version of Hyper-V Server
- Windows Server 2012 R2 (all editions and installation options) and the corresponding version of Hyper-V Server
- Windows Server 2012 (all editions and installation options) and the corresponding version of Hyper-V Server
- Windows 10
- Windows 8.1
Windows Server 2012 R2, Windows 8.1:
- Windows Server 2012 R2 (all editions and installation options) and the corresponding version of Hyper-V Server
- Windows Server 2012 (all editions and installation options) and the corresponding version of Hyper-V Server
- Windows 8.1
Windows Server 2012:
- Windows Server 2012 (all editions and installation options) and the corresponding version of Hyper-V Server
Windows Server 2008 R2 Service Pack 1, Windows 7 Service Pack 1:
- Windows Server 2008 R2 (all editions and installation options) and the corresponding version of Hyper-V Server
Windows Server 2008, Windows Vista Service Pack 2:
- Windows Server 2008 (all editions and installation options) and the corresponding version of Hyper-V Server
NOTE
Service pack support ended for Windows 8 on January 12, 2016. For more information, see the Windows 8.1 FAQ.
Connect to a Hyper-V host
To connect to a Hyper-V host from Hyper-V Manager, right-click Hyper-V Manager in the left pane, and then click
Connect to Server.
Manage Hyper-V on a local computer
Hyper-V Manager doesn't list any computers that host Hyper-V until you add the computer, including a local
computer. To do this:
1. In the left pane, right-click Hyper-V Manager.
2. Click Connect to Server.
3. From Select Computer, click Local computer and then click OK.
If you can't connect:
It's possible that only the Hyper-V tools are installed. To check that the Hyper-V platform is installed, look for the Virtual Machine Management service. (Open the Services desktop app: click Start, click the Start Search box, type services.msc, and then press Enter. If the Virtual Machine Management service isn't listed, install the Hyper-V platform by following the instructions in Install Hyper-V.) A PowerShell check is sketched after this list.
Check that your hardware meets the requirements. See System requirements.
Check that your user account belongs to the Administrators group or the Hyper-V Administrators group.
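As a quick PowerShell check (a minimal sketch for the local computer), the Hyper-V platform installs the Virtual Machine Management service under the name vmms:
#If this returns a service, the Hyper-V platform (not just the tools) is installed
Get-Service -Name vmms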
Manage Hyper-V hosts remotely
To manage remote Hyper-V hosts, enable remote management on both the local computer and remote host.
On Windows Server, open Server Manager > Local Server > Remote management and then click Allow remote connections to this computer.
Or, from either operating system, open Windows PowerShell as Administrator and run:
Enable-PSRemoting
Connect to hosts in the same domain
For Windows 8.1 and earlier, remote management works only when the host is in the same domain and your local
user account is also on the remote host.
To add a remote Hyper-V host to Hyper-V Manager, select Another computer in the Select Computer dialog box and type the remote host's hostname, NetBIOS name, or fully qualified domain name (FQDN).
Hyper-V Manager in Windows Server 2016 and Windows 10 offers more types of remote connection than
previous versions, described in the following sections.
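If you prefer to confirm the connection from PowerShell, here is a minimal sketch; "HV01" is an illustrative host name, and your account needs to be a member of the Hyper-V Administrators or Administrators group on that host.
#List the virtual machines on the remote host to confirm you can reach it
Get-VM -ComputerName "HV01"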
Connect to a Windows 2016 or Windows 10 remote host as a different user
This lets you connect to the Hyper-V host when you're not running on the local computer as a user that's a
member of either the Hyper-V Administrators group or the Administrators group on the Hyper-V host. To do this:
1. In the left pane, right-click Hyper-V Manager.
2. Click Connect to Server.
3. Select Connect as another user in the Select Computer dialog box.
4. Select Set User.
NOTE
This will only work for Windows Server 2016 or Windows 10 remote hosts.
Connect to a Windows 2016 or Windows 10 remote host using IP address
To do this:
1. In the left pane, right-click Hyper-V Manager.
2. Click Connect to Server.
3. Type the IP address into the Another Computer text field.
NOTE
This will only work for Windows Server 2016 or Windows 10 remote hosts.
Connect to a Windows 2016 or Windows 10 remote host outside your domain, or with no domain
To do this:
1. On the Hyper-V host to be managed, open a Windows PowerShell session as Administrator.
2. Create the necessary firewall rules for private network zones:
Enable-PSRemoting
3. To allow remote access on public zones, enable firewall rules for CredSSP and WinRM:
Enable-WSManCredSSP -Role server
For details, see Enable-PSRemoting and Enable-WSManCredSSP.
Next, configure the computer you'll use to manage the Hyper-V host.
1. Open a Windows PowerShell session as Administrator.
2. Run these commands:
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "fqdn-of-hyper-v-host"
Enable-WSManCredSSP -Role client -DelegateComputer "fqdn-of-hyper-v-host"
3. You might also need to configure the following group policy:
Computer Configuration > Administrative Templates > System > Credentials Delegation >
Allow delegating fresh credentials with NTLM-only server authentication
Click Enable and add wsman/fqdn-of-hyper-v-host.
4. Open Hyper-V Manager.
5. In the left pane, right-click Hyper-V Manager.
6. Click Connect to Server.
NOTE
This will only work for Windows Server 2016 or Windows 10 remote hosts.
For cmdlet details, see Set-Item and Enable-WSManCredSSP.
Install Hyper-V Manager
To use a UI tool, choose the one appropriate for the operating system on the computer where you'll run Hyper-V Manager:
On Windows Server, open Server Manager > Manage > Add roles and features. Move to the Features page and
expand Remote server administration tools > Role administration tools > Hyper-V management tools.
On Windows, Hyper-V Manager is available on any Windows operating system that includes Hyper-V.
1. On the Windows desktop, click the Start button and begin typing Programs and features.
2. In search results, click Programs and Features.
3. In the left pane, click Turn Windows features on or off.
4. Expand the Hyper-V folder, and click Hyper-V Management Tools.
5. To install Hyper-V Manager, click Hyper-V Management Tools. If you want to also install the Hyper-V module, click that option.
To use Windows PowerShell, run the following command as Administrator:
add-windowsfeature rsat-hyper-v-tools
See also
Install Hyper-V
Manage Hyper-V Integration Services
4/24/2017 • 8 min to read • Edit Online
Applies To: Windows Server 2016, Windows 10
Hyper-V Integration Services allow a virtual machine to communicate with the Hyper-V host. Many of these services are conveniences, such as guest file copy, while others are important to the virtual machine's ability to function correctly, such as time synchronization. This set of services is sometimes referred to as integration components. For details about each integration service, see Hyper-V Integration Services.
IMPORTANT
Each service you want to use must be enabled in both the host and guest so they can communicate. All integration services are on by default on Windows guest operating systems, but you can turn them off individually. The next sections show you how.
Turn an integration service on or off using Hyper-V Manager
1. From the center pane, right-click the virtual machine and click Settings.
2. From the left pane of the Settings window, under Management, click Integration Services.
The Integration Services pane lists all integration services available on the Hyper-V host, and whether they're turned
on in the virtual machine. To get the version information for a guest operating system, log on to the guest operating
system, open a command prompt, and run this command:
REG QUERY "HKLM\Software\Microsoft\Virtual Machine\Auto" /v IntegrationServicesVersion
Turn an integration service on or off for a Windows guest
All integration services are on by default on Windows guest operating systems, but you can turn them off individually. The next section shows you how.
Use Windows PowerShell to turn an integration service on or off
To do this in PowerShell, use Enable-VMIntegrationService and Disable-VMIntegrationService.
The following examples show you how to turn an integration service on and off by doing this for the guest file copy service on a virtual machine named "demovm".
1. Get a list of running integration services:
Get-VMIntegrationService -VMName "DemoVM"
2. The output should look like this:
VMName Name                    Enabled PrimaryStatusDescription SecondaryStatusDescription
------ ----                    ------- ------------------------ --------------------------
DemoVM Guest Service Interface False   OK
DemoVM Heartbeat               True    OK
DemoVM Key-Value Pair Exchange True    OK
DemoVM Shutdown                True    OK
DemoVM Time Synchronization    True    OK
DemoVM VSS                     True    OK
3. Turn on Guest Service Interface:
Enable-VMIntegrationService -VMName "DemoVM" -Name "Guest Service Interface"
4. Verify that Guest Service Interface is enabled:
Get-VMIntegrationService -VMName "DemoVM"
5. Turn off Guest Service Interface:
Disable-VMIntegrationService -VMName "DemoVM" -Name "Guest Service Interface"
Start and stop an integration service from a Windows Guest
IMPORTANT
Stopping an integration service may severely affect the host's ability to manage your virtual machine. To work correctly, each
integration service you want to use must be enabled on both the host and guest.
Each integration service is listed as a service in Windows. To turn an integration service on or off from inside the
virtual machine, you'll start or stop the service.
Use Windows Services
1. Open the Services manager.
2. Find the services with Hyper-V in the name.
3. Right-click the service you want to start or stop.
Use Windows PowerShell
1. To get a list of integration services, run:
Get-Service -Name vm*
2. The output should look similar to this:
Status  Name               DisplayName
------  ----               -----------
Running vmicguestinterface Hyper-V Guest Service Interface
Running vmicheartbeat      Hyper-V Heartbeat Service
Running vmickvpexchange    Hyper-V Data Exchange Service
Running vmicrdv            Hyper-V Remote Desktop Virtualizati...
Running vmicshutdown       Hyper-V Guest Shutdown Service
Running vmictimesync       Hyper-V Time Synchronization Service
Stopped vmicvmsession      Hyper-V VM Session Service
Running vmicvss            Hyper-V Volume Shadow Copy Requestor
3. Run either Start-Service or Stop-Service. For example, to turn off Windows PowerShell Direct, run:
Stop-Service -Name vmicvmsession
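To turn the service back on, run Start-Service with the same service name. For example, to turn Windows PowerShell Direct back on, run:
Start-Service -Name vmicvmsession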
Start and stop an integration service from a Linux guest
Linux integration services are generally provided through the Linux kernel. The Linux integration services driver is
named hv_utils.
1. To find out if hv_utils is loaded, use this command:
lsmod | grep hv_utils
2. The output should look similar to this:
Module                  Size  Used by
hv_utils               20480  0
hv_vmbus               61440  8 hv_balloon,hyperv_keyboard,hv_netvsc,hid_hyperv,hv_utils,hyperv_fb,hv_storvsc
3. To find out if the required daemons are running, use this command.
ps -ef | grep hv
4. The output should look similar to this:
root       236     2  0 Jul11 ?        00:00:00 [hv_vmbus_con]
root       237     2  0 Jul11 ?        00:00:00 [hv_vmbus_ctl]
...
root       252     2  0 Jul11 ?        00:00:00 [hv_vmbus_ctl]
root      1286     1  0 Jul11 ?        00:01:11 /usr/lib/linux-tools/3.13.0-32-generic/hv_kvp_daemon
root      9333     1  0 Oct12 ?        00:00:00 /usr/lib/linux-tools/3.13.0-32-generic/hv_kvp_daemon
root      9365     1  0 Oct12 ?        00:00:00 /usr/lib/linux-tools/3.13.0-32-generic/hv_vss_daemon
scooley  43774 43755  0 21:20 pts/0    00:00:00 grep --color=auto hv
5. To see what daemons are available, run:
compgen -c hv_
6. The output should look similar to this:
hv_vss_daemon
hv_get_dhcp_info
hv_get_dns_info
hv_set_ifconfig
hv_kvp_daemon
hv_fcopy_daemon
Integration service daemons that might be listed include the following. If they're not, they might not be
supported on your system or they might not be installed. For details, see Supported Linux and FreeBSD
virtual machines for Hyper-V on Windows.
hv_vss_daemon: This daemon is required to create live Linux virtual machine backups.
hv_kvp_daemon: This daemon allows setting and querying intrinsic and extrinsic key value pairs.
hv_fcopy_daemon: This daemon implements a file copying service between the host and guest.
Examples
These examples stop and start the KVP daemon, named hv_kvp_daemon.
1. Use the process ID (PID) to stop the daemon's process. To find the PID, look at the second column of the
output, or use pidof . Hyper-V daemons run as root, so you'll need root permissions.
sudo kill -15 `pidof hv_kvp_daemon`
2. To verify that all hv_kvp_daemon processes are gone, run:
ps -ef | grep hv
3. To start the daemon again, run the daemon as root:
sudo hv_kvp_daemon
4. To verify that the hv_kvp_daemon process is listed with a new process ID, run:
ps -ef | grep hv
Keep integration services up to date
We recommend that you keep integration services up to date to get the best performance and most recent features
for your virtual machines. This happens in Windows Server 2016 and Windows 10 by default when your virtual
machines are set up to get important updates from Windows Update.
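As a hedged sketch of how you might check whether your virtual machines are current, the Get-VM output on the host includes integration services properties you can inspect (this assumes the host exposes the IntegrationServicesVersion and IntegrationServicesState properties):
# List each VM with its integration services version and update state
Get-VM | Format-Table Name, IntegrationServicesVersion, IntegrationServicesState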
For virtual machines running on Windows 10 hosts:
NOTE
The image file vmguest.iso isn't included with Hyper-V on Windows 10 because it's no longer needed.
GUEST                                UPDATE MECHANISM    NOTES
Windows 10                           Windows Update
Windows 8.1                          Windows Update
Windows 8                            Windows Update      Requires the Data Exchange integration service.*
Windows 7                            Windows Update      Requires the Data Exchange integration service.*
Windows Vista (SP 2)                 Windows Update      Requires the Data Exchange integration service.*
Windows Server 2012 R2               Windows Update
Windows Server 2012                  Windows Update      Requires the Data Exchange integration service.*
Windows Server 2008 R2 (SP 1)        Windows Update      Requires the Data Exchange integration service.*
Windows Server 2008 (SP 2)           Windows Update      Extended support only in Windows Server 2016 (read more).
Windows Home Server 2011             Windows Update      Will not be supported in Windows Server 2016 (read more).
Windows Small Business Server 2011   Windows Update      Not under mainstream support (read more).
Linux guests                         package manager     Integration services for Linux are built into the distro, but there may be optional updates available.
* If the Data Exchange integration service can't be enabled, the integration services for these guests are available
from the Download Center as a cabinet (cab) file. Instructions for applying a cab are available in this blog post.
For virtual machines running on Windows 8.1 hosts:
GUEST                                UPDATE MECHANISM            NOTES
Windows 10                           Windows Update
Windows 8.1                          Windows Update
Windows 8                            Integration Services disk   See instructions, below.
Windows 7                            Integration Services disk   See instructions, below.
Windows Vista (SP 2)                 Integration Services disk   See instructions, below.
Windows XP (SP 2, SP 3)              Integration Services disk   See instructions, below.
Windows Server 2012 R2               Windows Update
Windows Server 2012                  Integration Services disk   See instructions, below.
Windows Server 2008 R2               Integration Services disk   See instructions, below.
Windows Server 2008 (SP 2)           Integration Services disk   See instructions, below.
Windows Home Server 2011             Integration Services disk   See instructions, below.
Windows Small Business Server 2011   Integration Services disk   See instructions, below.
Windows Server 2003 R2 (SP 2)        Integration Services disk   See instructions, below.
Windows Server 2003 (SP 2)           Integration Services disk   See instructions, below.
Linux guests                         package manager             Integration services for Linux are built into the distro, but there may be optional updates available.
For virtual machines running on Windows 8 hosts:
GUEST                                UPDATE MECHANISM            NOTES
Windows 8.1                          Windows Update
Windows 8                            Integration Services disk   See instructions, below.
Windows 7                            Integration Services disk   See instructions, below.
Windows Vista (SP 2)                 Integration Services disk   See instructions, below.
Windows XP (SP 2, SP 3)              Integration Services disk   See instructions, below.
Windows Server 2012 R2               Windows Update
Windows Server 2012                  Integration Services disk   See instructions, below.
Windows Server 2008 R2               Integration Services disk   See instructions, below.
Windows Server 2008 (SP 2)           Integration Services disk   See instructions, below.
Windows Home Server 2011             Integration Services disk   See instructions, below.
Windows Small Business Server 2011   Integration Services disk   See instructions, below.
Windows Server 2003 R2 (SP 2)        Integration Services disk   See instructions, below.
Windows Server 2003 (SP 2)           Integration Services disk   See instructions, below.
Linux guests                         package manager             Integration services for Linux are built into the distro, but there may be optional updates available.
For more details about Linux guests, see Supported Linux and FreeBSD virtual machines for Hyper-V on Windows.
Install or update integration services
For hosts earlier than Windows Server 2016 and Windows 10, you'll need to manually install or update the integration
services from inside the guest operating system. These steps can't be automated or done from within a Windows PowerShell session.
1. Open Hyper-V Manager. From the Tools menu of Server Manager, click Hyper-V Manager.
2. Connect to the virtual machine. Right-click the virtual machine and click Connect.
3. From the Action menu of Virtual Machine Connection, click Insert Integration Services Setup Disk. This
action loads the setup disk in the virtual DVD drive. Depending on the guest operating system, you might
need to start the installation manually.
4. After the installation finishes, all integration services are available for use.
Manage Windows virtual machines with PowerShell Direct
4/24/2017 • 2 min to read • Edit Online
Applies To: Windows 10, Windows Server 2016
You can use PowerShell Direct to remotely manage a Windows 10 or Windows Server 2016 virtual machine from
a Windows 10 or Windows Server 2016 Hyper-V host. PowerShell Direct allows Windows PowerShell
management inside a virtual machine regardless of the network configuration or remote management settings on
either the Hyper-V host or the virtual machine. This makes it easier for Hyper-V Administrators to automate and
script virtual machine management and configuration.
There are two ways to run PowerShell Direct:
Create and exit a PowerShell Direct session using PSSession cmdlets
Run script or command with the Invoke-Command cmdlet
If you're managing older virtual machines, use Virtual Machine Connection (VMConnect) or configure a virtual
network for the virtual machine.
Create and exit a PowerShell Direct session using PSSession cmdlets
1. On the Hyper-V host, open Windows PowerShell as Administrator.
2. Use the Enter-PSSession cmdlet to connect to the virtual machine. Run one of the following commands to
create a session by using the virtual machine name or GUID:
Enter-PSSession -VMName <VMName>
Enter-PSSession -VMGUID <VMGUID>
3. Type your credentials for the virtual machine.
4. Run whatever commands you need to. These commands run on the virtual machine that you created the
session with.
5. When you're done, use the Exit-PSSession to close the session.
Exit-PSSession
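Put together, a session might look like the following sketch; the virtual machine name PSTest is hypothetical.
# Create the session (you'll be prompted for guest credentials)
Enter-PSSession -VMName "PSTest"
# Commands now run inside the guest, for example:
Get-Service -Name Spooler
# Close the session when you're done
Exit-PSSession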
Run script or command with Invoke-Command cmdlet
You can use the Invoke-Command cmdlet to run a pre-determined set of commands on the virtual machine. Here
is an example of how you can use the Invoke-Command cmdlet where PSTest is the virtual machine name and the
script to run (foo.ps1) is in the script folder on the C:/ drive:
Invoke-Command -VMName PSTest -FilePath C:\script\foo.ps1
To run a single command, use the -ScriptBlock parameter:
Invoke-Command -VMName PSTest -ScriptBlock { cmdlet }
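For example, to list the processes running inside the guest (the virtual machine name PSTest is hypothetical):
Invoke-Command -VMName PSTest -ScriptBlock { Get-Process }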
What's required to use PowerShell Direct?
To create a PowerShell Direct session on a virtual machine,
The virtual machine must be running locally on the host and booted.
You must be logged into the host computer as a Hyper-V administrator.
You must supply valid user credentials for the virtual machine.
The host operating system must run at least Windows 10 or Windows Server 2016.
The virtual machine must run at least Windows 10 or Windows Server 2016.
You can use the Get-VM cmdlet to check that the credentials you're using have the Hyper-V administrator role and
to get a list of the virtual machines running locally on the host and booted.
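As a minimal sketch, you could list the running virtual machines on the host like this:
# Only running VMs can accept a PowerShell Direct session
Get-VM | Where-Object { $_.State -eq 'Running' }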
See Also
Enter-PSSession
Exit-PSSession
Invoke-Command
Set up Hyper-V Replica
4/24/2017 • 11 min to read • Edit Online
Applies To: Windows Server 2016
Hyper-V Replica is an integral part of the Hyper-V role. It contributes to your disaster recovery strategy by
replicating virtual machines from one Hyper-V host server to another to keep your workloads available. Hyper-V
Replica creates a copy of a live virtual machine as an offline replica virtual machine. Note the following:
Hyper-V hosts: Primary and secondary host servers can be physically co-located or in separate
geographical locations with replication over a WAN link. Hyper-V hosts can be standalone, clustered, or a
mixture of both. There's no Active Directory dependency between the servers and they don't need to be
domain members.
Replication and change tracking: When you enable Hyper-V Replica for a specific virtual machine, initial
replication creates an identical replica virtual machine on a secondary host server. After that happens,
Hyper-V Replica change tracking creates and maintains a log file that captures changes on a virtual machine
VHD. The log file is played in reverse order to the replica VHD based on replication frequency settings. This
means that the latest changes are stored and replicated asynchronously. Replication can be over HTTP or
HTTPS.
Extended (chained) replication: This lets you replicate a virtual machine from a primary host to a
secondary host, and then replicate the secondary host to a third host. Note that you can't replicate from the
primary host directly to the second and the third.
This feature makes Hyper-V Replica more robust for disaster recovery because if an outage occurs you can
recover from both the primary and extended replica. You can fail over to the extended replica if your
primary and secondary locations go down. Note that the extended replica doesn't support application-consistent replication and must use the same VHDs that the secondary replica is using.
Failover: If an outage occurs in your primary (or secondary in case of extended) location you can manually
initiate a test, planned, or unplanned failover.
When should I run?
  Test: Verify that a virtual machine can fail over and start in the secondary site. Useful for testing and training.
  Planned: During planned downtime and outages.
  Unplanned: During unexpected events.
Is a duplicate virtual machine created?
  Test: Yes.
  Planned: No.
  Unplanned: No.
Where is it initiated?
  Test: On the replica virtual machine.
  Planned: Initiated on the primary and completed on the secondary.
  Unplanned: On the replica virtual machine.
How often should I run?
  Test: We recommend once a month for testing.
  Planned: Once every six months, or in accordance with compliance requirements.
  Unplanned: Only in case of disaster, when the primary virtual machine is unavailable.
Does the primary virtual machine continue to replicate?
  Test: Yes.
  Planned: Yes. When the outage is resolved, reverse replication replicates the changes back to the primary site so that primary and secondary are synchronized.
  Unplanned: No.
Is there any data loss?
  Test: None.
  Planned: None. After failover, Hyper-V Replica replicates the last set of tracked changes back to the primary to ensure zero data loss.
  Unplanned: Depends on the event and recovery points.
Is there any downtime?
  Test: None. It doesn't impact your production environment. It creates a duplicate test virtual machine during failover. After failover finishes, you select Failover on the replica virtual machine and it's automatically cleaned up and deleted.
  Planned: The duration of the planned outage.
  Unplanned: The duration of the unplanned outage.
Recovery points: When you configure replication settings for a virtual machine, you specify the recovery
points you want to store from it. Recovery points represent a snapshot in time from which you can recover a
virtual machine. Obviously less data is lost if you recover from a very recent recovery point. You can access
recovery points up to 24 hours ago.
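If you prefer to script this setting, the Set-VMReplication cmdlet exposes the number of recovery points to keep. A minimal sketch, assuming a virtual machine named DemoVM that already has replication enabled:
# Keep 24 hourly recovery points for the replicated VM
Set-VMReplication -VMName "DemoVM" -RecoveryHistory 24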
Deployment prerequisites
Here's what you should verify before you begin:
Figure out which VHDs need to be replicated. In particular, VHDs that contain data that is rapidly
changing and not used by the Replica server after failover, such as page file disks, should be excluded from
replication to conserve network bandwidth. Make a note of which VHDs can be excluded.
Decide how often you need to synchronize data: The data on the Replica server is synchronized and
updated according to the replication frequency you configure (30 seconds, 5 minutes, or 15 minutes). The
frequency you choose should consider the following: Are the virtual machines running critical data with a
low RPO? What are your bandwidth considerations? Your highly critical virtual machines will obviously need
more frequent replication.
Decide how to recover data: By default Hyper-V Replica only stores a single recovery point that will be the
latest replication sent from the primary to the secondary. However if you want the option to recover data to
an earlier point in time you can specify that additional recovery points should be stored (to a maximum of
24 hourly points). If you do need additional recovery points you should note that this requires more
overhead on processing and storage resources.
Figure out which workloads you'll replicate: Standard Hyper-V Replica replication maintains
consistency in the state of the virtual machine operating system after a failover, but not the state of
applications that are running on the virtual machine. If you want to be able to recover your workload state, you
can create app-consistent recovery points. Note that app-consistent recovery isn't available on the extended
replica site if you're using extended (chained) replication.
Decide how to do the initial replication of virtual machine data: Replication starts by transferring the
current state of the virtual machines. This initial state can be transmitted directly over
the existing network, either immediately or at a later time that you configure. You can also use a pre-existing
restored virtual machine (for example, if you have restored an earlier backup of the virtual machine on the
Replica server) as the initial copy. Or, you can save network bandwidth by copying the initial copy to external
media and then physically delivering the media to the Replica site. If you want to use a preexisting virtual
machine, delete all previous snapshots associated with it.
Deployment steps
Step 1: Set up the Hyper-V hosts
You'll need at least two Hyper-V hosts with one or more virtual machines on each server. Get started with Hyper-V.
The host server that you'll replicate virtual machines to will need to be set up as the replica server.
1. In the Hyper-V settings for the server you'll replicate virtual machines to, in Replication Configuration,
select Enable this computer as a Replica server.
2. You can replicate over HTTP or encrypted HTTPS. Select Use Kerberos (HTTP) or Use certificate-based
Authentication (HTTPS). By default, port 80 (HTTP) and port 443 (HTTPS) are enabled as firewall exceptions on the
replica Hyper-V server. If you change the default port settings, you'll need to also change the firewall
exception. If you're replicating over HTTPS, you'll need to select a certificate and you should have certificate
authentication set up.
3. For authorization, select Allow replication from any authenticated server to allow the replica server to
accept virtual machine replication traffic from any primary server that authenticates successfully. Select
Allow replication from the specified servers to accept traffic only from the primary servers you
specifically select.
For both options you can specify where the replicated VHDs should be stored on the replica Hyper-V server.
4. Click OK or Apply.
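The same replica server settings can also be configured with Windows PowerShell. The following is a hedged sketch, assuming Kerberos (HTTP) authentication, replication allowed from any authenticated server, and a hypothetical storage path:
# Configure this host as a Hyper-V Replica server
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\ReplicaStorage"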
Step 2: Set up the firewall
To allow replication between the primary and secondary servers, traffic must get through the Windows firewall (or
any other third-party firewalls). When you install the Hyper-V role on the servers, firewall exceptions for HTTP
(80) and HTTPS (443) are created by default. If you're using these standard ports, you'll just need to enable the rules:
To enable the rules on a standalone host server:
1. Open Windows Firewall with Advanced Security and click Inbound Rules.
2. To enable HTTP (Kerberos) authentication, right-click Hyper-V Replica HTTP Listener (TCP-In)
>Enable Rule. To enable HTTPS certificate-based authentication, right-click Hyper-V Replica
HTTPS Listener (TCP-In) > Enable Rule.
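On a standalone host you can also enable the same rules from Windows PowerShell; for example, for Kerberos (HTTP) authentication:
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"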
To enable rules on a Hyper-V cluster, open a Windows PowerShell session using Run as Administrator,
then run one of these commands:
For HTTP:
Get-ClusterNode | ForEach-Object {Invoke-Command -ComputerName $_.Name -ScriptBlock {Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"}}
For HTTPS:
Get-ClusterNode | ForEach-Object {Invoke-Command -ComputerName $_.Name -ScriptBlock {Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTPS Listener (TCP-In)"}}
Enable virtual machine replication
Do the following on each virtual machine you want to replicate:
1. In the Details pane of Hyper-V Manager, select a virtual machine by clicking it.
Right-click the selected virtual machine and click Enable Replication to open the Enable Replication wizard.
2. On the Before you Begin page, click Next.
3. On the Specify Replica Server page, in the Replica Server box, enter either the NetBIOS or FQDN of the
Replica server. If the Replica server is part of a failover cluster, enter the name of the Hyper-V Replica Broker.
Click Next.
4. On the Specify Connection Parameters page, Hyper-V Replica automatically retrieves the authentication
and port settings you configured for the replica server. If values aren't being retrieved check that the server
is configured as a replica server, and that it's registered in DNS. If required, type in the settings manually.
5. On the Choose Replication VHDs page, make sure the VHDs you want to replicate are selected, and clear
the checkboxes for any VHDs that you want to exclude from replication. Then click Next.
6. On the Configure Replication Frequency page, specify how often changes should be synchronized from
primary to secondary. Then click Next.
7. On the Configure Additional Recovery Points page, select whether you want to maintain only the latest
recovery point or to create additional points. If you want to consistently recover applications and workloads
that have their own VSS writers we recommend you select Volume Shadow Copy Service (VSS)
frequency and specify how often to create app-consistent snapshots. Note that the Hyper-V Volume Shadow Copy
Requestor service must be running on both the primary and secondary Hyper-V servers. Then click Next.
8. On the Choose Initial Replication page, select the initial replication method to use. The default setting to
send initial copy over the network will copy the primary virtual machine configuration file (VMCX) and the
virtual hard disk files (VHDX and VHD) you selected over your network connection. Verify network
bandwidth availability if you're going to use this option. If the primary virtual machine is already configured
on the secondary site as a replica virtual machine, it can be useful to select Use an existing virtual
machine on the replication server as the initial copy. You can use Hyper-V export to export the primary
virtual machine and import it as a replica virtual machine on the secondary server. For larger virtual
machines or limited bandwidth, you can choose to have initial replication over the network occur at a later
time (for example, during off-peak hours), or to send the initial replication information as offline media.
If you do offline replication you'll transport the initial copy to the secondary server using an external storage
medium such as a hard disk or USB drive. To do this you'll need to connect the external storage to the
primary server (or owner node in a cluster) and then when you select Send initial copy using external media
you can specify a location locally or on your external media where the initial copy can be stored. A
placeholder virtual machine is created on the replica site. After initial replication completes the external
storage can be shipped to the replica site. There you'll connect the external media to the secondary server or
to the owner node of the secondary cluster. Then you'll import the initial replica to a specified location and
merge it into the placeholder virtual machine.
9. On the Completing the Enable Replication page, review the information in the Summary and then click
Finish. The virtual machine data will be transferred in accordance with your chosen settings, and a dialog
box appears indicating that replication was successfully enabled.
10. If you want to configure extended (chained) replication, open the replica server, and right-click the virtual
machine you want to replicate. Click Replication > Extend Replication and specify the replication settings.
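If you'd rather script these steps, the Enable-VMReplication and Start-VMInitialReplication cmdlets cover the basic wizard flow. A minimal sketch, assuming Kerberos authentication over port 80 and hypothetical server and VM names:
# Enable replication for the VM and start the initial copy over the network
Enable-VMReplication -VMName "DemoVM" -ReplicaServerName "ReplicaServer01.contoso.com" -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "DemoVM"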
Run a failover
After completing these deployment steps your replicated environment is up and running. Now you can run
failovers as needed.
Test failover: If you want to run a test failover right-click the primary virtual machine and select Replication >
Test Failover. Pick the latest or other recovery point if configured. A new test virtual machine will be created and
started on the secondary site. After you've finished testing, select Stop Test Failover on the replica virtual machine
to clean it up. Note that for a virtual machine, you can only run one test failover at a time. Read more.
Planned failover: To run a planned failover right-click the primary virtual machine and select Replication >
Planned Failover. Planned failover performs prerequisites checks to ensure zero data loss. It checks that the
primary virtual machine is shut down before beginning the failover. After the virtual machine is failed over, it starts
replicating the changes back to the primary site when it's available. Note that for this to work the primary server
should be configured to receive replication from the secondary server or from the Hyper-V Replica Broker in the
case of a primary cluster. Planned failover sends the last set of tracked changes. Read more.
Unplanned failover: To run an unplanned failover, right-click on the replica virtual machine and select
Replication > Unplanned Failover from Hyper-V Manager or Failover Clustering Manager. You can recover
from the latest recovery point or from previous recovery points if this option is enabled. After failover, check that
everything is working as expected on the failed over virtual machine, then click Complete on the replica virtual
machine. Read more.
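These failovers can also be driven from Windows PowerShell. The following sketch runs and then cleans up a test failover; it assumes a replica virtual machine named DemoVM and is run on the replica server:
# Start a test failover using the latest recovery point
Start-VMFailover -VMName "DemoVM" -AsTest
# ... test the new virtual machine, then clean it up
Stop-VMFailover -VMName "DemoVM"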
Use live migration without Failover Clustering to move a virtual machine
4/24/2017 • 2 min to read • Edit Online
Applies To: Windows Server 2016
This article shows you how to move a virtual machine by doing a live migration without using Failover Clustering.
A live migration moves running virtual machines between Hyper-V hosts without any noticeable downtime.
To be able to do this, you'll need:
A user account that's a member of the local Hyper-V Administrators group or the Administrators group on
both the source and destination computers.
The Hyper-V role in Windows Server 2016 or Windows Server 2012 R2 installed on the source and
destination servers and set up for live migrations. You can do a live migration between hosts running
Windows Server 2016 and Windows Server 2012 R2 if the virtual machine is at least version 5.
For version upgrade instructions, see Upgrade virtual machine version in Hyper-V on Windows 10 or
Windows Server 2016. For installation instructions, see Set up hosts for live migration.
The Hyper-V management tools installed on a computer running Windows Server 2016 or Windows 10,
unless the tools are installed on the source or destination server and you'll run them from there.
Use Hyper-V Manager to move a running virtual machine
1. Open Hyper-V Manager. (From Server Manager, click Tools > Hyper-V Manager.)
2. In the navigation pane, select one of the servers. (If it isn't listed, right-click Hyper-V Manager, click
Connect to Server, type the server name, and click OK. Repeat to add more servers.)
3. From the Virtual Machines pane, right-click the virtual machine and then click Move. This opens the Move
Wizard.
4. Use the wizard pages to choose the type of move, destination server, and options.
5. On the Summary page, review your choices and then click Finish.
Use Windows PowerShell to move a running virtual machine
The following example uses the Move-VM cmdlet to move a virtual machine named LMTest to a destination server
named TestServer02 and moves the virtual hard disks and other files, such as checkpoints and Smart Paging files, to
the D:\LMTest directory on the destination server.
PS C:\> Move-VM LMTest TestServer02 -IncludeStorage -DestinationStoragePath D:\LMTest
Troubleshooting
Failed to establish a connection
If you haven't set up constrained delegation, you must sign in to the source server before you can move a virtual
machine. If you don't do this, the authentication attempt fails, an error occurs, and this message is displayed:
"Virtual machine migration operation failed at migration Source.
Failed to establish a connection with host computer name: No credentials are available in the security package
0x8009030E."
To fix this problem, sign in to the source server and try the move again. To avoid having to sign in to a source
server before doing a live migration, set up constrained delegation. You'll need domain administrator credentials to
set up constrained delegation. For instructions, see Set up hosts for live migration.
Failed because the host hardware isn't compatible
If a virtual machine doesn't have processor compatibility turned on and has one or more snapshots, the move fails
if the hosts have different processor versions. An error occurs and this message is displayed:
The virtual machine cannot be moved to the destination computer. The hardware on the destination
computer is not compatible with the hardware requirements of this virtual machine.
To fix this problem, shut down the virtual machine and turn on the processor compatibility setting.
1. From Hyper-V Manager, in the Virtual Machines pane, right-click the virtual machine and click Settings.
2. In the navigation pane, expand Processors and click Compatibility.
3. Check Migrate to a computer with a different processor version.
4. Click OK.
To use Windows PowerShell, use the Set-VMProcessor cmdlet:
PS C:\> Set-VMProcessor TestVM -CompatibilityForMigrationEnabled $true
Hyper-V Virtual Switch
5/16/2017 • 4 min to read • Edit Online
Applies To: Windows Server 2016
This topic provides an overview of Hyper-V Virtual Switch, which provides you with the ability to connect virtual
machines (VMs) to networks that are external to the Hyper-V host, including your organization's intranet and the
Internet.
You can also connect to virtual networks on the server that is running Hyper-V when you deploy Software Defined
Networking (SDN).
NOTE
In addition to this topic, the following Hyper-V Virtual Switch documentation is available.
Manage Hyper-V Virtual Switch
Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET)
Network Switch Team Cmdlets in Windows PowerShell
Create networks with VMM 2012
Hyper-V: Configure VLANs and VLAN Tagging
Hyper-V: The WFP virtual switch extension should be enabled if it is required by third party extensions
For more information about other networking technologies, see Networking in Windows Server 2016.
Hyper-V Virtual Switch is a software-based layer-2 Ethernet network switch that is available in Hyper-V Manager
when you install the Hyper-V server role.
Hyper-V Virtual Switch includes programmatically managed and extensible capabilities to connect VMs to both
virtual networks and the physical network. In addition, Hyper-V Virtual Switch provides policy enforcement for
security, isolation, and service levels.
NOTE
Hyper-V Virtual Switch only supports Ethernet, and does not support any other wired local area network (LAN) technologies,
such as Infiniband and Fibre Channel.
Hyper-V Virtual Switch includes tenant isolation capabilities, traffic shaping, protection against malicious virtual
machines, and simplified troubleshooting.
With built-in support for Network Device Interface Specification (NDIS) filter drivers and Windows Filtering
Platform (WFP) callout drivers, the Hyper-V Virtual Switch enables independent software vendors (ISVs) to create
extensible plug-ins, called Virtual Switch Extensions, that can provide enhanced networking and security
capabilities. Virtual Switch Extensions that you add to the Hyper-V Virtual Switch are listed in the Virtual Switch
Manager feature of Hyper-V Manager.
In the following illustration, a VM has a virtual NIC that is connected to the Hyper-V Virtual Switch through a
switch port.
Hyper-V Virtual Switch capabilities provide you with more options for enforcing tenant isolation, shaping and
controlling network traffic, and employing protective measures against malicious VMs.
NOTE
In Windows Server 2016, a VM with a virtual NIC accurately displays the maximum throughput for the virtual NIC. To view
the virtual NIC speed in Network Connections, right-click the desired virtual NIC icon and then click Status. The virtual NIC
Status dialog box opens. In Connection, the value of Speed matches the speed of the physical NIC installed in the server.
Uses for Hyper-V Virtual Switch
Following are some use case scenarios for Hyper-V Virtual Switch.
Displaying statistics: A developer at a hosted cloud vendor implements a management package that displays the
current state of the Hyper-V virtual switch. The management package can query switch-wide current capabilities,
configuration settings, and individual port network statistics using WMI. The status of the switch is then displayed
to give administrators a quick view of the state of the switch.
Resource tracking: A hosting company is selling hosting services priced according to the level of membership.
Various membership levels include different network performance levels. The administrator allocates resources to
meet the SLAs in a manner that balances network availability. The administrator programmatically tracks
information such as the current usage of bandwidth assigned, and the number of virtual machine (VM) assigned
virtual machine queue (VMQ) or IOV channels. The same program also periodically logs the resources in use, in
addition to the per-VM resources assigned, for double-entry tracking of resources.
Managing the order of switch extensions: An enterprise has installed extensions on their Hyper-V host to both
monitor traffic and report intrusion detection. During maintenance, some extensions may be updated causing the
order of extensions to change. A simple script program is run to reorder the extensions after updates.
Forwarding extension manages VLAN ID: A major switch company is building a forwarding extension that
applies all policies for networking. One element that is managed is virtual local area network (VLAN) IDs. The
virtual switch cedes control of the VLAN to a forwarding extension. The switch company's installation program
programmatically calls a Windows Management Instrumentation (WMI) application programming interface (API)
that turns on transparency, telling the Hyper-V Virtual Switch to pass and take no action on VLAN tags.
Hyper-V Virtual Switch Functionality
Some of the principal features that are included in the Hyper-V Virtual Switch are:
ARP/ND Poisoning (spoofing) protection: Provides protection against a malicious VM using Address
Resolution Protocol (ARP) spoofing to steal IP addresses from other VMs. Provides protection against
attacks that can be launched for IPv6 using Neighbor Discovery (ND) spoofing.
DHCP Guard protection: Protects against a malicious VM representing itself as a Dynamic Host
Configuration Protocol (DHCP) server for man-in-the-middle attacks.
Port ACLs: Provides traffic filtering based on Media Access Control (MAC) or Internet Protocol (IP)
addresses/ranges, which enables you to set up virtual network isolation.
Trunk mode to a VM: Enables administrators to set up a specific VM as a virtual appliance, and then direct
traffic from various VLANs to that VM.
Network traffic monitoring: Enables administrators to review traffic that is traversing the network switch.
Isolated (private) VLAN: Enables administrators to segregate traffic on multiple VLANs, to more easily
establish isolated tenant communities.
Following is a list of capabilities that enhance Hyper-V Virtual Switch usability:
Bandwidth limit and burst support: Bandwidth minimum guarantees a reserved amount of bandwidth.
Bandwidth maximum caps the amount of bandwidth a VM can consume (see the sketch after this list).
Explicit Congestion Notification (ECN) marking support: ECN marking, also known as Data Center TCP
(DCTCP), enables the physical switch and operating system to regulate traffic flow such that the buffer
resources of the switch are not flooded, which results in increased traffic throughput.
Diagnostics: Diagnostics allow easy tracing and monitoring of events and packets through the virtual
switch.
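As a hedged illustration of the bandwidth controls above, the sketch below caps a VM's traffic with Set-VMNetworkAdapter; the virtual machine name is hypothetical and the values are examples only:
# Cap the VM's network adapter at roughly 1 Gbps (the value is in bits per second)
Set-VMNetworkAdapter -VMName "TenantVM01" -MaximumBandwidth 1GB
# Assign a relative minimum bandwidth weight (applies when the switch uses weight-based minimum bandwidth)
Set-VMNetworkAdapter -VMName "TenantVM01" -MinimumBandwidthWeight 50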
Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET)
5/23/2017 • 14 min to read • Edit Online
Applies To: Windows Server 2016
This topic provides information on configuring Remote Direct Memory Access (RDMA) interfaces with Hyper-V in
Windows Server 2016, in addition to information about Switch Embedded Teaming (SET).
NOTE
In addition to this topic, the following Switch Embedded Teaming content is available.
TechNet Gallery Download: Windows Server 2016 NIC and Switch Embedded Teaming User Guide
TIP
In editions of Windows Server previous to Windows Server 2016, it is not possible to configure RDMA on network adapters
that are bound to a NIC Team or to a Hyper-V Virtual Switch. In Windows Server 2016, you can enable RDMA on network
adapters that are bound to a Hyper-V Virtual Switch with or without Switch Embedded Teaming (SET).
Configuring RDMA Interfaces with Hyper-V
In Windows Server 2012 R2, you cannot use both RDMA and Hyper-V on the same computer because the network adapters that
provide RDMA services cannot be bound to a Hyper-V Virtual Switch. This increases the number of physical
network adapters that are required to be installed in the Hyper-V host.
In Windows Server 2016, you can use fewer network adapters while using RDMA with or without SET.
The image below illustrates the software architecture changes between Windows Server 2012 R2 and Windows
Server 2016.
The following sections provide instructions on how to use Windows PowerShell commands to enable Data Center
Bridging (DCB), create a Hyper-V Virtual Switch with an RDMA virtual NIC (vNIC), and create a Hyper-V Virtual
Switch with SET and RDMA vNICs.
Enable Data Center Bridging (DCB )
Before using any RDMA over Converged Ethernet (RoCE) version of RDMA, you must enable DCB. While not
required for Internet Wide Area RDMA Protocol (iWARP) networks, testing has determined that all Ethernet-based
RDMA technologies work better with DCB. Because of this, you should consider using DCB even for iWARP RDMA
deployments.
The following Windows PowerShell script provides an example of how to enable and configure DCB for SMB
Direct:
#
# Turn on DCB
Install-WindowsFeature Data-Center-Bridging
#
# Set a policy for SMB-Direct
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
#
# Turn on Flow Control for SMB
Enable-NetQosFlowControl -Priority 3
#
# Make sure flow control is off for other traffic
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
#
# Apply policy to the target adapters
Enable-NetAdapterQos -Name "SLOT 2"
#
# Give SMB Direct 30% of the bandwidth minimum
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 30 -Algorithm ETS
If you have a kernel debugger installed in the system, you must configure the debugger to allow QoS to be set by
running the following command.
#
# Override the Debugger - by default the debugger blocks NetQos
#
Set-ItemProperty HKLM:"\SYSTEM\CurrentControlSet\Services\NDIS\Parameters" AllowFlowControlUnderDebugger -type DWORD -Value 1 -Force
Create a Hyper-V Virtual Switch with an RDMA vNIC
If SET is not required for your deployment, you can use the following PowerShell script to create a Hyper-V Virtual
Switch with an RDMA vNIC.
NOTE
Using SET teams with RDMA-capable physical NICs provides more RDMA resources for the vNICs to consume.
#
# Create a vmSwitch without SET
#
New-VMSwitch -Name RDMAswitch -NetAdapterName "SLOT 2"
#
# Add host vNICs and make them RDMA capable
#
Add-VMNetworkAdapter -SwitchName RDMAswitch -Name SMB_1
Enable-NetAdapterRDMA "vEthernet (SMB_1)"
#
# Verify RDMA capabilities
#
Get-NetAdapterRdma
Create a Hyper-V Virtual Switch with SET and RDMA vNICs
To make use of RDMA capabilies on Hyper-V host virtual network adapters (vNICs) on a Hyper-V Virtual Switch
that supports RDMA teaming, you can use this example Windows PowerShell script.
#
# Create a vmSwitch with SET
#
New-VMSwitch -Name SETswitch -NetAdapterName "SLOT 2","SLOT 3" -EnableEmbeddedTeaming $true
#
# Add host vNICs and make them RDMA capable
#
Add-VMNetworkAdapter -SwitchName SETswitch -Name SMB_1 -managementOS
Add-VMNetworkAdapter -SwitchName SETswitch -Name SMB_2 -managementOS
Enable-NetAdapterRDMA "vEthernet (SMB_1)","vEthernet (SMB_2)"
#
# Verify RDMA capabilities; ensure that the capabilities are non-zero
#
Get-NetAdapterRdma | fl *
#
# Many switches won't pass traffic class information on untagged VLAN traffic,
# so make sure host adapters for RDMA are on VLANs. (This example assigns the two SMB_*
# host virtual adapters to VLAN 42.)
#
Set-VMNetworkAdapterIsolation -ManagementOS -VMNetworkAdapterName SMB_1 -IsolationMode VLAN -DefaultIsolationID 42
Set-VMNetworkAdapterIsolation -ManagementOS -VMNetworkAdapterName SMB_2 -IsolationMode VLAN -DefaultIsolationID 42
Switch Embedded Teaming (SET)
This section provides an overview of Switch Embedded Teaming (SET) in Windows Server 2016, and contains the
following sections.
SET Overview
SET Availability
Supported and Unsupported NICs for SET
SET Compatibility with Windows Server Networking Technologies
SET Modes and Settings
SET and Virtual Machine Queues (VMQs)
SET and Hyper-V Network Virtualization (HNV)
SET and Live Migration
MAC Address Use on Transmitted Packets
Managing a SET team
SET Overview
SET is an alternative NIC Teaming solution that you can use in environments that include Hyper-V and the
Software Defined Networking (SDN) stack in Windows Server 2016. SET integrates some NIC Teaming
functionality into the Hyper-V Virtual Switch.
SET allows you to group between one and eight physical Ethernet network adapters into one or more software-based virtual network adapters. These virtual network adapters provide fast performance and fault tolerance in the
event of a network adapter failure.
SET member network adapters must all be installed in the same physical Hyper-V host to be placed in a team.
NOTE
The use of SET is only supported in Hyper-V Virtual Switch in Windows Server 2016. You cannot deploy SET in Windows
Server 2012 R2.
In addition, you can connect your teamed NICs to the same physical switch or to different physical switches. If you
connect NICs to different switches, both switches must be on the same subnet.
The following illustration depicts SET architecture.
Because SET is integrated into the Hyper-V Virtual Switch, you cannot use SET inside of a virtual machine (VM).
You can, however use NIC Teaming within VMs. For more information, see NIC Teaming in Virtual Machines
(VMs).
In addition, SET architecture does not expose team interfaces. Instead, you must configure Hyper-V Virtual Switch
ports.
SET Availability
SET is available in all versions of Windows Server 2016 that include Hyper-V and the SDN stack. In addition, you
can use Windows PowerShell commands and Remote Desktop connections to manage SET from remote
computers that are running a client operating system upon which the tools are supported.
Supported NICs for SET
You can use any Ethernet NIC that has passed the Windows Hardware Qualification and Logo (WHQL) test in a SET
team in Windows Server 2016. SET requires that all network adapters that are members of a SET team must be
identical (i.e., same manufacturer, same model, same firmware and driver). SET supports between one and eight
network adapters in a team.
SET Compatibility with Windows Server Networking Technologies
SET is compatible with the following networking technologies in Windows Server 2016.
Datacenter bridging (DCB)
Hyper-V Network Virtualization - NV-GRE and VxLAN are both supported in Windows Server 2016.
Receive-side Checksum offloads (IPv4, IPv6, TCP) - These are supported if any of the SET team members
support them.
Remote Direct Memory Access (RDMA)
Single root I/O virtualization (SR-IOV)
Transmit-side Checksum offloads (IPv4, IPv6, TCP) - These are supported if all of the SET team members
support them.
Virtual Machine Queues (VMQ)
Virtual Receive Side Scaling (RSS)
SET is not compatible with the following networking technologies in Windows Server 2016.
802.1X authentication
IPsec Task Offload (IPsecTO)
QoS in host or native operating systems
Receive side coalescing (RSC)
Receive side scaling (RSS)
TCP Chimney Offload
Virtual Machine QoS (VM-QoS)
SET Modes and Settings
Unlike NIC Teaming, when you create a SET team, you cannot configure a team name. In addition, using a standby
adapter is supported in NIC Teaming, but it is not supported in SET. When you deploy SET, all network adapters
are active and none are in standby mode.
Another key difference between NIC Teaming and SET is that NIC Teaming provides the choice of three different
teaming modes, while SET supports only Switch Independent mode. With Switch Independent mode, the switch
or switches to which the SET Team members are connected are unaware of the presence of the SET team and do
not determine how to distribute network traffic to SET team members - instead, the SET team distributes inbound
network traffic across the SET team members.
When you create a new SET team, you must configure the following team properties.
Member adapters
Load balancing mode
Member adapters
When you create a SET team, you must specify up to eight identical network adapters that are bound to the Hyper-V Virtual Switch as SET team member adapters.
Load Balancing mode
The options for SET team Load Balancing distribution mode are Hyper-V Port and Dynamic.
Hyper-V Port
VMs are connected to a port on the Hyper-V Virtual Switch. When using Hyper-V Port mode for SET teams, the
Hyper-V Virtual Switch port and the associated MAC address are used to divide network traffic between SET team
members.
NOTE
When you use SET in conjunction with Packet Direct, the teaming mode Switch Independent and the load balancing mode
Hyper-V Port are required.
Because the adjacent switch always sees a particular MAC address on a given port, the switch distributes the
ingress load (the traffic from the switch to the host) to the port where the MAC address is located. This is
particularly useful when Virtual Machine Queues (VMQs) are used, because a queue can be placed on the specific
NIC where the traffic is expected to arrive.
However, if the host has only a few VMs, this mode might not be granular enough to achieve a well-balanced
distribution. This mode will also always limit a single VM (i.e., the traffic from a single switch port) to the
bandwidth that is available on a single interface.
Dynamic
This load balancing mode provides the following advantages.
Outbound loads are distributed based on a hash of the TCP Ports and IP addresses. Dynamic mode also rebalances loads in real time so that a given outbound flow can move back and forth between SET team
members.
Inbound loads are distributed in the same manner as the Hyper-V port mode.
The outbound loads in this mode are dynamically balanced based on the concept of flowlets. Just as human
speech has natural breaks at the ends of words and sentences, TCP flows (TCP communication streams) also have
naturally occurring breaks. The portion of a TCP flow between two such breaks is referred to as a flowlet.
When the dynamic mode algorithm detects that a flowlet boundary has been encountered - for example when a
break of sufficient length has occurred in the TCP flow - the algorithm automatically rebalances the flow to
another team member if appropriate. In some uncommon circumstances, the algorithm might also periodically
rebalance flows that do not contain any flowlets. Because of this, the affinity between TCP flow and team member
can change at any time as the dynamic balancing algorithm works to balance the workload of the team members.
SET and Virtual Machine Queues (VMQs)
VMQ and SET work well together, and you should enable VMQ whenever you are using Hyper-V and SET.
NOTE
SET always presents the total number of queues that are available across all SET team members. In NIC Teaming, this is
called Sum-of-Queues mode.
Most network adapters have queues that can be used for either Receive Side Scaling (RSS) or VMQ, but not both
at the same time.
Some VMQ settings appear to be settings for RSS queues but are really settings on the generic queues that both
RSS and VMQ use depending on which feature is presently in use. Each NIC has, in its advanced properties, values
for *RssBaseProcNumber and *MaxRssProcessors .
Following are a few VMQ settings that provide better system performance.
Ideally each NIC should have the *RssBaseProcNumber set to an even number greater than or equal to two (2).
This is because the first physical processor, Core 0 (logical processors 0 and 1), typically does most of the
system processing so the network processing should be steered away from this physical processor.
NOTE
Some machine architectures don't have two logical processors per physical processor, so for such machines the base
processor should be greater than or equal to 1. If in doubt, assume your host is using a 2 logical processor per physical
processor architecture.
The team members' processors should be, to the extent that it's practical, non-overlapping. For example, in a 4-core host (8 logical processors) with a team of 2 10Gbps NICs, you could set the first one to use base processor
of 2 and to use 4 cores; the second would be set to use base processor 6 and use 2 cores.
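A minimal sketch of that example, using Set-NetAdapterVmq with hypothetical adapter names:
# First team member: base processor 2, spread across 4 cores
Set-NetAdapterVmq -Name "SLOT 2" -BaseProcessorNumber 2 -MaxProcessors 4
# Second team member: base processor 6, spread across 2 cores
Set-NetAdapterVmq -Name "SLOT 3" -BaseProcessorNumber 6 -MaxProcessors 2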
SET and Hyper-V Network Virtualization (HNV)
SET is fully compatible with Hyper-V Network Virtualization in Windows Server 2016. The HNV management
system provides information to the SET driver that allows SET to distribute the network traffic load in a manner
that is optimized for the HNV traffic.
SET and Live Migration
Live Migration is supported in Windows Server 2016.
MAC Address Use on Transmitted Packets
When you configure a SET team with dynamic load distribution, the packets from a single source (such as a single
VM) are simultaneously distributed across multiple team members.
To prevent the switches from getting confused and to prevent MAC flapping alarms, SET replaces the source MAC
address with a different MAC address on the frames that are transmitted on team members other than the
affinitized team member. Because of this, each team member uses a different MAC address, and MAC address
conflicts are prevented unless and until failure occurs.
When a failure is detected on the primary NIC, the SET teaming software starts using the VM's MAC address on
the team member that is chosen to serve as the temporary affinitized team member (i.e., the one that will now
appear to the switch as the VM's interface).
This change only applies to traffic that was going to be sent on the VM's affinitized team member with the VM's
own MAC address as its source MAC address. Other traffic continues to be sent with whatever source MAC
address it would have used prior to the failure.
Following are lists that describe SET teaming MAC address replacement behavior, based on how the team is
configured:
In Switch Independent mode with Hyper-V Port distribution
Every vmSwitch port is affinitized to a team member
Every packet is sent on the team member to which the port is affinitized
No source MAC replacement is done
In Switch Independent mode with Dynamic distribution
Every vmSwitch port is affinitized to a team member
All ARP/NS packets are sent on the team member to which the port is affinitized
Packets sent on the team member that is the affinitized team member have no source MAC address
replacement done
Packets sent on a team member other than the affinitized team member will have source MAC
address replacement done
Managing a SET team
It is recommended that you use System Center Virtual Machine Manager (VMM) to manage SET teams, however
you can also use Windows PowerShell to manage SET. The following sections provide the Windows PowerShell
commands that you can use to manage SET.
For information on how to create a SET team using VMM, see the section "Set up a logical switch" in the System
Center VMM library topic Create logical switches.
Create a SET team
You must create a SET team at the same time that you create the Hyper-V Virtual Switch by using the New-VMSwitch Windows PowerShell command.
When you create the Hyper-V Virtual Switch, you must include the new EnableEmbeddedTeaming parameter in
your command syntax. In the following example, a Hyper-V switch named TeamedvSwitch with embedded
teaming and two initial team members is created.
New-VMSwitch -Name TeamedvSwitch -NetAdapterName "NIC 1","NIC 2" -EnableEmbeddedTeaming $true
The EnableEmbeddedTeaming parameter is assumed by Windows PowerShell when the argument to
NetAdapterName is an array of NICs instead of a single NIC. As a result, you could revise the previous command
in the following way.
New-VMSwitch -Name TeamedvSwitch -NetAdapterName "NIC 1","NIC 2"
If you want to create a SET-capable switch with a single team member so that you can add a team member at a
later time, then you must use the EnableEmbeddedTeaming parameter.
New-VMSwitch -Name TeamedvSwitch -NetAdapterName "NIC 1" -EnableEmbeddedTeaming $true
Adding or removing a SET team member
The Set-VMSwitchTeam command includes the NetAdapterName option. To change the team members in a
SET team, enter the desired list of team members after the NetAdapterName option. If TeamedvSwitch was
originally created with NIC 1 and NIC 2, then the following example command deletes SET team member "NIC 2"
and adds new SET team member "NIC 3".
Set-VMSwitchTeam -Name TeamedvSwitch -NetAdapterName "NIC 1","NIC 3"
Removing a SET team
You can remove a SET team only by removing the Hyper-V Virtual Switch that contains the SET team. Use the
topic Remove-VMSwitch for information on how to remove the Hyper-V Virtual Switch. The following example
removes a Virtual Switch named SETvSwitch.
Remove-VMSwitch "SETvSwitch"
Changing the load distribution algorithm for a SET team
The Set-VMSwitchTeam cmdlet has a VMSwitchLoadBalancingAlgorithm option. This option takes one of
two possible values: HyperVPort or Dynamic. To set or change the load distribution algorithm for a switch-embedded team, use this option.
In the following example, the VMSwitchTeam named TeamedvSwitch uses the Dynamic load balancing
algorithm.
Set-VMSwitchTeam -Name TeamedvSwitch -VMSwitchLoadBalancingAlgorithm Dynamic
Manage Hyper-V Virtual Switch
4/24/2017 • 1 min to read • Edit Online
Applies To: Windows Server 2016
You can use this topic to access Hyper-V Virtual Switch management content.
This section contains the following topics.
Configure VLANs on Hyper-V Virtual Switch Ports
Create Security Policies with Extended Port Access Control Lists
Configure and View VLAN Settings on Hyper-V Virtual Switch Ports
5/8/2017 • 2 min to read • Edit Online
Applies To: Windows Server 2016
You can use this topic to learn best practices for configuring and viewing virtual Local Area Network (VLAN)
settings on a Hyper-V Virtual Switch port.
When you want to configure VLAN settings on Hyper-V Virtual Switch ports, you can use either Windows® Server
2016 Hyper-V Manager or System Center Virtual Machine Manager (VMM).
If you are using VMM, VMM uses the following Windows PowerShell command to configure the switch port.
Set-VMNetworkAdapterIsolation <VM-name|-managementOS> -IsolationMode VLAN -DefaultIsolationID <vlan-value> -AllowUntaggedTraffic $True
If you are not using VMM and are configuring the switch port in Windows Server, you can use the Hyper-V
Manager console or the following Windows PowerShell command.
Set-VMNetworkAdapterVlan <VM-name|-managementOS> -Access -VlanID <vlan-value>
Because of these two methods for configuring VLAN settings on Hyper-V Virtual Switch ports, it is possible that
when you attempt to view the switch port settings, it appears to you that the VLAN settings are not configured, even when they are configured.
Use the same method to configure and view switch port VLAN settings
To ensure that you do not encounter these issues, you must use the same method to view your switch port VLAN
settings that you used to configure the switch port VLAN settings.
To configure and view VLAN switch port settings, you must do the following:
If you are using VMM or Network Controller to set up and manage your network, and you have deployed
Software Defined Networking (SDN), you must use the VMNetworkAdapterIsolation cmdlets.
If you are using Windows Server 2016 Hyper-V Manager or Windows PowerShell cmdlets, and you have not
deployed Software Defined Networking (SDN), you must use the VMNetworkAdapterVlan cmdlets.
Possible issues
If you do not follow these guidelines you might encounter the following issues.
In circumstances where you have deployed SDN and use VMM, Network Controller, or the
VMNetworkAdapterIsolation cmdlets to configure VLAN settings on a Hyper-V Virtual Switch port: If you use
Hyper-V Manager or Get-VMNetworkAdapterVlan to view the configuration settings, the command output
will not display your VLAN settings. Instead you must use the Get-VMNetworkAdapterIsolation cmdlet to view the
VLAN settings.
In circumstances where you have not deployed SDN, and instead use Hyper-V Manager or the
VMNetworkAdapterVlan cmdlets to configure VLAN settings on a Hyper-V Virtual Switch port: If you use the
Get-VMNetworkAdapterIsolation cmdlet to view the configuration settings, the command output will not display your
VLAN settings. Instead you must use the Get-VMNetworkAdapterVlan cmdlet to view the VLAN settings.
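For example, to view settings with the matching cmdlet family (the virtual machine name is hypothetical):
# For ports configured through SDN, VMM, or the Isolation cmdlets
Get-VMNetworkAdapterIsolation -VMName "TenantVM01"
# For ports configured through Hyper-V Manager or the Vlan cmdlets
Get-VMNetworkAdapterVlan -VMName "TenantVM01"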
It is also important not to attempt to configure the same switch port VLAN settings by using both of these
configuration methods. If you do this, the switch port is incorrectly configured, and the result might be a failure in
network communication.
Resources
For more information on the Windows PowerShell commands that are mentioned in this topic, see the following:
Set-VmNetworkAdapterIsolation
Get-VmNetworkAdapterIsolation
Set-VMNetworkAdapterVlan
Get-VMNetworkAdapterVlan
Create Security Policies with Extended Port Access Control Lists
4/24/2017 • 9 min to read • Edit Online
Applies To: Windows Server 2016
This topic provides information about extended port Access Control Lists (ACLs) in Windows Server 2016. You can
configure extended ACLs on the Hyper-V Virtual Switch to allow and block network traffic to and from the virtual
machines (VMs) that are connected to the switch via virtual network adapters.
This topic contains the following sections.
Detailed ACL rules
Stateful ACL rules
Detailed ACL rules
Hyper-V Virtual Switch extended ACLs allow you to create detailed rules that you can apply to individual VM
network adapters that are connected to the Hyper-V Virtual Switch. The ability to create detailed rules allows
Enterprises and Cloud Service Providers (CSPs) to address network-based security threats in a multitenant shared
server environment.
With extended ACLs, rather than having to create broad rules that block or allow all traffic from all protocols to or
from a VM, you can now block or allow the network traffic of individual protocols that are running on VMs. You can
create extended ACL rules in Windows Server 2016 that include the following 5-tuple set of parameters: source IP
address, destination IP address, protocol, source port, and destination port. In addition, each rule can specify
network traffic direction (in or out), and the action the rule supports (block or allow traffic).
For example, you can configure port ACLs for a VM to allow all incoming and outgoing HTTP traffic on port 80 and HTTPS traffic on port 443, while blocking the network traffic of all other protocols on all ports.
This ability to designate the protocol traffic that can or cannot be received by tenant VMs provides flexibility when
you configure security policies.
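As a sketch of that example (the VM name "WebVM01" and the rule weights are hypothetical placeholders; the cmdlet itself is described in the next section), you could block all inbound traffic and then allow only the web ports:
# Catch-all deny rule with a low weight; the allow rules use higher weights so they are processed first
Add-VMNetworkAdapterExtendedAcl -VMName "WebVM01" -Action "Deny" -Direction "Inbound" -Weight 1
Add-VMNetworkAdapterExtendedAcl -VMName "WebVM01" -Action "Allow" -Direction "Inbound" -LocalPort 80 -Protocol "TCP" -Weight 10
Add-VMNetworkAdapterExtendedAcl -VMName "WebVM01" -Action "Allow" -Direction "Inbound" -LocalPort 443 -Protocol "TCP" -Weight 11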
Configuring ACL rules with Windows PowerShell
To configure an extended ACL, you must use the Windows PowerShell command Add-VMNetworkAdapterExtendedAcl. This command has four different syntaxes, with a distinct use for each syntax:
1. Add an extended ACL to all of the network adapters of a named VM - which is specified by the first
parameter, -VMName. Syntax:
NOTE
If you want to add an extended ACL to one network adapter rather than all, you can specify the network adapter with
the parameter -VMNetworkAdapterName.
Add-VMNetworkAdapterExtendedAcl [-VMName] <string[]> [-Action] <VMNetworkAdapterExtendedAclAction> {Allow | Deny}
[-Direction] <VMNetworkAdapterExtendedAclDirection> {Inbound | Outbound} [[-LocalIPAddress] <string>]
[[-RemoteIPAddress] <string>] [[-LocalPort] <string>] [[-RemotePort] <string>] [[-Protocol] <string>] [-Weight] <int>
[-Stateful <bool>] [-IdleSessionTimeout <int>] [-IsolationID <int>] [-Passthru] [-VMNetworkAdapterName <string>]
[-ComputerName <string[]>] [-WhatIf] [-Confirm] [<CommonParameters>]
2. Add an extended ACL to a specific virtual network adapter on a specific VM. Syntax:
Add-VMNetworkAdapterExtendedAcl [-VMNetworkAdapter] <VMNetworkAdapterBase[]> [-Action] <VMNetworkAdapterExtendedAclAction> {Allow | Deny}
[-Direction] <VMNetworkAdapterExtendedAclDirection> {Inbound | Outbound} [[-LocalIPAddress] <string>]
[[-RemoteIPAddress] <string>] [[-LocalPort] <string>] [[-RemotePort] <string>] [[-Protocol] <string>] [-Weight] <int>
[-Stateful <bool>] [-IdleSessionTimeout <int>] [-IsolationID <int>] [-Passthru] [-WhatIf] [-Confirm] [<CommonParameters>]
3. Add an extended ACL to all virtual network adapters that are reserved for use by the Hyper-V host
management operating system.
NOTE
If you want to add an extended ACL to one network adapter rather than all, you can specify the network adapter with
the parameter -VMNetworkAdapterName.
Add-VMNetworkAdapterExtendedAcl [-Action] <VMNetworkAdapterExtendedAclAction> {Allow | Deny}
[-Direction] <VMNetworkAdapterExtendedAclDirection> {Inbound | Outbound} [[-LocalIPAddress] <string>]
[[-RemoteIPAddress] <string>] [[-LocalPort] <string>] [[-RemotePort] <string>] [[-Protocol] <string>] [-Weight] <int>
-ManagementOS [-Stateful <bool>] [-IdleSessionTimeout <int>] [-IsolationID <int>] [-Passthru] [-VMNetworkAdapterName <string>]
[-ComputerName <string[]>] [-WhatIf] [-Confirm] [<CommonParameters>]
4. Add an extended ACL to a VM object that you have created in Windows PowerShell, such as $vm = Get-VM
"my_vm". On the next line of code, you can run this command to create an extended ACL with the following
syntax (a combined example sketch for syntaxes 2 through 4 follows the list):
Add-VMNetworkAdapterExtendedAcl [-VM] <VirtualMachine[]> [-Action] <VMNetworkAdapterExtendedAclAction> {Allow | Deny}
[-Direction] <VMNetworkAdapterExtendedAclDirection> {Inbound | Outbound} [[-LocalIPAddress] <string>]
[[-RemoteIPAddress] <string>] [[-LocalPort] <string>] [[-RemotePort] <string>] [[-Protocol] <string>] [-Weight] <int>
[-Stateful <bool>] [-IdleSessionTimeout <int>] [-IsolationID <int>] [-Passthru] [-VMNetworkAdapterName <string>]
[-WhatIf] [-Confirm] [<CommonParameters>]
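The following is a combined minimal sketch of syntaxes 2, 3, and 4. The VM name, adapter name, ports, and weights are hypothetical placeholders; substitute values that match your environment.
# Syntax 2: target one specific virtual network adapter (hypothetical VM and adapter names)
$adapter = Get-VMNetworkAdapter -VMName "TenantVM01" -Name "Network Adapter"
Add-VMNetworkAdapterExtendedAcl -VMNetworkAdapter $adapter -Action Allow -Direction Inbound -LocalPort 443 -Protocol TCP -Weight 10
# Syntax 3: target the virtual network adapters reserved for the host management operating system
Add-VMNetworkAdapterExtendedAcl -ManagementOS -Action Allow -Direction Inbound -LocalPort 3389 -Protocol TCP -Weight 50
# Syntax 4: target a VM object created earlier in the session
$vm = Get-VM "my_vm"
Add-VMNetworkAdapterExtendedAcl -VM $vm -Action Deny -Direction Inbound -Weight 1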
Detailed ACL rule examples
Following are several examples of how you can use the Add-VMNetworkAdapterExtendedAcl command to
configure extended port ACLs and to create security policies for VMs.
Enforce application-level security
Enforce both user-level and application-level security
Provide security support to a non-TCP/UDP application
NOTE
The values for the rule parameter Direction in the tables below are based on traffic flow to or from the VM for which you are
creating the rule. If the VM is receiving traffic, the traffic is inbound; if the VM is sending traffic, the traffic is outbound. For
example, if you apply a rule to a VM that blocks inbound traffic, the direction of inbound traffic is from external resources to
the VM. If you apply a rule that blocks outbound traffic, the direction of outbound traffic is from the local VM to external
resources.
Enforce application-level security
Because many application servers use standardized TCP/UDP ports to communicate with client computers, it is easy to create rules that block or allow access to an application server by filtering the traffic going to and coming from the port designated for the application.
For example, you might want to allow a user to log in to an application server in your datacenter by using Remote Desktop Protocol (RDP). Because RDP uses TCP port 3389, you can quickly set up the following rule:
| SOURCE IP | DESTINATION IP | PROTOCOL | SOURCE PORT | DESTINATION PORT | DIRECTION | ACTION |
|-----------|----------------|----------|-------------|------------------|-----------|--------|
| *         | *              | TCP      | *           | 3389             | In        | Allow  |
Following are two examples of how you can create rules with Windows PowerShell commands. The first example rule blocks all inbound traffic to the VM named "ApplicationServer." The second example rule, which is applied to the network adapter of the same VM, allows only inbound RDP traffic to the VM.
NOTE
When you create rules, you can use the -Weight parameter to determine the order in which the Hyper-V Virtual Switch
processes the rules. Values for -Weight are expressed as integers; rules with a higher integer are processed before rules with
lower integers. For example, if you have applied two rules to a VM network adapter, one with a weight of 1 and one with a
weight of 10, the rule with the weight of 10 is applied first.
Add-VMNetworkAdapterExtendedAcl -VMName "ApplicationServer" -Action "Deny" -Direction "Inbound" -Weight 1
Add-VMNetworkAdapterExtendedAcl -VMName "ApplicationServer" -Action "Allow" -Direction "Inbound" -LocalPort 3389 -Protocol "TCP" -Weight 10
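To review the rules you have applied while you are testing, you can list them with the companion Get-VMNetworkAdapterExtendedAcl cmdlet (Remove-VMNetworkAdapterExtendedAcl is the corresponding cmdlet for deleting a rule). A minimal sketch, assuming the same VM name:
# List the extended ACLs currently applied to the VM's network adapters
Get-VMNetworkAdapterExtendedAcl -VMName "ApplicationServer"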
Enforce both user-level and application-level security
Because a rule can match a 5-tuple IP packet (Source IP, Destination IP, Protocol, Source Port, and Destination Port),
the rule can enforce a more detailed security policy than a Port ACL.
For example, if you want to provide DHCP service to a limited number of client computers by using a specific set of DHCP servers, you can configure the following rules on the Windows Server 2016 computer that is running Hyper-V and hosting the user VMs:
| SOURCE IP       | DESTINATION IP  | PROTOCOL | SOURCE PORT | DESTINATION PORT | DIRECTION | ACTION |
|-----------------|-----------------|----------|-------------|------------------|-----------|--------|
| *               | 255.255.255.255 | UDP      | *           | 67               | Out       | Allow  |
| *               | 10.175.124.0/25 | UDP      | *           | 67               | Out       | Allow  |
| 10.175.124.0/25 | *               | UDP      | *           | 68               | In        | Allow  |
Following are examples of how you can create these rules with Windows PowerShell commands.
Add-VMNetworkAdapterExtendedAcl -VMName "ServerName" -Action "Deny" -Direction "Outbound" -Weight 1
Add-VMNetworkAdapterExtendedAcl -VMName "ServerName" -Action "Allow" -Direction "Outbound" -RemoteIPAddress 255.255.255.255 -RemotePort 67 -Protocol "UDP" -Weight 10
Add-VMNetworkAdapterExtendedAcl -VMName "ServerName" -Action "Allow" -Direction "Outbound" -RemoteIPAddress 10.175.124.0/25 -RemotePort 67 -Protocol "UDP" -Weight 20
Add-VMNetworkAdapterExtendedAcl -VMName "ServerName" -Action "Allow" -Direction "Inbound" -RemoteIPAddress 10.175.124.0/25 -RemotePort 68 -Protocol "UDP" -Weight 20
Provide security support to a non-TCP/UDP application
While most network traffic in a datacenter is TCP and UDP, there is still some traffic that utilizes other protocols. For
example, if you want to permit a group of servers to run an IP-multicast application that relies on Internet Group
Management Protocol (IGMP), you can create the following rule.
NOTE
IGMP has a designated IP protocol number of 0x02.
| SOURCE IP | DESTINATION IP | PROTOCOL | SOURCE PORT | DESTINATION PORT | DIRECTION | ACTION |
|-----------|----------------|----------|-------------|------------------|-----------|--------|
| *         | *              | 0x02     | *           | *                | In        | Allow  |
| *         | *              | 0x02     | *           | *                | Out       | Allow  |
Following is an example of how you can create these rules with Windows PowerShell commands.
Add-VMNetworkAdapterExtendedAcl -VMName "ServerName" -Action "Allow" -Direction "Inbound" -Protocol 2 -Weight 20
Add-VMNetworkAdapterExtendedAcl -VMName "ServerName" -Action "Allow" -Direction "Outbound" -Protocol 2 -Weight 20
Stateful ACL rules
Another new capability of extended ACLs allows you to configure stateful rules. A stateful rule filters packets based
on five attributes in a packet - Source IP, Destination IP, Protocol, Source Port, and Destination Port.
Stateful rules have the following capabilities:
They always allow traffic and are not used to block traffic.
If you specify that the value for the parameter Direction is inbound and traffic matches the rule, Hyper-V
Virtual Switch dynamically creates a matching rule that allows the VM to send outbound traffic in response
to the external resource.
If you specify that the value for the parameter Direction is outbound and traffic matches the rule, Hyper-V
Virtual Switch dynamically creates a matching rule that allows the VM to receive the inbound traffic that the
external resource sends in response.
They include a timeout attribute that is measured in seconds. When a network packet arrives at the switch
and the packet matches a stateful rule, Hyper-V Virtual Switch creates a state so that all subsequent packets
in both directions of the same flow are allowed. The state expires if there is no traffic in either direction in the
period of time that is specified by the timeout value.
Following is an example of how you can use stateful rules.
Allow inbound remote server traffic only after it is contacted by the local server
In some cases, a stateful rule must be employed because only a stateful rule can keep track of a known, established
connection, and distinguish the connection from other connections.
For example, if you want to allow a VM application server to initiate connections on port 80 to web services on the
Internet, and you want the remote Web servers to be able to respond to the VM traffic, you can configure a stateful
rule that allows initial outbound traffic from the VM to the Web services; because the rule is stateful, return traffic to
the VM from the Web servers is also allowed. For security reasons, you can block all other inbound network traffic
to the VM.
To achieve this rule configuration, you can use the settings in the table below.
NOTE
Due to formatting restrictions and the amount of information in the table below, the information is displayed differently than
in previous tables in this document.
| PARAMETER            | RULE 1 | RULE 2 | RULE 3 |
|----------------------|--------|--------|--------|
| Source IP            | *      | *      | *      |
| Destination IP       | *      | *      | *      |
| Protocol             | *      | *      | TCP    |
| Source Port          | *      | *      | *      |
| Destination Port     | *      | *      | 80     |
| Direction            | In     | Out    | Out    |
| Action               | Deny   | Deny   | Allow  |
| Stateful             | No     | No     | Yes    |
| Timeout (in seconds) | N/A    | N/A    | 3600   |
The stateful rule allows the VM application server to connect to a remote Web server. When the first packet is sent
out, Hyper-V Virtual Switch dynamically creates two flow states to allow all packets sent to, and all returning packets
from, the remote Web server. When the flow of packets between the servers stops, the flow states expire after the
designated timeout value of 3600 seconds, or one hour.
Following is an example of how you can create these rules with Windows PowerShell commands.
Add-VMNetworkAdapterExtendedAcl -VMName "ApplicationServer" -Action "Deny" -Direction "Inbound" -Weight 1
Add-VMNetworkAdapterExtendedAcl -VMName "ApplicationServer" -Action "Deny" -Direction "Outbound" -Weight 1
Add-VMNetworkAdapterExtendedAcl -VMName "ApplicationServer" -Action "Allow" -Direction "Outbound" -RemotePort 80 -Protocol "TCP" -Weight 100 -Stateful $True -IdleSessionTimeout 3600