What's new in Hyper-V in Windows Server 2012 (Part 1)

| Feature | Windows Server 2008 | Windows Server 2008 R2 | Windows Server 2012 |
| HW Logical Processor Support | 16 LPs | 64 LPs | 320 LPs |
| Physical Memory Support | 1 TB | 1 TB | 4 TB |
| Cluster Scale | 16 nodes, up to 1,000 VMs | 16 nodes, up to 1,000 VMs | 64 nodes, up to 8,000 VMs |
| Virtual Machine Processor Support | Up to 4 VPs | Up to 4 VPs | Up to 64 VPs |
| VM Memory | Up to 64 GB | Up to 64 GB | Up to 1 TB |
| Live Migration | Yes, one at a time | Yes, one at a time | Yes, no limits; as many as hardware will allow |
| Live Storage Migration | No; Quick Storage Migration via SCVMM | No; Quick Storage Migration via SCVMM | Yes, no limits; as many as hardware will allow |
| Servers in a Cluster | 16 | 16 | 64 |
| VP:LP Ratio | 8:1 | 8:1 for Server; 12:1 for Client (VDI) | No limits; as many as hardware will allow |
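The Hyper-V PowerShell module that ships with Windows Server 2012 can configure a VM right up to these new maximums. A minimal sketch, assuming the module is available on the host; the VM name "BigVM" is hypothetical:

  # "BigVM" is a hypothetical, pre-existing VM
  Set-VMProcessor -VMName "BigVM" -Count 64         # up to 64 virtual processors per VM
  Set-VMMemory    -VMName "BigVM" -StartupBytes 1TB # up to 1 TB of memory per VM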
NUMA (Non-Uniform Memory Access)
• Helps hosts scale up the number of cores and memory access
• Partitions cores and memory into "nodes"
• Allocation and latency depend on the memory location relative to a processor
• High-performance applications detect NUMA and minimize cross-node memory access
[Diagram: a host with NUMA nodes 1 and 2, each containing its own processors and memory]

Host NUMA: this is optimal…
• Memory allocations and thread allocations stay within the same NUMA node
• Memory is populated in each NUMA node
[Diagram: NUMA nodes 1–4, each with processors and locally populated memory]

Host NUMA: this isn't optimal…
• The system is imbalanced
• Memory allocations and thread allocations span different NUMA nodes, causing multiple node hops
• NUMA Node 2 has an odd number of DIMMs
• NUMA Node 3 doesn't have enough local memory
• NUMA Node 4 has no local memory (worst case)
[Diagram: NUMA nodes 1–4 with unevenly populated memory]

Guest NUMA
• Presents the NUMA topology within the VM
• Guest operating systems & apps can make intelligent NUMA decisions about thread and memory allocation
• Guest NUMA nodes are aligned with host resources
• Policy driven per host: best effort, or force alignment (see the sketch after this section)
[Diagram: two VMs, each with vNUMA nodes A and B, aligned onto host NUMA nodes 1–4]
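The host topology and the vNUMA policy described above are scriptable. A minimal sketch, assuming the Windows Server 2012 Hyper-V module; the VM name "NumaVM" is hypothetical:

  # Inspect the host's NUMA nodes (processor IDs and memory per node)
  Get-VMHostNumaNode

  # Force alignment: disallow spanning a VM across host NUMA nodes
  # (takes effect once the VM Management Service restarts)
  Set-VMHost -NumaSpanningEnabled $false

  # Cap the virtual processors placed in any one guest NUMA node
  Set-VMProcessor -VMName "NumaVM" -MaximumCountPerNumaNode 8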
• Live Storage Migration (see the sketch after this list)
• Online Meta-Operations
• Virtual Fibre Channel
• Live VHD Merge (Snapshot Merge)
• Support for File-Based Storage on SMB 3.0
• Live New Parent
• Native 4K Disk Support
• New VHDX Format
• Offloaded Data Transfer (ODX)
• UNMAP Support
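Live Storage Migration from that list maps directly onto the Move-VMStorage cmdlet. A minimal sketch; the VM name and destination path are hypothetical:

  # Move a running VM's disks, snapshots, and config to new storage with no downtime
  Move-VMStorage -VMName "VM1" -DestinationStoragePath "D:\VMStore\VM1"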
VHDX: The New Default Format for Virtual Hard Disks
• Larger virtual disks
• Enhanced resiliency
• Large sector support
• Enhanced performance
• Larger block sizes
• Embedded custom, user-defined metadata
[Chart: performance of a pass-through disk vs. VHD vs. VHDX for Fixed, Dynamic, and Differencing disks (y-axis ~125,000–160,000); VHD and VHDX come within roughly 10% of the physical disk]
[Chart: a second pass-through vs. VHD vs. VHDX comparison for Fixed, Dynamic, and Differencing disks (y-axis 0–1,800); VHD and VHDX come within roughly 25% of the physical disk]
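Creating and converting the VHDX files measured above is a one-liner each in the Hyper-V module. A minimal sketch; the paths and sizes are hypothetical:

  # Create a dynamic VHDX; the format allows up to 64 TB and native 4K sectors
  New-VHD -Path "D:\VHDs\data.vhdx" -SizeBytes 10TB -Dynamic -LogicalSectorSizeBytes 4096

  # Convert an existing 2 TB-limited VHD to VHDX
  Convert-VHD -Path "D:\VHDs\old.vhd" -DestinationPath "D:\VHDs\old.vhdx"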
Offloaded Data Transfer (ODX)
[Diagram: the host issues an Offload Read against the source virtual disk and receives a token; it then hands that token to an Offload Write against the destination virtual disk, and the intelligent storage array performs the actual data transfer internally]
[Chart: creation of a 10 GB fixed disk, time in seconds; ~3 minutes on an average desktop vs. <1 second with ODX]
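Fixed-disk creation is exactly the operation in that chart: the minutes are spent zero-filling the file, and ODX lets a capable array do that work internally. A minimal sketch; the path is hypothetical, and the offload only occurs when the volume sits on ODX-capable storage:

  # On an ODX-capable array the zero-fill is offloaded to the array itself
  New-VHD -Path "X:\VHDs\fixed10.vhdx" -SizeBytes 10GB -Fixed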
VHD Stack
| Feature | Windows Server 2008 | Windows Server 2008 R2 | Windows Server 2012 |
| Live Storage Migration | No; Quick Storage Migration via SCVMM | No; Quick Storage Migration via SCVMM | Yes, no limits; as many as hardware will allow |
| VMs on File Storage | No | No | Yes, SMB 3.0 |
| Guest Fibre Channel | No | No | Yes |
| Virtual Disk Format | VHD up to 2 TB | VHD up to 2 TB | VHD up to 2 TB; VHDX up to 64 TB |
| VM Guest Clustering | Yes, via iSCSI | Yes, via iSCSI | Yes, via iSCSI or FC |
| Native 4K Disk Support | No | No | Yes |
| Live VHD Merge | No, offline | No, offline | Yes |
| Live New Parent | No | No | Yes |
| Secure Offloaded Data Transfer (ODX) | No | No | Yes |
Hyper-V: Over 1 Million IOPS from a Single VM
Industry-leading I/O performance:
• VM storage performance on par with native
• Performance scales linearly as virtual processors are added
• Windows Server 2012 Hyper-V can virtualise over 99% of the world's SQL Server workloads
• Windows NIC Teaming (see the sketch after this list)
• Continuously Available File Server (SMB) storage
• CSV 2.0 integration with storage arrays for replication & HW snapshots, out of the box
• Guest Clustering via Fibre Channel for HA
• Support for concurrent Live Migrations and Live Storage Migrations
• Major Failover Cluster enhancements…
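Windows NIC Teaming from the list above is inbox and scriptable in Windows Server 2012. A minimal sketch; the team and adapter names are hypothetical:

  # Build a switch-independent team from two physical NICs,
  # balancing traffic by Hyper-V switch port
  New-NetLbfoTeam -Name "VMTeam" -TeamMembers "Ethernet 1","Ethernet 2" `
      -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort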
Failover cluster
• Support for 64 nodes & 8,000 VMs in a cluster (see the sketch after this list)
• Cluster Aware Updating
• Cluster Shared Volumes 2.0
• VM Failover Prioritization
• Anti-Affinity VM Rules
• Cluster Wide Task Scheduling
• Inbox Live Migration Queuing
• SMB Support
• Hyper-V App Monitoring
• Guest Clustering via Fibre Channel…
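The 64-node scale referenced in the list is driven through the same FailoverClusters module as before. A minimal sketch; the cluster, node, and VM names are hypothetical:

  # Create the cluster, then make an existing VM highly available
  New-Cluster -Name "HVCluster" -Node "HV01","HV02"
  Add-ClusterVirtualMachineRole -VMName "VM1"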
[Diagram: Virtual Fibre Channel presents NPIV port(s) from the host HBA to the guest]
Disaster Recovery
• Hyper-V Replica for Asynchronous Replication (see the sketch after this list)
• CSV 2.0 Integration with Storage Arrays for Synchronous Replication
Application/Service Failover
• Non-Cluster-Aware Apps: Hyper-V App Monitoring
• VM Guest Cluster: iSCSI, Fibre Channel
I/O Redundancy
• VM Guest Teaming of SR-IOV NICs
• Network Load Balancing & Failover via Windows NIC Teaming
• Storage Multi-Path IO (MPIO)
• SMB Multichannel
Physical Node Redundancy
• Live Migration for Planned Downtime
• Failover Cluster for Unplanned Downtime
Hardware Fault
• Windows Hardware Error Architecture (WHEA)/RAS
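Hyper-V Replica, the asynchronous option in the list above, is enabled per VM. A minimal sketch; the VM and server names are hypothetical, and the replica server must already be configured to accept replication:

  # Point the VM at its replica server, then seed the initial copy
  Enable-VMReplication -VMName "VM1" -ReplicaServerName "replica01.contoso.com" `
      -ReplicaServerPort 80 -AuthenticationType Kerberos
  Start-VMInitialReplication -VMName "VM1"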
| Feature | Windows Server 2008 | Windows Server 2008 R2 | Windows Server 2012 |
| Hyper-V PowerShell | No | No | Yes |
| Network PowerShell | No | No | Yes |
| Storage PowerShell | No | No | Yes |
| SCONFIG | No | Yes | Yes |
| Enable/Disable Shell | No (Server Core @ OS Setup) | No (Server Core @ OS Setup) | Yes, MinShell |
| VMConnect Support for RemoteFX | N/A | No | Yes |
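The Hyper-V PowerShell row in the table is the headline change for admins: everything the Hyper-V Manager UI does is scriptable in Windows Server 2012. A minimal sketch of exploring the module:

  # List the module's cmdlets (over 160 ship in Windows Server 2012)
  Get-Command -Module Hyper-V

  # Example query: all running VMs with their assigned memory
  Get-VM | Where-Object { $_.State -eq 'Running' } |
      Select-Object Name, MemoryAssigned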