CSCE 430/830 Computer Architecture
Advanced Memory Hierarchy
Virtual Machine Monitor
Adapted from
Professor David Patterson
Electrical Engineering and Computer Sciences
University of California, Berkeley
• VM Monitor presents a SW interface to guest
software, isolates state of guests, and protects itself
from guest software (including guest OSes)
• Virtual Machine Revival
– Overcome security flaws of large OSes
– Manage Software, Manage Hardware
– Processor performance no longer highest priority
4/30/2017
CSCE 430/830, Advanced Memory Hierarchy
2
Outline
• Virtual Machines
• Xen VM: Design and Performance
• AMD Opteron Memory Hierarchy
• Opteron Memory Performance vs. Pentium 4
• Fallacies and Pitfalls
• Discuss “Virtual Machines” paper
• More VMM slides on Xen VM
• Conclusion
Virtual Machine Monitors (VMMs)
• Virtual machine monitor (VMM) or hypervisor is software
that supports VMs
• VMM determines how to map virtual resources to physical
resources
• Physical resource may be time-shared, partitioned, or
emulated in software
• VMM is much smaller than a traditional OS:
– isolation portion of a VMM is ≈ 10,000 lines of code
Abstraction, Virtualization of Computer System
• Modern computer system is very complex
– Hundreds of millions of transistors
– Interconnected high-speed I/O devices
– Networking infrastructures
– Operating systems, libraries, applications
– Graphics and networking software
• To manage this complexity
– Levels of abstraction
» separated by well-defined interfaces
– Virtualization
Abstraction, Virtualization of Computer System
• Levels of abstraction
– Allow implementation details at lower levels of a design to be ignored or simplified
– Each level is separated by well-defined interfaces
» Design of a higher level can be decoupled from the lower levels
Abstraction, Virtualization of Computer System
• Disadvantage
– Components designed to specification for one interface will not work with those designed for another.
[Figure: a component built against Interface A does not fit above a component exposing Interface B]
Abstraction vs. Virtualization of Computer System
• Virtualization
– Similar to abstraction, but doesn’t always hide the lower layer’s details
– Real system is transformed so that it appears to be different
[Figure: an isomorphism maps states of real resource A (B → B′) onto states of virtual resource AA (BB → BB′)]
– Virtualization can be applied not only to a subsystem, but to an entire machine → Virtual Machine
Abstraction, Virtualization of Computer System
• Virtualization
[Figure: a file is an abstraction of a real disk; a virtualized disk built on such files is used by applications or an OS as if it were a real disk]
Architecture, Implementation Layers
• Architecture
– Functionality and appearance of a computer system, but not implementation details
– Level of abstraction = implementation layer
» ISA, ABI, API
[Figure: hardware/software stack — application programs and libraries sit above the API and ABI; the operating system (scheduler, memory manager, drivers) sits above the ISA; below are the execution hardware, memory translation, system interconnect (bus), controllers, main memory, and I/O devices/networking]
Architecture, Implementation Layers
• Implementation layer: ISA
– Instruction Set Architecture
– Divides hardware and software
– Concept of ISA originates from IBM 360
» IBM System/360 models (20, 22, 25, 30, 40, 44, 50, 57, 60, 62, 65, 67, 70, 75, 85, 91, 92, 95, 195): 1964~1971
» Various prices, processing power, processing units, devices
» But guaranteed software compatibility
– User ISA and System ISA
Architecture, Implementation Layers
• Implementation layer: ABI
– Application Binary Interface
– Provides a program with access to the hardware resources and services available in a system
– Consists of the User ISA and system call interfaces
Architecture, Implementation Layers
• Implementation layer: API
– Application Programming Interface
– Key element is a standard library (or libraries)
– Typically defined at the source-code level of a high-level language
– e.g., libc in the Unix environment supports the UNIX/C programming language
What is a VM and Where is the VM?
• What is a “Machine”?
– 2 perspectives
– From the perspective of a process
» ABI provides the interface between process and machine
– From the perspective of a system
» Underlying hardware itself is a machine
» ISA provides the interface between system and machine
What is a VM and Where is the VM?
• Machine from the perspective of a process
– ABI provides the interface between process and machine
[Figure: application software sees the machine through the ABI — the User ISA plus system calls into the operating system, libraries, and hardware below]
What is a VM and Where is the VM?
• Machine from the perspective of a system
– ISA provides the interface between system and machine
[Figure: the operating system sees the machine through the ISA — the User ISA plus the System ISA over the underlying hardware]
What is a VM and Where is the VM?
• Virtual Machine is a Machine.
– VM virtualizes Machine Itself!
– There are 2 types of VM
» Process-level VM
» System-level VM
– VM is implemented as combination of
» Real hardware
» Virtualizing software
What is a VM and Where is the VM?
• Process VM
– VM is just a process from the view of the host OS
– Application on the VM cannot see the host OS
[Figure: guest application software runs on a virtual machine presented by virtualizing software (a runtime); the runtime maps the guest’s ABI (User ISA + system calls) onto the host OS and hardware]
What is a VM and Where is the VM?
• System VM
– Provides a system environment
[Figure: a guest OS and its applications run on a virtual machine; virtualizing software (the runtime) maps the guest’s ISA (User ISA + System ISA) onto the host hardware]
What is a VM and Where is the VM?
• System VM
– Example of a System VM as a process
» VMWare
[Figure: applications and a guest OS run on the virtualizing software (VMware), which runs on the host OS alongside other host applications, all on the hardware]
VMM Overhead?
• Depends on the workload
• User-level processor-bound programs (e.g., SPEC) have zero virtualization overhead
– Run at native speed, since the OS is rarely invoked
• I/O-intensive workloads are OS-intensive
⇒ execute many system calls and privileged instructions
⇒ can result in high virtualization overhead
– For System VMs, goal of architecture and VMM is to run almost all instructions directly on native hardware
• If an I/O-intensive workload is also I/O-bound
⇒ low processor utilization, since waiting for I/O
⇒ processor virtualization overhead can be hidden
⇒ low virtualization overhead
Requirements of a Virtual Machine Monitor
• A VM Monitor
– Presents a SW interface to guest software,
– Isolates state of guests from each other, and
– Protects itself from guest software (including guest OSes)
• Guest software should behave on a VM exactly as if running on the native HW
– Except for performance-related behavior or limitations of fixed resources shared by multiple VMs
• Guest software should not be able to change allocation of real system resources directly
• Hence, VMM must control access to essentially everything, even though the guest VM and OS currently running are temporarily using them
– Access to privileged state, address translation, I/O, exceptions and interrupts, …
Requirements of a Virtual Machine Monitor
• VMM must be at a higher privilege level than guest VMs, which generally run in user mode
⇒ Execution of privileged instructions handled by VMM
• E.g., timer interrupt: VMM suspends currently running guest VM, saves its state, handles the interrupt, determines which guest VM to run next, and then loads its state
– Guest VMs that rely on timer interrupts are provided with a virtual timer and an emulated timer interrupt by the VMM
• Requirements of system virtual machines are ≈ same as paged virtual memory:
1. At least 2 processor modes, system and user
2. Privileged subset of instructions available only in system mode; trap if executed in user mode
» All system resources controllable only via these instructions
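The timer-interrupt sequence above (save state, handle interrupt, pick next guest, load its state) can be sketched as a toy scheduler loop. This is an illustrative model, not real VMM code; the class names, round-robin policy, and state dictionary are all invented for the example.

```python
# Toy sketch of a VMM handling a physical timer interrupt: save the running
# guest's state, deliver an emulated timer tick to guests, then round-robin
# to the next guest, whose state would be loaded on resume.

class GuestVM:
    def __init__(self, name):
        self.name = name
        self.saved_state = None            # architectural state when suspended
        self.pending_virtual_timer = False # emulated timer interrupt to inject

class VMM:
    def __init__(self, guests):
        self.guests = guests
        self.current = 0

    def on_timer_interrupt(self, running_state):
        guest = self.guests[self.current]
        guest.saved_state = running_state          # 1. save current guest's state
        for g in self.guests:                      # 2. handle interrupt: each guest
            g.pending_virtual_timer = True         #    gets an emulated virtual tick
        self.current = (self.current + 1) % len(self.guests)  # 3. pick next guest
        return self.guests[self.current]           # 4. its state is loaded next

vmm = VMM([GuestVM("vm0"), GuestVM("vm1")])
nxt = vmm.on_timer_interrupt(running_state={"pc": 0x1000})
```

The point of the sketch is that the guest never sees the real timer: it only observes the virtual timer the VMM chooses to inject.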
ISA Support for Virtual Machines
• If we plan for VMs during design of the ISA, it is easy to reduce the instructions executed by the VMM and speed up emulation
– ISA is virtualizable if we can execute the VM directly on the real machine while letting the VMM retain ultimate control of the CPU: “direct execution”
– Since VMs have been considered for desktop/PC server apps only recently, most ISAs were created ignoring virtualization, including 80x86 and most RISC architectures
• VMM must ensure that guest system only interacts with virtual resources
⇒ conventional guest OS runs as a user-mode program on top of VMM
– If guest OS accesses or modifies information related to HW resources via a privileged instruction—e.g., reading or writing the page table pointer—it will trap to the VMM
• If not, VMM must intercept the instruction and support a virtual version of the sensitive information as the guest OS expects
Impact of VMs on Virtual Memory
• How to virtualize virtual memory when each guest OS in every VM manages its own set of page tables?
• VMM separates real and physical memory
– Makes real memory a separate, intermediate level between virtual memory and physical memory
– Some use the terms virtual memory, physical memory, and machine memory to name the 3 levels
– Guest OS maps virtual memory to real memory via its page tables, and VMM page tables map real memory to physical memory
• VMM maintains a shadow page table that maps directly from the guest virtual address space to the physical address space of the HW
– Rather than pay an extra level of indirection on every memory access
– VMM must trap any attempt by the guest OS to change its page table or to access the page table pointer
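A minimal sketch of the three levels and the shadow table the VMM derives from them, using dictionaries as stand-ins for page tables (all page numbers invented):

```python
# Guest's view: virtual page -> "real" page (the guest thinks this is physical).
guest_pt = {0x0: 0x10, 0x1: 0x11}
# VMM's view: real page -> physical (machine) page.
vmm_pt = {0x10: 0x7, 0x11: 0x3}

# Shadow page table: guest virtual -> physical, composed by the VMM so the
# hardware pays only one level of translation on each memory access.
shadow_pt = {va: vmm_pt[ra] for va, ra in guest_pt.items()}

def translate(va):
    return shadow_pt[va]

# Any guest write to guest_pt must trap so the VMM can recompute shadow_pt;
# that is why the VMM intercepts page-table updates and the page table pointer.
```

The composed mapping is what the hardware TLB actually caches; the guest's own tables are never handed to the hardware.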
ISA Support for VMs & Virtual Memory
• IBM 370 architecture added additional level of
indirection that is managed by the VMM
– Guest OS keeps its page tables as before, so the shadow
pages are unnecessary
– (AMD Pacifica proposes same improvement for 80x86)
• To virtualize software TLB, VMM manages the
real TLB and has a copy of the contents of the
TLB of each guest VM
– Any instruction that accesses the TLB must trap
– TLBs with Process ID tags support a mix of entries from
different VMs and the VMM, thereby avoiding flushing of the
TLB on a VM switch
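The benefit of Process ID (or VPID-style) tags can be sketched with a toy TLB: on a VM switch only the current tag changes, so entries from other VMs survive untouched instead of being flushed. Purely illustrative; the tag values and page numbers are invented.

```python
# Toy TLB whose entries are tagged with a VM/process ID. Lookups match only
# entries with the current tag, so no flush is needed on a VM switch.

class TaggedTLB:
    def __init__(self):
        self.entries = {}                    # (tag, virtual_page) -> physical_page

    def insert(self, tag, vpage, ppage):
        self.entries[(tag, vpage)] = ppage

    def lookup(self, tag, vpage):
        return self.entries.get((tag, vpage))  # None models a TLB miss

tlb = TaggedTLB()
tlb.insert(tag=1, vpage=0x40, ppage=0x9)   # entry belonging to VM 1
tlb.insert(tag=2, vpage=0x40, ppage=0xC)   # same virtual page, VM 2

# After switching from VM 1 to VM 2, VM 1's entry is still present and
# VM 2 cannot accidentally hit it: the tag disambiguates.
```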
Impact of VMs on I/O
• I/O is the most difficult part of virtualization
– Increasing number of I/O devices attached to the computer
– Increasing diversity of I/O device types
– Sharing of a real device among multiple VMs
– Supporting the many device drivers that are required, especially if different guest OSes are supported on the same VM system
• Give each VM generic versions of each type of I/O device driver, and let the VMM handle real I/O
• Method for mapping virtual to physical I/O device depends on the type of device:
– Disks partitioned by VMM to create virtual disks for guest VMs
– Network interfaces shared between VMs in short time slices, and VMM tracks messages for virtual network addresses to ensure that guest VMs only receive their messages
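The disk case above is the simplest mapping: the VMM statically partitions one real disk into per-VM virtual disks, so translating a virtual block number is just adding the partition's base offset. The sizes and VM names below are invented for illustration.

```python
# Sketch: a real disk split by the VMM into fixed per-VM partitions.
partitions = {"vm0": (0, 500), "vm1": (500, 500)}   # vm -> (base block, size)

def virtual_to_physical_block(vm, vblock):
    base, size = partitions[vm]
    if not 0 <= vblock < size:
        # Guest asked for a block outside its virtual disk: reject, so one VM
        # can never read or write another VM's partition.
        raise ValueError("block outside this VM's virtual disk")
    return base + vblock
```

The bounds check is the isolation guarantee: the guest sees a zero-based disk, and the VMM confines every access to the guest's own slice.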
Example: Xen VM
• Xen: Open-source System VMM for 80x86 ISA
– Project started at University of Cambridge, GNU license model
• Original vision of VM is running unmodified OS
– Significant wasted effort just to keep guest OS happy
• “Paravirtualization”: small modifications to guest OS to simplify virtualization
• 3 examples of paravirtualization in Xen:
1. To avoid flushing the TLB when invoking the VMM, Xen is mapped into the upper 64 MB of the address space of each VM
2. Guest OS allowed to allocate pages; Xen just checks that allocations don’t violate protection restrictions
3. To protect the guest OS from user programs in the VM, Xen takes advantage of the 4 protection levels available in 80x86
– Most OSes for 80x86 keep everything at privilege level 0 or 3
– Xen VMM runs at the highest privilege level (0)
– Guest OS runs at the next level (1)
– Applications run at the lowest privilege level (3)
Xen changes for paravirtualization
• Port of Linux to Xen changed ≈ 3000 lines, or ≈ 1% of 80x86-specific code
– Does not affect application binary interfaces of guest OS
• OSes supported in Xen 2.0:

OS           Runs as host OS   Runs as guest OS
Linux 2.4         Yes               Yes
Linux 2.6         Yes               Yes
NetBSD 2.0        No                Yes
NetBSD 3.0        Yes               Yes
Plan 9            No                Yes
FreeBSD 5         No                Yes

http://wiki.xensource.com/xenwiki/OSCompatibility
Xen and I/O
• To simplify I/O, privileged VMs assigned to each
hardware I/O device: “driver domains”
– Xen Jargon: “domains” = Virtual Machines
• Driver domains run physical device drivers,
although interrupts still handled by VMM before
being sent to appropriate driver domain
• Regular VMs (“guest domains”) run simple virtual device drivers that communicate with physical device drivers in driver domains over a channel to access physical I/O hardware
• Data sent between guest and driver domains by
page remapping
Xen Performance
• Performance relative to native Linux for Xen for 6 benchmarks from Xen developers:

Benchmark                   Relative performance
SPEC INT2000                      100%
Linux build time                   99%
PostgreSQL Inf. Retrieval          97%
PostgreSQL OLTP                    95%
dbench                             96%
SPEC WEB99                         92%

• User-level processor-bound programs? I/O-intensive workloads? I/O-bound I/O-intensive?
Xen Performance, Part II
• Subsequent study noticed the Xen experiments were based on 1 Ethernet network interface card (NIC), and the single NIC was a performance bottleneck
[Figure: receive throughput (Mbits/sec, 0–2500) vs. number of network interface cards (1–4) for native Linux, the Xen privileged driver VM (“driver domain”), and a Xen guest VM + driver VM]
Xen Performance, Part III
[Figure: event counts relative to the Xen privileged driver domain (0.5×–4.5×) for Linux, Xen privileged driver VM only, and Xen guest VM + driver VM — instructions, L2 misses, I-TLB misses, D-TLB misses]
1. > 2X instructions for guest VM + driver VM
2. > 4X L2 cache misses
3. 12X – 24X Data TLB misses
Xen Performance, Part IV
1. > 2X instructions: page remapping and page transfer between driver and guest VMs, and communication between the 2 VMs over a channel
2. 4X L2 cache misses: Linux uses a zero-copy network interface that depends on the ability of the NIC to do DMA from different locations in memory
– Since Xen does not support “gather DMA” in its virtual network interface, it can’t do true zero-copy in the guest VM
3. 12X – 24X Data TLB misses: 2 Linux optimizations
– Superpages for part of Linux kernel space; one 4 MB page lowers TLB misses versus using 1024 4 KB pages. Not in Xen
– PTEs marked global are not flushed on a context switch, and Linux uses them for its kernel space. Not in Xen
• Future Xen may address 2. and 3., but 1. inherent?
Intel® Virtualization Technologies
• Intel® VT-x (Processor)
– Hardware assists for robust virtualization
– Intel® VT FlexMigration – flexible live migration
– Intel® VT FlexPriority – interrupt acceleration
– Intel® EPT – memory virtualization
• Intel® VT-d (Chipset): Intel® VT for Directed I/O
– Reliability and security through device isolation
– I/O performance with direct assignment
• Intel® VT-c (Network): Intel® VT for Connectivity
– NIC enhancement with VMDq
– Single Root IOV support
– Network performance and reduced CPU utilization
– Intel® I/OAT for virtualization: lower CPU overhead and data acceleration
Software & Services Group
A Framework for Optimizing Virtualization
1. Reduce overheads from virtualization within each VM:
• Intel® VT-x latency reductions
• Extended Page Tables
• Virtual Processor IDs
• APIC virtualization (FlexPriority)
• I/O assignment via DMA remapping
2. Introduce capabilities that increase scaling out across VMs:
• Intel® Hyper-Threading Technology
• PAUSE-loop Exiting
• Network virtualization
Intel® Virtualization Technology (Intel® VT) for IA-32, Intel® 64 and Intel® Architecture (Intel® VT-x)
Reducing VT Latencies
• What makes VM context switching expensive:
– Virtual Machine Control Structure (VMCS)
» Maintains guest & host register state
» “Backed” by host physical memory
» Accessed via architectural VMREAD/VMWRITE
– Saving/loading privileged state
» Compounded by consistency checking
– Address-context changes
» Translation Lookaside Buffer (TLB) flushes
• Mitigations:
– Caching of VMCS state on-die
– Virtual Processor IDs (VPIDs)
» Tag µarch structures (TLBs)
» Remove need to flush TLBs
Issues with abstracting physical memory
• Address translation
– Guest OS expects contiguous, zero-based physical memory
– VMM must preserve this illusion
• Page-table shadowing
– VMM intercepts paging operations
– Constructs a copy of the page tables
• Overheads
– VM exits add to execution time
– Shadow page tables consume significant host memory
[Figure: guest page tables in VM0…VMn induce VM exits; the VMM remaps them into shadow page tables, which the CPU and TLB use to access memory]
How Extended Page Tables help with abstracting physical memory
• Extended Page Tables (EPT)
– Map guest-physical to host-physical addresses
– New hardware page-table walker
• Performance benefit
– A guest OS can modify its own page tables freely and without VM exits
• Memory savings
– A single EPT supports an entire VM, instead of a shadow page table per guest process
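Conceptually, EPT replaces the composed shadow mapping with two live mappings that the hardware walks in sequence: the guest's own page table (guest-virtual → guest-physical) and the VMM's EPT (guest-physical → host-physical). The single-level dictionaries below stand in for the real multi-level tables; all page numbers are invented.

```python
# Guest-managed: guest-virtual page -> guest-physical page.
guest_page_table = {0x0: 0x5, 0x1: 0x6}
# VMM-managed: guest-physical page -> host-physical page (the EPT).
ept = {0x5: 0x20, 0x6: 0x21}

def hw_translate(guest_va):
    gpa = guest_page_table[guest_va]   # step 1: guest's page-table walk
    return ept[gpa]                    # step 2: EPT walk, done in hardware

# The guest may edit its own table freely -- no trap, no VM exit, no shadow
# table to resynchronize. The VMM only ever touches ept.
guest_page_table[0x1] = 0x5            # guest remaps a page on its own
```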
Extended Page Table (EPT) Walk
[Figure: translating a linear address — the guest page-table walk starting from CR3 traverses PML4 → PDP → PDE → PTE to reach the physical address, and each guest-physical reference along the way is itself translated through the EPT (EPTP → L4 → L3 → L2 → L1)]
• 2-level TLB reduces page-table walks
– VPID tags help to retain TLB entries
• Paging-structure caches
– Cache intermediate steps in the walk
– Result: reduce length of walks
– Common case: much better than 24 steps
Difficulties in virtualizing I/O
• Virtual device interface
– Traps device commands
– Translates DMA operations
– Injects virtual interrupts
• Software methods
1. I/O device emulation
2. Paravirtualized device interface
• Challenges
– Controlling DMA and interrupts
– Overheads of copying I/O buffers
[Figure: guest device drivers in VM0…VMn talk to per-VM device models in the VMM, which drives the physical device drivers for storage, memory, and network]
Solution: DMA remapping (Intel® VT-d)
[Figure: a DMA request carries a device ID (bus, device, function), a virtual address, and a length; the DMA-remapping engine uses memory-resident device-assignment and address-translation structures (accelerated by context and translation caches) to perform the memory access with a host physical address, generating a fault on illegal accesses; 4 KB page tables map each device’s addresses]
Intel® VT FlexPriority
• APIC (Advanced Programmable Interrupt Controller) Task Priority Register (TPR)
– Controls interrupt delivery through the APIC
– Accessed very frequently by some guest OSes
• Without Intel VT FlexPriority: APIC-TPR access emulated in software
– Each guest access causes a VM exit into the VMM
– VMM must fetch/decode the instruction and emulate APIC-TPR behavior
– Thousands of cycles per exit
• With Intel VT FlexPriority: VMM configures hardware to handle APIC-TPR access
– Instruction executes directly; hardware emulates the APIC-TPR access
– No VM exit in the common case
Technologies to improve VM scaling
• Hyper-threading
• Decreasing lock holder preemption impact
• Network virtualization with Virtual Machine Device Queues
• Single Root I/O Virtualization (SR-IOV)
Reducing Lock Holder preemption impact
• Problem:
– In an SMP guest, a vCPU holding a lock may get preempted
– Other vCPUs that attempt to acquire that lock spin for the full quantum

spin_lock:
    attempt lock-acquire;
    if fail {
        PAUSE;
        jmp spin_lock
    }

• Solution: Pause-Loop Exiting (PLE)
– Spin-locking code typically uses PAUSE instructions in a loop
– A longer-than-“normal” loop duration is taken as a sign of lock-holder preemption
– When that happens, HW forces an exit into the VMM
– VMM takes control and schedules some other vCPU
Network Virtualization: HW traffic management
• Virtual Machine Device Queues (VMDq)
– Data packets grouped and sorted in HW (Layer 2 classifier/sorter on the NIC)
– Packets sent to their respective VMs’ receive queues
– Round-robin servicing on transmit
[Figure: wire-speed receive (Rx) throughput with VMDq on the Intel® 82598 10 Gigabit Ethernet Controller — ≈ 4.0 Gbps without VMDq, 9.2 Gbps with VMDq, 9.5 Gbps with VMDq and jumbo frames. Source: Intel]
Intel® Virtualization Technology (Intel® VT) for Connectivity (Intel® VT-c)
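The receive-side sorting step can be sketched in a few lines: the NIC classifies each incoming frame by destination MAC and appends it to the owning VM's queue, so the VMM's software switch no longer has to demultiplex traffic. The MAC addresses, payload strings, and queue layout are invented for illustration.

```python
# Sketch of VMDq-style hardware sorting on the receive path.
from collections import defaultdict

mac_to_vm = {"aa:01": "vm1", "aa:02": "vm2"}   # classifier table on the NIC
rx_queues = defaultdict(list)                  # one receive queue per VM

def nic_receive(frames):
    for dst_mac, payload in frames:
        vm = mac_to_vm.get(dst_mac)
        if vm is not None:                     # frame for an unknown MAC: drop
            rx_queues[vm].append(payload)      # sorted directly into VM's queue

nic_receive([("aa:01", "p0"), ("aa:02", "p1"), ("aa:01", "p2"), ("ff:ff", "px")])
```

Each VM then drains only its own queue, which is what removes the software switch from the per-packet receive path.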
Protection and Instruction Set Architecture
• Example problem: the 80x86 POPF instruction loads the flag registers from the top of the stack in memory
– One such flag is Interrupt Enable (IE)
– In system mode, POPF changes IE
– In user mode, POPF simply changes all flags except IE
– Problem: guest OS runs in user mode inside a VM, so it expects to see a changed IE, but it won’t
• Historically, IBM mainframe HW and VMM took 3 steps:
1. Reduce cost of processor virtualization
» Intel/AMD proposed ISA changes to reduce this cost
2. Reduce interrupt overhead cost due to virtualization
3. Reduce interrupt cost by steering interrupts to proper VM directly without invoking VMM
• 2. and 3. not yet addressed by Intel/AMD; in the future?
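The POPF problem above can be modeled in a few lines: in user mode the instruction silently preserves IE instead of trapping, so a guest OS deprivileged into user mode never learns that its write was dropped. A toy model with two-flag dictionaries; it does not emulate the real flag register.

```python
# Toy model of POPF's mode-dependent behavior.
def popf(flags, popped, user_mode):
    new = dict(popped)
    if user_mode:
        new["IE"] = flags["IE"]   # IE silently preserved: no trap, no change
    return new

flags = {"IE": 0, "ZF": 0}
attempt = {"IE": 1, "ZF": 1}      # guest OS tries to re-enable interrupts

system_view = popf(flags, attempt, user_mode=False)  # IE really changes
guest_view = popf(flags, attempt, user_mode=True)    # deprivileged guest: IE lost
```

Because the instruction neither traps nor faults, the VMM gets no chance to intervene: this is exactly the "sensitive but unprivileged" behavior that makes classic trap-and-emulate impossible on the unmodified 80x86.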
80x86 VM Challenges
• 18 instructions cause problems for virtualization:
1. Read control registers in user mode that reveal that the guest operating system is running in a virtual machine (such as POPF), and
2. Check protection as required by the segmented architecture but assume that the operating system is running at the highest privilege level
• Virtual memory: 80x86 TLBs do not support process ID tags
⇒ more expensive for VMM and guest OSes to share the TLB
– each address space change typically requires a TLB flush
Intel/AMD address 80x86 VM Challenges
• Goal is direct execution of VMs on 80x86
• Intel’s VT-x
– A new execution mode for running VMs
– An architected definition of the VM state
– Instructions to swap VMs rapidly
– Large set of parameters to select the circumstances where a VMM must be invoked
– VT-x adds 11 new instructions to 80x86
• Xen 3.0 plan proposes to use VT-x to run Windows on Xen
• AMD’s Pacifica makes similar proposals
– Plus an indirection level in the page table, like IBM VM 370
• Ironic adding a new mode
– If OSes start using the mode in the kernel, the new mode would cause performance problems for the VMM, since it is ≈ 100 times too slow
Outline
• Virtual Machines
• Xen VM: Design and Performance
• Discuss “Virtual Machines” paper
• Conclusion
“Virtual Machine Monitors: Current
Technology and Future Trends” [1/2]
• Mendel Rosenblum and Tal Garfinkel, IEEE
Computer, May 2005
• How old are VMs? Why did the technology lie fallow so long?
• Why do authors say this technology got hot again?
– What was the tie in to massively parallel processors?
• Why would VM re-invigorate technology transfer
from OS researchers?
• Why is paravirtualization popular with academics?
What is drawback to commercial vendors?
• How does VMware handle privileged mode
instructions that are not virtualizable?
• How does VMware ESX Server take memory pages
away from Guest OS?
“Virtual Machine Monitors: Current
Technology and Future Trends” [2/2]
• How does VMware avoid having lots of redundant
copies of same code and data?
• How did IBM mainframes virtualize I/O?
• How did VMware Workstation handle the many
I/O devices of a PC? Why did they do it? What
were drawbacks?
• What does VMware ESX Server do for I/O? Why
does it work well?
• What important role do you think VMs will play in the future of Information Technology? Why?
• What is implication of multiple processors/chip
and VM?
And in Conclusion
• Virtual Machine Revival
– Overcome security flaws of modern OSes
– Processor performance no longer highest priority
– Manage Software, Manage Hardware
• “… VMMs give OS developers another opportunity to develop functionality no longer practical in today’s complex and ossified operating systems, where innovation moves at geologic pace.” [Rosenblum and Garfinkel, 2005]
• Virtualization challenges for processor, virtual
memory, I/O
– Paravirtualization, ISA upgrades to cope with those difficulties
• Xen as example VMM using paravirtualization
– 2005 performance on non-I/O bound, I/O intensive apps: 80% of
native Linux without driver VM, 34% with driver VM
• Opteron memory hierarchy still critical to
performance
In-class exercise
• Virtual machines have the potential for adding many
beneficial capabilities to computer systems. Could
VMs be used to provide the following capabilities? If
so, how could they facilitate this?
– Make it easy to consolidate a large number of applications running on many old uniprocessors onto a single higher-performance CMP-based server?
– Limit damage caused by computer viruses, worms, or spyware?
– Higher performance in memory-intensive applications with large
memory footprints?
– Dynamically provision extra capacity for peak application loads?
– Run legacy application on old operating systems on modern
machines?
In-class exercise
• VMs can lose performance from a number of events, such as the execution of privileged instructions, TLB misses, traps, and I/O. These events are usually handled by system code. Thus one way of estimating the slowdown when running under a VM is the percentage of application execution time in system versus user mode.
– What types of programs would be expected to have larger slowdowns when running on VMs?
– What type of VM, pure vs. para, is preferred for which applications?