Memory Protection

In order to prevent one process from reading/writing another process’
memory, we must ensure that a process cannot change its virtual-to-physical translations

Typically, this is done by:
— Having two processor modes: user & kernel
• Only the OS runs in kernel mode
— Only allowing kernel mode to write to the virtual memory state:
• The page table
• The page table base pointer
• The TLB
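
As a concrete illustration, here is a minimal C sketch, not real OS or hardware code, of this rule: a simulated one-level page table whose entries (and base pointer) can only be written while the simulated processor is in kernel mode. The names cpu_mode and set_pte and the table layout are assumptions made for the example.

```c
/* Minimal sketch (assumed names, not a real OS): page-table updates are
 * permitted only in kernel mode; a user-mode attempt is rejected, which
 * in real hardware would trap to the OS. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum mode { USER, KERNEL };

#define NUM_PAGES 1024

static enum mode cpu_mode = USER;            /* current processor mode    */
static uint32_t  page_table[NUM_PAGES];      /* VPN -> (PFN << 1) | valid */
static uint32_t *page_table_base = page_table;

/* Only kernel mode may change virtual-to-physical translations. */
bool set_pte(uint32_t vpn, uint32_t pfn)
{
    if (cpu_mode != KERNEL) {
        fprintf(stderr, "protection fault: user mode cannot write PTEs\n");
        return false;
    }
    page_table_base[vpn] = (pfn << 1) | 1;   /* low bit = valid */
    return true;
}

int main(void)
{
    set_pte(5, 42);      /* rejected: we are in user mode               */
    cpu_mode = KERNEL;   /* only the OS runs in kernel mode             */
    set_pte(5, 42);      /* accepted: maps virtual page 5 -> frame 42   */
    return 0;
}
```

The same kernel-mode check would guard writes to the page-table base pointer and to TLB entries.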
Sharing Memory

Paged virtual memory enables sharing at the granularity of a page, by
allowing two page tables to point to the same physical addresses
For example, if you run two copies of a program, the OS will share the
code pages between the programs
[Figure: the virtual address spaces of Program A and Program B mapping into shared physical memory, with some pages residing on disk.]
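
A rough C sketch of that sharing, assuming a toy one-level page table per process; the structures, page numbers, and frame numbers are invented for illustration.

```c
/* Illustrative sketch only: two per-process page tables whose entries for
 * the code page point at the same physical frame, so the code is stored
 * once in physical memory. */
#include <stdint.h>
#include <stdio.h>

#define NUM_PAGES 16

typedef struct {
    uint32_t pte[NUM_PAGES];   /* VPN -> (PFN << 1) | valid */
} page_table_t;

static void map(page_table_t *pt, uint32_t vpn, uint32_t pfn)
{
    pt->pte[vpn] = (pfn << 1) | 1;
}

int main(void)
{
    page_table_t prog_a = {{0}}, prog_b = {{0}};

    /* Both copies of the program map virtual page 0 (their code page)
     * to the same physical frame 7: the code exists once in memory.   */
    map(&prog_a, 0, 7);
    map(&prog_b, 0, 7);

    /* Data pages stay private: a different frame for each process. */
    map(&prog_a, 1, 12);
    map(&prog_b, 1, 13);

    printf("A code frame = %u, B code frame = %u (shared)\n",
           prog_a.pte[0] >> 1, prog_b.pte[0] >> 1);
    return 0;
}
```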
Virtual Memory and the MIPS pipelined datapath

In the datapath, two addresses need to be translated (virtual → physical)
— instruction address (PC value)
• IF stage: fetch the instruction at PC, compute PC+4 (next instruction)
• ID stage: branch-target computation
— data address (for loads/stores)
• EX or MEM stage

Two possible solutions (discussed in section):
1. The PC register stores only the virtual PC
2. The PC register stores both the virtual PC and the physical PC

The first solution is simpler but slower, since the TLB must be accessed before the I-cache

The second solution is more complex and requires a TLB access in the ID stage as well
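
To make the two translation points concrete, here is an illustrative C sketch, not a model of the actual MIPS hardware, of a tiny fully associative TLB that is consulted once for the instruction address (IF) and once for a load/store address (MEM); the sizes and function names are assumptions.

```c
/* Sketch only: a small fully associative TLB consulted for both the
 * instruction fetch and the data access of a single instruction.      */
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 8
#define PAGE_SHIFT  12                       /* 4 KiB pages */

typedef struct { uint32_t vpn, pfn; bool valid; } tlb_entry_t;
static tlb_entry_t tlb[TLB_ENTRIES];

/* Translate a virtual address; returns false on a TLB miss
 * (real hardware would then walk the page table or trap).  */
static bool translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *paddr = (tlb[i].pfn << PAGE_SHIFT)
                   | (vaddr & ((1u << PAGE_SHIFT) - 1));
            return true;
        }
    }
    return false;
}

/* One instruction may need two translations: */
void step(uint32_t pc, uint32_t effective_addr)
{
    uint32_t phys_pc, phys_data;
    translate(pc, &phys_pc);               /* IF stage: instruction address */
    translate(effective_addr, &phys_data); /* MEM stage: load/store address */
    (void)phys_pc; (void)phys_data;
}
```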
A third solution: caches use virtual addresses

If caches can be accessed using virtual addresses,
the datapath can be greatly simplified

What are some of the pitfalls of this approach, and
how can they be handled?
1. Since the cache is shared, two programs using the same virtual addresses use the same cache space
→ the cache tag must store a process-ID
[Figure: CPU with a little static RAM (cache) and lots of dynamic RAM.]
2. If a process finishes and its process-ID is later reused by a new process, the new process may get a false cache hit
→ when a process finishes, the appropriate cache entries must be invalidated
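
Both fixes can be sketched in C. This is an illustrative model with made-up sizes and names, not a real cache design: each line's tag carries a process-ID, a hit requires both tag and process-ID to match, and an invalidation pass runs when a process exits so that a later process reusing the ID cannot get a false hit.

```c
/* Sketch of a virtually indexed, virtually tagged cache with the two
 * fixes above. Sizes and field names are assumptions.                 */
#include <stdbool.h>
#include <stdint.h>

#define CACHE_LINES 256
#define LINE_SHIFT  5                      /* 32-byte lines */

typedef struct {
    bool     valid;
    uint16_t pid;                          /* fix 1: process-ID in the tag */
    uint32_t tag;                          /* high bits of virtual address */
} cache_line_t;

static cache_line_t cache[CACHE_LINES];

bool cache_hit(uint16_t pid, uint32_t vaddr)
{
    uint32_t index = (vaddr >> LINE_SHIFT) % CACHE_LINES;
    uint32_t tag   = vaddr >> (LINE_SHIFT + 8);   /* 8 index bits */
    cache_line_t *line = &cache[index];

    /* A hit requires both the tag and the process-ID to match. */
    return line->valid && line->tag == tag && line->pid == pid;
}

/* Fix 2: when a process finishes, invalidate its cache entries so a new
 * process that reuses the ID cannot see stale data.                    */
void invalidate_pid(uint16_t pid)
{
    for (int i = 0; i < CACHE_LINES; i++)
        if (cache[i].valid && cache[i].pid == pid)
            cache[i].valid = false;
}
```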
Summary
• Virtual memory is great:
— It means that we don’t have to manage our own memory
— It allows different programs to use the same memory
— It provides protection between different processes
— It allows controlled sharing between processes (albeit somewhat
inflexibly)
• The key technique is indirection:
— Yet another classic CS trick you’ve seen in this class
— Many problems can be solved with indirection
• Caching made a few cameo appearances, too:
— Virtual memory enables using physical memory as a cache for disk
— We used caching (in the form of the Translation Lookaside Buffer) to
make Virtual Memory’s indirection fast
Embedded Processors
• MIPS processors for mobile and embedded consumer applications
— e.g., multimedia-based devices, home entertainment systems, etc.
— cannot afford too much silicon (space, cost)
• For most processors, an instruction is not issued every cycle: many cycles are wasted with no data available because the CPU is servicing a cache miss.
• Traditional approaches (general-purpose CPUs):
— better branch prediction, out-of-order execution, …
— bigger, more associative caches
• Neither of these is feasible for embedded processors
Transistor usage over time (general CPU)
Source: UPCRC Distinguished Lecture Series, Yale Patt
[Figure: number of transistors over time, split between cache and the rest of the microprocessor.]
MIPS Virtual Processor
• Maintains multiple contexts in hardware
— when a cycle would otherwise be lost to a miss, the processor switches to another context
— two virtual processing elements corresponding to the OS-visible state,
each containing five thread contexts corresponding to the user state
• To the OS/application, each VPE/TC looks like a fully featured CPU
— ISA extensions to allow programmers access to these capabilities
• Example: fork $rd, $rs, $rt
• Start new TC with PC = $rs, new TC’s $rd = forking TC’s $rt
• “If the MIPS VPE/TC can capture a good portion of those wasted cycles
you have literally doubled the performance of the processor with no
additional cores, pipelines or higher clock rates, and at considerably
lower power consumption.”
• “A key unanswered question is: how easy will it be to use MIPS VPE
architecture? It is likely to be harder than MIPS would like us to believe.”
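
As a conceptual model only, not the hardware implementation, the fork semantics described above can be written in C roughly as follows; the structures are assumptions, and the five thread contexts per VPE follow the slide.

```c
/* Conceptual C model of `fork $rd, $rs, $rt`: allocate a free thread
 * context, set its PC from $rs, and copy the forking TC's $rt into the
 * new TC's $rd.                                                        */
#include <stdbool.h>
#include <stdint.h>

#define NUM_TCS 5                  /* thread contexts per VPE (per slide) */

typedef struct {
    bool     in_use;
    uint32_t pc;
    uint32_t gpr[32];              /* general-purpose registers */
} thread_context_t;

static thread_context_t tc[NUM_TCS];

/* Returns the index of the new TC, or -1 if no free context exists. */
int fork_tc(const thread_context_t *parent, int rd, int rs, int rt)
{
    for (int i = 0; i < NUM_TCS; i++) {
        if (!tc[i].in_use) {
            tc[i].in_use  = true;
            tc[i].pc      = parent->gpr[rs];  /* new TC starts at $rs      */
            tc[i].gpr[rd] = parent->gpr[rt];  /* new TC's $rd = parent $rt */
            return i;
        }
    }
    return -1;                     /* real hardware would raise an exception */
}
```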