CS 414 Midterm Review

Operating System: Definition
• An Operating System (OS) provides a virtual machine on top of the real hardware, whose interface is more convenient than the raw hardware interface
• Layering: Applications → OS interface → Operating System → Physical machine interface → Hardware
• Advantages: easy to use, simpler to code, more reliable, more secure, …
• You can say: “I want to write XYZ into file ABC”

Crossing Protection Boundaries
• The user calls an OS procedure for “privileged” operations
• Calling a kernel-mode service from a user-mode program:
  – Uses system calls
  – A system call switches execution to kernel mode
• Sequence: user process issues a system call → trap into the kernel (mode bit = 0) → save the caller’s state → execute the system call → restore state → return to user mode (mode bit = 1) → resume the process
  – User mode: mode bit = 1; kernel mode: mode bit = 0

What is a process?
• The unit of execution
• The unit of scheduling
• A thread of execution + an address space
• A program in execution
  – Sequential, instruction-at-a-time execution of a program; the same as “job”, “task”, or “sequential process”

Process State Transitions
• States: New → Ready → Running → Exit, with Waiting reached from Running; dispatch moves Ready → Running, and an interrupt moves Running → Ready
• Processes hop across states as a result of:
  – Actions they perform, e.g. system calls
  – Actions performed by the OS, e.g. rescheduling
  – External actions, e.g. I/O

Context Switch
• For a running process
  – All registers are loaded in the CPU and modified, e.g.
Program Counter, Stack Pointer, and General-Purpose Registers
• When a process relinquishes the CPU, the OS
  – Saves register values to the PCB of that process
• To execute another process, the OS
  – Loads register values from the PCB of that process
• Context switch: the process of switching the CPU from one process to another
  – Very machine dependent, e.g. in the types of registers

Threads and Processes
• Most operating systems therefore support two entities:
  – The process, which defines the address space and general process attributes
  – The thread, which defines a sequential execution stream within a process
• A thread is bound to a single process
  – For each process, however, there may be many threads
• Threads are the unit of scheduling
• Processes are containers in which threads execute

Schedulers
• A process migrates among several queues
  – Device queue, job queue, ready queue
• The scheduler selects a process to run from these queues
• Long-term scheduler:
  – Loads a job into memory
  – Runs infrequently
• Short-term scheduler:
  – Selects a ready process to run on the CPU
  – Should be fast
• Medium-term scheduler:
  – Reduces the degree of multiprogramming or memory consumption

CPU Scheduling
• FCFS
• LIFO
• SJF
• SRTF
• Priority Scheduling
• Round Robin
• Multi-level Queue
• Multi-level Feedback Queue

Race Conditions
• Definition: a timing-dependent error involving shared state
  – Whether it happens depends on how the threads are scheduled
• Hard to detect:
  – All possible schedules have to be safe
    • The number of possible schedule permutations is huge
    • Some bad schedules? Some that will work sometimes?
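The lost-update race described here can be made concrete with a small, deterministic simulation. The `run_schedule` helper and the thread names below are invented for illustration; each “thread” performs a load / increment / store sequence, and the schedule decides the interleaving:

```python
# Simulating the classic lost-update race on a shared counter.
# Each "thread" performs: load counter, add 1, store counter.
# A schedule is a sequence of (thread, step) pairs; interleaving
# both loads before the stores loses one update.

def run_schedule(schedule):
    counter = 0
    local = {}            # each thread's loaded copy of the counter
    for thread, step in schedule:
        if step == "load":
            local[thread] = counter
        else:             # "store"
            counter = local[thread] + 1
    return counter

# Safe schedule: T1 finishes before T2 starts -> counter == 2
safe = [("T1", "load"), ("T1", "store"), ("T2", "load"), ("T2", "store")]

# Bad schedule: both threads load the old value -> one update is lost
racy = [("T1", "load"), ("T2", "load"), ("T1", "store"), ("T2", "store")]

print(run_schedule(safe))   # 2
print(run_schedule(racy))   # 1
```

Only one of the many possible interleavings loses an update, which is exactly why such bugs are intermittent and timing dependent.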
  – They are intermittent
    • Timing dependent: small changes can hide the bug

The Fundamental Issue: Atomicity
• Our atomic operation is not done atomically by the machine
  – Atomic unit: an instruction sequence guaranteed to execute indivisibly
  – Also called a “critical section” (CS)
• When two processes want to execute their critical sections,
  – One process finishes its CS before the other is allowed to enter

Critical Section Problem
• Problem: design a protocol for processes to cooperate, such that only one process is in its critical section at a time
  – How to make multiple instructions seem like one?
• Processes progress with non-zero speed; no assumption on clock speed
• Used extensively in operating systems: queues, shared variables, interrupt handlers, etc.

Solution Structure
• Shared variables and initialization, then in each process:
  – Entry Section (added to solve the CS problem)
  – Critical Section
  – Exit Section (added to solve the CS problem)

Solution Requirements
• Mutual Exclusion
  – Only one process can be in the critical section at any time
• Progress
  – The decision on who enters the CS cannot be indefinitely postponed
  – No deadlock
• Bounded Waiting
  – A bound on the number of times others can enter the CS while I am waiting
  – No livelock
• Also: efficient (no extra resources), fair, simple, …

Semaphores
• A non-negative integer with atomic increment and decrement
• An integer S that (besides initialization) can only be modified by:
  – P(S) or S.wait(): decrement, or block if already 0
  – V(S) or S.signal(): increment, and wake up a waiting process if any
• These operations are atomic; conceptually:
  semaphore S;
  P(S) { while (S <= 0) ; S--; }
  V(S) { S++; }

Semaphore Types
• Counting semaphores:
  – Any integer
  – Used for synchronization
• Binary semaphores:
  – Value 0 or 1
  – Used for mutual exclusion (mutex)
• Mutex pattern, shared semaphore S with Init: S = 1; each process i executes:
  P(S);
  Critical Section
  V(S);

Mutexes and Synchronization
• Shared: semaphore S; Init: S = 1 (mutual exclusion) or S = 0 (synchronization)
• Process i: P(S); Code XYZ; V(S);
• Process j: P(S); Code ABC; V(S);

Monitors
• Hoare, 1974
• An abstract data type for handling/defining shared resources
• Comprises:
  – Shared private data
    • The resource; cannot be accessed from outside
  – Procedures that operate on the data
    • The gateway to the resource
    • Can only act on data local to the monitor
  – Synchronization primitives
    • Among threads that access the procedures

Synchronization Using Monitors
• Defines condition variables:
  – condition x;
  – Provides a mechanism to wait for events (resources available? any writers?)
• Three atomic operations on condition variables:
  – x.wait(): release the monitor lock, sleep until woken up
    • Condition variables have waiting queues too
  – x.notify(): wake one process waiting on the condition (if there is one)
    • No history is associated with the signal
  – x.broadcast(): wake all processes waiting on the condition
    • Useful for a resource manager
• Condition variables are not Boolean
  – “if (x) then { … }” does not make sense

Types of Monitors
What happens on notify():
• Hoare: the signaler immediately gives the lock to the waiter (theory)
  – The condition definitely holds when the waiter returns
  – Easy to reason about the program
• Mesa: the signaler keeps the lock and the processor (practice)
  – The condition might not hold when the waiter returns
  – Fewer context switches; easy to support broadcast
• Brinch Hansen: the signaler must immediately exit the monitor
  – So notify should be the last statement of a monitor procedure

Deadlocks
• Definition: deadlock exists among a set of processes if
  – Every process is waiting for an event
  – That event can be caused only by another process in the set
  – The event is the acquire or release of another resource
• Kansas 20th-century law: “When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone”

Four Conditions for Deadlock
• Coffman et al.,
1971
• Necessary conditions for deadlock to exist:
  – Mutual exclusion
    • At least one resource must be held in non-sharable mode
  – Hold and wait
    • There exists a process holding a resource and waiting for another
  – No preemption
    • Resources cannot be preempted
  – Circular wait
    • There exists a set of processes {P1, P2, …, PN} such that P1 is waiting for P2, P2 for P3, …, and PN for P1
• All four conditions must hold for deadlock to occur

Dealing with Deadlocks
• Proactive approaches:
  – Deadlock prevention
    • Negate one of the 4 necessary conditions
    • Prevents deadlock from occurring
  – Deadlock avoidance
    • Carefully allocate resources based on future knowledge
    • Deadlocks are thereby prevented
• Reactive approach:
  – Deadlock detection and recovery
    • Let deadlock happen, then detect it and recover from it
• Ignore the problem
  – Pretend deadlocks will never occur
  – The “ostrich” approach

Safe State
• A state is said to be safe if it has a process sequence {P1, P2, …, Pn} such that for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj, where j < i
• The state is safe because the OS can definitely avoid deadlock
  – By blocking any new requests until the safe order is executed
  – This avoids the circular-wait condition
  – A process waits until a safe state is guaranteed

Banker’s Algorithm
• Decides whether to grant a resource request
• Data structures:
  – n: number of processes; m: number of resource types
  – available[1..m]: available[j] is the number of available resources of type j
  – max[1..n, 1..m]: the maximum demand of each Pi for each Rj
  – allocation[1..n, 1..m]: the current allocation of resource Rj to Pi
  – need[1..n, 1..m]: the maximum number of resource Rj that Pi may still request
  – Let request[i] be the vector of the number of each resource Rj that process Pi wants

Basic Algorithm
1. If request[i] > need[i], then error (asked for too much)
2. If request[i] > available, then wait (can’t supply it now)
3.
Resources are available to satisfy the request. Assume that we satisfy the request; then we would have:
   available = available - request[i]
   allocation[i] = allocation[i] + request[i]
   need[i] = need[i] - request[i]
Now check whether this would leave us in a safe state: if yes, grant the request; if no, leave the state as is and cause the process to wait.

Memory Management Issues
• Protection: errors in one process should not affect others
• Transparency: a process should run despite its memory size/location
• The CPU issues a virtual address; the translation box (MMU) checks whether the address is legal and maps it to a physical address in physical memory, and illegal accesses fault
• How to do this mapping?

Segmentation
• Processes have multiple base + limit registers
• A process’s address space has multiple segments
  – Each segment has its own base + limit registers
  – Add protection bits to every segment (e.g. the text segment read-only, the stack segment read/write)
• How to do the mapping?

Mapping Segments
• Segment table
  – An entry for each segment
  – Each entry is a tuple <base, limit, protection>
• Each memory reference indicates a segment and an offset; the offset is checked against the segment’s length (out-of-range references fault) and added to the base

Fragmentation
• “The inability to use free memory”
• External fragmentation:
  – Variable-sized pieces: many small holes accumulate over time
• Internal fragmentation:
  – Fixed-sized pieces: internal waste if an entire piece is not used

Paging
• Divide memory into fixed-size pieces
  – Called “frames” or “pages”; typically 4 KB–8 KB
• Pros: easy, no external fragmentation (though some internal fragmentation)

Mapping Pages
• With a 2^m virtual address space and a 2^n page size, (m - n) bits denote the page number and n bits the offset within the page
• Translation is done using a page table: the virtual page number (VPN) indexes the page table, which yields the protection bits and the physical page number (PPN); the offset is unchanged
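The VPN/offset split can be sketched concretely. The page-table contents below are made up for illustration, assuming a 4 KB (2^12) page size:

```python
# Splitting a virtual address into page number and offset,
# assuming a 4 KB (2**12) page size; page_table maps VPN -> PPN
# and its entries are invented for this example.

PAGE_BITS = 12
PAGE_SIZE = 1 << PAGE_BITS            # 4096

page_table = {0: 5, 1: 2, 3: 7}       # hypothetical VPN -> PPN entries

def translate(vaddr):
    vpn = vaddr >> PAGE_BITS          # high bits: virtual page number
    offset = vaddr & (PAGE_SIZE - 1)  # low 12 bits: offset within the page
    if vpn not in page_table:
        raise Exception("page fault: VPN %d not resident" % vpn)
    return (page_table[vpn] << PAGE_BITS) | offset

vaddr = (1 << PAGE_BITS) | 128        # VPN 1, offset 128
print(hex(translate(vaddr)))          # PPN 2, offset 128 -> 0x2080
```

Note how the offset passes through unchanged; only the page-number bits are rewritten by the table lookup.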
• A page-table entry can also be marked “invalid”, in which case the reference faults

Paging + Segmentation
• Paged segmentation
  – Handles very long segments
  – The segments are paged
• Segmented paging
  – Used when the page table is very big
  – Segment the page table
  – Consider the IBM System/370 (24-bit address space): segment # (4 bits), page # (8 bits), page offset (12 bits)

What is virtual memory?
• Each process has the illusion of a large address space
  – 2^32 bytes for 32-bit addressing
• However, physical memory is much smaller
• How do we give this illusion to multiple processes?
  – Virtual memory: some addresses reside on disk, and the page table maps each page to physical memory or disk

Virtual Memory
• Loading the entire process into memory (swapping), running it, and exiting
  – Is slow (for big processes)
  – Is wasteful (the process might not require everything)
• Solution: partial residency
  – Paging: only bring in pages, not the whole process
  – Demand paging: bring in only the pages that are actually required
• Where to fetch a page from?
  – Have a contiguous space on disk: the swap file (e.g. pagefile.sys)

Page Faults
• On a page fault, the OS:
  – Finds a free frame, or evicts a page from memory (which one? that would want knowledge of the future)
  – Issues a disk request to fetch the data for the page (what to fetch? just the requested page, or more?)
  – Blocks the current process and context-switches to a new process (how? the process might be in the middle of an instruction)
  – When the disk completes, sets the present bit to 1 and puts the current process in the ready queue

Page Replacement Algorithms
• Random: pick any page to eject at random
  – Used mainly for comparison
• FIFO: the page brought in earliest is evicted
  – Ignores usage
  – Suffers from “Belady’s Anomaly”
    • The fault rate can increase when the number of frames increases
    • E.g.
the reference string 0 1 2 3 0 1 4 0 1 2 3 4 with frame counts 3 and 4
• OPT: Belady’s algorithm
  – Select the page not used for the longest time in the future
• LRU: evict the page that hasn’t been used for the longest time
  – The past can be a good predictor of the future

Thrashing
• Processes in the system require more memory than is physically there
  – The OS keeps throwing out pages that will be referenced soon
  – So processes keep accessing memory that is not there
• Why does it occur?
  – No good reuse: past != future
  – There is reuse, but the process does not fit
  – Too many processes in the system

Approach 1: Working Set
• Peter Denning, 1968
  – Defines the locality of a program: the pages referenced by the process in the last T seconds of execution are considered to comprise its working set
  – T: the working-set parameter
• Uses:
  – Caching: size the cache to the size of the WS
  – Scheduling: schedule a process only if its WS is in memory
  – Page replacement: replace non-WS pages

Working Sets
• The working-set size is the number of pages in the working set
  – The number of pages touched in the interval (t - Δ, t)
• The working-set size changes with program locality
  – During periods of poor locality, you reference more pages
  – Within that period of time, you will have a larger working-set size
• Don’t run a process unless its working set is in memory

Approach 2: Page Fault Frequency
• Thrashing viewed as a poor ratio of fetching to work
• PFF = page faults / instructions executed
• If PFF rises above a threshold, the process needs more memory
  – Not enough memory on the system? Swap out.
• If PFF sinks below a threshold, memory can be taken away from the process

Allocation and Deallocation
• What happens when you call:
  – int *p = (int *)malloc(2500*sizeof(int));
    • The allocator slices a chunk off the heap and gives it to the program
  – free(p);
    • The deallocator puts the allocated space back on a free list
• Simplest implementation: a bump allocator over the free region of the heap
  – Allocation: increment a pointer on every allocation
  – Deallocation: no-op
  – Problem: lots of fragmentation

Buddy-Block Scheme
• Described by Donald Knuth; very simple
• Idea: work with memory regions whose sizes are all powers of 2 times some “smallest” size b
  – Regions of size b·2^k
• Round each request up to the form b·2^k
• Keep a free list for each block size (each k)
  – When freeing an object, combine it with adjacent free regions if this will result in a double-sized free object
• Basic actions on an allocation request:
  – If the request is a close fit to a region on the free list, allocate that region
  – If the request is less than half the size of a region on the free list, split the next larger region in half
  – If the request is larger than any region, double the size of the heap (this puts a new, larger region on the free list)

Good luck!
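As a final worked example, the buddy-block rules above can be sketched as a toy free-list model. Only the allocation side (rounding up and splitting) is modeled; coalescing on free and real addresses are omitted, and the sizes and helper names are invented:

```python
# Toy buddy allocator: block sizes are b * 2**k for a smallest size b.
# Only size bookkeeping is modeled (no real addresses); helper names
# are invented for this sketch, and coalescing on free is omitted.

B = 64                                   # smallest block size b, in bytes

def round_up(request):
    """Round a request up to the nearest order k with b * 2**k >= request."""
    k = 0
    while B * (1 << k) < request:
        k += 1
    return k

class BuddyHeap:
    def __init__(self, max_k):
        # one free list (here just a count) per order k;
        # start with a single block of the largest order
        self.free_lists = {k: 0 for k in range(max_k + 1)}
        self.free_lists[max_k] = 1

    def alloc(self, request):
        k = round_up(request)
        # find the smallest free order >= k
        j = k
        while j in self.free_lists and self.free_lists[j] == 0:
            j += 1
        if j not in self.free_lists:
            raise MemoryError("no block large enough")
        while j > k:                     # split: one block of order j
            self.free_lists[j] -= 1      # becomes two blocks of order j-1
            self.free_lists[j - 1] += 2
            j -= 1
        self.free_lists[k] -= 1
        return B * (1 << k)              # size actually handed out

heap = BuddyHeap(max_k=4)                # one free 1024-byte block
print(heap.alloc(100))                   # rounds up to 128
```

A 100-byte request is rounded up to b·2^1 = 128 bytes, and the single 1024-byte block is split repeatedly (1024 → 512 → 256 → 128) until a block of the right order exists, leaving the leftover halves on their free lists.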