Module 6: CPU Scheduling - Simon Fraser University
Chapter 5: CPU Scheduling
(Transcript of lecture slides; based on Operating System Concepts, Silberschatz, Galvin and Gagne, ©2005)

Chapter 5: Objectives
- Understand:
  - Scheduling criteria
  - Scheduling algorithms
  - Multiple-processor scheduling

Basic Concepts
- Maximum CPU utilization is obtained with multiprogramming
- CPU-I/O burst cycle: process execution consists of a cycle of CPU execution and I/O wait
- How long is a CPU burst?

CPU Burst Distribution
- CPU bursts vary greatly from process to process and from computer to computer
- In general, they follow an exponential-like distribution: many short bursts, few long bursts

CPU Scheduler
- Selects one process from the ready queue to run on the CPU
- Scheduling can be:
  - Nonpreemptive: once a process is allocated the CPU, it does not leave it unless
    1. it has to wait, e.g., for an I/O request or for a child to terminate, or
    2. it terminates
  - Preemptive: the OS can force (preempt) a process off the CPU at any time, say, to allocate the CPU to another, higher-priority process
- Which is harder to implement, and why?
  - Preemptive is harder: we must maintain consistency of data shared between processes and, more importantly, of kernel data structures (e.g., I/O queues)
  - Think of a preemption while the kernel is executing a system call on behalf of a process (many OSs wait for the system call to finish)

Dispatcher
- Scheduler: selects one process to run on the CPU
- Dispatcher: allocates the CPU to the selected process, which involves:
  - switching context
  - switching to user mode
  - jumping to the proper location in the selected process and restarting it
- Dispatch latency: the time it takes for the dispatcher to stop one process and start another running
- How does the scheduler select a process to run?

Scheduling Criteria
- CPU utilization: keep the CPU as busy as possible (maximize)
- Throughput: number of processes that complete their execution per time unit (maximize)
- Turnaround time: amount of time to execute a particular process, from submission to termination (minimize)
- Waiting time: amount of time a process has been waiting in the ready queue (minimize)
- Response time: amount of time from when a request was submitted until the first response is produced, not the output; relevant for time-sharing environments (minimize)

Scheduling Algorithms
- First-Come, First-Served (FCFS)
- Shortest Job First (SJF)
- Priority
- Round Robin (RR)
- Multilevel queues
- Note: a process may have many CPU bursts, but the examples show only one per process for simplicity

First-Come, First-Served (FCFS) Scheduling

  Process   Burst Time
  P1        24
  P2        3
  P3        3

- Suppose the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

  | P1 (0-24) | P2 (24-27) | P3 (27-30) |

- Waiting time for P1 = 0; P2 = 24; P3 = 27
- Average waiting time: (0 + 24 + 27)/3 = 17
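The waiting times above can be checked mechanically: under FCFS, each process waits for the sum of the bursts queued ahead of it. A minimal C sketch (not from the slides; it simply hard-codes the example's burst times) that reproduces the calculation:

#include <stdio.h>

/* FCFS: the waiting time of each process is the sum of the bursts queued before it. */
int main(void) {
    int burst[] = {24, 3, 3};            /* P1, P2, P3 in arrival order */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];                /* the next process also waits for this burst */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}

Running it prints waits of 0, 24, and 27 and an average of 17.00, matching the slide.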
FCFS Scheduling (Cont.)
- Suppose instead that the processes arrive in the order P2, P3, P1 (bursts 3, 3, 24). The Gantt chart for the schedule is:

  | P2 (0-3) | P3 (3-6) | P1 (6-30) |

- Waiting time for P1 = 6; P2 = 0; P3 = 3
- Average waiting time: (6 + 0 + 3)/3 = 3
- Much better than the previous case
- Convoy effect: short processes stuck behind a long process

Shortest-Job-First (SJF) Scheduling
- Associate with each process the length of its next CPU burst; use these lengths to schedule the process with the shortest time
- Two schemes:
  - Nonpreemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst
  - Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF)
- SJF is optimal: it gives the minimum average waiting time for a given set of processes

Example of Non-Preemptive SJF

  Process   Arrival Time   Burst Time
  P1        0.0            7
  P2        2.0            4
  P3        4.0            1
  P4        5.0            4

- SJF (non-preemptive):

  | P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16) |

- Average waiting time = (0 + 6 + 3 + 7)/4 = 4

Example of Preemptive SJF (same processes)
- SJF (preemptive, SRTF):

  | P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |

- Average waiting time = (9 + 1 + 0 + 2)/4 = 3

Determining the Length of the Next CPU Burst
- We can only estimate the length
- Estimate it from the lengths of previous CPU bursts, using exponential averaging:
  1. t_n = actual length of the n-th CPU burst
  2. τ_{n+1} = predicted value for the next CPU burst
  3. α, 0 ≤ α ≤ 1
  4. Define: τ_{n+1} = α t_n + (1 − α) τ_n

Exponential Averaging
- Expanding the formula gives:
  τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^{n+1} τ_0
- Since both α and (1 − α) are at most 1, each successive term has less weight than its predecessor
- Examples:
  - α = 0  ==>  τ_{n+1} = τ_n  ==>  the last CPU burst does not count (treated as a transient value)
  - α = 1  ==>  τ_{n+1} = t_n  ==>  only the last CPU burst counts (history is treated as stale)

Prediction of CPU Burst Lengths: Exponential Average
- Assume α = 0.5 and τ_0 = 10  [figure: predicted vs. actual CPU burst lengths]
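A short C sketch of the exponential-averaging predictor, using the slide's parameters α = 0.5 and τ_0 = 10; the sequence of measured bursts is only illustrative, not taken from the slides:

#include <stdio.h>

/* Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n */
int main(void) {
    double alpha = 0.5, tau = 10.0;               /* alpha and tau_0 as on the slide */
    double bursts[] = {6, 4, 6, 4, 13, 13, 13};   /* hypothetical measured bursts t_n */
    int n = sizeof bursts / sizeof bursts[0];

    for (int i = 0; i < n; i++) {
        printf("predicted %.2f, actual %.1f\n", tau, bursts[i]);
        tau = alpha * bursts[i] + (1.0 - alpha) * tau;   /* update the prediction */
    }
    printf("next prediction: %.2f\n", tau);
    return 0;
}

Note how a larger α makes the prediction track the most recent burst more closely, while a smaller α makes it rely more on history.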
Priority Scheduling
- A priority number (integer) is associated with each process
- The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
- Can be preemptive or nonpreemptive
- SJF is priority scheduling where the priority is the predicted next CPU burst time
- Problem: starvation - low-priority processes may never execute
- Solution: aging - increase the priority of a process as it waits in the system

Round Robin (RR)
- Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue
- If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units
- Performance:
  - q large  ==>  behaves like FCFS
  - q small  ==>  q must still be large with respect to the context-switch time, otherwise the overhead is too high

Example of RR with Time Quantum = 20

  Process   Burst Time
  P1        53
  P2        17
  P3        68
  P4        24

- The Gantt chart is:

  | P1 (0-20) | P2 (20-37) | P3 (37-57) | P4 (57-77) | P1 (77-97) | P3 (97-117) | P4 (117-121) | P1 (121-134) | P3 (134-154) | P3 (154-162) |

- Typically higher average turnaround time than SJF, but better response time
- A small simulator reproducing this schedule is sketched after these slides

Time Quantum and Context-Switch Time
- Smaller q  ==>  more responsive, but more context switches (overhead)

Turnaround Time Varies with the Time Quantum
- Turnaround time varies with the quantum, then stabilizes
- Rule of thumb for good performance: 80% of CPU bursts should be shorter than the time quantum
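A small round-robin simulator in C; this is a sketch rather than anything from the slides. With quantum 20 and bursts 53, 17, 68, 24 it prints the same dispatch order as the Gantt chart in the RR example above.

#include <stdio.h>

#define N 4

/* Round robin: run the head of the ready queue for at most one quantum,
 * then move it to the tail if it still needs CPU time. */
int main(void) {
    int remaining[N] = {53, 17, 68, 24};   /* P1..P4 burst times */
    int quantum = 20, time = 0;
    int queue[64], head = 0, tail = 0;

    for (int i = 0; i < N; i++) queue[tail++] = i;   /* initial ready queue */

    while (head < tail) {
        int p = queue[head++];
        int run = remaining[p] < quantum ? remaining[p] : quantum;
        printf("t=%3d: run P%d for %d\n", time, p + 1, run);
        time += run;
        remaining[p] -= run;
        if (remaining[p] > 0) queue[tail++] = p;     /* preempted: back to the tail */
    }
    printf("t=%3d: all done\n", time);
    return 0;
}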
Multilevel Queue
- The ready queue is partitioned into separate queues, e.g.:
  - foreground (interactive)
  - background (batch)
- Each queue has its own scheduling algorithm:
  - foreground: RR
  - background: FCFS
- Scheduling must also be done between the queues:
  - Fixed-priority scheduling (i.e., serve all from foreground, then from background); possibility of starvation
  - Time slice: each queue gets a certain amount of CPU time which it can schedule among its processes, e.g., 80% to foreground in RR and 20% to background in FCFS

Multilevel Queue Scheduling
- [Figure: ready queue partitioned into multiple priority queues]

Multilevel Feedback Queue
- A process can move between the various queues; aging can be implemented this way
- A multilevel-feedback-queue scheduler is defined by the following parameters:
  - number of queues
  - scheduling algorithm for each queue
  - method used to determine when to upgrade a process
  - method used to determine when to demote a process
  - method used to determine which queue a process enters when it needs service

Example of Multilevel Feedback Queue
- Three queues:
  - Q0: RR with time quantum 8 milliseconds
  - Q1: RR with time quantum 16 milliseconds
  - Q2: FCFS
- Scheduling:
  - A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1
  - At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2

Multilevel Feedback Queues
- Notes:
  - Short processes are served faster (higher priority)  ==>  more responsive
  - Long (CPU-bound) processes sink to the bottom and are served FCFS  ==>  more throughput

Multiple-Processor Scheduling
- Multiple processors  ==>  divide the load among them
- More complex than single-CPU scheduling
- How to divide the load?
  - Asymmetric multiprocessing: one master processor does the scheduling for the others
  - Symmetric multiprocessing (SMP): each processor runs its own scheduler, with either one common ready queue for all processors or one ready queue per processor
- Windows XP, Linux, Solaris, and Mac OS X support SMP

SMP Issues
- Processor affinity:
  - When a process runs on a processor, some of its data is cached in that processor's cache
  - If the process migrates to another processor, the new processor's cache has to be re-populated and the old processor's cache has to be invalidated  ==>  performance penalty
- Load balancing:
  - One processor has too much load while another is idle
  - Balance the load using:
    - Push migration: a specific task periodically checks the load on all processors and evenly distributes it by moving (pushing) tasks
    - Pull migration: an idle processor pulls a waiting task from a busy processor
  - Some systems (e.g., Linux) implement both
- Tradeoff between load balancing and processor affinity: what would you do?
  - Perhaps invoke the load balancer only when the imbalance exceeds a threshold

Real-Time Scheduling
- Hard real-time systems: a task must be finished within a deadline (e.g., control of a spacecraft)
- Soft real-time systems: a task is given higher priority over others (e.g., multimedia systems)

Operating System Examples
- Windows XP scheduling
- Linux scheduling

Windows XP Scheduler
- Priority-based, preemptive scheduler
- The highest-priority thread will always run
- 32 priority levels, each with a separate queue
- The scheduler traverses the queues from highest to lowest until it finds a thread that is ready to run
- Priorities are divided into classes; each class has several relative priorities

Windows XP Scheduler (cont'd)
- Real-time class (fixed): levels 16 to 31
- Other classes (variable): levels 1 to 15
- A thread's priority may change (decrease or increase):
  - Priority decreases after the thread's time quantum runs out, but never below the base (normal) value of its class; this limits the CPU consumption of CPU-bound threads
  - Priority increases after a thread is released from a wait operation: a bigger increase if it was waiting for the mouse or keyboard, a moderate increase if it was waiting for disk
  - The active window also gets a priority boost
  - This yields good response time

Linux Scheduler
- Priority-based, preemptive scheduler with two separate priority ranges:
  - Real-time: 0 to 99
  - Nice: 100 to 140 (corresponding to nice values -20 to 19)
- Higher-priority tasks get larger quanta (unlike Windows XP and Solaris)

Linux Scheduler (cont'd)
- A task is initially assigned a time slice (quantum)
- The runqueue has two arrays: active and expired
- A runnable task is eligible for the CPU as long as it has time left in its time slice
- When its time slice runs out, the task is moved to the expired array
- When there are no tasks left in the active array, the expired array becomes the active array and vice versa (a swap of pointers)
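A toy C sketch of the active/expired idea described above (not the actual kernel code): each task is parked in the expired array when its slice is used up, and when the active array empties the two array pointers are simply swapped.

#include <stdio.h>

#define MAXTASK 8

struct rq { int task[MAXTASK]; int count; };

/* Toy model of the active/expired runqueues: a task whose slice runs out
 * goes to "expired"; when "active" is empty, swap the two pointers. */
int main(void) {
    struct rq a = { {1, 2, 3}, 3 }, b = { {0}, 0 };
    struct rq *active = &a, *expired = &b;

    for (int round = 0; round < 2; round++) {
        while (active->count > 0) {
            int t = active->task[--active->count];   /* pick a runnable task */
            printf("run task %d until its slice expires\n", t);
            expired->task[expired->count++] = t;     /* slice used up: park it */
        }
        struct rq *tmp = active; active = expired; expired = tmp;  /* O(1) swap */
        printf("active array empty: swapped active and expired\n");
    }
    return 0;
}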
Algorithm Evaluation
- Deterministic modeling: take a particular predetermined workload and compute the performance of each algorithm for that workload; not general
- Queueing models: use queueing theory to analyze the algorithms; requires many (unrealistic) assumptions to make the analysis tractable
- Simulation: build a simulator and test it with:
  - a synthetic workload (e.g., generated randomly), or
  - traces collected from running systems
- Implementation: code it up and test it!

Summary
- Process execution is a cycle of CPU bursts and I/O bursts
- CPU burst lengths: many short bursts and few long ones
- The scheduler selects one process from the ready queue; the dispatcher performs the switch
- Scheduling criteria (usually conflicting): CPU utilization, waiting time, response time, throughput, ...
- Scheduling algorithms: FCFS, SJF, Priority, RR, multilevel queues, ...
- Multiprocessor scheduling: processor affinity vs. load balancing
- Evaluation of algorithms: modeling, simulation, implementation
- Examples: Windows XP, Linux