In the name of Allah, the Most Gracious, the Most Merciful

CPCS361 – Operating Systems I
Chapter 5: CPU Scheduling

Adopted from and based on the textbook: Operating System Concepts – 8th Edition, by Silberschatz, Galvin and Gagne. Updated and modified by Dr. Abdullah Basuhail, CSD, FCIT, KAU, 1431H.

Chapter 5: CPU Scheduling
- Basic Concepts
- Scheduling Criteria
- Scheduling Algorithms
- Thread Scheduling
- Multiple-Processor Scheduling
- Operating Systems Examples
- Algorithm Evaluation

Objectives
- To introduce CPU scheduling, which is the basis for multiprogrammed operating systems
- To describe various CPU-scheduling algorithms
- To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system

5.1 Basic Concepts
- Maximum CPU utilization is obtained with multiprogramming
- The objective of multiprogramming is to have some process running at all times
- The idea is to execute a process until it must wait, typically for the completion of some I/O request
- In a simple computer system, the CPU just sits idle during this wait; all the waiting time is wasted, and no useful work is accomplished
- With multiprogramming, we try to use this time productively: several processes are kept in memory at one time, and when one process has to wait, the OS takes the CPU away from that process and gives it to another process
- This pattern continues; scheduling of this kind is a fundamental OS function and is central to OS design

Basic Concepts
- The success of CPU scheduling depends on an observed property of processes
- CPU–I/O burst cycle: process execution consists of a cycle of CPU execution and I/O wait
- Processes alternate between these two states: CPU burst, I/O burst, CPU burst, I/O burst, …, ending with a final CPU burst (a system request to terminate execution)
- The durations of CPU bursts have been measured extensively; although they vary greatly from process to process and from computer to computer, they tend to have a frequency curve similar to the CPU-burst distribution (histogram)
- The curve is generally characterized as exponential, with a large number of short CPU bursts and a small number of long CPU bursts
  - I/O-bound program: many short CPU bursts
  - CPU-bound program: a few long CPU bursts
- This distribution is important in the selection of an appropriate CPU-scheduling algorithm

[Figure: Alternating Sequence of CPU and I/O Bursts]

[Figure: Histogram of CPU-burst Times]
CPU Scheduler
- Selects from among the processes in memory (the ready queue) that are ready to execute, and allocates the CPU to one of them
- Selection is carried out by the short-term scheduler (CPU scheduler)
- CPU-scheduling decisions may take place when a process:
  1. Switches from the running to the waiting state (e.g., an I/O request) – nonpreemptive
  2. Switches from the running to the ready state (e.g., an interrupt) – preemptive
  3. Switches from the waiting to the ready state (e.g., I/O completion) – preemptive
  4. Terminates – nonpreemptive
- Scheduling only under circumstances 1 and 4 is nonpreemptive (cooperative)
  - Once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state
  - e.g., Windows 3.x, Windows 95
  - This is the only method that can be used on certain hardware platforms, because it does not require the special hardware (e.g., a timer) needed for preemptive scheduling
- Scheduling under circumstances 2 and 3 is preemptive
  - Unfortunately, preemption incurs a cost associated with access to shared data
  - Consider two processes that share data: while one is updating the data, it is preempted so that the second process can run; the second process then tries to read the data, which are in an inconsistent state
  - A new mechanism to coordinate access to the shared data is needed
- Preemption also affects the design of the OS kernel
  - During the processing of a system call, the kernel may be busy with an activity on behalf of a process; such activity may involve changing important kernel data (for instance, I/O queues)
  - What happens if the process is preempted in the middle of these changes and the kernel (or a device driver) needs to read or modify the same structure? Chaos ensues
  - Certain OSs (including UNIX) deal with this problem by waiting either for a system call to complete or for an I/O block to take place before doing a context switch

Dispatcher
- The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
  - Switching context
  - Switching to user mode
  - Jumping to the proper location in the user program to restart that program
- Dispatch latency – the time it takes for the dispatcher to stop one process and start another running

5.2 Scheduling Criteria
- Different CPU-scheduling algorithms have different properties
- Many criteria have been suggested for comparing scheduling algorithms
- Which characteristics are used for comparison can make a substantial difference in which algorithm is judged to be best
- The criteria include the following:
  - CPU utilization – keep the CPU as busy as possible; conceptually ranges from 0 to 100%; in a real system it should range from about 40% (lightly loaded system) to 90% (heavily loaded system)
  - Throughput – the number of processes that complete their execution per time unit
  - Turnaround time – the amount of time to execute a particular process; the interval from the time of submission to the time of completion of a process; the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O
  - Waiting time – the amount of time a process has been waiting in the ready queue (the sum of the periods spent waiting in the ready queue)
  - Response time – the amount of time from when a request was submitted until the first response is produced, not the time it takes to output that response (important for time-sharing environments)

Scheduling Algorithm Optimization Criteria
- It is generally desirable to:
  - Maximize: CPU utilization, throughput
  - Minimize: turnaround time, waiting time, response time
- Under some circumstances, it is desirable to optimize the minimum or maximum values rather than the average; e.g.,
to guarantee that all users get good service, we may want to minimize the maximum response time
- A system with a reasonable and predictable response time may be considered more desirable than a system that is faster on average but highly variable

5.3 Scheduling Algorithms
- CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU
- There are many different CPU-scheduling algorithms:
  - First-Come, First-Served (FCFS)
  - Shortest-Job-First (SJF)
  - Priority
  - Round-Robin (RR)
  - Multilevel Queue (MLQ)
  - Multilevel Feedback Queue (MLFQ)

First-Come, First-Served (FCFS) Scheduling
  Process   Burst Time
  P1        24
  P2        3
  P3        3
- Suppose that the processes arrive in the order: P1, P2, P3
- The Gantt chart for the schedule is:
  | P1 (0–24) | P2 (24–27) | P3 (27–30) |
- Waiting times: P1 = 0; P2 = 24; P3 = 27
- Average waiting time: (0 + 24 + 27) / 3 = 17

FCFS Scheduling (Cont.)
- Suppose instead that the processes arrive in the order: P2, P3, P1
- The Gantt chart for the schedule is:
  | P2 (0–3) | P3 (3–6) | P1 (6–30) |
- Waiting times: P1 = 6; P2 = 0; P3 = 3
- Average waiting time: (6 + 0 + 3) / 3 = 3
- Much better than the previous case
- Convoy effect: short processes wait behind a long process
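The FCFS arithmetic above can be reproduced with a short sketch (not from the slides; it assumes all processes arrive at time 0, so each process simply waits for all earlier arrivals to finish):

```python
# FCFS waiting times, assuming every process arrives at time 0.
def fcfs_waiting_times(bursts):
    """Return the waiting time of each process, in arrival order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until all earlier ones finish
        clock += burst
    return waits

# Order P1, P2, P3 with bursts 24, 3, 3:
print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27] -> average 17
# Order P2, P3, P1:
print(fcfs_waiting_times([3, 3, 24]))  # [0, 3, 6] -> average 3
```

Running it for both arrival orders shows how strongly the average depends on whether the long burst goes first.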
FCFS Scheduling (Cont.)
- The average waiting time under the FCFS policy is generally not minimal, and it may vary substantially if the processes' CPU-burst times vary greatly
- The FCFS scheduling algorithm is nonpreemptive
- FCFS is particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals; it would be disastrous to allow one process to keep the CPU for an extended period

Shortest-Job-First (SJF) Scheduling
- Associate with each process the length of its next CPU burst; use these lengths to schedule the process with the shortest next CPU burst
- If the next CPU bursts of two processes are the same, FCFS is used to break the tie
- SJF is optimal – it gives the minimum average waiting time for a given set of processes

Example of SJF
  Process   Burst Time
  P1        6
  P2        8
  P3        7
  P4        3
- The SJF scheduling chart is:
  | P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |
- Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
- With FCFS, the average waiting time would be (0 + 6 + 14 + 21) / 4 = 10.25

Determining the Length of the Next CPU Burst
- The real difficulty with the SJF algorithm is knowing the length of the next CPU burst
- For long-term scheduling, we can use as the length the process time limit that a user specifies when submitting the job; SJF is therefore used frequently in long-term scheduling
- Although the SJF algorithm is optimal, it cannot be implemented at the level of short-term scheduling.
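The SJF example above can be sketched in a few lines (not from the slides; it assumes all processes are available at time 0 and that burst lengths are known in advance, which is exactly the assumption short-term scheduling cannot make):

```python
# Nonpreemptive SJF with all processes available at time 0:
# always run the shortest remaining job next.
def sjf_waiting_times(bursts):
    """bursts: {name: burst length}. Returns {name: waiting time}."""
    clock, waits = 0, {}
    # Ties fall back to FCFS (insertion) order because Python's sort is stable.
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = clock
        clock += burst
    return waits

waits = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits)                              # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(waits.values()) / len(waits))   # 7.0
```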
- There is no way to know the length of the next CPU burst at that level
- One approach is to approximate SJF scheduling: we may not know the length of the next CPU burst, but we may be able to predict its value
- We expect that the next CPU burst will be similar in length to the previous ones
- Thus, by computing an approximation of the length of the next CPU burst, we can pick the process with the shortest predicted CPU burst
- The estimate can be computed from the lengths of previous CPU bursts, using exponential averaging:
  1. t_n = actual length of the n-th CPU burst
  2. τ_{n+1} = predicted value for the next CPU burst
  3. α, 0 ≤ α ≤ 1
  4. Define: τ_{n+1} = α·t_n + (1 − α)·τ_n
- Example: assume α = 1/2, τ_0 = 10, and t_0 = 6
  - Then τ_1 = (1/2)(6) + (1/2)(10) = 8; and with t_1 = 4, τ_2 = (1/2)(4) + (1/2)(8) = 6

[Figure: Prediction of the Length of the Next CPU Burst]
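The exponential-averaging recurrence can be traced with a minimal sketch (not from the slides), reproducing the α = 1/2, τ_0 = 10 example:

```python
# Exponential averaging of CPU-burst lengths:
# tau_next = alpha * t + (1 - alpha) * tau
def predict_bursts(actual_bursts, alpha=0.5, tau0=10.0):
    """Yield the prediction in force before each actual burst is observed."""
    tau, predictions = tau0, []
    for t in actual_bursts:
        predictions.append(tau)
        tau = alpha * t + (1 - alpha) * tau  # fold in the observed burst
    predictions.append(tau)  # prediction for the next, unseen burst
    return predictions

# The slides' example: alpha = 1/2, tau_0 = 10, t_0 = 6, t_1 = 4
print(predict_bursts([6, 4]))  # [10.0, 8.0, 6.0]
```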
Examples of Exponential Averaging
- α = 0: τ_{n+1} = τ_n – recent history does not count
- α = 1: τ_{n+1} = t_n – only the actual last CPU burst counts
- If we expand the formula, we get:
  τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + … + (1 − α)^j·α·t_{n−j} + … + (1 − α)^{n+1}·τ_0
- Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor

Priority Scheduling
- A priority number (an integer) is associated with each process
- The CPU is allocated to the process with the highest priority (here, the smallest integer means the highest priority)
  - Can be preemptive or nonpreemptive
- Equal-priority processes are scheduled in FCFS order
- SJF is a special case of priority scheduling in which the priority is the inverse of the predicted next CPU-burst time (the larger the CPU burst, the lower the priority)
- Priority can be defined:
  - Internally: using some measurable quantity or quantities to compute the priority of a process, e.g.:
    - Time limits
    - Memory requirements
    - Number of open files
    - Ratio of average I/O burst to average CPU burst
  - Externally: set by criteria outside the OS, such as:
    - The importance of the process
    - The type and amount of funds being paid for computer use
    - The department sponsoring the work
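The basic mechanism can be sketched as follows (not from the slides; a nonpreemptive variant with all processes available at time 0, using the five-process example that appears in these slides):

```python
# Nonpreemptive priority scheduling: smaller number = higher priority.
# Ties would fall back to FCFS order because Python's sort is stable.
def priority_waiting_times(procs):
    """procs: list of (name, burst, priority). Returns {name: waiting time}."""
    clock, waits = 0, {}
    for name, burst, _prio in sorted(procs, key=lambda p: p[2]):
        waits[name] = clock
        clock += burst
    return waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
waits = priority_waiting_times(procs)
print(waits)                              # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(waits.values()) / len(waits))   # 8.2
```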
Priority Scheduling (Cont.)
- When a process arrives at the ready queue, its priority is compared with the priority of the currently running process
  - Preemptive: preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process
  - Nonpreemptive: simply put the new process at the head of the ready queue
- Problem: starvation (indefinite blocking) – low-priority processes may never execute in a heavily loaded computer system
- Solution: aging – a technique of gradually increasing the priority of processes that wait in the system for a long time; that is, as time progresses, the priority of a waiting process is increased
  - For example, if we increase the priority of a waiting process by 1 every 15 minutes, it would take no more than 32 hours for a priority-127 process to age to a priority-0 process

Example of Priority Scheduling
  Process   Burst Time   Priority
  P1        10           3
  P2        1            1
  P3        2            4
  P4        1            5
  P5        5            2
- The Gantt chart is:
  | P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19) |
- Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms

Round Robin (RR)
- Designed for time-sharing systems
- Similar to FCFS scheduling, but preemption is added to switch between processes
- Each process gets a small unit of CPU time (a time quantum or time slice), usually 10–100 milliseconds; after this time has elapsed, the process is preempted and added to the end of the ready queue
- The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to one time quantum
- The scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process; two cases arise:
  - CPU burst ≤ 1 time quantum: the process itself releases the CPU voluntarily; the scheduler then proceeds to the next process in the ready queue
  - CPU burst > 1 time quantum: the timer goes off and causes an interrupt, a context switch is executed, the process is put at the tail of the ready queue, and the scheduler selects the next process in the ready queue

Example of RR with Time Quantum = 4
  Process   Burst Time
  P1        24
  P2        3
  P3        3
- The Gantt chart is:
  | P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30) |
- Average waiting time = (6 + 4 + 7) / 3 = 17/3 ≈ 5.66 ms
- Typically, RR gives a higher average turnaround time than SJF, but better response

Round Robin (RR) (Cont.)
- No process is allocated the CPU for more than one time quantum in a row unless it is the only runnable process
- If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n − 1)·q time units
- Example: n = 5, q = 20 ms – each process will get up to 20 ms every 100 ms
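The quantum-of-4 example above can be simulated with a short sketch (not from the slides; context-switch overhead is ignored and all processes arrive at time 0):

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """bursts: {name: burst}. Waiting time = completion time - burst,
    since every process here arrives at time 0."""
    queue = deque(bursts.items())
    clock, waits = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back to the tail
        else:
            waits[name] = clock - bursts[name]     # finished: record waiting time
    return waits

waits = rr_waiting_times({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(waits)                              # {'P2': 4, 'P3': 7, 'P1': 6}
print(sum(waits.values()) / len(waits))   # 17/3, about 5.66
```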
Round Robin (RR) – Performance
- Performance depends heavily on the size of the time quantum q:
  - q very large: RR degenerates into the FIFO (FCFS) policy
  - q very small: q must still be large with respect to the context-switch time, otherwise the overhead is too high
- Example: a process with a CPU burst of 10 time units
  - If q = 12 time units, the process finishes within one quantum, with no context-switch overhead
  - If q = 6, the process requires 2 quanta, resulting in one context switch
  - If q = 1, then 9 context switches will occur, slowing the execution of the process

[Figure: Time Quantum and Context-Switch Time]

- The time quantum should be large with respect to the context-switch time: if the context-switch time is about 10% of the time quantum, then about 10% of the CPU time will be spent on context switching
- In practice, modern systems have time quanta ranging from 10 to 100 milliseconds; the time required for a context switch is typically less than 10 microseconds, a small fraction of the time quantum

Round Robin (RR) – Turnaround Time
- Turnaround time also depends on the size of the time quantum; the average turnaround time of a set of processes does not necessarily improve as the quantum increases
- Example: 3 processes of 10 time units each
  - q = 1: average turnaround time = (28 + 29 + 30) / 3 = 29
  - q = 10: average turnaround time = (10 + 20 + 30) / 3 = 20
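The three-process turnaround example can be checked with a minimal sketch (not from the slides; context-switch overhead is ignored and all processes arrive at time 0, so turnaround time equals completion time):

```python
from collections import deque

def rr_avg_turnaround(bursts, quantum):
    """Average RR turnaround time for processes that all arrive at time 0."""
    queue = deque((b, b) for b in bursts)   # (remaining, total burst)
    clock, total_turnaround = 0, 0
    while queue:
        remaining, burst = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((remaining - run, burst))
        else:
            total_turnaround += clock       # completion time = turnaround time
    return total_turnaround / len(bursts)

print(rr_avg_turnaround([10, 10, 10], 1))   # 29.0
print(rr_avg_turnaround([10, 10, 10], 10))  # 20.0
```

Note that the smaller quantum gives the worse average turnaround here, even with zero context-switch cost.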
Round Robin (RR) (Cont.)
- Average turnaround time can be improved if most processes finish their next CPU burst within a single time quantum
- Although the time quantum should be large compared with the context-switch time, it should not be too large; if the time quantum is too large, RR scheduling degenerates into the FCFS policy
- Rule of thumb: 80% of the CPU bursts should be shorter than the time quantum

[Figure: How Turnaround Time Varies with the Time Quantum]

Multilevel Queue (MLQ)
- Processes are classified into different groups; e.g., the ready queue is partitioned into separate queues:
  - Foreground (interactive)
  - Background (batch)
- These groups have different response-time requirements and so may have different scheduling needs; foreground processes may have (externally defined) priority over background processes
- MLQ scheduling partitions the ready queue into several separate queues
- Processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type
- Each queue has its own scheduling algorithm, e.g.:
  - Foreground – RR
  - Background – FCFS
- Scheduling must also be done between the queues:
  - Fixed-priority preemptive scheduling (i.e., serve all processes from the foreground queue, then those from the background queue); this carries a possibility of starvation
  - With this policy, if an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted
  - Time slicing: each queue gets a certain portion of the CPU time, which it can then schedule among its own processes, e.g.:
    - 80% to the foreground queue, scheduled with RR
    - 20% to the background queue, scheduled with FCFS

[Figure: Multilevel Queue Scheduling]

Multilevel Feedback Queue (MLFQ)
- In MLQ, a process is permanently assigned to a queue when it enters the system; processes do not move from one queue to another
  - Advantage: low scheduling overhead
  - Disadvantage: inflexible
- In MLFQ, a process can move between the various queues; aging can be implemented this way
- The idea is to separate processes according to the characteristics of their CPU bursts:
  - A process that uses too much CPU time is moved to a lower-priority queue; this scheme leaves I/O-bound and interactive processes in the higher-priority queues
  - A process that waits too long in a lower-priority queue may be moved to a higher-priority queue
- A multilevel-feedback-queue scheduler is defined by the following parameters:
  - The number of queues
  - The scheduling algorithm for each queue
  - The method used to determine when to upgrade a process
  - The method used to determine when to demote a process
  - The method used to determine which queue a process will enter when it needs service
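The demotion behavior can be sketched for a single CPU-bound job (not from the slides; it assumes a three-queue configuration with Q0: RR q=8, Q1: RR q=16, Q2: FCFS, no competing jobs, and no I/O):

```python
QUANTA = [8, 16, None]  # quantum per queue level; None = FCFS, run to completion

def mlfq_trace(burst):
    """Return [(queue level, time run)] for one job with the given CPU burst."""
    trace, level = [], 0
    while burst > 0:
        q = QUANTA[level]
        run = burst if q is None else min(q, burst)
        trace.append((level, run))
        burst -= run
        if burst > 0 and level < len(QUANTA) - 1:
            level += 1          # used its full quantum: demote one level
    return trace

print(mlfq_trace(30))  # [(0, 8), (1, 16), (2, 6)]
print(mlfq_trace(5))   # [(0, 5)] - short bursts stay in the top queue
```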
Example of a Multilevel Feedback Queue
- Three queues:
  - Q0 – RR with a time quantum of 8 milliseconds
  - Q1 – RR with a time quantum of 16 milliseconds
  - Q2 – FCFS
- Scheduling:
  - A new job enters queue Q0, which is served in FCFS order; when it gains the CPU, the job receives 8 milliseconds; if it does not finish within 8 milliseconds, it is moved to queue Q1
  - In Q1, the job is again served in FCFS order and receives 16 additional milliseconds; if it still does not complete, it is preempted and moved to queue Q2

[Figure: Multilevel Feedback Queues]

5.4 Multiple-Processor Scheduling
- With multiple CPUs, load sharing becomes possible, but CPU scheduling also becomes more complex
- We assume homogeneous (identical) processors within the multiprocessor
- Asymmetric multiprocessing – only one processor accesses the system data structures, and the other processors execute only user code; this reduces the need for data sharing
- Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes may be in a common ready queue, or each processor may have its own private queue of ready processes
Multiple-Processor Scheduling – Processor Affinity
- The data most recently accessed by a process populate the cache of the processor it runs on; as a result, successive memory accesses by the process are often satisfied from cache memory
- If the process migrates to another processor, the contents of the cache must be invalidated on the processor being migrated from, and the cache of the processor being migrated to must be repopulated
- Because of the high cost of invalidating and repopulating caches, most SMP systems try to avoid migrating processes from one processor to another and instead attempt to keep a process running on the same processor; this is known as processor affinity
- A process has affinity for the processor on which it is currently running
  - Soft affinity: the OS has a policy of attempting to keep a process running on the same processor, but does not guarantee that it will do so
  - Hard affinity: the system provides system calls that allow a process to specify that it is not to migrate to other processors; e.g., Linux

Multiple-Processor Scheduling – Load Balancing
- On SMP systems, it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor
- Otherwise, one or more processors may sit idle while other processors have high workloads and queues of processes awaiting the CPU
- Load balancing attempts to keep the workload evenly distributed across all processors in an SMP system; it is not needed on systems with a common run queue
- Two approaches:
  - Push migration: a specific task periodically checks the load on each processor; if it finds an imbalance, it evenly distributes the load by moving (pushing) processes from overloaded processors to idle or less busy ones
  - Pull migration: an idle processor pulls a waiting task from a busy processor
- Load balancing often counteracts the benefits of processor affinity; there is no absolute rule about which policy is best
  - In some systems, an idle processor always pulls a process from a non-idle processor; in other systems, processes are moved only if the imbalance exceeds a certain threshold

5.7 Algorithm Evaluation
- Deterministic modeling: takes a particular predetermined workload and computes the performance of each algorithm for that workload
- Queueing models: the processes that are run vary from day to day, so there is no static set of processes to use for deterministic modeling
  - Solution: use the distributions of CPU bursts, I/O bursts, and arrival times to compute throughput, utilization, waiting time, and so on for most algorithms
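Deterministic modeling, as described above, can be sketched by fixing one workload and comparing two algorithms on it (not from the slides; the burst values are a sample workload chosen for illustration, with all processes available at time 0):

```python
# Average waiting time when processes run in the given order.
def avg_wait(bursts):
    clock, total = 0, 0
    for b in bursts:
        total += clock   # this process waited for everything before it
        clock += b
    return total / len(bursts)

workload = [10, 29, 3, 7, 12]       # one predetermined set of CPU bursts
print(avg_wait(workload))           # FCFS order: 28.0
print(avg_wait(sorted(workload)))   # SJF order:  13.0
```

The method's strength is that it gives exact numbers for the given workload; its weakness is that the conclusion holds only for that workload.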
5.7 Algorithm Evaluation (Cont.)
- Simulations: running simulations involves programming a model of the computer system
  - The simulator modifies the system state to reflect the activities of the devices, the processes, and the scheduler
  - As the simulation executes, statistics that indicate algorithm performance are gathered and printed

[Figure: Evaluation of CPU Schedulers by Simulation]

End of Chapter 5