Chapter 6: CPU Scheduling
Operating System Concepts – 9th Edition, Silberschatz, Galvin and Gagne ©2013
Modified by Dr. Neerja Mhaskar for CS 3SH3

Basic Concepts
- The OS schedules almost all resources available to the system. The CPU being the primary resource, CPU scheduling is central to OS design.
- Processes alternate between two states in a continuing cycle, the CPU-I/O burst cycle:
  - CPU execution (CPU burst)
  - I/O wait (I/O burst)

CPU Scheduler
- The short-term scheduler (also called the CPU scheduler) selects from among the processes in the ready queue and allocates the CPU to one of them.
  - The queue may be ordered in various ways; its nodes are PCBs (process control blocks).
- CPU scheduling decisions take place when a process:
  1. Switches from the running to the waiting state
  2. Switches from the running to the ready state
  3. Switches from the waiting to the ready state
  4. Terminates
- When scheduling takes place only under conditions 1 and 4, the scheme is nonpreemptive: a process assigned to the CPU keeps it until it terminates or changes its state to waiting.
- When scheduling can also take place under conditions 2 and 3, the scheme is preemptive: a process can be removed from the CPU.
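The four decision points above can be captured in a short sketch (Python; the transition names are hypothetical labels, not OS API names). A nonpreemptive scheduler makes a decision only on transitions 1 and 4; a preemptive scheduler may act on all four.

```python
# Scheduling decision points, as listed above. Transitions 1 (running -> waiting)
# and 4 (termination) force a scheduling decision even under nonpreemptive
# scheduling; transitions 2 and 3 trigger one only under preemptive scheduling.
NONPREEMPTIVE_POINTS = {"running->waiting", "terminated"}
ALL_POINTS = NONPREEMPTIVE_POINTS | {"running->ready", "waiting->ready"}

def must_schedule(transition: str, preemptive: bool) -> bool:
    """Return True if the CPU scheduler runs at this transition."""
    if preemptive:
        return transition in ALL_POINTS
    return transition in NONPREEMPTIVE_POINTS
```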
Dispatcher
- The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
  - switching context
  - switching to user mode
  - jumping to the proper location in the user program to restart that program
- Dispatch latency: the time it takes the dispatcher to stop one process and start another running.

Scheduling Criteria
- CPU utilization: keep the CPU as busy as possible.
- Throughput: number of processes that complete their execution per time unit.
- Turnaround time: amount of time to execute a particular process.
- Waiting time: amount of time a process has been waiting in the ready queue.
- Response time: amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments).

Scheduling Algorithm Optimization Criteria
- Maximize CPU utilization and throughput.
- Minimize turnaround time, waiting time, and response time.

First-Come, First-Served (FCFS) Scheduling
- The process that requests the CPU first is allocated the CPU first.
- The FCFS policy can be implemented with a FIFO queue.
- The FCFS scheduling algorithm is nonpreemptive.
- Disadvantage: the average waiting time is often long.
- Example: processes P1, P2, P3 have CPU burst times 24, 3, 3 respectively, and arrive in the order P1, P2, P3.
  Gantt chart: P1 [0-24] | P2 [24-27] | P3 [27-30]
  Waiting times: P1 = 0, P2 = 24, P3 = 27
  Average waiting time: (0 + 24 + 27)/3 = 17

FCFS Scheduling (Cont.)
- Suppose instead that the processes arrive in the order P2, P3, P1.
  Gantt chart: P2 [0-3] | P3 [3-6] | P1 [6-30]
  Waiting times: P1 = 6, P2 = 0, P3 = 3
  Average waiting time: (6 + 0 + 3)/3 = 3, much better than the previous case.
- Convoy effect: short processes stuck behind a long process; results in lower CPU and device utilization.

Shortest-Job-First (SJF) Scheduling
- Associated with each process is the length of its next CPU burst; these lengths are used to schedule the process with the shortest next burst.
- SJF is optimal: it gives the minimum average waiting time for a given set of processes.
- The difficulty is knowing the length of the next CPU request, so SJF is generally not implemented at the level of short-term CPU scheduling. The length of the next CPU burst can, however, be estimated by exponential averaging.
- SJF can be either preemptive or nonpreemptive. The preemptive version is called shortest-remaining-time-first.

Example of SJF
  Process  Arrival Time  CPU Burst Time
  P1       0.0           6
  P2       2.0           8
  P3       4.0           7
  P4       5.0           3
  SJF scheduling chart: P4 [0-3] | P1 [3-9] | P3 [9-16] | P2 [16-24]
  Average waiting time = (3 + 9 + 16 + 0)/4 = 7

Example of Shortest-Remaining-Time-First
- Now we add varying arrival times and preemption to the analysis.
  Process  Arrival Time  Burst Time
  P1       0             8
  P2       1             4
  P3       2             9
  P4       3             5
  Preemptive SJF Gantt chart: P1 [0-1] | P2 [1-5] | P4 [5-10] | P1 [10-17] | P3 [17-26]
  Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 ms

Priority Scheduling
- A priority number (an integer) is associated with each process. Processes of equal priority are scheduled in FCFS order.
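The FCFS and SJF averages above can be reproduced with a minimal sketch (Python; all processes are assumed available at time 0, as the charts imply, so arrival times are ignored):

```python
# Minimal sketch: average waiting time under a nonpreemptive schedule,
# assuming every process is available at time 0.
def avg_waiting(bursts, order):
    """bursts: {name: burst length}; order: execution order.
    A process's waiting time is the sum of the bursts run before it."""
    clock, total_wait = 0, 0
    for name in order:
        total_wait += clock          # this process waited until `clock`
        clock += bursts[name]
    return total_wait / len(bursts)

bursts = {"P1": 24, "P2": 3, "P3": 3}
print(avg_waiting(bursts, ["P1", "P2", "P3"]))   # FCFS order: 17.0
print(avg_waiting(bursts, ["P2", "P3", "P1"]))   # reversed arrivals: 3.0

sjf = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
order = sorted(sjf, key=sjf.get)                 # SJF: shortest burst first
print(avg_waiting(sjf, order))                   # 7.0
```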
- The CPU is allocated to the process with the highest priority (here, the smallest integer is the highest priority).
- Priority scheduling can be preemptive or nonpreemptive.
- SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time.
- Problem: starvation. Low-priority processes may never execute.
- Solution: aging. As time progresses, increase the priority of waiting processes.

Example of Priority Scheduling
  Process  Burst Time (ms)  Priority
  P1       10               3
  P2       1                1
  P3       2                4
  P4       1                5
  P5       5                2
  Priority scheduling Gantt chart: P2 [0-1] | P5 [1-6] | P1 [6-16] | P3 [16-18] | P4 [18-19]
  Average waiting time = (0 + 1 + 6 + 16 + 18)/5 = 8.2 ms

Round Robin (RR)
- Each process gets a small unit of CPU time (a time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
- If there are n processes in the ready queue and the time quantum is q, each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
- RR is preemptive: a timer interrupts every quantum to schedule the next process.
- Performance:
  - If q is large, RR behaves like FIFO.
  - If q is small, q must still be large relative to the context-switch time, otherwise the overhead is too high.

Example of RR with Time Quantum = 4
  Process  Burst Time
  P1       24
  P2       3
  P3       3
  Gantt chart: P1 [0-4] | P2 [4-7] | P3 [7-10] | P1 [10-14] | P1 [14-18] | P1 [18-22] | P1 [22-26] | P1 [26-30]
- RR typically gives higher average turnaround time than SJF, but better response time.
- q should be large compared to context-switch time: q is usually 10 ms to 100 ms, while a context switch takes less than 10 microseconds.

Multilevel Queue and Multilevel Feedback Queue
- Multilevel queue scheduling: the ready queue is partitioned into separate queues, and each process is permanently assigned to one queue.
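The round-robin example above can likewise be sketched with a minimal simulator (Python; ignores context-switch overhead and assumes all processes arrive at time 0):

```python
from collections import deque

# Minimal round-robin sketch with time quantum q, ignoring context-switch
# cost. Returns the completion time of each process.
def round_robin(bursts, q):
    remaining = dict(bursts)
    ready = deque(bursts)            # FIFO ready queue, in arrival order
    clock, done = 0, {}
    while ready:
        name = ready.popleft()
        run = min(q, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            done[name] = clock
        else:
            ready.append(name)       # preempted: back to the tail

    return done

done = round_robin({"P1": 24, "P2": 3, "P3": 3}, q=4)
print(done)   # {'P2': 7, 'P3': 10, 'P1': 30}
```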
- Each queue has its own scheduling algorithm.
- Scheduling must also be done between the queues:
  - Fixed-priority preemptive scheduling: serve processes of the highest-priority queue first. Possibility of starvation.
  - Time slicing: each queue gets a certain amount of CPU time, which it schedules among its processes.
- Disadvantage: since processes are permanently assigned to a given queue, the scheme is inflexible.
- In contrast, in multilevel feedback queue scheduling a process can move between the queues. A process in a low-priority queue can be moved to a high-priority queue; aging can be implemented this way.

Thread Scheduling
- When threads are supported, threads are scheduled, not processes.
- To run on a CPU, user-level threads must be mapped to an associated kernel-level thread.
- POSIX Pthreads allows setting scheduling parameters during thread creation.

Multiple-Processor Scheduling
- CPU scheduling is more complex when multiple CPUs are available.
- Asymmetric multiprocessing: only one processor accesses the system data structures, alleviating the need for data sharing.
- Symmetric multiprocessing (SMP): each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Currently the most common approach.

Issues with SMP
- A process's cached data is invalidated when the process migrates to another processor, at a high cost.
- Processor affinity: a process has an affinity for the processor on which it is currently running, and the scheduler schedules processes based on their affinity.
- Load balancing attempts to keep the workload evenly distributed:
  - Push migration: a periodic task checks the load on each processor and, if it finds an imbalance, pushes tasks from overloaded CPUs to other CPUs.
  - Pull migration: an idle processor pulls a waiting task from a busy processor.
- Load balancing often counteracts the benefits of processor affinity.

Real-Time CPU Scheduling
- Real-time operating systems must respond to a real-time process as soon as it requires the CPU.
- Soft real-time systems: no guarantee as to when a critical real-time process will be scheduled.
- Hard real-time systems: a task must be serviced by its deadline.
- Real-time scheduling therefore requires a preemptive, priority-based scheduling algorithm. This alone guarantees only soft real-time functionality; for hard real time, the scheduler must also provide the ability to meet deadlines.

Priority-Based Scheduling
- Real-time processes have new characteristics: periodic processes require the CPU at constant intervals.
- Each periodic process has a processing time t, a deadline d, and a period p, with 0 ≤ t ≤ d ≤ p. The rate of a periodic task is 1/p.
- Based on these characteristics, schedulers can assign priorities according to a process's deadline or rate requirements.

Rate-Monotonic Scheduling
- Schedules periodic tasks using a static priority policy with preemption.
- A priority is assigned based on the inverse of the task's period: shorter periods get higher priority, longer periods get lower priority.
- Rate-monotonic scheduling is considered optimal among static-priority algorithms: if a set of processes cannot be scheduled by this algorithm, it cannot be scheduled by any other algorithm that assigns static priorities.
- For a process Pi with period pi and processing time ti, the CPU utilization of Pi is ti/pi.
- Total CPU utilization is bounded, and it is not always possible to fully maximize CPU utilization.
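The utilization bound mentioned above is the classic Liu and Layland bound for rate-monotonic scheduling: n tasks are guaranteed schedulable if total utilization does not exceed n(2^(1/n) - 1). A quick check on a hypothetical two-task set:

```python
# Rate-monotonic schedulability check via the Liu and Layland bound:
# n periodic tasks are guaranteed schedulable if
#   sum(t_i / p_i) <= n * (2 ** (1/n) - 1).
# The bound is sufficient, not necessary: task sets above it may or may
# not be schedulable.
def rms_guaranteed(tasks):
    """tasks: list of (processing_time, period) pairs."""
    n = len(tasks)
    utilization = sum(t / p for t, p in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound, utilization <= bound

# Hypothetical task set: t1=25, p1=50 and t2=35, p2=80.
u, b, ok = rms_guaranteed([(25, 50), (35, 80)])
print(round(u, 4), round(b, 4), ok)   # 0.9375 0.8284 False
```

For n = 1 the bound is 1.0, and as n grows it approaches ln 2 ≈ 0.693, which is why rate-monotonic scheduling cannot always fully utilize the CPU.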
Rate-Monotonic Scheduling Example
[Figure: Gantt charts illustrating rate-monotonic scheduling, including missed deadlines under rate-monotonic scheduling.]

Earliest Deadline First (EDF) Scheduling
- Priorities are assigned dynamically according to deadlines: the earlier the deadline, the higher the priority; the later the deadline, the lower the priority.
- EDF is theoretically optimal, with a theoretical CPU utilization of 100%, but in practice this is impossible to achieve due to the cost of context switching and interrupt handling.

Algorithm Evaluation (Optional)
- How do we select a CPU-scheduling algorithm for an OS? Determine the criteria, then evaluate the algorithms. Four evaluation methods are discussed.
- Deterministic modeling: a type of analytic evaluation. Given a predetermined workload, it computes the performance of each algorithm on that workload. For example, given the processes and their CPU bursts, one can compute the average waiting times and determine which algorithm gives the least.
  - This type of modeling is simple and fast, but because the processes running on a real system vary, it is of limited use.
- Queueing models: describe the arrival of processes and their CPU and I/O bursts probabilistically. The computer system is described as a network of servers, each with a queue of waiting processes. Knowing the arrival rates and service rates, one can compute utilization, average queue length, average wait time, etc.
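These queueing quantities are related by the steady-state identity n = λ × W (Little's formula). A minimal numeric sketch, with hypothetical numbers:

```python
# Little's law: n = lambda * W in steady state, independent of the
# scheduling algorithm and the arrival distribution.
def avg_wait(arrival_rate, avg_queue_length):
    """Solve n = lambda * W for W, the average time spent in the queue."""
    return avg_queue_length / arrival_rate

# Hypothetical numbers: 10 processes arrive per second and the queue
# holds 5 processes on average, so each process waits 0.5 s on average.
print(avg_wait(10, 5))   # 0.5
```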
Little's Formula
- Let n = average queue length, W = average waiting time in the queue, and λ = average arrival rate into the queue.
- Little's law: in steady state, the rate of processes leaving the queue must equal the rate of processes arriving, so n = λ × W.
- Valid for any scheduling algorithm and any arrival distribution.
- For example, if on average 7 processes arrive per second and there are normally 14 processes in the queue, the average wait time per process is 2 seconds.

Simulation and Implementation
- Simulations provide a more accurate evaluation of a scheduling algorithm. Running a simulation involves programming a model of the system and maintaining a clock; the simulator modifies the system state and logs the activities of the devices, processes, and scheduler.
- Implementation: the most accurate way to evaluate a scheduling algorithm is to implement it, put it in the operating system, and test it under real workloads.

End of Chapter 6