Chapter 6: CPU Scheduling
• Basic Concepts
• Scheduling Criteria
• Scheduling Algorithms
• Multiple-Processor Scheduling
• Real-Time Scheduling
• Algorithm Evaluation
Basic Concepts
• Maximum CPU utilization is obtained with multiprogramming.
• CPU–I/O burst cycle – process execution consists of a cycle of CPU execution and I/O wait (Fig. 6.1).
• The CPU burst distribution is generally exponential or hyperexponential, with many short CPU bursts (I/O-bound processes) and a few long CPU bursts (CPU-bound processes) (Fig. 6.2).
[Figure 6.1: Alternating sequence of CPU and I/O bursts]
[Figure 6.2: Histogram of CPU-burst times]
CPU Scheduler
• The CPU scheduler (short-term scheduler) selects from among the processes in memory that are ready to execute and allocates the CPU to one of them.
• The records in the queues are generally the process control blocks (PCBs) of the processes.
• CPU scheduling decisions may take place when a process:
  1. switches from running to waiting state,
  2. switches from running to ready state,
  3. switches from waiting to ready state, or
  4. terminates.
• Scheduling under 1 and 4 is nonpreemptive.
• All other scheduling is preemptive.
CPU Scheduler (Cont.)
• Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state. This approach is used by Windows 3.1 and the Macintosh operating system.
• Under preemptive scheduling, a running process can be switched off the CPU before its CPU burst completes.
• Preemptive scheduling can leave shared data or kernel data structures in an inconsistent state if a process is preempted while updating them.
• For systems to scale efficiently, changes to the interrupt state (disabling and re-enabling interrupts) must be minimized.
Dispatcher
• The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
  - switching context
  - switching to user mode
  - jumping to the proper location in the user program to restart that program
• Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible.
• Throughput – number of processes that complete their execution per time unit.
• Turnaround time – amount of time to execute a particular process, from submission to completion.
• Waiting time – total amount of time a process has spent waiting in the ready queue.
• Response time – amount of time from when a request was submitted until the first response is produced, not until output is complete (the relevant measure in a time-sharing environment).
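
To make these criteria concrete, here is a minimal sketch (not from the slides; the Timeline class and its field names are illustrative) that computes turnaround, waiting, and response time for a single process from its timeline, assuming the process does no I/O so that all time off the CPU is spent in the ready queue.

```python
from dataclasses import dataclass

@dataclass
class Timeline:
    arrival: float     # time the process entered the ready queue
    first_run: float   # time it first received the CPU
    finish: float      # time it completed
    cpu_time: float    # total CPU (burst) time it used

    def turnaround(self):
        return self.finish - self.arrival

    def waiting(self):
        # With no I/O, any time not on the CPU is time in the ready queue.
        return self.turnaround() - self.cpu_time

    def response(self):
        return self.first_run - self.arrival   # time to first response, not to completion

p = Timeline(arrival=0, first_run=5, finish=30, cpu_time=10)
print(p.turnaround(), p.waiting(), p.response())   # 30 20 5
```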
Optimization Criteria
• Maximize CPU utilization.
• Maximize throughput.
• Minimize turnaround time.
• Minimize waiting time.
• Minimize response time.
• Minimize the variance in response time, so that the system has reasonable and predictable response times.
• CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.
First-Come, First-Served (FCFS) Scheduling
Process   Burst Time
  P1          24
  P2           3
  P3           3

• Suppose that the processes arrive in the order P1, P2, P3.
  The Gantt chart for the schedule is:

  P1 [0–24] | P2 [24–27] | P3 [27–30]

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
• Suppose that the processes arrive in the order P2, P3, P1.
  The Gantt chart for the schedule is:

  P2 [0–3] | P3 [3–6] | P1 [6–30]

• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case.
• Convoy effect: short processes wait behind a long process.
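
As a minimal check of the arithmetic above (my own sketch, not part of the slides; the fcfs_waiting_times helper is hypothetical), the following Python fragment computes FCFS waiting times for both arrival orders and reproduces the two averages.

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, all arriving at time 0."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)      # a process waits until everything ahead of it finishes
        clock += burst
    return waits

for order, bursts in [("P1,P2,P3", [24, 3, 3]), ("P2,P3,P1", [3, 3, 24])]:
    waits = fcfs_waiting_times(bursts)
    print(order, waits, "average =", sum(waits) / len(waits))
# P1,P2,P3 [0, 24, 27] average = 17.0
# P2,P3,P1 [0, 3, 6] average = 3.0
```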
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
• The real difficulty with the SJF algorithm is knowing the length of the next CPU request.
• SJF scheduling is used frequently in long-term scheduling.
• The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts.
Shortest-Job-First (SJF) Scheduling (Cont.)
• Two schemes:
  - nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst.
  - preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
• SJF is optimal – it gives the minimum average waiting time for a given set of processes.
Example of Non-Preemptive SJF
Process   Arrival Time   Burst Time
  P1          0.0             7
  P2          2.0             4
  P3          4.0             1
  P4          5.0             4

• SJF (nonpreemptive) Gantt chart:

  P1 [0–7] | P3 [7–8] | P2 [8–12] | P4 [12–16]

• Average waiting time = (0 + 6 + 3 + 7)/4 = 4
Example of Preemptive SJF
Process   Arrival Time   Burst Time
  P1          0.0             7
  P2          2.0             4
  P3          4.0             1
  P4          5.0             4

• SJF (preemptive, i.e., SRTF) Gantt chart:

  P1 [0–2] | P2 [2–4] | P3 [4–5] | P2 [5–7] | P4 [7–11] | P1 [11–16]

• Average waiting time = (9 + 1 + 0 + 2)/4 = 3
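
The preemptive schedule above can be checked mechanically. Below is a small sketch (my own illustration, not from the slides; srtf_waiting_times is a hypothetical helper) that simulates shortest-remaining-time-first one time unit at a time, assuming integer arrival and burst times.

```python
def srtf_waiting_times(procs):
    """procs: list of (name, arrival, burst). Returns {name: waiting time}."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                 # CPU idle until the next arrival
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])   # shortest remaining time wins
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            finish[current] = time
            del remaining[current]
    return {n: finish[n] - arr - burst for n, arr, burst in procs}

waits = srtf_waiting_times([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(waits, "average =", sum(waits.values()) / len(waits))
# {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2} average = 3.0
```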
Determining Length of Next CPU Burst
• Can only estimate the length.
• Can be done by using the lengths of previous CPU bursts, with exponential averaging:
  1. t_n = actual length of the n-th CPU burst
  2. τ_{n+1} = predicted value for the next CPU burst
  3. α, 0 ≤ α ≤ 1
  4. Define: τ_{n+1} = α·t_n + (1 − α)·τ_n
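
A small sketch of the predictor just defined (my own example, not from the slides; α = 1/2 and the initial guess τ0 = 10 are illustrative, and the burst series is chosen to mirror the one typically shown in the prediction figure that follows).

```python
def exponential_average(bursts, alpha=0.5, tau0=10):
    """Predict the next CPU burst: tau_{n+1} = alpha*t_n + (1 - alpha)*tau_n."""
    tau = tau0                       # initial guess, e.g. a system-wide default
    predictions = [tau]
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
        predictions.append(tau)
    return predictions

print(exponential_average([6, 4, 6, 4, 13, 13, 13]))
# [10, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]
```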
[Figure: Prediction of the length of the next CPU burst]
Examples of Exponential Averaging
• α = 0
  - τ_{n+1} = τ_n
  - Recent history does not count.
• α = 1
  - τ_{n+1} = t_n
  - Only the actual last CPU burst counts.
• If we expand the formula, we get:
  τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + … + (1 − α)^j·α·t_{n−j} + … + (1 − α)^{n+1}·τ_0
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
Priority Scheduling
• A priority number (an integer) is associated with each process.
• The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority). Priority scheduling can be:
  - preemptive
  - nonpreemptive
• SJF is priority scheduling where the priority is the predicted next CPU burst time.
• Problem: starvation – low-priority processes may never execute.
• Solution: aging – as time progresses, increase the priority of the process.
Example of Priority Scheduling
Process   Burst Time   Priority
  P1           5            6
  P2           2            1
  P3           1            3
  P4           4            5
  P5           2            2
  P6           2            4

• Priority-scheduling Gantt chart:

  P2 [0–2] | P5 [2–4] | P3 [4–5] | P6 [5–7] | P4 [7–11] | P1 [11–16]

• Waiting times: P1 = 11, P2 = 0, P3 = 4, P4 = 7, P5 = 2, P6 = 5
• Average waiting time = (11 + 0 + 4 + 7 + 2 + 5)/6 ≈ 4.83
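
A table like the one above can be checked by sorting on the priority field and accumulating burst times, as in this sketch (my own, not from the slides). Aging would amount to periodically decreasing the stored priority number of processes that have been waiting a long time.

```python
def priority_waiting_times(procs):
    """procs: list of (name, burst, priority); all arrive at time 0.
    Nonpreemptive priority scheduling; the smallest number is the highest priority."""
    waits, clock = {}, 0
    for name, burst, _ in sorted(procs, key=lambda p: p[2]):
        waits[name] = clock
        clock += burst
    return waits

procs = [("P1", 5, 6), ("P2", 2, 1), ("P3", 1, 3),
         ("P4", 4, 5), ("P5", 2, 2), ("P6", 2, 4)]
waits = priority_waiting_times(procs)
print(waits, "average =", round(sum(waits.values()) / len(waits), 2))
# {'P2': 0, 'P5': 2, 'P3': 4, 'P6': 5, 'P4': 7, 'P1': 11} average = 4.83
```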
Round Robin (RR)
• Each process gets a small unit of CPU time (a time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units.
• Performance:
  - q large ⇒ RR behaves like FIFO.
  - q small ⇒ q must still be large with respect to the context-switch time, otherwise the overhead is too high.
Example of RR with Time Quantum = 20
Process   Burst Time
  P1          53
  P2          17
  P3          68
  P4          24

• The Gantt chart is:

  P1 [0–20] | P2 [20–37] | P3 [37–57] | P4 [57–77] | P1 [77–97] | P3 [97–117] |
  P4 [117–121] | P1 [121–134] | P3 [134–154] | P3 [154–162]

• Typically, RR gives a higher average turnaround time than SJF, but better response time.
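
The Gantt chart above can be reproduced with a short round-robin simulation (my own sketch, not from the slides), using a quantum of 20.

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, burst), all arriving at time 0.
    Returns the schedule as (name, start, end) slices."""
    queue, chart, clock = deque(procs), [], 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        chart.append((name, clock, clock + run))
        clock += run
        if remaining > run:               # unfinished: back to the tail of the ready queue
            queue.append((name, remaining - run))
    return chart

for name, start, end in round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20):
    print(f"{name} [{start}-{end}]", end=" ")
# P1 [0-20] P2 [20-37] P3 [37-57] P4 [57-77] P1 [77-97] P3 [97-117]
# P4 [117-121] P1 [121-134] P3 [134-154] P3 [154-162]
```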
[Figure: Time quantum and context-switch time]
[Figure: Turnaround time varies with the time quantum]
Multilevel Queue
• The ready queue is partitioned into separate queues, for example:
  - foreground (interactive)
  - background (batch)
• Each queue has its own scheduling algorithm, e.g.:
  - foreground – RR
  - background – FCFS
• Scheduling must also be done between the queues:
  - Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
  - Time slice – each queue gets a certain amount of CPU time which it can schedule among its processes; e.g., 80% to foreground in RR and 20% to background in FCFS.
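
A minimal sketch of the time-slice idea between two queues (entirely my own illustration; the 100 ms window, the process names, and the run_multilevel helper are hypothetical): in every window the foreground queue gets 80 ms of round-robin service and the background queue gets 20 ms of FCFS service.

```python
from collections import deque

def run_multilevel(foreground, background, windows=3):
    """foreground/background: lists of (name, remaining ms). Returns (name, ms) slices."""
    fg, bg = deque(foreground), deque(background)
    log = []
    for _ in range(windows):
        budget = 80                              # 80% of each 100 ms window: foreground, RR q=20
        while fg and budget > 0:
            name, rem = fg.popleft()
            run = min(20, rem, budget)
            budget -= run
            log.append((name, run))
            if rem > run:
                fg.append((name, rem - run))
        budget = 20                              # remaining 20%: background, FCFS
        while bg and budget > 0:
            name, rem = bg[0]
            run = min(rem, budget)
            budget -= run
            log.append((name, run))
            if run == rem:
                bg.popleft()
            else:
                bg[0] = (name, rem - run)
    return log

print(run_multilevel([("editor", 60), ("shell", 30)], [("batch", 50)]))
```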
[Figure: Multilevel queue scheduling]
Multilevel Feedback Queue
• A process can move between the various queues; aging can be implemented this way.
• A multilevel-feedback-queue scheduler is defined by the following parameters:
  - number of queues
  - scheduling algorithm for each queue
  - method used to determine when to upgrade a process
  - method used to determine when to demote a process
  - method used to determine which queue a process will enter when it needs service
Example of Multilevel Feedback Queue
• Three queues:
  - Q0 – time quantum 8 milliseconds
  - Q1 – time quantum 16 milliseconds
  - Q2 – FCFS
• Scheduling:
  - A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
  - At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
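
A compact sketch of this three-queue policy (my own illustration, not from the slides; it assumes all jobs are present at time 0 and ignores preemption of a lower queue by new arrivals). A job that does not finish within its quantum is demoted to the next queue.

```python
from collections import deque

def mlfq(jobs):
    """Three-level feedback queue: Q0 (q=8 ms), Q1 (q=16 ms), Q2 (FCFS).
    jobs: list of (name, burst). Returns (name, queue level, start, end) slices."""
    queues = [deque(jobs), deque(), deque()]
    quanta = [8, 16, None]                        # None = run to completion (FCFS)
    slices, clock = [], 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue first
        name, rem = queues[level].popleft()
        run = rem if quanta[level] is None else min(quanta[level], rem)
        slices.append((name, level, clock, clock + run))
        clock += run
        if rem > run:                             # quantum expired: demote one level
            queues[min(level + 1, 2)].append((name, rem - run))
    return slices

for name, level, start, end in mlfq([("A", 30), ("B", 6)]):
    print(f"{name} in Q{level}: {start}-{end}")
# A in Q0: 0-8, B in Q0: 8-14, A in Q1: 14-30, A in Q2: 30-36
```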
[Figure: Multilevel feedback queues]
Multiple-Processor Scheduling
• CPU scheduling becomes more complex when multiple CPUs are available.
• Homogeneous processors are identical in terms of their functionality within the multiprocessor system.
• Load sharing can occur if several identical processors are available.
• Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing.
Real-Time Scheduling
• Hard real-time systems – required to complete a critical task within a guaranteed amount of time.
• The scheduler either admits the process, guaranteeing that it can complete on time, or rejects it. This is known as resource reservation.
• Usually, hard real-time systems are composed of special-purpose software running on specific hardware.
• Soft real-time computing – requires that critical processes receive priority over less fortunate ones.
• Implementing soft real-time functionality requires:
  - The system must have priority scheduling.
  - The dispatch latency must be small.
Real-Time Scheduling (Cont.)
• To keep dispatch latency low, we need to allow system calls to be preemptible: either insert preemption points in long system calls or make the entire kernel preemptible.
• A high-priority process may have to wait for a lower-priority one to release a resource it needs. This situation is known as priority inversion.
• Priority inversion can be solved via the priority-inheritance protocol, in which all processes accessing resources needed by the high-priority process inherit the high priority until they are finished with those resources.
• The conflict phase of dispatch latency has two components:
  1. Preemption of any process running in the kernel.
  2. Release by low-priority processes of resources needed by the high-priority process.
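
A toy illustration of the priority-inheritance idea (entirely my own sketch, not from the slides; the Task and Resource classes are hypothetical): when a high-priority task blocks on a resource held by a low-priority task, the holder temporarily runs at the blocked task's priority.

```python
class Task:
    def __init__(self, name, priority):
        self.name, self.base_priority = name, priority
        self.priority = priority                  # effective (possibly inherited) priority

class Resource:
    def __init__(self):
        self.holder = None

    def acquire(self, task):
        if self.holder is None:
            self.holder = task
            return True
        # Priority inheritance: boost the holder to the waiter's priority
        # (smaller number = higher priority, as on the earlier slides).
        if task.priority < self.holder.priority:
            self.holder.priority = task.priority
        return False                              # caller must block and retry later

    def release(self, task):
        task.priority = task.base_priority        # drop back to the base priority
        self.holder = None

low, high = Task("logger", 20), Task("control-loop", 1)
res = Resource()
res.acquire(low)          # the low-priority task holds the resource
res.acquire(high)         # the high-priority task blocks ...
print(low.priority)       # ... and the holder now runs at priority 1
res.release(low)
print(low.priority)       # back to 20 once the resource is released
```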
[Figure: Dispatch latency]
Algorithm Evaluation
• Deterministic modeling – takes a particular predetermined workload and defines the performance of each algorithm for that workload.
• Queueing models:
  - Knowing arrival rates and service rates, we can compute utilization, average queue length, average waiting time, and so on.
  - This area of study is called queueing-network analysis.
• To get a more accurate evaluation of scheduling algorithms, we can use simulations, which involve programming a model of the computer system.
• Implementation – actually coding the algorithm and running it in the operating system is the only completely accurate way to evaluate it.
Deterministic modeling
Process   Burst Time
  P1          10
  P2          29
  P3           3
  P4           7
  P5          12

• For the FCFS algorithm, the average waiting time is (0 + 10 + 39 + 42 + 49)/5 = 28 milliseconds.
• With the nonpreemptive SJF algorithm, the average waiting time is (10 + 32 + 0 + 3 + 20)/5 = 13 milliseconds.
• With the RR algorithm (quantum of 10 milliseconds), the average waiting time is (0 + 32 + 20 + 23 + 40)/5 = 23 milliseconds.
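
All three averages can be reproduced in a few lines (my own sketch; it assumes all five processes arrive at time 0 and uses a quantum of 10 ms for RR).

```python
from collections import deque

bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}

def avg_wait_fcfs(order):
    clock = total = 0
    for name in order:
        total += clock
        clock += bursts[name]
    return total / len(order)

def avg_wait_rr(order, q=10):
    queue, total = deque((n, bursts[n]) for n in order), 0
    while queue:
        name, rem = queue.popleft()
        run = min(q, rem)
        total += run * len(queue)   # everyone still queued waits while this slice runs
        if rem > run:
            queue.append((name, rem - run))
    return total / len(order)

print(avg_wait_fcfs(["P1", "P2", "P3", "P4", "P5"]))    # 28.0  (FCFS)
print(avg_wait_fcfs(sorted(bursts, key=bursts.get)))    # 13.0  (nonpreemptive SJF)
print(avg_wait_rr(["P1", "P2", "P3", "P4", "P5"]))      # 23.0  (RR, q = 10)
```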
[Figure: Evaluation of CPU schedulers by simulation]
Comparisons of Evaluation Methods
• Deterministic modeling is too specific, and requires too much exact knowledge, to be useful in general.
• Queueing models:
  - Little's formula: n = λW, where n is the average queue length, W is the average waiting time, and λ is the average arrival rate (a worked example follows this list).
  - Queueing models are often only an approximation of a real system.
• A more detailed simulation provides more accurate results, but the design, coding, and debugging of the simulator can be a major task.
• The major difficulty of implementation is its cost.
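
A quick worked example of Little's formula (the numbers are illustrative, not from the slides): if processes arrive at λ = 7 per second and each spends W = 2 seconds waiting on average, then the queue holds n = λW = 14 processes on average. The same relation can be solved for whichever quantity is unknown:

```python
def littles_law(arrival_rate=None, wait=None, queue_length=None):
    """Solve n = lambda * W for whichever of the three quantities is left as None."""
    if queue_length is None:
        return arrival_rate * wait          # n
    if wait is None:
        return queue_length / arrival_rate  # W
    return queue_length / wait              # lambda

print(littles_law(arrival_rate=7, wait=2))           # n = 14 processes in the queue
print(littles_law(arrival_rate=7, queue_length=14))  # W = 2.0 seconds of average wait
```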
Solaris 2 Scheduling
• Solaris 2 uses priority-based process scheduling.
• It has four classes of scheduling, which are, in order of priority: real time, system, time sharing, and interactive.
• The scheduler converts the class-specific priorities into global priorities and selects the thread with the highest global priority to run.
• The selected thread runs on the CPU until one of the following occurs:
  1. It blocks.
  2. It uses its time slice (if it is not a system thread).
  3. It is preempted by a higher-priority thread.
[Figure: Solaris 2 scheduling]
Windows 2000 Priorities
• Windows 2000 schedules threads using a priority-based, preemptive scheduling algorithm.
• The portion of the Windows 2000 kernel that handles scheduling is called the dispatcher.
• Priorities are divided into two classes: the variable class contains threads having priorities from 1 to 15, and the real-time class contains threads with priorities ranging from 16 to 31.
• Windows 2000 distinguishes between the foreground process and background processes. A foreground process is given a time quantum three times longer than that of a background process.
[Figure: Windows 2000 priorities]
An Example: Linux
• Linux provides two separate process-scheduling algorithms:
  - One is a time-sharing algorithm for fair, preemptive scheduling among multiple processes.
  - The other is designed for real-time tasks, where absolute priorities are more important than fairness.
• Linux uses a prioritized, credit-based algorithm for time-sharing processes.
• Linux implements the two real-time scheduling classes required by POSIX.1b: first come, first served (FCFS) and round robin (RR).
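
On a current Linux system these POSIX.1b classes can be selected from user space. The sketch below (my own, not from the slides) uses Python's wrappers around the sched_setscheduler system call; setting a real-time policy normally requires root or CAP_SYS_NICE, and the priority value 50 is just an example.

```python
import os

# Inspect the allowed static priority range for the round-robin class.
lo = os.sched_get_priority_min(os.SCHED_RR)
hi = os.sched_get_priority_max(os.SCHED_RR)
print(f"SCHED_RR priorities: {lo}..{hi}")

# Switch the current process (pid 0) to SCHED_FIFO at an example priority.
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
    print("policy now:", os.sched_getscheduler(0))
except PermissionError:
    print("need root / CAP_SYS_NICE to set a real-time policy")
```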