SUBJECT: OPERATING SYSTEM
UNIT – III PROCESS SCHEDULING
PATEL GROUP OF INSTITUTION
Q. What is scheduling? Explain various short term scheduling criteria.
Q. When and how the short-term, medium-term and long-term scheduling policies are
applied? Draw the queuing diagram for scheduling.
Ans: Process Scheduling
Scheduling is fundamental to an operating system. All computer resources are scheduled before use, and since the CPU is one of the primary computer resources, its scheduling is central to operating system design.
Scheduling refers to a set of policies and mechanisms, supported by the operating system, that control the order in which work is completed.
A scheduler is an operating system program that selects the next job to be admitted for execution. The main objectives of scheduling are to increase CPU utilization and throughput.
Throughput is the amount of work accomplished in a given time interval.
CPU scheduling is the basis of operating system which supports multiprogramming concepts. By
having a number of programs in computer memory at the same time, the CPU may be shared
among them.
 The assignment of physical processors to processes allows processors to accomplish work. The problem of determining when processors should be assigned, and to which processes, is called processor scheduling or CPU scheduling.
 When more than one process is runnable, the operating system must decide which one to run first. The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
Types of Scheduler
1. Short term Scheduler/ CPU scheduler
The short term scheduler selects the process for the processor from among the processes which are already in the queue (in memory).
The scheduler executes quite frequently (often at least once every 10 milliseconds), so it has to be very fast in order to achieve better processor utilization.
(Dispatches from the ready queue)
2. Long term Scheduler/ Job Scheduler (loads from disk)
Long term scheduler selects processes from the process pool and loads the selected processes into memory for execution.
Design by: Asst. Prof. Vikas Katariya +918980936828
The long term scheduler executes much less frequently when compared with the short term scheduler. It controls the degree of multiprogramming (the number of processes in memory at a time).
3. Medium term scheduler
Sometimes it can be advantageous to reduce the degree of multiprogramming by removing processes from memory and storing them on disk. These processes can then be reintroduced into memory by the medium-term scheduler. This operation is also known as swapping.
****************************************************************************************
Q. Explain the goal of scheduling.
Ans:
Goals of scheduling (objectives)
In this section we try to answer the following question: What does the scheduler try to achieve?
Many objectives must be considered in the design of a scheduling discipline. In particular, a
scheduler should consider fairness, efficiency, response time, turnaround time, throughput, etc.
Some of these goals depend on the system one is using, for example a batch system, interactive
system or real-time system, but there are also some goals that are desirable in all systems.
Fairness
Fairness is important under all circumstances. A scheduler makes sure that each process gets
its fair share of the CPU and no process suffers indefinite postponement. Note that giving
equivalent or equal time is not necessarily fair; think of safety control and payroll processes at a nuclear plant.
Policy Enforcement
The scheduler has to make sure that system's policy is enforced. For example, if the local
policy is safety then the safety control processes must be able to run whenever they want to, even
if it means delay in payroll processes.
Efficiency
A scheduler should keep the system (in particular, the CPU) busy one hundred percent of the time when
possible. If the CPU and all the input/output devices can be kept running all the time, more work
gets done per second than if some components are idle.
Response Time
A scheduler should minimize the response time for interactive user.
Turnaround
A scheduler should minimize the time batch users must wait for an output.
Throughput
A scheduler should maximize the number of jobs processed per unit time.
Be Predictable
A given job should utilize the same amount of time and should cost the same regardless of the
load on the system.
Minimize Overhead
Scheduling should minimize wasted resources and overhead.
*********************************************************************************************
Q. Explain various CPU Scheduling Criteria?
Ans:
The goal of a scheduling algorithm is to identify the process whose selection will result in the best
possible system performance. Different scheduling algorithms have different properties and may
favor one class of processes over another; to determine which algorithm is best, different
characteristics are used for comparison. The relative importance of each criterion depends on the
scheduling algorithm.
1. CPU Utilization
The key idea is that if the CPU is busy all the time, the utilization factor of all the components of the
system will also be high.
CPU utilization is the ratio of the busy time of the processor to the total time taken for processes to
finish.
Formula:
Processor utilization = (processor busy time) / (processor busy time + processor idle time)
2. Throughput
It refers to the amount of work completed in a unit of time. One way to measure throughput is by
means of the number of processes that are completed in a unit of time. The higher the number of
processes, the more work apparently is being done by the system. The throughput can be
calculated by using the
Formula:
Throughput= (No. of process completed) / (time unit)
3. Turnaround Time
It may be defined as interval from the time of submission of a process to the time of its
completion. It is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, CPU time and I/O operations.
Formula:
Turnaround Time= t (process completed) – t (process submitted)
4. Waiting Time
This is the time spent in the ready queue. In a multiprogramming operating system several jobs
reside in memory at a time. The CPU executes only one job at a time; the rest of the jobs wait for the CPU.
The waiting time may be expressed as the turnaround time less the actual processing time.
Formula:
Waiting time = Turnaround time - Processing time
5. Response Time
Time between submission and first response.
Formula:
Response time = t (first response) – t (submission of request)
Response time is used in time-sharing and real-time OS; however, its characteristics differ in the two systems. In
a time-sharing system it may be defined as the interval from the time the last character of a command
line of a program or transaction is entered to the time the last result appears on the terminal. In a
real-time system it may be defined as the interval from the time an internal or external event is
signaled to the time the first instruction of the respective service routine is executed.
Throughput and CPU utilization may be increased by executing a large number of processes, but
then response time may suffer.
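The criteria above can be collected into a few helper functions (a minimal Python sketch; the function names are invented for this illustration and are not part of any standard API):

```python
# Illustrative sketch of the scheduling criteria formulas above.
# Function names are made up for this example.

def turnaround_time(completion_time, submission_time):
    # Turnaround Time = t(process completed) - t(process submitted)
    return completion_time - submission_time

def waiting_time(turnaround, processing_time):
    # Waiting time = Turnaround time - Processing time
    return turnaround - processing_time

def throughput(processes_completed, time_units):
    # Throughput = number of processes completed / time unit
    return processes_completed / time_units

def cpu_utilization(busy_time, idle_time):
    # Utilization = busy / (busy + idle), as a percentage
    return busy_time / (busy_time + idle_time) * 100

# Example: a process submitted at t=0 that completes at t=13 after 13 ms of CPU
tat = turnaround_time(13, 0)   # 13
wt = waiting_time(tat, 13)     # 0
```

These helpers are reused implicitly in the worked examples that follow: every table entry is one call to `turnaround_time` or `waiting_time`.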
*******************************************************************************************
Q. What is Processor Scheduling? Write a short note on Round Robin Algorithm in detail.
Q. What is preemption? Explain various preemptive scheduling policies.
Ans: Scheduling Algorithms
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated to the CPU. The scheduling algorithms can be divided into two categories with
respect to how they deal with clock interrupts.
Non-preemptive Scheduling
A scheduling discipline is non-preemptive if, once a process has been given the CPU, the CPU
cannot be taken away from that process.
Following are some characteristics of non-preemptive scheduling:
 In a non-preemptive system, short jobs are made to wait by longer jobs, but the overall
treatment of all processes is fair.
 In a non-preemptive system, response times are more predictable because incoming high
priority jobs cannot displace waiting jobs.
In non-preemptive scheduling, a scheduler executes jobs in the following two situations:
 When a process switches from the running state to the waiting state.
 When a process terminates.
First Come First Served (FCFS) and Shortest Job First (SJF) are considered to be non-preemptive
scheduling algorithms.
Preemptive Scheduling
A scheduling discipline is preemptive if, once a process has been given the CPU, the CPU can be
taken away from it. Preemption means the operating system moves a process from running to
ready without the process requesting it. An OS implementing this algorithm switches to the
processing of a new request before completing the processing of the current request; the
preempted request is put back into the list of pending requests.
The strategy of allowing processes that are logically runnable to be temporarily suspended is
called preemptive scheduling, and it is in contrast to the "run to completion" method.
Round Robin scheduling, priority based scheduling and SRTN scheduling are considered to
be preemptive scheduling algorithms.
1. First – Come First – Serve (FCFS)
The simplest scheduling algorithm is First Come First Serve (FCFS). Jobs are scheduled in the
order they are received. FCFS is non-preemptive. Implementation is easily accomplished by
maintaining a queue of the processes to be scheduled, or by storing the time each process was
received and selecting the process with the earliest time.
Example 1:
Draw the Gantt chart for the FCFS policy, considering the following set of processes that arrives at
time 0, with the length of CPU time given in milliseconds. Calculate average waiting time, average
turnaround time, throughput and CPU utilization.
Process   Processing Time
P1        13
P2        08
P3        83
Solution:
If the processes arrive in the order P1, P2 and P3, then the Gantt chart will be as:

| P1 | P2 | P3 |
0    13   21   104

Completed Time | Process completed | Turnaround Time = t(completed) - t(submitted) | Waiting Time = Turnaround - Processing
13             | P1                | 13 - 0 = 13                                   | 13 - 13 = 0
21             | P2                | 21 - 0 = 21                                   | 21 - 8 = 13
104            | P3                | 104 - 0 = 104                                 | 104 - 83 = 21

Average Turnaround Time = (13 + 21 + 104) / 3 = 46 ms
Average Waiting Time = (0 + 13 + 21) / 3 = 11.33 ms
Throughput = number of processes completed / time unit
Throughput = 3 / 104 = 0.028
Processor Utilization = Processor busy time / (Processor busy time + Processor idle time)
Processor Utilization = (104 / (104 + 0)) * 100 = 100%
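The FCFS computation above can be sketched in a few lines of Python (an illustrative sketch only; the function name `fcfs` and the data layout are invented for this example, and all processes are assumed to arrive at time 0):

```python
# Minimal FCFS sketch: processes are (name, processing_time) pairs,
# all arriving at time 0, served strictly in arrival order.

def fcfs(processes):
    """Return {name: (completion, turnaround, waiting)} in arrival order."""
    clock = 0
    results = {}
    for name, burst in processes:
        clock += burst                      # the process runs to completion
        turnaround = clock - 0              # all processes submitted at t=0
        results[name] = (clock, turnaround, turnaround - burst)
    return results

# The processes from Example 1:
r = fcfs([("P1", 13), ("P2", 8), ("P3", 83)])
avg_wait = sum(w for _, _, w in r.values()) / len(r)   # (0+13+21)/3 = 11.33
```

Running this reproduces the table above: P1 waits 0 ms, P2 waits 13 ms, P3 waits 21 ms.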
Example 2:
Calculate the turnaround time, waiting time, average turnaround time, average waiting time,
throughput and processor utilization for the given set of processes that arrive at the arrival
times shown in the table, with the length of processing time given in milliseconds.
Process   Arrival Time   Processing Time
P1        0              3
P2        2              3
P3        3              1
P4        5              4
P5        8              2
Solution:
If the processes arrive as per the arrival time, the Gantt chart will be:

| P1 | P2 | P3 | P4 | P5 |
0    3    6    7    11   13

Completed Time | Process completed | Turnaround Time = t(completed) - t(submitted) | Waiting Time = Turnaround - Processing
3              | P1                | 3 - 0 = 3                                     | 3 - 3 = 0
6              | P2                | 6 - 2 = 4                                     | 4 - 3 = 1
7              | P3                | 7 - 3 = 4                                     | 4 - 1 = 3
11             | P4                | 11 - 5 = 6                                    | 6 - 4 = 2
13             | P5                | 13 - 8 = 5                                    | 5 - 2 = 3

Average Turnaround Time = (3 + 4 + 4 + 6 + 5) / 5 = 4.4 ms
Average Waiting Time = (0 + 1 + 3 + 2 + 3) / 5 = 1.8 ms
Throughput = number of processes completed / time unit
Throughput = 5 / 13 = 0.38
Processor Utilization = Processor busy time / (Processor busy time + Processor idle time)
Processor Utilization = (13 / (13 + 0)) * 100 = 100%
2. Shortest Job First (SJF)
This algorithm assigns the CPU to the process that has the smallest next CPU processing time, or burst
time, when the CPU is available. In case of a tie, the FCFS scheduling algorithm can be used. It was
originally implemented in a batch processing environment.
Example 3:
Consider the following set of processes, with the processing times shown, which arrived at the
same time. Calculate average turnaround time, average waiting time and throughput.

Process   Processing Time
P1        06
P2        08
P3        07
P4        03
Solution:
Using SJF scheduling, the process with the shortest processing time gets the CPU first, so the Gantt
chart will be:

| P4 | P1 | P3 | P2 |
0    3    9    16   24

The shortest processing time belongs to process P4, then process P1, then P3 and then process
P2. The waiting time for process P1 is 3 ms, for process P2 is 16 ms, for process P3 is 9 ms
and for process P4 is 0 ms, as shown:

Completed Time | Process completed | Turnaround Time = t(completed) - t(submitted) | Waiting Time = Turnaround - Processing
3              | P4                | 3 - 0 = 3                                     | 3 - 3 = 0
9              | P1                | 9 - 0 = 9                                     | 9 - 6 = 3
16             | P3                | 16 - 0 = 16                                   | 16 - 7 = 9
24             | P2                | 24 - 0 = 24                                   | 24 - 8 = 16

Average Turnaround Time = (3 + 9 + 16 + 24) / 4 = 13 ms
Average Waiting Time = (0 + 3 + 9 + 16) / 4 = 7 ms
Throughput = number of processes completed / time unit
Throughput = 4 / 24 = 0.16
Processor Utilization = Processor busy time / (Processor busy time + Processor idle time)
Processor Utilization = (24 / (24 + 0)) * 100 = 100%
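Non-preemptive SJF for simultaneously arriving processes is just FCFS after sorting by burst time. A minimal Python sketch (illustrative; the name `sjf` and the data layout are invented here, and ties fall back to submission order because Python's sort is stable):

```python
# Minimal non-preemptive SJF sketch for processes that all arrive at time 0.

def sjf(processes):
    """processes: list of (name, burst). Returns {name: (turnaround, waiting)}."""
    ordered = sorted(processes, key=lambda p: p[1])   # shortest burst first
    clock, results = 0, {}
    for name, burst in ordered:
        waiting = clock                    # time spent in the ready queue
        clock += burst
        results[name] = (clock, waiting)   # (turnaround, waiting); arrival = 0
    return results

r = sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
# P4 runs first (burst 3), then P1, P3 and P2, matching the Gantt chart above.
```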
(Explain the round-robin scheduling policy with example.)
Round Robin (RR)
Round Robin (RR) scheduling is a preemptive algorithm that allocates the CPU to the process that has
been waiting the longest in the ready queue. This is one of the oldest, simplest and most widely used
algorithms. The round robin scheduling algorithm is primarily used in time-sharing and multi-user
system environments where the primary requirement is to provide reasonably good response times
and, in general, to share the system fairly among all system users. Basically, the CPU time is divided
into time slices. Each process is allocated a small time-slice called a quantum. No process can
run for more than one quantum while others are waiting in the ready queue. If a process
needs more CPU time to complete after exhausting one quantum, it goes to the end of the ready
queue to await the next allocation. To implement RR scheduling, a queue data structure is used
to maintain the queue of ready processes. A new process is added at the tail of that queue. The CPU
scheduler picks the first process from the ready queue and allocates the processor for a specified time
quantum. After that time, the CPU scheduler selects the next process from the ready queue.
Example 4:
Consider the following set of processes with the processing time given in milliseconds.

Process   Processing Time
P1        24
P2        03
P3        03
Solution:
If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it
requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is
given to the next process in the queue, process P2. Since process P2 does not need 4 milliseconds, it
quits before its time quantum expires. The CPU is then given to the next process, process P3. Once
each process has received 1 time quantum, the CPU is returned to process P1 for an additional
time quantum. The Gantt chart will be:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
Completed Time | Process completed | Turnaround Time = t(completed) - t(submitted) | Waiting Time = Turnaround - Processing
30             | P1                | 30 - 0 = 30                                   | 30 - 24 = 6
7              | P2                | 7 - 0 = 7                                     | 7 - 3 = 4
10             | P3                | 10 - 0 = 10                                   | 10 - 3 = 7

Average turnaround time = (30 + 7 + 10) / 3 = 47/3 = 15.66 ms
Average waiting time = (6 + 4 + 7) / 3 = 17/3 = 5.66 ms
Throughput = 3/30 = 0.1
Processor utilization = (30/30) * 100 = 100%
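The quantum-and-requeue mechanism described above can be sketched with a simple queue (illustrative Python; the name `round_robin` is invented, and all processes are assumed to arrive at time 0 as in Example 4):

```python
from collections import deque

# Minimal round-robin sketch: run each process for at most one quantum,
# then move it to the back of the ready queue if it still needs CPU time.

def round_robin(processes, quantum):
    """processes: list of (name, burst). Returns {name: completion_time}."""
    ready = deque(processes)
    clock, completion = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)      # run for at most one quantum
        clock += run
        if remaining > run:
            ready.append((name, remaining - run))   # back of the queue
        else:
            completion[name] = clock
    return completion

c = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
# P2 finishes at 7, P3 at 10 and P1 at 30, matching Example 4.
```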
3. Shortest Remaining Time Next (SRTN)
This is the preemptive version of Shortest Job First. It permits a process that enters the ready
list to preempt the running process if the time for the new process (or for its next burst) is less
than the remaining time for the running process (or for its current burst). Let us understand with
the help of an example.
Example 5:
Consider the set of four processes arrived as per the timings described in the table:

Process   Arrival Time   Processing Time
P1        0              5
P2        1              2
P3        2              5
P4        3              3
Solution:
At time 0, only process P1 has entered the system, so it is the process that executes. At time 1,
process P2 arrives; at that time, process P1 has 4 time units left to execute. Since process P2's
processing time (2 units) is less than P1's remaining time (4 units), P2 starts executing at time 1.
At time 2, process P3 enters the system with a processing time of 5 units. Process P2 continues
executing, as it has the minimum remaining time when compared with P1 and P3. At time 3,
process P2 terminates and process P4 enters the system. Of the processes P1, P3 and P4, P4 has
the smallest remaining execution time, so it starts executing. When P4 terminates at time 6,
process P1 resumes, and when process P1 terminates at time 10, process P3 executes. The Gantt
chart is shown below:
| P1 | P2 | P4 | P1 | P3 |
0    1    3    6    10   15
Turnaround time for each process can be computed by subtracting the arrival time from the time
it terminated.
Turnaround Time = t(Process Completed) - t(Process Submitted)
The turnaround time for each of the processes is:
P1: 10 - 0 = 10
P2: 3 - 1 = 2
P3: 15 - 2 = 13
P4: 6 - 3 = 3
The average turnaround time is (10 + 2 + 13 + 3) / 4 = 7
The waiting time can be computed by subtracting the processing time from the turnaround time,
yielding the following results:
P1: 10 - 5 = 5
P2: 2 - 2 = 0
P3: 13 - 5 = 8
P4: 3 - 3 = 0
The average waiting time = (5 + 0 + 8 + 0) / 4 = 3.25 milliseconds
Four jobs executed in 15 time units, so the throughput is 4 / 15 = 0.26 jobs per time unit.
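The preemption decisions walked through above can be sketched as a unit-time simulation (illustrative only; the name `srtn` is invented, and a real scheduler would preempt on arrival/completion events rather than ticking a unit clock):

```python
# Minimal SRTN (preemptive SJF) sketch, simulated one time unit at a time.

def srtn(processes):
    """processes: list of (name, arrival, burst). Returns {name: finish_time}."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    finish, clock = {}, 0
    while remaining:
        # runnable processes that have already arrived
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1
            continue
        # pick the smallest remaining time; break ties by arrival time
        n = min(ready, key=lambda p: (remaining[p], arrival[p]))
        remaining[n] -= 1
        clock += 1
        if remaining[n] == 0:
            del remaining[n]
            finish[n] = clock
    return finish

f = srtn([("P1", 0, 5), ("P2", 1, 2), ("P3", 2, 5), ("P4", 3, 3)])
# P2 finishes at 3, P4 at 6, P1 at 10 and P3 at 15, matching Example 5.
```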
Example 6: Consider the set of five processes arrived as per the timings described in the table:

Process   Arrival Time   Processing Time
P1        0              8
P2        3              4
P3        5              9
P4        7              5
P5        2              3
Solution:

| P1 | P5 | P2 | P4 | P1 | P3 |
0    2    5    9    14   20   29

Completed Time | Process completed | Turnaround Time = t(completed) - t(submitted) | Waiting Time = Turnaround - Processing
20             | P1                | 20 - 0 = 20                                   | 20 - 8 = 12
5              | P5                | 5 - 2 = 3                                     | 3 - 3 = 0
9              | P2                | 9 - 3 = 6                                     | 6 - 4 = 2
14             | P4                | 14 - 7 = 7                                    | 7 - 5 = 2
29             | P3                | 29 - 5 = 24                                   | 24 - 9 = 15

The average waiting time = (12 + 0 + 2 + 2 + 15) / 5 = 6.2 milliseconds
4. Priority Based Scheduling or Event-Driven (ED) Scheduling
A priority is associated with each process, and the scheduler always picks the highest priority
process for execution from the ready queue. Equal priority processes are scheduled FCFS. The
level of priority may be determined on the basis of resource requirements, process
characteristics and run-time behavior.
A major problem with priority based scheduling is indefinite blocking, or starvation, of a low
priority process by high priority processes. In general, completion of a process within finite time
cannot be guaranteed with this scheduling algorithm. A solution to the problem of indefinite
blockage of low priority processes is provided by aging priority. Aging priority is a technique of
gradually increasing the priority of processes (of low priority) that wait in the system for a long
time. Eventually, the older processes attain high priority and are ensured of completion in a finite
time.
Example 7:
As an example, consider the following set of five processes, assumed to have arrived at the same
time, with the length of processor time in milliseconds:

Process   Priority   Processing Time
P1        3          10
P2        1          1
P3        4          2
P4        5          1
P5        2          5
Solution:
Using priority scheduling (a smaller priority number means a higher priority), we would schedule
these processes according to the following Gantt chart:

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

Completed Time | Process completed | Turnaround Time = t(completed) - t(submitted) | Waiting Time = Turnaround - Processing
1              | P2                | 1 - 0 = 1                                     | 1 - 1 = 0
6              | P5                | 6 - 0 = 6                                     | 6 - 5 = 1
16             | P1                | 16 - 0 = 16                                   | 16 - 10 = 6
18             | P3                | 18 - 0 = 18                                   | 18 - 2 = 16
19             | P4                | 19 - 0 = 19                                   | 19 - 1 = 18

Average turnaround time = (1 + 6 + 16 + 18 + 19) / 5 = 60/5 = 12 ms
Average waiting time = (0 + 1 + 6 + 16 + 18) / 5 = 41/5 = 8.2 ms
Throughput = 5/19 = 0.26
Processor utilization = (19/19) * 100 = 100%
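For simultaneously arriving processes, non-preemptive priority scheduling is again FCFS after a sort, this time by priority number. A minimal Python sketch (illustrative; the name `priority_schedule` is invented, and lower numbers mean higher priority as in Example 7):

```python
# Minimal non-preemptive priority scheduling sketch (all arrive at t=0).
# A LOWER priority number means HIGHER priority; ties run FCFS because
# Python's sort is stable.

def priority_schedule(processes):
    """processes: list of (name, priority, burst).
    Returns {name: (turnaround, waiting)}."""
    ordered = sorted(processes, key=lambda p: p[1])
    clock, results = 0, {}
    for name, _, burst in ordered:
        waiting = clock
        clock += burst
        results[name] = (clock, waiting)
    return results

r = priority_schedule([("P1", 3, 10), ("P2", 1, 1), ("P3", 4, 2),
                       ("P4", 5, 1), ("P5", 2, 5)])
# Execution order: P2, P5, P1, P3, P4, matching the Gantt chart above.
```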
Priorities can be defined either internally or externally. Internally defined priorities use one or
more measurable quantities to compute the priority of a process.
Example 8:
For the given five processes arriving at time 0, in order, with the length of CPU time in
milliseconds:

Process   Processing Time
P1        10
P2        29
P3        03
P4        07
P5        12
Consider the FCFS, SJF and RR (time slice = 10 milliseconds) scheduling algorithms for the above
set of processes. Which algorithm would give the minimum average waiting time?
Solution:
1. For the FCFS algorithm the Gantt chart is as follows:

| P1 | P2 | P3 | P4 | P5 |
0    10   39   42   49   61

Process   Processing Time   Waiting Time
P1        10                0
P2        29                10
P3        3                 39
P4        7                 42
P5        12                49

Average Waiting Time = (0 + 10 + 39 + 42 + 49) / 5 = 28 milliseconds
2. For the SJF scheduling algorithm, we have:

| P3 | P4 | P1 | P5 | P2 |
0    3    10   20   32   61

Process   Processing Time   Waiting Time
P3        3                 0
P4        7                 3
P1        10                10
P5        12                20
P2        29                32

Average Waiting Time = (0 + 3 + 10 + 20 + 32) / 5 = 13 milliseconds
3. For the Round Robin scheduling algorithm (time quantum = 10 milliseconds):

| P1 | P2 | P3 | P4 | P5 | P2 | P5 | P2 |
0    10   20   23   30   40   50   52   61

Process   Processing Time   Waiting Time
P1        10                0
P2        29                32
P3        03                20
P4        07                23
P5        12                40

Average Waiting Time = (0 + 32 + 20 + 23 + 40) / 5 = 23 milliseconds
From the above calculation of average waiting time, we find that the SJF policy results in a lower
average waiting time than FCFS and RR.
********************************************************************************************
Q. Explain Highest Response Ratio Next (HRRN) & Multilevel Feedback CPU Scheduling
algorithms.
Ans:
Highest Response Ratio Next (HRRN) scheduling is a non-preemptive discipline, similar to Shortest
Job Next (SJN), in which the priority of each job is dependent on its estimated run time, and also the
amount of time it has spent waiting. Jobs gain higher priority the longer they wait, which prevents
indefinite postponement (process starvation). In fact, the jobs that have spent a long time waiting
compete against those estimated to have short run times.
HRRN was developed by Brinch Hansen to correct certain weaknesses in SJN, including the difficulty
in estimating run time.
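The selection rule can be sketched with the conventional response-ratio formula R = (waiting time + estimated run time) / estimated run time, the scheduler picking the ready job with the highest R (illustrative Python; the function name `hrrn_pick` is invented, and the formula is the textbook-standard one rather than taken from the text above):

```python
# Sketch of HRRN selection: highest response ratio wins, so long waiters
# eventually beat freshly arrived short jobs.

def hrrn_pick(ready_jobs, clock):
    """ready_jobs: list of (name, arrival_time, estimated_run_time)."""
    def response_ratio(job):
        _, arrival, run = job
        waiting = clock - arrival
        return (waiting + run) / run
    return max(ready_jobs, key=response_ratio)

# A short job that just arrived versus a long job that has waited a while:
jobs = [("short", 9, 2), ("long", 0, 10)]
# At clock=10: short has R = (1+2)/2 = 1.5, long has R = (10+10)/10 = 2.0,
picked = hrrn_pick(jobs, clock=10)   # so the long waiter is chosen
```

Note how a job that has waited zero time has R = 1, and R grows without bound as waiting time accumulates, which is exactly why indefinite postponement is prevented.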
Multiple Feedback Queues:
In multilevel feedback queue scheduling, the ready list is split into several queues of decreasing
priority. A new process enters the highest priority queue; each time it uses up its quantum it is
demoted to the next lower queue (which typically has a larger quantum), so short interactive jobs
stay in the high priority queues while long CPU-bound jobs drift downward.
Fair-share scheduling:
Fair-share scheduling is a scheduling strategy for computer operating systems in which the CPU usage
is equally distributed among system users or groups, as opposed to equal distribution among processes.
For example, if four users (A,B,C,D) are concurrently executing one process each, the scheduler will
logically divide the available CPU cycles such that each user gets 25% of the whole (100% / 4 = 25%).
If user B starts a second process, each user will still receive 25% of the total cycles, but each of user B's
processes will now use 12.5%. On the other hand, if a new user starts a process on the system, the
scheduler will reapportion the available CPU cycles such that each user gets 20% of the whole (100% /
5 = 20%).
Another layer of abstraction allows us to partition users into groups, and apply the fair share algorithm
to the groups as well. In this case, the available CPU cycles are divided first among the groups, then
among the users within the groups, and then among the processes for that user. For example, if there
are three groups (1,2,3) containing three, two, and four users respectively, the available CPU cycles
will be distributed as follows:
 100% / 3 groups = 33.3% per group
 Group 1: (33.3% / 3 users) = 11.1% per user
 Group 2: (33.3% / 2 users) = 16.7% per user
 Group 3: (33.3% / 4 users) = 8.3% per user
One common method of logically implementing the fair-share scheduling strategy is to recursively
apply the round-robin scheduling strategy at each level of abstraction (processes, users, groups, etc.)
The time quantum required by round-robin is arbitrary, as any equal division of time will produce the
same results.
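The group-then-user-then-process division described above can be sketched directly (illustrative Python; the function name `fair_share` and the dictionary layout are invented for this example, and the single-group case models the four-user example at the start of this section):

```python
# Sketch of fair-share division: CPU cycles are split equally per group,
# then per user within a group, then per process for that user.

def fair_share(groups):
    """groups: {group: {user: process_count}}.
    Returns {(group, user): percentage per process}."""
    shares = {}
    per_group = 100.0 / len(groups)
    for group, users in groups.items():
        per_user = per_group / len(users)
        for user, nprocs in users.items():
            shares[(group, user)] = per_user / nprocs
    return shares

# Four users in one group, one process each: 25% apiece.
s = fair_share({"g": {"A": 1, "B": 1, "C": 1, "D": 1}})
# If user B starts a second process, each of B's processes gets 12.5%.
s2 = fair_share({"g": {"A": 1, "B": 2, "C": 1, "D": 1}})
```

With three groups of three, two and four users, this reproduces the 11.1% / 16.7% / 8.3% per-user figures listed above.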
*********************************************************************************************
Q.1 Explain the term Multiprocessor Scheduling in terms of loosely coupled and tightly
coupled system.
Ans: Multiprocessor Scheduling
When a computer system contains more than a single processor, several new issues are
introduced into the design of scheduling functions. We will examine these issues and the details
of scheduling algorithms for tightly coupled multi-processor systems.
Classification of multiprocessor systems
Loosely coupled or distributed multiprocessor, or cluster:
 Each processor has its own main memory and I/O channels.
Functionally specialized processors:
 An example is an I/O processor
 Controlled by a master processor
Tightly coupled multiprocessor:
 Processors share a common main memory
 Controlled by the operating system
Synchronization granularity
A good way of characterizing multiprocessors and placing them in context with other
architectures is to consider the synchronization granularity.
Scheduling concurrent processes has to take into account the synchronization of processes.
Synchronization granularity means the frequency of synchronization between processes in a
system.
Applications exhibit parallelism at various levels. There are at least five categories of
parallelism that differ in the degree of granularity.
Types of synchronization granularity
Fine – parallelism inherent in a single instruction stream
Medium – parallel processing or multitasking within a single application
Coarse – multiprocessing of concurrent processes in a multiprogramming environment
Very coarse – distributed processing across network nodes to form a single computing
environment
Independent – multiple unrelated processes
Independent parallelism
With independent parallelism, there is no explicit synchronization among processes.
Key features:
 Separate application or job
 No synchronization
 Same service as a multiprogrammed uni-processor
 Time-sharing systems exhibit this type of parallelism
Coarse and very coarse-grained parallelism
With coarse and very coarse-grained parallelism, there is synchronization among processes,
but at a very gross level. (e.g. at the beginning and at the end)
This kind of situation is easily handled as a set of concurrent processes running on a
multiprogrammed uniprocessor and can be supported on a multiprocessor with little or no
change to user software.
In general, any collection of concurrent processes that need to communicate or synchronize
can benefit from the use of a multiprocessor architecture.
Medium-grained parallelism
Medium-grained parallelism is present in parallel processing or multitasking within a single
application.
A single application can be effectively implemented as a collection of threads within a single
process.
Because the various threads of an application interact so frequently, scheduling decisions
concerning one thread may affect the performance of the entire application.
Fine-grained parallelism
Fine-grained parallelism represents a much more complex use of parallelism than is found in the
use of threads. Usually does not involve the OS but done at compilation stage. High data
dependency ==> high frequency of synch.
Key features:
 Highly parallel applications
 Specialized and fragmented area
Granularity Example: Valve Game Software
Valve is an entertainment and technology company that has developed a number of
popular games, as well as the Source engine.
The Source engine is the 3D engine or animation engine used by Valve for its games.
In recent years, Valve has reprogrammed the Source engine software to use multithreading
to exploit the power of multicore processor chips from Intel and AMD.
Multicore refers to the placement of multiple processors on a single chip, typically 2 or 4
processors. An SMP system can consist of a single chip or multiple chips.
Individual modules, called systems, are assigned to individual processors. In the Source
engine case this means putting rendering on one processor, AI on another processor and physics on
another. This is known as coarse threading.
Many similar or identical tasks are spread across multiple processors; for example, a loop
that iterates over an array of data can be split into a number of smaller parallel loops in
individual threads. This is known as fine-grained threading.
The selective use of fine-grained threading for some systems and single threading
for other systems is known as hybrid threading.
Design issues
Scheduling on a multiprocessor involves three interrelated issues:
 The assignment of processes to processors
 The use of multiprogramming on individual processors
 The actual dispatching of a process
The scheduling depends on:
 degree of granularity
 number of processors available
Assignment of processes to processors
The simplest scheduling approach is to treat the processors as a pooled resource
and assign processes to processors on demand.
Static or dynamic assignment of a process
Static assignment: a process is permanently assigned to one processor from activation until its
completion. A dedicated short-term queue is maintained for each processor.
Advantage: lower scheduling overhead.
Disadvantage: one processor can be idle, with an empty queue, while another processor has a
backlog.
Dynamic assignment: All processes go into one global queue and are scheduled to any available
processor. Thus, over the life of a process, the process may be executed on different processors at
different times.
Advantage: better processor utilization.
Disadvantages: inefficient use of cache memory; it is more difficult for the processors to
communicate.
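The contrast between the two assignment schemes can be made concrete with a toy simulation of dynamic assignment: all processes wait in one global queue, and whichever processor becomes idle takes the next one. The process names, the two-CPU setup, and the simplifying assumption that processors become idle in round-robin order are all illustrative.

```python
# Toy model of dynamic assignment: one global ready queue shared by
# all processors; each idle processor pulls the next waiting process.
from collections import deque

def dynamic_assign(processes, n_cpus):
    queue = deque(processes)              # single global ready queue
    schedule = {cpu: [] for cpu in range(n_cpus)}
    cpu = 0
    while queue:
        schedule[cpu].append(queue.popleft())
        cpu = (cpu + 1) % n_cpus          # assume CPUs become idle in turn
    return schedule

print(dynamic_assign(["P1", "P2", "P3", "P4", "P5"], 2))
# → {0: ['P1', 'P3', 'P5'], 1: ['P2', 'P4']}
```

Under static assignment, by contrast, each process would be pinned to one per-processor queue for its whole lifetime, so a backlog on one CPU could not spill over to an idle one.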
--------------------------------------------------------------------------------------------------------------------
Q.1 Explain the term Thread scheduling in concurrent processing.
Ans: Key features of threads:
An application can be a set of threads that cooperate and execute concurrently in the same
address space.
Threads running on separate processors yield a dramatic gain in performance.
General approaches to thread scheduling:
Load sharing: processes are not assigned to a particular processor. A global queue of ready
threads is maintained, and each processor, when idle, selects a thread from the queue.
Versions of Load Sharing:
First come first served (FCFS): when a job arrives, each of its threads is placed consecutively at
the end of the shared queue.
Smallest number of threads first: the shared ready queue is organized as a priority queue, with
highest priority given to threads from jobs with the smallest number of unscheduled threads.
Preemptive smallest number of threads first: highest priority is given to jobs with the smallest
number of unscheduled threads. An arriving job with a smaller number of threads than an
executing job will preempt threads belonging to the scheduled job.
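The "smallest number of threads first" variant can be sketched with an ordinary priority queue keyed on each job's count of unscheduled threads. The job names and thread counts below are hypothetical.

```python
# Sketch of "smallest number of threads first" load sharing: the shared
# ready queue is a priority queue; jobs with the fewest unscheduled
# threads get the highest priority.
import heapq

ready = []
jobs = {"A": 5, "B": 2, "C": 8}   # job -> number of unscheduled threads

for job, remaining in jobs.items():
    # heapq is a min-heap, so the smallest thread count pops first.
    heapq.heappush(ready, (remaining, job))

order = [heapq.heappop(ready)[1] for _ in range(len(jobs))]
print(order)  # → ['B', 'A', 'C']
```

The preemptive variant differs only in that a newly arriving job with a smaller count would also displace currently running threads, not just reorder the queue.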
Gang scheduling: a set of related threads is scheduled to run on a set of processors at the same
time, on a one-to-one basis; that is, the threads that make up a single process are scheduled
simultaneously.
Dedicated processor assignment: each program is allocated a number of processors equal to
the number of threads in the program, for the duration of the program execution (this is the
opposite of the load-sharing approach).
Comparison with gang scheduling:
Similarities - threads are assigned to processors at the same time
Differences - in dedicated processor assignment threads do not change processors.
Dynamic scheduling: the application is responsible for assigning its threads to processors. It
may alter the number of threads during the course of execution.
On request for a processor, OS does the following:
If there are idle processors, use them to satisfy the request.
Otherwise, if the job making the request is a new arrival, allocate it a single processor by taking
one away from any job currently allocated more than one processor.
If any portion of the request cannot be satisfied, it remains outstanding until either a processor
becomes available for it or the job rescinds the request.
Upon release of one or more processors (including job departure), OS does the following:
Scan the current queue of unsatisfied requests for processors. Assign a single processor to each
job in the list that currently has no processors (i.e., to all waiting new arrivals). Then scan the list
again, allocating the rest of the processors on an FCFS basis. The overhead of this approach may
negate this apparent performance advantage.
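The request-handling policy described above can be sketched directly. The function below is an illustrative model, not an OS interface: the job names, the `allocation` dictionary, and the round-robin-free "steal from the first over-allocated job" choice are assumptions made for the example.

```python
# Sketch of the dynamic-scheduling request policy: idle processors
# satisfy a request first; failing that, a new arrival takes one
# processor away from some job currently holding more than one.
def request_processor(job, idle, allocation, is_new_arrival):
    if idle > 0:
        allocation[job] = allocation.get(job, 0) + 1
        return idle - 1                       # one fewer idle processor
    if is_new_arrival:
        for other, count in list(allocation.items()):
            if other != job and count > 1:
                allocation[other] = count - 1  # steal one processor
                allocation[job] = allocation.get(job, 0) + 1
                return idle
    return idle  # request cannot be satisfied; it remains outstanding

alloc = {"J1": 3, "J2": 1}
idle = request_processor("J3", 0, alloc, is_new_arrival=True)
print(alloc)  # → {'J1': 2, 'J2': 1, 'J3': 1}
```

The release path would mirror this: first give one processor to each waiting job that has none, then hand out the remainder FCFS.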
---------------------------------------------------------------------------------------------------------------------------
Q.1 How do you classify the different approaches for Real-time scheduling? State various
Real-time scheduling techniques available and discuss any one in detail.
Ans: Real-Time Scheduling
Correctness of the system depends not only on the logical result of the computation,
but also on the time at which the results are produced.
Tasks or processes attempt to control or react to events that take place in the outside world.
These events occur in "real time" and processes must be able to keep up with them.
Examples:
o Control of laboratory experiments
o Process control plants
o Robotics
o Air traffic control
o Telecommunications
o Military command and control systems
Types of Tasks
A. With respect to urgency
A hard real-time task is one that must meet its deadline;
otherwise it will cause undesirable damage or a fatal error to the system.
A soft real-time task has an associated deadline that is desirable but not mandatory;
it still makes sense to schedule and complete the task even if it has passed its deadline.
B. With respect to execution
A non-periodic (aperiodic) task has a deadline by which it must finish or start,
or it may have a constraint on both start and finish time.
A periodic task is one that executes once per period T, i.e., exactly T units apart.
Characteristics of Real-time Operating Systems
Real-time operating systems can be characterized as having unique requirements in five general
areas:
o Determinism
o Responsiveness
o User control
o Reliability
o Fail-soft operation
Determinism
Operations are performed at fixed, predetermined times or within predetermined time intervals
Concerned with how long the operating system delays before acknowledging an interrupt
Responsiveness
How long, after acknowledgment, it takes the operating system to service the interrupt:
 Includes the amount of time to begin execution of the interrupt handler
 Includes the amount of time to perform the interrupt handling itself
Determinism and responsiveness together make up the response time to external events.
User control
It is essential to allow the user fine-grained control over task priority. The user should be able to
distinguish between hard and soft tasks and to specify relative priorities within each class.
The user may also be able to specify:
o task priorities
o what paging or process-swapping policy to use
o what processes must always reside in main memory
o what disk transfer algorithms to use
o the rights of processes
Reliability
Loss or degradation of performance may have catastrophic consequences.
Fail-soft operation is a characteristic that refers to the ability of a system to fail in such a way
as to preserve as much capability and data as possible.
o The system will attempt either to correct the problem or to minimize its effects while
continuing to run.
Stability: a real-time system is stable if, in cases where it is impossible to meet all task deadlines,
the system will meet the deadlines of its most critical, highest-priority tasks, even if some less
critical task deadlines are not always met.
Real-Time Scheduling
Real-time scheduling is one of the most active areas of research in computer science.
The algorithms can be classified along three dimensions:
(1) whether a system performs schedulability analysis,
(2) if it does, whether it is done statically or dynamically, and
(3) whether the result of the analysis itself produces a schedule or plan
according to which tasks are dispatched at run time.
When to dispatch
The problem here concerns how often the operating system will intervene to make a scheduling
decision. Examples of different policies are listed below:
o Round-robin preemptive scheduler
o Priority-driven non-preemptive scheduler
o Priority-driven preemptive scheduler on preemption points
o Immediate preemptive scheduler
How to schedule
Classes of algorithms
Static table-driven approaches: these perform a static analysis of feasible schedules of
dispatching. The result of the analysis is a schedule that determines, at run time, when a task
must begin execution.
 Applicable to periodic tasks
 Inflexible approach: any change requires the schedule to be redone
Static priority-driven preemptive approaches: again, a static analysis is performed,
but no schedule is drawn up. Rather, the analysis is used to assign priorities to tasks,
so that a traditional priority-driven preemptive scheduler can be used.
Dynamic planning-based approaches: feasibility is determined at run time (dynamically).
An arriving task is accepted for execution only if it is feasible to meet its time constraints.
Dynamic best effort approaches: no feasibility analysis is performed.
The system tries to meet all deadlines and aborts any started process whose deadline is missed.
Used for non-periodic tasks.
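A dynamic best-effort policy can be sketched as follows: tasks are simply run in earliest-deadline order with no feasibility analysis, and any task that can no longer meet its deadline when its turn comes is aborted. The task tuples (name, run time, deadline) are invented for illustration.

```python
# Sketch of a dynamic best-effort scheduler: no admission test; run in
# earliest-deadline order and abort tasks whose deadlines are missed.
def best_effort(tasks):
    now = 0
    completed, aborted = [], []
    for name, run_time, deadline in sorted(tasks, key=lambda t: t[2]):
        if now + run_time > deadline:
            aborted.append(name)   # would finish past its deadline: abort
        else:
            now += run_time        # run the task to completion
            completed.append(name)
    return completed, aborted

tasks = [("T1", 3, 4), ("T2", 3, 5), ("T3", 1, 9)]
print(best_effort(tasks))  # → (['T1', 'T3'], ['T2'])
```

Here T2 is aborted because, after T1 runs for 3 time units, T2 could only finish at time 6, past its deadline of 5; until a deadline actually slips, though, the system has no way of knowing whether the set was feasible, which is the defining weakness of best-effort approaches.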
*******************************************************************************************