MODERN OPERATING SYSTEMS Third Edition ANDREW S. …
INTRODUCTION TO OPERATING SYSTEMS
Summer Semester 2013
PROCESSES AND THREADS
Intro To OS, Chapter 2 By Rana Umer
Outline
• Process
• Threads
• Process Scheduling
Central Processing Unit (CPU)
Process
• What is a process?
– A program in execution...
– But some programs run as multiple processes...
– ...and the same program can be running as more than one process at the same time.
– Each process has memory-management information (a page table).
– Processes run in user or system (kernel) mode.
Process
• What is pseudoparallelism?
• Several jobs appear to run at the same time.
• In a multiprogramming system, the CPU switches from program to program; at any instant of time, the CPU is running only one program.
• This gives users the illusion of parallelism.
• The other possibility is real parallelism, in which two or more processes are actually running at once because the computer system is a parallel processor (a multiprocessor system), i.e., has more than one processor.
Process
Brief Explanation
• A process is an instance of a program running in a
computer.
• A process can initiate a subprocess, called a child process (the initiating process is sometimes referred to as its parent). A child process is a replica of the parent process and shares some of its resources, but cannot exist if the parent is terminated.
• Processes can exchange information or synchronize their operation through several methods of interprocess communication (IPC).
The Process Model
• In this model, the OS is organized into a number of sequential processes, or just processes for short.
• Conceptually, each process has its own virtual CPU.
• A collection of processes runs in (pseudo)parallel.
• The CPU performs multiprogramming.
• A single processor may be shared among several processes, with some scheduling algorithm used to determine when to stop work on one process and service a different one.
The Process Model
(a) Multiprogramming of four programs. (b) Conceptual model of
four independent, sequential processes. (c) Only one
program is active at once.
Difference Between Process & Program
Process Creation
There are several mechanisms for creating a process:
• System initialization.
• Execution of a process-creation system call by a running process.
• A user request to create a new process.
• Initiation of a batch job.
Process Termination
Several events which cause process termination:
• Normal exit (voluntary).
• Error exit (voluntary).
• Fatal error (involuntary).
• Killed by another process (involuntary).
Process Hierarchies
Modern general-purpose operating systems permit a user to create and destroy processes.
• A parent creates a child process; child processes can create their own processes.
• This forms a hierarchy.
• UNIX calls such a hierarchy a "process group".
• Windows has no concept of a process hierarchy:
• all processes are created equal.
Process States
An OS kernel that allows multitasking needs processes to have certain states.
• First, the process is "created": it is loaded from a secondary storage device into main memory. The process scheduler then assigns it the "waiting" state.
• While "waiting", the process waits for the scheduler to perform a so-called context switch and load it onto the processor.
• The process state then becomes "running", and the processor executes the process's instructions.
• If a process needs to wait for a resource (user input, a file to open, ...), it is assigned the "blocked" state. The state changes back to "waiting" when the process no longer needs to wait.
• Once the process finishes execution, or is terminated by the operating system, it is no longer needed. It is removed immediately, or it is moved to the "terminated" state, where it waits to be removed from main memory.
Process States
Possible process states (lifecycle of a process):
• Running
• Blocked
• Ready
A process can be in the running, blocked, or ready state. Transitions between these states are as shown.
Implementation of Processes
• The OS organizes the data about each process in a table
naturally called the Process Table.
• Each entry in this table is called a Process Table
Entry (PTE) or Process Control Block.
• One entry per process.
• The central data structure for process management.
• A process state transition (e.g., moving from blocked to
ready) is reflected by a change in the value of one or
more fields in the PTE.
• We have converted an active entity (a process) into a data structure.
Implementation of Processes
Process Table
• The process table is a data structure maintained by the operating system to facilitate context switching, scheduling, and other tasks. Each entry in the table, often called a context block, contains information about a process, such as its name, state, priority, registers, and a semaphore it may be waiting on.
• The exact contents of a context block depends on the
operating system.
• For instance, if the OS supports paging, then the
context block contains an entry to the page table.
Implementation of Processes
Some of the fields of a typical process table entry
Implementation of Processes
Tasks on interrupt
Implementation of Processes
Q1. Should the OS try to maximize processor use?
Q2. What should the OS do when a process does something that will take a long time?
Q3. Can two processes be running at the same time?
Threads
• A thread is the smallest sequence of programmed instructions that can be managed independently by an operating system scheduler.
• A thread is a lightweight process. The implementation of threads and processes differs from one operating system to another.
• In most cases, a thread is contained inside a process.
• Multiple threads can exist within the same process and share resources such as memory.
• Each thread has its own program counter, register set, and stack.
Threads have some of the properties of processes.
Threads
• Multi-threading is a widespread programming
and execution model that allows multiple
threads to exist within the context of a single
process.
• These threads share the process's resources, but are able to execute independently.
Thread Libraries
A Thread Library provides the programmer an API for
creating and managing threads. There are two primary ways
of implementing a thread library.
1. The first approach is to provide a library entirely in
user space with no kernel support. All code and data
structures for the library exist in user space. This
means that invoking a function in the library results in
a local function call in user space and not a system
call.
2. The second approach is to implement a kernel-level
library supported directly by the OS. In this case, code
and data structures for the library exist in kernel
space.
Invoking a function in the API for the library typically results in a system call to the kernel.
Threads Calls
• thread_create
• thread_exit
• thread_wait
• thread_yield
Threads Calls
thread_create
When multithreading is present, processes
normally start with a single thread present. This
thread has the ability to create new threads by
calling a library procedure, for example,
thread_create. A parameter to thread_create
typically specifies the name of a procedure for
the new thread to run.
Threads Calls
thread_exit
When a thread has finished its work, it can exit by calling a library procedure, say, thread_exit.
thread_wait
• In some thread systems, one thread can wait for a (specific) thread to exit by calling a procedure such as thread_wait.
thread_yield
• thread_yield allows a thread to voluntarily give up the CPU to let another thread run.
Thread creation and termination are very much like process creation and termination, with approximately the same options as well.
Process vs. Threads
• Some items are shared by all threads in a process.
• Other items are private to each thread.
• Each thread has its own stack!
Thread Model/Implementation
• There are three kinds of threads to be managed in a modern system:
1. User threads
2. Kernel threads
3. Hybrid threads
• User threads are supported above the kernel,
without kernel support. These are the threads
that application programmers would put into
their programs.
Thread Model/Implementation
• Kernel threads are supported within the
kernel of the OS itself.
• All modern OS support kernel level threads,
allowing the kernel to perform multiple
simultaneous tasks and/or to service
multiple kernel system calls simultaneously.
Thread Model/Implementation
• In hybrid threading, each kernel-level thread has some set of user-level threads.
• M:N maps some M number of application threads onto
some N number of kernel entities
• In this design, the kernel is aware of only the kernel-level
threads and schedules those. Some of those threads may
have multiple user-level threads multiplexed on top of
them.
• These user-level threads are created, destroyed, and
scheduled just like user-level threads in a process that
runs on an operating system without multithreading
capability.
Thread Model/Implementation
• Hybrid threads
Thread Model
Kernel Model Implementation.
The user threads must be mapped to kernel
threads, using one of the following strategies.
–Many To One Model
–One To One Model
–Many to Many Model
Thread Model
Many To One Model
• In the many-to-one model, many user-level threads are
all mapped onto a single kernel thread.
• Thread management is handled by the thread library in
user space, which is very efficient.
• However, if a blocking system call is made, then the
entire process blocks, even if the other user threads
would otherwise be able to continue.
• Because a single kernel thread can operate only on a
single CPU, the many-to-one model does not allow
individual processes to be split across multiple CPUs.
Thread Model
One To One Model
• The one-to-one model creates a separate kernel thread to
handle each user thread.
• One-to-one model overcomes the problems listed above
involving blocking system calls and the splitting of
processes across multiple CPUs.
Thread Model
Many To Many Model
• The many-to-many model multiplexes any number of
user threads onto an equal or smaller number of kernel
threads, combining the best features of the one-to-one
and many-to-one models.
• Users have no restrictions on the number of threads
created.
• Blocking kernel system calls do not block the entire
process.
• Processes can be split across multiple processors.
• Individual processes may be allocated variable numbers
of kernel threads, depending on the number of CPUs
present and other factors.
Thread Usage
• Multiple parallel activities.
• It is faster to create and destroy a thread than a whole process.
• When one thread waits, another might be able to execute.
• Threads in one process share a data area, enabling efficient resource sharing and easy communication.
• Efficient time sharing.
• Doing background processing.
Thread Usage
A word processor with three threads
Interprocess Communication
• Processes frequently need to communicate with other
processes.
• A capability supported by some operating systems that
allows one process to communicate with another process.
• The processes can be running on the same computer or
on different computers connected through a network.
• IPC enables one application to control another
application, and for several applications to share the
same data without interfering with one another.
• IPC is required in all multiprocessing operating systems.
Interprocess Communication
There are three issues here.
1. How one process can pass information to another.
2. Making sure two or more processes do not get in each other's way when engaging in critical activities (suppose two processes each try to grab the last 1 MB of memory).
3. Proper sequencing when dependencies are present: if process A produces data and process B prints them, B has to wait until A has produced some data before starting to print.
Interprocess Communication
• Race condition
– Two or more processes are reading or writing some shared data and the final result depends on who runs precisely when.
– Race conditions result in intermittent errors that are very difficult to debug.
• It is very difficult to test programs for race conditions.
– You must recognize where race conditions can occur.
• Programs can run for months or years before a race condition is discovered.
Interprocess Communication
Race Conditions
Two processes want to access shared memory at the same time.
Interprocess Communication
Critical Regions
How do we avoid race conditions?
• Mutual exclusion
– Prohibit more than one process from reading and
writing the shared data at the same time
• Critical region
– Part of the program where the shared memory is
accessed
Interprocess Communication
Critical Regions
How do we avoid race conditions?
• Critical region
– However, sometimes a process has to access shared memory or files, or do other critical things that can lead to races. The part of the program where the shared memory is accessed is called the critical region or critical section.
– If we could arrange matters such that no two
processes were ever in their critical regions at the
same time, we could avoid races.
Interprocess Communication
Critical Regions
How do we avoid race conditions?
Critical region
Four conditions to provide mutual exclusion:
1. No two processes may be simultaneously in their critical regions.
2. No assumptions may be made about speeds or numbers of CPUs.
3. No process running outside its critical region may block another process.
4. No process should have to wait forever to enter its critical region.
Interprocess Communication
Critical Regions
Mutual exclusion using critical regions
Interprocess Communication
Mutual Exclusion with Busy Waiting
Proposals for Achieving Mutual exclusion
• Disable interrupt
• Lock Variables
• Strict Alternation
• Peterson’s Solution
• The TSL Instruction
Interprocess Communication
Mutual Exclusion with Busy Waiting
• Disabling interrupts (proposal)
– After entering the critical region, disable all interrupts.
– No clock interrupts can occur.
– Since the clock is just an interrupt, no CPU preemption can occur.
– Disabling interrupts is useful for the OS itself, but not for users...
– An unattractive approach.
Interprocess Communication
Mutual Exclusion with Busy Waiting
• Lock variable
– A software solution
– A single, shared (lock) variable, initially 0
– A process wants to enter its critical region
• First tests the lock
–if 0 then set it to 1
–Enter its critical region
–Otherwise wait until it becomes 0
– 0 means no process is in critical region
– 1 means process is in critical region
– Same problem as the spooler directory: two processes can both read the lock as 0 before either sets it to 1.
Interprocess Communication
Mutual Exclusion with Busy Waiting
• Lock variable
EnterCriticalSec(lock)
{
    while (lock != 0)
        ;        /* busy-wait until the lock is free */
    lock = 1;    /* flawed: another process may slip in here */
}

LeaveCriticalSec(lock)
{
    lock = 0;
}
Interprocess Communication
Mutual Exclusion with Busy Waiting
Strict Alternation (a shared turn variable)
• int turn = 0;
• This means that only process #0 is allowed to enter its critical region. If process #2, for example, wishes to enter its critical region, it must busy-wait for the variable to change to 2.

void enterCriticalSection(int myProcessNumber)
{
    while (turn != myProcessNumber) {  // While it's not my turn,
        // ...busy-wait.
    }
}

Eventually, process #0 enters its critical region, and when it exits, it sets the shared variable to 1.

void leaveCriticalSection()
{
    turn = (turn + 1) % NUMBER_OF_PROCESSES;  // Let the next process have a turn.
}
Interprocess Communication
Mutual Exclusion with Busy Waiting
Strict Alternation
• When process #1 enters its critical region, it sets the
variable to 2 when it leaves.
• Process #2 would still be waiting by this time, and would
enter its critical region as soon as the variable goes to 2.
• If it were the last of the cooperating processes, it would set
the variable back to 0 again when it left its critical region.
A few problems with this solution:
1. It requires that you know in advance how many processes will want to share the memory.
2. A process may be held up indefinitely waiting for its turn, even if no other process is in its critical section (which violates one of the four requirements above).
Interprocess Communication
Mutual Exclusion with Busy Waiting
Strict Alternation Concepts:
• Turn Variable
• Busy waiting
– Continuously testing a variable until some value
appears
• Spin lock
– A lock that uses busy waiting is call a spin lock
Interprocess Communication
Mutual Exclusion with Busy Waiting
Peterson’s Solution
• The solution is a combined approach (Lock &
Strict Alternation).
• Each process gets its own lock variable, and a variable 'turn' is used to break ties in case of conflict.
The procedure for entering a critical region now
becomes:
– Set your own lock variable to indicate that you wish to
enter your critical section
– Then check all the other lock variables and see whose
turn it is.
Interprocess Communication
Mutual Exclusion with Busy Waiting
Peterson’s Solution
public class Mutex {
    boolean[] interested = new boolean[2]; // Whether each process wants to enter its critical region.
    int turn;                              // In case of conflict, whose turn is it?

    void enterCriticalSection(int thisProcess) {
        int otherProcess = 1 - thisProcess;
        interested[thisProcess] = true;
        turn = otherProcess;               // No, please, after you.
        while (interested[otherProcess] && turn != thisProcess) {
            /* busy-wait */
        }
    }

    void leaveCriticalSection(int thisProcess) {
        interested[thisProcess] = false;
    }
}
Interprocess Communication
Mutual Exclusion with Busy Waiting
TSL/TAS Instruction (Test and Set Lock)
• This involves a special hardware instruction called Test and Set Lock.
• This instruction has the special property that it is indivisible (atomic), even across processors.
• It does not require supervisor mode, but it does require hardware support (most modern processors implement it); it uses a special memory-access mode.
The slide shows an assembly-language implementation.
Interprocess Communication
Sleep and wakeup
• Sleep causes the caller to block.
• Wakeup awakens a process that is sleeping.
• Sleep and wakeup block processes, instead of wasting CPU time, when they are not allowed to enter their critical region.
Interprocess Communication
Producer-Consumer Problem
• Two processes share a common, fixed-size buffer.
• Producer puts information into the buffer
• Consumer takes information from buffer
Interprocess Communication
Producer-Consumer Problem
Example:
• The essence of the problem is that one process is producing
things (like, maybe decoding video from a file), and the
other is consuming things (like, maybe displaying the
decoded video on the screen).
• Between the producer and consumer is a fixed-size buffer.
• When the buffer is neither full nor empty, the producer
may freely place items in the buffer, and the consumer may
freely remove them.
• When the buffer is full, however, the producer must wait
till the consumer removes some items. Similarly, when the
buffer is empty, the consumer must wait until the producer
has placed items in the buffer before it can remove any
items.
Interprocess Communication
Semaphore
• An integer variable that counts the number of wakeups saved for future use.
• A semaphore could have the value 0, indicating that no wakeups were saved.
• Two operations:
– down (sleep) = P (wait)
• Checks the semaphore; if the value is greater than 0, decrements it; otherwise the caller sleeps.
– up (wakeup) = V (signal)
• Increments the value of the semaphore addressed.
Interprocess Communication
Semaphore
Solve Producer-Consumer problem
• Essentially the first semaphore counts the number of slots in the
buffer which are empty. This will allow the producer to Down
(Sleep)=P(wait) for an empty slot, and allow the consumer to Up
(wakeup)=V(Signal) the producer to produce more items.
• Initially the buffer is empty (we assume), so the emptySlots
semaphore has its counter initially set to BUFFER_CAPACITY.
• The second semaphore counts the number of slots in the buffer
which contain an item.
• The producer can use it to SIGNAL the consumer, in case the
consumer is Waiting on an item.
• The third semaphore is simply to ensure that both processes do
not attempt to update the buffer simultaneously. It is ensuring
mutual exclusion while in the critical section of updating the
shared buffer.
Interprocess Communication
Semaphore
Solve Producer-Consumer problem
void producer() {
    for (;;) {
        Object item = produceItem();
        wait(emptySlots);        // Wait for buffer space.
        wait(mutualExclusion);   // Enter critical section.
        buffer.insertItem(item);
        signal(mutualExclusion); // Leave critical section.
        signal(fullSlots);       // Signal the consumer.
    }
}

and for the consumer:

void consumer() {
    for (;;) {
        wait(fullSlots);         // Wait for an item in the buffer.
        wait(mutualExclusion);   // Enter critical section.
        Object item = buffer.removeItem();
        signal(mutualExclusion); // Leave critical section.
        signal(emptySlots);      // Signal the producer.
        consumeItem(item);
    }
}
Interprocess Communication
Mutexes
• A simplified version of the semaphore, called a
Mutex
• Mutexes are good only for managing mutual
exclusion to some shared resource or piece of code
• A mutex is a variable that can be in one of two states: unlocked or locked (0 = unlocked, nonzero = locked).
Interprocess Communication
Mutexes
Implementation of mutex_lock and mutex_unlock
Interprocess Communication
Monitors
• A monitor is a collection of procedures, variables, and
data structures that are all grouped together in a
special kind of module or package
• Processes may call the procedures in a monitor
whenever they want to
• Process can not access internal data structures from
outside procedures
• Only one process can be active in a monitor at any
instant
• To block a process we need condition variables with
two operations
– Wait
– signal
Interprocess Communication
Classic IPC Problems
• Dining philosophers problem
– A philosopher either eats or thinks.
– When hungry, a philosopher tries to pick up two forks and eat.
• Reader-writer problem
– Models access to a database.
Interprocess Communication
Classic IPC Problems
Dining Philosophers Problem
Lunch time in the Philosophy Department.
Interprocess Communication
Classic IPC Problems
Dining Philosophers Problem
• There are 5 philosophers sitting at a round table.
Between each adjacent pair of philosophers is a
chopstick. In other words, there are five
chopsticks.
• Each philosopher does two things: think and eat.
• The philosopher thinks for a while, and then
stops thinking and becomes hungry.
• When the philosopher becomes hungry, he/she
cannot eat until he/she owns the chopsticks to
his/her left and right.
• When the philosopher is done eating he/she puts
down the chopsticks and begins thinking again.
Interprocess Communication
Classic IPC Problems
Dining Philosophers Solution
• One idea is to instruct each philosopher to behave as follows:
– Think until the left fork is available; when it is, pick it up.
– Think until the right fork is available; when it is, pick it up.
– Eat.
– Put the right fork down.
– Put the left fork down.
– Repeat from the beginning.
(If all five philosophers pick up their left forks at the same moment, this idea deadlocks: each waits forever for a right fork.)
Interprocess Communication
Classic IPC Problems
The Reader and Writer Problem
• Multiple processes may read the database at the same time.
• If one process is updating (writing) the database, no other processes may have access to it, not even readers.
• Solution: the first reader to get access to the database does a down on the semaphore db.
– Readers increment a counter, rc.
– As readers leave, they decrement the counter, and the last one out does an up on the semaphore,
– allowing a blocked writer to enter.
Scheduler Activations
• An approach devised by Anderson et al. in 1992.
• Its aim is to get the efficiency and flexibility of user-level threads.
• The goal of the scheduler-activation work is to mimic the functionality of kernel threads with the performance usually associated with thread packages implemented in user space.
• When a thread blocks on a system call or on a page fault, it should be possible to run other threads within the same process, if any are ready.
• Efficiency is achieved by avoiding unnecessary transitions between user and kernel space, for example when a thread blocks waiting for another thread to do something.
• The mechanism relies on upcalls.
Scheduler Activations
Upcall
• Normally, user programs call functions in the kernel; these are known as system calls.
• Sometimes, though, the kernel calls a user-level process to report an event: an upcall.
Pop-Up Threads
• The arrival of a message causes the system to create
a new thread to handle the message. Such a thread
is called a Pop-up Thread
• They do not have any history: no registers, stack, etc.
• The new thread is given the incoming message to
process
• Having the pop-up thread run in kernel space is
usually easier and faster than putting it in user space
• A Pop-up Thread in kernel space can easily access all
the kernel’s tables and the I/O devices, which may be
needed for interrupt processing
Making Single-Threaded Code Multithreaded
See page 97 of the textbook.
Threads: Benefits
• User responsiveness
– When one thread blocks, another may handle user I/O
– But: depends on threading implementation
• Resource sharing: economy
– Memory is shared (i.e., address space shared)
– Open files, sockets shared
• Speed
• Utilizing hardware parallelism
– Like heavyweight processes (whose contextual information is saved), threads can also make use of multiprocessor architectures.
Threads: Drawbacks
• There is a lack of coordination between threads and the operating-system kernel.
• A thread may crash your application: if there is a problem in one thread and it crashes, it can crash your whole application.
• Threads' functioning depends on OS support.
Scheduling
• Which process to run next?
• When a process is running, should the CPU run it to completion, or switch between different jobs?
• Process switching is expensive
– Switch between user mode and kernel mode
– Current process must be saved
– Memory map must be saved
– Cache flushed and reloaded
Scheduling
CPU-I/O Bursts
• Process execution consists of a cycle of CPU execution and I/O wait.
• Process execution begins with a CPU burst.
• A CPU burst is followed by an I/O burst.
Processes are accordingly called:
• Compute-bound
• I/O-bound
Compute-bound processes typically have long CPU bursts and thus infrequent I/O waits. I/O-bound processes have short CPU bursts and thus frequent I/O waits. The key factor is the length of the CPU burst, not the length of the I/O burst. I/O-bound processes are I/O bound because they do not compute much between I/O requests, not because they have especially long I/O requests. The basic idea here is that if an I/O-bound process wants to run, it should get a chance quickly.
Scheduling
CPU-I/O Burst
Bursts of CPU usage alternate with periods of waiting for I/O. (a) A CPU-bound
process. (b) An I/O-bound process.
A Brief Note on CPU Bursts
• As CPUs get faster, processes are becoming more I/O bound.
• The basic idea is that if an I/O-bound process wants to run, it should get a chance quickly.
CPU Scheduler
Multiprogramming: a number of programs can be in memory at the same time, allowing overlap of CPU and I/O.
Jobs (batch) are programs that run without user interaction.
User (time-shared) programs are those that may have user interaction.
Process is the common name for both.
CPU-I/O burst cycle: characterizes process execution, which alternates between CPU and I/O activity. CPU times are generally much shorter than I/O times.
Preemptive scheduling: an interrupt causes the currently running process to give up the CPU and be replaced by another process.
CPU Scheduler
• Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
• CPU-scheduling decisions may take place when a process:
1. Switches from the running to the waiting state
2. Switches from the running to the ready state
3. Switches from the waiting to the ready state
4. Terminates
• Scheduling under 1 and 4 is nonpreemptive.
• All other scheduling is preemptive.
CPU Scheduler
• Preemptive
– If a process is still running at the end of its time interval, it is suspended and another process is picked.
• Non-preemptive
– Picks a process to run and then lets it run until it blocks or voluntarily releases the CPU.
CPU Scheduler
The Dispatcher
• The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program to restart that program
• Dispatch latency: the time it takes for the dispatcher to stop one process and start another running.
Categories of Scheduling Algorithms
• Different environments need different scheduling
algorithms because of different application areas
– Batch
• Maximize jobs per hour
• Non-preemptive algorithms reduce process switches
– Interactive (user based)
• Preemption is necessary
• Respond to requests quickly (meet user expectations)
– Real time
• Processes run quickly and block
• Avoid losing data
Categories of Scheduling Algorithms
Batch systems: non-preemptive algorithms, or preemptive algorithms with
long time periods for each process, are often acceptable. This approach
reduces process switches and thus improves performance.
Interactive users: preemption is essential to keep one process from
hogging the CPU and denying service to the others.
Real-time constraints: preemption is, oddly enough, sometimes not
needed, because the processes know that they may not run for long
periods of time and usually do their work and block quickly.
The difference from interactive systems is that real-time systems run only
programs that are intended to further the application at hand.
Interactive systems are general purpose and may run arbitrary programs
that are not cooperative or are even harmful.
Scheduling Algorithm Goals
Some goals of the scheduling algorithm under different
circumstances.
Criteria For Performance Evaluation
UTILIZATION     The fraction of time a device is in use. (ratio of in-use
                time to total observation time)
THROUGHPUT      The number of job completions in a period of time.
                (jobs / second)
SERVICE TIME    The time required by a device to handle a request. (seconds)
QUEUEING TIME   Time spent on a queue waiting for service from the device.
                (seconds)
RESIDENCE TIME  The time spent by a request at a device.
                RESIDENCE TIME = SERVICE TIME + QUEUEING TIME
RESPONSE TIME   Time used by a system to respond to a user job. (seconds)
THINK TIME      The time spent by the user of an interactive system to
                figure out the next request. (seconds)
The goal is to optimize both the average and the amount of variation.
Scheduling in Batch Systems
• First-come first-served
• Shortest job first
• Three-level scheduling
Scheduling in Batch Systems
FIRST-COME, FIRST-SERVED:
• (FCFS) same as FIFO
• Simple and fair, but performance can be poor:
the average queueing time may be long
• Each process joins the Ready queue
• When the current process ceases to execute, the
oldest process in the Ready queue is selected
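The long average queueing time is easy to demonstrate. A minimal sketch (the function name and burst times are illustrative, not from the slides):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each job under first-come, first-served:
    each job waits for the sum of all bursts queued ahead of it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # job starts only after all earlier jobs finish
        elapsed += burst
    return waits

# A long job arriving first penalizes every short job behind it.
print(fcfs_waiting_times([10, 1, 2]))  # → [0, 10, 11]
```

With the 10-unit job first, the average wait is (0 + 10 + 11) / 3 = 7 units, even though two of the three jobs are tiny.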
Scheduling in Batch Systems
FIRST-COME, FIRST SERVED:
(Gantt chart: five processes run to completion in arrival order along a 0–20
time axis.)
Scheduling in Batch Systems
FIRST-COME, FIRST-SERVED:
Disadvantages:
• Short jobs suffer a penalty
• Favors CPU-bound processes over I/O-bound
processes
– I/O-bound processes have to wait until the
CPU-bound process completes
• May result in inefficient use of both the
processor and I/O devices
Scheduling in Batch Systems
Shortest Process/Job Next (SPN or SJF)
(Gantt chart: the same five processes, reordered so the shortest jobs run
first along a 0–20 time axis.)
• Non-preemptive policy
• Process with shortest expected processing time is
selected next
• Short process jumps ahead of longer processes
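Letting short jobs jump ahead provably minimizes the average waiting time when all jobs are available at once. A small sketch, using the same illustrative workload as the FCFS example (names are invented for illustration):

```python
def sjf_waiting_times(bursts):
    """Nonpreemptive shortest-job-first over jobs all present at time 0.
    Returns each job's waiting time, indexed by its original position."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

# Bursts [10, 1, 2]: SJF runs the 1- and 2-unit jobs before the 10-unit job.
print(sjf_waiting_times([10, 1, 2]))  # → [3, 0, 1]
```

The average wait drops to (3 + 0 + 1) / 3 ≈ 1.33 units, versus 7 units under FCFS for the same workload.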
Shortest Remaining Time Next
A preemptive version of shortest job first is shortest remaining time next. With
this algorithm, the scheduler always chooses the process whose remaining run
time is shortest. Again, the run time has to be known in advance. When a new
job arrives, its total time is compared to the current process's remaining time.
If the new job needs less time to finish than the current process, the current
process is suspended and the new job started.
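The rule above can be simulated one time unit at a time. A minimal sketch, assuming the (arrival, burst) pairs are known in advance as the algorithm requires:

```python
def srtn_finish_times(jobs):
    """jobs: list of (arrival_time, burst_time) tuples.
    Simulates shortest-remaining-time-next in unit time steps;
    returns the finish time of each job, indexed like the input."""
    n = len(jobs)
    remaining = [burst for _, burst in jobs]
    finish, t, done = [0] * n, 0, 0
    while done < n:
        ready = [i for i in range(n) if jobs[i][0] <= t and remaining[i] > 0]
        if not ready:          # CPU idle until the next arrival
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining time wins
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
            done += 1
    return finish

# Arrivals 0,1,2,3 with bursts 8,4,9,5: each short arrival preempts the long job.
print(srtn_finish_times([(0, 8), (1, 4), (2, 9), (3, 5)]))  # → [17, 5, 26, 10]
```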
Scheduling in Batch Systems
Three-Level Scheduling
1. Admission scheduler
Decides which jobs to admit to the system.
2. Memory scheduler
Determines which processes are kept in memory
and which on the disk.
3. CPU scheduler
Picks one of the ready processes in main memory
to run next.
Scheduling in Interactive Systems
• Round-robin scheduling
• Priority scheduling
• Multiple queues
• Shortest process next
• Guaranteed scheduling
• Lottery scheduling
• Fair-share scheduling
Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639
Round-Robin Scheduling
• Each process is assigned a time interval, called its
quantum, which it is allowed to run.
• A timer causes an interrupt after the predetermined
time; the process is preempted if it exceeds its
quantum.
• Definitions:
– Context switch: changing the processor from
running one task (or process) to another; implies
saving the state of the old process and loading
that of the new one.
– Processor sharing: use of a quantum so small
that each of n processes appears to run
continuously at 1/n speed.
– Reschedule latency: how long it takes from when
a process requests to run until it finally gets
control of the CPU.
Round-Robin Scheduling
Round-robin scheduling.
(a) The list of runnable processes. (b) The list of runnable
processes after B uses up its quantum.
Round-Robin Scheduling
Size of Time Quantum
• The performance of RR depends heavily on the
size of the time quantum
• Time quantum
– Too big: degenerates to FCFS
– Too small: approaches processor sharing, but the
frequent context switches mean high overhead
and low CPU utilization
– Must be large with respect to the context-switch
time
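The overhead trade-off can be made concrete by counting a fixed switch cost after every slice. A rough model (the workload and cost values are illustrative):

```python
from collections import deque

def rr_total_time(bursts, quantum, switch_cost):
    """Total time to finish all jobs under round robin, charging a fixed
    context-switch cost after every slice the CPU hands out."""
    queue = deque(bursts)
    total = 0
    while queue:
        burst = queue.popleft()
        run = min(burst, quantum)
        total += run + switch_cost
        if burst > run:                 # not finished: back of the queue
            queue.append(burst - run)
    return total

# Same 40 units of work; a tiny quantum drowns in switch overhead.
print(rr_total_time([20, 20], quantum=4, switch_cost=1))  # → 50
print(rr_total_time([20, 20], quantum=1, switch_cost=1))  # → 80
```

With a 1-unit switch cost, shrinking the quantum from 4 to 1 doubles the number of slices tenfold and pushes total time from 50 to 80 units.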
Priority Scheduling
• Each process has a priority number
• The process with the highest priority is scheduled
first
• If all priorities are equal, it reduces to FCFS
Example
• Five processes with burst times and priorities
(lower number = higher priority):

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            3
P4        1            4
P5        5            2

• Priority (nonpreemptive) schedule:
P2 (0–1), P5 (1–6), P1 (6–16), P3 (16–18), P4 (18–19)
• Average waiting time
= (6 + 0 + 16 + 18 + 1) / 5 = 8.2
Multiple Queues
• CTSS
– Could hold only one process in memory at a time
– It is wise to give a large quantum to CPU-bound
processes and a small quantum to interactive
processes
– Priority classes
• Highest class runs for one quantum; the next-highest
for two quanta, and so on
• When a process uses up its quanta, it is moved down
to the next class
Example of Multiple Queues
• Three queues:
– Q0 – time quantum 8 milliseconds, FCFS
– Q1 – time quantum 16 milliseconds, FCFS
– Q2 – FCFS
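A single job's path through these three queues can be traced directly. A minimal sketch of the feedback rule (the function and its return format are illustrative):

```python
def mlfq_trace(burst, quanta=(8, 16)):
    """A job gets up to 8 ms in Q0, then up to 16 ms in Q1, and any
    remainder runs in Q2 (FCFS). Returns the slice used in each queue."""
    slices = []
    for q in quanta:
        if burst <= 0:
            break
        run = min(burst, q)
        slices.append(run)
        burst -= run              # unfinished work drops to the next queue
    if burst > 0:
        slices.append(burst)      # remainder served in the FCFS queue
    return slices

print(mlfq_trace(30))  # → [8, 16, 6]: 8 ms in Q0, 16 ms in Q1, 6 ms in Q2
print(mlfq_trace(5))   # → [5]: a short job finishes inside Q0
```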
Shortest Process Next
• In interactive systems, it is hard to predict the
remaining time of a process
• Make an estimate based on past behavior and run
the process with the shortest estimated running time
• With weight 1/2 the successive estimates are:
T0, T0/2 + T1/2, T0/4 + T1/4 + T2/2, ...
• Aging:
– Estimate the next value in a series by taking the
weighted average of the current measurement
and the previous estimate
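The aging rule is a one-line recurrence. A small sketch (the parameter name `alpha` for the weight is an assumption, not from the slides):

```python
def aged_estimate(measurements, alpha=0.5):
    """Exponentially weighted estimate of the next CPU burst:
    new_estimate = alpha * latest_measurement + (1 - alpha) * old_estimate."""
    estimate = measurements[0]            # first estimate is the first burst
    for t in measurements[1:]:
        estimate = alpha * t + (1 - alpha) * estimate
    return estimate

# With alpha = 1/2 this matches the series T0, T0/2 + T1/2, T0/4 + T1/4 + T2/2, ...
print(aged_estimate([8, 4, 2]))  # → 8/4 + 4/4 + 2/2 = 4.0
```

Older bursts fade geometrically, so the estimate tracks a process whose behavior changes over time.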
Guaranteed Scheduling
• A completely different approach to scheduling is to
make real promises to the users about performance
and then live up to them. One promise that is
realistic to make and easy to live up to is this:
• If there are n users logged in while you are working,
you will receive about 1/n of the CPU power.
Similarly, on a single-user system with n processes
running, all things being equal, each one should get
1/n of the CPU cycles.
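One way to act on this promise is to track, for each process, the ratio of CPU actually consumed to CPU entitled, and always run the process with the lowest ratio. A minimal sketch of that selection step (the dict-based interface is an assumption):

```python
def pick_next(entitled, consumed):
    """Run the process whose ratio of CPU consumed to CPU entitled is
    lowest, i.e. the one furthest behind on its guarantee."""
    return min(consumed, key=lambda p: consumed[p] / entitled[p])

# Three processes, each entitled to 10 ms so far; B has had the least.
print(pick_next({"A": 10, "B": 10, "C": 10},
                {"A": 12, "B": 3, "C": 15}))  # → B (ratio 0.3)
```

Repeating this choice at every scheduling decision drives every ratio toward 1, which is exactly the 1/n promise.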
Lottery Scheduling
• The basic idea is to give processes lottery tickets for
various system resources, such as CPU time.
• Whenever a scheduling decision has to be made, a
lottery ticket is chosen at random, and the process
holding that ticket gets the resource.
• When applied to CPU scheduling, the system might
hold a lottery 50 times a second, with each winner
getting 20 msec of CPU time as a prize.
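Over many drawings, each process's CPU share converges to its fraction of the tickets. A quick simulation sketch (ticket counts, draw count, and the fixed seed are illustrative):

```python
import random

def lottery_shares(tickets, draws=10_000, seed=1):
    """tickets: dict process -> ticket count. Draws a winning ticket
    uniformly at random `draws` times; returns each process's win fraction."""
    rng = random.Random(seed)             # fixed seed for reproducibility
    names = list(tickets)
    weights = [tickets[n] for n in names]
    wins = {n: 0 for n in names}
    for _ in range(draws):
        wins[rng.choices(names, weights)[0]] += 1
    return {n: wins[n] / draws for n in names}

# A process holding 75 of 100 tickets should win about 75% of the slices.
print(lottery_shares({"A": 75, "B": 25}))
```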
Fair-Share Scheduling
• So far, each process has been scheduled on its own,
without regard to who its owner is.
• As a result, if user 1 starts up 9 processes and user 2
starts up 1 process, with round robin or equal
priorities, user 1 will get 90% of the CPU and user 2
only 10% of it.
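The 90/10 imbalance follows directly from per-process scheduling. A tiny sketch of the arithmetic (user names are invented for illustration):

```python
def per_user_share(process_owners):
    """Round robin over processes gives each process an equal CPU share,
    so a user's share is proportional to how many processes they run."""
    n = len(process_owners)
    share = {}
    for owner in process_owners:
        share[owner] = share.get(owner, 0) + 1 / n
    return share

# User 1 runs 9 processes, user 2 runs 1: 90% vs 10% of the CPU.
print(per_user_share(["u1"] * 9 + ["u2"]))
```

A fair-share scheduler would instead divide the CPU between users first (50% each here), then among each user's own processes.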
Scheduling in Real Time
See page 148.
Policy versus Mechanism
• Separate what is allowed to be done from how it is
done
– A parent process knows which of its children are
important and need priority
• Scheduling algorithm is parameterized
– Mechanism in the kernel
• Parameters are filled in by user processes
– Policy set by user process
Thread Scheduling
• Threads can be scheduled, and the threads library
provides several facilities to handle and control the
scheduling of threads.
• Scheduling in such systems differs substantially
depending on whether user-level threads,
kernel-level threads, or both are supported.