
CSIT345-Operating Systems
Lecture 15
CPU Scheduling, Process
Synchronization, Deadlock
Prof. Boxiang Dong
www.cs.stevens.edu/~bdong
Office: RI-320
Email: [email protected]
Motivation - Multiprogramming
• Process concept
• Process is the program in execution.
• Process is the unit of work in modern
time-sharing operating systems.
• Multiprogramming
• Multiple programs run at the same
time.
Motivation - CPU/IO Burst for a Process
• CPU I/O Burst Cycle – Process execution
consists of a cycle of CPU execution and
I/O wait.
• CPU burst followed by I/O burst.
• CPU burst distribution is of main concern.
Preemptive vs. Non-preemptive Scheduling
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible.
• Throughput – # of processes that complete their execution per time
unit.
• Turnaround time – time between process submission and
completion.
• Waiting time – total time a process spends waiting in the ready
queue.
• Response time – time between process submission and the first
output.
Scheduling Algorithms – First-Come, First-Served Scheduling (FCFS)
Scheduling Algorithms – Shortest-Job-First
Scheduling (SJF)
• In theory, SJF scheduling is optimal in that it always gives the
minimum average waiting time.
• However, it is difficult to know the length of the next CPU burst
before execution.
Scheduling Algorithms – Priority Scheduling
Average waiting time = 8.2 msec
Scheduling Algorithms – Round-Robin
Scheduling
• Principle:
• A quantum, which is a short time
period, is defined.
• The ready queue is implemented as
a circular queue.
• The CPU scheduler allocates the
CPU to each process for a time
interval up to 1 quantum.
An example of Round-Robin when
quantum = 2 units
Scheduling Algorithms – Round-Robin
Scheduling
Quantum q = 4
Scheduling Algorithm – Multilevel Queue
Scheduling
• Principle:
• Partition the ready queue into several separate queues.
• Each queue has its own scheduling algorithm.
Scheduling Algorithm – Multilevel Feedback
Queue Scheduling
• Motivation:
• Multilevel Queue Scheduling
suffers from inflexibility in that a
process is assigned to a queue
permanently.
• Allow a process to move between
queues.
• Dynamically assign processes to
the queues based on the
characteristics of their CPU bursts.
Race Condition
If the low-level statements are executed in the following order,
we end up with Counter = 6.
If we switch the order of T4 and T5, we end up with a different value of Counter.
Race Condition
• When several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the order
in which the access takes place, we have a race condition.
• To guard against race conditions, we must allow at most one
process to manipulate the shared data at a time.
Race Condition
• Race conditions arise frequently under multiprogramming with shared
memory.
Critical Section
• Given a set of processes P0, P1, …, Pn−1
• Each process has a segment of code in which it changes shared variables.
• This code segment is called the critical section.
• To avoid race condition, if one process is executing in its critical
section, no other process is allowed to execute in its critical section at
the same time.
Critical Section
• Each process must request
permission to enter its critical
section.
• Entry section: request permission.
• Exit section: terminate the
permission.
Relationship between Race Condition and
Critical Section
• Critical section: code segments that access and manipulate the
shared data.
• Race condition: at least two processes executing in the same
critical section at the same time.
Mutex Lock
• A mutex lock has a boolean variable, available, whose value indicates
whether the lock is free.
• It supports two atomic functions: acquire() and release().
Mutex Lock
• A process must
• Acquire the lock before entering
the critical section.
• Release the lock after leaving the
critical section.
Programming – Pthread Mutex Lock
• pthread_mutex_lock (mutex): If the mutex is already
locked by another thread, this call will block the calling
thread until the mutex is unlocked.
• pthread_mutex_trylock (mutex): if the mutex is already
locked, the routine will return immediately with a
"busy" error code.
• pthread_mutex_unlock (mutex): An error will be
returned if:
• the mutex was already unlocked, or
• the mutex is owned by another thread.
Semaphore
• A semaphore has an integer value.
• It supports two atomic operations: wait() and signal().
Semaphore
• Semaphore usage:
• Binary semaphore: the same as mutex lock.
Condition Variable
• A condition variable provides another way to synchronize processes
based on the value of shared data.
• A condition variable is often used in combination with a mutex.
• Semaphore = a variable / counter + a mutex + a condition variable
Programming – Pthread Condition Variable
• Pthread condition variable standard usage:
• Wait: lock the mutex, then call pthread_cond_wait() in a loop that rechecks the condition.
• Signal: change the condition while holding the mutex, then call pthread_cond_signal().
Programming - Solve the Producer-Consumer
Problem with Pthread Condition Variable
Condition that the buffer is not full
Condition that the buffer is not empty
Programming - Solve the Producer-Consumer
Problem with Pthread Condition Variable
If the buffer is full, wait for
the not full signal.
After producing a product, signal
that the buffer is not empty
Programming - Solve the Producer-Consumer
Problem with Pthread Condition Variable
If the buffer is empty, wait for
the not empty signal.
After consuming a product, signal
that the buffer is not full.
Deadlock
• A set of blocked processes each holding a resource and waiting to acquire a
resource held by another process in the set.
• Example:
• System has 2 disk drives
• P1 and P2 each hold one disk drive and each needs another one
• Note that the resource competition can happen between processes and
threads
• Processes: compete for system-wide resources.
• Threads: compete for process-wide resources.
Deadlock
• We can construct a resource-allocation graph G=(V, E) to represent
the resource competition.
• V is partitioned into two types:
• P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
• R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
• request edge – directed edge Pi → Rj: Pi waits for Rj.
• assignment edge – directed edge Rj → Pi: Pi holds Rj.
Deadlock
Resource instances:
• One instance of resource type R1
• Two instances of resource type R2
• One instance of resource type R3
• Three instances of resource type R4
Process states:
• Process P1 is holding an instance of resource
type R2 and is waiting for an instance of
resource type R1.
• Process P2 is holding an instance of R1 and an
instance of R2 and is waiting for an instance of
R3.
• Process P3 is holding an instance of R3.
Deadlock
Deadlock can arise only if all four of the following conditions hold simultaneously.
• Mutual exclusion: only one process at a time can use a resource.
• Hold and wait: a process holding at least one resource is waiting to
acquire additional resources held by other processes.
• No preemption: a resource can be released only voluntarily by the
process holding it, after that process has completed its task.
• Circular wait: there exists a cycle in the resource-allocation
graph.
Deadlock
• However, the four requirements are only necessary conditions for deadlock, not sufficient ones.
• Thus, it is possible that even
though there is a cycle in the
resource allocation graph,
there is still no deadlock.
If we allocate R1 to P3, there is no deadlock.
List of Linux Commands
• uname: display operating system name
• lscpu: list CPU information
• lshw: list hardware information
• ls: list directory contents
• du: display disk usage statistics
• top: display sorted information about running processes
• ps: report a snapshot of current processes
• ipcs: report IPC facility status
• ulimit: get and set process resource limits
• kill pid: terminate a process
Some Practical Linux Commands - uname
• uname: get system name and kernel information
Some Practical Linux Commands - lshw
• lshw: print hardware information
Some Practical Linux Commands - lscpu
• lscpu: view detailed CPU information, similar to ‘cat
/proc/cpuinfo’.
Some Practical Linux Commands - du
• du: view disk usage information.
Practical Linux Command – ipcs -m
• ipcs –m: display information about active shared memory segments.
Practical Linux Command – ipcs -q
• ipcs –q: provide message queue statistics.
CSIT 345 - Operating Systems, Prof. Boxiang Dong, Montclair State University
Practical Linux Command - ps
• ps: report a snapshot of
current processes.
• TTY: controlling terminal
associated with the process
Practical Linux Command - kill
• kill pid: terminate a process.