CHAPTER 7
PROCESS SYNCHRONIZATION
CGS 3763 - Operating System Concepts
UCF, Spring 2004
COOPERATING PROCESSES
• Independent processes cannot affect or be affected
by the execution of another process.
• Dependent processes can affect or be affected by the
execution of another process
– a.k.a. Cooperating Processes
• Processes may cooperate for:
– Information sharing
– Computation speed-up (requires two or more CPUs)
– Modularity
– Convenience
REQUIREMENTS FOR COOPERATION
• In order to cooperate, processes must be able to:
– Communicate with one another
• Passing information between two or more processes
– Synchronize their actions
• Coordinating access to shared resources
– hardware (e.g., printers, drives)
– software (e.g., shared code)
– files (e.g., data or database records)
– variables (e.g., shared memory locations)
• Concurrent access to shared data may result in data
inconsistency.
• Maintaining data consistency requires synchronization
mechanisms to ensure the orderly execution of cooperating
processes
– Synchronization itself requires some form of communication
PROCESS SYNCHRONIZATION
• Particularly important with concurrent execution
• Can cause two major problems:
– Race Condition
• When two or more cooperating processes access and manipulate the same data concurrently, and
• the outcome of the execution depends on the order in which the accesses take place
– Deadlock (Chapter 8)
• When two or more waiting processes require shared resources for their continued execution
• but the required resources are held by other waiting processes
RACE CONDITION
• Cooperating processes have within their programs
“critical sections”
– A code segment during which a process accesses and
changes a common variable or data item.
– When one process is in its critical section, no other process may be executing in its own critical section.
– Execution of critical sections with respect to the same
shared variable or data item must be mutually exclusive
• A race condition occurs when two or more processes execute their critical sections in a non-mutually exclusive manner (the general shape of such code is sketched below).
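Every solution in this chapter hangs the same scaffolding around the critical section. A minimal sketch of that general structure, in C, where entry_section( ) and exit_section( ) are illustrative placeholders (not a specific API) that each solution fills in differently:

    /* Stubs standing in for whatever protocol a given solution uses. */
    void entry_section(void) { /* ask permission to enter  */ }
    void exit_section(void)  { /* announce that we are done */ }

    void cooperating_process(void) {
        for (;;) {
            entry_section();  /* may block or spin until entry is safe */
            /* critical section: access/change the shared data here    */
            exit_section();   /* let other processes enter             */
            /* remainder section: code that touches no shared data     */
        }
    }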
RACE CONDITION EXAMPLES
• File Level Access
– Editing files in a word processor
• Record Level Access
– Modifying a database record
• Shared Variable Level
    P1                      P2
     :                       :
     LOAD X                  LOAD X
     X = X + 1               X = X + 1
     STORE X                 STORE X
     :                       :
• The final value of X depends on the execution sequence
RACE CONDITION EXAMPLES (cont.)
• Think of concurrent access as a series of sequential
instructions with respect to the shared variable, data
or resource.
• If no interrupts occur during each code section: X = 1051
• If interrupts do occur, results may vary: X = 51 or 1050
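A minimal sketch of this lost-update race, assuming POSIX threads: two threads each run the LOAD/ADD/STORE sequence above one million times on an unprotected shared variable, so the printed total is usually less than the expected 2000000 (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    long x = 0;                      /* shared variable, no lock */

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            x = x + 1;               /* LOAD X; X = X + 1; STORE X */
        return NULL;
    }

    int main(void) {
        pthread_t p1, p2;
        pthread_create(&p1, NULL, worker, NULL);
        pthread_create(&p2, NULL, worker, NULL);
        pthread_join(p1, NULL);
        pthread_join(p2, NULL);
        printf("x = %ld\n", x);      /* expected 2000000; often less */
        return 0;
    }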
PREVENTING RACE CONDITIONS
• Two options (policies) for preventing race conditions:
– make sure that each process executes its critical section without interruption (problematic: this amounts to disabling interrupts, which user processes should not control and which does not work on multiprocessors)
– make other processes wait until the current process completes execution of its critical section
• Any mechanism for solving the critical section
problem must meet the following:
– Mutual Exclusion - Only one process at a time may execute in its critical section
– Progress - If no process is in its critical section and some processes want to enter, the choice of which enters next cannot be postponed indefinitely
– Bounded Waiting - There must be a bound on how many times other processes can enter their critical sections ahead of a process that has requested entry
SOLUTIONS TO CRITICAL SECTION
PROBLEM
• The textbook contains various software and hardware solutions to the critical section problem:
– Algorithm 1 (software/shared memory):
• Uses a turn variable to alternate entry into critical sections between two processes
• Assures mutual exclusion but not the progress requirement
– Algorithm 2 (software/shared memory):
• Uses flag variables to show requests by processes wishing to enter their critical sections
• Assures mutual exclusion but not the progress requirement
– Algorithm 3 (software/shared memory):
• Uses flags to show requests, then turn to break the tie (this is Peterson's algorithm; sketched below)
• Meets all three requirements, but only for two processes
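Algorithm 3 is compact enough to sketch in full. A minimal version in C for two processes i and j = 1 - i, assuming flag and turn live in shared memory and that loads and stores are not reordered (real hardware would need atomics or memory barriers for that):

    int flag[2] = {0, 0};   /* flag[i] = 1: process i wants to enter */
    int turn = 0;           /* which process yields on a tie         */

    void enter_region(int i) {
        int j = 1 - i;                 /* the other process          */
        flag[i] = 1;                   /* show our request           */
        turn = j;                      /* break any tie in j's favor */
        while (flag[j] && turn == j)
            ;                          /* busy-wait until safe       */
    }

    void leave_region(int i) {
        flag[i] = 0;                   /* withdraw the request       */
    }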
SOLUTIONS TO CRITICAL SECTION
PROBLEM (cont.)
• Swap/Test-and-Set (hardware/software/shared memory)
• Executed as atomic (indivisible) instructions
• Allows a process to test the status of a “lock” variable and set the lock at the same time
• A process can execute its critical section only if the lock is unlocked
• Can be used to simplify the programming of synchronization algorithms (see the sketch below)
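C11 exposes exactly this primitive as atomic_flag, so a spinlock built on test-and-set can be sketched directly (a minimal sketch, not a production lock):

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;      /* clear = unlocked */

    void acquire(void) {
        /* Atomically set the flag and return its previous value:
         * the test-and-set described above. Spin while it was set. */
        while (atomic_flag_test_and_set(&lock))
            ;                                 /* busy-wait */
    }

    void release(void) {
        atomic_flag_clear(&lock);             /* unlock */
    }

A process brackets its critical section with acquire( ) and release( ); only the caller that saw the old value 0 gets in.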
SOLUTIONS (cont.)
• Previous examples require:
– OS to provide shared memory locations for variables
(turn, flags, locks)
– But programmers are responsible for how that shared
memory is used.
• Must write lots of code to support these types of solutions
• Must write correct code
• Algorithms 1, 2 & 3 (even with hardware support)
become very complicated when more than two
processes require access to critical sections
referencing the same shared variable/resource.
SOLUTIONS (cont.)
• Need a more robust solution:
– Should be managed by the OS or supported by HLL
– Should work with two or more processes
– Should not require application process control or
masking of interrupts
– Should be workable in multi-processor environments
(parallel systems)
– Should keep processes from executing in the running state (busy-waiting loops or no-ops) while waiting for their turn
• Must also meet our three requirements:
– Mutual exclusion, progress, bounded waiting
ROBUST SOLUTIONS
• Critical Regions
– High Level Language (HLL) construct for controlling execution
of critical sections
– A code-oriented solution
– Regions referring to the same shared variable exclude each
other in time.
• Monitors
– High Level Language (HLL) construct for managing the shared
variable/resource referenced in critical sections
– A more object- or data-oriented solution
– Allows the safe sharing of an abstract data type among concurrent processes (see the sketch below)
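C has no monitor construct (languages such as Java do, via synchronized methods), but the discipline can be approximated with POSIX threads. A minimal sketch, with illustrative type and function names: a shared counter whose only access paths all take the same mutex, giving the implicit one-process-inside-at-a-time guarantee a monitor provides:

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;   /* the monitor lock   */
        long count;             /* the protected data */
    } counter_monitor;

    void counter_init(counter_monitor *m) {
        pthread_mutex_init(&m->lock, NULL);
        m->count = 0;
    }

    void counter_increment(counter_monitor *m) {
        pthread_mutex_lock(&m->lock);     /* enter the monitor */
        m->count++;
        pthread_mutex_unlock(&m->lock);   /* leave the monitor */
    }

    long counter_read(counter_monitor *m) {
        pthread_mutex_lock(&m->lock);
        long v = m->count;
        pthread_mutex_unlock(&m->lock);
        return v;
    }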
ROBUST SOLUTIONS (cont.)
• Semaphores
– Can be managed by OS or applications programmer
– Processes execute P( ) operation before entering critical
section
– Processes execute V( ) operation after leaving critical
section
– P( ) and V( ) can be implemented as a type of system call
SEMAPHORES
• Uses a shared variable s (can be owned by OS)
– Initialized to “1”
• P(s) - Atomic Operation (a.k.a. wait)
– Decrement the value of s by 1
– If s < 0, block the process
• Move the process from the running state to the wait_semaphore_s queue
• V(s) - Atomic Operation (a.k.a. signal)
– Increment the value of s by 1
– If s <= 0, unblock one waiting process
• Move that process from the wait_semaphore_s queue to the ready queue
• The wait queue is usually FCFS/FIFO, which ensures progress and bounded waiting (see the sketch below)
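A sketch of these semantics, assuming POSIX threads. The condition variable plus a wakeups counter stands in for the wait_semaphore_s queue (FIFO ordering is left to the threads library); the names are illustrative, not a standard API:

    #include <pthread.h>

    typedef struct {
        int value;                /* negative: # of blocked processes */
        int wakeups;              /* pending V( ) signals             */
        pthread_mutex_t lock;
        pthread_cond_t  cond;
    } sem_s;

    void sem_s_init(sem_s *s, int initial) {   /* usually 1 */
        s->value = initial;
        s->wakeups = 0;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->cond, NULL);
    }

    void P(sem_s *s) {                         /* a.k.a. wait */
        pthread_mutex_lock(&s->lock);
        s->value--;                            /* decrement s */
        if (s->value < 0) {                    /* must block  */
            do {
                pthread_cond_wait(&s->cond, &s->lock);
            } while (s->wakeups == 0);         /* ignore spurious wakeups */
            s->wakeups--;                      /* consume one V( ) */
        }
        pthread_mutex_unlock(&s->lock);
    }

    void V(sem_s *s) {                         /* a.k.a. signal */
        pthread_mutex_lock(&s->lock);
        s->value++;                            /* increment s   */
        if (s->value <= 0) {                   /* someone waits */
            s->wakeups++;
            pthread_cond_signal(&s->cond);     /* unblock one   */
        }
        pthread_mutex_unlock(&s->lock);
    }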
SEMAPHORES (cont.)
• Semaphores can be used for more than just critical
section solutions
– Can be used to control access to resources
• for example, printers, files, and records
• The P( ) and V( ) operations described on the previous slide use a “counting” semaphore.
– Negative values of s tell how many processes are waiting to access the shared resource associated with the semaphore.
– Can initialize s to a number other than 1 if multiple instances of the resource (e.g., printers) exist (see the example below)
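The real POSIX semaphore API works the same way. A minimal sketch, assuming a pool of three identical printers: initializing the counting semaphore to 3 lets at most three processes print at once, and each one brackets its printing with wait/post:

    #include <semaphore.h>
    #include <stdio.h>

    sem_t printers;                   /* counting semaphore */

    int main(void) {
        sem_init(&printers, 0, 3);    /* 3 instances of the resource */

        sem_wait(&printers);          /* P( ): acquire a printer */
        printf("printing...\n");      /* use the shared resource */
        sem_post(&printers);          /* V( ): release it        */

        sem_destroy(&printers);
        return 0;
    }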