Multiprocessor and
Real-Time Scheduling
Chapter 10
Classification of Multiprocessor Systems

• Loosely coupled multiprocessors
  – Each processor has its own memory and I/O channels
  – Message passing
  – Distributed memory
• Tightly coupled multiprocessors
  – Shared memory (e.g. a single address space)
  – Controlled by a central operating system
  – Symmetric multiprocessors (SMPs)
Types of Parallelism

• Independent parallelism
  – Separate applications or jobs running on multiple processors
  – No synchronization between them
  – Average response time to users is small
Coarse-Grained Parallelism

• Synchronization among processes is very sparse (e.g. once for every 10,000 instructions)
• Examples:
  – Multiple threads of a program assigned to run on multiple processors
  – Noise filtering in a large image: the image is divided into regions and each processor handles one region

Medium-Grained Parallelism

• Parallel processing or multitasking within a single application
• A single application is a collection of threads that interact frequently
• Example: multiple concurrent searches on a database are carried out in parallel, and updates are performed one at a time (synchronized) inside a critical section; a sketch of this pattern follows below
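A minimal pthreads sketch of the database example above (all names are illustrative, not from the source): several workers search in parallel, while each update of the shared result is serialized inside a mutex-protected critical section.

    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 4

    static pthread_mutex_t update_lock = PTHREAD_MUTEX_INITIALIZER;
    static int matches_found = 0;      /* shared result, updated under the lock */

    /* Each worker searches its own partition of the data in parallel. */
    static void *search_worker(void *arg) {
        long id = (long)arg;
        int local_matches = (int)id + 1;   /* stand-in for real search work */

        pthread_mutex_lock(&update_lock);  /* updates happen one at a time */
        matches_found += local_matches;
        pthread_mutex_unlock(&update_lock);
        return NULL;
    }

    int main(void) {
        pthread_t tid[NWORKERS];
        for (long i = 0; i < NWORKERS; i++)
            pthread_create(&tid[i], NULL, search_worker, (void *)i);
        for (int i = 0; i < NWORKERS; i++)
            pthread_join(tid[i], NULL);
        printf("total matches: %d\n", matches_found);
        return 0;
    }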
Fine-Grained Parallelism

• Highly parallel applications; parallelism at the instruction level
• Lots of synchronization and communication between processes
• Examples: sorting a list in parallel, or finding prime numbers in parallel
Grain Size    Description                                             Synchronization Interval (Instructions)
------------  ------------------------------------------------------  ----------------------------------------
Fine          Parallelism inherent in a single instruction stream     <20
Medium        Parallel processing or multitasking within a single     20-200
              application
Coarse        Multiprocessing of concurrent processes in a            200-2000
              multiprogramming environment
Very Coarse   Distributed processing across network nodes to form     2000-1M
              a single computing environment
Independent   Multiple unrelated processes                            not applicable
Scheduling Issues on Multiprocessors

• Should multiprogramming be used on individual processors?
  – For maximum speedup, we may want to dedicate the processors to the processes of a single application
• If a single queue is used for all processes, then a process can be scheduled to run on any processor (a master node is needed)
• Unless a single queue is used for scheduling, it becomes more difficult to maintain specific scheduling disciplines such as FCFS, RR, SPN, and SRT in a global sense
• However, preemptive schemes (e.g. RR) are costly to implement with a single-queue approach; a sketch of such a queue follows below
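As a rough sketch of the single-queue approach (illustrative, not from the source), a global ready queue can be guarded by one lock so that any processor may dequeue the next process; contention on this lock is one reason preemptive schemes such as RR are costly here.

    #include <pthread.h>
    #include <stddef.h>

    struct task { struct task *next; /* ... scheduling state ... */ };

    static struct task *ready_head;                  /* single global FCFS queue */
    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Any processor calls this to pick the next process to run. */
    struct task *dequeue_next(void) {
        pthread_mutex_lock(&queue_lock);
        struct task *t = ready_head;
        if (t) ready_head = t->next;
        pthread_mutex_unlock(&queue_lock);
        return t;                        /* NULL means the queue is empty */
    }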
Assignment of Processes/Threads to Processors

• Treat processors as a pooled resource and assign processes to processors on demand
  – Global queue: schedule a process to run on any available processor
  – Master/slave architecture
    - Key kernel functions always run on a particular processor
    - The master is responsible for scheduling
    - A slave sends service requests to the master
    - Disadvantages: failure of the master brings down the whole system, and the master can become a performance bottleneck
• Permanently assign each process to a processor
  – A dedicated short-term queue for each processor means less overhead
  – But a processor could be idle while another processor has a backlog
Gang Scheduling

• Simultaneous scheduling of the threads that make up a single process
Real-Time Systems
• Tasks or processes attempt to control or react to events that take place in the outside world in "real time", and the process must be able to keep up with them
• Examples:
  – Control of laboratory experiments
  – Process control in industrial plants
  – Intelligent manufacturing
  – Robotics
  – Autonomous land rovers
  – Air traffic control
  – Telecommunications
  – Military command and control systems
  – Space stations
  – Undersea exploration
• Correctness of the system depends not only on the logical result of the computation but also on the time at which the results are produced

Hard real-time task
• Must meet its deadline
• Otherwise it will cause unacceptable damage or a fatal error to the system

Soft real-time task
• Has an associated deadline that is desirable but not mandatory
• It still makes sense to schedule and complete the task even if it has passed its deadline

Periodic tasks
• Occur once per period T, i.e. exactly T units apart

Aperiodic tasks
• Have a deadline by which they must start or finish
• May have a constraint on both start and finish times
Characteristics of Real-Time OS
• Deterministic
  – Operations are performed at fixed, predetermined times or within predetermined time intervals
  – Concerns how fast the OS can respond to high-priority interrupts, and whether there is a deterministic upper bound on that delay
• Responsiveness is critical
  – How long, after acknowledgment, it takes the OS to service the interrupt; includes the time to begin execution of, and to perform, the ISR (Interrupt Service Routine)
Characteristics of Real-Time OS
• Increased user control
  – Control over task priorities and rights
  – The user should be able to distinguish between hard and soft tasks and to specify relative priorities within each class
  – May allow the user to specify such characteristics as:
    - what paging or process swapping policy applies
    - what processes must always be resident in main memory (see the sketch after this list)
    - what disk transfer algorithms are to be used
    - what rights the processes in various priority bands have
• Must be reliable
  – Degradation of performance may have catastrophic consequences
  – Fail-safe operation: attempt to correct the problem or minimize its effects while continuing to run (e.g. failure of the traffic lights)
  – The most critical, highest-priority tasks execute first
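On POSIX systems, one concrete form of the "resident in main memory" control is the mlockall() call, which pins a process's pages so they are never paged out. A minimal sketch, assuming a suitably privileged process:

    #include <sys/mman.h>
    #include <stdio.h>

    int main(void) {
        /* Request that all current and future pages stay resident in RAM
           (typically requires privileges, e.g. CAP_IPC_LOCK on Linux). */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }
        /* ... time-critical work runs here without page faults ... */
        munlockall();
        return 0;
    }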
Features of Real-Time OS
To meet the foregoing requirements, real-time operating systems typically include the following features:
• Fast context switch (minimize dispatch latency)
• Small size
• Ability to respond to external interrupts quickly (minimize interrupt latency)
• Special sequential files that can accumulate data at a fast rate
• Preemptive scheduling based on priorities
• Minimized intervals during which interrupts are disabled
• Tasks are never delayed by more than a specified, fixed amount of time
• Special alarms and timeouts
Deadline Scheduling
• Real-time applications are generally not concerned with fairness or with minimizing response time, but with completing certain tasks in a timely manner (i.e. before their deadlines are reached)
• Scheduling the task with the earliest deadline first minimizes the number of tasks that miss their deadlines (a minimal selection sketch follows this list)
• For proper scheduling, the following should be known for each task:
  – Ready time (the time at which the task becomes ready for execution; known in advance for periodic tasks)
  – Starting deadline (the time by which the task must begin running)
  – Completion deadline (the time by which the task must be completed)
  – Processing time
  – Resource requirements
  – Priority
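A minimal earliest-deadline-first selection sketch (the struct and names are illustrative, assuming completion deadlines are known): among the ready tasks, dispatch the one whose deadline is nearest.

    struct rt_task {
        long deadline;   /* completion deadline, e.g. in ticks */
        int  ready;      /* nonzero if the task is ready to run */
    };

    /* Earliest-deadline-first: return the index of the ready task with
       the nearest deadline, or -1 if no task is ready. */
    int edf_pick(const struct rt_task *tasks, int n) {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (tasks[i].ready &&
                (best < 0 || tasks[i].deadline < tasks[best].deadline))
                best = i;
        }
        return best;
    }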
Execution Profile of Two Periodic Tasks (figure omitted)
Comparison of Various Scheduling Algorithms (figure omitted)
Rate Monotonic Scheduling
• Assigns priorities to tasks on the basis of their periods
• The highest-priority task is the one with the shortest period
• Processor utilization: C1/T1 + C2/T2 + … + Cn/Tn ≤ 1
• If the following inequality is satisfied, schedulability under RMS is guaranteed (a schedulability-check sketch follows the table below):

  Upper bound for RMS: C1/T1 + C2/T2 + … + Cn/Tn ≤ n(2^(1/n) − 1)

• RMS has been widely adopted for use in industrial applications
Rate Monotonic Scheduling
Value of the RMS Upper Bound

  n      n(2^(1/n) − 1)
  -----------------------
  1      1.0
  2      0.828
  3      0.779
  4      0.756
  5      0.743
  6      0.734
  ...    ...
  ∞      ln 2 ≈ 0.693
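The bound above is easy to check programmatically. A minimal sketch in C (the task set is illustrative); note the test is sufficient but not necessary, so exceeding the bound does not prove the set is unschedulable:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Example task set: execution time C and period T per task. */
        double C[] = {20, 40, 100};
        double T[] = {100, 150, 350};
        int n = sizeof C / sizeof C[0];

        double U = 0.0;                  /* total processor utilization */
        for (int i = 0; i < n; i++)
            U += C[i] / T[i];

        double bound = n * (pow(2.0, 1.0 / n) - 1.0);  /* n(2^(1/n) - 1) */
        printf("U = %.3f, RMS bound = %.3f -> %s\n", U, bound,
               U <= bound ? "schedulable under RMS" : "bound test inconclusive");
        return 0;
    }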
UNIX SVR4 Scheduling



• Highest preference to real-time processes
• Next highest to kernel-mode processes
• Lowest preference to other user-mode processes
LINUX Scheduling

• Scheduling classes:
  – SCHED_FIFO: first-in, first-out real-time threads
  – SCHED_RR: round-robin real-time threads
  – SCHED_OTHER: other, non-real-time threads
• Within each class, multiple priorities may be used (a minimal sketch of selecting a class follows below)
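These classes are selected through the POSIX sched_setscheduler() interface. A minimal sketch that moves the calling process into the SCHED_RR real-time class (the priority value is illustrative; real-time classes normally require root or CAP_SYS_NICE):

    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        struct sched_param p = { .sched_priority = 50 };  /* 1..99 for RT classes */

        /* pid 0 means the calling process; switch it to SCHED_RR. */
        if (sched_setscheduler(0, SCHED_RR, &p) != 0) {
            perror("sched_setscheduler");  /* typically needs privileges */
            return 1;
        }
        /* ... time-critical work now runs ahead of SCHED_OTHER threads ... */
        return 0;
    }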