Chapter 1
• An operating system is a program that manages a computer’s hardware.
• It acts as an intermediary between a user of a computer and the computer hardware.
• It is the program running at all times on the computer—usually called the kernel.
• A fundamental responsibility of an operating system is to allocate resources to programs and users.
Components of a computer system:
• Users – people, machines, other computers.
• Application programs – define the ways in which the system resources are used to solve the users' computing problems.
• Operating system – controls and coordinates use of hardware among the various applications for the users.
• Hardware – provides the basic computing resources for the system.
1.1 What Operating Systems Do
User View:
• The user’s view of the computer varies according to the interface being used
• Users of mainframes share resources and may exchange information. The OS in such cases is designed to maximize resource utilization, to assure that all available CPU time, memory, and I/O are used efficiently and that no individual user takes more than her fair share.
• The goal of a user of a personal computer is to maximize the work that he is performing. The OS is designed mostly for ease of use and good performance; resource utilization is not a concern.
• Mobile computers are resource poor; their users are primarily interested in using the computer for e-mail and web browsing.
• Some computers have little or no user interface, such as embedded computers in
• Some computers have little or no user interface, such as embedded computers in
devices and automobiles.
System View:
• From the computer’s point of view, the operating system is the program most
intimately involved with the hardware.
• OS is a resource allocator
o A computer system has many resources that may be required to solve a problem: CPU time, memory space, file-storage space, I/O devices, and so on.
o OS acts as the manager of these resources.
o Facing numerous and possibly conflicting requests for resources, OS must
decide how to allocate them to specific programs and users so that it can
operate the computer system efficiently and fairly.
• OS is a control program
o OS controls execution of programs to prevent errors and improper use of
the computer
o It is especially concerned with the operation and control of I/O devices.
Computer Startup:
• For a computer to start running, when it is powered up or rebooted, it needs to
have an initial program to run. This initial program, bootstrap program, is stored in
read-only memory (ROM) or electrically erasable programmable read-only
memory (EEPROM), known as firmware.
• Bootstrap program initializes all aspects of the system, from CPU registers to
device controllers to memory contents. It must locate the operating-system kernel and load it into memory.
• Once the kernel is loaded and executing, it can start providing services to the
system and its users. Some services are provided outside of the kernel, by system
programs that are loaded into memory at boot time to become system processes
that run the entire time the kernel is running.
1.2 Computer-System Organization
• A computer system contains multiple device controllers that are connected
through a common bus providing access to shared memory. Each device controller
is in charge of a specific type of device (for example, disk drives, audio devices, or
video displays).
• A device controller has local buffer storage and is responsible for moving the data between the peripheral devices that it controls and its local buffer storage.
• OS has a device driver for each device controller. This device driver understands
the device controller and provides the rest of the operating system with a uniform
interface to the device.
• The CPU and the device controllers can execute in parallel, competing for memory cycles.
Computer System Operation:
CPU moves data from/to main memory to/from local buffers
I/O is from the device to local buffer of controller
Device controller informs CPU that it has finished its operation by causing an
interrupt. The occurrence of an event is usually signaled by an interrupt from
either the hardware or the software.
Hardware may trigger an interrupt at any time by sending a signal to the CPU,
usually by way of the system bus.
Software may trigger an interrupt by executing a special operation called a system call.
Software interrupt (exception or trap):
o Software error (e.g., division by zero)
o Request for operating system service
o Other process problems include infinite loop, processes modifying each
other or the operating system
Interrupt Handling
An interrupt is a suspension of the normal processing of the processor by an external event. It is performed in such a way that the processing can be resumed.
Interrupts improve processing efficiency and allow the processor to execute other
instructions while an I/O operation is in progress
When the CPU is interrupted, it stops what it is doing and immediately transfers
execution into a service routine to examine the interrupt and performs whatever
actions are needed. After the execution of the interrupt service routine, the CPU
resumes the interrupted computation.
Storage-Device Hierarchy
Processor Registers
• A processor register is a small, high-speed storage location on the processor that holds data that is being processed by the CPU.
• Examples:
o Program Counter (PC): Contains the address of an instruction to be fetched
o Instruction Register (IR): Contains the instruction most recently fetched
o Accumulator (AC): A register that holds temporary data
• The CPU can load instructions only from memory, so any program, in order to run, must be stored there.
• Main memory – the only large storage medium that the CPU can access directly. Implemented as random-access memory (RAM); typically volatile.
• ROM, cannot be changed, stores only static programs such as the bootstrap
program described earlier.
• All forms of memory provide an array of bytes. Each byte has its own address.
Interaction is achieved through a sequence of load or store instructions to specific
memory addresses.
• The load instruction moves a byte or word from main memory to an internal
register within the CPU.
• The store instruction moves the content of a register to main memory.
• Each location contains a bit pattern that can be interpreted as either an instruction
or data.
• Cache memory is checked first to determine if information is there
o If it is, information used directly from the cache (fast)
o If not, data copied to cache and used there
Secondary Storage
• Secondary storage – extension of main memory that provides large nonvolatile
storage capacity (hold large quantities of data permanently)
• This type of storage can be classified into two distinct types:
• Mechanical. Such as HDDs, optical disks, and magnetic tape.
• Electrical. Such as flash memory, FRAM, NRAM, and SSD. Electrical storage will be
referred to as nonvolatile memory (NVM).
• Mechanical storage is generally larger and less expensive per byte than electrical
storage. Conversely, electrical storage is typically costly, smaller, and faster than
mechanical storage.
• Hard disks – Most programs (system and application) are stored on a disk until
they are loaded into memory.
Performance of Various Levels of Storage
• The movement of information between levels of a storage hierarchy may be either
explicit or implicit, depending on the hardware design and the controlling
operating-system software.
• For instance, data transfer from cache to CPU and registers is usually a hardware
function, with no operating-system intervention. In contrast, transfer of data from
disk to memory is usually controlled by the operating system.
Direct Memory Access
• Direct memory access (DMA) is used for high-speed I/O devices able to transmit
information at close to memory speeds
• Device controller transfers blocks of data from buffer storage directly to main
memory without CPU intervention
• Only one interrupt is generated per block, to tell the device driver that the operation has completed, rather than one interrupt per byte.
• While the device controller is performing these operations, the CPU is available to
accomplish other work
Instruction Execution
A program consists of a set of instructions stored in memory:
1. processor reads (fetches) instructions from memory
2. processor executes each instruction
Program execution consists of repeating the process of instruction fetch and instruction
execution. Instruction execution may involve several operations and depends on the
nature of the instruction.
Instruction Categories
The fetched instruction is loaded into the instruction register (IR). The instruction
contains bits that specify the action the processor is to take. The processor interprets the
instruction and performs the required action. In general, these actions fall into four categories:
• Processor-memory: Data may be transferred from processor to memory or from
memory to processor.
• Processor-I/O: Data may be transferred to or from a peripheral device by
transferring between the processor and an I/O module.
• Data processing: The processor may perform some arithmetic or logic operation on data.
• Control: An instruction may specify that the sequence of execution be altered. For
example, the processor may fetch an instruction from location 149, which specifies
that the next instruction will be from location 182. The processor sets the program
counter to 182. Thus, on the next fetch stage, the instruction will be fetched from
location 182 rather than 150.
Characteristics of a Hypothetical Machine
Example of Program Execution
In this example, three instruction cycles, each consisting of a fetch stage and an execute
stage, are needed to add the contents of location 940 to the contents of 941.
1.3 Computer-System Architecture
Single-Processor Systems
• Most systems use a single general-purpose processor capable of executing
instructions from user processes.
• Also, these systems have special-purpose processors which may come in the form
of device-specific processors, such as disk, keyboard, and graphics controllers.
• All of these special-purpose processors run a limited instruction set and do not run
user processes.
• For example, PCs contain a microprocessor in the keyboard to convert the
keystrokes into codes to be sent to the CPU.
• The use of special-purpose microprocessors is common and does not turn a single-processor system into a multiprocessor.
• If there is only one general-purpose CPU, then the system is a single-processor system.
Multiprocessor Systems
• On modern computers, from mobile devices to servers, multiprocessor systems
now dominate the landscape of computing.
• Multiprocessor (parallel) systems are growing in use and importance. They have two or more processors in close communication, sharing the computer bus and the clock, memory, and peripheral devices.
• Advantages include:
o Increased throughput: Increasing the number of processors gets more work
done in less time.
o Economy of scale: They cost less than equivalent multiple single-processor systems because they can share peripherals and mass storage.
• Symmetric Multiprocessing (SMP) – each CPU processor performs all tasks,
including operating system functions and user processes. All processors are peers;
no boss–worker relationship exists between processors. All modern operating
systems—including Windows, Mac OS X, and Linux—now provide support for SMP.
A Multicore Design
• A recent trend in CPU design is to include multiple computing cores on a single
chip. Such multiprocessor systems are termed multicore.
• They can be more efficient than multiple chips with single cores because on-chip
communication is faster than between-chip communication. In addition, one chip
with multiple cores uses significantly less power than multiple single-core chips.
• It is important to note that while multicore systems are multiprocessor systems, not all multiprocessor systems are multicore.
Clustered Systems
• They are like multiprocessor systems, but composed of multiple systems working together.
• They are composed of multiple nodes, joined together. Each node may be a single
processor system or a multicore system
• They usually share storage via a storage-area network (SAN), which allows many systems to attach to a pool of storage.
• If the applications and their data are stored on the SAN, then the cluster software
can assign the application to run on any host that is attached to the SAN. If the
host fails, then any other host can take over.
1.5 Operating-System Operations
Multiprogramming & Multitasking
• Users want to run more than one program at a time. Multiprogramming increases CPU utilization, as well as keeping users satisfied, by organizing programs so that the CPU always has one to execute.
• Multiprogramming organizes jobs (code and data) so the CPU always has one to execute:
o A subset of the total jobs in the system is kept in memory.
o One job is selected and run via job scheduling. When it has to wait (for I/O, for example), the OS switches to another job.
• Multitasking is a logical extension of multiprogramming. In multitasking systems,
the CPU executes multiple processes by switching among them, but the switches
occur frequently, providing the user with a fast response time.
o Each user has at least one program executing in memory [process].
o If several jobs are ready to run at the same time, the CPU must choose among them [CPU scheduling].
o If processes don’t fit in memory, they are swapped in and out of main memory to the disk [swapping].
Dual-Mode Operation
• OS and its users share the hardware and software resources.
• An OS must ensure that an incorrect program cannot cause other programs or OS
itself, to execute incorrectly. In order to ensure the proper execution of the system, we must be able to distinguish between the execution of OS code and user-defined code.
• Dual-mode operation allows OS to protect itself and other programs.
• User mode and kernel mode (also called supervisor mode, system mode, or
privileged mode).
• Mode bit is added to the hardware of the computer to indicate the current mode:
kernel (0) or user (1).
o The hardware allows privileged instructions to be executed only in kernel
mode. If an attempt is made to execute a privileged instruction in user
mode, the hardware does not execute the instruction but rather treats it as
illegal and traps it to the operating system.
o Some examples of privileged instructions include switch to kernel mode, I/O
control, timer management, and interrupt management.
• Modern versions of the Intel CPU do provide dual-mode operation. Accordingly, most contemporary operating systems—such as Microsoft Windows 7, as well as Unix and Linux—take advantage of this dual-mode feature and provide greater protection for the OS.
1.6 Process Management
• A process is a program in execution. It is a unit of work within the system. Program
is a passive entity, process is an active entity.
• A compiler is a process. A word-processing program being run by an individual user
on a PC is a process. A system task, such as sending output to a printer, can also be
a process (or at least part of one).
• Process needs resources to accomplish its task
o CPU, memory, I/O, files, Initialization data
• Process termination requires reclaim of any reusable resources
• Single-threaded process has one program counter specifying location of next
instruction to execute
o Process executes instructions sequentially, one at a time, until completion
• Multi-threaded process has one program counter per thread
• Typically system has many processes, some user, some operating system running
concurrently on one or more CPUs
o Concurrency by multiplexing the CPUs among the processes / threads
Process Management Activities
The operating system is responsible for the following activities in connection with
process management:
Creating and deleting both user and system processes.
Suspending and resuming processes.
Providing mechanisms for process synchronization.
Providing mechanisms for process communication.
Providing mechanisms for deadlock handling
1.7 Memory Management
• To execute a program all (or part) of the instructions must be in memory.
• All (or part) of the data that is needed by the program must be in memory.
• Memory management determines what is in memory and when
o Optimizing CPU utilization and computer response to users.
• Memory management activities
o Keeping track of which parts of memory are currently being used and by whom.
o Deciding which processes (or parts thereof) and data to move into and out
of memory.
o Allocating and deallocating memory space as needed
1.8 Storage Management
• OS provides uniform, logical view of information storage
o Abstracts physical properties to logical storage unit – file
o Each medium is controlled by a device (e.g., disk drive, tape drive)
▪ Varying properties include access speed, capacity, data-transfer rate,
access method (sequential or random)
• File-System management
o Files usually organized into directories
o Access control on most systems to determine who can access what
o OS activities include
▪ Creating and deleting files and directories
▪ Primitives to manipulate files and directories
▪ Mapping files onto secondary storage
▪ Backing up files onto stable (non-volatile) storage media
1.9 Protection and Security
• If a computer system has multiple users and allows the concurrent execution of
multiple processes, then access to data must be regulated.
• For example, memory-addressing hardware ensures that a process can execute
only within its own address space.
• Protection – any mechanism for controlling access of processes or users to
resources defined by the OS. This mechanism must provide means to specify the
controls to be imposed and to enforce the controls.
• Security – defense of the system against internal and external attacks
o Huge range, including worms, viruses, identity theft, theft of service
• Systems first distinguish among users, to determine who can do what
o User identities (user IDs, security IDs) include name and associated number,
one per user
o User ID then associated with all files, processes of that user to determine
access control
o Group identifier (group ID) allows set of users to be defined and controls
managed, then also associated with each process, file
1.12 Open-Source Operating Systems
• Open-source operating systems are those available in source-code format rather
than as compiled binary code (closed-source)
• With the source code in hand, a student can modify the operating system and then
compile and run the code to try out those changes, which is an excellent learning
tool.
• Linux is the most famous open-source OS, while Microsoft Windows is a well-known example of the opposite, closed-source approach.
• Apple’s Mac OS X and iOS operating systems comprise a hybrid approach. They contain an open-source kernel named Darwin yet include proprietary, closed-source components as well.
• Examples of open-source operating systems include GNU/Linux and BSD UNIX
(including core of Mac OS X), and many more
Chapter 2 Operating-System Structures
2.1 Operating-System Services
Operating System services for Users:
• User interface - Varies between Command-Line (CLI), Graphics User Interface
(GUI), and Batch. Some systems provide two or all three of these variations.
• Program execution - The system must be able to load a program into memory, to run that program, and to end its execution, either normally or abnormally.
• I/O operations - A running program may require I/O, which may involve a file or an
I/O device.
• File-system manipulation - Programs need to read and write files and directories, create and delete them, search them, and list file information.
• Communications – Processes may exchange information, on the same computer or
between computers over a network.
• Error detection – OS needs to be constantly aware of possible errors
o For each type of error, OS should take the appropriate action to ensure
correct and consistent computing.
Operating System services for Systems:
• Resource allocation - When multiple users or multiple jobs are running concurrently, resources must be allocated to each of them.
o Many types of resources - CPU cycles, main memory, file storage, I/O devices.
• Accounting - To keep track of which users use how much and what kinds of
computer resources.
• Protection and security - The owners of information stored in a multiuser or networked computer system may want to control use of that information, and concurrent processes should not interfere with each other.
o Protection involves ensuring that all access to system resources is controlled.
o Security of the system from outsiders requires user authentication, and extends to defending external I/O devices from invalid access attempts.
2.2 User Operating System Interfaces
Command Line interface (CLI) allows direct command entry:
• Some operating systems include the command interpreter in the kernel.
• Others, such as Windows and UNIX, treat the command interpreter as a special
program that is running when a job is initiated or when a user first logs on (on
interactive systems).
• Sometimes implemented in the kernel, sometimes by a systems program; sometimes commands are built-in, sometimes they are just the names of programs.
• The UNIX command to delete a file: rm file.txt
Graphical User Interface (GUI):
• Icons represent files, programs, actions, etc. Various mouse buttons over objects in
the interface cause various actions.
• Many systems now include both CLI and GUI interfaces
o Microsoft Windows is GUI with CLI “command” shell
o Apple Mac OS X is “Aqua” GUI interface with UNIX kernel underneath and
shells available
o Unix and Linux have CLI with optional GUI interfaces
2.3 System Calls
• All system resources are managed by the kernel. Any request from application that
involves access to any system resource must be handled by kernel code
• System calls provide an interface to the services made available by an operating
system. These calls are generally available as functions written in C and C++,
although certain low-level tasks (for example, tasks where hardware must be
accessed directly) may have to be written in assembly language.
• Mostly accessed by programs via a high-level Application Programming Interface
(API) rather than direct system call use
• The API specifies a set of functions that are available to an application
programmer, including the parameters that are passed to each function and the
return values the programmer can expect.
• Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based
systems (including virtually all versions of UNIX, Linux, and Mac OS X), and Java API
for the Java virtual machine (JVM)
2.4 Types of System Calls
• Process control (create process, terminate process, end, abort, load, execute, get
process attributes, set process attributes, wait for time, wait event, signal event,
allocate and free memory)
• File management (create file, delete file, open, close file, read, write, reposition,
get and set file attributes)
• Device management (request device, release device, read, write, reposition, get
device attributes, set device attributes, logically attach or detach devices)
• Information maintenance (get time or date, set time or date, get system data, set
system data, get and set process, file, or device attributes)
• Communications
o create, delete communication connection
o send, receive messages
o attach or detach remote devices
• Protection
o Control access to resources
o Get and set permissions
2.5 System Services
System services provide a convenient environment for program development and
execution. They can be divided into:
• File management
o These programs create, delete, copy, rename, print, dump, list, and generally manipulate files and directories.
• Status information
o These programs ask the system for info - date, time, amount of available
memory, disk space, number of users.
• File modification
o Text editors to create and modify files
o Special commands to search contents of files or perform transformations of the text
• Programming-language support
o Compilers, assemblers, debuggers and interpreters
• Program loading and execution
o Loaders, linkage editors, debugging systems for higher-level and machine languages
• Communications
o Provide the mechanism for creating virtual connections among processes, users, and computer systems
o Allow users to send messages to one another’s screens, browse web pages, send electronic-mail messages, log in remotely, transfer files from one machine to another
• Background services
o Launch at boot time
▪ Some for system startup, then terminate
▪ Some run from system boot to shutdown
o Provide facilities like disk checking, process scheduling, and error logging
• Application programs
o Include Web browsers, word processors, spreadsheets, compilers, …
o Run by users and not typically considered part of the OS
2.6 Operating-System Design and Implementation
Operating-System Design
• A problem in designing OS is to define goals and specifications.
• The design of the system will be affected by the choice of hardware and the type of system: time sharing, single user, multiuser, distributed, real time, or general purpose.
• The requirements can be divided into two basic groups:
o User goals – Users want certain obvious properties in a system. The system should be convenient to use, easy to learn, reliable, safe, and fast.
o System goals – The OS should be easy to design, implement, and maintain, as well as flexible, reliable, error-free, and efficient.
Operating-System Implementation
• Early operating systems were written in assembly language. Now, most are written in C or C++; in practice, usually a mix of languages:
o Lowest levels in assembly, main body in C, system programs in C++.
• The advantages of using a higher-level language for implementing OS:
o The code can be written faster, it is more compact, it is easier to understand
and debug, it is easier to move to some other hardware.
• MS-DOS was written in Intel 8088 assembly language. As a result, it runs natively only on the Intel X86 family of CPUs.
• Linux is written mostly in C and is available natively on a number of different CPUs,
including Intel X86, Oracle SPARC, and IBM PowerPC.
• The disadvantages of implementing an OS in a higher-level language are reduced
speed and increased storage requirements.
2.7 Operating-System Structure
• A system as large and complex as a modern operating system must be engineered
carefully if it is to function properly and be modified easily.
• A common approach is to partition the task into small components, or modules,
rather than have one single system.
• Each of these modules should be a well-defined portion of the system, with
carefully defined interfaces and functions.
• You may use a similar approach when you structure your programs: rather than
placing all of your code in the main() function, you instead separate logic into a
number of functions, clearly articulate parameters and return values, and then call
those functions from main().
• Various ways to structure an operating system:
1. Monolithic
2. Layered
3. Microkernel –Mach
4. Modules
5. Hybrid Systems
1- Monolithic Structure (original UNIX)
• The simplest structure for organizing an OS is no structure at all.
• That is, place all of the functionality of the kernel into a single, static binary file that runs in a single address space. This approach, known as a monolithic kernel, is a common technique for designing operating systems.
• UNIX consists of two separable parts: system programs and the kernel.
• The kernel consists of everything below the system-call interface and above the physical hardware. It provides the file system, CPU scheduling, memory management, and other OS functions; this is an enormous amount of functionality for one level.
Monolithic Structure for Linux
• The Linux operating system is based on UNIX. Applications typically use a standard C library when communicating with the system-call interface to the kernel.
• The Linux kernel is monolithic in that it runs entirely in kernel mode in a single address space, but it also has a modular design that allows the kernel to be modified during run time.
• Despite the apparent simplicity of monolithic kernels, they are difficult to implement and extend.
• Even so, despite these drawbacks, the speed and efficiency of monolithic kernels explain why we still see evidence of this structure in UNIX, Linux, and Windows.
2- Layered Approach
• The operating system is divided into a number of layers (levels). Layer 0 is the hardware and the highest (layer N) is the user interface.
• The main advantage is simplicity of construction and debugging.
• Each layer is implemented only with operations provided by lower-level layers. A layer needs to know only what these operations do; it does not need to know how they are implemented.
• A problem with layered implementations is that they tend to be less efficient than other types. For instance, when a user program executes an I/O operation, it executes a system call that is trapped to the I/O layer, which calls the memory-management layer, which calls the CPU-scheduling layer, which is then passed to the hardware.
3- Microkernel System Structure
• This method structures the operating system by removing all nonessential
components from the kernel and implementing them as system and user-level
programs. The result is a smaller kernel.
• Mac OS X kernel and QNX are examples of Microkernel
• The microkernel function is to provide communication between the client program
and the various services that are also running in user space.
• Unfortunately, the performance of microkernels can suffer due to increased
system-function overhead.
4- Modules
• The kernel has a set of core components and links in additional services via
modules, either at boot time or during run time.
• Overall, the approach is similar to the layered system in that each kernel section has defined, protected interfaces; but it is more flexible than a layered system, because any module can call any other module.
• The approach is also similar to the microkernel approach in that the primary
module has only core functions and knowledge of how to load and communicate
with other modules; but it is more efficient, because modules do not need to
invoke message passing in order to communicate
• This type of design is common in modern implementations of UNIX, such as Solaris,
Linux, and Mac OS X, as well as Windows.
5- Hybrid Systems
• Most modern operating systems are actually not one pure model
• Hybrid combines multiple approaches to address performance, security, usability
• Linux and Solaris kernels are monolithic because having OS in a single address
space provides very efficient performance. However, they are also modular, so
that new functionality can be dynamically added to kernel
• Windows is monolithic as well (primarily for efficient performance), but it retains
some behavior typical of microkernel systems, including providing support for
separate subsystems that run as user-mode processes
• Examples of hybrid systems are Android and iOS.
Chapter 3 Processes
3.1 Process Concept
Early computers allowed only one program to be executed at a time. This program
had complete control of the system and had access to all the system’s resources.
Modern computer systems allow multiple programs to be loaded into memory and
executed concurrently.
A system therefore consists of a collection of processes. These processes can
execute concurrently, with CPU (or CPUs) multiplexed among them.
Process – a program in execution; process execution must progress in sequential fashion.
A program is a passive entity such as a file containing a list of instructions stored on
disk (executable file). A process is an active entity, with a program counter
specifying the next instruction to execute and a set of associated resources.
A program becomes a process when an executable file is loaded into memory.
Process Parts
• Text section – the executable code
• Data section – global variables
• Heap – memory that is dynamically allocated during run time
• Stack – temporary data storage when invoking functions
• The sizes of the text and data sections are fixed
• The stack and heap sections can shrink and grow dynamically during program
execution. Each time a function is called, an activation record containing function
parameters, local variables, and the return address is pushed onto the stack; when
control is returned from the function, the activation record is popped from the stack.
• The heap will grow as memory is dynamically allocated, and will shrink when
memory is returned to the system.
• One program can be several processes
o A user may invoke many copies of the web browser program. Each of these
is a separate process; and although the text sections are equivalent, the
data, heap, and stack sections vary.
Process States
As a process executes, it changes state:
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to occur
ready: The process is waiting to be assigned to a processor
terminated: The process has finished execution
Only one process can be running on any processor at any instant while many processes
may be ready and waiting
Waiting State
• A process is put in the Waiting state if it requests something for which it must wait.
A request to the OS is usually in the form of a system service call; that is, a call
from the running program to a procedure that is part of the operating system
• For example, a process may request a service from the OS that the OS is not
prepared to perform immediately. It can request a resource, such as a file or a
shared section of virtual memory, that is not immediately available. Or the process
may initiate an action, such as an I/O operation, that must be completed before
the process can continue.
• When processes communicate with each other, a process may be in Waiting state
when it is waiting for another process to provide data or waiting for a message
from another process.
Process Control Block (PCB)
• Information about each process is represented in the OS by a process control block (PCB), which includes:
• Process state – running, waiting, etc.
• Program counter – the address of the next instruction to be
executed for this process.
• CPU registers – contents of all process-centric registers
• CPU scheduling information- priorities, scheduling queue pointers
• Memory-management information – memory allocated to the process
• Accounting information – CPU used, clock time elapsed since start,
time limits.
• I/O status information – I/O devices allocated to process, list of
open files.
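The fields above can be sketched as a simple record. The following Python dataclass is illustrative only — the field names and types are assumptions for teaching purposes, not any real kernel's PCB layout:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PCB:
    """Minimal sketch of a process control block (illustrative fields only)."""
    pid: int
    state: str = "new"                 # process state: new, ready, running, waiting, terminated
    program_counter: int = 0           # address of the next instruction to execute
    registers: dict = field(default_factory=dict)        # saved contents of CPU registers
    priority: int = 0                  # CPU-scheduling information
    memory_limits: tuple = (0, 0)      # memory-management information (base, limit)
    cpu_time_used: float = 0.0         # accounting information
    open_files: List[str] = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"
print(pcb.pid, pcb.state)   # 42 ready
```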
3.2 Process Scheduling
• The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization.
• The objective of time sharing is to switch the CPU among processes so frequently
that users can interact with each program while it is running.
• To meet these objectives, the process scheduler selects an available process
(possibly from a set of several available processes) for program execution on the
CPU. For a single-processor system, there will never be more than one running
process. If there are more processes, the rest will have to wait until the CPU is free
and can be rescheduled.
• The number of processes currently in memory is known as the degree of multiprogramming.
Scheduling Queues
• As processes enter the system, they are put into a ready queue, where they are
ready and waiting to execute on a CPU’s core.
• This queue is generally stored as a linked list. A ready-queue header contains
pointers to the first and final PCBs in the list. Each PCB includes a pointer field that
points to the next PCB in the ready queue.
• The system also includes other queues. When a process is allocated a CPU, it
executes for a while and eventually terminates, is interrupted, or waits for the
occurrence of a particular event, such as the completion of an I/O request.
Suppose the process makes an I/O request to a device such as a disk. Since devices
run significantly slower than processors, the process will have to wait for the I/O to
become available. Processes that are waiting for a certain event to occur — such as
completion of I/O — are placed in a wait queue
Representation of Process Scheduling
• A new process is initially put in the ready queue. It waits there until it is selected
for execution. Once the process is allocated the CPU and is executing, one of
several events could occur:
1. The process could issue an I/O request and then be placed in an I/O queue.
2. The process could create a child process and wait for the child to terminate.
3. The process could be removed from the CPU, as a result of an interrupt, and be
put back in the ready queue.
• In the first two cases, the process eventually switches from the waiting state to the
ready state and is then put back in the ready queue.
• A process continues this cycle until it terminates, at which time it is removed from
all queues and has its PCB and resources deallocated.
Context Switch: CPU Switch From Process to Process
• Interrupts cause the OS to change a CPU from its current task and to run a kernel routine.
• When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process via a context switch
• Context-switch time is overhead; the system does no useful work while switching
3.3 Operations on Processes
Process Creation
• Generally, a process is identified and managed via a unique process identifier (pid),
which is typically an integer number. The pid provides a unique value for each
process in the system, and it can be used as an index to access various attributes of
a process within the kernel.
• During the course of execution, a process may create several new processes. The
creating process is called a parent process, and the new processes are called the
children of that process. Each of these new processes may in turn create other
processes, forming a tree of processes
Reasons for Process Creation
Resource sharing options
When a process creates a child process, that child process will need certain resources
(CPU time, memory, files, I/O devices). A child process may be able to obtain its
resources directly from the operating system, or it may be constrained to a subset of the
resources of the parent process.
• Parent and children share all resources
• Children share a subset of the parent's resources
• Parent and child share no resources (A child process obtains its resources directly
from the operating system)
Execution options
• The parent continues to execute concurrently with its children.
• The parent waits until some or all of its children have terminated
Address space
• The child process is a duplicate of the parent process (it has the same program and
data as the parent).
• Child has a program loaded into it
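On POSIX systems, these creation options can be demonstrated with fork(): the child starts as a duplicate of the parent's address space, and the parent may wait for it. A minimal sketch using Python's os.fork (POSIX-only; the exit code 7 is arbitrary):

```python
import os

pid = os.fork()                  # duplicate the calling process
if pid == 0:
    # Child: initially a copy of the parent (same program and data).
    print(f"child pid={os.getpid()}")
    os._exit(7)                  # terminate immediately with an arbitrary status
else:
    # Parent: wait until the child terminates (execution option 2 above).
    _, status = os.waitpid(pid, 0)
    print("child exit status:", os.WEXITSTATUS(status))
```

A child could instead replace its duplicated image with a new program (the "program loaded into it" option) via one of the exec family of calls.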
Process Termination
A process terminates when it finishes executing its final statement and then asks
the OS to delete it using the exit() system call.
All the resources of the process—including physical and virtual memory, open files,
and I/O buffers—are deallocated by the OS.
A parent may terminate the execution of one of its children for a variety of
reasons, such as these:
o The child has exceeded its usage of some of the resources that it has been
allocated. (To determine whether this has occurred, the parent must have a
mechanism to inspect the state of its children.)
o The task assigned to the child is no longer required.
o Some systems do not allow a child to exist if its parent has terminated. In
such systems, if a process terminates (either normally or abnormally), then
all its children must also be terminated.
3.4 Interprocess Communication
• Processes executing concurrently in the operating system may be either
independent processes or cooperating processes.
• Independent process does not share data with any other processes executing in
the system.
• Cooperating process can affect or be affected by other processes executing in the
system. Any process that shares data with other processes is a cooperating process.
• Reasons for cooperating processes:
o Information sharing- several applications may be interested in the same
piece of information
o Computation speedup: If we want to run a task faster, we must break it into
subtasks, each of which will be executing in parallel with others
o Modularity. We may want to construct the system in a modular fashion,
dividing the system functions into separate processes.
• Cooperating processes need interprocess communication (IPC) mechanism that
will allow them to exchange data and information.
• Two models of IPC Shared memory and Message passing
Communications Models
• Shared memory can be faster than message passing, since message-passing systems
are typically implemented using system calls and thus require kernel intervention.
Once shared memory is established, all accesses are treated as routine memory
accesses, and no assistance from the kernel is required.
• Message passing is useful for exchanging smaller amounts of data, because no
conflicts need be avoided. Message passing is also easier to implement in a
distributed system than shared memory.
Shared Memory
• Processes can exchange information by reading and writing data in shared areas.
The form of the data and the location are determined by processes and are not
under the OS control. The processes are also responsible for ensuring that they are
not writing to the same location simultaneously.
• Example for cooperating processes, producer process produces information that is
consumed by a consumer process
• For example, a compiler may produce assembly code that is consumed by an
assembler. The assembler, in turn, may produce object modules that are
consumed by the loader.
• Another example, the client–server. A server is a producer and a client is a
consumer. A web server produces (provides) HTML files and images, which are
consumed (read) by the client web browser requesting the resource
• unbounded-buffer - Places no practical limit on the size of the buffer. The
consumer may have to wait for new items, but the producer can always produce
new items
• bounded-buffer - There is a fixed buffer size. The consumer must wait if the buffer
is empty, and the producer must wait if the buffer is full
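The bounded-buffer case can be sketched with Python's thread-safe queue.Queue, whose maxsize makes the producer block when the buffer is full and the consumer block when it is empty (the buffer size of 3 is chosen arbitrarily):

```python
import queue
import threading

BUF_SIZE = 3                         # bounded buffer: at most 3 items at a time
buf = queue.Queue(maxsize=BUF_SIZE)
produced, consumed = list(range(10)), []

def producer():
    for item in produced:
        buf.put(item)                # blocks while the buffer is full

def consumer():
    for _ in produced:
        consumed.append(buf.get())   # blocks while the buffer is empty

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
print(consumed)                      # all ten items arrive, in FIFO order
```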
Message Passing
• Processes communicate with each other without shared memory.
• It is particularly useful in a distributed environment, where the communicating
processes may reside on different computers connected by a network. For
example, an Internet chat program could be designed so that chat participants
communicate with one another by exchanging messages
• IPC facility provides two operations:
o send(message) and receive(message)
• If processes P and Q wish to communicate, they need to:
o Establish a communication link between them
o Exchange messages via send/receive
• Implementation of communication link
o Physical: Shared memory, Hardware bus, Network
o Logical:
▪ Naming: Direct or indirect communication
▪ Synchronous or asynchronous communication
▪ Automatic or explicit buffering
Naming: Direct Communication
Processes must name each other explicitly:
send (P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
Properties of a communication link
o Links are established automatically
o A link is associated with exactly one pair of communicating processes
o Between each pair of communicating processes, there exists exactly one link
• Disadvantage of Direct Communication
o changing the identifier of a process may necessitate examining all other
process definitions. All references to the old identifier must be found, so
that they can be modified to the new identifier
Naming: Indirect Communication
Messages are directed and received from mailboxes
Each mailbox has a unique id
Two processes can communicate only if they have a shared mailbox
Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
Properties of a communication link
o A link is established between a pair of processes only if both members of the
pair have a shared mailbox
o A link may be associated with more than two processes
o Between each pair of communicating processes, a number of different links
may exist, with each link corresponding to one mailbox.
A mailbox may be owned either by a process or by the operating system. If the
mailbox is owned by a process (that is, the mailbox is part of the address space of
the process), then we distinguish between the owner (which can only receive
messages through this mailbox) and the user (which can only send messages to the mailbox).
When a process that owns a mailbox terminates, the mailbox disappears. Any
process that subsequently sends a message to this mailbox must be notified that
the mailbox no longer exists.
The process that creates a new mailbox is that mailbox’s owner by default. Initially,
the owner is the only process that can receive messages through this mailbox.
However, the ownership and receiving privilege may be passed to other processes
through appropriate system calls. Of course, this provision could result in multiple
receivers for each mailbox.
• OS must provide a mechanism that allows a process to do the operations
o create a new mailbox
o send and receive messages through mailbox
o Delete a mailbox
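These mailbox operations can be sketched as a small registry. This is an illustrative user-level model (the class and method names are hypothetical), not an OS API:

```python
import queue

class MailboxRegistry:
    """Illustrative sketch of the mailbox operations an OS might provide."""
    def __init__(self):
        self._boxes = {}
    def create(self, name):
        self._boxes[name] = queue.Queue()   # new, empty mailbox
    def send(self, name, msg):
        self._boxes[name].put(msg)          # send(A, message)
    def receive(self, name):
        return self._boxes[name].get()      # receive(A, message)
    def delete(self, name):
        del self._boxes[name]               # mailbox disappears

mb = MailboxRegistry()
mb.create("A")
mb.send("A", "hello")
print(mb.receive("A"))   # hello
mb.delete("A")
```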
• Message passing may be either blocking or non-blocking
• Blocking is considered synchronous
o Blocking send -- The sending process is blocked until the message is received
by the receiving process or by the mailbox
o Blocking receive -- the receiver is blocked until a message is available
• Non-blocking is considered asynchronous
• Non-blocking send -- the sender sends the message and continues
• Non-blocking receive -- the receiver receives
o A valid message, or
o Null message
Buffering
• Whether communication is direct or indirect, messages exchanged by communicating
processes reside in a temporary queue attached to the link. Such queues can be
implemented in one of three ways:
• 1. Zero capacity – The queue has a maximum length of zero; thus, the link cannot
have any messages waiting in it. In this case, the sender must block until the
recipient receives the message
• 2. Bounded capacity – The queue has finite length n; thus, at most n messages can
reside in it. If the queue is not full when a new message is sent, the message is
placed in the queue (either the message is copied or a pointer to the message is
kept), and the sender can continue execution without waiting. The link’s capacity is
finite, however. If the link is full, the sender must block until space is available in
the queue.
• 3. Unbounded capacity The queue’s length is potentially infinite; thus, any number
of messages can wait in it. The sender never blocks
Chapter 6: CPU Scheduling
6.1 Basic Concepts
• OS switches CPU among processes to make the computer more productive
• In a single-processor system, only one process can run at a time. Others must wait
until the CPU is free and can be rescheduled.
• The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization.
• A process is executed until it must wait, typically for the completion of some I/O
request. In a simple computer system, the CPU then just sits idle. All this waiting
time is wasted. With multiprogramming, several processes are kept in memory at
one time. When one process has to wait, OS takes the CPU away from that process
and gives the CPU to another process. This pattern continues.
Alternating Sequence of CPU and I/O Bursts
CPU–I/O Burst Cycle:
The success of CPU scheduling depends on an observed
property of processes: process execution consists of a cycle
of CPU execution and I/O wait. Processes alternate
between two states. Process execution begins with a CPU
burst. That is followed by an I/O burst, which is followed by
another CPU burst, then another I/O burst, and so on.
CPU Scheduler
• Short-term scheduler selects from among the
processes in ready queue, and allocates CPU to that
process. Queue may be ordered in various ways
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state (for example, as the result of an I/O
request or a call of wait() for the termination of a child process)
2. Switches from running to ready state (ex. when an interrupt occurs)
3. Switches from waiting to ready (ex. at completion of I/O)
• Under nonpreemptive scheduling, once the CPU has been allocated to a process,
the process keeps the CPU until it releases the CPU either by terminating or by
switching to the waiting state.
• Scheduling under circumstances 2 and 3 is preemptive; scheduling under
circumstance 1 (and when a process terminates) is nonpreemptive
• Microsoft Windows 3.x used nonpreemptive scheduling, while Windows 95 and all
subsequent versions of Windows use preemptive scheduling.
• The Mac OS X for the Macintosh also uses preemptive scheduling.
The Dispatcher
• The Dispatcher is the module that gives control of the CPU to the process selected
by the short-term scheduler; this involves:
o switching context
o switching to user mode
o jumping to the proper location in the user program to restart that program
• The dispatcher should be as fast as possible, since it is invoked during every
process switch
• Dispatch latency – time it takes for the dispatcher to stop one process and start
another running.
6.2 Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible. In a real system, it should range
from 40 percent (for a lightly loaded system) to 90 percent (for a heavily loaded system)
• Throughput – number of processes that are completed per time unit. For long
processes, this rate may be one process per hour; for short transactions, it may be
ten processes per second
• Turnaround time – The interval from the time of submission of a process to the
time of completion. Turnaround time is the sum of the periods spent waiting to get
into memory, waiting in the ready queue, executing on the CPU, and doing I/O. It is
limited by the speed of the output device
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time it takes from when a request was submitted until
the first response is produced, not output (for time-sharing)
• It is desirable to Maximize CPU utilization and throughput and to Minimize
turnaround time, waiting time, and response time.
6.3 Scheduling Algorithms
First- Come, First-Served (FCFS) Scheduling
• The process that requests the CPU first is allocated the CPU first.
• The FCFS scheduling algorithm is nonpreemptive – once the CPU has been given to a
process, it cannot be preempted until that process completes its CPU burst
• The FCFS algorithm is particularly troublesome for time-sharing systems, where it is
important that each user get a share of the CPU at regular intervals
Example: Consider the following set of processes that arrive at time 0, with the length of
the CPU burst given in milliseconds. Suppose that the processes arrive in the order:
𝑃1 , 𝑃2 , 𝑃3
Waiting time = start running time – Arrival time
Waiting time for 𝑃1 = 0; 𝑃2 = 24; 𝑃3 = 27
Average waiting time: (0+24+27)/3 = 17
Suppose that the processes arrive in the order: 𝑃2 , 𝑃3 , 𝑃1
The Gantt chart for the schedule is:
Waiting time for 𝑃1 = 6; 𝑃2 = 0; 𝑃3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
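The two orderings can be checked with a few lines of Python (all processes arrive at time 0, so each process's waiting time is simply the sum of the bursts scheduled before it):

```python
def fcfs_waiting_times(bursts):
    """FCFS: waiting time = start time - arrival time (arrivals all at 0)."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # each process starts when all earlier ones finish
        clock += b
    return waits

# Burst lengths from the example: P1 = 24, P2 = 3, P3 = 3 (ms)
print(fcfs_waiting_times([24, 3, 3]))   # order P1,P2,P3 -> [0, 24, 27], average 17
print(fcfs_waiting_times([3, 3, 24]))   # order P2,P3,P1 -> [0, 3, 6], average 3
```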
Shortest-Job-First (SJF) Scheduling
• This algorithm associates with each process the length of the process’s next CPU
burst. When the CPU is available, it is assigned to the process that has the smallest
next CPU burst.
• If the next CPU bursts of two processes are the same, FCFS scheduling is used to
break the tie.
• SJF is nonpreemptive – once the CPU is given to a process, it cannot be preempted
until that process completes its CPU burst
• SJF is optimal – gives minimum average waiting time for a given set of processes.
The Gantt chart for the schedule is:
Waiting time for 𝑃1 = 0; 𝑃2 = 6; 𝑃3 = 3; 𝑃4 = 4
Average waiting time = (0 + 6 + 3 + 4)/4 = 3.25
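Since the burst lengths of this example are not reproduced here, the sketch below uses an assumed set of bursts (P1 = 6, P2 = 8, P3 = 7, P4 = 3 ms, a common textbook data set) to illustrate nonpreemptive SJF:

```python
def sjf_waiting_times(bursts):
    """Nonpreemptive SJF: run processes in order of burst length (arrivals at 0)."""
    order = sorted(bursts, key=lambda p: bursts[p])   # shortest burst first
    waits, clock = {}, 0
    for p in order:
        waits[p] = clock          # waiting time = time spent before starting
        clock += bursts[p]
    return waits

# Assumed illustrative bursts, not this document's table (ms)
bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
w = sjf_waiting_times(bursts)
print(w)                          # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(w.values()) / len(w))   # 7.0
```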
Shortest-remaining-time-first Scheduling SRTF
In SJF, the next CPU burst of the newly arrived process may be shorter than what is left of
the currently executing process. Shortest-remaining-time-first (SRTF) algorithm will
preempt the currently executing process.
SRTF scheduling algorithm is a preemptive version of SJF.
The Gantt chart for the schedule is:
Waiting time for 𝑃1 = (0 − 0) + (11 − 2) = 9; 𝑃2 = (2 − 2) + (5 − 4) = 1; 𝑃3 = (4 − 4) = 0;
𝑃4 = (7 − 5) = 2
Average waiting time = (9 + 1 + 0 + 2)/4 = 3
Priority Scheduling
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority (smallest integer =
highest priority). It can be preemptive or nonpreemptive
• SJF and SRTF are priority scheduling where priority is the inverse of predicted next
CPU burst time
• Priorities can be defined either internally or externally.
• Internal priorities use some measurable quantities to compute the priority of a
process. For example, time limits, memory requirements, and number of open files
have been used in computing priorities.
• External priorities are set by criteria outside OS, such as the importance of the
process, the type and amount of funds being paid for computer use, the
department sponsoring the work, and other, often political factors.
• Problem  Starvation (indefinite blocking) – low priority processes may never
execute. A process that is ready to run but waiting for the CPU can be considered
• Solution  Aging – involves regularly increasing the priority of processes that wait
in the system for a long time. For example, increase the priority of a waiting
process by 1 every 15 minutes.
Priority scheduling Gantt Chart:
Average waiting time = 8.2 msec
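The 8.2 ms figure is consistent with the classic five-process example (bursts 10, 1, 2, 1, 5 ms with priorities 3, 1, 4, 5, 2); assuming that data set, a nonpreemptive sketch:

```python
def priority_waiting_times(procs):
    """Nonpreemptive priority scheduling: smallest priority number runs first."""
    order = sorted(procs, key=lambda p: procs[p][1])   # by priority
    waits, clock = {}, 0
    for p in order:
        waits[p] = clock
        clock += procs[p][0]      # run this process's burst to completion
    return waits

# (burst, priority) per process -- assumed classic data set matching the 8.2 ms average
procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}
w = priority_waiting_times(procs)
print(sum(w.values()) / len(w))   # 8.2
```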
Round Robin (RR)
• The round-robin (RR) scheduling algorithm is designed especially for time-sharing
systems. It is similar to FCFS scheduling, but preemption is added to enable the
system to switch between processes.
• Each process gets a small unit of CPU time (time quantum q), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added to
the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units at once. No
process waits more than (n-1)q time units.
• Timer interrupts every quantum to schedule next process.
Example with Time Quantum = 4:
The Gantt chart is:
The average waiting time is ((10-4) + (4-0) + (7-0))/3 = 17/3 = 5.66 milliseconds.
Typically, higher average turnaround than SJF, but better response
Example with Time Quantum = 3:
The Gantt chart is:
Waiting time is : 𝑃1 = (0 + 2 + 6) = 8; 𝑃2 = 1; 𝑃3 = (2 + 5 + 2 + 0) = 9; 𝑃4 =
(4 + 5) = 9
Average waiting time = 27/4 = 6.75
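Round robin is straightforward to simulate. The sketch below reproduces the quantum-4 example above (bursts P1 = 24, P2 = 3, P3 = 3 ms), giving the waiting times 6, 4, and 7:

```python
from collections import deque

def rr_waiting_times(bursts, q):
    """Round robin (arrivals at 0): waiting time = completion time - burst."""
    remaining = dict(bursts)
    ready = deque(bursts)               # FIFO ready queue of process names
    clock, completion = 0, {}
    while ready:
        p = ready.popleft()
        run = min(q, remaining[p])      # run for at most one quantum
        clock += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = clock
        else:
            ready.append(p)             # preempted: back to the tail of the queue
    return {p: completion[p] - bursts[p] for p in bursts}

w = rr_waiting_times({"P1": 24, "P2": 3, "P3": 3}, q=4)
print(w)                        # {'P1': 6, 'P2': 4, 'P3': 7}
print(sum(w.values()) / 3)      # 17/3, about 5.67 ms
```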
Time Quantum and Context Switch Time
Performance of RR
q large  FIFO
q small  q must be large with respect to context switch, otherwise overhead is too high
Most modern systems have time quanta ranging from 10 to 100 milliseconds. The time
required for a context switch is typically less than 10 microseconds; so, the context-switch time is a small fraction of the time quantum.
Consider the scheduling algorithms for this set of processes. Which algorithm would give
the minimum average waiting time? RR (q = 2 millisecond)
Chapter 7: Deadlocks
• A process requests resources; if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is never again able to
change state, because the resources it has requested are held by other waiting processes.
• Deadlock: A set of blocked processes each holding a resource and waiting to
acquire a resource held by another process in the set. In a deadlock, processes
never finish executing.
o A system has 2 disk drives. P1 and P2 each hold one disk drive and each
needs another one.
o A system with three CD RW drives. Suppose each of three processes holds
one of these CD RW drives. If each process now requests another drive, the
three processes will be in a deadlocked state. Each is waiting for the event
“CD RW is released,” which can be caused only by one of the other waiting processes.
7.1 System Model
• A system consists of a finite number of resources to be distributed among a
number of competing processes
• Resource types R1 , R2 , . . ., Rm: CPU cycles, memory space, I/O devices
• Each resource type Ri has Wi instances. If a system has two CPUs, then the
resource type CPU has two instances. Similarly, the resource type printer may have
five instances.
• A process must request a resource before using it and must release the resource
after using it. Each process utilizes a resource as follows:
o Request: The process requests the resource.
o Use: The process can operate on the resource (for example, if the resource is
a printer, the process can print on the printer).
o Release: The process releases the resource
• The request and release of resources may be system calls.
• Examples are:
o 1) device: request() and release()
o 2) file: open() and close()
o 3) memory: allocate() and free().
7.2 Deadlock Characterization
Necessary Conditions for Deadlock
Deadlock can arise if four conditions hold at the same time:
• Mutual exclusion: only one process at a time can use a resource. If another
process requests that resource, the requesting process must be delayed until the
resource has been released.
• Hold and wait: a process holding at least one resource is waiting to acquire
additional resources held by other processes
• No preemption: a resource can be released only by the process holding it, after
that process has completed its task
• Circular wait: there exists a set {𝑃0 , 𝑃1 , … , 𝑃𝑛 } of waiting processes such that 𝑃0 is
waiting for a resource that is held by 𝑃1 , 𝑃1 is waiting for a resource that is held by
𝑃2 , …, 𝑃𝑛–1 is waiting for a resource that is held by 𝑃𝑛 , and 𝑃𝑛 is waiting for a
resource that is held by 𝑃0 .
Resource-Allocation Graph
A set of vertices V and a set of edges E.
• V is partitioned into two types:
o P = {P1 , P2 , …, Pn }, the set consisting of all the processes in the system
o R = {R1 , R2 , …, Rm}, the set consisting of all resource types in the system
• request edge – directed edge Pi → Rj
• assignment edge – directed edge Rj → Pi
o Process
o Resource Type with 4 instances
o Pi requests instance of Rj
o Pi is holding an instance of Rj
Example of a Resource Allocation Graph
P = {P1, P2, P3}; R = {R1, R2, R3, R4}; E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
Process States:
• Process P1 is holding an instance of resource type R2 and is
waiting for an instance of resource type R1.
• Process P2 is holding an instance of R1 and an instance of R2
and is waiting for an instance of R3.
• Process P3 is holding an instance of R3.
Graph With A Cycle
If graph contains no cycles ⇒ no deadlock
If graph contains a cycle ⇒
if only one instance per resource type, then deadlock
if several instances per resource type, possibility of deadlock
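Checking a resource-allocation graph for deadlock thus reduces to cycle detection. A DFS sketch over the edge set E of the example above (adding the hypothetical extra edge P3 → R2 creates the cycle P1 → R1 → P2 → R3 → P3 → R2 → P1):

```python
def has_cycle(edges):
    """DFS cycle detection over a directed resource-allocation graph."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on current path / finished
    color = {}
    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            c = color.get(v, WHITE)
            if c == GRAY:              # back edge: a cycle exists
                return True
            if c == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False
    return any(dfs(u) for u in graph if color.get(u, WHITE) == WHITE)

# Edge set E from the example graph above
E = [("P1", "R1"), ("P2", "R3"), ("R1", "P2"),
     ("R2", "P2"), ("R2", "P1"), ("R3", "P3")]
print(has_cycle(E))                    # False: no cycle, so no deadlock
print(has_cycle(E + [("P3", "R2")]))   # True: cycle, so possible deadlock
```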
7.3 Methods for Handling Deadlocks
• Ensure that the system will never enter a deadlock state:
o Deadlock prevention
o Deadlock avoidance
• Allow the system to enter a deadlock state and then recover
• Ignore the problem and pretend that deadlocks never occur in the system; used by
most operating systems, including UNIX.
7.4 Deadlock Prevention
By ensuring that at least one of these conditions cannot hold, we can prevent the
occurrence of a deadlock.
Side effects of preventing deadlocks are low device utilization and reduced system throughput.
• Mutual Exclusion
o must hold for non-sharable resources and not required for sharable
resources (e.g., If several processes attempt to open a read-only file at the
same time, they can be granted simultaneous access to the file)
• Hold and Wait – must guarantee that whenever a process requests a resource, it
does not hold any other resources
o Require process to request and be allocated all its resources before it begins
o Two disadvantages:
▪ Low resource utilization;
▪ starvation possible: A process that needs several popular resources
may have to wait indefinitely
• No Preemption –
• If a process that is holding some resources requests another resource that cannot
be immediately allocated to it, then all resources currently being held are released.
Process will be restarted only when it can regain its old resources, as well as the
new ones that it is requesting
• This protocol is often applied to resources whose state can be easily saved and
restored later, such as CPU registers and memory space.
• Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration: Ex: F(tape drive)
= 1, F(disk drive) = 5, F(printer) = 12
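The total-ordering rule can be sketched with two locks that every thread acquires in the same fixed order, so a circular wait can never form (the lock names and ordering F(lock_a) = 1 < F(lock_b) = 2 are illustrative):

```python
import threading

# Two resources with an imposed total order: always acquire lock_a before lock_b.
lock_a, lock_b = threading.Lock(), threading.Lock()
log = []

def worker(name):
    # Every thread requests resources in increasing order of enumeration,
    # so the circular-wait condition can never hold.
    with lock_a:
        with lock_b:
            log.append(name)

t1 = threading.Thread(target=worker, args=("T1",))
t2 = threading.Thread(target=worker, args=("T2",))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(log))   # ['T1', 'T2'] -- both threads finished; no deadlock
```

If one thread instead acquired lock_b first while the other held lock_a, the two could block each other forever; the ordering rule rules that interleaving out.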
7.5 Deadlock Avoidance
• An alternative method for avoiding deadlocks is to require additional information
about how resources are to be requested.
• Simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need
• The deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition
• Resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes
Safe, Unsafe, Deadlock State
• A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock.
• System is in safe state if there exists a sequence of ALL the processes in the
systems such that for each Pi , the resources that Pi can still request can be
satisfied by the currently available resources plus the resources held by all Pj, with j < i
• If a system is in safe state ⇒ no deadlocks
• If a system is in unsafe state ⇒ possibility of deadlock
• Avoidance ⇒ ensure that a system will never enter an
unsafe state.
Safe State Example
Consider a system with 12 magnetic tape drives and three
processes: P0, P1, and P2. Then, there are 3 free tape drives.
The system is in a safe state: the sequence <P1, P0, P2> satisfies the safety condition.
Process P1 can get all its tape drives and then return them (the system will then have 5
available tape drives); then process P0 can get all its tape drives and return them (the
system will then have 10 available tape drives); and finally process P2 can get all its tape
drives and return them (the system will then have all 12 tape drives available).
Unsafe State Example
A system can go from a safe state to an unsafe state. Suppose that process P2 requests
and is allocated one more tape drive. The system is now in an unsafe state.
At this point, only process P1 can be allocated all its tape drives. When it returns them,
the system will have only 4 available tape drives. Process P0 may request its remaining 5
tape drives; since they are unavailable, process P0 must wait. Similarly, process P2 may
request 6 tape drives and have to wait, resulting in a deadlock. Our mistake was in
granting the request from process P2 for one more tape drive. If we had made P2 wait
until either of the other processes had finished and released its resources, then we could
have avoided the deadlock.
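The tape-drive example can be checked mechanically. The sketch below applies the safety condition to a single resource type, using the allocations and maxima implied above (P0 holds 5 of a maximum 10, P1 holds 2 of 4, P2 holds 2 of 9, out of 12 drives):

```python
def is_safe(total, alloc, maximum):
    """Safety check for a single resource type (here: 12 tape drives)."""
    free = total - sum(alloc.values())
    need = {p: maximum[p] - alloc[p] for p in alloc}
    finished = set()
    while len(finished) < len(alloc):
        # Find a process whose remaining need fits in the free drives.
        runnable = [p for p in alloc if p not in finished and need[p] <= free]
        if not runnable:
            return False          # no process can finish: unsafe state
        p = runnable[0]
        free += alloc[p]          # it runs to completion and returns its drives
        finished.add(p)
    return True

alloc   = {"P0": 5, "P1": 2, "P2": 2}    # drives currently held
maximum = {"P0": 10, "P1": 4, "P2": 9}   # maximum needs
print(is_safe(12, alloc, maximum))                # True: safe (<P1, P0, P2> works)
print(is_safe(12, {**alloc, "P2": 3}, maximum))   # False: one more drive to P2 is unsafe
```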
Avoidance Algorithms
Given the concept of a safe state, we can define avoidance algorithms that ensure
that the system will never deadlock. The idea is simply to ensure that the system
will always remain in a safe state.
Initially, the system is in a safe state. Whenever a process requests a resource that
is currently available, the system must decide whether the resource can be
allocated immediately or whether the process must wait. The request is granted
only if the allocation leaves the system in a safe state.
Single instance of a resource type
o Use a resource-allocation graph
Multiple instances of a resource type
o Use the banker’s algorithm
Banker’s Algorithm
• Multiple instances
• The name was chosen because the algorithm could be used in a banking system to
ensure that the bank never allocated its available cash in such a way that it could
no longer satisfy the needs of all its customers.
• When a new process enters the system, it must declare the maximum number of
instances of each resource type that it may need.
• When a user requests a set of resources, the system must determine whether the
allocation of these resources will leave the system in a safe state.
• If it will, the resources are allocated; otherwise, the process must wait until some
other process releases enough resources.
• When a process gets all its resources, it must return them in a finite amount of time.
Data Structures for the Banker’s Algorithm
Let n = number of processes, and m = number of resources types.
• Available: Vector of length m. If available [j] = k, there are k instances of resource
type Rj available
• Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances
of resource type Rj
• Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k
instances of Rj
• Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to
complete its task
• Need [i,j] = Max[i,j] – Allocation [i,j]
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n-1
2. Find an i such that both:
(a) Finish[i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true
go to step 2
4. If Finish[i] == true for all i, then the system is in a safe state
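The four steps above can be sketched in Python. The snapshot matrices used in the demo are assumed values chosen to be consistent with the worked example later in this section (Available = (3,3,2), a safe sequence beginning with P1):

```python
def is_safe(available, allocation, need):
    """Safety algorithm: returns (safe?, order in which processes can finish)."""
    n, m = len(allocation), len(available)
    work = list(available)                  # Step 1: Work = Available
    finish = [False] * n                    #         Finish[i] = false
    sequence = []
    progress = True
    while progress:                         # Steps 2-3: repeat until no i qualifies
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # Pi finishes and releases resources
                finish[i] = True
                sequence.append(i)
                progress = True
    return all(finish), sequence            # Step 4

# Assumed snapshot: 5 processes, 3 resource types A, B, C
alloc = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
max_m = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need  = [[max_m[i][j] - alloc[i][j] for j in range(3)] for i in range(5)]
safe, seq = is_safe([3, 3, 2], alloc, need)
print(safe, seq)    # True [1, 3, 4, 0, 2] -> safe sequence <P1, P3, P4, P0, P2>
```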
Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi . If Requesti [j] = k then process Pi wants k
instances of resource type Rj
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the
process has exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources
are not available
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
• If safe ⇒ the resources are allocated to Pi
• If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored
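The three steps plus the safety check can be sketched as follows. The sketch is self-contained, and the demo values are assumptions chosen to be consistent with the worked example in this section:

```python
def request_resources(i, request, available, allocation, need):
    """Resource-request algorithm for Pi: grant if the result is safe,
    otherwise restore the old state and make Pi wait (returns False)."""
    m = len(available)
    if any(request[j] > need[i][j] for j in range(m)):        # Step 1
        raise ValueError("process exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(m)):      # Step 2
        return False                                          # Pi must wait
    for j in range(m):                                        # Step 3: pretend
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if _safe(available, allocation, need):
        return True                                           # safe: keep allocation
    for j in range(m):                                        # unsafe: roll back
        available[j] += request[j]
        allocation[i][j] -= request[j]
        need[i][j] += request[j]
    return False

def _safe(available, allocation, need):
    """Inline copy of the safety algorithm."""
    n, m = len(allocation), len(available)
    work, finish = list(available), [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progress = True
    return all(finish)

# Assumed snapshot (Available, Allocation, Need) for 5 processes, 3 types
avail = [3, 3, 2]
alloc = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need  = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
print(request_resources(1, [1,0,2], avail, alloc, need))   # True: granted
print(request_resources(4, [3,3,0], avail, alloc, need))   # False: P4 must wait
```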
Example of Banker’s Algorithm
• 5 processes P0 through P4 ;
3 resource types:
A (10 instances), B (5 instances), and C (7 instances)
• The content of the matrix Need is defined to be Max – Allocation
• Snapshot at time T0 (resource types in order A, B, C):

          Allocation   Max      Available
    P0    0 1 0        7 5 3    3 3 2
    P1    2 0 0        3 2 2
    P2    3 0 2        9 0 2
    P3    2 1 1        2 2 2
    P4    0 0 2        4 3 3
• The system is in a safe state since the sequence < P1, P3, P4, P2, P0> satisfies
safety criteria
Example: P1 Request (1,0,2)
Suppose now that process P1 requests one additional instance of resource type A and
two instances of resource type C.
Check that Request ≤ Available (that is, (1,0,2) ≤ (3,3,2)) ⇒ true
• Executing safety algorithm shows that sequence < P1 , P3 , P4 , P0 , P2> satisfies
safety requirement
• Can request for (3,3,0) by P4 be granted? No
• Can request for (0,2,0) by P0 be granted? Yes, resulting state is unsafe
7.6 Deadlock Detection
• If a system does not employ either a deadlock-prevention or a deadlock avoidance
algorithm, then a deadlock situation may occur.
• Detection Algorithm: An algorithm that examines the state of the system to
determine whether a deadlock has occurred
• Recovery Algorithm: An algorithm to recover from the deadlock.
Single Instance of Each Resource Type
• Maintain wait-for graph
o Nodes are processes
o Pi → Pj if Pi is waiting for Pj
• Periodically invoke an algorithm that searches for a cycle in the graph. If there is a
cycle, there exists a deadlock
• An algorithm to detect a cycle in a graph requires on the order of n² operations, where
n is the number of vertices in the graph.
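The cycle check on a wait-for graph can be sketched with a depth-first search; the process names and edges below are illustrative:

```python
def has_cycle(wait_for):
    """DFS cycle detection on a wait-for graph given as adjacency lists.
    O(n^2) in the worst case for a dense graph of n processes."""
    WHITE, GRAY, BLACK = 0, 1, 2           # unvisited / on DFS stack / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:    # back edge -> cycle -> deadlock
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in list(color))

# P1 waits for P2, P2 waits for P3, P3 waits for P1: a deadlock cycle
print(has_cycle({'P1': ['P2'], 'P2': ['P3'], 'P3': ['P1']}))  # True
print(has_cycle({'P1': ['P2'], 'P2': ['P3'], 'P3': []}))      # False
```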
Resource-Allocation Graph and Wait-for Graph
Detection Algorithm for Several Instances
• Five processes P0 through P4 ; three resource types A (7 instances), B (2 instances),
and C (6 instances).
• Snapshot at time T0 (resource types in order A, B, C):

          Allocation   Request   Available
    P0    0 1 0        0 0 0     0 0 0
    P1    2 0 0        2 0 2
    P2    3 0 3        0 0 0
    P3    2 1 1        1 0 0
    P4    0 0 2        0 0 2
• Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i
• P2 requests an additional instance of type C
State of system?
• We can reclaim the resources held by process P0, but there are insufficient resources
to fulfill the other processes' requests
• A deadlock exists, consisting of processes P1, P2, P3, and P4
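The multi-instance detection algorithm resembles the safety algorithm but uses the Request matrix in place of Need. The matrices below are assumed values consistent with this example's resource totals (A = 7, B = 2, C = 6) and its conclusion:

```python
def detect_deadlock(available, allocation, request):
    """Deadlock-detection algorithm for multiple resource instances.
    Returns the list of deadlocked process indices (empty if none)."""
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding no resources cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # optimistically reclaim Pi's resources
                finish[i] = True
                progress = True
    return [i for i in range(n) if not finish[i]]

# Assumed snapshot: resource types A (7), B (2), C (6), all currently allocated
alloc = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
req   = [[0,0,0], [2,0,2], [0,0,0], [1,0,0], [0,0,2]]
print(detect_deadlock([0,0,0], alloc, req))   # []: no deadlock
req[2] = [0, 0, 1]    # P2 requests one more instance of type C
print(detect_deadlock([0,0,0], alloc, req))   # [1, 2, 3, 4]: deadlock
```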
7.7 Recovery from Deadlock
• Process Termination
o Abort all deadlocked processes
▪ The deadlocked processes may have computed for a long time, and
the results of these partial computations must be discarded and
probably will have to be recomputed later
o Abort one process at a time until the deadlock cycle is eliminated
▪ Aborting a process may not be easy. If a process was in the middle of
updating a file, terminating it will leave that file in an incorrect state
o In which order should we choose to abort?
1. Priority of the process
2. How long the process has computed, and how much longer it will compute
before completing its task
3. Resources the process has used or resources it needs to complete
4. How many processes will need to be terminated
• Resource Preemption
o To eliminate deadlocks using resource preemption, we successively preempt
some resources from processes and give these resources to other processes
until the deadlock cycle is broken.