Operating System
SNS COLLEGE OF TECHNOLOGY
(an autonomous institution)
COIMBATORE - 35
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING (UG & PG)
Academic Year (2016-2017)
Second Year Computer Science and Engineering-Fourth Semester
Subject Code & Name : CS203 & OPERATING SYSTEMS
Prepared by : Ms.R.Roopa chandrika, AP/CSE, Ms.V.Praveena, AP/CSE,
Mr.M.Karthick,AP/CSE
PART – A ( 2 Marks )
1. What is an operating system? (AUC MAY 2012)
• An operating system is a set of programs that controls, co-ordinates and supervises the
activities of the computer hardware and software.
• An OS is a program that acts as an intermediary between the user of a computer and the
computer hardware.
2. Is the OS a resource manager? If so, justify your answer. (AUC NOV 2006)
• The operating system is known as a resource manager because it controls all the activities of
the computer system and acts as an interface between the user and the hardware.
• The OS provides an orderly and controlled allocation of the processors, memories
and I/O devices.
3. List the functions of an operating system. (AUC NOV 2010, MAY 2012)
(i) Memory Management.
(ii) Processor management.
(iii) Interrupt Handling.
(iv) Accounting.
(v) Automatic job sequencing.
(vi) Management and control of I/O devices
4. Differentiate between tightly coupled systems and loosely coupled systems. (AUC NOV 2006)
• Tightly coupled (multiprocessor) systems: the processors share a common memory and clock,
and communication usually takes place through the shared memory.
• Loosely coupled (distributed) systems: each processor has its own local memory and clock,
and the processors communicate through communication lines such as a network.
SNSCT – Department of Computer Science & Engineering (UG&PG)
Page 1
5. What are the differences between Batch OS and Multiprogramming? (AUC NOV 2008)
Batch OS
• Batch systems allowed automatic job sequencing by a resident operating system and
greatly improved the overall utilization of the computer.
• The computer no longer had to wait for a human operator.
Multiprogramming
• Multiprogramming was extended to allow for multiple terminals to be connected to the
computer, with each in-use terminal being associated with one or more jobs on the
computer.
• The operating system is responsible for switching between the jobs, now often called
processes. If the context switches occurred quickly enough, the user had the impression
that he or she had direct access to the computer.
6. Mention the objectives and functions of an operating system. (AUC APR 2010, NOV 2006)
Objectives: convenience (make the computer easier to use), efficiency (use the system
resources effectively) and the ability to evolve (permit new functions to be introduced
without interfering with existing services).
Functions: memory management, processor management, device management, file
management, protection and error detection.
7. What does the CPU do when there is no user program to run? (AUC NOV/DEC 2011)
The CPU will always do processing. Even though there are no application programs running,
the operating system is still running and the CPU will still have to process many system processes
during the operation of the computer.
8. What is the principal advantage of multiprogramming? (AUC NOV/DEC 2011)
The principal advantage is increased CPU utilization: several jobs are kept in memory, and
whenever the running job must wait (for example, for I/O), the CPU is switched to another job,
so the CPU is rarely idle. Multiprogramming therefore:
(i) Improves system performance.
(ii) Allows time sharing.
(iii) Supports multiple simultaneous interactive users.
9. What are the differences between Multitasking and Multiprogramming? (AUC APR 2010)
• In a multiprogramming system there are one or more programs (processes) resident in the
computer's main memory ready to execute. Only one program at a time gets the CPU for
execution while the others wait their turn.
• Multitasking is the term used in modern operating systems when multiple tasks share a
common processing resource (CPU and memory). At any point in time the CPU is
executing one task only, while the other tasks wait their turn.
• A task in a multitasking operating system need not be a whole application program; a task
can also refer to a thread of execution when one process is divided into sub-tasks.
10. What do you mean by multiprogramming? (AUC NOV 2010)
The ability to keep several jobs in memory at one time, where the CPU is
switched back and forth among them, is called multiprogramming. Multiprogramming
helps to increase CPU utilization, and to decrease the total time needed to execute the jobs.
11. List the challenges in designing a distributed operating system.
• Transparency
• Fault tolerance
• Scalability
12. Define process control block. (AUC NOV/DEC 2008)
Each process is represented by a process control block (PCB). The PCB is the data structure
used by the operating system to group all the information it needs about a particular process.
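As an illustration only (the field names below are hypothetical, not taken from any particular OS), a PCB can be sketched as a record holding the process's identity, state and saved context:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Sketch of a process control block (illustrative fields only)."""
    pid: int
    state: str = "new"                 # new / ready / running / waiting / terminated
    program_counter: int = 0           # saved CPU program counter
    registers: dict = field(default_factory=dict)   # saved register contents
    open_files: list = field(default_factory=list)  # file-descriptor table
    priority: int = 0                  # scheduling information

pcb = PCB(pid=42)
pcb.state = "ready"                    # long-term scheduler admits the process
print(pcb.pid, pcb.state)
```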
13. Specify the critical factors to be strictly followed in real time systems. (AUC APR 2007)
A solution to the critical section problem must satisfy the following three requirements:
(1) Mutual exclusion
(2) Progress
(3) Bounded waiting
14. What do you mean by graceful degradation in multiprocessor systems? (AUC JUNE 2009)
The ability to continue providing service proportional to the level of surviving hardware is called
graceful degradation. Systems designed for graceful degradation are also called fault tolerant.
15. What is the kernel? (AUC APR 2007)
"The one program running at all times on the computer" is the kernel. Everything else is either a
system program (shipped with the operating system) or an application program.
16. What are the differences between user level and kernel level threads? (AUC MAY 2010, 2012)

       User level threads                         Kernel level threads
1.     Faster to create and manage.               Slower to create and manage.
2.     Implemented by a thread library            The operating system supports
       at the user level.                         kernel threads directly.
3.     Can run on any operating system.           Specific to the operating system.
4.     A multithreaded application cannot         Kernel routines themselves can
       take advantage of multiprocessing.         be multithreaded.
17. What do you mean by short term scheduler? (AUC NOV 2010)
The CPU scheduler selects from among the processes that are ready to execute and allocates the
CPU to one of them. The short term scheduler, also known as the dispatcher, executes most
frequently and makes the fine-grained decision of which process to execute next. The short term
scheduler is faster than the long term scheduler.
18. What is a system call? Explain the five categories. (AUC JUNE 2009, APR/MAY 2011)
System calls provide the interface between a process and the operating system. A system call
instruction generates an interrupt that causes the operating system to gain control of the
processor.
Types of system call: a system call is made using the system call machine language instruction.
System calls can be grouped into five major categories.
Process control:
end, abort; load, execute; create process, terminate process; get process attributes, set
process attributes; wait for time; wait event, signal event; allocate and free memory
File management:
create file, delete file; open, close; read, write, reposition; get file attributes, set file
attributes
Device management:
request device, release device; read, write, reposition; get device attributes, set
device attributes; logically attach or detach devices
Information maintenance:
get time or date, set time or date; get system data, set system data; get process,
file, or device attributes; set process, file, or device attributes
Communications:
create, delete communication connection; send, receive messages; transfer status
information; attach or detach remote devices
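As a rough illustration, Python's os module exposes thin wrappers over several of these POSIX system calls (open, write, lseek, read, close for file management; getpid for information maintenance). The file path below is just an example:

```python
import os
import tempfile

def demo_file_syscalls(path):
    """Exercise the file-management system calls via their os-module wrappers."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_TRUNC)  # create / open
    os.write(fd, b"hello")                                   # write
    os.lseek(fd, 0, os.SEEK_SET)                             # reposition
    data = os.read(fd, 5)                                    # read
    os.close(fd)                                             # close
    return data

pid = os.getpid()   # information maintenance: get process attributes
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")
print(pid > 0, demo_file_syscalls(path))
```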
19. What are the uses of job queues, ready queues and device queues? (AUC APR 2006)
• As processes enter the system, they are put into a job queue. This queue consists of all jobs in
the system.
• The processes that are residing in main memory and are ready and waiting to execute are kept
on a list called the ready queue.
• The list of processes waiting for a particular I/O device is kept in that device's queue.
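The movement of processes between these queues can be sketched with plain FIFO queues (the integer PIDs are hypothetical; a real OS links PCB structures into these queues):

```python
from collections import deque

job_queue = deque([1, 2, 3, 4])   # all jobs that have entered the system
ready_queue = deque()             # jobs in main memory, ready to execute
disk_queue = deque()              # jobs waiting for the disk device

# The long-term scheduler admits two jobs into main memory.
for _ in range(2):
    ready_queue.append(job_queue.popleft())

# The dispatched process issues a disk request and joins the device queue.
running = ready_queue.popleft()
disk_queue.append(running)

print(list(job_queue), list(ready_queue), list(disk_queue))
```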
20. What is meant by context switch? (AUC MAY 2006, NOV 2008, MAY 2010)
• When the scheduler switches the CPU from executing one process to executing another, the
context switcher saves the contents of all processor registers for the process being removed
from the CPU in its process descriptor.
• Context switching can significantly affect performance, since modern computers have many
general and status registers to be saved.
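A minimal sketch of what the context switcher does, assuming a toy two-register CPU (the register names are invented for illustration):

```python
cpu = {"pc": 0, "acc": 0}          # toy CPU register set

def context_switch(cpu, old_pcb, new_pcb):
    """Save the outgoing process's registers and restore the incoming one's."""
    old_pcb["saved_registers"] = dict(cpu)   # save state into the old descriptor
    cpu.clear()
    cpu.update(new_pcb["saved_registers"])   # reload state of the new process

p1 = {"saved_registers": {"pc": 100, "acc": 7}}   # currently running
p2 = {"saved_registers": {"pc": 200, "acc": 9}}   # about to be dispatched

cpu.update(p1["saved_registers"])   # p1 is on the CPU
cpu["pc"] = 150                     # p1 has made some progress
context_switch(cpu, p1, p2)         # dispatch p2
print(cpu["pc"], p1["saved_registers"]["pc"])
```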
21. What is the use of inter process communication? (AUC NOV 2008)
Inter-process communication (IPC) is a set of methods for the exchange of data among
multiple threads in one or more processes. The processes may be running on one or more computers
connected by a network. IPC methods are divided into methods for message passing,
synchronization, shared memory, and remote procedure calls (RPC). The method of IPC used may
vary based on the bandwidth and latency of communication between the threads, and the type of
data being communicated.
22. How can a user program disturb the normal operation of the system? (AUC MAY 2008)
• Issuing illegal I/O operations.
• Accessing memory locations within the OS itself.
• Refusing to relinquish the CPU.
23. What are the three major activities of an operating system in regard to Secondary-storage
management? (AUC APR 2006)
Free space management.
Storage allocation.
Disk scheduling.
24. What are the benefits of multithreaded programming? (AUC NOV 2006)
1. Responsiveness. Multithreading an interactive application may allow a program to continue
running even if part of it is blocked or is performing a lengthy operation, thereby increasing
responsiveness to the user.
2. Resource sharing. Threads share the code and data of their process, which allows an
application to have several different threads of activity within the same address space.
3. Economy of overheads. Allocating memory and resources for process creation is costly;
creating and context-switching threads is much cheaper.
4. Utilization of multiprocessor architectures. The benefits of multithreading can be greatly
increased in a multiprocessor architecture, where threads may be running in parallel on
different processors (real parallelism).
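The resource-sharing benefit can be sketched with Python's threading module: the threads below all update one object in the same address space, which is exactly why they also need synchronization (the counts here are arbitrary example values):

```python
import threading

counter_lock = threading.Lock()
shared = {"count": 0}               # all threads share the same address space

def worker(n):
    for _ in range(n):
        with counter_lock:          # sharing data requires synchronization
            shared["count"] += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["count"])
```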
25. What are the uses of the fork and exec system calls? (AUC MAY 2009)
• The fork system call creates a new process that is essentially a clone of the existing one. The
child is a complete copy of the parent.
• exec identifies the required memory allocation for the new program and alters the memory
allocation of the process to accommodate it.
• The exec system call reinitializes a process from a designated program; the program changes
while the process remains. Without fork, exec is of limited use; without exec, fork is of
limited use.
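A sketch of the fork/exec pattern using Python's POSIX wrappers (this assumes a Unix-like system, since os.fork is unavailable on Windows; the child's exit status 7 is an arbitrary example value):

```python
import os
import sys

def fork_and_exec():
    """Fork a child, exec a fresh program in it, and return its exit code."""
    pid = os.fork()                  # child starts as a clone of the parent
    if pid == 0:
        # exec replaces the child's memory image with a new program;
        # the process (and its PID) remains, the program changes.
        os.execv(sys.executable, [sys.executable, "-c", "raise SystemExit(7)"])
    _, status = os.waitpid(pid, 0)   # parent waits for the child to finish
    return os.WEXITSTATUS(status)

print(fork_and_exec())
```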
26. Define thread cancellation & target thread. (AUC MAY 2008)
The thread cancellation is the task of terminating a thread before it has completed. A thread that is to
be cancelled is often referred to as the target thread. For example, if multiple threads are
concurrently searching through a database and one thread returns the result, the remaining threads
might be cancelled.
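Python threads cannot be killed asynchronously, so the database-search example above is usually modelled with deferred cancellation: the target threads poll a shared flag at safe points. A sketch (the thread names and search values are invented; only t0's value is findable, so t1 and t2 exit via cancellation):

```python
import threading

cancel_requested = threading.Event()   # cancellation request for the targets
results = []

def searcher(name, target):
    for candidate in range(1_000_000):
        if cancel_requested.is_set():  # deferred cancellation point
            return
        if candidate == target:
            results.append((name, candidate))
            cancel_requested.set()     # cancel the remaining target threads
            return

threads = [threading.Thread(target=searcher, args=("t0", 500)),
           threading.Thread(target=searcher, args=("t1", -1)),
           threading.Thread(target=searcher, args=("t2", -1))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```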
27. What is a process? Mention the various states of the process. (AUC APR 2007)
A process is a sequential program in execution. A process defines the fundamental unit of
computation for the computer. The process state is defined as the current activity of the process.
There are five states, and each process is in exactly one of them:
New
Ready
Running
Waiting
Terminated (exit)
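The legal moves between these five states can be sketched as a transition table (a simplification of the usual state diagram):

```python
# Five-state process model: which states each state may move to.
TRANSITIONS = {
    "new": {"ready"},                               # admitted
    "ready": {"running"},                           # dispatched
    "running": {"ready", "waiting", "terminated"},  # preempted / I/O wait / exit
    "waiting": {"ready"},                           # I/O completed
    "terminated": set(),
}

def advance(state, nxt):
    if nxt not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {nxt}")
    return nxt

state = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    state = advance(state, nxt)
print(state)
```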
28. What is a co-operating process? (AUC APR 2007, MAY 2009)
• Processes within a system may be independent or cooperating. Independent process
cannot affect or be affected by the execution of another process.
• Reasons for cooperating processes:
– Information sharing
– Computation speed-up
– Modularity
– Convenience
29. Discuss the difference between symmetric and asymmetric multiprocessing. (AUC MAY 2007)
The difference between symmetric and asymmetric multiprocessing: all processors of
symmetric multiprocessing are peers; the relationship between processors of asymmetric
multiprocessing is a master-slave relationship.
Each CPU in symmetric multiprocessing runs the same copy of the OS, while in asymmetric
multiprocessing the processors typically split responsibilities, so each may have specialized
(different) software and roles.
30. State the assumptions behind the bounded buffer problem. (AUC APR 2010)
The bounded-buffer producer-consumer problem assumes a fixed buffer size. In this case
the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
The buffer may either be provided by the operating system through the use of an
interprocess-communication (IPC) facility, or be explicitly coded by the application
programmer with the use of shared memory.
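A sketch of the bounded buffer using a thread-safe queue, where put blocks when the buffer is full and get blocks when it is empty (the buffer size of 3 and the None end-of-stream sentinel are arbitrary choices for this example):

```python
import queue
import threading

buf = queue.Queue(maxsize=3)   # bounded buffer: producer blocks when full

def producer(items):
    for item in items:
        buf.put(item)          # blocks while the buffer is full
    buf.put(None)              # sentinel: no more items

consumed = []

def consumer():
    while True:
        item = buf.get()       # blocks while the buffer is empty
        if item is None:
            break
        consumed.append(item)

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)
```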
PART-B (16 Marks)
1. Discuss about mainframe systems. (8) (AUC JUNE 2009)
Mainframe computer systems were the first computers used to tackle many commercial and
scientific applications. Mainframe systems include the following systems.
Batch Systems
The operating system in these early computers was fairly simple. Its major task was to transfer
control automatically from one job to the next. To speed up processing, operators batched together
jobs with similar needs and ran them through the computer as a group.
Memory layout for a simple batch system
The operator would sort programs into batches with similar requirements and, as the computer became
available,would run each batch. The output from each job would be sent back to the appropriate
programmer.
Multiprogrammed Systems
The operating system keeps several jobs in memory simultaneously. This set of jobs is a
subset of the jobs kept in the job pool, since the number of jobs that can be kept simultaneously in
memory is usually much smaller than the number of jobs that can be in the job pool. The operating
system picks and begins to execute one of the jobs in memory.
Memory layout for a multi programmed system
Multiprogramming is the first instance where the operating system must make decisions for the users.
Multiprogrammed operating systems are therefore fairly sophisticated. Multiprogramming features are
the following:
I/O routine supplied by the system.
Memory management – the system must allocate the memory to several jobs.
CPU scheduling – the system must choose among several jobs ready to run.
Allocation of devices.
Time-Sharing Systems
When two or more programs are in memory at the same time, sharing the processor is referred
to as multiprogramming.
Multiprogramming assumes a single processor that is being shared. It increases CPU
utilization by organizing jobs so that the CPU always has one to execute. In time sharing, switches
occur so frequently that the users may interact with each program while it is running.
Time-sharing systems were developed to provide interactive use of a computer system at a
reasonable cost. A time-shared operating system uses CPU scheduling and multiprogramming
to provide each user with a small portion of a time-shared computer. Each user has at least one
separate program in memory.
A program that is loaded into memory and is executing is commonly referred to as a process.
When a process executes, it typically executes for only a short time before it either finishes or
needs to perform I/O. I/O may be interactive; that is, output is to a display for the user and
input is from a user keyboard. Since interactive I/O typically runs at people speeds, it may take
a long time to complete.
A time-shared operating system allows the many users to share the computer simultaneously.
Since each action or command in a time-shared system tends to be short, only a little CPU
time is needed for each user. As the system switches rapidly from one user to the next, each
user is given the impression that she has her own computer, whereas actually one computer is
being shared among many users.
Time-sharing operating systems are even more complex than are multi-programmed operating
systems. As in multiprogramming, several jobs must be kept simultaneously in memory, which
requires some form of memory management and protection.
2. How the clustered systems differ from multiprocessor systems? What is required for two
machines belonging to a cluster to co-operate to provide a highly available service?
Clustered systems
Clustered systems are typically constructed by combining multiple computers into a single system
to perform a computational task distributed across the cluster. Multiprocessor systems, on the other
hand, could be a single physical entity comprising multiple CPUs. A clustered system is less tightly
coupled than a multiprocessor system. Clustered systems communicate using messages, while
processors in a multiprocessor system could communicate using shared memory.
Multiprocessor systems
o 2-64 processors today
o Shared-everything architecture
o All processors share all the global resources available
o A single copy of the OS runs on these systems
o Suffers from scalability problems
In order for two machines to provide a highly available service, the state on the two machines should
be replicated and should be consistently updated. When one of the machines fails, the other can
then take over the functionality of the failed machine.
3. Explain the components of an operating system. (8) (AUC JUNE 2009)
Modern operating systems share the goal of supporting the system components. The system
components are :
Process Management
Process is a program in execution --- numerous processes to choose from in a
multiprogrammed system,
Process creation/deletion (bookkeeping)
Process suspension/resumption (scheduling, system vs. user)
Process synchronization
Process communication
Deadlock handling
Memory Management
1. Maintain bookkeeping information
2. Map processes to memory locations
3. Allocate/deallocate memory space as requested/required
I/O Device Management
1. Disk management functions such as free space management, storage allocation, fragmentation
removal, head scheduling
2. Consistent, convenient software to I/O device interface through buffering/caching, custom
drivers for each device.
File System
Built on top of disk management
1. File creation/deletion.
2. Support for hierarchical file systems
3. Update/retrieval operations: read, write, append, seek
4. Mapping of files to secondary storage
Protection
Controlling access to the system
1. Resources --- CPU cycles, memory, files, devices
2. Users --- authentication, communication
3. Mechanisms, not policies
Network Management
Often built on top of file system
1. TCP/IP, IPX, IPng
2. Connection/Routing strategies
3. ``Circuit'' management --- circuit, message, packet switching
4. Communication mechanism
5. Data/Process migration
Network Services (Distributed Computing)
Built on top of networking
1. Email, messaging (GroupWise)
2. FTP
3. gopher, www
4. Distributed file systems --- NFS, AFS, LAN Manager
5. Name service --- DNS, YP, NIS
6. Replication --- gossip, ISIS
7. Security --- kerberos
User Interface
1. Character-Oriented shell --- sh, csh, command.com ( User replaceable)
2. GUI --- X, Windows 95
4. Describe the differences among short-term, medium-term and long-term scheduling. (8)
(AUC NOV 2008, MAY 2006, NOV 2006)

S.No.  Long Term Scheduler               Short Term Scheduler              Medium Term Scheduler
1      It is the job scheduler.          It is the CPU scheduler.          It performs swapping.
2      Speed is less than the short      Speed is very fast.               Speed is in between the
       term scheduler.                                                     other two.
3      It controls the degree of         Less control over the degree      It reduces the degree of
       multiprogramming.                 of multiprogramming.              multiprogramming.
4      Absent or minimal in a time       Minimal in a time sharing         Time sharing systems use a
       sharing system.                   system.                           medium term scheduler.
5      It selects processes from the     It selects from among the         A process can be reintroduced
       pool and loads them into          processes that are ready          into memory and its execution
       memory for execution.             to execute.                       can be continued.
6      Process state change:             Process state change:             -
       New to Ready.                     Ready to Running.
7      Selects a good mix of I/O         Selects a new process for         -
       bound and CPU bound               the CPU quite frequently.
       processes.
5. Explain how hardware protection can be achieved and discuss in detail the dual mode of
operation. (8) (AUC NOV/DEC 2010)
In a single-user operating system, the programmer has complete control over the
system and operates it from the console. As operating systems developed with
additional features, control of the system transferred from the programmer to the operating system.
Early operating systems were called resident monitors, and starting with the resident monitor,
the operating system began to perform many functions, such as input-output operations.
Sharing of resources among different programmers became possible without increasing cost;
this improves system utilization, but problems increase. Without sharing, an error could cause
problems for only the one program that was running on the machine. With sharing, other
programs can also be affected by a single erroneous program; for example, a batch operating
system faces the problem of an infinite loop.
Such a loop could prevent the correct operation of many jobs. In a multiprogramming system, one
erroneous program can affect another program or its data. For proper operation and
error-free results, protection against such errors is required, and this must be taken into
consideration while designing the operating system. Many programming errors are detected by
the computer hardware.
(i) Memory Protection: a mechanism to prevent one process corrupting the memory of any other
process, including the operating system. For proper operation and correct results, the interrupt
vector table must be protected from modification by a user program. The system must provide
memory protection at least for the interrupt vector and the interrupt service routines of the
operating system. Memory protection usually relies on a combination of hardware and software
to allocate memory to processes and handle exceptions, and its effectiveness varies from one
operating system to another. Memory protection can be implemented in several ways; Fig. 1
shows memory protection using a base register and a limit register. Every program requires
memory addresses for storing and executing, and the system must be able to determine the range
of legal addresses that the program may access. Using a base register and a limit register, it is
possible to protect memory.
The base register and limit register together define a logical address space. Base register: holds
the smallest legal physical memory address. Limit register: contains the size of the range.
For example: Suppose base register contains 300000 and limit register is 110000.
The program can legally access all addresses from the base register value up to (but not including)
base + limit:
Base register = 300000, Limit register = 110000
300000 + 110000 = 410000, so the program can access the address space from 300000 through
409999 inclusive. The CPU hardware provides this type of protection: every address generated in
user mode is compared with these registers. If a program executing in user mode tries to access
monitor-mode memory, the hardware traps to the operating system, and the attempt is treated as
a fatal error. This scheme prevents a user program from modifying the code and data of the
operating system or of other users. Fig. 2 shows the hardware address protection with
base and limit registers.
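The hardware check described above amounts to one comparison per memory access; a sketch, using the base and limit values from the example:

```python
def is_legal_access(address, base, limit):
    """Mimic the hardware check: legal iff base <= address < base + limit."""
    return base <= address < base + limit

BASE, LIMIT = 300000, 110000
print(is_legal_access(300000, BASE, LIMIT),   # first legal address
      is_legal_access(409999, BASE, LIMIT),   # last legal address
      is_legal_access(410000, BASE, LIMIT),   # one past the end: trap
      is_legal_access(299999, BASE, LIMIT))   # below the base: trap
```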
The base and limit registers can be loaded only by the operating system, which uses a special
privileged instruction. Privileged instructions are executed only in monitor mode, and the
operating system executes in monitor mode, so only the operating system can load the base
and limit registers. The operating system thus prevents user programs from changing the
contents of these registers. While executing in monitor mode, the operating system is given
unrestricted access to both monitor and user memory. In a multiprogramming environment,
protection of main memory is essential. Paging or segmentation, or both in combination, provide
an effective means of managing main memory. Modern systems such as Windows NT and Unix
offer memory protection; in Unix it is almost impossible to corrupt another process's memory.
I/O Protection: A user program may disrupt the normal operation of the system by issuing illegal
I/O instructions. Various mechanisms ensure that such disruptions cannot take place in
the system. To prevent users from performing illegal I/O, we define all I/O instructions to be
privileged instructions: users cannot issue I/O instructions directly, but only through the operating
system. Consider a computer executing in user mode: it switches to monitor mode whenever an
interrupt or trap occurs, jumping to the address determined from the interrupt vector. If a user
program, as part of its execution, stored a new address in the interrupt vector, this new address
could overwrite the previous address. When a trap or interrupt then occurred, the hardware would
switch to monitor mode and would transfer control through the modified interrupt vector to the
user program, which could thus gain control of the computer in monitor mode.
CPU Protection: CPU protection is also required; the operating system must maintain control
over the system. Otherwise a user program could enter an infinite loop and never return control
to the operating system. Using a timer, we can protect against this situation. A timer can be set
to interrupt the computer after a specified period. The period may be fixed or variable; a fixed-rate
clock and a counter are used to implement a variable timer. The operating system sets the counter,
and on every clock tick the counter is decremented. When the counter reaches 0, an interrupt
occurs. Before transferring control to a user program, the operating system ensures the timer is
set to interrupt. If the timer interrupts, control transfers automatically to the operating system,
which may treat the interrupt as a fatal error. A common use of the timer is to implement time
sharing: the timer is set to interrupt every N milliseconds, and each user is allowed to execute for
N milliseconds; when the time expires, the next user gets control of the CPU.
(ii) Dual Mode Operation: For proper operation and correct output, operating system must be
protected. The users program and data must be protected from any malfunctioning program.
Shared resource also needs some kind of protection. In dual mode operation, two separate modes
are used for working of operating system. These modes are user mode and monitor mode. The
monitor mode is also called system mode, supervisor mode or privileged mode. To indicate the mode
of the system, a mode bit is used in the computer hardware. The mode bit is 0 for monitor and 1 for
user. With the mode bit, we are able to distinguish between a task that is executed in user mode or
monitor mode. This feature helps to the operating system in many ways. At the booting time, the
hardware starts in the monitor mode, then operating system is loaded. The hardware switches
from user mode to monitor mode when interrupts occur. When the operating system gains control
of the system, it is in monitor mode.
The dual mode operation provides the protection to the operating system from unauthorized
users. The privileged instructions are executed only in the monitor mode. The computer hardware
is not allowed for executing the privilege instructions in other mode, i.e., user mode. If anybody
tries to execute the instructions in user mode, it is considered as illegal instruction and also traps it
to the operating system. Software may trigger an interrupt by executing a special operation called
a system call. A system call is a request invoked by a user program asking the operating system
to execute a privileged operation on its behalf. As said earlier, this request is called a
system call or monitor call. When a system call is executed, it is treated by the hardware as a
software interrupt.
6. Explain in detail any two operating systems. (8) (AUC NOV/DEC 2010)
Operating system is a program that controls the execution of application programs and acts as an
interface between the user of a computer and the computer hardware.
Batch System
Early computer systems did only one thing at a time. The computer system
may be dedicated to a single program until its completion, or resources may be dynamically
reassigned among a collection of active programs in different stages of execution.
Batch operating system is one where programs and data are collected together in a batch
before processing starts. A job is a predefined sequence of commands, programs and data that are
combined into a single unit.
The figure shows the memory layout for a simple batch system. Memory management in a batch
system is very simple: memory is usually divided into two areas, the operating system and the
user program area.
Scheduling is also simple in a batch system. Jobs are processed in the order of submission, i.e., in
first come first served fashion.
When a job completes execution, its memory is released and the output for the job is copied into
an output spool for later printing.
Batch systems often provide simple forms of file management. Access to files is serial. Batch
systems do not require any time-critical device management.
Batch systems are inconvenient for users because users cannot interact with their jobs to fix
problems, and turnaround times may be long. An example of this type of system is generating
monthly bank statements.
Advantages of Batch Systems
Moved much of the work of the operator to the computer.
Increased performance, since it was possible for a job to start as soon as the previous job
finished.
Disadvantages of Batch System
Turnaround time can be large from the user's standpoint.
Difficult to debug programs.
A job could enter an infinite loop.
A job could corrupt the monitor, thus affecting pending jobs: due to the lack of a protection
scheme, one batch job can affect pending jobs.
Time Sharing Systems
Multi-programmed batched systems provide an environment where the various system
resources (for example, CPU, memory, peripheral devices) are utilized effectively.
Time sharing, or multitasking, is a logical extension of multiprogramming. Multiple jobs are
executed by the CPU switching between them, but the
switches occur so frequently that the users may interact with each program while it is running.
An interactive, or hands-on, computer system provides on-line communication between the
user and the system. The user gives instructions to the operating system or to a program
directly, and receives an immediate response. Usually, a keyboard is used to provide input, and
a display screen (such as a cathode-ray tube (CRT) or monitor) is used to provide output.
If users are to be able to access both data and code conveniently, an on-line file system must
be available. A file is a collection of related information defined by its creator. Batch systems
are appropriate for executing large jobs that need little interaction.
Time-sharing systems were developed to provide interactive use of a computer system at a
reasonable cost. A time-shared operating system uses CPU scheduling and multiprogramming
to provide each user with a small portion of a time-shared computer. Each user has at least one
separate program in memory. A program that is loaded into memory and is executing is
commonly referred to as a process. When a process executes, it typically executes for only a
short time before it either finishes or needs to perform I/O. I/O may be interactive; that is,
output is to a display for the user and input is from a user keyboard. Since interactive I/O typically runs at people speeds, it may take a long time to complete.
A time-shared operating system allows the many users to share the computer simultaneously.
Since each action or command in a time-shared system tends to be short, only a little CPU
time is needed for each user. As the system switches rapidly from one user to the next, each
user is given the impression that she has her own computer, whereas actually one computer is
being shared among many users.
Time-sharing operating systems are even more complex than are multi-programmed operating
systems. As in multiprogramming, several jobs must be kept simultaneously in memory, which
requires some form of memory management and protection
7. What is the need for system calls? How the system calls are used? Explain with example.
(AUC MAY 2009)
It provides the interface between a process and the operating system.
It is available as assembly-language instructions.
Certain systems allow system calls to be made directly from a higher-level language program; this may generate a call to a special run-time routine that makes the system call.
C, C++ and Perl have replaced assembly language for systems programming; these languages allow system calls to be made directly.
E.g.: the general approach for writing a simple program to read data from one file and copy it to another file. The program communicates with the OS using system calls:
1. Ask for the two file names.
2. Open the input file: if an error occurs, print a message on the console and terminate abnormally; otherwise the file is open.
3. Create the output file: if an error occurs because a file of that name already exists, ask the user whether to delete the existing file; then create the output file.
4. Enter a loop that reads from the input file and writes to the output file; each read and write must return status information.
5. After writing, close both files, write a completion message to the console, and terminate normally.
The run time support system for most programming languages provides a much simpler
interface.
In c++ and other programming languages the OS interface is hidden from the
programmer by the compiler.
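The copy program outlined above can be sketched in Python, whose os module exposes thin wrappers over the open/read/write/close system calls. The file names and the copy_file helper below are our own, chosen for illustration:

```python
import os
import sys
import tempfile

# Sketch of the file-copy program described above. Each os.* call below
# maps onto the corresponding system call.
def copy_file(src, dst):
    try:
        in_fd = os.open(src, os.O_RDONLY)              # open the input file
    except OSError as e:                               # e.g. file does not exist
        print(f"cannot open {src}: {e}", file=sys.stderr)
        raise SystemExit(1)                            # terminate abnormally
    # create the output file (truncating any existing file of that name)
    out_fd = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    while True:
        chunk = os.read(in_fd, 4096)                   # read returns b"" at EOF
        if not chunk:
            break
        os.write(out_fd, chunk)                        # each write returns a byte count
    os.close(in_fd)                                    # close both files
    os.close(out_fd)

# demonstrate on a throwaway file
with tempfile.TemporaryDirectory() as d:
    src, dst = os.path.join(d, "in.txt"), os.path.join(d, "out.txt")
    with open(src, "w") as f:
        f.write("hello")
    copy_file(src, dst)
    print(open(dst).read())  # hello
```

Note how the error handling mirrors the steps in the text: a failed open terminates the program abnormally, while the read loop relies on the status information (the byte count, with zero meaning end of file) returned by each call.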
System calls are grouped into five major categories:
1. Process control
   1. End, abort
   2. Load, execute
   3. Create process, terminate process
   4. Get process attributes, set process attributes
   5. Wait for time
   6. Wait for event, signal event
   7. Allocate and free memory
2. File management
   1. Create file, delete file
   2. Open, close
   3. Read, write, reposition
   4. Get file attributes, set file attributes
3. Device management
   1. Request device, release device
   2. Read, write, reposition
   3. Get device attributes, set device attributes
   4. Logically attach or detach devices
4. Information maintenance
   1. Get time or date, set time or date
   2. Get system data, set system data
   3. Get process, file, or device attributes
   4. Set process, file, or device attributes
5. Communications
   1. Create, delete communication connection
   2. Send, receive messages
   3. Transfer status information
   4. Attach or detach remote devices
8. What is meant by a process? Explain states of process with neat sketch and discuss the
process state transition with a neat diagram.(16)(AUC MAY ,NOV2010)
When process executes, it changes state. Process state is defined as the current activity of the
process. Fig. 3.1 shows the general form of the process state transition diagram. Process state
contains five states. Each process is in one of the states. The states are listed below.
1. New
2. Ready
3. Running
4. Waiting
5. Terminated (exit)
1. New : A process that just been created.
2. Ready : Ready processes are waiting to have the processor allocated to them by the operating
system so that they can run.
3. Running : The process that is currently being executed. A running process possesses all the
resources needed for its execution, including the processor.
4. Waiting : A process that cannot execute until some event occurs, such as the completion of an I/O
operation. The running process may become suspended by invoking an I/O module.
5. Terminated : A process that has been released from the pool of executable processes by the
operating system.
Whenever a process changes state, the operating system reacts by placing the process's PCB in the list that corresponds to its new state. Only one process can be running on any processor at any instant, while many processes may be in the ready and waiting states.
Suspended Processes
1. Suspended process is not immediately available for execution.
2. The process may or may not be waiting on an event.
3. To prevent its execution, a process may be suspended by the OS, the parent process, the process itself, or an agent.
4. Process may not be removed from the suspended state until the agent orders the removal.
Swapping is used to move all of a process from main memory to disk. The operating system swaps out a process by putting it in the suspended state and transferring it to disk.
Reasons for process suspension
1. Swapping : OS needs to release required main memory to bring in a process that is ready to
execute.
2. Timing : Process may be suspended while waiting for the next time interval
3. Interactive user request : A process may be suspended for debugging purposes by the user.
4. Parent : To modify the suspended process or to coordinate the activity of various descendants.
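The five-state diagram can be sketched as a table of legal transitions. The transition labels below are common textbook names, chosen here for illustration:

```python
# A sketch of the five-state process model as a table of legal moves.
TRANSITIONS = {
    ("new", "ready"): "admit",
    ("ready", "running"): "dispatch",
    ("running", "ready"): "timeout / preempt",
    ("running", "waiting"): "I/O or event wait",
    ("waiting", "ready"): "I/O or event completion",
    ("running", "terminated"): "exit",
}

def move(state, target):
    # Reject anything not drawn as an arc in the diagram,
    # e.g. a new process can never jump straight to running.
    if (state, target) not in TRANSITIONS:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = "new"
for nxt in ("ready", "running", "waiting", "ready", "running", "terminated"):
    s = move(s, nxt)
print(s)  # terminated
```

The table makes the key constraint visible: a process only leaves the waiting state for ready (never directly to running), and only the running process can exit.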
9. Process that want to communicate must have a way to refer to each other. Explain the
various methods of referring the process.(8) (AUC NOV/DEC2008)
A process is a sequential program in execution. A process defines the fundamental unit of
computation for the computer. The components of a process are:
1. Object Program
2. Data
3. Resources
4. Status of the process execution.
Object program i.e. code to be executed. Data is used for executing the program. While
executing the program, it may require some resources. Last component is used for verifying
the status of the process execution. A process can run to completion only when all requested
resources have been allocated to the process. Two or more processes could be executing the
same program, each using their own data and resources.
Processes and Programs
A process is a dynamic entity, that is, a program in execution. A process is a sequence of instruction executions. A process exists in a limited span of time. Two or more processes could be executing the same program, each using their own data and resources.
A program is a static entity made up of program statements. A program contains the instructions. A program exists at a single place in space and continues to exist. A program does not perform any action by itself.
Process State
When process executes, it changes state. Process state is defined as the current activity of the
process. Figure shows the general form of the process state transition diagram. Process state
contains five states. Each process is in one of the states. The states are listed below.
1. New
2. Ready
3. Running
4. Waiting
5. Terminated (exit)
1. New : A process that just been created.
2. Ready : Ready processes are waiting to have the processor allocated to them by the operating
system so that they can run.
3. Running : The process that is currently being executed. A running process possesses all the
resources needed for its execution, including the processor.
4. Waiting : A process that cannot execute until some event occurs, such as the completion of an
I/O operation. The running process may become suspended by invoking an I/O module.
5. Terminated : A process that has been released from the pool of executable processes by the
operating system.
Process Control Block (PCB)
Each process contains the process control block (PCB). The PCB is the data structure used by the operating system. The operating system groups all information that it needs about a particular process.
Pointer
Process State
Process Number
Program Counter
CPU registers
Memory Allocation
Event Information
List of open files
Process Management / Process Scheduling
Multiprogramming operating system allows more than one process to be loaded into the executable
memory at a time and for the loaded process to share the CPU using time multiplexing.
The scheduling mechanism is the part of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of particular
strategy.
Schedulers
Schedulers are of three types.
1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler
The OS must select processes from these queues for scheduling purposes in some fashion. The
selection process is carried out by the appropriate scheduler.
Long-term scheduler (or job scheduler) – selects which processes should be brought into
the ready queue.
Short-term scheduler (or CPU scheduler) – selects which process should be executed
next and allocates CPU.
Short term scheduler selects new process for the CPU frequently. The process may execute
for only a few milliseconds before waiting for an I/O request, hence the scheduler should execute at
least once every 100 milliseconds.
Long term scheduler executes less frequently. It controls the degree of multiprogramming.
The average rate of creation must be equal to the average rate of departure. So the long term
scheduler is invoked only when the process leaves the system.
10. Define the four essential properties of the following types of operating systems:
(1) Batch (Refer Q.No. 7) (2) Time sharing (Refer Q.No. 7) (3) Real time (4) Distributed (8)
(AUC MAY 2012, MAY 2006, NOV 2006)
Real-Time Systems
A real-time system functions correctly only if it returns the correct result within its time constraints.
Contrast this requirement to a time-sharing system, where it is desirable to respond quickly, or to a
batch system, which may have no time constraints at all.
Often used as a control device in a dedicated application such as controlling scientific
experiments, medical imaging systems, industrial control systems, and some display systems.
Well-defined fixed-time constraints.
Real-Time systems may be either hard or soft real-time.
Hard real-time:
Secondary storage limited or absent; data stored in short-term memory or read-only
memory (ROM). Conflicts with time-sharing systems; not supported by general-purpose operating systems.
Soft real-time
Limited utility in industrial control of robotics
Useful in applications (multimedia, virtual reality) requiring advanced operating-system
features.
Distributed Systems
Distributed systems depend on networking for their functionality. By being able to
communicate, distributed systems are able to share computational tasks and provide a rich set
of features to users.
Most operating systems support TCP/IP, including the Windows and UNIX operating systems.
Some systems support proprietary protocols to suit their needs. To an operating system, a
network protocol simply needs an interface device (a network adapter, for example) with a
device driver to manage it, and software to package data in the communications protocol to
send it and to unpackage it to receive it.
Networks are characterized based on the distances between their nodes. A local-area network
(LAN) exists within a room, a floor, or a building. A wide-area network (WAN) usually
exists between buildings, cities, or countries.
A global company may have a WAN to connect its offices, worldwide. These networks could
run one protocol or several protocols.
11. List five services provided by an operating system. Explain how each provides convenience
to the users. Explain also in which cases it would be impossible for user-level programs to
provide these services. (AUC NOV/DEC 2011,MAY 2012)
An operating system provides services to programs and to the users of those programs. It provides an environment for the execution of programs. The services provided differ from one operating system to another. The operating system makes the programming task easier.
The common service provided by the operating system is listed below.
1. Program execution
2. I/O operation
3. File system manipulation
4. Communications
5. Error detection
1. Program execution: Operating system loads a program into memory and executes the program.
The program must be able to end its execution, either normally or abnormally.
2. I/O Operation : I/O means any file or any specific I/O device. Program may require any I/O
device while running. So operating system must provide the required I/O.
3. File system manipulation : Program needs to read a file or write a file. The operating system gives
the permission to the program for operation on file.
4. Communication : Data transfer between two processes is sometimes required. The two processes may be on the same computer, or on different computers connected through a computer network.
Communication may be implemented by two methods:
a. Shared memory
b. Message passing.
5. Error detection : Errors may occur in the CPU, in I/O devices, or in the memory hardware. The
operating system constantly needs to be aware of possible errors. It should take the appropriate action
to ensure correct and consistent computing.
Operating system with multiple users provides following services.
1. Resource Allocation
2. Accounting
3. Protection
A) Resource Allocation :
If there is more than one user or more than one job running at the same time, then resources must be allocated to each of them. The operating system manages different types of resources; some, such as main memory, CPU cycles and file storage, require special allocation code, while others require only general request and release code.
For allocating the CPU, CPU scheduling algorithms are used for better utilization of the CPU. These routines consider the speed of the CPU, the number of available registers and other required factors.
B) Accounting :
Logs of each user's activity must be kept. It is also necessary to keep a record of how much and what kinds of computer resources each user consumes. This log is used for accounting purposes.
The accounting data may be used for statistics or for billing. It is also used to improve system efficiency.
C) Protection :
Protection involves ensuring that all access to system resources is controlled. Security starts
with each user having to authenticate to the system, usually by means of a password. External
I/O devices must be also protected from invalid access attempts.
In protection, all access to the resources is controlled. In a multiprocess environment it is
possible for one process to interfere with another, or with the operating system, so
protection is required.
12. What two advantages do threads have over multiple processes? What major disadvantages do they have? Suggest one application that would benefit from the use of threads. (8) Explain the various issues associated with threads in detail. (8) (AUC MAY 2012)
1. The fork and exec System Calls
In a multithreaded program, the semantics of the fork and exec system calls change. Some UNIX systems have two versions of fork: one that duplicates all threads and another that duplicates only the thread that invoked the fork system call. Which version to use depends entirely upon the application. Duplicating all threads is unnecessary if exec is called immediately after fork.
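On POSIX systems the usual pattern is fork followed immediately by exec, which is why duplicating only the calling thread suffices: the child's entire memory image is about to be replaced anyway. A minimal sketch in Python (Unix-only; echo is used here as an arbitrary stand-in for the new program):

```python
import os

# Sketch of the fork-then-exec pattern (POSIX; works on Unix-like systems).
pid = os.fork()                          # duplicates (only) the calling thread
if pid == 0:
    # child: replace this process image with a new program
    os.execvp("echo", ["echo", "hello from the child"])
else:
    # parent: wait for the child to finish
    _, status = os.waitpid(pid, 0)
    print("child exited with status", os.waitstatus_to_exitcode(status))  # 0
```

Any threads the child might have inherited beyond the one that called fork would have been discarded by execvp, so duplicating them would have been wasted work.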
2.Cancellation
Thread cancellation is the task of terminating a thread before it has completed. For example, if
multiple threads are concurrently searching through a database and one thread returns the result, the
remaining threads might be cancelled.
Another situation might occur when a user presses a button on a web browser that stops a web
page from loading any further. Often a web page is loaded in a separate thread. When a user presses
the stop button, the thread loading the page is cancelled.
A thread that is to be cancelled is often referred to as the target thread.
1. Asynchronous cancellation: One thread immediately terminates the target thread.
2. Deferred cancellation: The target thread can periodically check whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.
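Deferred cancellation can be sketched with a shared flag that the target thread polls at safe cancellation points; the names below are our own, invented for illustration:

```python
import threading

# Sketch of deferred cancellation: the target thread checks a flag at
# safe points instead of being terminated asynchronously.
cancel_requested = threading.Event()
results = []

def search_worker():
    for record in range(1_000_000):      # stand-in for a database scan
        if cancel_requested.is_set():    # cancellation point
            return                       # exit voluntarily, in an orderly fashion
        if record == 424_242:            # pretend this record is a match
            results.append(record)

t = threading.Thread(target=search_worker)
t.start()
cancel_requested.set()                   # another thread already found the answer
t.join()                                 # the target thread stops cleanly
print("worker stopped")
```

Because the worker only checks the flag between records, it never dies mid-update, which is precisely the advantage deferred cancellation has over the asynchronous variety.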
3.Signal Handling
A signal may be received either synchronously or asynchronously, depending upon the source
and the reason for the event being signalled. Whether a signal is synchronous or asynchronous, all
signals follow the same pattern:
1. A signal is generated by the occurrence of a particular event.
2. A generated signal is delivered to a process.
3. Once delivered, the signal must be handled.
When a signal is generated by an event external to a running process, that process receives the
signal asynchronously. Examples of such signals include terminating a process with specific
keystrokes (such as <control><C>) or having a timer expire. Typically an asynchronous signal is sent
to another process.
Every signal may be handled by one of two possible handlers:
1. A default signal handler: run by the kernel when handling the signal.
2. A user-defined signal handler: handles the signal rather than taking the default action.
Handling signals in single-threaded programs is straightforward; signals are always delivered to a
process. However, delivering signals is more complicated in multithreaded programs, as a process
may have several threads. Where then should a signal be delivered?
In general, the following options exist:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process.
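Installing a user-defined handler in place of the default action can be sketched with Python's signal module. SIGUSR1 is chosen arbitrarily, and the sketch is POSIX-specific:

```python
import os
import signal

# Sketch of a user-defined signal handler replacing the default action.
received = []

def handler(signum, frame):
    received.append(signum)              # record that the signal was delivered

signal.signal(signal.SIGUSR1, handler)   # install the user-defined handler
os.kill(os.getpid(), signal.SIGUSR1)     # step 1: the signal is generated
# step 2: it is delivered to this process; step 3: the handler runs
print(received == [signal.SIGUSR1])      # True
```

In CPython the handler always runs in the main thread, which corresponds to option 4 in the list above: one designated thread receives all signals for the process.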
4. Thread Pools
The idea of a thread pool is to create a number of threads at process startup and place them into a pool,
where they sit and wait for work. When a server receives a request, it awakens a thread from this pool, if one is available, passing it the request to service. Once the thread completes its service, it returns to
the pool awaiting more work. If the pool contains no available thread, the server waits until one
becomes free. In particular, the benefits of thread pools are:
1. It is usually faster to service a request with an existing thread than waiting to create a thread.
2. A thread pool limits the number of threads that exist at any one point. This is particularly important
on systems that cannot support a large number of concurrent threads. The number of threads in the
pool can be set heuristically based upon factors such as the number of CPUs in the system, the amount
of physical memory, and the expected number of concurrent client requests. More sophisticated
thread-pool architectures can dynamically adjust the number of threads in the pool according to usage
patterns. Such architectures provide the further benefit of having a smaller pool, thereby consuming
less memory, when the load on the system is low.
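The pool behaviour described above can be sketched with Python's standard-library thread pool. The pool size of 4 and the handle_request stand-in are illustrative choices, not from the text:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the thread-pool idea: worker threads are created once and
# reused for each request instead of being created per request.
def handle_request(n):                 # stand-in for servicing one request
    return n * n

# pool size chosen heuristically (e.g. based on CPU count)
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(handle_request, n) for n in range(8)]
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The eight requests are serviced by at most four threads; when a fifth request arrives while all workers are busy, it simply waits in the executor's internal queue until a thread returns to the pool.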
5 .Thread-Specific Data
Threads belonging to a process share the data of the process. Indeed, this sharing of data provides one
of the benefits of multithreaded programming. However, each thread might need its own copy of
certain data in some circumstances. For example, in a transaction-processing system, we might
service each transaction in a separate thread. Most thread libraries, including Win32 and Pthreads, provide some form of support for thread-specific data. Java provides support as well.
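Thread-specific data can be sketched with Python's threading.local, where each thread sees a private copy of the attribute; the transaction-id example below is our own:

```python
import threading

# Sketch of thread-specific data: local.txn is private to each thread,
# as if each thread were servicing its own transaction.
local = threading.local()
observed = {}

def worker(txn_id):
    local.txn = txn_id                 # this thread's own copy
    observed[txn_id] = local.txn       # no other thread can overwrite it

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(observed)  # every thread saw only its own value
```

Although all threads assign to the same name, local.txn, each assignment lands in per-thread storage, so no thread ever reads another thread's transaction id.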
13. (i) Explain Process Control Block. (Marks 4)
(iii) Describe the Inter Process Communication in client-server systems. (8) (AUC DEC 2008,
MAY 2009, MAY 2010)
Each process contains the process control block (PCB). The PCB is the data structure used by the
operating system. The operating system groups all information that it needs about a particular process. The
figure shows the process control block.
Pointer
Process State
Process Number
Program Counter
CPU registers
Memory Allocation
Event Information
List of open files
1. Pointer : Pointer points to another process control block. Pointer is used for maintaining the
scheduling list.
2. Process State : Process state may be new, ready, running, waiting and so on.
3. Program Counter : It indicates the address of the next instruction to be executed for this process.
4. Event information : For a process in the blocked state this field contains information concerning
the event for which the process is waiting.
5. CPU registers : This field indicates the general purpose registers, stack pointers, index registers,
accumulators, etc. The number and type of registers depend entirely upon the computer
architecture.
6. Memory Management Information : This information may include the value of base and limit
register. This information is useful for deallocating the memory when the process terminates.
7. Accounting Information : This information includes the amount of CPU and real time used, time
limits, job or process numbers, account numbers etc.
Process control block also includes the information about CPU scheduling, I/O resource
management, file management information, priority and so on.
The PCB simply serves as the repository for any information that may vary from process to process.
When a process is created, hardware registers and flags are set to the values provided by the loader
or linker. Whenever that process is suspended, the contents of the processor register are usually
saved on the stack and the pointer to the related stack frame is stored in the PCB. In this way, the
hardware state can be restored when the process is scheduled to run again.
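A PCB can be sketched as a simple record type. The field names follow the list above, while the concrete types, defaults, and the (base, limit) encoding are assumptions made for illustration:

```python
from dataclasses import dataclass, field

# A sketch of a PCB as a record; one instance exists per process.
@dataclass
class PCB:
    process_number: int
    process_state: str = "new"
    program_counter: int = 0
    cpu_registers: dict = field(default_factory=dict)
    memory_allocation: tuple = (0, 0)          # assumed (base, limit) pair
    event_information: "str | None" = None     # set while blocked on an event
    open_files: list = field(default_factory=list)
    next: "PCB | None" = None                  # pointer used by scheduling lists

pcb = PCB(process_number=7)
pcb.process_state = "ready"                    # OS moves the process to a new state
print(pcb.process_number, pcb.process_state)   # 7 ready
```

The next pointer is what lets the OS thread PCBs together into the ready and waiting lists mentioned earlier: moving a process between states is just relinking its PCB.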
Interprocess Communication (IPC)
Mechanism for processes to communicate and to synchronize their actions.
Message system – processes communicate with each other without resorting to shared
variables.
IPC facility provides two operations:
send(message) – message size fixed or variable
receive(message)
If P and Q wish to communicate, they need to:
establish a communication link between them
exchange messages via send/receive
Implementation of communication link
physical (e.g., shared memory, hardware bus)
Logical (e.g., logical properties)
Naming-Processes that want to communicate must have a way to refer to each other. They can use
either direct or indirect communication.
Direct Communication
Symmetry in addressing
Processes must name each other explicitly:
send (P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
Properties of communication link
o Links are established automatically.
o A link is associated with exactly one pair of communicating processes.
o Between each pair there exists exactly one link.
o The link may be unidirectional, but is usually bi-directional.
Asymmetry in addressing
send (P, message) – send a message to process P
receive (id, message) – receive a message from any process; the variable id is
set to the name of the process with which communication has taken place.
Disadvantage
Changing the name of a process may necessitate examining all other process
definitions.
Indirect Communication
Messages are directed and received from mailboxes (also referred to as ports).
Each mailbox has a unique id.
Processes can communicate only if they share a mailbox.
Send and receive primitives
1. Send (A, message)-send a message to mailbox A.
2. Receive (A, message)-receive a message from mailbox A.
Properties of communication link
Link established only if processes share a common mailbox
A link may be associated with many processes.
Each pair of processes may share several communication links.
A link may be unidirectional or bi-directional.
Mailbox sharing
P1, P2, and P3 share mailbox A. P1 sends; P2 and P3 receive.
Who gets the message?
Solutions
Allow a link to be associated with at most two processes.
Allow only one process at a time to execute a receive operation.
Allow the system to select arbitrarily the receiver; the sender is notified who the receiver was.
A mailbox may be owned either by a process or by the OS.
If a process is the owner then the mailbox is part of the address space of the process. If the
process terminates then any process which sends a message to this mailbox must be
notified that the mailbox no longer exists.
A mailbox owned by the OS is independent and is not attached to any particular
process. OS gives certain mechanisms that the process can do,
1. create a new mailbox
2. send and receive messages through mailbox
3. destroy a mailbox
Synchronization
Message passing may be either blocking or non-blocking. Blocking is considered synchronous;
non-blocking is considered asynchronous.
Send and receive primitives may be either blocking or non-blocking.
Blocking send: the sending process is blocked until the message is received by the
receiving process.
Non-blocking send: the sending process sends the message and resumes the operation.
Blocking receive: the receiver blocks until a message is available.
Non-blocking receive: the receiver retrieves either a valid message or a null.
Buffering
Whether the communication is direct or indirect, messages exchanged reside in a temporary queue attached to the link; this queue can be implemented in one of three ways:
1. Zero capacity – the queue holds 0 messages. The sender must block until the recipient receives the message (a rendezvous).
2. Bounded capacity – the queue has finite length n. The sender must wait if the link is full.
3. Unbounded capacity – the queue has infinite length. The sender never waits.
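Blocking send and receive over a bounded link can be sketched with a thread-safe queue of capacity n (here n = 2; the message names are invented):

```python
import queue
import threading

# Sketch of a bounded-capacity link: put() blocks when the link is full,
# get() blocks when it is empty.
link = queue.Queue(maxsize=2)

def sender():
    for msg in ("m1", "m2", "m3"):
        link.put(msg)              # blocking send: waits while the link is full

received = []
t = threading.Thread(target=sender)
t.start()
for _ in range(3):
    received.append(link.get())    # blocking receive: waits for a message
t.join()
print(received)  # ['m1', 'm2', 'm3']
```

With maxsize=0 the same queue becomes the unbounded case, where the sender never waits; the zero-capacity rendezvous has no direct Queue equivalent and would need the sender to block until the matching receive occurs.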
14. Discuss in detail the concept of virtual machines, with neat sketch.(8) (AUC NOV/DEC2011)
A virtual machine takes the layered approach to its logical conclusion. It treats hardware
and the operating system kernel as though they were all hardware.
A virtual machine provides an interface identical to the underlying bare hardware.
The operating system creates the illusion of multiple processes, each executing on its own
processor with its own (virtual) memory.
The resources of the physical computer are shared to create the virtual machines.
CPU scheduling can create the appearance that users have their own
processor.
Spooling and a file system can provide virtual card readers and virtual line printers. A
normal user time-sharing terminal serves as the virtual machine operator’s console.
System models. (a) No virtual machine. (b) Virtual machine
Advantages/Disadvantages of Virtual Machines
The virtual-machine concept provides complete protection of system resources since
each virtual machine is isolated from all other virtual machines. This isolation, however,
permits no direct sharing of resources.
A virtual-machine system is a perfect vehicle for operating-systems research and
development. System development is done on the virtual machine, instead of on a physical
machine and so does not disrupt normal system operation.
The virtual machine concept is difficult to implement due to the effort required to provide
an exact duplicate to the underlying machine.
Java Virtual Machine
Compiled Java programs are platform-neutral byte codes executed by a Java Virtual
Machine (JVM).
JVM consists of
1. class loader
2. class verifier
3. runtime interpreter
Just-In-Time (JIT) compilers increase performance.
Java Virtual Machine
System Design Goals
User goals – operating system should be convenient to use, easy to learn, reliable, safe, and
fast.
System goals – operating system should be easy to design, implement, and maintain, as well
as flexible, reliable, error-free, and efficient.
Mechanisms and Policies
Mechanisms determine how to do something, policies decide what will be done.
The separation of policy from mechanism is a very important principle, it allows
maximum flexibility if policy decisions are to be changed later.
System Implementation
Traditionally written in assembly language, operating systems can now be written in
higher-level languages.
Code written in a high-level language:
It can be written faster.
It is more compact.
It is easier to understand and debug.
An operating system is far easier to port (move to some other hardware) if it is written in a high-level
language.
System Generation (SYSGEN)
Operating systems are designed to run on any of a class of machines; the system must be configured
for each specific computer site.
SYSGEN program obtains information concerning the specific configuration of the hardware system.
Booting – starting a computer by loading the kernel.
Bootstrap program – code stored in ROM that is able to locate the kernel, load it into memory, and
start its execution.
15. Write detailed notes on process control and file manipulation. (16) (AUC NOV/DEC 2011)
16. Explain client-server communication. (AUC MAY 2010)
Communication in client – server systems
Sockets
Remote Procedure Calls
Remote Method Invocation (Java)
Sockets
A socket is defined as an endpoint for communication. It is identified by the concatenation of an IP address and a port number; the socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8.
Communication takes place between a pair of sockets.
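Communication between a pair of sockets can be sketched with socketpair(), which yields two already-connected endpoints in a single process, standing in for the client and server; the request and reply bytes below are invented:

```python
import socket

# Sketch of communication between a pair of connected sockets.
client, server = socket.socketpair()

client.sendall(b"GET /index.html")   # client endpoint sends a request
request = server.recv(1024)          # server endpoint receives it
server.sendall(b"200 OK")            # server replies
reply = client.recv(1024)            # client receives the reply

client.close()
server.close()
print(request, reply)  # b'GET /index.html' b'200 OK'
```

A real client-server pair would instead use socket(), bind(), listen(), accept() and connect() across two hosts, but the send/receive pattern over the resulting socket pair is the same.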
Remote Procedure Calls
Remote procedure call (RPC) abstracts procedure calls between processes on networked
systems.
Stubs – client-side proxy for the actual procedure on the server.
The client-side stub locates the server and marshals the parameters.
The server-side stub receives this message, unpacks the marshalled parameters, and performs the
procedure on the server.
Execution of RPC
Remote Method Invocation
Remote Method Invocation (RMI) is a Java mechanism similar to RPCs.
RMI allows a Java program on one machine to invoke a method on a remote object
Marshalling Parameters
UNIT – II: PROCESS SCHEDULING AND SYNCHRONIZATION
PART – A (2 Marks)
1. What is deadlock? (AUC NOV 2010)
A deadlock is a situation in which two or more competing actions are each waiting for the other to
finish, and thus neither ever does.
2. Distinguish pre-emption and non-pre-emption. (AUC NOV 2008, AUC MAY 2012)
Preemption means the operating system moves a process from running to ready without the process
requesting it. Without preemption, the system implements "run to completion (or yield or block)".
The "preempt" arc in the process state-transition diagram represents this move.
Preemption needs a clock interrupt (or equivalent) and is needed to guarantee fairness.
Preemption is found in all modern general-purpose operating systems. Even non-preemptive systems
can be multiprogrammed (e.g., when processes block for I/O).
3. Is it possible to have a deadlock involving only one process? Explain your answer.
(AUC NOV 2008)
No. This follows directly from the hold-and-wait condition: a single process cannot be waiting
for a resource held by another process, so no cycle of waiting can form.
4. What are conditions under which a deadlock situation may arise?
Deadlock can arise if four conditions hold simultaneously.
Mutual exclusion: only one process at a time can use a resource.
Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other
processes.
No preemption: a resource can be released only voluntarily by the process holding it, after that process has
completed its task.
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource
held by P1, P1 for one held by P2, …, and Pn for one held by P0.
5. What is the critical section problem?
A critical section is a piece of code that accesses a shared resource that must not be concurrently accessed
by more than one thread of execution. The problem is to design a protocol so that while one thread executes
in its critical section, no other thread is allowed to execute in its own critical section.
6. Define busy waiting and spin lock.
When a process is in its critical section, any other process that tries to enter its critical section must loop
continuously in the entry code. This is called busy waiting, and this type of semaphore is also called a
spinlock, because the process "spins" while waiting for the lock.
7. What are the four necessary conditions for a deadlock to occur? (AUC MAY 2012)
Mutual Exclusion - At least one resource must be held in a non-sharable mode; If any other process
requests this resource, then that process must wait for the resource to be released.
Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at
least one resource that is currently being held by some other process.
No preemption - Once a process is holding a resource ( i.e. once its request has been granted ), then
that resource cannot be taken away from that process until the process voluntarily releases it.
Circular Wait - A set of processes { P0, P1, P2, . . ., PN } must exist such that every P[ i ] is waiting
for a resource held by P[ ( i + 1 ) % ( N + 1 ) ].
8. What is a semaphore? State the two operations. (AUC APR/MAY 2010, AUC NOV/DEC 2011)
A semaphore S is an integer variable that can be accessed only via two indivisible (atomic) operations:
wait and signal.
9. What is deadlock? What are the schemes used in operating systems to handle deadlocks?
(AUC APR/MAY 2010)
Ensure that the system will never enter a deadlock state.
Allow the system to enter a deadlock state and then recover.
Ignore the problem and pretend that deadlocks never occur in the system; used by most operating systems,
including UNIX.
10. What is a dispatcher?
The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this
involves:
- switching context
- switching to user mode
- jumping to the proper location in the user program to restart that program
11. What is dispatch latency?
Dispatch latency is the time it takes for the dispatcher to stop one process and start another
running.
12. Define Mutual Exclusion. (AUC NOV/DEC 2011)
If process Pi is executing in its critical section, then no other process can be executing in its
critical section.
13. What is turnaround time?
Turnaround time is the interval between a process's time of arrival and its time of completion
(i.e., the total time the process spends in the system).
14. Why is CPU scheduling required? (AUC JUN 2009)
CPU scheduling selects from among the processes in memory that are ready to execute and allocates the
CPU to one of them, so that the CPU is kept busy whenever there is a runnable process.
15. List the three requirements that must be satisfied by a solution to the critical-section problem.
(AUC JUN 2009)
Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can be executing in
their critical sections.
Progress. If no process is executing in its critical section and there exist some processes that wish to enter
their critical section, then the selection of the processes that will enter the critical section next cannot be
postponed indefinitely.
Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their
critical sections after a process has made a request to enter its critical section and before that request is
granted.
16. Define throughput.
Throughput in CPU scheduling is the number of processes that are completed per unit time. For long
processes, this rate may be one process per hour; for short transactions, throughput might be 10 processes per
second.
17. Define race condition.
When several processes access and manipulate the same data concurrently, and the outcome of the execution
depends on the particular order in which the accesses take place, this is called a race condition. To avoid race
conditions, only one process at a time should manipulate the shared variable.
18. What is a resource-allocation graph?
Deadlock can be described through a resource allocation graph.
• The RAG consists of a set of vertices P={P1,P2 ,…,P n} of processes and R={R1,R2,…,Rm} of resources.
• A directed edge from a processes to a resource, Pi->R j, implies that Pi has requested Rj.
• A directed edge from a resource to a process, Rj->Pi, implies that Rj has been allocated to Pi.
• If the graph has no cycles, deadlock cannot exist. If the graph has a cycle, deadlock may exist.
19. Define deadlock prevention.
Deadlock prevention is a set of methods for ensuring that at least one of the four necessary conditions
(mutual exclusion, hold and wait, no preemption, circular wait) cannot hold. By ensuring that at least
one of these conditions cannot hold, the occurrence of a deadlock can be prevented.
20. Define deadlock avoidance.
Deadlock avoidance requires additional information about how resources will be requested. Each request
requires that the system consider the resources currently available, the resources currently allocated to each
process, and the future requests and releases of each process, to decide whether the current request can be
satisfied or must wait to avoid a possible future deadlock.
21. What are a safe state and an unsafe state?
A state is safe if the system can allocate resources to each process in some order and still avoid a deadlock. A
system is in a safe state only if there exists a safe sequence. A sequence of processes <P1, P2, …, Pn> is a safe
sequence for the current allocation state if, for each Pi, the resources that Pi can still request can be satisfied
by the currently available resources plus the resources held by all Pj with j < i. If no such sequence exists,
then the system state is said to be unsafe.
22. What is banker's algorithm?
Banker's algorithm is a deadlock avoidance algorithm that is applicable to a resource allocation system with
multiple instances of each resource type. The two algorithms used for its implementation are:
a. Safety algorithm: The algorithm for finding out whether or not a system is in a safe state.
b. Resource-request algorithm: if the resulting resource allocation is safe, the transaction is completed and
process Pi is allocated its resources. If the new state is unsafe Pi must wait and the old resource-allocation
state is restored.
PART-B (16 Marks )
1. Explain in detail about any two CPU scheduling algorithms with suitable examples. (16)
(AUC APR’10,NOV’11)
CPU Scheduler
Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready state.
4. Terminates.
Scheduling under 1 and 4 is non-preemptive; all other scheduling is preemptive.
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time it takes from when a request was submitted until the first
response is produced, not the complete output
Scheduling Algorithm
First-Come, First-Served (FCFS) Scheduling
1. Suppose that the processes arrive in the order P1, P2, P3, with CPU burst times of 24, 3, and
3 ms (the burst times implied by the waiting times below). The Gantt chart for the schedule is:
| P1 (0-24) | P2 (24-27) | P3 (27-30) |
2. Waiting time for P1 = 0; P2 = 24; P3 = 27
3. Average waiting time: (0 + 24 + 27)/3 = 17
Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:
| P2 (0-3) | P3 (3-6) | P1 (6-30) |
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Convoy effect: short processes stuck waiting behind a long process.
Shortest-Job-First (SJF) Scheduling
1. Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with
the shortest time.
2. Two schemes:
a. nonpreemptive – once CPU given to the process it cannot be preempted until Completes its CPU burst.
b. preemptive – if a new process arrives with CPU burst length less than remaining time of current executing
process, preempt. This scheme is known as the Shortest-Remaining-Time-First (SRTF).
3. SJF is optimal – gives minimum average waiting time for a given set of processes
Example of Non-Preemptive SJF
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
1. SJF (non-preemptive): | P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16) |
Average waiting time = (0 + 6 + 3 + 7)/4 = 4
Example of Preemptive SJF
2. SJF (preemptive): | P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |
Average waiting time = (9 + 1 + 0 + 2)/4 = 3
Determining Length of Next CPU Burst
Can only estimate the length.
Can be done by using the length of previous CPU bursts, using exponential averaging.
o tn = actual length of the nth CPU burst
o τn+1 = predicted value for the next CPU burst
o α, 0 ≤ α ≤ 1
o Define: τn+1 = α tn + (1 – α) τn
Examples of Exponential Averaging
α = 0: τn+1 = τn – recent history does not count.
α = 1: τn+1 = tn – only the actual last CPU burst counts.
2. What is a deadlock? What are the necessary conditions for a deadlock to occur?
(AUC NOV/DEC 2011)
A process requests resources; if the resources are not available at that time, the process enters a
wait state. Waiting processes may never again change state, because the resources they have
requested are held by other waiting processes. This situation is called a deadlock.
Deadlock can arise if four conditions hold simultaneously.
Mutual exclusion: only one process at a time can use a resource.
Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other
processes.
No preemption: a resource can be released only voluntarily by the process holding it, after that process has
completed its task.
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting
for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a
resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
3. How can a system recover from deadlock? (10) (AUC NOV/DEC 2011)
When a detection algorithm determines that a deadlock exists, several alternatives exist. One possibility is to
inform the operator that a deadlock has occurred, and to let the operator deal with the deadlock manually. The
other possibility is to let the system recover from the deadlock automatically. There are two options for
breaking a deadlock. One solution is simply to abort one or more processes to break the circular wait. The
second option is to preempt some resources from one or more of the deadlocked processes.
1. Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods. In both methods, the system
reclaims all resources allocated to the terminated processes.
• Abort all deadlocked processes: This method clearly will break the dead – lock cycle, but at a great
expense, since these processes may have computed for a long time, and the results of these partial
computations must be discarded, and probably must be recomputed.
• Abort one process at a time until the deadlock cycle is eliminated: This method incurs considerable
overhead, since after each process is aborted a deadlock-detection algorithm must be invoked to determine
whether any processes are still deadlocked.
2. Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some resources from processes
and give these resources to other processes until the deadlock cycle is broken.
Selecting a victim – minimize cost.
Rollback – return to some safe state, restart process for that state.
Starvation – the same process may always be picked as victim; include the number of rollbacks in the cost factor.
Combined Approach to Deadlock Handling
Combine the three basic approaches (prevention, avoidance, and detection), allowing the use of the optimal
approach for each class of resources in the system.
Partition resources into hierarchically ordered classes.
Use the most appropriate technique for handling deadlocks within each class.
4. What is synchronization? Explain how semaphores can be used to deal with the n-process
critical section problem. (8) (AUC MAY 2006, APR/MAY 2010)
1. Synchronization tool that does not require busy waiting.
2. Semaphore S – integer variable
3. It can be accessed only via two indivisible (atomic) operations:
wait(S):   while (S <= 0)
               ; // no-op
           S--;
signal(S): S++;
Critical Section of n Processes
Shared data: semaphore mutex; // initially mutex = 1
Process Pi:
do {
    wait(mutex);
        critical section
    signal(mutex);
        remainder section
} while (1);
Semaphore Implementation
Define a semaphore as a record:
typedef struct {
    int value;
    struct process *L;
} semaphore;
Assume two simple operations:
block() suspends the process that invokes it.
wakeup(P) resumes the execution of a blocked process P.
Implementation
Semaphore operations are now defined as:
wait(S):
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block();
    }
signal(S):
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
Semaphore as a General Synchronization Tool
To execute statement B in Pj only after statement A has executed in Pi, use a semaphore flag initialized to 0:
Pi: A;            Pj: wait(flag);
    signal(flag);     B;
5. Explain Banker's deadlock-avoidance algorithm with an illustration. (8) (AUC APR/MAY 2010)
Banker’s Algorithm
Handles multiple instances of each resource type.
Each process must declare (claim a priori) its maximum resource use.
When a process requests a resource it may have to wait.
When a process gets all its resources it must return them in a finite amount of time.
Data Structures for the Banker’s Algorithm
Let n = number of processes, and m = number of resources types.
Available: Vector of length m. If available [j] = k, there are k instances of resource type Rj available.
Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of resource type Rj.
Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k instances of Rj.
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.
Need [i,j] = Max [i,j] – Allocation [i,j].
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
   Work = Available
   Finish[i] = false for i = 1, 2, …, n.
2. Find an i such that both
   (a) Finish[i] == false
   (b) Need[i] ≤ Work
   If no such i exists, go to step 4.
3. Work = Work + Allocation[i]
   Finish[i] = true
   Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
Resource-Request Algorithm for Process Pi
Request[i] = request vector for process Pi. If Request[i][j] = k, then process Pi wants k instances
of resource type Rj.
1. If Request[i] ≤ Need[i], go to step 2. Otherwise, raise an error condition, since the process
   has exceeded its maximum claim.
2. If Request[i] ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not
   available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
   Available = Available – Request[i];
   Allocation[i] = Allocation[i] + Request[i];
   Need[i] = Need[i] – Request[i];
If the resulting state is safe, the resources are allocated to Pi.
If it is unsafe, Pi must wait, and the old resource-allocation state is restored.
Example of Banker's Algorithm
1. 5 processes P0 through P4; 3 resource types: A (10 instances), B (5 instances), and C (7 instances).
2. Snapshot at time T0 (the classic snapshot consistent with these totals):
          Allocation    Max       Available
          A  B  C       A  B  C   A  B  C
   P0     0  1  0       7  5  3   3  3  2
   P1     2  0  0       3  2  2
   P2     3  0  2       9  0  2
   P3     2  1  1       2  2  2
   P4     0  0  2       4  3  3
The content of the matrix Need is defined to be Max – Allocation:
          Need
          A  B  C
   P0     7  4  3
   P1     1  2  2
   P2     6  0  0
   P3     0  1  1
   P4     4  3  1
The system is in a safe state since the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.
6. (i) What is a Gantt chart? Explain how it is used.
(ii) Consider the following set of processes, with the length of the CPU-burst time given in
milliseconds:
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            3
P4        1            4
P5        5            2
The processes arrive in the order P1, P2, P3, P4, P5, all at time 0.
(1) Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, a non
preemptive priority (a smaller priority number implies a higher priority), and RR (quantum = 1)
scheduling.
(2) What is the turnaround time of each process for each of the scheduling algorithms in part (1)?
(3) What is the waiting time of each process for each of the scheduling algorithms in part (1)?
(4) Which of the schedules in part (1) results in the minimal average waiting time (over all
processes)? (16) (AUC MAY/JUNE 2012)
a) FCFS algorithm
Gantt chart: | P1 (0-10) | P2 (10-11) | P3 (11-13) | P4 (13-14) | P5 (14-19) |
i) Waiting time
Process   Schedule time – Arrival time = Waiting time
p1        0  – 0 = 0
p2        10 – 0 = 10
p3        11 – 0 = 11
p4        13 – 0 = 13
p5        14 – 0 = 14
Total = 48; Average waiting time = 48/5 = 9.6
ii) Turnaround time
Process   Completion time – Arrival time = Turnaround time
p1        10 – 0 = 10
p2        11 – 0 = 11
p3        13 – 0 = 13
p4        14 – 0 = 14
p5        19 – 0 = 19
Total = 67; Average turnaround time = 67/5 = 13.4
b) SJF algorithm
Gantt chart: | P2 (0-1) | P4 (1-2) | P3 (2-4) | P5 (4-9) | P1 (9-19) |
i) Waiting time
Process   Schedule time – Arrival time = Waiting time
p1        9 – 0 = 9
p2        0 – 0 = 0
p3        2 – 0 = 2
p4        1 – 0 = 1
p5        4 – 0 = 4
Total = 16; Average waiting time = 16/5 = 3.2
ii) Turnaround time
Process   Completion time – Arrival time = Turnaround time
p1        19 – 0 = 19
p2        1  – 0 = 1
p3        4  – 0 = 4
p4        2  – 0 = 2
p5        9  – 0 = 9
Total = 35; Average turnaround time = 35/5 = 7
c) Non-preemptive priority scheduling
Order by priority (smaller number = higher priority), ties broken by arrival order:
P2 (1), P5 (2), P1 (3), P3 (3), P4 (4).
Gantt chart: | P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |
i) Waiting time
Process   Schedule time – Arrival time = Waiting time
p1        6  – 0 = 6
p2        0  – 0 = 0
p3        16 – 0 = 16
p4        18 – 0 = 18
p5        1  – 0 = 1
Total = 41; Average waiting time = 41/5 = 8.2
ii) Turnaround time
Process   Completion time – Arrival time = Turnaround time
p1        16 – 0 = 16
p2        1  – 0 = 1
p3        18 – 0 = 18
p4        19 – 0 = 19
p5        6  – 0 = 6
Total = 60; Average turnaround time = 60/5 = 12
d) Round Robin scheduling (quantum = 1)
Gantt chart (one time unit per slot until t = 14, then P1 runs to completion):
| P1 | P2 | P3 | P4 | P5 | P1 | P3 | P5 | P1 | P5 | P1 | P5 | P1 | P5 |   P1    |
0    1    2    3    4    5    6    7    8    9    10   11   12   13   14       19
Completion times: P1 = 19, P2 = 2, P3 = 7, P4 = 4, P5 = 14.
i) Turnaround time (= completion time, since all processes arrive at time 0):
19 + 2 + 7 + 4 + 14 = 46; Average turnaround time = 46/5 = 9.2
ii) Waiting time (= turnaround time – burst time):
P1 = 9, P2 = 1, P3 = 5, P4 = 3, P5 = 9; Total = 27; Average waiting time = 27/5 = 5.4
(4) SJF gives the minimal average waiting time (3.2 ms) over all processes; SJF is provably
optimal in this respect.
7. What is the critical section problem? Explain the two-process software solutions. Explain the
Dining Philosophers problem using semaphores. (AUC MAY 2006, NOV 2010)
1. n processes all compete to use some shared data.
2. Each process has a code segment, called the critical section, in which the shared data is accessed.
3. Problem – ensure that when one process is executing in its critical section, no other process is allowed to
execute in its critical section.
Solution to Critical-Section Problem
Mutual Exclusion. If process Pi is executing in its critical section, then no other processes canbe executing in
their critical sections.
Progress. If no process is executing in its critical section and there exist some processes that wish to enter
their critical section, then the selection of the processes that will enter the critical section next cannot be
postponed indefinitely.
Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their
critical sections after a process has made a request to enter its critical section and before that request is
granted.
Assume that each process executes at a nonzero speed.
No assumption is made concerning the relative speed of the n processes.
Initial Attempts to Solve the Problem
Only 2 processes, P0 and P1.
General structure of process Pi (the other process being Pj):
do {
    entry section
        critical section
    exit section
        remainder section
} while (1);
Processes may share some common variables to synchronize their actions.
Algorithm 1
Shared variables:
int turn; // initially turn = 0
// turn == i means Pi can enter its critical section
Process Pi:
do {
    while (turn != i) ;   // busy-wait
        critical section
    turn = j;
        remainder section
} while (1);
Satisfies mutual exclusion, but not progress.
Algorithm 2
Shared variables:
boolean flag[2]; // initially flag[0] = flag[1] = false
// flag[i] = true means Pi is ready to enter its critical section
Process Pi:
do {
    flag[i] = true;
    while (flag[j]) ;     // busy-wait
        critical section
    flag[i] = false;
        remainder section
} while (1);
Satisfies mutual exclusion, but not the progress requirement.
Algorithm 3 (Peterson's algorithm)
Combines the shared variables of algorithms 1 and 2.
Process Pi:
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j) ;   // busy-wait
        critical section
    flag[i] = false;
        remainder section
} while (1);
Meets all three requirements; solves the critical-section problem for two processes.
Bakery Algorithm
Critical section for n processes.
1) Before entering its critical section, a process receives a number. The holder of the smallest
number enters the critical section.
2) If processes Pi and Pj receive the same number, then if i < j, Pi is served first; else Pj is served first.
3) The numbering scheme always generates numbers in increasing order of enumeration; i.e.,
1, 2, 3, 3, 3, 3, 4, 5, …
4) Notation: < is the lexicographical order on (ticket #, process id #):
   (a) (a, b) < (c, d) if a < c, or if a == c and b < d
   (b) max(a0, …, an-1) is a number k such that k ≥ ai for i = 0, …, n – 1
5) Shared data:
   (a) boolean choosing[n];
   (b) int number[n];
6) The data structures are initialized to false and 0, respectively.
do {
    choosing[i] = true;
    number[i] = max(number[0], number[1], …, number[n – 1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j]) ;
        while ((number[j] != 0) && ((number[j], j) < (number[i], i))) ;
    }
        critical section
    number[i] = 0;
        remainder section
} while (1);
Dining-Philosophers Problem
The dining philosophers problem is an example often used in concurrent algorithm design to
illustrate synchronization issues and techniques for resolving them.
Issues: the problem was designed to illustrate the problem of avoiding deadlock, a system state in
which no progress is possible.
One idea is to instruct each philosopher to behave as follows:
Think until the left fork is available and when it is pick it up
Think until the right fork is available and when it is pick it up
Eat for a fixed amount of time
Put the right fork down
Put the left fork down
Repeat from the beginning.
This attempt at a solution fails: it allows the system to reach a deadlock state in which each philosopher has
picked up the fork to the left and waits for the fork to the right to be put down, which never happens, because
A) each right fork is another philosopher's left fork, and no philosopher will put down that fork until s/he eats,
and
B) no philosopher can eat until s/he acquires the fork to his/her own right, which has already been picked up
by the philosopher to his/her right, as described above.
(For comparison, the reader-entry and reader-exit code of the readers–writers problem uses the same
semaphore pattern:)
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
…
reading is performed
…
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);
The situation of the dining philosophers.
1. Shared data: semaphore chopstick[5]; initially all values are 1.
2. Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    …
    eat
    …
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    …
    think
    …
} while (1);
8. Explain the methods used to prevent deadlocks. (8)
Restrain the ways requests can be made.
1. Mutual Exclusion – not required for sharable resources; must hold for non-sharable resources.
2. Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other
resources.
Require process to request and be allocated all its resources before it begins
execution, or allow process to request resources only when the process has none.
Low resource utilization; starvation possible.
3. No Preemption – If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held are released. Preempted resources are
added to the list of resources for which the process is waiting. Process will be restarted only when it can regain
its old resources, as well as the new ones that it is requesting.
4. Circular Wait – impose a total ordering of all resource types, and require that each process requests
resources in an increasing order of enumeration.
SNS COLLEGE OF TECHNOLOGY
(an autonomous institution)
COIMBATORE - 35
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING (UG & PG)
Academic Year (2014-2015)
Second Year Computer Science and Engineering-Fourth Semester
Subject Code & Name : CS203 & OPERATING SYSTEMS
Prepared by : Mrs.T.Maragatham AP/CSE, Mrs.M.Kavitha, AP/CSE, Mrs.S.Vidya AP/CSE
1. What do you mean by the swapping technique? (AUC JUNE 2009)
A process needs to be in memory to be executed. However, a process can be swapped temporarily out
of memory to a backing store and then brought back into memory for continued execution. This is
called swapping.
2. Why are page sizes always powers of 2? (AUC NOV/DEC 2008)
Page size is always a power of 2 because addresses are binary and are split into a page
(or page frame) number and an offset simply by taking the high-order and low-order bits.
3. Define TLB. (AUC APR/MAY 2011)
The translation look-aside buffer (TLB) is a special, small, fast-lookup hardware cache of
page-table entries, used to speed up address translation in paging.
4. Define dynamic loading.
To obtain better memory-space utilization, dynamic loading is used. With dynamic loading, a routine
is not loaded until it is called. All routines are kept on disk in a relocatable load format. The main
program is loaded into memory and executed. When a routine needs to call another routine, the calling
routine first checks whether the other routine has been loaded. If not, the relocatable linking loader
is called to load the desired routine into memory.
5. Define dynamic linking.
Dynamic linking is similar to dynamic loading, but rather than loading being postponed until execution
time, linking is postponed. This feature is usually used with system libraries, such as language
subroutine libraries. A stub is included in the image for each library-routine reference. The stub is a
small piece of code that indicates how to locate the appropriate memory-resident library routine, or
how to load the library if the routine is not already present.
6. What are overlays?
To enable a process to be larger than the amount of memory allocated to it, overlays are used.
The idea of overlays is to keep in memory only those instructions and data that are needed at a given
time. When other instructions are needed, they are loaded into space occupied previously by
instructions that are no longer needed.
7. Define logical address and physical address.
An address generated by the CPU is referred to as a logical address. An address seen by the
memory unit, that is, the one loaded into the memory-address register of the memory, is
commonly referred to as a physical address.
8. How do you limit the effect of thrashing?
(AUC ARR/MAY2011)
To limit the effect of thrashing we can use a local replacement algorithm. With local replacement,
if a process starts thrashing, it cannot steal frames from another process and cause the latter to
thrash as well. The problem is not entirely solved, though: the effective access time will increase
even for a process that is not thrashing.
9. What is address binding? (AUC NOV 2010)
Address binding is the mapping of a program's instructions and data to memory addresses. Logical and
physical addresses are the same in compile-time and load-time address-binding schemes; logical
(virtual) and physical addresses differ in the execution-time address-binding scheme.
10. What do you mean by a page fault? (AUC NOV '10, MAY '12)
A page fault occurs when, during address translation, the valid–invalid bit in the page-table
entry is 0, i.e. the referenced page is not currently in memory.
11. What is the advantage of demand paging? (AUC MAY/JUNE 2012)
A page is brought into memory only when it is needed. This gives:
- Large virtual memory.
- More efficient use of memory.
- Unconstrained multiprogramming: there is no limit on the degree of multiprogramming.
12. Define lazy swapper.
Rather than swapping the entire process into main memory, a lazy swapper is used. A lazy swapper
never swaps a page into memory unless that page will be needed.
13. Define effective access time.
Let p be the probability of a page fault (0 ≤ p ≤ 1). The value of p is expected to be close to 0; that is,
there will be only a few page faults. Then:
Effective access time = (1 – p) × ma + p × page-fault time, where ma is the memory-access time.
14. Differentiate a page from a segment. (AUC APR/MAY 2010)
In paging, memory is divided into fixed, equal-size blocks called pages, whereas memory segments
can vary in size (which is why each segment is associated with a length attribute). Segment sizes are
determined by the address space required by a process, while in paging the address space of a process
is divided into pages of equal size. Segmentation provides protection associated with the segments,
whereas paging does not provide such a mechanism.
15. What is meant by thrashing? Give an example. (AUC APR/MAY 2010, NOV 2007)
If a process is spending more time paging than executing, it is said to be thrashing. For
example, if two nodes compete for write access to a single data item, it may be transferred back and
forth at such a high rate that no real work can get done (a ping-pong effect).
16. What are the various page replacement algorithms used for page replacement?
1. FIFO page replacement
2. Optimal page replacement
3. LRU page replacement
4. LRU approximation page replacement
5. Counting-based page replacement
6. Page buffering algorithm
17. What is page frame?
(AUC NOV/DEC 2011)
The physical address space is likewise divided into page frames. The MMU is responsible for maintaining a map of pages to page frames. A page frame is the same size as a page and has the same alignment. This simplifies the mapping: the MMU must explicitly maintain only a map between pages and page frames.
18. What are the major problems to implement demand paging?
The two major problems in implementing demand paging are developing:
a. A frame-allocation algorithm
b. A page-replacement algorithm
19. What is a reference string?
An algorithm is evaluated by running it on a particular string of memory references and computing the number of page faults. The string of memory references is called a reference string.
20. What is internal fragmentation?
(AUC NOV/DEC 2011)
Internal fragmentation is unusable memory contained within an allocated region. The fixed-partition arrangement suffers from inefficient memory use: any process, no matter how small, occupies an entire partition, and the wasted space inside the partition is internal fragmentation.
21. What are the common strategies to select a free hole from a set of available holes?
The most common strategies are:
1. First fit
2. Best fit
3. Worst fit
22. What do you mean by best fit?
Best fit allocates the smallest hole that is big enough. The entire list has to be searched, unless it is sorted by size. This strategy produces the smallest leftover hole.
23. What do you mean by first fit?
First fit allocates the first hole that is big enough. Searching can either start at the beginning of the set
of holes or where the previous first-fit search ended. Searching can be stopped as soon as a free hole
that is big enough is found.
PART-B (16 Marks)
1. Explain the most commonly used techniques for structuring the page table (16)
(AUC APR '11, NOV '11)
Page Table Structure
1. Hierarchical paging
2. Hashed page tables
3. Inverted page tables
Hierarchical Page Tables
Break up the logical address space into multiple page tables. A simple technique is a two-level page
table.
Example: Two-Level Paging
A logical address (on a 32-bit machine with 4K page size) is divided into a page number consisting of 20 bits and a page offset consisting of 12 bits. Since the page table is paged, the page number is further divided into a 10-bit page number p1 and a 10-bit page offset p2, where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
One simple solution to this problem is to divide the page table into smaller pieces. One way is to use a two-level paging algorithm, in which the page table itself is also paged. Consider a 32-bit machine with a page
size 4 KB. A logical address is divided in to a page number consisting of 20 bits, and a page offset
consisting of 12 bits.
Two-level page table scheme.
Address-translation scheme for a two-level 32-bit paging architecture
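The 10/10/12-bit split described above can be demonstrated with a few shifts and masks (the sample address is arbitrary):

```python
# Split a 32-bit logical address into outer index (p1), inner index (p2)
# and page offset for the two-level scheme with 4 KB pages.

def split_address(addr):
    p1 = addr >> 22            # top 10 bits: index into the outer page table
    p2 = (addr >> 12) & 0x3FF  # next 10 bits: index within that page of the page table
    offset = addr & 0xFFF      # low 12 bits: offset inside the 4 KB page
    return p1, p2, offset

addr = 0x12345678              # arbitrary sample address
p1, p2, off = split_address(addr)

# Reassembling the three fields must reproduce the original address.
assert (p1 << 22) | (p2 << 12) | off == addr
print(p1, p2, off)             # 72 837 1656
```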
Hashed Page Table
Common in address spaces > 32 bits. The virtual page number is hashed into a page table. This page
table contains a chain of elements hashing to the same location. Virtual page numbers are compared in
this chain searching for a match. If a match is found, the corresponding physical frame is extracted.
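A minimal sketch of this chained lookup, with a Python list of buckets standing in for the hardware hash table (the bucket count and frame numbers are invented for illustration):

```python
# Hashed page table: each bucket holds a chain of
# (virtual_page_number, frame_number) pairs that hash to the same slot.

NBUCKETS = 16
table = [[] for _ in range(NBUCKETS)]

def insert(vpn, frame):
    table[vpn % NBUCKETS].append((vpn, frame))

def lookup(vpn):
    # Walk the chain for this bucket, comparing virtual page numbers.
    for v, frame in table[vpn % NBUCKETS]:
        if v == vpn:
            return frame
    return None                     # not mapped: a page fault

insert(0x12345, 7)
insert(0x12345 + NBUCKETS, 9)       # deliberately collides with the entry above
print(lookup(0x12345))              # 7
print(lookup(0x99999))              # None
```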
Hashed Page Table
Inverted page Table
There is one entry for each real page of memory. An entry consists of the virtual address of the page stored in that real memory location, together with information about the process that owns that page. This decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs. A hash table is used to limit the search to one, or at most a few, page-table entries.
Inverted page table
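A toy model of the inverted table, with one entry per physical frame keyed by (process id, virtual page number); the sizes and numbers are invented for illustration:

```python
# Inverted page table: one entry per physical frame, holding the
# (pid, virtual page number) that currently occupies that frame.

NFRAMES = 8
inverted = [None] * NFRAMES          # frame i -> (pid, vpn) or None

def map_page(pid, vpn, frame):
    inverted[frame] = (pid, vpn)

def translate(pid, vpn):
    # Linear search over frames: this is why real systems add a
    # hash table in front of the inverted table.
    for frame, entry in enumerate(inverted):
        if entry == (pid, vpn):
            return frame
    return None                      # page fault

map_page(pid=1, vpn=42, frame=3)
print(translate(1, 42))              # 3
print(translate(2, 42))              # None (a different process)
```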
Shared Pages
Shared code: one copy of read-only (reentrant) code is shared among processes (e.g., text editors, compilers, window systems). Shared code must appear in the same location in the logical address space of all processes. Private code and data: each process keeps a separate copy of the code and data. The pages for the private code and data can appear anywhere in the logical address space.
2. Explain FIFO, Optimal and LRU page replacement algorithms (16) (AUC APR 2011)
There are many different page replacement algorithms. We evaluate an algorithm by running it on a
particular string of memory reference and computing the number of page faults. The string of memory
references is called reference string. Reference strings are generated artificially or by tracing a given
system and recording the address of each memory reference.
In all our examples, the reference string is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.
Graph of Page Faults Versus The Number of Frames
First-In-First-Out (FIFO) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
3 frames (3 pages can be in memory at a time per process): 9 page faults
4 frames: 10 page faults
FIFO replacement suffers from Belady's anomaly: adding more frames can produce more page faults, contrary to the expectation that more frames mean fewer faults.
Figure . FIFO Page Replacement
FIFO Illustrating Belady's Anomaly
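The anomaly can be confirmed by simulating FIFO on the reference string above (a small sketch, not taken from the text):

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p in frames:
            continue                          # page already resident: a hit
        faults += 1
        if len(frames) == nframes:
            frames.discard(queue.popleft())   # evict the oldest page
        frames.add(p)
        queue.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 page faults
print(fifo_faults(refs, 4))   # 10 page faults: Belady's anomaly
```

With this string, going from 3 frames to 4 raises the fault count from 9 to 10.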
Optimal Algorithm
Replace the page that will not be used for the longest period of time.
Example with 4 frames, reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: 6 page faults.
Optimal Page Replacement
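Simulating the optimal policy on this reference string confirms 6 page faults with 4 frames (a sketch; a real system cannot know future references, which is why this algorithm serves only as a benchmark):

```python
def optimal_faults(refs, nframes):
    frames, faults = [], 0
    for i, p in enumerate(refs):
        if p in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(p)
        else:
            # Evict the resident page whose next use is farthest away;
            # pages never used again count as infinitely far.
            def next_use(q):
                rest = refs[i + 1:]
                return rest.index(q) if q in rest else float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = p
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 4))   # 6 page faults
print(optimal_faults(refs, 3))   # 7 page faults
```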
Least Recently Used (LRU) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Counter implementation
o Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter.
o When a page needs to be replaced, look at the counters to determine which page to change.
LRU Page Replacement
Stack implementation – keep a stack of page numbers in a doubly linked form:
a. Page referenced: move it to the top (requires 6 pointers to be changed).
b. No search is needed for replacement.
Use Of A Stack to Record The Most Recent Page References
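The stack behaviour can be mimicked with an ordered dictionary instead of a doubly linked list (a sketch):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    stack = OrderedDict()            # oldest entry first, most recent last
    faults = 0
    for p in refs:
        if p in stack:
            stack.move_to_end(p)     # "move it to the top" of the stack
        else:
            faults += 1
            if len(stack) == nframes:
                stack.popitem(last=False)   # evict the least recently used
            stack[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # 10 page faults
```

For comparison with the FIFO figures: LRU incurs 10 faults with 3 frames and 8 with 4, so it does not exhibit Belady's anomaly on this string.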
LRU Approximation Algorithms
Reference bit:
a. With each page associate a bit, initially 0.
b. When the page is referenced, the bit is set to 1.
c. Replace a page whose bit is 0 (if one exists).
Second chance
a. Needs a reference bit.
b. Clock replacement.
c. If the page to be replaced (in clock order) has reference bit = 1, then: set the reference bit to 0, leave the page in memory, and consider the next page (in clock order), subject to the same rules.
Second-Chance (clock) Page-Replacement Algorithm
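The clock scheme can be sketched as follows (in this sketch the reference bit is set both when a page is loaded and on every hit, which is one common convention):

```python
def second_chance_faults(refs, nframes):
    frames = []        # each entry is [page, reference_bit]
    hand = 0           # the clock hand
    faults = 0
    for p in refs:
        for entry in frames:
            if entry[0] == p:
                entry[1] = 1              # hit: give the page a second chance
                break
        else:
            faults += 1
            if len(frames) < nframes:
                frames.append([p, 1])
            else:
                # Sweep past pages whose bit is 1, clearing the bit as we go.
                while frames[hand][1] == 1:
                    frames[hand][1] = 0
                    hand = (hand + 1) % nframes
                frames[hand] = [p, 1]     # replace the victim
                hand = (hand + 1) % nframes
    return faults

print(second_chance_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # 9
```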
Counting algorithms: keep a counter of the number of references that have been made to each page.
LFU Algorithm: replaces page with smallest count.
MFU Algorithm: based on the argument that the page with the smallest count was probably just
brought in and has yet to be used.
3. Discuss segmentation in detail. Compare it with paging. (8) (AUC APR 2010)
Memory-management scheme that supports user view of memory.
A program is a collection of segments.
A segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays.
1. Segmentation Architecture
Segmentation is a memory-management scheme that supports this user view of memory. A logical address space is a collection of segments. Each segment has a name and a length. Addresses specify both the segment name and the offset within the segment; the user therefore specifies each address by two quantities: a segment name and an offset.
Logical address consists of a two tuple:
<segment-number,offset>
Segment table – maps two-dimensional user-defined addresses into one-dimensional physical addresses. Each table entry has:
base – contains the starting physical address where the segments reside in memory
limit – specifies the length of the segment
Segment-table base register (STBR) points to the segment table’s location in memory
Segment-table length register (STLR) indicates number of segments used by a program; segment
number s is legal if s < STLR
Segmentation Hardware
The hardware defines an implementation to map two-dimensional user-defined addresses into one-dimensional physical addresses. This mapping is effected by a segment table. Each entry of the segment table has a segment base and a segment limit. The segment base contains the starting physical address where the segment resides in memory, whereas the segment limit specifies the length of the segment.
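The base/limit check can be sketched as below; the segment-table values are invented for illustration:

```python
# Segment table: index is the segment number, each entry is (base, limit).
segment_table = [
    (1400, 1000),   # segment 0 occupies physical addresses 1400..2399
    (6300,  400),   # segment 1 occupies 6300..6699
    (4300, 1100),   # segment 2 occupies 4300..5399
]

def translate(segment, offset):
    if segment >= len(segment_table):    # segment number must be < STLR
        raise ValueError("trap: invalid segment number")
    base, limit = segment_table[segment]
    if offset >= limit:                  # offset checked against the limit
        raise ValueError("trap: offset beyond end of segment")
    return base + offset                 # physical address

print(translate(2, 53))    # 4353
```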
Segmentation Hardware
Segmentation With Paging
The IBM OS/2 32-bit version is an operating system running on top of the Intel 386 architecture. The 386 uses segmentation with paging for memory management. The maximum number of segments per process is 16 K, and each segment can be as large as 4 gigabytes. The page size is 4 KB.
Comparison between paging and segmentation
In order to compare these two virtual addressing schemes and to discover situations where each is appropriate, let us evaluate each scheme using the following criteria:
1. Memory utilization
2. Memory allocation
3. Sharing of code
Memory Utilization:
Paging achieves very good utilization of memory. One can arrange concurrent user programs in memory, each being represented by only those pages that it is currently using. The main memory available to user programs will normally be fully allocated and in active use.
Of course , paging entails the allocation of some integral number of page frames to a process; it is
unlikely that a process will actually fit exactly in an integral number of page frames and consequently
one has internal fragmentation. Though internal fragmentation is present in paged systems, the
wastage is not much. It will only be on the last pages of programs currently being multiprogrammed.
As a system runs, segments are loaded, used and then freed. Once freed, the spaces in memory that they occupied are reallocated. It would be unusual for an incoming segment to exactly fit the space released by a departing segment; consequently, one gets external fragmentation.
Memory allocation: Paging simplifies the task of memory allocation. With paging, the system has a pool of identical resource units, the page frames. Requests by processes for additional resources (page frames) can be satisfied by allocating any of the free page frames.
Problems arise only when requests for new page frames come much faster than the voluntary release of page frames (voluntary release normally happens only at job termination). Usually no page frame will be free, so the system has to preempt a page frame, i.e., swap out a page occupying a frame. The preempted frame may even be one allocated to the same process.
Allocation of space for segments is more difficult. Memory-allocation schemes that attempt to find the best fit / first fit / worst fit all work well so long as there are "holes" of sufficient size to take a new segment. But if external fragmentation has become too rampant, then even though there may be lots of unused memory, there might be no area of sufficient size to allocate to a new segment.
Then the system could "compact" memory, but that is tiresome: lots of copying, lots of updating of segment maps, and nasty restrictions if I/O is in progress for any of the segments that must be moved.
Alternatively, the system can try swapping segments out. This is even more difficult. The system could try to swap out several neighbouring segments, or maybe one segment adjacent to a couple of small "holes", so as to get a "hole" sufficient for the new segment. This, however, involves lots of messy address translations and calculations, and these will have to be repeated when a swapped-out segment is swapped back in.
Sharing of code: Segmentation was designed in part to allow sharing of code and/or read-only data among processes.
Segments can be made to specify the usage allowed on them: any execute-only or read-only segment is intrinsically sharable. The system can keep track of segments loaded (particularly easy if a segment identifier can be related to a file identifier, as is possible in many segmented systems) and can determine when one process is requesting a segment already loaded by some other process. Arrangements for sharing simply require that the segment maps of the various processes are kept consistent.
Sharing is difficult in paged systems. Pages do not correspond to logical divisions of a program. One
FORTRAN program on a paged system might on any page contain some marvelous mixture of code
and data. Each process must keep its private data separate.
7. Explain about contiguous memory allocation with neat diagram. (16) (AUC NOV/DEC 2011)
Main memory is usually divided into two partitions: the resident operating system, usually held in low memory with the interrupt vector, and user processes, held in high memory.
Single-partition allocation
A relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses. Each logical address must be less than the limit register.
Hardware Support for Relocation and Limit Registers
Contiguous Allocation
Multiple-partition allocation
o Hole – block of available memory; holes of various sizes are scattered throughout memory.
o When a process arrives, it is allocated memory from a hole large enough to accommodate it.
o The operating system maintains information about: allocated partitions, free partitions.
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes:
First-fit: Allocate the first hole that is big enough.
Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by
size. Produces the smallest leftover hole.
Worst-fit: Allocate the largest hole; must also search entire list. Produces the largest leftover hole.
First-fit and best-fit better than worst-fit in terms of speed and storage utilization.
Fragmentation
External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous.
Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size
difference is memory internal to a partition, but not being used.
Reduce external fragmentation by compaction
o Shuffle memory contents to place all free memory together in one large block.
o Compaction is possible only if relocation is dynamic, and is done at execution time.
10. Given memory partitions of 100 KB, 500 KB, 200 KB and 600 KB (in order), show with a neat sketch how each of the first-fit, best-fit and worst-fit algorithms would place processes of 412 KB, 317 KB, 112 KB and 326 KB (in order). Which algorithm is most efficient in memory allocation? (AUC NOV 2010)
First fit:
412 KB is put in the 500 KB partition (88 KB left over).
317 KB is put in the 600 KB partition (283 KB left over).
112 KB is put in the 200 KB partition, the first hole large enough (88 KB left over).
326 KB must wait: the largest remaining hole is 283 KB.
Best fit:
412 KB is put in the 500 KB partition.
317 KB is put in the 600 KB partition (the smallest hole that fits).
112 KB is put in the 200 KB partition.
326 KB must wait.
Worst fit:
412 KB is put in the 600 KB partition (188 KB left over).
317 KB is put in the 500 KB partition (183 KB left over).
112 KB is put in the 200 KB partition.
326 KB must wait.
For this particular input, all three algorithms place the same three processes and leave the 326 KB process waiting; in general, first fit and best fit are better than worst fit in terms of speed and storage utilization.
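The placements can be checked by simulating the three strategies on the partition and process sizes stated in the question (a sketch; each partition is treated as a hole that shrinks when allocated):

```python
def place(processes, holes, choose):
    """Return, for each process, the index of the original partition used."""
    holes = holes[:]                       # remaining free space per partition
    result = []
    for size in processes:
        fits = [i for i, h in enumerate(holes) if h >= size]
        if not fits:
            result.append(None)            # process must wait
        else:
            i = choose(fits, holes)
            holes[i] -= size
            result.append(i)
    return result

first_fit = lambda fits, holes: fits[0]
best_fit  = lambda fits, holes: min(fits, key=lambda i: holes[i])
worst_fit = lambda fits, holes: max(fits, key=lambda i: holes[i])

parts = [100, 500, 200, 600]               # KB, in order
procs = [412, 317, 112, 326]               # KB, in order

print(place(procs, parts, first_fit))      # [1, 3, 2, None]
print(place(procs, parts, best_fit))       # [1, 3, 2, None]
print(place(procs, parts, worst_fit))      # [3, 1, 2, None]
```

With these particular sizes the 326 KB process waits under every strategy, since the largest remaining hole (283 KB) is too small for it.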
11. Explain the concept of demand paging. How can demand paging be implemented with Virtual
memory (16) (AUC NOV 2010)
Demand paging is similar to a paging system with swapping. When we want to execute a process, we swap it into memory; but rather than swapping in the entire process, the pager guesses which pages will be used before the process is swapped out again and brings only those necessary pages into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed. Hardware support is required to distinguish between pages that are in memory and pages that are on the disk, using the valid–invalid bit scheme. Valid and invalid pages can be distinguished by checking this bit; marking a page invalid has no effect if the process never attempts to access it. While the process executes and accesses pages that are memory resident, execution proceeds normally.
Transfer of a paged memory to contiguous disk space
Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's failure to bring the desired page into memory. The page fault can be handled as follows.
Steps in handling a page fault
1. We check an internal table for this process to determine whether the reference was a valid or invalid
memory access.
2. If the reference was invalid, we terminate the process. If it was valid, but we have not yet brought in that page, we now page it in.
3. We find a free frame.
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and the page
table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the illegal address trap. The process can now access the page as though it had always been in memory.
Therefore, the operating system reads the desired page into memory and restarts the process as though
the page had always been in memory.
Page replacement is used to free frames that are no longer in use. If no frame is free, then
a page (possibly belonging to another process) is selected and paged out.
Advantages of Demand Paging:
1. Large virtual memory.
2. More efficient use of memory.
3. Unconstrained multiprogramming: there is no limit on the degree of multiprogramming.
Disadvantages of Demand Paging:
1. The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.
2. There is a lack of explicit constraints on a job's address-space size.
12. A page-replacement algorithm should minimize the number of page faults. We can achieve this minimization by distributing heavily used pages evenly over all of memory, rather than having them compete for a small number of page frames. We can associate with each page frame a counter of the number of pages that are associated with that frame. Then, to replace a page, we search for the page frame with the smallest counter.
a. Define a page-replacement algorithm using this basic idea.
b. Specifically address the problems of:
(1) what the initial value of the counters is,
(2) when counters are increased,
(3) when counters are decreased, and
(4) how the page to be replaced is selected. (8)
c. How many page faults occur for your algorithm for the following reference string, for four page
frames?
1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2. (4)
d. What is the minimum number of page faults for an optimal page-replacement strategy for the reference string in part c with four page frames? (4)
Solution:
a. Define a page-replacement algorithm addressing the problems of:
i. Initial value of the counters: 0.
ii. Counters are increased: whenever a new page is associated with that frame.
iii. Counters are decreased: whenever one of the pages associated with that frame is no longer required.
iv. How the page to be replaced is selected: find a frame with the smallest counter; use FIFO for breaking ties.
c. 14 page faults
d. 11 page faults
13. Which of the following programming techniques and structures are “good” for a demand-paged environment? Which are “not good”? Explain your answers.
a. Stack
b. Hashed symbol table
c. Sequential search
d. Binary search
e. Pure code
f. Vector operations
g. Indirection
Answer:
a. Stack—good.
b. Hashed symbol table—not good.
c. Sequential search—good.
d. Binary search—not good.
e. Pure code—good.
f. Vector operations—good.
g. Indirection—not good.
Unit 4
FILE SYSTEMS
1. What is File system?
A file is a named collection of related information that is recorded on secondary storage. A file contains either programs or data. A file has a certain "structure" based on its type.
File attributes: name, identifier, type, size, location, protection, time, date
File operations: creation, reading, writing, repositioning, deleting, truncating, appending, renaming
File types: executable, object, library, source code, etc.
2. Write the attributes of the file.
(AUC APR/MAY 2011)
A file has certain other attributes, which vary from one operating system to another, but typically consist of these: name, identifier, type, location, size, protection, time, date and user identification.
3. What are the various layers of a file system?
The file system is composed of many different levels. Each level in the design uses the feature of
the lower levels to create new features for use by higher levels.
(i) Application programs
(ii) Logical file system
(iii) File-organization module
(iv) Basic file system
(v) I/O control
(vi) Devices
4. What is the content of a typical file control block? (AUC APR/MAY 2011)
A typical file control block (FCB) contains: file permissions; file dates (create, access, write); file owner, group, and ACL; file size; and pointers to (or the location of) the file data blocks.
5. What are the various file operations?
The six basic file operations are
i. Creating a file
ii. Writing a file
iii. Reading a file
iv. Repositioning within a file
v. Deleting a file
vi. Truncating a file
7. What are the advantages and disadvantages of Contiguous allocation?
The advantages are
a. Supports direct access
b. Supports sequential access
c. Number of disk seeks is minimal
The disadvantages are
a. Suffers from external fragmentation
b. Suffers from internal fragmentation
c. Difficulty in finding space for a new file
8. Why is protection needed in file sharing systems?. (AUC NOV/DEC 2008)
Files are the main information storage mechanism in most computer systems, so file protection is needed. Access to files can be controlled separately for each type of access: read, write, execute, append, delete, list directory, and so on. File protection can be provided by access lists, passwords, or other techniques.
9. List any four types of file.
Executable files, object files, source-code files and text files.
9. What are the advantages and disadvantages of linked allocation?
The advantages are
a. No external fragmentation
b. Size of the file does not need to be declared
The disadvantages are
a. Used only for sequential access of files.
b. Direct access is not supported
c. Memory space required for the pointers.
d. Reliability is compromised if the pointers are lost or damaged
10. Mention the objectives of a file management system. (AUC APR/MAY 2010)
• To meet the data management needs and requirements of the user, which include storage of data and the ability to perform the aforementioned operations.
• To guarantee, to the extent possible, that the data in the file are valid.
• To optimize performance, both from the system point of view in terms of overall throughput and from the user's point of view in terms of response time.
• To provide I/O support for a variety of storage device types.
• To minimize or eliminate the potential for lost or destroyed data.
• To provide a standardized set of I/O interface routines to user processes.
17. Differentiate the various file access methods.(AUC APR/MAY 2010)
The different types of accessing a file are:
Sequential access: Information in the file is accessed sequentially
Direct access: Information in the file can be accessed without any particular order.
Other access methods: Creating index for the file, indexed sequential access method (ISAM)
etc.
18. A direct- or sequential-access file has fixed-size S-byte records. At what logical location does the first byte of record N start? (AUC NOV/DEC 2011)
Solution: Record N will start at byte ((N - 1) * S) + 1.
13. Give an example of a situation where variable-size records would be useful. (AUC NOV 2011)
Variable-length records are useful when the record size is not the same for every record, for example a file in which each record is a line of text of varying length.
14. What is garbage collection? (AUC MAY/JUNE 2012)
Garbage collection is the process of reclaiming memory (or disk) space that has been allocated but is no longer in use.
15. How can the index blocks be implemented in the indexed allocation scheme?
The index block can be implemented as follows
a. Linked scheme
b. Multilevel scheme
c. Combined scheme
16. What are the structures used in file-system implementation?
Several on-disk and in-memory structures are used to implement a file system.
a. On-disk structures include:
· Boot control block
· Partition control block
· Directory structure used to organize the files
· File control block (FCB)
b. In-memory structures include:
1. In-memory partition table
2. In-memory directory structure
3. System-wide open-file table
4. Per-process open-file table
What are the functions of virtual file system (VFS)?
a. It separates file-system-generic operations from their implementation defining a clean VFS
interface. It allows transparent access to different types of file systems mounted locally.
b. VFS is based on a file-representation structure, called a vnode, which contains a numerical designator for a network-wide unique file. The kernel maintains one vnode structure for each active file or directory.
Define seek time and latency time.
The time taken by the head to move to the appropriate cylinder or track is called seek time. Once
the head is at right track, it must wait until the desired block rotates under the read-write head.
This delay is latency time.
19. What is Directory?
The device directory, or simply the directory, records information such as name, location, size, and type for all files on that particular partition. The directory can be viewed as a symbol table that translates file names into their directory entries.
20. What are the operations that can be performed on a directory?
The operations that can be performed on a directory are
• Search for a file
• Create a file
• Delete a file
• Rename a file
• List directory
• Traverse the file system
21. What are the most common schemes for defining the logical structure of a directory?
The most common schemes for defining the logical structure of a directory
• Single-Level Directory
• Two-level Directory
• Tree-Structured Directories
• Acyclic-Graph Directories
• General Graph Directory
22. What is the information associated with an open file?
Several pieces of information are associated with an open file, which may be:
• File pointer
• File open count
• Disk location of the file
• Access rights
23. Define UFD and MFD.
In the two-level directory structure, each user has her own user file directory (UFD). Each UFD
has a similar structure, but lists only the files of a single user. When a job starts the system's
master file directory (MFD) is searched. The MFD is indexed by the user name or account
number, and each entry points to the UFD for that user.
25. What is a path name?
A pathname is the path from the root through all subdirectories to a specified file. In a two-level
directory structure a user name and a file name define a path name.
23. What is cache? How it improves the performance of the system?
A cache is a region of fast memory that holds copies of data. Access to the cached copy is more efficient than access to the original. Caching and buffering are distinct functions, but sometimes a region of memory can be used for both purposes.
24. What is full and incremental backup?
A full backup copies all the files on a disk; an incremental backup copies only those files that have changed since the last full or incremental backup.
25. What are the allocation methods of a disk space?
Three major methods of allocating disk space which are widely in use are
a. Contiguous allocation
b. Linked allocation
c. Indexed allocation
29. What are the two types of system directories? (AUC MAY/JUNE 2012)
The master file directory (MFD), indexed by user name or account number, and the user file directory (UFD), which lists the files of a single user.
PART-B ( 16 Marks )
1. Distinguish between demand paging and anticipatory paging (8) (AUC NOV’08)
Demand paging: In computer operating systems, demand paging is an application of virtual memory.
In a system that uses demand paging, the operating system copies a disk page into physical memory
only if an attempt is made to access it (i.e., if a page fault occurs). It follows that a process begins
execution with none of its pages in physical memory, and many page faults will occur until most of a
process’s working set of pages is located in physical memory. This is an example of lazy loading
techniques.
Advantages
Demand paging, as opposed to loading all pages immediately:
• Only loads pages that are demanded by the executing process.
• As there is more space in main memory, more processes can be loaded, reducing the context-switching time, which utilizes large amounts of resources.
• Less loading latency occurs at program startup, as less information is accessed from secondary storage and less information is brought into main memory.
• Does not need extra hardware support beyond what paging needs, since a protection fault can be used to get a page fault.
Disadvantages
• Individual programs face extra latency when they access a page for the first time. So demand paging may have lower performance than anticipatory paging algorithms such as prepaging.
• Programs running on low-cost, low-power embedded systems may not have a memory management unit that supports page replacement.
• Memory management with page replacement algorithms becomes slightly more complex.
• Possible security risks, including vulnerability to timing attacks.
Anticipatory paging
Some systems use demand paging — waiting until a page is actually requested before loading it into
RAM.
Other systems attempt to reduce latency by guessing which pages not in RAM are likely to be
needed soon, and pre-loading such pages into RAM, before that page is requested. (This is often in
combination with pre-cleaning, which guesses which pages currently in RAM are not likely to be
needed soon, and pre-writing them out to storage).
When a page fault occurs, "anticipatory paging" systems will not only bring in the referenced page, but also the next few consecutive pages (analogous to a prefetch input queue in a CPU).
The swap prefetch mechanism goes even further in loading pages (even if they are not consecutive)
that are likely to be needed soon.
2. Comment on inverted page tables and their use in paging and segmentation (8)(AUC
NOV’08)
i) Hierarchical paging: Recent computer systems support a large logical address space (2^32 to 2^64). In such systems the page table becomes large, and it is difficult to allocate contiguous main memory for the page table. To solve this problem, a two-level page table scheme is used: the page table is divided into a number of smaller pieces. Fig. 1 shows the two-level page table scheme, in which the page table itself is also paged.
ii) Hashed page table: A hashed page table handles address spaces larger than 32 bits. The virtual
page number is used as the hash value. A linked list is used in the hash table: each entry contains
the linked list of elements that hash to the same location, and each element contains the
following fields:
1. Virtual page number
2. Mapped page frame value
3. Pointer to the next element in the linked list. Fig. 2 shows a hashed page table.
Working:
1. The virtual page number is taken from the virtual address.
2. The virtual page number is hashed into the hash table.
3. The virtual page number is compared with the first element of the linked list.
4. If the values match, that value (i.e., the page frame) is used to calculate the physical
address.
5. If the values do not match, the entire linked list is searched for a match.
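The lookup steps above can be sketched with a small chained hash table; the bucket count, virtual page numbers, and frame numbers below are made-up values for illustration.

```python
NUM_BUCKETS = 8  # small table purely for illustration

# each bucket holds a chain of (virtual_page_number, frame) pairs
hash_table = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn, frame):
    hash_table[vpn % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    # hash the VPN, then walk the chain comparing VPNs (steps 1-5 above)
    for entry_vpn, frame in hash_table[vpn % NUM_BUCKETS]:
        if entry_vpn == vpn:
            return frame
    return None  # no mapping found: page fault

insert(3, 7)
insert(11, 2)      # 11 % 8 == 3, so VPNs 3 and 11 share one chain
print(lookup(11))  # 2
print(lookup(19))  # None (not mapped)
```

The collision between VPNs 3 and 11 shows why each element carries the full virtual page number: the hash value alone does not identify the page.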
A clustered page table is the same as a hashed page table, the only difference being that each
entry in the hash table refers to several pages.
iii) Inverted page table: As address spaces have grown to 64 bits, the size of a traditional page
table becomes a problem. Even with two-level page tables, the tables themselves can become too
large. An inverted page table is used to solve this problem. The inverted page table has one entry
for each real page (frame) of memory: it is a physical page table instead of a logical one, which is
why it is often called an inverted page table. An inverted page table is very good at mapping from
a physical page to a logical page number, but not very good at mapping from a virtual page
number to a physical page number. The figure shows the inverted page table.
With an inverted page table there are no other hardware registers dedicated to memory
mapping, so the TLB can be quite large and misses are rare. Most address translations are
handled by the TLB; when there is a miss in the translation look-aside buffer, the operating
system is notified and the TLB miss handler is invoked. Hashing is used to speed up
the search.
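A minimal sketch of the inverted-table lookup follows; the process IDs and frame contents are hypothetical, and a real system would front this linear search with the hash table or TLB mentioned above.

```python
# inverted page table: one entry per physical frame, each holding
# the (process id, virtual page number) currently stored in that frame
inverted = [("P1", 0), ("P2", 5), ("P1", 3), ("P2", 1)]

def translate(pid, vpn):
    # linear search over frames: cheap frame->page, slow page->frame,
    # which is the weakness the text notes for inverted tables
    for frame, owner in enumerate(inverted):
        if owner == (pid, vpn):
            return frame
    return None  # not resident: page fault

print(translate("P1", 3))  # frame 2
print(translate("P2", 9))  # None
```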
3. Explain the two-level directory and tree-structured directory. (16) (AUC APR’11)
Explain the various schemes used for defining the logical structure of a directory. (8)
(AUC MAY/JUNE 2012)
The directory can be viewed as a symbol table that translates file names into their directory entries.
When considering a particular directory structure, we need to keep in mind the operations that are to
be performed on a directory:
Search for a file: We need to be able to search a directory structure to find the entry for a particular
file. Since files have symbolic names and similar names may indicate a relationship between files, we
may want to be able to find all files whose names match a particular pattern.
Create a file: New files need to be created and added to the directory.
Delete a file: When a file is no longer needed, we want to remove it from the directory.
List a directory: We need to be able to list the files in a directory, and the contents of the directory
entry for each file in the list.
Rename a file: Because the name of a file represents its contents to its users, the name must be
changeable when the contents or use of the file changes. Renaming a file may also allow its position
within the directory structure to be changed.
Traverse the file system: We may wish to access every directory, and every file within a directory
structure.
Single-Level Directory
The simplest directory structure is the single-level directory. All files are contained in the same
directory, which is easy to support and understand. A single-level directory has significant limitations,
however, when the number of files increases or when the system has more than one user. Since all
files are in the same directory, they must have unique names. If two users call their data file test, then
the unique-name rule is violated.
Two-Level Directory
A single-level directory often leads to confusion of file names between different users. The standard
solution is to create a separate directory for each user. In the two-level directory structure, each user
has her own user file directory (UFD). Each UFD has a similar structure, but lists only the files of a
single user. When a user job starts or a user logs in, the system's master file directory (MFD) is
searched. The MFD is indexed by user name or account number, and each entry points to the UFD
for that user.
Although the two-level directory structure solves the name-collision problem, it still has
disadvantages. This structure effectively isolates one user from another. This isolation is an
advantage when the users are completely independent.
Figure: Two-Level Directory
Tree-Structured Directories
When a user logs in, the operating system searches the accounting file (or some other predefined
location) to find an entry for this user (for accounting purposes). In the accounting file is a pointer to
(or the name of) the user's initial directory. This pointer is copied to a local variable for this user that
specifies the user's initial current directory. Path names can be of two types: absolute path names or
relative path names. An absolute path name begins at the root and follows a path down to the
specified file, giving the directory names on the path. A relative path name defines a path from the
current directory.
Acyclic-Graph Directories:
An acyclic-graph directory has shared subdirectories and files. The same file may have two different
names, which is called aliasing. If dict deletes list, a dangling pointer results.
One solution is to allow only links to files, not to subdirectories.
Garbage collection.
Every time a new link is added, use a cycle-detection algorithm to determine whether it is OK.
Solutions:
Backpointers, so we can delete all pointers (variable-size records are a problem).
Backpointers using a daisy-chain organization.
Entry-hold-count solution.
Figure: Acyclic-Graph Directories
General Graph Directory
Figure Graph Directory
4. Explain various file allocation methods in detail. (8) (AUC NOV 2010)
Contiguous Allocation
An allocation method refers to how disk blocks are allocated for files:
Contiguous allocation
Linked allocation
Indexed allocation
Each file occupies a set of contiguous blocks on the disk. Simple – only the starting location
(block #) and length (number of blocks) are required. It supports random access. It is
wasteful of space (dynamic storage-allocation problem) and files cannot grow.
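The contiguous mapping can be sketched as follows, assuming a 512-word block size as in the later examples; the starting block and logical address used here are made-up values.

```python
BLOCK_SIZE = 512

def contiguous_map(logical_addr, start_block):
    """Map a logical address to (physical block, displacement).

    Q selects the block relative to the file's start; R is the offset in it.
    """
    q, r = divmod(logical_addr, BLOCK_SIZE)
    return start_block + q, r

# hypothetical file starting at block 19; address 1300 = 2*512 + 276
print(contiguous_map(1300, 19))  # (21, 276)
```

Because the mapping is pure arithmetic, contiguous allocation supports direct access without any extra disk reads.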
Contiguous Allocation of Disk Space
Figure Contiguous Allocation
Extent-Based Systems
Many newer file systems use a modified contiguous allocation scheme.
Extent-based file systems allocate disk blocks in extents.
An extent is a contiguous set of blocks on the disk. Extents are allocated for file allocation; a
file consists of one or more extents.
Linked Allocation
Each file is a linked list of disk blocks: blocks may be scattered anywhere on the disk.
Simple – need only starting address
Free-space management system – no waste of space
No random access
Mapping: let LA be the logical address, with Q = LA / 511 and R = LA mod 511 (one word of each
512-word block holds the link pointer). The block to be accessed is the Qth block in the linked
chain of blocks representing the file, and the displacement into that block = R + 1.
The file-allocation table (FAT) is a variation of linked disk-space allocation used
by MS-DOS and OS/2.
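A sketch of the linked-allocation mapping above: Q and R come from dividing the logical address by 511, since one word of each 512-word block is assumed to hold the link pointer. The chain contents below are made-up block numbers.

```python
USABLE = 511  # a 512-word block with the first word used as the link pointer

def linked_map(logical_addr, chain):
    """chain is the ordered list of disk blocks holding the file."""
    q, r = divmod(logical_addr, USABLE)
    # follow q links from the head, then skip the pointer word (hence r + 1)
    return chain[q], r + 1

chain = [9, 16, 1, 10]          # hypothetical scattered blocks of one file
# address 1100 = 2*511 + 78 -> third block of the chain, displacement 79
print(linked_map(1100, chain))  # (1, 79)
```

Note that reaching `chain[q]` on a real disk requires reading the q preceding blocks to follow their pointers, which is why the text calls linked allocation inefficient for direct access.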
Figure Linked Allocation
File-Allocation Table
Figure. File-Allocation Table (FAT)
Indexed Allocation
Brings all the pointers together into the index block.
Logical view.
Need an index table.
Random access: dynamic access without external fragmentation, but with the overhead of the
index block. Mapping from logical to physical in a file of maximum size 256K words and
block size 512 words: we need only 1 block for the index table.
Q = displacement into the index table and R = displacement into the block
(Q = LA / 512 and R = LA mod 512)
Mapping from logical to physical in a file of unbounded length (block size of 512 words):
Linked scheme – link blocks of the index table (no limit on size).
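The indexed mapping can be sketched the same way: Q indexes the index block and R is the offset within the data block. The index-block contents below are made-up values.

```python
BLOCK_SIZE = 512

def indexed_map(logical_addr, index_block):
    """index_block: list of data-block numbers, itself stored in one disk block."""
    q, r = divmod(logical_addr, BLOCK_SIZE)
    return index_block[q], r

index = [25, 3, 14, 30]          # hypothetical index-block contents
# address 1537 = 3*512 + 1 -> fourth pointer in the index, offset 1
print(indexed_map(1537, index))  # (30, 1)
```

Unlike linked allocation, any block is reachable with at most one extra read (the index block), which is the direct-access advantage noted above.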
Combined Scheme: UNIX (4K bytes per block)
Figure UNIX – File Allocation
5. Explain in detail the free space management with neat diagram. (16) (AUC NOV 2010)
Block number calculation: (number of bits per word) * (number of 0-value words) +
offset of first 1 bit. The bit map requires extra space. Example: block size = 2^12 bytes,
disk size = 2^30 bytes (1 gigabyte),
n = 2^30 / 2^12 = 2^18 bits (or 32K bytes)
Easy to get contiguous files
Linked list (free list): cannot get contiguous space easily, but no waste of space
Grouping
Counting
Need to protect:
o The pointer to the free list
o The bit map: it must be kept on disk, and the copy in memory and the copy on disk may
differ. We cannot allow a situation where bit[i] = 1 in memory and bit[i] = 0 on disk for
block[i].
Solution:
Set bit[i] = 1 on disk, then allocate block[i] and set bit[i] = 1 in memory.
Linked Free Space List on Disk
Since there is only a limited amount of disk space, it is necessary to reuse the space from
deleted files for new files, if possible.
Bit Vector
Free-space list is implemented as a bit map or bit vector. Each block is represented by 1 bit. If
the block is free, the bit is 1; if the block is allocated, the bit is 0.
For example consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, and 27
are free, and the rest of the blocks are allocated. The free-space bit map would be
001111001111110001100000011100000 …..
The main advantage of this approach is that it is relatively simple and efficient to find the first
free block or n consecutive free blocks on the disk.
The calculation of the block number is
(number of bits per word) x (number of 0-value words) + offset of first 1 bit
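The bit-vector scheme and the block-number formula can be sketched with the free-block list from the example above; an 8-bit word size is assumed purely for illustration.

```python
free = {2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27}
n_blocks = 32

# bit i is 1 when block i is free, 0 when allocated (as defined above)
bitmap = [1 if i in free else 0 for i in range(n_blocks)]
print("".join(map(str, bitmap)))  # 00111100111111000110000001110000

WORD = 8  # bits per word; unrealistically small, just for illustration
words = [bitmap[i:i + WORD] for i in range(0, n_blocks, WORD)]

# count the leading all-zero words, then apply the formula:
# (bits per word) * (number of 0-value words) + offset of first 1 bit
n_zero_words = next(i for i, w in enumerate(words) if any(w))
first_free = WORD * n_zero_words + words[n_zero_words].index(1)
print(first_free)  # 2
```

Here no leading word is all zero, so the formula reduces to the offset of the first 1 bit, giving block 2 as the first free block.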
Linked List
Another approach is to link together all the free disk blocks, keeping a pointer to the first free
block in a special location on the disk and caching it in memory. This first block contains a
pointer to the next free disk block, and so on. Block 2 would contain a pointer to block 3, which
would point to block 4, which would point to block 5, which would point to block 8, and so on.
Usually, the operating system simply needs a free block so that it can allocate that block to a
file, so the first block in the free list is used.
Grouping
A modification of the free-list approach is to store the addresses of n free blocks in the first free
block. The first n-1 of these blocks are actually free. The importance of this implementation is
that the addresses of a large number of free blocks can be found quickly, unlike in the standard
linked-list approach.
Counting
Several contiguous blocks may be allocated or freed simultaneously, particularly when
space is allocated with the contiguous allocation algorithm or through clustering. A list of n
free disk addresses, we can keep the address of the first free block and the number n of free
contiguous blocks that follow the first block.
Each entry in the free-space list then consists of a disk address and a count. Although each
entry requires more space than would a simple disk address, the overall list will be shorter, as
long as count is generally greater than 1.
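The counting scheme above can be sketched by collapsing the earlier example's free-block list into (address, count) pairs.

```python
def to_counted(free_blocks):
    """Collapse a sorted free-block list into (start, count) runs."""
    runs = []
    for b in sorted(free_blocks):
        if runs and b == runs[-1][0] + runs[-1][1]:
            start, count = runs[-1]
            runs[-1] = (start, count + 1)   # block extends the current run
        else:
            runs.append((b, 1))             # block starts a new run
    return runs

free = [2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27]
print(to_counted(free))
# [(2, 4), (8, 6), (17, 2), (25, 3)] -- 4 entries instead of 15 addresses
```

Four (address, count) entries replace fifteen individual addresses, illustrating why the list is shorter whenever the counts are generally greater than 1.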
6.Explain linked file allocation method. (6)
(AUC APR’10, APR ‘11)
Linked allocation solves all problems of contiguous allocation. With linked allocation, each
file is a linked list of disk blocks; the disk blocks may be scattered anywhere on the disk. The
directory contains a pointer to the first block of the file. This pointer is initialized to nil (the
end-of-list pointer value) to signify an empty file. The size field is also set to 0. A write to the
file causes a free block to be found via the free-space management system, and this new
block is then written to and linked to the end of the file. There is no external fragmentation
with linked allocation, and any free block on the free-space list can be used to satisfy a request.
Notice also that there is no need to declare the size of a file when that file is created. A
file can continue to grow as long as there are free blocks. Consequently, it is never necessary
to compact disk space.
The major problem is that it can be used effectively for only sequential access files. To
find the ith block of a file we must start at the beginning of that file, and follow the pointers until
we get to the ith block. Each access to a pointer requires a disk read and sometimes a disk
seek.
Consequently, it is inefficient to support a direct-access capability for linked-allocation
files. Another disadvantage of linked allocation is the space required for the pointers. If a
pointer requires 4 bytes out of a 512-byte block, then 0.78 percent of the disk is being used
for pointers, rather than for
information. The usual solution to this problem is to collect blocks into multiples, called clusters,
and to allocate the clusters rather than blocks.
For instance, the file system may define a cluster as 4 blocks and operate on the disk in only
cluster units. Pointers then use a much smaller percentage of the file's disk space. This method
allows the logical-to-physical block mapping to remain simple, but improves disk throughput
(fewer disk head seeks) and decreases the space needed for block allocation and free-list
management.
The cost of this approach is an increase in internal fragmentation. Yet another problem is
reliability. Since the files are linked together by pointers scattered all over the disk, consider
what would happen if a pointer were lost or damaged. Partial solutions are to use doubly
linked lists or to store the file name and relative block number in each block; however, these
schemes require even more overhead for each file.
An important variation on the linked allocation method is the use of a file-allocation table
(FAT). This simple but efficient method of disk-space allocation is used by the MS-DOS and
OS/2 operating systems. A section of disk at the beginning of each partition is set aside to
contain the table. The table has one entry for each disk block, and is indexed by block number.
The FAT is used much as is a linked list.
The directory entry contains the block number of the first block of the file. The table entry
indexed by that block number then contains the block number of the next block in the file. This
chain continues until the last block, which has a special end-of-file value -as the table entry.
Unused blocks are indicated by a 0 table value. Allocating a new block to a file is a simple
matter of finding the first 0-valued table entry, and replacing the previous end-of-file value with
the address of the new block. The 0 is then replaced with the end-of-file value. An illustrative
example is the FAT structure for a file consisting of disk blocks 217, 618, and 339.
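The FAT chain for the example file (blocks 217, 618, and 339) can be sketched as a table mapping each block to its successor; a Python dict and a -1 end-of-file marker stand in for the on-disk table, which in a real FAT holds one entry per disk block and uses reserved values for free blocks and end-of-chain.

```python
EOF = -1   # stand-in for the special end-of-file table value

# a file occupying blocks 217 -> 618 -> 339, as in the example above
fat = {217: 618, 618: 339, 339: EOF}

def file_blocks(first_block):
    """Follow the FAT chain from the directory entry's first block."""
    blocks, b = [], first_block
    while b != EOF:
        blocks.append(b)
        b = fat[b]
    return blocks

print(file_blocks(217))  # [217, 618, 339]
```

Because the whole table sits in one place (and is typically cached), following the chain needs no scattered pointer reads, which is the improvement over plain linked allocation.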
7. Explain the advantages and shortcomings of LRU page replacement.(16) (AUC
NOV’08)
Pages that have been heavily used in the last few instructions will probably be heavily used
again in the next few. Conversely, pages that have not been used for ages will probably remain
unused for a long time. This idea suggests a realizable algorithm: when a page fault occurs,
throw out the page that has been unused for the longest time. This strategy is called LRU (Least
Recently Used) paging. Although LRU is theoretically realizable, it is not cheap. To fully
implement LRU, it is necessary to maintain a linked list of all pages in memory, with the most
recently used page at the front and the least recently used page at the rear.
The difficulty is that the list must be updated on every memory reference. Finding a page in
the list, deleting it, and then moving it to the front is a very time-consuming operation, even in
hardware (assuming that such hardware could be built). However, there are other ways to
implement LRU with special hardware. Let us consider the simplest way first. This method
requires equipping the hardware with a 64-bit counter, C, that is automatically incremented after
each instruction.
Furthermore, each page table entry must also have a field large enough to contain the
counter. After each memory reference, the current value of C is stored in the page table entry for
the page just referenced. When a page fault occurs, the operating system examines all the
counters in the page table to find the lowest one. That page is the least recently used. Now let us
look at a second hardware LRU algorithm.
For a machine with n page frames, the LRU hardware can maintain a matrix of n × n bits, initially
all zero. Whenever page frame k is referenced, the hardware first sets all the bits of row k to 1,
then sets all the bits of column k to 0. At any instant, the row whose binary value is lowest is the
least recently used, the row whose value is next lowest is next least recently used, and so forth.
The workings of this algorithm are given in the figure for four page frames and page references in
the order 0, 1, 2, 3, 2, 1, 0, 3, 2, 3.
LRU using a matrix when pages are referenced in the order 0, 1, 2,3, 2, 1, 0, 3, 2, 3.
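The matrix algorithm can be sketched directly, using the reference string from the figure; the row with the lowest binary value identifies the least recently used frame.

```python
class MatrixLRU:
    """n x n bit-matrix LRU for n page frames (the hardware scheme above)."""

    def __init__(self, n):
        self.n = n
        self.m = [[0] * n for _ in range(n)]

    def reference(self, k):
        self.m[k] = [1] * self.n      # set all bits of row k to 1 ...
        for row in self.m:
            row[k] = 0                # ... then clear all bits of column k

    def row_value(self, k):
        # interpret row k as a binary number
        return int("".join(map(str, self.m[k])), 2)

    def lru_frame(self):
        # the row with the lowest binary value is least recently used
        return min(range(self.n), key=self.row_value)

lru = MatrixLRU(4)
for frame in [0, 1, 2, 3, 2, 1, 0, 3, 2, 3]:
    lru.reference(frame)
print(lru.lru_frame())  # 1: frame 1 was referenced longest ago
```

After the full reference string the row values order the frames 3, 2, 0, 1 from most to least recently used, matching the recency order of the references.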
8.(i) Explain the various attributes of a file
The file attributes are,
Name – only information kept in human-readable form.
Type – needed for systems that support different types.
Location – pointer to file location on device.
Size – current file size.
Protection – controls who can do reading, writing, executing.
Time, date, and user identification – data for protection, security, and usage
monitoring.
Information about files are kept in directory structure.
(ii)Consider a file currently consisting of 100 blocks. Assume that the file control
block (and the index block, in the case of indexed allocation) is already in memory.
Calculate how many disk I/O operations are required for contiguous, linked, and
indexed (single-level) allocation strategies, if, for one block, the following conditions
hold. In the contiguous allocation case, assume that there is no room to grow in the
beginning, but there is room to grow in the end. Assume that the block information to
be added is stored in memory.
(1) The block is added at the beginning.
(2) The block is added in the middle.
(3) The block is added at the end.
(4) The block is removed from the beginning.
(5) The block is removed from the middle.
(6) The block is removed from the end. (12)
(AUC MAY/JUNE 2012)
9. What is the role of the access matrix for protection? Explain.
The model of protection that we have been discussing can be viewed as an access
matrix, in which columns represent different system resources and rows represent
different protection domains. Entries within the matrix indicate what access that domain
has to that resource. 
Figure - Access matrix.
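A sketch of the access matrix as a sparse table follows; the domains, objects, and rights are illustrative values in the spirit of the figure, not taken from it.

```python
# access matrix as a sparse dict: (domain, object) -> set of rights;
# a missing entry means the domain has no access to that object
matrix = {
    ("D1", "F1"): {"read"},
    ("D1", "F3"): {"read"},
    ("D2", "printer"): {"print"},
    ("D3", "F2"): {"read", "execute"},
    ("D4", "F1"): {"read", "write"},
}

def allowed(domain, obj, right):
    return right in matrix.get((domain, obj), set())

print(allowed("D1", "F1", "read"))  # True
print(allowed("D2", "F1", "read"))  # False: empty entry -> no access
```

Keeping only non-empty entries mirrors the access-list and capability-list implementations discussed later, both of which store one dimension of this sparse matrix.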

Domain switching can be easily supported under this model, simply by providing "switch"
access to other domains: 
Figure - Access matrix of Figure 14.3 with domains as objects.
The ability to copy rights is denoted by an asterisk, indicating that processes in that
domain have the right to copy that access within the same column, i.e. for the same
object. There are two important variations:
o If the asterisk is removed from the original access right, then the right is
transferred, rather than being copied. This may be termed a transfer right as
opposed to a copy right.
o If only the right and not the asterisk is copied, then the access right is added to
the new domain, but it may not be propagated further. That is, the new domain
does not also receive the right to copy the access. This may be termed a
limited copy right, as shown in Figure 14.5 below:
Figure - Access matrix with copy rights.

The owner right adds the privilege of adding new rights or removing existing ones: 
Figure - Access matrix with owner rights.

Copy and owner rights only allow the modification of rights within a column. The
addition of control rights, which only apply to domain objects, allow a process
operating in one domain to affect the rights available in other domains. For example
in the table below, a process operating in domain D2 has the right to control any of
the rights in domain D4. 
Figure Modified access matrix of Figure
Implementation of Access Matrix
Global Table
The simplest approach is one big global table with < domain, object, rights > entries.
Unfortunately this table is very large (even if sparse) and so cannot be kept in memory
(without invoking virtual-memory techniques).
There is also no good way to specify groupings – if everyone has access to some
resource, then it still needs a separate entry for every domain.
Access Lists for Objects
Each column of the table can be kept as a list of the access rights for that particular
object, discarding blank entries.
For efficiency, a separate list of default access rights can also be kept, and checked first.
Capability Lists for Domains
In a similar fashion, each row of the table can be kept as a list of the capabilities
of that domain.
Capability lists are associated with each domain, but not directly accessible by
the domain or any user process.
Capability lists are themselves protected resources, distinguished from other
data in one of two ways:
o A tag, possibly hardware implemented, distinguishing this special type of
data (other types may be floats, pointers, booleans, etc.).
o The address space for a program may be split into multiple segments, at
least one of which is inaccessible by the program itself, and used by the
operating system for maintaining the process's access-right capability list.
A Lock-Key Mechanism
Each resource has a list of unique bit patterns, termed locks.
Each domain has its own list of unique bit patterns, termed keys.
Access is granted if one of the domain's keys fits one of the resource's locks.
Again, a process is not allowed to modify its own keys.
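The lock-key check can be sketched as set intersection; the bit patterns, domains, and resources below are made-up values.

```python
# locks: bit patterns attached to each resource
# keys:  bit patterns held by each domain (not modifiable by the process)
locks = {"F1": {0b1010, 0b0110}, "printer": {0b0001}}
keys = {"D1": {0b1010}, "D2": {0b0011}}

def access_granted(domain, resource):
    # granted when any of the domain's keys matches one of the locks
    return bool(keys[domain] & locks[resource])

print(access_granted("D1", "F1"))       # True: key 0b1010 fits a lock on F1
print(access_granted("D2", "printer"))  # False: no matching pattern
```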
Comparison
Each of the methods here has certain advantages or disadvantages, depending
on the particular situation and task at hand.
Many systems employ some combination of the listed methods.
10. Write a detailed note on various file access methods with neat sketch.(16)
(AUC APR’11)
1. Sequential Access: read next, write next, reset; no read after last write (rewrite).
2. Direct Access: read n, write n, position to n, read next, write next, rewrite n,
where n = relative block number.
Sequential-access File
Figure Sequential Access File
Simulation of Sequential Access on a Direct-access File
Figure Simulation of Access Files
Example of Index and Relative Files
Figure Index and Relative Files
A Typical File-system Organization
Figure File-System Organization
11. Write a note on
(i) Log-structured file system
Log-structured (or journaling) file systems record each update to the file system as a
transaction.
All transactions are written to a log. A transaction is considered committed once it is
written to the log; however, the file system may not yet be updated.
The transactions in the log are asynchronously written to the file system. When the file
system is modified, the transaction is removed from the log.
If the file system crashes, all remaining transactions in the log must still be performed.

(ii) Efficiency and usage of disk space
Efficiency and Performance
Efficiency is dependent on the disk-allocation and directory algorithms and the types of
data kept in the file's directory entry. Performance depends on:
disk cache – a separate section of main memory for frequently used blocks
free-behind and read-ahead – techniques to optimize sequential access
improving PC performance by dedicating a section of memory as a virtual disk
Various Disk-Caching Locations
Figure Disk-Cache
Page Cache
A page cache caches pages rather than disk blocks, using virtual memory techniques.
Memory-mapped I/O uses a page cache.
Routine I/O through the file system uses the buffer (disk) cache.
I/O Without a Unified Buffer Cache
Figure Buffer Cache without I/O
Unified Buffer Cache: A unified buffer cache uses the same page cache to cache both
memory-mapped pages and ordinary file system I/O.
I/O Using a Unified Buffer Cache
Figure Buffer Cache with I/O
(iii) File system mounting
1. A file system must be mounted before it can be accessed.
2. An unmounted file system (i.e. Fig. 11-11(b)) is mounted at a mount point.
(a) Existing
(b) Unmounted Partition
Figure Mount Point
File Sharing
Sharing of files on multi-user systems is desirable. Sharing may be done through a
protection scheme. On distributed systems, files may be shared across a network.
Network File System (NFS) is a common distributed file-sharing method.
4.1.5. Protection
1. The file owner/creator should be able to control:
2. the types of access
a. Read b. Write c. Execute d. Append e. Delete f. List
Mode of access:
These types of access are read, write, execute. The three classes of users are owner,
group, and universe. To give group access, ask the manager to create a group (unique
name), say G, and add some users to the group. For a particular file (say game) or
subdirectory, define an appropriate access
Sub Code:CS2411
Sub Name: Operating Systems
Dept: CSE
Sem/Year:VII/IV
UNIT-V
PART – A (2 Marks )
1. Define swap space.
(AUC NOV’08, NOV ‘10, APR’10)
The main goal for the design and implementation of swap space is to provide the
best throughput for the virtual memory system.
Swap space – virtual memory uses disk space as an extension of main memory. Swap
space can be carved out of the normal file system or, more commonly, it can be in a
separate disk partition.
2. Write the basic functions which are provided by the hardware clocks and timers.
(AUC APR/MAY2011)
Most computers have hardware clocks and timers that provide three basic functions:
1. Give the current time
2. Give the elapsed time
3. Set a timer to trigger operation X at time T
These functions are used heavily by the operating system, and also by time sensitive
applications. The hardware to measure elapsed time and to trigger operations is called a
programmable interval timer.
3. What is polling?
Polling – the host repeatedly reads the busy bit until that bit becomes clear.
The interaction between the host and the controller can be done using the handshaking
concept, in the following steps:
Determine the state of the device:
command-ready
busy
error
Busy-wait cycle to wait for I/O from the device
4. What is a storage-area network? (AUC APR/MAY2011)
It is a private network among the servers and storage units, separate from the LAN and
WAN that connects the servers to the clients.
5. What is rotational latency? (AUC NOV 2010)
Rotational latency is the additional time waiting for the disk to rotate the desired sector to
the disk head.
M.SUMATHI, AP/CSE, SNS COLLEGE OF TECHNOLOGY
6. What are the advantages of DMA?
DMA can be used with either polling or interrupt software. DMA is particularly useful on
devices like disks, where many bytes of information can be transferred in single I/O
operations.
When used in conjunction with an interrupt, the CPU is notified only after the entire block
of data has been transferred.
For each byte or word transferred, the DMA controller (rather than the CPU) provides the
memory address and all the bus signals that control the data transfer.
7. What are the responsibilities of DMA Controller?
The work of moving data between devices and main memory is performed by the CPU as
programmed I/O or is offloaded to a DMA controller.
8. What are the differences between blocking I/O and non-blocking I/O?
Blocking – the process is suspended until the I/O has completed.
Easy to use and understand.
Insufficient for some needs.
Non-blocking – the I/O call returns as much as is available.
Used for user interfaces and data copy (buffered I/O).
Implemented via multi-threading.
Returns quickly with a count of bytes read or written.
9. Define caching
A cache is a region of fast memory that holds copies of data. Access to the cached copy
is more efficient than access to the original. Caching and buffering are distinct functions,
but sometimes a region of memory can be used for both purposes
10. Define spooling.
(AUC NOV 2007)
A spool is a buffer that holds output for a device, such as printer, that cannot accept
interleaved data streams. When an application finishes printing, the spooling system queues
the corresponding spool file for output to the printer. The spooling system copies the queued
spool files to the printer one at a time.
11. What is the need for disk scheduling? (AUC APR/MAY2010)
The operating system is responsible for using hardware efficiently – for the disk drives,
this means having a fast access time and high disk bandwidth.
Access time has two major components:
a. Seek time is the time for the disk arm to move the heads to the cylinder containing the
desired sector.
b. Rotational latency is the additional time waiting for the disk to rotate the desired sector
to the disk head.
Minimizing seek time is the major goal of disk scheduling.
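The rotational-latency component of access time can be worked through numerically: on average the desired sector is half a revolution away. The 7200 RPM spindle speed and 9 ms average seek time below are assumed example figures, not values from the text.

```python
def avg_rotational_latency_ms(rpm):
    # on average the desired sector is half a revolution away
    revolutions_per_second = rpm / 60
    return 0.5 / revolutions_per_second * 1000

def avg_access_time_ms(seek_ms, rpm):
    # access time = seek time + rotational latency (transfer time ignored)
    return seek_ms + avg_rotational_latency_ms(rpm)

print(round(avg_rotational_latency_ms(7200), 2))  # 4.17 ms
print(round(avg_access_time_ms(9.0, 7200), 2))    # 13.17 ms
```

The arithmetic shows why scheduling targets seek time: for a typical drive the seek component dominates the fixed rotational delay.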
12. What is low-level formatting?
Before a disk can store data, it must be divided into sectors that the disk controller can read
and write. This process is called low-level formatting or physical formatting. Low-level
formatting fills the disk with a special data structure for each sector. The data structure for a
sector consists of a header, a data area, and a trailer.
13. What is the use of the boot block?
For a computer to start running when powered up or rebooted it needs to have an initial
program to run. This bootstrap program tends to be simple. It finds the operating system on
the disk, loads that kernel into memory, and jumps to an initial address to begin the operating
system execution. The full bootstrap program is stored in a partition called the boot blocks,
at a fixed location on the disk. A disk that has a boot partition is called a boot disk or system disk.
14. What is sector sparing?
Low-level formatting also sets aside spare sectors not visible to the operating system. The
controller can be told to replace each bad sector logically with one of the spare sectors. This
scheme is known as sector sparing or forwarding.
15. What is RAID? List out its advantages.
Redundant Array of Inexpensive Disks is a series of increasingly reliable and expensive ways of organizing multiple physical hard disks into groups that work as a single logical disk.
16. Writable CD-ROM media are available in both 650 MB and 700 MB versions. What is the principal disadvantage, other than cost, of the 700 MB version? (AUC NOV/DEC 2011)
The 700 MB disc records data more densely than the 650 MB disc, so it is less compatible with some older CD drives and more prone to read errors.
17. Which disk scheduling algorithm would be best to optimize the performance of a RAM disk? (AUC NOV/DEC 2011)
First Come, First Served (FCFS), because a RAM disk has no moving head: every access takes a uniform amount of time, so there is no seek time for the other algorithms to optimize.
18. What is mirroring?
It duplicates the data from one disk onto a second disk using a single disk controller.
19. Give some examples for tertiary storage. (AUC MAY/JUNE 2012)
1. Low cost is the defining characteristic of tertiary storage.
2. Generally, tertiary storage is built using removable media.
3. Common examples of removable media are floppy disks and CD-ROMs.
20. What is seek time? (AUC MAY/JUNE 2012)
The seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector.
21. What characteristics determine the disk access speed?
Access time has two major components:
a. Seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector.
b. Rotational latency is the additional time waiting for the disk to rotate the desired sector
to the disk head.
PART-B ( 16 Marks )
1. Explain in detail various disk scheduling algorithms with suitable example. (16) (AUC NOV 2010) (AUC NOV/DEC 2011)
The operating system is responsible for using hardware efficiently — for the disk drives,
this means having a fast access time and disk bandwidth.
Access time has two major components:
a. Seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector.
b. Rotational latency is the additional time spent waiting for the disk to rotate the desired sector to the disk head.
The goal is to minimize seek time; seek time is roughly proportional to seek distance.
Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer.
Several algorithms exist to schedule the servicing of disk I/O requests.
Different types of scheduling algorithms are as follows:
1. First Come, First Served (FCFS) scheduling
2. Shortest Seek Time First (SSTF) scheduling
3. SCAN scheduling
4. Circular SCAN (C-SCAN) scheduling

1. FCFS scheduling
The simplest form of scheduling is first-in-first-out (FIFO) scheduling, which processes
items from the queue in sequential order. This strategy has the advantage of being fair,
because every request is honored and the requests are honored in the order received.
With FIFO, if there are only a few processes that require access and if many of the
requests are to clustered file sectors, then we can hope for good performance.
Priority. With a system based on priority (PRI), the control of the scheduling is outside the control of the disk-management software.
Last In First Out. In transaction-processing systems, giving the device to the most recent user should result in little or no arm movement when moving through a sequential file. Taking advantage of this locality improves throughput and reduces queue length.
Illustration shows total head movement of 640 cylinders.
FCFS disk scheduling
2. Shortest Seek Time First (SSTF) algorithm
The SSTF policy selects the disk I/O request that requires the least movement of the disk arm from its current position. With the exception of FIFO, all of the policies described can leave some request unfulfilled until the entire queue is emptied; that is, there may always be new requests arriving that will be chosen before an existing request.
• The choice should provide better performance than the FCFS algorithm.
• Selects the request with the minimum seek time from the current head position.
• SSTF scheduling is a form of SJF scheduling; it may cause starvation of some requests.
• Illustration shows total head movement of 236 cylinders.
SSTF scheduling
Under heavy load, SSTF can prevent distant requests from ever being serviced. This phenomenon is known as starvation. SSTF scheduling is essentially a form of shortest-job-first scheduling. The SSTF scheduling algorithm is not very popular for two reasons:
• Starvation may occur.
• It incurs higher overhead.
3. SCAN scheduling algorithm
The SCAN algorithm has the head start at track 0 and move towards the highest-numbered track, servicing all requests for a track as it passes. The service direction is then reversed and the scan proceeds in the opposite direction, again picking up all requests in order.
The SCAN algorithm is guaranteed to service every request in one complete pass through the disk. SCAN behaves almost identically to the SSTF algorithm. The SCAN algorithm is sometimes called the elevator algorithm.
• The disk arm starts at one end of the disk, and moves toward the other end,
servicing requests until it gets to the other end of the disk, where the head movement is
reversed and servicing continues.
• Sometimes called the elevator algorithm.
• Illustration shows total head movement of 208 cylinders.
4. C-SCAN scheduling algorithm
The C-SCAN policy restricts scanning to one direction only. Thus, when the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again.
• Provides a more uniform wait time than SCAN.
• The head moves from one end of the disk to the other, servicing requests as it goes. When it reaches the other end, however, it immediately returns to the beginning of the disk, without servicing any requests on the return trip.
• Treats the cylinders as a circular list that wraps around from the last cylinder to the first one.
C-SCAN scheduling
5. C-LOOK scheduling algorithm
Start the head moving in one direction. Satisfy the request for the closest track in that direction; when there are no more requests in the direction the head is traveling, reverse direction and repeat. Unlike C-SCAN, the head does not travel all the way to the innermost and outermost tracks on each circuit.
• A version of C-SCAN.
• The arm only goes as far as the last request in each direction, then reverses direction immediately, without first going all the way to the end of the disk.
C-LOOK Scheduling
Selecting a Disk-Scheduling Algorithm
• SSTF is common and has a natural appeal.
• SCAN and C-SCAN perform better for systems that place a heavy load on the disk.
• Performance depends on the number and types of requests.
• Requests for disk service can be influenced by the file-allocation method.
• The disk-scheduling algorithm should be written as a separate module of the operating system, allowing it to be replaced with a different algorithm if necessary.
• Either SSTF or LOOK is a reasonable choice for the default algorithm.
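For comparison, the head movement of these policies can be computed directly. The sketch below is added for illustration (it is not part of the original answer); run on the request queue from the worked problem in Question 8 at the end of this section, it reproduces the totals computed there.

```python
# Illustrative sketch: total head movement for FCFS and SSTF scheduling.

def fcfs_distance(head, requests):
    """Serve requests in arrival order; sum the arm movement."""
    total = 0
    for r in requests:
        total += abs(head - r)
        head = r
    return total

def sstf_distance(head, requests):
    """Always serve the pending request closest to the current head."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda r: abs(head - r))
        total += abs(head - nearest)
        head = nearest
        pending.remove(nearest)
    return total

# Request queue from the worked problem in Question 8 (head at cylinder 143).
queue = [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]
print(fcfs_distance(143, queue))  # 7081 cylinders
print(sstf_distance(143, queue))  # 1745 cylinders
```

The printed totals match the FCFS and SSTF seek distances worked out in Question 8.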
2. Write short notes on the following:
(i) I/O Hardware (8 Marks)
(ii) RAID structure. (8 Marks)
(AUC NOV 2010/MAY 2010)
(i) I/O Hardware
Computers operate a variety of I/O devices:
• Storage devices (disks, tapes)
• Transmission devices (network cards, modems)
• Human-interface devices (screen, keyboard, mouse)
A device communicates with a computer system by sending signals over a cable or port or bus.
• Port: a connection point.
• Bus: a set of wires and a rigidly defined protocol that specifies a set of messages that can be sent on the wires.
A controller has one or more registers for data and control signals. The processor communicates with the controller by reading and writing bit patterns in these registers.
I/O instructions trigger bus lines to select the proper device and to move bits into or out of a device register.
Device controllers can support memory-mapped I/O: the device-control registers are mapped into the address space of the processor.
A Typical PC Bus Structure
Device I/O Port Locations on PCs (partial)
1. Polling
The interaction between the host and the controller is done using a handshaking concept, in the following steps:
• Determine the state of the device: command-ready, busy, or error.
• Busy-wait cycle to wait for I/O from the device.
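The busy-wait handshake can be sketched with simulated registers (purely illustrative: real device status registers are read through port or memory-mapped I/O, not Python attributes):

```python
# Illustrative sketch of polling: the host busy-waits on the controller's
# status (busy) bit before and after issuing a command.

class Controller:
    """A toy device controller with a busy bit and a data register."""
    def __init__(self):
        self.busy = False
        self.data = None

    def command(self, byte):
        self.busy = True     # controller sets the busy bit
        self.data = byte     # ...performs the transfer...
        self.busy = False    # and clears busy when the operation is done

def host_write(ctrl, byte):
    while ctrl.busy:         # 1. busy-wait: poll the status register
        pass
    ctrl.command(byte)       # 2. issue the command
    while ctrl.busy:         # 3. poll again until the transfer completes
        pass
    return ctrl.data

print(host_write(Controller(), 0x41))  # 65
```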
2. Interrupt
• CPU interrupt-request line triggered by I/O device
• Interrupt handler receives interrupts
• Maskable to ignore or delay some interrupts
• Interrupt vector to dispatch interrupt to the correct handler, based on priority
• Some interrupts are non-maskable
• The interrupt mechanism is also used for exceptions
Interrupt-Driven I/O Cycle
The interrupt vector contains the memory addresses of specialized interrupt handlers. The purpose of a vectored interrupt mechanism is to reduce the need for a single interrupt handler to search all possible sources of interrupts to determine which one needs service.
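Vectored dispatch can be sketched as a direct table lookup (illustrative only; the interrupt numbers and handler names below are made up, not any real kernel's table):

```python
# Illustrative sketch: an interrupt vector maps each interrupt number
# directly to its handler, so no search over possible sources is needed.

interrupt_vector = {
    0: lambda: "timer handled",
    14: lambda: "page fault handled",
    33: lambda: "keyboard handled",
}

def dispatch(irq, masked=frozenset()):
    """Look up the handler for irq; maskable interrupts can be deferred."""
    if irq in masked:
        return "deferred"                 # maskable interrupt delayed
    handler = interrupt_vector.get(irq)
    if handler is None:
        return "spurious interrupt"       # no registered handler
    return handler()

print(dispatch(33))                # keyboard handled
print(dispatch(33, masked={33}))   # deferred
```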
RAID structure
Schemes to provide redundancy at lower cost, using the idea of disk striping combined with "parity" bits (which we describe next), have been proposed. These schemes have different cost-performance trade-offs and are classified according to levels called RAID levels.
1. RAID: multiple disk drives provide reliability via redundancy.
2. RAID is arranged into six different levels.
3. Several improvements in disk-use techniques involve the use of multiple disks working cooperatively.
4. Disk striping uses a group of disks as one storage unit.
5. RAID schemes improve performance and improve the reliability of the storage system by storing redundant data.
• Mirroring or shadowing keeps a duplicate of each disk.
• Block-interleaved parity uses much less redundancy.
RAID Level 0. RAID level 0 refers to disk arrays with striping at the level of blocks but without any redundancy.
RAID Level 1. RAID level 1 refers to disk mirroring.
RAID Level 2. RAID level 2 is also known as memory-style error-correcting-code (ECC) organization. Memory systems have long detected certain errors by using parity bits. Each byte in a memory system may have a parity bit associated with it that records whether the number of bits in the byte set to 1 is even (parity = 0) or odd (parity = 1). If one of the bits in the byte is damaged (either a 1 becomes a 0, or a 0 becomes a 1), the parity of the byte changes and thus will not match the stored parity.
ECC can be used directly in disk arrays via striping of bytes across disks. If one of the disks fails, the remaining bits of the byte and the associated error-correction bits can be read from the other disks and used to reconstruct the damaged data.
RAID Level 3. RAID level 3 is a bit-interleaved parity organization. If one of the sectors is damaged, we can determine whether any bit in the sector is a 1 or a 0 by computing the parity of the corresponding bits from sectors on the other disks. If the parity of the remaining bits is equal to the stored parity, the missing bit is 0; otherwise, it is 1. RAID level 3 is as good as level 2 but is less expensive in the number of extra disks required. RAID level 3 has two advantages over level 1.
• First, the storage overhead is reduced because only one parity disk is needed for several regular disks, whereas one mirror disk is needed for every disk in level 1.
• Second, since reads and writes of a byte are spread out over multiple disks with N-way striping of data, the transfer rate for reading or writing a single block is N times as fast as with RAID level 1. On the negative side, RAID level 3 supports fewer I/Os per second, since every disk has to participate in every I/O request.
RAID Level 4. RAID level 4, or block-interleaved parity organization, uses block-level striping, as in RAID 0, and in addition keeps a parity block on a separate disk for corresponding blocks from the N other disks. If one of the disks fails, the parity block can be used with the corresponding blocks from the other disks to restore the blocks of the failed disk. The data-transfer rate for each access is slower, but multiple read accesses can proceed in parallel, leading to a higher overall I/O rate.
The transfer rates for large reads are high, since all the disks can be read in parallel; large writes also have high transfer rates, since the data and parity can be written in parallel. An operating-system write of data smaller than a block requires that the block be read, modified with the new data, and written back. The parity block has to be updated as well. This is known as the read-modify-write cycle. Thus, a single write requires four disk accesses: two to read the two old blocks and two to write the two new blocks.
RAID Level 5. RAID level 5, or block-interleaved distributed parity, differs from level 4 by spreading data and parity among all N + 1 disks, rather than storing data on N disks and parity on one disk. For each block, one of the disks stores the parity, and the others store data.
RAID Level 6. RAID level 6, also called the P + Q redundancy scheme, is much like RAID level 5 but stores extra redundant information to guard against multiple disk failures. Instead of parity, error-correcting codes such as Reed-Solomon codes are used.
RAID Level 0 + 1. RAID level 0 + 1 refers to a combination of RAID levels 0 and 1. RAID 0 provides the performance, while RAID 1 provides the reliability.
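The parity idea behind RAID levels 3-5 can be illustrated in a few lines (an illustrative sketch, not an actual RAID implementation): the parity block is the bytewise XOR of the data blocks, and XOR-ing the surviving blocks with the parity reconstructs any single failed disk.

```python
# Illustrative sketch of block-interleaved parity (RAID 3-5 style).

def parity(blocks):
    """Bytewise XOR of equal-sized data blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data disks
p = parity(data)                     # the block stored on the parity disk

# Suppose disk 1 fails: rebuild its block from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```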
Kernel I/O Subsystem
Kernels provide many services related to I/O, including scheduling, buffering, caching, and spooling. The I/O subsystem is also responsible for protecting itself from errant processes and malicious users.
1. I/O Scheduling
Operating-system developers implement scheduling by maintaining a wait queue of requests for each device. When an application issues a blocking I/O system call, the request is placed on the queue for that device. The I/O scheduler rearranges the order of the queue to improve the overall system efficiency and the average response time experienced by applications.
The operating system may also try to be fair, so that no one application receives especially poor service, or it may give priority service to delay-sensitive requests.
When a kernel supports asynchronous I/O, it must be able to keep track of many I/O requests at the same time. For this purpose, the operating system might attach the wait queue to a device-status table. The kernel manages this table, which contains an entry for each I/O device. Scheduling I/O operations is one way the I/O subsystem improves the efficiency of the computer; another is using storage space in main memory or on disk, via techniques called buffering, caching, and spooling.
2. Buffering
A buffer is a memory area that stores data while they are transferred between two devices or between a device and an application. Buffering is done for three reasons.
The first reason is to cope with a speed mismatch between the producer and consumer of a data stream. Double buffering decouples the producer of data from the consumer, thus relaxing timing requirements between them.
The second use of buffering is to adapt between devices that have different data-transfer sizes. Such disparities are especially common in computer networking, where buffers are used widely for fragmentation and reassembly of messages.
The third use of buffering is to support copy semantics for application I/O. An example will clarify the meaning of "copy semantics." Suppose that an application has a buffer of data that it wishes to write to disk. It calls the write() system call, providing a pointer to the buffer and an integer specifying the number of bytes to write.
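Copy semantics can be sketched as follows (a toy model: the `write` function here is an illustrative stand-in for the kernel's system call, not the real one). Because the kernel copies the application's data at write() time, later changes to the application buffer cannot alter what gets written to disk.

```python
# Illustrative sketch of copy semantics with a kernel buffer.

kernel_queue = []   # stands in for the kernel's queue of pending disk writes

def write(app_buffer, nbytes):
    """Copy the data into a kernel buffer before queueing the I/O."""
    kernel_queue.append(bytes(app_buffer[:nbytes]))

app_buffer = bytearray(b"version-1")
write(app_buffer, len(app_buffer))
app_buffer[:] = b"version-2"        # application immediately reuses its buffer

# The queued data still reflects the buffer contents at write() time.
assert kernel_queue[0] == b"version-1"
```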
3. Caching
A cache is a region of fast memory that holds copies of data. Access to the cached copy is more efficient than access to the original. Caching and buffering are distinct functions, but sometimes a region of memory can be used for both purposes.
4. Spooling and Device Reservation
A spool is a buffer that holds output for a device, such as a printer, that cannot accept
interleaved data streams. Although a printer can serve only one job at a time, several
applications may wish to print their output concurrently. The operating system solves this problem by intercepting all output to the printer. Each application's output is spooled to a separate disk file. When an application finishes printing, the spooling system queues the corresponding spool file for output to the printer, and copies the queued spool files to the printer one at a time. In some operating systems, spooling is managed by a system daemon process; in others, it is handled by an in-kernel thread.
5. Error Handling
An operating system that uses protected memory can guard against many kinds of hardware and application errors, so that a complete system failure is not the usual result of each minor mechanical glitch. Operating systems can often compensate effectively for transient failures. For instance, a disk read() failure results in a read() retry, and a network send() error results in a resend(), if the protocol specifies it. Some hardware can provide highly detailed error information, although many current operating systems are not designed to convey this information to the application.
3. Explain the services provided by a kernel I/O subsystem. (8) (Refer Q.No.2.) Explain and compare the C-LOOK and C-SCAN disk scheduling algorithms. (8) (AUC APR 2010)
The C-SCAN policy restricts scanning to one direction only. Thus, when the last track has been
visited in one direction, the arm is returned to the opposite end of the disk and the scan begins
again.
This reduces the maximum delay experienced by new requests.
• Provides a more uniform wait time than SCAN.
• The head moves from one end of the disk to the other, servicing requests as it goes. When it reaches the other end, however, it immediately returns to the beginning of the disk, without servicing any requests on the return trip.
• Treats the cylinders as a circular list that wraps around from the last cylinder to the first one.
C-LOOK Scheduling
Start the head moving in one direction. Satisfy the request for the closest track in that direction; when there are no more requests in that direction, reverse direction and repeat. Unlike C-SCAN, the head does not travel all the way to the innermost and outermost tracks on each circuit.
• A version of C-SCAN.
• The arm only goes as far as the last request in each direction, then reverses direction immediately, without first going all the way to the end of the disk.
4. Explain in detail the salient features of Linux I/O. (10) (AUC APR/MAY 2010)
To the user, the I/O system in Linux looks much like that in any UNIX system. That is, to the extent possible, all device drivers appear as normal files.
The system administrator can create special files within a file system that contain references to a specific device driver, and a user opening such a file will be able to read from and write to the device referenced. Linux splits all devices into three classes:
• block devices,
• character devices,
• network devices.
Block devices include all devices that allow random access to completely independent, fixed-sized blocks of data, including hard disks and floppy disks, CD-ROMs, and flash memory. Block devices are typically used to store file systems, but direct access to a block device is also allowed so that programs can create and repair the file system.
A block represents the unit with which the kernel performs I/O. When a block is read into memory, it is stored in a buffer. The request manager is the layer of software that manages the reading and writing of buffer contents to and from a block-device driver.
A separate list of requests is kept for each block-device driver. Traditionally, these requests have been scheduled according to a unidirectional-elevator (C-SCAN) algorithm that exploits the order in which requests are inserted in and removed from the per-device lists.
The fundamental problem with the elevator algorithm is that I/O operations concentrated in a specific region of the disk can result in starvation of requests that need to occur in other regions of the disk.
The deadline I/O scheduler used in version 2.6 works similarly to the elevator algorithm except that it also associates a deadline with each request, thus addressing the starvation issue. The deadline scheduler maintains a sorted queue of pending I/O operations, sorted by sector number. However, it also maintains two other queues: a read queue for read operations and a write queue for write operations. These two queues are ordered according to deadline. Every I/O request is placed in both the sorted queue and either the read or the write queue.
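The deadline idea can be sketched in simplified form (illustrative only: one combined deadline queue rather than the kernel's separate read and write queues, and the names are made up). An expired deadline overrides sector order, so no region of the disk can starve another.

```python
# Simplified sketch of a deadline-style I/O scheduler.
import bisect

class DeadlineScheduler:
    def __init__(self):
        self.by_sector = []      # requests kept sorted by sector (elevator order)
        self.by_deadline = []    # the same requests in deadline order

    def add(self, sector, deadline):
        req = (sector, deadline)
        bisect.insort(self.by_sector, req)
        self.by_deadline.append(req)
        self.by_deadline.sort(key=lambda r: r[1])

    def next(self, now):
        """Serve in sector order, unless some request's deadline has expired."""
        if self.by_deadline and self.by_deadline[0][1] <= now:
            req = self.by_deadline[0]     # expired deadline wins
        else:
            req = self.by_sector[0]       # otherwise follow sector order
        self.by_sector.remove(req)
        self.by_deadline.remove(req)
        return req

s = DeadlineScheduler()
s.add(sector=900, deadline=5)
s.add(sector=10, deadline=50)
s.add(sector=20, deadline=60)
print(s.next(now=0))   # (10, 50): no deadline expired, so sector order
print(s.next(now=6))   # (900, 5): its deadline has expired, so it jumps ahead
```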
Character devices include most other devices, such as mice and keyboards. The fundamental difference between block and character devices is random access: block devices may be accessed randomly, while character devices are only accessed serially. For example, seeking to a certain position in a file might be supported for a DVD but makes no sense for a pointing device such as a mouse.
The kernel maintains a standard interface to these drivers by means of a set of tty_struct structures. Each of these structures provides buffering and flow control on the data stream from the terminal device and feeds those data to a line discipline.
A line discipline is an interpreter for the information from the terminal device. The most common line discipline is the tty discipline, which glues the terminal's data stream onto the standard input and output streams of a user's running processes, allowing those processes to communicate directly with the user's terminal. This job is complicated by the fact that several such processes may be running simultaneously, and the tty line discipline is responsible for attaching and detaching the terminal's input and output from the various processes connected to it as those processes are suspended or awakened by the user.
Network devices are dealt with differently from block and character devices. Users cannot directly transfer data to network devices; instead, they must communicate indirectly by opening a connection to the kernel's networking subsystem.
6. Describe the important concepts of application I/O interface. (16) (AUC NOV 2011)
Structuring techniques and interfaces for the operating system enable I/O devices to be treated in a standard, uniform way. For instance, an application can open a file on a disk without knowing what kind of disk it is, and new disks and other devices can be added to a computer without the operating system being disrupted.
The actual differences are encapsulated in kernel modules called device drivers that internally are custom-tailored to each device but that export one of the standard interfaces.
The purpose of the device-driver layer is to hide the differences among device controllers from the I/O subsystem of the kernel, much as the I/O system calls do for applications.
Character-stream or block. A character-stream device transfers bytes one by one, whereas a
block device transfers a block of bytes as a unit.
Sequential or random-access. A sequential device transfers data in a fixed order that is
determined by the device, whereas the user of a random-access device can instruct the device
to seek to any of the available data storage locations.
Synchronous or asynchronous. A synchronous device is one that performs data transfers with predictable response times. An asynchronous device exhibits irregular or unpredictable response times.
Sharable or dedicated. A sharable device can be used concurrently by several processes or threads; a dedicated device cannot.
Speed of operation. Device speeds range from a few bytes per second to a few gigabytes per
second.
Read-write, read only, or write only. Some devices perform both input and output, but others
support only one data direction. For the purpose of application access, many of these
differences are hidden by the operating system, and the devices are grouped into a few
conventional types.
Operating systems also provide special system calls to access a few additional devices, such as a time-of-day clock and a timer. Because the performance and addressing characteristics of network I/O differ significantly from those of disk I/O, most operating systems provide a network I/O interface that is different from the read-write-seek interface used for disks.
(i) Consider the following I/O scenarios on a single-user PC.
(1) A mouse used with a graphical user interface
(2) A tape drive on a multitasking operating system (assume no device preallocation is available)
(3) A disk drive containing user files
(4) A graphics card with direct bus connection, accessible through memory-mapped I/O
For each of these I/O scenarios, would you design the operating system to use buffering, spooling, caching, or a combination? Would you use polled I/O or interrupt-driven I/O? Give reasons for your choices. (8)
(ii) How do you choose an optimal technique among the various disk scheduling techniques? Explain. (8) (AUC MAY/JUNE 2012)
7. Describe the various levels of RAID. (8) (AUC MAY/JUNE 2012)
RAID structure
To provide redundancy at lower cost by using the idea of disk striping combined with
"parity" bits (which we describe next) have been proposed. These schemeshave
different cost-performance trade-offs and are classified according tolevels called RAID
levels..
1. RAID – multiple disk drives provides reliability via redundancy.
2. RAID is arranged into six different levels.
3. Several improvements in disk-use techniques involve the use of multiple disks
workingcooperatively.
4. Disk striping uses a group of disks as one storage unit.
5. RAID schemes improve performance and improve the reliability of the storage system
bystoring redundant data.
Mirroring or shadowing keeps duplicate of each disk.
Block interleaved parity uses much less redundancy.
RAID Level 0. RAID level 0 refers to disk arrays with striping at the level ofblocks but
without any redundancy
RAID Level 1. RAID level 1 refers to disk mirroring
RAID Level 2. RAID level 2 is also known as memory-style error-correcting
code(ECC) organization. Memory systems have long detected certain errors by using
parity bits. Each byte in a memory system may have a parity bit associated with it that
records whether the number of bits in the
byte set to 1 is even (parity = 0) or odd (parity = 1). If one of the bits in the byte is
damaged (either a 1 becomes a 0, or a 0 becomes a 1), the parity of the byte changes
and thus will not match the stored parity.
SNSCT – Department of Computer Science & Engineering (UG&PG)
Page 133
Operating System
M.SUMATHI, AP/CSE, SNS COLLEGE OF TECHNOLOGY
SNSCT – Department of Computer Science & Engineering (UG&PG)
Page 19
Page 134
Operating System
ECC can be used directly in disk arrays via striping of bytes across disks. If one of the
disks fails, the remaining bits of the byte and the associated error-correction bits can be
read from otherdisks and used to reconstruct the damaged data.
RAID Level 3. RAID level 3, or bit-interleaved parity organization. If one of the sectors is
damaged, whether any bit in the sector is a 1 or a 0 by computing the parity of the
corresponding bits from sectors in the other disks. If the parity of the remaining bits is
equalto the stored parity, the missing bit is 0; otherwise, it is 1.RAID level 3 is as good as
level 2 but is less expensive in the number of extra disks requiredRAID level 3 has two
advantages over level 1.


First, the storage overhead is reduced because only one parity disk is
needed for several regular disks, whereas one mirror disk is needed for
every disk in level 1. 

Second, since reads and writes of a byte are spread out over multiple
disks with A/-way striping of data, the transfer rate for reading or writing a
single block is N times as fast as with RAID level 1. On the negative side,
RAID level 3 supports fewer I/O’s per second, since every disk has to
participate in every I/O request.. 
SNSCT – Department of Computer Science & Engineering (UG&PG)
Page 135
Operating System
M.SUMATHI, AP/CSE, SNS COLLEGE OF TECHNOLOGY
SNSCT – Department of Computer Science & Engineering (UG&PG)
Page 20
Page 136
Operating System
RAID Level 4. RAID level 4, or block-interleaved parity organization, uses block-level
striping, as in RAID 0, and in addition keeps a parity block on a separate disk for
corresponding blocks from the N other disks. If one of the disks fails, the parity block can be
used with the corresponding blocks from the other disks to restore the blocks of the
failed disk. The data-transfer rate for each access is slower, but multiple read accesses can
proceed in parallel, leading to a higher overall I/O rate.
The transfer rates for large reads are high, since all the disks can be read in
parallel; large writes also have high transfer rates, since the data and parity can be
written in parallel. An operating-system write of data smaller than a block requires that
the block be read, modified with the new data, and written back. The parity block has to
be updated as well. This is known as the read-modify-write cycle. Thus, a single write
requires four disk accesses: two to read the two old blocks and two to write the two new
blocks.
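The read-modify-write cycle above can be sketched in a few lines of Python. This is an illustrative model rather than any real RAID implementation, and the helper name rmw_parity_update is invented for the example; the key identity is new_parity = old_parity XOR old_data XOR new_data, which is why only the old data block and the old parity block need to be read.

```python
def rmw_parity_update(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """Recompute the parity block for a small write (read-modify-write).

    Two reads (old data block, old parity block) and two writes
    (new data block, new parity block) suffice, because
    new_parity = old_parity XOR old_data XOR new_data.
    """
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

# Three data blocks and their parity block, as on a hypothetical 4-disk RAID 4 array.
blocks = [b"\x0f\x0f", b"\xf0\xf0", b"\x33\x33"]
parity = bytes(a ^ b ^ c for a, b, c in zip(*blocks))

# Overwrite block 1; the parity update touches only the old/new block and old parity.
new_block = b"\xaa\xaa"
parity = rmw_parity_update(blocks[1], new_block, parity)
blocks[1] = new_block

# Invariant: parity still equals the XOR of all current data blocks.
assert parity == bytes(a ^ b ^ c for a, b, c in zip(*blocks))
```
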
RAID Level 5. RAID level 5, or block-interleaved distributed parity, differs from level 4 by
spreading data and parity among all N + 1 disks, rather than storing data in N disks and
parity in one disk. For each block, one of the disks stores the parity, and the others store
data.
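The parity-based recovery shared by RAID levels 3 through 5 can be demonstrated with a short sketch (illustrative names, hypothetical 4-disk layout): a lost block is simply the XOR of all surviving blocks, data and parity alike.

```python
from functools import reduce

def reconstruct(surviving_blocks):
    """Rebuild a lost block as the XOR of all surviving blocks (data + parity)."""
    return bytes(reduce(lambda x, y: x ^ y, byte_group)
                 for byte_group in zip(*surviving_blocks))

# Three data disks plus one parity disk (parity = XOR of the data blocks).
data = [b"ab", b"cd", b"ef"]
parity = bytes(reduce(lambda x, y: x ^ y, bs) for bs in zip(*data))

# Simulate losing data disk 1 and rebuild it from the other disks plus parity.
lost = data[1]
rebuilt = reconstruct([data[0], data[2], parity])
assert rebuilt == lost
```
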
RAID Level 6. RAID level 6, also called the P + Q redundancy scheme, is much like
RAID level 5 but stores extra redundant information to guard against multiple disk
failures. Instead of parity, error-correcting codes such as the Reed-Solomon codes are
used.
RAID Level 0 + 1. RAID level 0 + 1 refers to a combination of RAID levels 0 and 1. RAID
0 provides the performance, while RAID 1 provides the reliability.
8. Suppose that a disk drive has 5000 cylinders, numbered 0 to 4999. The drive is
currently serving a request at cylinder 143, and the previous request was at
cylinder 125. The queue of pending requests, in FIFO order, is 86, 1470, 913, 1774,
948, 1509, 1022, 1750, 130. Starting from the current head position, what is the
total distance (in cylinders) that the disk arm moves to satisfy all the pending
requests, for each of the following disk-scheduling algorithms?
a. FCFS b. SSTF c. SCAN d. LOOK e. C-SCAN
a. The FCFS schedule is 143, 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130.
The total seek distance is 7081.
b. The SSTF schedule is 143, 130, 86, 913, 948, 1022, 1470, 1509, 1750, 1774.
The total seek distance is 1745.
c. The SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 130,
86. The total seek distance is 9769.
d. The LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 130, 86.
The total seek distance is 3319.
e. The C-SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999,
86, 130. The total seek distance is 9813.
f. (Bonus.) The C-LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774,
86, 130. The total seek distance is 3363.
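The totals above can be checked with a short sketch. The helper seek_distance and the request list come from the problem statement; the SSTF ordering is derived greedily (always serve the closest pending request), and the SCAN ordering sweeps upward to cylinder 4999 before reversing.

```python
def seek_distance(start, order):
    """Total cylinders traversed when serving requests in the given order."""
    total, pos = 0, start
    for cyl in order:
        total += abs(cyl - pos)
        pos = cyl
    return total

queue = [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]
head = 143

# FCFS: serve in arrival order.
assert seek_distance(head, queue) == 7081

# SSTF: repeatedly pick the closest pending request.
pending, pos, sstf = list(queue), head, []
while pending:
    nxt = min(pending, key=lambda c: abs(c - pos))
    pending.remove(nxt)
    sstf.append(nxt)
    pos = nxt
assert seek_distance(head, sstf) == 1745

# SCAN: sweep up to the last cylinder (4999), then serve the rest going down.
up = sorted(c for c in queue if c >= head)
down = sorted((c for c in queue if c < head), reverse=True)
assert seek_distance(head, up + [4999] + down) == 9769
```
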
9. Compare the performance of C-SCAN and SCAN scheduling, assuming a uniform
distribution of requests. Consider the average response time (the time between the
arrival of a request and the completion of that request's service), the variation in
response time, and the effective bandwidth. How does performance depend on the
relative sizes of seek time and rotational latency? (16)
10. Write notes on
(i) Disk attachment
(ii) Streams
(iii) Tertiary storage
(i) Disk attachment
Disks may be attached in one of two ways:
1. Host-attached – via a local I/O port
2. Network-attached – via a network connection
Network-Attached Storage
A network-attached storage (NAS) device is special-purpose storage that is accessed
remotely over a data network, typically through remote-file-access protocols such as
NFS or CIFS.
Figure: Network-Attached Storage
Storage-Area Network
A storage-area network (SAN) is a private network, using storage protocols rather than
general networking protocols, that connects servers and storage units.
(ii) Streams
A stream is a full-duplex connection between a device driver and a user-level process. It
consists of a stream head that interfaces with the user process, a driver end that controls
the device, and zero or more modules between the stream head and the driver end.
Each of these components contains a pair of queues: a read queue and a write queue.
Message passing is used to transfer data between queues.
Modules provide the functionality of STREAMS processing; they are pushed onto a
stream by use of the ioctl() system call.
STREAMS I/O is asynchronous (or nonblocking) except when the user process
communicates with the stream head. When writing to the stream, the user process will
block, assuming the next queue uses flow control, until there is room to copy the
message.
The benefit of using STREAMS is that it provides a framework for a modular and
incremental approach to writing device drivers and network protocols. Modules may be
used by different streams and hence by different devices. For example, a networking
module may be used by both an Ethernet network card and an 802.11 wireless network
card.
(iii) Tertiary storage
1. Low cost is the defining characteristic of tertiary storage.
2. Generally, tertiary storage is built using removable media.
3. Common examples of removable media are floppy disks and CD-ROMs; other types
are also available.
Removable Disks
A floppy disk is a thin, flexible disk coated with magnetic material and enclosed in a
protective plastic case. Most floppies hold about 1 MB; similar technology is used for
removable disks that hold more than 1 GB. Removable magnetic disks can be nearly as
fast as hard disks, but they are at a greater risk of damage from exposure.
A magneto-optic disk records data on a rigid platter coated with magnetic material.
Laser heat is used to amplify a large, weak magnetic field to record a bit, and laser light
is also used to read data (the Kerr effect). The magneto-optic head flies much farther
from the disk surface than a magnetic disk head, and the magnetic material is covered
with a protective layer of plastic or glass, making it resistant to head crashes.
Optical disks do not use magnetism; they employ special materials that are altered by
laser light.
WORM Disks
1. The data on read-write disks can be modified over and over.
2. WORM ("Write Once, Read Many Times") disks can be written only once.
3. A thin aluminum film is sandwiched between two glass or plastic platters.
4. To write a bit, the drive uses a laser to burn a small hole through the aluminum;
information can be destroyed but not altered.
5. WORM disks are very durable and reliable.
6. Read-only disks, such as CD-ROM and DVD, come from the factory with the data
prerecorded.
Tapes
1. Compared to a disk, a tape is less expensive and holds more data, but random
access is much slower.
2. Tape is an economical medium for purposes that do not require fast random access,
e.g., backup copies of disk data and holding huge volumes of data.
3. Large tape installations typically use robotic tape changers that move tapes between
tape drives and storage slots in a tape library.
a. stacker – a library that holds a few tapes
b. silo – a library that holds thousands of tapes
4. A disk-resident file can be archived to