504: Operating System – II
Q-1: Explain buffering and spooling.
Ans-1: One way in which the I/O system improves the efficiency of the computer is by scheduling I/O
operations. Another way is by using storage space in main memory or on disk via techniques called
buffering and spooling.
 Buffering:
A buffer is a memory area that stores data while they are transferred between two devices or between a
device and an application.
Often a user process generates requests for output (say) much faster than the device can handle them. Instead of
making the process wait until each request has been serviced, the system introduces a buffer to store the pending
requests, so that the process can go on to do other things. This is called buffering.
Buffering is done for three reasons:
I. One reason is to cope with a speed mismatch between the producer and consumer of a data stream.
Suppose, for example, that a file is being received via modem for storage on the hard disk. The modem
is about a thousand times slower than the hard disk. So a buffer is created in main memory to
accumulate the bytes received from the modem. When an entire buffer of data has arrived, the buffer
can be written to disk in a single operation. Since the disk write is not instantaneous and the modem
still needs a place to store additional incoming data, two buffers are used. After the modem fills the
first buffer, the disk write is requested. The modem then starts to fill the second buffer while the first
buffer is written to disk. By the time the modem has filled the second buffer, the disk write from the
first one should have completed, so the modem can switch back to the first buffer while the disk writes
the second one.
This double buffering decouples the producer of data from the consumer, thus relaxing timing
requirements between them; a minimal code sketch of the scheme appears after this list.
Double buffering: the case in which two buffers are used. In a producer/consumer situation with a single
buffer, mutual exclusion prevents both processes from accessing the buffer at the same time, possibly causing
delays. Giving each process its own buffer reduces the probability of this delay; a transfer between buffers
takes place when neither is being accessed by its process.
II. A second use of buffering is to adapt between devices that have different data-transfer sizes. Such
disparities are especially common in computer networking, where buffers are used widely for
fragmentation and reassembly of messages. At the sending side, a large message is fragmented into
small network packets. The packets are sent over the network, and the receiving side places them in a
reassembly buffer to form an image of the source data.
III. A third use of buffering is to support copy semantics for application I/O. An example will clarify the
meaning of "copy semantics". Suppose that an application has a buffer of data that it wishes to write to
disk. It calls the write() system call, providing a pointer to the buffer and an integer specifying the
number of bytes to write. After the system call returns, what happens if the application changes the
contents of the buffer? With copy semantics, the version of the data written to disk is guaranteed to be
the version at the time of the application system call, independent of any subsequent changes in the
application's buffer. A simple way in which the operating system can guarantee copy semantics is for
the write() system call to copy the application data into a kernel buffer before returning control to the
application. The disk write is performed from the kernel buffer, so that subsequent changes to the
application buffer have no effect. Copying of data between kernel buffers and application data space is
common in operating systems, despite the overhead that this operation introduces, because of the clean
semantics.
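
As a concrete illustration of the double buffering described in point I, here is a minimal C sketch under stated assumptions: standard input plays the role of the slow modem, the output file out.dat plays the role of the disk, and POSIX asynchronous I/O (aio_write/aio_suspend) provides the overlapped disk write. It is an illustrative analogue, not the mechanism of any particular operating system.

    #include <aio.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    #define BUF_SIZE 4096

    int main(void)
    {
        static char buf[2][BUF_SIZE];        /* the two buffers */
        int active = 0;                      /* buffer being filled */
        int pending = 0;                     /* is a disk write in flight? */
        off_t off = 0;
        ssize_t n;
        int disk = open("out.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = disk;

        /* stdin stands in for the modem delivering a byte stream */
        while ((n = read(STDIN_FILENO, buf[active], BUF_SIZE)) > 0) {
            if (pending) {                   /* previous buffer still writing? */
                const struct aiocb *list[1] = { &cb };
                aio_suspend(list, 1, NULL);  /* wait for that write to finish */
                aio_return(&cb);             /* reap its result */
            }
            cb.aio_buf = buf[active];        /* start writing the full buffer */
            cb.aio_nbytes = (size_t)n;
            cb.aio_offset = off;
            aio_write(&cb);
            pending = 1;
            off += n;
            active = 1 - active;             /* fill the other buffer meanwhile */
        }
        if (pending) {                       /* drain the last write */
            const struct aiocb *list[1] = { &cb };
            aio_suspend(list, 1, NULL);
            aio_return(&cb);
        }
        close(disk);
        return 0;
    }

While the asynchronous write of one buffer is in flight, the next read() fills the other buffer, which is exactly the decoupling of producer and consumer described above.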
 Spooling
Spooling refers to a process of transferring data by placing it in a temporary working area where another
program may access it for processing at a later point in time.
The term "spool" is an acronym of "Simultaneous Peripheral Operation On Line". Acronym for
simultaneous peripheral operations on-line, spooling refers to putting jobs in a buffer, a special area in
memory or on a disk where a device can access them when it is ready. Spooling is useful because devices
access data at different rates. The buffer provides a waiting station where data can rest while the slower
device catches up.
This temporary working area is usually a file or a storage device rather than an in-memory buffer. Usual uses of the
term spooling apply to situations where there is little or no direct communication between the program
writing the data and the program reading it.
Spooling is often used when a device writes data faster than a target device can read it, allowing the slower
device to work at its own pace without requiring processing to wait for it to catch up.
The most common spooling application is print spooling: documents formatted for printing are stored
in a buffer (usually an area on a disk) by a fast processor and retrieved and printed by a relatively slower
printer at its own rate. As soon as the fast processor has written the document to the spool device it has
finished with the job and is fully available for other processes. One or more processes may rapidly write
several documents to a print queue without waiting for each one to print before writing the next.
Spooling improves the multiprogramming capability of systems. Most programs require input and produce
output. Without spooling, the number of tasks that could be multiprogrammed might be limited by the
availability of peripherals; with spooling, a task doesn't need access to a real device.
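
The following C sketch illustrates the print-spooling idea under stated assumptions: the spool directory /var/spool/demo and the job-naming scheme are invented for this example, and a separate printer daemon (not shown) is assumed to drain the directory at the printer's own pace.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* "Print" by depositing the document in the spool area and returning
     * at once; the slow printer is serviced later by a daemon. */
    int spool_print(const char *data, size_t len)
    {
        char path[64];
        /* one uniquely named job file per request (pid + timestamp) */
        snprintf(path, sizeof path, "/var/spool/demo/job-%d-%ld",
                 (int)getpid(), (long)time(NULL));
        FILE *f = fopen(path, "w");
        if (f == NULL)
            return -1;
        fwrite(data, 1, len, f);
        fclose(f);
        return 0;   /* caller continues immediately; no waiting on the printer */
    }

Because spool_print() finishes as soon as the file is written, a process can queue several documents in quick succession without waiting for each one to print, as described above.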
In short,
Buffering --- A method of overlapping the I/O of a job with its execution.
After data has been read and while the CPU starts execution, the input device is instructed to begin the next
input immediately so that it is in memory when it is requested.
Spooling --- Simultaneous Peripheral Operation On-Line
Using a disk or similar device as a large buffer, for reading as far ahead as possible on input devices or
storing output files until the output device (or devices) are ready for them.



 Buffering overlaps the I/O of a job with its own computation.
 Spooling overlaps the I/O of a job with the computation of all other jobs.
 Spooling can be used to provide a job pool.
Q-2: What are various functions of operating system as process manager?
Ans-2: Process Management:
The operating system is responsible for the following activities in connection with process management:
 Creation and deletion of both user and system processes
 Suspending and resuming of processes
 Providing mechanisms for process synchronization (e.g., serialized execution of cooperating processes, one
after another)
 Providing mechanisms for process communication
 Providing mechanisms for deadlock handling
Process management is an integral part of any modern-day operating system. The OS must allocate
resources to processes, enable processes to share and exchange information, protect the resources of each
process from other processes and enable synchronization among processes. To meet these requirements,
the OS must maintain a data structure for each process, which describes the state and resource ownership
of that process and which enables the OS to exert control over each process.
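
As a small illustration of the first two services in the list (creation/deletion and suspending/resuming), here is a sketch using the POSIX calls fork(), execlp(), and waitpid(); the program run by the child ("ls") is an arbitrary choice for the example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();                /* creation: duplicate this process */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {                    /* child: replace image with a program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");              /* reached only if exec failed */
            _exit(1);
        }
        waitpid(pid, NULL, 0);             /* parent suspends until child exits */
        return 0;                          /* child is reaped: deletion complete */
    }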
Q-3: Explain the functions and role of Device Controller.
A modern general-purpose computer system consists of one or more CPUs and a number of device
controllers connected through a common bus that provides access to shared memory as shown in figure 1.
The CPU and the device controllers can execute concurrently, competing for memory cycles. To ensure
orderly access to the shared memory, a memory controller is provided whose function is to synchronize
access to the memory.
[Figure 1: Devices and their device controllers]
Each device controller is in charge of a specific type of device (for example, disk drives, audio devices, and video
displays) or a small class of them (e.g., a small computer-systems interface (SCSI) controller), depending on the controller.
A device controller maintains some local buffer storage and a set of special-purpose registers.
The device controller is responsible for moving the data between the peripheral devices and its local
buffer storage. Typically, operating systems have a device driver for each device controller.
The device driver understands the device controller and presents a uniform interface to the device for the
rest of the operating system.
3
Prepared By: Kavita K. Ahuja
To start an I/O operation, the device driver loads the appropriate registers within the device controller. The
device controller, in turn, examines the contents of these registers to determine what action to take (such as
"read a character from the keyboard")- The controller starts the transfer of data from the device to its local
buffer. Once the transfer of data is complete, the device controller informs the device driver via an
interrupt that it has finished its operation. The device driver then returns control to the operating system,
possibly returning the data or a pointer to the data if the operation was a read.
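
The register-loading step can be pictured with the following C sketch. The register layout, the device address 0xFFFF0000, and the command/status bit values are all hypothetical; a real controller's registers are defined by its data sheet, and a real driver would use interrupts rather than the polling loop shown here.

    #include <stdint.h>

    struct controller_regs {              /* memory-mapped registers (assumed) */
        volatile uint32_t command;        /* what to do, e.g. CMD_READ_CHAR */
        volatile uint32_t status;         /* BUSY bit while the device works */
        volatile uint32_t data;           /* a word of the controller's buffer */
    };

    #define CMD_READ_CHAR 1u
    #define STATUS_BUSY   0x1u

    static struct controller_regs *const regs =
        (struct controller_regs *)0xFFFF0000;  /* assumed device address */

    uint32_t driver_read_char(void)
    {
        regs->command = CMD_READ_CHAR;        /* driver loads the registers */
        while (regs->status & STATUS_BUSY)    /* wait for the controller to move */
            ;                                 /* data into its local buffer */
        return regs->data;                    /* hand the result back upward */
    }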
Device driver operations:
 Initialize devices
 Interpret commands from the OS
 Schedule multiple outstanding requests
 Manage data transfers
 Accept and process interrupts
 Maintain the integrity of driver and kernel data structures
Q-4: Explain the principle of locality of references. How is it used in avoiding the
thrashing?
When a process does not have enough frames, it quickly page-faults, and at that point it must replace some
page. However, since all its pages are in active use, it must replace a page that will be needed again soon,
so it quickly faults again, and again. The process continues to fault, replacing pages that it must then bring
back in right away. This high paging activity is called thrashing.
A process is thrashing if it is spending more time paging than executing.
Thrashing results in severe performance problems. Consider the following scenario, which is based on the
actual behavior of early paging systems.
The operating system monitors CPU utilization. If CPU utilization is too low, we increase the degree of
multiprogramming by introducing a new process to the system. Now suppose that a process enters a new
phase in its execution and needs more frames. It starts faulting and taking frames away from other
processes. These processes need those pages, however, and so they also fault, taking frames from other
processes. These faulting processes must use the paging device to swap pages in and out. As they queue up
for the paging device, the ready queue empties. As processes wait for the paging device, CPU utilization
decreases. The CPU scheduler sees the decreasing CPU utilization and increases the degree of
multiprogramming as a result. The new process tries to get started by taking frames from running
processes, causing more page faults and a longer queue for the paging device. As a result, CPU utilization
drops even further, and the CPU scheduler tries to increase the degree of multiprogramming even more.
Thrashing has occurred, and system throughput plunges. The page-fault rate increases tremendously. As a
result, the effective memory-access time increases. No work is getting done, because the processes are
spending all their time paging.
Plotting CPU utilization against the degree of multiprogramming shows that as the degree of multiprogramming increases, CPU utilization also increases,
although more slowly, until a maximum is reached. If the degree of multiprogramming is increased even
further, thrashing sets in, and CPU utilization drops sharply. At this point, to increase CPU utilization and
stop thrashing, we must decrease the degree of multiprogramming.
To avoid thrashing (use of locality of reference):
• The principle of locality states that program and data references within a process tend to cluster (group).
• Hence, only a few pieces of a process will be needed over a short period of time.
• Use the principle of locality to make intelligent guesses about which pieces will be needed in the near
future, and thereby avoid thrashing.
To prevent thrashing, we must provide a process with as many frames as it needs. But how do we know
how many frames it "needs"? There are several techniques. The working-set strategy starts by looking at
how many frames a process is actually using. This approach defines the locality model of process
execution.
The locality model states that, as a process executes, it moves from locality to locality. A locality is a set of
pages that are actively used together. A program is generally composed of several different localities,
which may overlap.
For example, when a function is called, it defines a new locality. In this locality, memory references are
made to the instructions of the function call, its local variables, and a subset of the global variables. When
we exit the function, the process leaves this locality, since the local variables and instructions of the
function are no longer in active use. We may return to this locality later.
Principle of Locality can be modelled through working-set model.
THE WORKING SET MODEL:
Each process needs a minimum number of pages, called its ‘working set’, in memory to make effective use
of the processor.
 Degree of multiprogramming is proportional to the number of processes present in memory
 Processor utilization: is % of time the processor is busy executing, i.e. not idle waiting for I/O.
 As the degree of multi-programming increases, the processor utilization increases,
However: there is a critical degree of multiprogramming C beyond which processor utilisation starts to
decrease. The processor starts to spend more time swapping pages in and out of memory than executing
processes.
If a process has fewer frames than its working set, it is continually interrupted by page faults → thrashing.
Which Pages Constitute the Working Set?
The set of pages which have 'recently' been referenced: w(t, h).
Example: the pages which have been referenced during the past 1 second.
The 'size' of the working set depends on the interval h: as h is increased (that is, the further into the past
one looks), the working set grows, but each further increase of h adds fewer and fewer extra pages.
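
The following self-contained C sketch computes the working-set size from a page-reference string, with the window h measured in number of references rather than seconds for simplicity; the reference string and window below are made up for illustration.

    #include <stdio.h>

    #define NPAGES 8                /* assumed: page numbers are 0..7 */

    /* size of w(t, h): distinct pages referenced in the last h references */
    int working_set_size(const int *refs, int t, int h)
    {
        int seen[NPAGES] = {0};
        int size = 0;
        int start = (t - h + 1 > 0) ? t - h + 1 : 0;
        for (int i = start; i <= t; i++) {
            if (!seen[refs[i]]) {
                seen[refs[i]] = 1;
                size++;
            }
        }
        return size;
    }

    int main(void)
    {
        int refs[] = {1, 2, 1, 3, 1, 2, 4, 4, 4, 1};
        /* at t = 9 with h = 4 the window covers refs 6..9 = {4,4,4,1},
         * so the working set is {4, 1} and its size is 2 */
        printf("|w(9,4)| = %d\n", working_set_size(refs, 9, 4));
        return 0;
    }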
Difficulties:
a) Principle of locality does not apply to some programs,
b) Working set of some processes is not well defined, e.g. erratic in structure and behaviour.
Q-5: What is the difference between character and block I/O device?
Block devices
 Organize data in fixed-size blocks
 Transfers are in units of blocks
 Blocks have addresses, and data are therefore addressable
 E.g. hard disks, USB disks, CD-ROMs
Block device Interface:
 read( deviceNumber, deviceAddr, bufferAddr )
Transfer a block of data from “deviceAddr” to “bufferAddr”
 write( deviceNumber, deviceAddr, bufferAddr )
Transfer a block of data from “bufferAddr” to “deviceAddr”
 seek( deviceNumber, deviceAddress )
Move the head to the correct position
Usually not necessary
Character devices
 Deliver or accept a stream of characters, with no block structure
 Not addressable; no seeks
 Can read from or write to the stream
 E.g. printers, network interfaces, terminals
Character device interface:
 read( deviceNumber, bufferAddr, size )
Reads “size” bytes from a byte stream device to “bufferAddr”
 write( deviceNumber, bufferAddr, size )
Write “size” bytes from “bufferAddr” to a byte stream device
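
The contrast can be seen with ordinary POSIX calls, as in the sketch below. The device path /dev/sda and the 512-byte block size are assumptions for illustration (reading a raw disk usually requires privileges); standard input stands in for a character device such as a terminal.

    #include <fcntl.h>
    #include <unistd.h>

    #define BLOCK 512

    int main(void)
    {
        char buf[BLOCK];

        /* block device: addressable, so seek to block 10, then read one block */
        int disk = open("/dev/sda", O_RDONLY);
        if (disk >= 0) {
            lseek(disk, (off_t)10 * BLOCK, SEEK_SET);
            read(disk, buf, BLOCK);
            close(disk);
        }

        /* character device: a plain byte stream with no addresses or seeks */
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);  /* echo the bytes back */
        return 0;
    }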
Q-6: What are the uses of PMTBR, PMTLR registers?
Ans-6: Each process must have a PMT (page map table), and the PMT must be large enough to have as many
entries as the maximum number of pages per process. However, very few processes actually use up all the
pages, and thus there is scope for reducing the PMT length and saving memory. This is achieved by a
register called the Page Map Table Limit Register (PMTLR), which contains the number of pages in a
process. There is one PMTLR for each PMT; the PMTLR is maintained in the PCB of each process, in the
register area.
Access to the PMT of the running process is facilitated by the Page Map Table Base Register
(PMTBR), which points to the base address of the PMT of the running process. Upon each
process switch (context switch), the PCB of the new running process provides the values to be loaded into
the PMTBR.
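
A minimal C sketch of how the two registers participate in address translation, with a made-up page size and page map table; the bounds check against PMTLR is what allows a shortened PMT to be safe.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096

    static uint32_t *PMTBR;   /* base address of the running process's PMT */
    static uint32_t  PMTLR;   /* number of pages in the running process */

    /* translate a logical address; returns -1 on an addressing error */
    int translate(uint32_t logical, uint32_t *physical)
    {
        uint32_t page   = logical / PAGE_SIZE;
        uint32_t offset = logical % PAGE_SIZE;
        if (page >= PMTLR)                 /* page number beyond the PMT: trap */
            return -1;
        uint32_t frame = PMTBR[page];      /* PMT entry gives the frame number */
        *physical = frame * PAGE_SIZE + offset;
        return 0;
    }

    int main(void)
    {
        uint32_t pmt[4] = {7, 2, 9, 5};    /* fake PMT: page i -> frame pmt[i] */
        PMTBR = pmt;                       /* as if loaded on a context switch */
        PMTLR = 4;
        uint32_t phys;
        if (translate(2 * PAGE_SIZE + 100, &phys) == 0)
            printf("physical = %u\n", (unsigned)phys);  /* 9*4096+100 = 36964 */
        return 0;
    }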
Prepared By: Kavita K. Ahuja