Operating System Classification
Single User, Multi-User, Simple Batch Processing, Multiprogramming, Multitasking, Real Time Systems, Parallel Systems, Distributed Systems
Single User
• This OS is designed to manage the
computer so that one user can effectively do
one thing at a time.
• It is aimed at maximum user convenience
and responsiveness
• Provides a good interface to a single user
• Examples: PCs running MS Windows, Apple Macintosh, MS DOS, OS/2, Linux
Multi-User
• Allows many different users to take advantage of the computer’s resources simultaneously
• It is typically used with a single server and multiple clients
• The time shared system allows multiple users to
share the system simultaneously
• The system switches from one user to another
rapidly, giving the impression that each user has
full control of the system
Multi-User
cont…
• Stores information about the users connected to the server and protects the data from unauthorized access
• Each user has a private set of programs and techniques for accessing the data.
• Examples: Windows 2000, Novell NetWare, Windows NT, Unix, VMS (Virtual Memory System), MVS (Multiple Virtual Storage, for mainframe systems)
Simple Batch Processing
• A batch is a sequence of user jobs formed for the purpose of processing by a Batch Processing OS
• The batch is not the unit of computation; each job in a batch is independent of the other jobs, and the jobs may belong to different users.
• A job consists of a program, its data and some control information about the program.
Simple Batch Processing
cont..
• The primary function of the batch
processing system is to service the jobs in a
batch one after another without requiring
the operator’s intervention.
• Achieved by automating the transition from
execution of one job to that of the next job
in the batch
Simple Batch Processing
cont..
• Batch processing is implemented by the kernel or batch monitor, which resides in one part of the computer’s memory
• The remaining memory is used for servicing a user job, the current job in the batch
• When the operator gives the command to initiate the processing of a batch, the monitor sets up the processing of the first job in the batch
Simple Batch Processing
cont..
• At the end of the job, it performs job termination processing and initiates the next job in the batch
• At the end of the batch, it performs batch termination processing and waits for the operator to initiate the next batch (operator intervention is needed only at the start and end of a batch)
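The following is a minimal sketch of this control loop in Python; the job records and the run_job helper are hypothetical names introduced only to illustrate the automatic transition from one job to the next.

```python
# Minimal sketch of a batch monitor's control loop (illustrative only).
# Each "job" is represented as a small record; run_job is a hypothetical helper.

def run_job(job):
    """Job initiation: load and execute one job of the batch."""
    print(f"Initiating {job['name']}")
    job["program"]()                      # execute the job's program
    print(f"Terminating {job['name']}")   # job termination processing

def process_batch(batch):
    """Service the jobs in a batch one after another, without operator intervention."""
    print("Batch initiation (operator command)")
    for job in batch:                     # automatic transition between jobs
        run_job(job)
    print("Batch termination processing; waiting for the next batch")

if __name__ == "__main__":
    batch = [
        {"name": "Job 1", "program": lambda: print("  ...computing...")},
        {"name": "Job 2", "program": lambda: print("  ...computing...")},
    ]
    process_batch(batch)
```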
[Figure: memory layout during batch processing. The batch monitor occupies the system area and the current job of the batch occupies the user area; jobs Job 1 to Job n are processed between batch start and batch end.]
Simple Batch Processing
cont..
• Uses the notion of virtual devices to conserve CPU time
• Uses virtual devices like tapes/disks instead of punched cards/printers
• A program first records a batch of jobs on a magnetic tape
• The batch system processes the jobs and saves the results on another magnetic tape.
• The contents of this magnetic tape are then printed by another program
• These two spooling operations, called inspooling and outspooling, were performed on smaller systems with slow CPUs; this resulted in less idle time on the main CPU
Disadvantages
• CPU idle during I/O
• I/O devices idle when CPU busy
Multiprogramming Systems
Definition
• The OS can put many programs in memory and let the CPU execute instructions of one program while the I/O subsystem is busy with an I/O operation of another program
[Figure: three snapshots of memory under a multiprogramming kernel with Programs 1, 2 and 3 resident; at each instant the CPU executes one program while the I/O subsystem performs I/O for another.]
Cont…
• The multiprogramming kernel performs
– Scheduling (uses a simple scheduling policy, and performs simple partitioned or pool based allocation of memory and I/O devices)
– Memory Management
– I/O Management
Cont..
• The CPU and the I/O subsystem could operate on the same program
• Thus the program must explicitly synchronize the activities of the CPU and the I/O subsystem to ensure that the program executes correctly
• This is done by allocating the CPU to a program only when the program is not performing I/O
Architectural support for
Multiprogramming
• DMA(Direct Memory Access)
• Memory Protection
• Privileged mode of CPU
DMA(Direct Memory Access)
• Makes Multiprogramming feasible by permitting
concurrent operation of CPU and I/O devices
– In this mode a block of data can be transferred between
memory and I/O device without involving the CPU
– An I/O instruction indicates the I/O operation to be performed and also the number of bytes to be transferred
– The I/O operation starts when the instruction is executed
– Data transfer between the device and memory takes place over the system bus
– The CPU is not involved in this transfer; an interrupt is raised at the end of the transfer of all bytes
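A rough simulation of this behaviour in Python, using a background thread to stand in for the device and a callback to stand in for the completion interrupt; the memory list, the dma_transfer function and the interrupt_handler are invented for the sketch and are not part of any real DMA API.

```python
# Illustrative simulation of DMA: a device transfers a block of data into
# memory concurrently with CPU work; an "interrupt" callback fires when the
# whole block has been transferred.
import threading
import time

memory = [0] * 16                         # simulated main memory

def dma_transfer(data, start, on_complete):
    """Copy a block into memory over the 'system bus' without involving the CPU."""
    def device():
        for i, byte in enumerate(data):
            memory[start + i] = byte
            time.sleep(0.01)              # the device is much slower than the CPU
        on_complete()                     # interrupt raised at the end of the transfer
    threading.Thread(target=device).start()

def interrupt_handler():
    print("Interrupt: DMA transfer complete, memory =", memory)

if __name__ == "__main__":
    dma_transfer([1, 2, 3, 4], start=0, on_complete=interrupt_handler)
    # Meanwhile the CPU keeps executing instructions of another program.
    for step in range(3):
        print("CPU executing another program, step", step)
        time.sleep(0.02)
```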
Memory Protection
• Prevents mutual interference between
programs
– Ensures that a program does not access or destroy the contents of memory areas occupied by other programs or by the OS
Privileged mode of CPU
• Provides a method of implementing
memory protection and other measures that
avoid interference between programs
• A program interrupt is raised if the program tries to execute a privileged instruction while the CPU is in user mode
User service
• The turnaround time of a job is affected by
the amount of CPU attention devoted to
other jobs executing concurrently with it
• It depends upon the number of jobs in the
system and priorities assigned to different
jobs by the scheduler
Functions of Kernel-Scheduling
• Scheduling is performed after every interrupt
using simple priority based preemptive scheduling
scheme
• Priority is a tie breaking notion used in a scheduler
to decide which request should be scheduled on
the server when many requests await service
• Preemption is the forced deallocation of the CPU from a program (the CPU is taken away from a low priority program and given to a high priority program)
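A minimal sketch, in Python, of the decision the scheduler makes after every interrupt under this scheme; the program records and priority values are invented for illustration.

```python
# Sketch of priority based preemptive scheduling: after every interrupt the
# scheduler picks the highest priority ready program, preempting the CPU from
# a lower priority program if necessary.

def schedule(ready_programs, running=None):
    """Return the program that should own the CPU; a higher number means higher priority."""
    candidates = list(ready_programs)
    if running is not None:
        candidates.append(running)
    chosen = max(candidates, key=lambda p: p["priority"])
    if running is not None and chosen is not running:
        print(f"Preempting {running['name']} in favour of {chosen['name']}")
    return chosen

if __name__ == "__main__":
    running = {"name": "P1", "priority": 2}
    ready = [{"name": "P2", "priority": 5}, {"name": "P3", "priority": 1}]
    running = schedule(ready, running)    # an interrupt occurred; reschedule
    print("CPU allocated to", running["name"])
```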
Functions of Kernel-Memory
Management
• To protect programs from mutual interference, memory protection hardware is used and the CPU is put in the non-privileged mode while executing user programs.
• An attempt by a user program to access memory locations outside its memory area, or to use a privileged instruction, leads to an interrupt
• The interrupt processing routines for these interrupts terminate the program that caused the interrupt
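A small sketch, assuming a base-and-limit style of memory protection, of the bounds check and the resulting program interrupt; the ProgramInterrupt exception and the addresses used are hypothetical.

```python
# Sketch of the bounds check performed by memory protection hardware: a user
# program may only touch addresses inside its own memory area. The exception
# below simply stands in for the program interrupt.

class ProgramInterrupt(Exception):
    pass

def check_access(address, base, limit):
    """Raise a program interrupt if the address lies outside [base, base + limit)."""
    if not (base <= address < base + limit):
        raise ProgramInterrupt(f"illegal access to address {address}")

if __name__ == "__main__":
    base, limit = 1000, 200                  # memory area allocated to the user program
    check_access(1100, base, limit)          # fine: inside the program's area
    try:
        check_access(2500, base, limit)      # outside the area: interrupt raised
    except ProgramInterrupt as err:
        print("Interrupt routine terminates the program:", err)
```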
Multitasking Systems
or
Time Sharing systems
Definition
Logical extension of multiprogramming, where the CPU executes multiple jobs by switching between them so frequently that users can interact with each program while it is running
• Intended for immediate results, so it requires a low response time
• Allows many users to share the computer simultaneously; users have the impression that they have their own machine
• The CPU is multiplexed among several jobs that are kept in memory and on disk
User Service
• Characterized in terms of the time taken to service a sub-request, i.e. the response time
• E.g.
– A typical user request is to compile a statement or to execute a program on given data; the response consists of messages from the compiler or the computed results. The user issues the next request after receiving the response. Good response time improves the productivity of the user
Scheduling
• To give good response time to all users, they must
have equal opportunity to present their
computational requests and have them serviced.
• Two provisions ensure this
– No priorities are assigned to programs; they are executed by turn (round robin scheduling)
– A program is prevented from consuming an unreasonable amount of CPU time (time slicing)
[Figure: round robin scheduling. The scheduler selects a program from the scheduling list and dispatches it to the CPU; when its time slice is over the program is preempted and returned to the list, and when its computation is over it leaves the system.]
Round Robin Scheduling
• When a user makes a computational request, his program is added to the end of a scheduling list
• A small unit of time, called the time slice or quantum, is defined; it is the largest amount of CPU time any program can consume when it is scheduled to execute.
• The CPU scheduler goes around this queue,
allocating the CPU to each process for a time
interval of one quantum.
• The CPU scheduler picks the first process from
the scheduling list, sets a timer to interrupt after
one quantum, and dispatches the process.
• If the process is still running at the end of the
quantum, the CPU is preempted and the process is
added to the tail of the scheduling list.
• If the process finishes before the end of the
quantum, the process itself releases the CPU
voluntarily.
• In either case, the CPU scheduler assigns the CPU
to the next process in the ready queue.
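A minimal Python simulation of the behaviour just described; the process names, CPU demands and quantum value are invented for the example.

```python
# Minimal round robin simulation: each process carries its remaining CPU need;
# the scheduler runs the process at the head of the scheduling list for at most
# one quantum and re-queues it at the tail if it has not finished.
from collections import deque

def round_robin(processes, quantum):
    queue = deque(processes)               # the scheduling list
    while queue:
        name, remaining = queue.popleft()
        slice_used = min(quantum, remaining)
        remaining -= slice_used
        print(f"{name} runs for {slice_used} units", end="")
        if remaining > 0:                  # time slice over: preempt, go to the tail
            print(" (preempted)")
            queue.append((name, remaining))
        else:                              # computation over: CPU released voluntarily
            print(" (finished)")

if __name__ == "__main__":
    round_robin([("P1", 5), ("P2", 3), ("P3", 8)], quantum=2)
```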
Memory Management
• Swapping is the technique of temporarily removing
inactive programs from the memory of a computer system
• Three kinds of programs
– Active Programs
– Programs being swapped out of the memory
– Programs being swapped into the memory
• When an active program becomes inactive, the OS swaps it
out by copying its instructions and data onto a disk. A new
program is loaded in its place
• Use of swapping is feasible in time sharing systems
because the time sharing kernel can estimate when the
program is likely to be scheduled next.
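A small sketch of a swap decision, assuming the program images can simply be copied between a memory table and a disk table; the dictionaries and program names are invented for illustration.

```python
# Sketch of swapping: when a resident program becomes inactive, its image is
# copied out to disk and a program waiting on disk is loaded in its place.

memory = {"editor": "<image of editor>"}        # active programs in memory
disk   = {"compiler": "<image of compiler>"}    # swapped-out programs on disk

def swap(inactive, incoming):
    """Swap the inactive program out and the incoming program in."""
    disk[inactive] = memory.pop(inactive)       # swap out: copy the image to disk
    memory[incoming] = disk.pop(incoming)       # swap in: load the new program
    print("In memory:", list(memory), "| On disk:", list(disk))

if __name__ == "__main__":
    # The editor's user is idle, so the kernel estimates it will not be
    # scheduled soon and swaps it out in favour of the compiler.
    swap(inactive="editor", incoming="compiler")
```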
Real Time Operating Systems
Definition
A real time application is a program that responds to
activities in an external system within a maximum
time determined by the external system
Applications: missile guidance, command and control applications like process control and air traffic control, data sampling, and applications like railway reservation and banking systems
Hard & Soft Real Time Systems
• Hard real time system – dedicated to
processing real time applications and
provably meets the response requirements
of an application under all conditions.
• Soft real time system – makes the best effort to meet the response requirement of a real time application but cannot guarantee that it will be able to meet it under all conditions
Features
• Permits creation of multiple processes within an
application
• Permits priorities to be assigned to processes.
• Permits a programmer to define interrupts and
interrupt processing routines
• Uses priority driven or deadline oriented
scheduling
• Provides fault tolerance and graceful degradation
capabilities
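As an illustration of deadline oriented scheduling, the sketch below picks the ready task with the earliest deadline (the earliest-deadline-first rule, one common deadline oriented policy); the task names and deadline values are invented.

```python
# Sketch of deadline oriented scheduling using the earliest-deadline-first rule:
# among the ready real time tasks, the one whose deadline is closest is
# dispatched next.

def earliest_deadline_first(ready_tasks):
    """Pick the ready task with the nearest deadline."""
    return min(ready_tasks, key=lambda t: t["deadline"])

if __name__ == "__main__":
    ready = [
        {"name": "sample sensor", "deadline": 15},    # deadlines in milliseconds
        {"name": "update display", "deadline": 40},
        {"name": "adjust actuator", "deadline": 5},
    ]
    task = earliest_deadline_first(ready)
    print("Dispatching:", task["name"])
```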
Distributed Operating System
Definition
A distributed OS exploits the multiplicity of resources and the presence of a network to provide the advantages of resource sharing across computers, reliability of operation, speed-up of applications and communication between users
Features
• Resource sharing – improves resource utilization across
boundaries of individual computer systems.
• Reliability- Availability of resources and services despite
failures.
• Computation speed-up- parts of a computation can be
executed in different computer systems to speed-up the
computation
• Communication-Provides means of communication
between remote entities.
• Incremental growth – the capabilities of a system can be enhanced at a price proportional to the nature and size of the enhancement
Key concepts and techniques
used
• Distributed control- a function is performed through
participation of several nodes, possibly all nodes, in a
distributed system.
• Transparency-A resource or service can be accessed
without having to know its location in the distributed
system.
• Remote Procedure Call (RPC) – a process calls a procedure that is located in a different computer system. The procedure call is analogous to a procedure or function call in a programming language, except that it is the OS that passes the parameters to the remote procedure and returns its results. Operation of the process making the call is resumed when the results are returned to it.
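A small, self-contained sketch of this idea using Python's standard xmlrpc modules; running the server in a background thread on localhost port 8000 is an assumption made only to keep the example in one file, and the add procedure is invented.

```python
# Sketch of a remote procedure call: the client calls add() as if it were a
# local function, while the library and OS pass the parameters to the remote
# procedure and return its result.
import threading
import time
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    """The remote procedure, located in a different process (or computer system)."""
    return a + b

def start_server():
    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    server.register_function(add, "add")
    server.serve_forever()

if __name__ == "__main__":
    threading.Thread(target=start_server, daemon=True).start()
    time.sleep(0.5)                            # give the server time to start listening
    proxy = ServerProxy("http://localhost:8000")
    print("Result of remote add(2, 3):", proxy.add(2, 3))   # looks like a local call
```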
Parallel Systems/ Multiprocessor
System
Definition
Systems that have two or more processors in
close communication, sharing the computer
bus and sometimes the clock, memory and
peripheral devices
Types of Multiprocessor systems
• Asymmetric multiprocessing – each processor is assigned a specific task. A master processor controls the system; the other processors either look to the master for instructions or have predefined tasks. This defines a master-slave relationship: the master processor schedules and allocates work to the slave processors. Example: SunOS version 4
• Symmetric multiprocessing – each processor performs all tasks within the operating system. There is no master-slave relationship. Example: SunOS version 5
Advantages
• Increased throughput – by increasing the number of processors, more work gets done in less time. A certain amount of overhead is incurred in keeping all the parts working correctly, and contention for shared resources lowers the expected gain.
• Economy of scale – multiprocessor systems cost less than equivalent multiple single-processor systems because they share peripherals, mass storage and power supplies.
• Increased reliability – if the functions can be distributed properly among several processors, then the failure of one processor will not halt the system but only slow it down (graceful degradation and fault tolerance)
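A minimal sketch of the throughput advantage using Python's multiprocessing module; the task (summing the first n integers) and the pool size of four are arbitrary choices for the example, and in practice the speed-up is less than the processor count because of coordination overhead.

```python
# Illustrative sketch of increased throughput on a multiprocessor: a pool of
# worker processes shares a batch of independent CPU bound tasks.
from multiprocessing import Pool

def work(n):
    """A CPU bound task: sum the first n integers."""
    return sum(range(n))

if __name__ == "__main__":
    tasks = [10_000_000] * 8
    with Pool(processes=4) as pool:        # four processors working in parallel
        results = pool.map(work, tasks)
    print("All tasks completed:", len(results))
```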