Note: This material is compiled from different websites.
Difference Between Paging and Segmentation
Paging vs Segmentation
Paging is a memory management method used by operating systems. Paging allows the
main memory to use data residing on a secondary storage device. These data are stored
on the secondary storage device as blocks of the same size, called pages. Paging allows
the operating system to use data that will not fit into the main memory. Memory
segmentation is a method that provides memory protection. Each memory segment is
associated with a specific length and a set of permissions. When a process tries to access
memory, it is first checked to see whether it has the required permission to access the
particular memory segment.
What is Paging?
Paging is a memory management method used by operating systems. Paging allows the
main memory to use data residing on a secondary storage device. These data are stored
on the secondary storage device as blocks of the same size, called pages. Paging allows
the operating system to use data that will not fit into the main memory. When a program
tries to access a page, the page table is first checked to see whether that page is in the main
memory. The page table holds details about where the pages are stored. If the page is not in
the main memory, the event is called a page fault. The operating system is responsible for
handling page faults transparently, without exposing them to the program. The operating
system first finds where that particular page is stored in the secondary storage and then
brings it into an empty page frame in the main memory. Then it updates the page table to
indicate that the new data is in the main memory and returns control to the program that
initially requested the page.
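As a rough, minimal sketch of that sequence (in Python; the dictionary-based page table, the free-frame list, and the backing-store names below are invented for illustration, not a real OS interface):

    # Sketch of page-fault handling: page_table maps page numbers to frame
    # numbers, and None means the page is not in main memory.
    page_table = {0: 5, 1: None, 2: 9}
    free_frames = [3, 7]                        # empty page frames (assumed)
    backing_store = {1: "contents of page 1"}   # secondary storage (assumed)

    def access_page(page_number):
        frame = page_table[page_number]
        if frame is None:                       # page fault
            frame = free_frames.pop()           # pick an empty page frame
            print("loading", backing_store[page_number], "into frame", frame)
            page_table[page_number] = frame     # update the page table
        return frame                            # control returns to the program

    access_page(1)                              # triggers a page fault, then succeeds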
What is Segmentation?
Memory segmentation is a method that provides memory protection. Each memory segment
is associated with a specific length and a set of permissions. When a process tries to access
memory, it is first checked to see whether it has the required permission to access the
particular memory segment and whether the access is within the length specified for that
particular memory segment. If either of these conditions is not satisfied, a hardware
exception is raised. In addition, a segment may also have a flag indicating whether the
segment is in the main memory or not. If the segment is not residing in the main memory,
an exception will be raised and the operating system will bring the segment from the
secondary memory to the main memory.
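As a rough illustration, these checks could be sketched as follows (Python; the segment record, the operation names, and the exception types are assumptions made for the example):

    # Sketch of the checks performed on a segmented memory access.
    segment = {"base": 4096, "length": 1024, "perms": {"read"}, "present": True}

    def access(segment, offset, operation):
        if operation not in segment["perms"]:    # permission check
            raise PermissionError("hardware exception: operation not permitted")
        if offset >= segment["length"]:          # length check
            raise IndexError("hardware exception: offset beyond segment length")
        if not segment["present"]:               # not in main memory:
            raise RuntimeError("OS must fetch segment from secondary memory")
        return segment["base"] + offset          # the physical address

    print(access(segment, 100, "read"))          # 4196
    # access(segment, 100, "write") would raise the permission exception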
What is the difference between Paging and Segmentation?
In paging, memory is divided into equal-sized blocks called pages, whereas memory
segments can vary in size (which is why each segment is associated with a length attribute).
Sizes of the segments are determined according to the address space required by a process,
while in paging the address space of a process is divided into pages of equal size.
Segmentation provides protection associated with the segments, whereas paging does not
provide such a mechanism.
Paging and Segmentation
Paging
Primary memory is divided into small, equal-sized partitions (256, 512, or 1K bytes) called
page frames.
Processes are divided into blocks of the same size, called pages.
Only bring in the pages you are referencing and keep those you have recently referenced.
A page table is needed for this management.
Figure 7.9 shows a sequence of processes using the page frames.
Figure 7.10 shows the page tables entries.
Logical Address Mapping
Figure 7.11 shows a partitioning of the bits of an address to determine the page table entry
and the offset within the page for the address needed.
The process is generally (see the sketch below):
- Extract the page number as the leftmost n bits of the logical address.
- Use the page number as an index into the process page table to find the frame number f.
- The starting physical address of the frame is f * 2^m, and the physical address of the
  referenced byte is that number plus the offset. No addition is necessary; just append
  the offset bits to the frame number f.
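A minimal sketch of this translation in Python, assuming m = 10 offset bits (1K pages) and an invented page table:

    # Paged logical-to-physical address translation (illustrative sketch).
    M = 10                                         # offset bits: 2^10 = 1024-byte pages
    page_table = [7, 2, 5]                         # page number -> frame number (assumed)

    def translate(logical_address):
        page = logical_address >> M                # leftmost bits: the page number
        offset = logical_address & ((1 << M) - 1)  # rightmost m bits: the offset
        frame = page_table[page]                   # index into the page table
        return (frame << M) | offset               # append the offset bits to frame f

    print(translate(1 * 1024 + 20))                # page 1 -> frame 2: 2*1024 + 20 = 2068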
Segmentation
Paging suffers from internal fragmentation.
Segmentation maps segments representing data structures, modules, etc. into variable-sized
partitions. Again, not all segments of a process are loaded at a time, nor are they in
contiguous memory blocks.
We need a segment table very much like a page table.
Figure 7.12 shows a mapping of segmentation type logical to physical addresses.
Again the process is (see the sketch below):
- Extract the segment number as the leftmost n bits of the logical address.
- Use the segment number as an index into the process segment table to find the starting
  physical address of the segment.
- Compare the offset (the rightmost m bits) to the length of the segment for validity.
- The physical address is the sum of the starting physical address of the segment and the
  offset. A concatenation of the bits won't work here.
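And a matching sketch for the segmented case (table values again invented); note the length check and the genuine addition that paging did not need:

    # Segmented logical-to-physical address translation (illustrative sketch).
    M = 12                                          # offset field width in bits
    segment_table = [(4096, 1000), (20480, 3000)]   # segment -> (base, length)

    def translate(logical_address):
        seg = logical_address >> M                  # leftmost bits: segment number
        offset = logical_address & ((1 << M) - 1)   # rightmost m bits: the offset
        base, length = segment_table[seg]
        if offset >= length:                        # validity check against length
            raise IndexError("hardware exception: offset exceeds segment length")
        return base + offset                        # true addition, not concatenation

    print(translate((1 << 12) + 500))               # segment 1 -> 20480 + 500 = 20980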
Difference between Multiprogramming, Multitasking, Multithreading and
Multiprocessing
Introduction
In the context of computing and operating systems, one encounters many (confusing) terms that look similar
but refer to different concepts. In this post, I will try to summarize the basic differences between various
operating system types and explain why and how they are not the same. The information provided is not new and can
be found all over the internet, which may actually add to the confusion. I hope that having the terms
clarified in one place will make them easier to remember. There could be many reasons why there are different
operating system types, but it is mainly because each system type is designed from the ground up to meet certain design
goals. On the other hand, the underlying hardware architecture influences the way systems are designed and
implemented.
In a typical computing system, there are usually (many) concurrent application processes competing for (few)
resources, for example the CPU. An operating system must handle resource allocation with due care. There is a popular
operating systems term for such allocation: scheduling. Depending on the operating system's type, the goals of
scheduling processes can differ. Regardless of system type, there should be some kind of fairness when allocating
resources, the stated policy must be enforced, and the overall system usage should be balanced. As we will see later, the
goals that need to be achieved characterize the operating system type. Let us now talk about some operating system
types and terminology…
Multiprogramming
One time, I was at the post office standing in line waiting for my turn to be served. My turn came but I was not fully
prepared because my mail was not prepackaged. They gave me an empty box so I could package it on the side. I started
packaging my mail while another customer occupied my spot. It does not make sense to block the whole line while I
package my mail; it is better to let the other customers proceed and get served in the meantime. I think this example
is (to some extent) very similar in concept to the multiprogramming model, where programs are like customers and the CPU is like
the post office assistant. Assuming one assistant (a single-processor system), only one customer can be served at a
time. While a customer is being served, he or she continues until finished or until he or she steps aside to wait. As long as the
assistant is helping a customer, he does not switch to serve other customers.
In a multiprogramming system there are one or more programs (processes, or customers) resident in the computer's main
memory, ready to execute. Only one program at a time gets the CPU for execution while the others wait their turn.
The whole idea of having a multiprogrammed system is to optimize system utilization (more specifically, CPU time). The
currently executing program gets interrupted by the operating system between tasks (for example, while waiting for IO; recall
the mail packaging example), and control is transferred to another program in line (another customer). The running program keeps
executing until it voluntarily gives the CPU back or until it blocks for IO. As you can see, the design goal is very clear:
processes waiting for IO should not block other processes, which would waste CPU time. The idea is to keep the CPU
busy as long as there are processes ready to execute.
Note that in order for such a system to function properly, the operating system must be able to load multiple programs
into separate partitions of the main memory and provide the required protection, because one process could otherwise
modify another. Another problem that needs to be addressed when having multiple programs in memory is fragmentation,
as programs enter or leave (swap into and out of) the main memory. Yet another issue is that large programs may not fit
in memory all at once, which can be solved by using virtual memory.
In modern operating systems, programs are split into equally sized chunks called pages, but this is beyond the scope of
this article.
In summary, a multiprogramming system allows multiple processes to reside in main memory while only one program is
running. The running program keeps executing until it blocks for IO, and then the next program in line takes its turn to
execute. The goal is to optimize CPU utilization by reducing CPU idle time. Finally, please note that
multiprogramming is an old term, because in modern operating systems the whole program is not loaded completely into
the main memory.
Multiprocessing
Multiprocessing sometimes refers to executing multiple processes (programs) at the same time. This is confusing
because we already have multiprogramming (defined earlier) and multitasking (discussed later), which better
describe multiple processes running at the same time. Using the right terminology leaves less chance for confusion, so
what is multiprocessing then?
Multiprocessing actually refers to the CPU units rather than to the running processes. If the underlying hardware provides more
than one processor, then that is multiprocessing. There are many variations on the basic scheme, for example multiple
cores on one die, multiple dies in one package, or multiple packages in one system. In summary,
multiprocessing refers to the underlying hardware (multiple CPUs, cores), while multiprogramming refers to the software
(multiple programs, processes). Note that a system can be both multiprogrammed, by having multiple programs running
at the same time, and multiprocessing, by having more than one physical processor.
Multitasking
Multitasking has the same meaning as multiprogramming in the general sense, as both refer to having multiple (programs,
processes, tasks, threads) running at the same time. Multitasking is the term used in modern operating systems when
multiple tasks share a common processing resource (CPU and memory). At any point in time the CPU is executing one
task only, while the other tasks wait their turn. The illusion of parallelism is achieved when the CPU is reassigned to
another task (a context switch). There are a few main differences between multitasking and multiprogramming (based on the
definitions provided in this article). A task in a multitasking operating system is not a whole application program (recall
that programs in modern operating systems are divided into logical pages). A task can also refer to a thread of execution
when one process is divided into subtasks (multithreading is discussed later). The task does not hold the CPU until it
finishes, as in the older multiprogramming model, but rather gets a fair share of CPU time, called a quantum
(time sharing is discussed later in this article). Just to make it easy to remember: multitasking and multiprogramming
refer to a similar concept (sharing CPU time), where one term is used for modern operating systems and the other for
older operating systems.
Multi Threading
Before we proceed, let us recap for a minute. Multiprogramming refers to multiple programs resident in main memory
and (apparently, but not exactly) running at the same time. Multitasking refers to multiple processes running
simultaneously by sharing the CPU time. Multiprocessing refers to multiple CPUs. So where does multithreading fit in
the picture?
Multithreading is an execution model that allows a single process to have multiple code segments (threads) running
concurrently within the context of that process. You can think of threads as child processes that share the parent
process's resources but execute independently. Multiple threads of a single process can share the CPU in a single-CPU
system or (truly) run in parallel in a multiprocessing system. A multitasking system can have multithreaded processes,
where different processes share the CPU and at the same time each has its own threads.
The question is why we need multiple threads of execution within a single process context. Let me give an
example where it is more convenient to have a multithreaded application. Suppose that you have a GUI application
in which you want to issue a command that requires a long time to finish, for example a complex mathematical computation.
Unless you run this command in a separate execution thread, you will not be able to interact with the main application
GUI (for example, to update a progress bar), because it will be frozen (not responding) while the calculation is
taking place.
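Here is a minimal sketch of that idea using Python's standard threading module; the sleep-based "computation" and the printed "progress bar" are stand-ins for a real GUI event loop:

    import threading
    import time

    result = {}

    def long_computation():            # the command that takes a long time
        time.sleep(3)                  # stand-in for a complex calculation
        result["value"] = 42

    worker = threading.Thread(target=long_computation)
    worker.start()                     # run the command in a separate thread

    while worker.is_alive():           # the "GUI" stays responsive meanwhile
        print("updating progress bar...")
        time.sleep(0.5)
    print("done:", result["value"])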
Multithreading is a smart way to write concurrent software, but it also comes with a price, because the programmer has to
be aware of race conditions, where two or more threads access a shared resource and leave the system in an
inconsistent state, and of deadlocks. Thread synchronization (for example, using locks or semaphores) is used to solve
this problem, the details of which are beyond the scope of this article.
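For instance, two threads incrementing a shared counter is the classic race condition; guarding the update with a lock (one form of thread synchronization) keeps the state consistent. A minimal Python sketch:

    import threading

    counter = 0
    lock = threading.Lock()

    def increment():
        global counter
        for _ in range(100000):
            with lock:             # without the lock, the read-modify-write
                counter += 1       # steps of two threads can interleave

    threads = [threading.Thread(target=increment) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)                 # reliably 200000 with the lock held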
Time Sharing
Recall that in a single-processor system, parallel execution is an illusion. The CPU can execute one instruction from one
process at a time, even though multiple processes reside in main memory. Imagine a restaurant with only one
waiter and a few customers. There is no way for the waiter to serve more than one customer at a time, but if the waiter
is fast enough to rotate among the tables and provide food quickly, then you get the feeling that all customers
are being served at the same time. This is an example of time sharing, where CPU time (or waiter time) is shared
between processes (customers). Multiprogramming and multitasking operating systems are nothing but time sharing
systems. In multiprogramming, though the CPU is shared between programs, time sharing is imperfect
because one program keeps running until it blocks. In a multitasking (modern) operating system, by contrast, time
sharing is best manifested because each running process gets only a fair amount of CPU time, called a quantum.
Even in a multiprocessing system with more than one processor, each processor's time is still shared between
running processes. As you can see, all of these terms are related in one way or another; using the wrong term
in the wrong context is what creates the confusion, so keep that in mind.
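A toy round-robin scheduler makes the quantum idea concrete (the process names, run times, and quantum below are invented for illustration):

    from collections import deque

    # Each (name, remaining) pair is a ready process; quantum is the time slice.
    ready = deque([("A", 3), ("B", 5), ("C", 2)])
    quantum = 2

    while ready:
        name, remaining = ready.popleft()          # give the CPU to the next process
        ran = min(quantum, remaining)
        print(name, "runs for", ran, "unit(s)")
        if remaining > ran:                        # quantum expired: context switch,
            ready.append((name, remaining - ran))  # back of the line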
Real Time System
At the beginning of this article we mentioned that a system is characterized by the goals it needs to achieve. In a
typical time sharing operating system, processes are scheduled so that CPU time is shared among the group. Depending
on the scheduling algorithm, each process gets its share of CPU time, but there is no guarantee that a process
gets the CPU whenever it wants. In a real time system, on the other hand, a process is guaranteed to get CPU attention
when a specific event happens. There is an operational deadline from the time the event is triggered to the time
the system responds. Processes in a real time system are mission critical, for example in the case of industrial robots
on an assembly line, where at each stage a certain operation is expected to take place.
Conclusion
Multiprogramming, multitasking, multithreading, time sharing, and real time systems all refer to the software implementation
of scheduling processes for CPU execution. Each scheduling implementation serves certain design goals that characterize
a particular operating system type. Multiprocessing, on the other hand, refers to the number of CPU units the underlying
hardware provides. That is all for today; thanks for reading.
What is the Difference between Multitasking, Multi-Programming, Time-Sharing, and
Multiprocessing?
A task is an operation such as storing, printing or calculating.
Multitasking: Processing of two or more programs by one user concurrently on one processor.
Number of Users: 1
Number of Processors: 1
Order of Processing: Concurrent
Multiprogramming: Processing of two or more programs by multiple users concurrently on one processor.
Number of Users: Multiple
Number of Processors: 1
Order of Processing: Concurrent
Timesharing: Processing of two or more programs by multiple users in a round-robin fashion on one processor.
Number of Users: Multiple
Number of Processors: 1
Order of Processing: Round-robin
Multiprocessing: Processing of two or more programs by one or more users simultaneously on one or more
processors.
Number of Users: One or more
Number of Processors: Two or more
Order of Processing: Simultaneous
Multiprogramming: Multiprogramming is the technique of running several programs at a time
using timesharing. It allows a computer to do several things at the same time. Multiprogramming
creates logical parallelism. The concept of multiprogramming is that the operating system keeps
several jobs in memory simultaneously.
The operating system selects a job from the job pool and starts executing it; when that job needs to
wait for any I/O operation, the CPU is switched to another job. So the main idea here is that the CPU
is never idle.
Multitasking: Multitasking is the logical extension of multiprogramming.
The concept of multitasking is quite similar to multiprogramming, but the difference is that the switching
between jobs occurs so frequently that the users can interact with each program while it is running.
This concept is also known as a time-sharing system. A time-shared operating system uses CPU
scheduling and multiprogramming to provide each user with a small portion of the time-shared system.
For example, say you are printing a document of 100 pages. While your computer is doing
that, you can still do other jobs, like typing a new document. So, more than one task is performed.
One of the main differences between multiprogramming and multitasking is: "In multiprogramming,
a user cannot interact (everything is decided by the OS, like picking the next program, sharing on a time
basis, etc.), whereas in multitasking, a user can interact with the system (you can type a letter while
the other task of printing is going on)."
Multithreading: An application typically is implemented as a separate process with several threads
of control. In some situations a single application may be required to perform several
similar tasks. For example, a web server accepts client requests for web pages, images, sound,
and so forth. A busy web server may have several clients concurrently accessing it. If the web server
ran as a traditional single-threaded process, it would be able to service only one client at a time,
and the amount of time a client might have to wait for its request to be serviced could be enormous.
So it is efficient to have one process that contains multiple threads serving the same
purpose. This approach multithreads the web-server process: the server has a thread that
listens for client requests, and when a request is made, rather than creating
another process it creates another thread to service the request (see the sketch below). Multithreading
is used to gain advantages such as responsiveness, resource sharing, economy, and utilization of
multiprocessor architectures.
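A thread-per-request server can be sketched in a few lines with Python's standard socket and threading modules (the port number and the trivial echo "service" are assumptions for the example):

    import socket
    import threading

    def service(conn):                 # one thread services one client request
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"served: " + data)

    server = socket.socket()
    server.bind(("localhost", 8080))   # hypothetical address and port
    server.listen()
    while True:                        # the main thread only listens for requests
        conn, _ = server.accept()      # a new thread (not a new process)
        threading.Thread(target=service, args=(conn,)).start()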
This type of programming helps when more than one client uses a service. For example, take a database.
While I am typing this post, someone else may be doing the same type of job. If the database did not
support multithreading, no more than one person would be able to do that job at a time.
What Is The Difference Between Multitasking, Multiprogramming
And Multiprocessing?
In computing, multitasking is a method by which multiple tasks (also known as processes) share common
processing resources such as a CPU. The CPU appears to be actively executing more than one
task at a time. Multitasking solves this problem by scheduling which task is the one running at any
given time and when a waiting task gets a turn. These switches are managed by
reassigning the CPU from one task to another, which is called a context switch. Even on computers with more
than one CPU (called multiprocessor machines), multitasking allows many more tasks to be run than
there are CPUs.
Multiprogramming: Multiprogramming systems appeared in the 1960s. In that scheme,
several different programs in a batch were loaded into the computer's memory, and the first one began to run.
When the running program reached an instruction waiting for a device, its context was stored away,
and the next program in memory was given a chance to run. The process continued until all programs
finished running.
In multiprogramming, programs may experience delays; the very first program may well run for hours
without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was
not a problem.
Multiprocessing is execution using more than one processor at a time, but the processors must be coordinated.
Multiprocessing is a general term that can mean the dynamic assignment of a program to one of two or
more computers working on the same program at the same time (in parallel). As each computer may have its
own operating system, they have to be coordinated and managed properly in order to share
work.
Differentiate between multiprocessing and multiprogramming?
Multiprocessing is the coordinated processing of programs by more than one computer processor. Multiprocessing is a general term
that can mean the dynamic assignment of a program to one of two or more computers working in tandem or can involve multiple
computers working on the same program at the same time (in parallel).
With the advent of parallel processing, multiprocessing is divided into symmetric multiprocessing (SMP) and massively parallel
processing (MPP).
In symmetric (or "tightly coupled") multiprocessing, the processors share memory and the I/O bus or data path. A single copy of the
operating system is in charge of all the processors. SMP, also known as a "shared everything" system, does not usually exceed 16
processors.
In massively parallel (or "loosely coupled") processing, up to 200 or more processors can work on the same application. Each
processor has its own operating system and memory, but an "interconnect" arrangement of data paths allows messages to be sent
between processors. Typically, the setup for MPP is more complicated, requiring thought about how to partition a common database
among processors and how to assign work among the processors. An MPP system is also known as a "shared nothing" system.
Multiprocessing should not be confused with multiprogramming, or the interleaved execution of two or more programs by a processor.
Today, the term is rarely used since all but the most specialized computer operating systems support multiprogramming.
Multiprocessing can also be confused with multitasking, the management of programs and the system services they request as tasks
that can be interleaved, and with multithreading, the management of multiple execution paths through the computer or of multiple users
sharing the same copy of a program.
Multiprogramming is a rudimentary form of parallel processing in which several programs are run at the same time on a uniprocessor.
Since there is only one processor, there can be no true simultaneous execution of different programs. Instead, the operating system
executes part of one program, then part of another, and so on. To the user it appears that all programs are executing at the same time.
If the machine has the capability of causing an interrupt after a specified time interval, then the operating system will execute each
program for a given length of time, regain control, and then execute another program for a given length of time, and so on. In the
absence of this mechanism, the operating system has no choice but to begin to execute a program with the expectation, but not the
certainty, that the program will eventually return control to the operating system.
If the machine has the capability of protecting memory, then a bug in one program is less likely to interfere with the execution of other
programs. In a system without memory protection, one program can change the contents of storage assigned to other programs or
even the storage assigned to the operating system. The resulting system crashes are not only disruptive, they may be very difficult to
debug since it may not be obvious which of several programs is at fault.