COSC 4740
Chapter 3: Processes

What is a Process?
• Code section: sequence of instructions
• Static data section: e.g., global variables and arrays
• Dynamic data section: e.g., memory obtained from malloc() or calloc() calls
• Stack section: e.g., local variables, function/procedure parameters, and return addresses
• PC (program counter): next instruction

Types of Processes
• Either an OS process (e.g., a system call) or a user process
• Either an I/O bound process or a CPU bound process
  o I/O bound processes: issue many I/O requests and do little computation
  o CPU bound processes: use the CPU frequently and I/O infrequently
• Either an independent process or a cooperating process
  o Independent processes:
    – Do not affect the execution of other processes
    – Are not affected by other processes
    – Do not share any data with other processes
  o Cooperating processes (e.g., the producer/consumer example):
    – Can affect the execution of other processes
    – Can be affected by other processes
    – Share data with other processes
• Either a foreground process or a background process
  o Foreground processes:
    – Hold the terminal
    – Can receive input from and return output to the user
  o Background processes:
    – Detached from the terminal they were started from
    – Run without user interaction
    – The “&” sign lets you start a process as a background process in UNIX

Problems
• Process scheduling: deciding when, and to which process, to give the CPU
• Inter-process communication
  – Mutual exclusion: if data is shared, we may want to restrict other processes’ access, to avoid interference between two processes
  – Process synchronization: we may want processes to reach a particular point at a specific time
  – Deadlock (recognition and resolution): we need to know when it happens and resolve the problem (if we allow it to happen at all)

Process Scheduling
• Remember: multiprocessing and timesharing
• As a process executes, it changes state
  – new: the process is being created
  – running: instructions are being executed
  – waiting: the process is waiting for some event to occur
  – ready: the process is waiting to be assigned to a processor
  – terminated: the process has finished execution

Diagram of Process State

PCB (Process Control Block)
Each process is represented by a PCB in the OS:
– Process state
– PID (process ID)
– PC (program counter)
– Contents of the processor’s registers
– Scheduling information
– Memory management information
– List of I/O devices allocated to the process
– List of open files
– etc. (e.g., priority of the process, amount of CPU time used, time limits, …)

CPU Switch From Process to Process

Process Scheduling
• The process scheduler selects among available processes for next execution on the CPU
• It maintains scheduling queues of processes:
  – Job queue: set of all processes in the system
  – Ready queue: set of all processes residing in main memory, ready and waiting to execute
  – Device queues: sets of processes waiting for an I/O device
  – Waiting queues: lists of PCBs, each representing a “waiting” process that is waiting for termination of a child process or reception of a signal/message
• Processes migrate among the various queues

Process Representation in Linux
• Represented by the C structure task_struct:

  pid_t pid;                    /* process identifier */
  long state;                   /* state of the process */
  unsigned int time_slice;      /* scheduling information */
  struct task_struct *parent;   /* this process's parent */
  struct list_head children;    /* this process's children */
  struct files_struct *files;   /* list of open files */
  struct mm_struct *mm;         /* address space of this process */

Ready Queue And Various I/O Device Queues

Representation of Process Scheduling

Schedulers
• Long-term scheduler:
  o Job pool: in a multiprogrammed batch system, there are often more processes submitted than can be executed immediately; these processes are spooled to the HDD (secondary storage)
  o The long-term scheduler selects processes from the job pool and fills the ready queue
  o Executes infrequently, so it can take more time to select a process
  o Mainly used in batch systems
  o Controls the number of processes in memory
    [1] It should select a good mix of I/O bound and CPU bound processes, to keep both the I/O devices and the CPU busy
• Short-term scheduler (CPU scheduler):
  o Executes very frequently, so it must be very fast to avoid wasting CPU time
  o Selects a process from the ready queue when the current process releases the CPU
    [1] Some scheduling techniques force the current process to release the CPU (i.e., preemptive scheduling)
• Medium-term scheduler:
  o Swaps partially executed processes in and out to temporarily free up main memory, by reducing the degree of multiprogramming (i.e., the number of processes in main memory)
  o We may need this scheduler to improve the mix of I/O bound and CPU bound processes in main memory, or to react to an increase in dynamic memory requests (e.g., malloc())

Medium-term Scheduler

Context Switching
• Occurs when the current process releases the CPU
• Procedure:
  o The current process releases the CPU, voluntarily or involuntarily
  o The OS updates the PCB of the current process with the current information (e.g., PC, contents of CPU registers, etc.)
  o The short-term scheduler selects the next process from the ready queue
  o The OS loads the CPU registers with new data from the PCB of the next process to be executed
  o Execution starts at the instruction pointed to by the PC
• Occurs frequently and is expensive, so it decreases system performance
• Some solutions:
  o Multiple sets of registers
  o Special HW instructions (e.g., a single instruction to load or store all registers)
  o Threads whenever possible (e.g., a single process includes both producer and consumer; threads will be discussed later in much more detail)

Process Creation
• A parent process creates child processes, which, in turn, create other processes, forming a tree of processes
• Resource sharing:
  – Parent and children share all resources
  – Children share a subset of the parent’s resources
  – Parent and child share no resources
• Execution:
  – Parent and children execute concurrently
  – Parent waits until children terminate

Process Creation (Cont.)
• Address space:
  – Child is a duplicate of the parent
  – Child has a program loaded into it
• UNIX examples:
  – The fork system call creates a new process
  – The exec system call is used after a fork to replace the process’s memory space with a new program

Process Creation (Cont.)
• If you use the “wait” system call, the parent waits until some or all of its children have terminated
• If you don’t use the “wait” system call, the parent continues to execute concurrently with its children
• If you use only the “fork” call, the child process is a duplicate of the parent process
• If you use “fork” in the parent and “execv” in the child, the child process will have a program loaded into it

C Program Forking Separate Process

  int main()
  {
      pid_t pid;

      /* fork another process */
      pid = fork();
      if (pid < 0) { /* error occurred */
          fprintf(stderr, "Fork Failed");
          exit(-1);
      }
      else if (pid == 0) { /* child process */
          execlp("/bin/ls", "ls", NULL);
      }
      else { /* parent process */
          /* parent will wait for the child to complete */
          wait(NULL);
          printf("Child Complete");
          exit(0);
      }
  }

A Tree of Processes on Solaris

Process Termination
• The process executes its last statement and asks the operating system to delete it (exit)
  – Output data from the child to the parent (via wait)
  – The process’s resources are deallocated by the operating system
• A parent may terminate execution of its child processes (abort) when:
  – The child has exceeded its allocated resources
  – The task assigned to the child is no longer required
  – The parent is exiting
• If the operating system does not allow a child to continue when its parent terminates: cascading termination

Inter-Process Communication (IPC)
• Cooperative processes: why do we need them?
  o Information sharing
  o Computation speedup on multi-processor systems
  o Modularity
  o Convenience
• The OS should provide cooperative processes with communication mechanisms so that they can communicate with each other (at least read/write or send/receive)
• Example of cooperative processes: the producer-consumer problem
  o Unbounded buffer or bounded buffer
  o The buffer can be provided by the OS (i.e., message passing), e.g., pipe()
  o The buffer can be programmed in the cooperative processes (i.e., shared memory)

Direct Communication
• Both sender and receiver must name each other (symmetric addressing):
  o Process Q: send(P, message)
  o Process P: receive(Q, message)
• Only the sender names the recipient (asymmetric addressing):
  o Process Q: send(P, message)
  o Process P: receive(id, message)
• Pros: provides some degree of protection
• Cons: limited modularity (e.g., changing the name of P)

Indirect Communication
• Messages are sent to and received from ports (or mailboxes)
• Each port has a unique ID
• Process P: send(Port ID, message)
• Process Q: receive(Port ID, message)
• A mailbox can belong to the OS or be programmed in the cooperative processes

Buffering
• Port has zero capacity:
  o Synchronization is necessary: the sender waits for the receiver. When both of them are ready (i.e., rendezvous), the sender can send a message
• Port has bounded capacity:
  o N messages can be stored in the port temporarily; N is the port size
  o The sender does not wait while the port is not full
  o When the port is full, the sender waits until one message in the port is consumed
• Port has unbounded capacity:
  o The sender never has to wait for the receiver

Bounded-Buffer – Shared-Memory Solution
• Shared data:

  #define BUFFER_SIZE 10
  typedef struct {
      ...
  } item;
  item buffer[BUFFER_SIZE];
  int in = 0;
  int out = 0;

• The solution is correct, but can only use BUFFER_SIZE-1 elements

Bounded-Buffer – Insert() Method

  while (true) {
      /* produce an item */
      while (((in + 1) % BUFFER_SIZE) == out)
          ; /* do nothing -- no free buffers */
      buffer[in] = item;
      in = (in + 1) % BUFFER_SIZE;
  }

Bounded-Buffer – Remove() Method

  while (true) {
      while (in == out)
          ; /* do nothing -- nothing to consume */
      /* remove an item from the buffer */
      item = buffer[out];
      out = (out + 1) % BUFFER_SIZE;
      return item;
  }

Synchronization
• Message passing may be either blocking or non-blocking
• Blocking is considered synchronous
  – A blocking send has the sender block until the message is received
  – A blocking receive has the receiver block until a message is available
• Non-blocking is considered asynchronous
  – A non-blocking send has the sender send the message and continue
  – A non-blocking receive has the receiver receive a valid message or null

Client-Server Communication
• Sockets
• Remote Procedure Calls
• Remote Method Invocation (Java)

Sockets
• A socket is defined as an endpoint for communication
• Concatenation of an IP address and a port
• The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
• Communication takes place between a pair of sockets

Socket Communication

Remote Procedure Calls
• Remote procedure call (RPC) abstracts procedure calls between processes on networked systems
• Stubs: client-side proxy for the actual procedure on the server
• The client-side stub locates the server and marshalls the parameters
• The server-side stub receives this message, unpacks the marshalled parameters, and performs the procedure on the server

Execution of RPC

Remote Method Invocation
• Remote Method Invocation (RMI) is a Java mechanism similar to RPC
• RMI allows a Java program on one machine to invoke a method on a remote object
Marshalling Parameters

Issues with IPC
These issues apply to both message-passing and shared-memory schemes.
• The sender or receiver has been terminated while the other one is waiting for (or has sent) a message
  – Best solution: one of the error return values of the send/receive function is “the sender/receiver has been terminated”
  – So-so: the OS terminates the other one
  – Worst solution: the OS does not care about it
• The message was lost
  – Best: it is the OS’s responsibility
  – So-so: the OS detects it and tells the sender about it
  – Worst: it is the sender’s responsibility
• The message was damaged
  – Parity check, …

Q&A