Concurrency, Processes and Threads
© 2004, D. J. Foreman

Concurrency
■ The appearance that multiple actions are occurring at the same time
■ On a uni-processor, something must make that happen
  • A collaboration between the OS and the hardware
■ On a multi-processor, the same problems exist (for each CPU) as on a uni-processor

Multiprogramming
Combines two multiplexing types:
■ Space-multiplexing: physical memory (Process0, Process1, …, Processn)
■ Time-multiplexing: the physical processor

Multiprogramming-2
■ N programs apparently running simultaneously
  • Space-multiplexed in executable memory
  • Time-multiplexed across the central processor
■ Why it is desired: greater throughput (work done per unit time)
  • More work occurring at the same time
■ Resources required: CPU and memory

The CPU
■ Instruction cycles
  • Access memory and/or registers
  • Sequential flow via the "instruction register"
  • One instruction-completion at a time
    (Pipelines only increase the number of completions per time unit; they are still sequential!)
■ Modes of execution
  • Privileged (system)
  • Non-privileged (user)

Context Switching
Four atomic actions, depending on direction:
■ Kernel to user
  • Memory protection on
  • Privilege mode off
  • Interrupts on (allowed)
  • Set instruction counter
■ User to kernel
  • Memory protection off
  • Privilege mode on
  • Interrupts off (NOT allowed)
  • Save instruction counter

Memory
■ Sequential addressing (0 – n)
■ Partitioned
  • System: inaccessible by user programs
  • User: partitioned for multiple users; accessible by system programs

Processes-1
A process is:
■ A running program and its address space
■ A unit of resource management
■ Independent of other processes
  • NO sharing of memory with other processes
  • May share files open at fork time
One program may start multiple processes, each in its own address space
Processes-2
(Figure: processes abstracted by the operating system over memory, the CPU, and data streams)

Process & Address Space
(Figure: an address space holds code, data, stack, and resources, forming an abstract machine environment)

Processes-3
The process life-cycle:
■ Creation
  • User or scheduled system activation
■ Execution
  • Running: performing instructions (using the ALU)
  • Waiting: for resources or signals
  • Ready: all resources available except memory and the ALU
■ Termination
  • Process is no longer available

Processes-4
Space multiplexing:
■ Each process operates in its own "address space"
■ An address space is a sequence of memory locations (addresses) from 0 to n, as seen by the application
■ Process addresses must be "mapped" to real addresses in the real machine
■ More on this later

Processes-5
Time multiplexing:
■ Each process is given a small portion of time in which to perform instructions
■ The O/S controls the time per process and which process gets control next
  • Many algorithms exist for this
  • No rules (from the user's/programmer's view) on which process will run next or for how long
  • Some OS's dynamically adjust both time and sequence

Processes-7
■ FORK (label): starts a process running from the labeled instruction; the new process gets a copy of the address space
■ QUIT (): the process terminates itself
■ JOIN (count): an atomic operation that merges >= 2 processes
  • Really more like "quit, unless I'm the only process left"

Threads-1
■ A unit of execution within a process (like a lightweight process, an "lwp"); also called a "task"
■ Threads share the address space, data and devices with the other threads within the process
■ Each thread has a private stack and status (IC, state, etc.)
■ Multi-threading: >1 thread per process
  • Limited by the system to some maximum number, per system and per process

Thread Models
(Figure: thread models in DOS, classic UNIX, the JRE, and WinXX/Solaris/Linux/OS/2)
Threads-2
Several thread APIs:
■ Solaris: kernel-level threads & pthreads
■ Windows: kernel-level threads & pthreads
■ OS/2: kernel-level threads
■ POSIX (pthreads): a full set of functions
  • #include <pthread.h> // for C, C++
  • Allows porting without re-coding
■ Java threads are implemented in the JVM, independent of OS support
  • Like the multiprogramming implementation in Win3.1
  • Uses underlying kernel support where available

Threads-3
■ Windows:
    CreateThread( DWORD dwCreateFlags = 0,
                  UINT nStackSize = 0,
                  LPSECURITY_ATTRIBUTES lpSecurityAttrs = NULL );
■ POSIX (native) (Linux, Solaris, Windows):
    iret1 = pthread_create( &thread1, NULL,
                            print_message_function,
                            (void*) message1 );

Threads-4
■ Advantages of kernel-supported threads:
  • May request resources with or without blocking on the request
  • A blocked thread does NOT block other threads
  • Inexpensive context switch
  • Utilize MP architecture
■ The thread library for user threads is in user space
  • The thread library schedules user threads onto LWP's
  • LWP's are implemented by kernel threads and scheduled by the kernel

Notes on Java
The JVM:
■ uses monitors for mutual exclusion
■ provides wait and notify for cooperation

Java & Threads-1
Thread creation: 2 ways
1. Extension of the Thread class:

    import java.lang.*;
    public class Counter extends Thread {
        public void run()   // overrides Thread.run
        {
            ....
        }
    }

Java & Threads-2
2. Implementing the Runnable interface:

    import java.lang.*;
    public class Counter implements Runnable {
        Thread T;
        public void run() {
            ....
        }
    }

■ An instance of the Thread class is held as a variable of the Counter class
■ Can still extend the Counter class
Java & Threads-3
Difference between the two methods:
■ Implementing Runnable gives greater flexibility in the creation of the class Counter
■ The Thread class also implements the Runnable interface

Wait & Signal – semaphores
Classical definitions:
■ Wait – P(s)   // make me wait for something
  • DO WHILE (s <= 0) END   // spin until s > 0
  • s = s - 1               // when s becomes > 0, decrement it
■ Signal – V(s)   // tell others: my critical job is done
  • s = s + 1
■ These MUST appear as ATOMIC operations to the application