TDC561
Network Programming
Week 6-7:
Concurrency Aspects : POSIX Threads; Multithreading
Camelia Zlatea, PhD
Email: [email protected]
References
 W. Richard Stevens, UNIX Network Programming, Volume 1: Networking APIs: Sockets and XTI, 2nd edition, Prentice Hall, 1998 (ISBN 0-13-490012-X)
– Chap. 23
 John Shapley Gray, Interprocess Communications in UNIX: The Nooks and Crannies, Prentice Hall PTR, NJ, 1998
– Chap. 11
Terms
 Unit of resource ownership: process, task
– virtual address space is allocated to it (process image map)
– resources assigned to it (main memory, files, I/O devices)
 Unit of Dispatching: process, thread (lightweight
process)
– the entity scheduled and dispatched by the OS
 Multithreading - multiple threads of execution within
a single process
– concurrency at process level
Threads vs. Processes
 Creation of a new process using fork is
expensive (time & memory).
 A thread (sometimes called a lightweight
process) does not require lots of memory or
startup time.
UNIX Processes
 A process may create sub-processes
– fork();
 A process may terminate
– exit();
 A process may put itself to sleep temporarily
– sleep(20);
– pause();
– wait();
 Processes
– synchronization mechanisms
– communication mechanisms
UNIX Threads
 Multiple Processes - concurrency at OS level
 Multiple Threads - concurrency at process level
– thread = flow of control in a process
– multiple threads (stream of instructions) are executed within
the same process
– threads share code & data (address space)
– threads have their own execution stack, PC, register set and
states
– full process context switches are avoided (switching between threads of the same process is cheaper)
– efficient mapping on multi-processor machines
 A thread (sometimes called a lightweight process)
does not require lots of memory or startup time.
Processes
 Process - a program in execution
– process - active entity
– program - passive entity (binary file)
 Address Space - list of memory locations from where a
process reads/writes (code/data/stack)
 Set of registers (PC, SP, ...)
 Process Table - linked list of structures associated w/
processes
 System Calls - interface between OS and User process
Process Control Block (process attributes)
 Process State
– new, ready, running, blocked, terminated
 Process Image Map Pointer
 Process ID
– assigned at process creation
 Program Counter (PC)
– address of next instruction to be executed
 CPU Registers (saved process context)
 List of Open File Descriptors
 I/O Devices Attached
 CPU Scheduling Info (priority)
Process Image Map
[Diagram: the Process Table contains one entry per process (Proc. 1 … Proc. n); each entry points to the Process Image, which consists of the Text/Code Segment, the Data Segment, the Stack Segment, and the Process Control Block.]
Process States
 New - process created ( Ex: fork(); )
 Ready - process is waiting to be
assigned to a processor (inserted in ready queue)
 Running - instructions are being
executed
 Blocked - wait for events to occur
(inserted in queue) Ex: wait(); pause();
 Terminated - normal/abnormal
termination (exit();)
Process Model
[State diagram: a created process enters New and then Ready; dispatch moves it to Running in user mode; a system call or interrupt switches it to Running in kernel mode, and it returns to user mode when the call or interrupt returns; quantum expiry sends it back to Ready; sleep moves it to Blocked/Suspended and wakeup returns it to Ready; exit moves it to Terminated.]
 Context of a Process
– process state (defined by its code)
– value of u-area
– values of registers the process uses
– contents of user and kernel stacks
– is associated with the process image map
 Context Switching
– the system executes a process in the context of that process
– when the kernel decides to execute another process, it performs
a context switch
– the kernel saves enough information so that it can later switch
back to the first process and resume its execution
 Mode Switching
– moving from user mode to kernel mode
– the kernel saves the information needed to return to user mode
 User mode
– processes in user mode can access their own instructions and
data; NOT the kernel's or another process's code or data
 Kernel mode
– the process can access system (kernel) code and data as well as
user addresses
– the kernel is part of each process
– the kernel executes on behalf of the process
[Diagram: processes P1–P4 run above the OS and hardware; each executes partly in user mode (U) and partly in kernel mode (K), with the kernel layer sitting between the user processes and the hardware.]
Context Switching
[Diagram: to switch from P1 to P2, the OS saves P1's state in PCB1 and reloads P2's state from PCB2; to switch back, it saves P2's state in PCB2 and reloads P1's state from PCB1.]
Context Switching
 Switching the CPU to another process by saving the
state of the old process (PCB) and loading the state of the
new process (PCB)
 Pure Overhead
 Performance Bottleneck
 Avoid Overhead of Context Switching by introducing
new structures:
THREADS
Context Switching
[Diagram: multitasking interleaves the execution of several processes through context switches, compared with sequential execution, where each process runs to completion in turn.]
Multithreaded environment
 PROCESS
– a virtual address space that holds the process
image
– all resources allocated: IPC channels, files etc.
 THREADS
– an execution state: running, ready,..
– an execution context: PC, SP, other registers
– a per thread stack
Threads
 A thread (lightweight process, LWP) is a basic unit of
CPU utilization; it consists of:
– program counter
– register set
– stack space
– thread control block
 A thread shares with its peer threads its:
– code segment
– data segment
– operating system resources
 A traditional or heavyweight process is a task with
one thread
Threads (Cont.)
 In a multithreaded task, while one server thread is
blocked or waiting, a second thread in the same
task can run.
 Cooperation of multiple threads in the same job
confers higher throughput and improved
performance.
– the kernel is not involved in this type of intra-task
communication
 Applications that require sharing a common buffer
benefit from using threads.
Threads (Cont.)
[Diagram: a single process containing several threads; each thread has its own program counter, while all share the process's text segment and data segment.]
Threads (Cont.)
 All threads of a process share the state and
resources of that process
 Thread execution states:
– Ready
– Running
– Blocked
– Terminated
Thread Operations
 Spawn: create a new thread within a process and
place it in ready list
– program counter
– register set
– stack space
 Block: a thread waits for an event to occur
– PC, register set and SP are saved
– the next ready thread is dispatched
 Unblock: the thread is moved to ready when the event is
delivered
 Finish: the thread completes and its register context
and stacks are de-allocated
fork()
[Diagram: fork() duplicates the parent process; the child process receives its own copies of the data segment (global variables), the text/code, and the stack.]
pthread_create()
[Diagram: pthread_create() starts a child thread inside process A's address space; the parent and child threads share the data segment (global variables) and the text/code, but each thread gets its own stack.]
Multiple Threads
 Each process can include many threads.
 All threads of a process share:
– memory (program code and global data)
– open file/socket descriptors
– signal handlers and signal dispositions
– working environment (current directory, user ID, etc.)
Thread-Specific Resources
 Each thread has its own:
– Thread ID (integer)
– Stack, Registers, Program Counter
– errno
 Threads within the same process can
communicate using shared memory.
– Issues
» Mutual Exclusion
» Synchronization
Solaris Threads
 Solaris 2 is a version of UNIX
– support for threads at kernel and user levels
– symmetric multiprocessing
– real-time scheduling
 LWP - lightweight processes - an intermediate level
between user-level threads and kernel threads.
Solaris Threads (Cont.)
Resource needs of thread types:
 Kernel Thread: small data structure and a stack;
thread switching does not require changing
memory access information - relatively fast
 LWP: PCB with register data, accounting and
memory information; switching between LWPs is
relatively slow.
 User-level thread: only needs a stack and program
counter; no kernel involvement means fast switching;
the kernel only sees the LWPs that support user-level
threads.
Thread States
 User-Level Thread
– Active
– Runnable
– Stopped
– Sleeping
 Lightweight Process (maps user threads onto kernel
threads)
– Running
– Blocked
– Runnable
– Stopped
Thread States
 Ready - may be scheduled for execution
– priority scheduling
 Standby - selected to run
– waits for processor availability
 Running - executes on the CPU until it is preempted,
its quantum expires, it blocks, or it terminates
 Waiting - blocked waiting for event delivery or to synchronize
with other threads
 Transition - the thread is “Ready” but some resources
are not available
 Terminated
User-Level vs. Kernel Level Threads
 Advantages of ULT
– performance: low-cost thread operations (do not
require crossing protection domains)
– flexibility: scheduling can be application specific
– portability: a ULT thread library is easy to port
 Disadvantages of ULT
– if a ULT blocks, the entire process (all
threads of that process) is blocked
– cannot take advantage of multiprocessing
User-Level vs. Kernel Level Threads
[Diagram: with ULTs, thread scheduling is done in user space while the kernel performs only process scheduling; with KLTs, thread scheduling is done in the kernel.]
Threads in Solaris 2
[Diagram: tasks 1–3 contain user-level threads that are multiplexed onto LWPs; each LWP is backed by a kernel thread, and kernel threads are scheduled onto CPUs by the kernel.]
Windows NT Architecture
[Diagram: Win32, DOS (in a VDM), Win16 (in WOW on a VDM), OS/2, and POSIX applications run in user mode on top of the Win32, OS/2, and POSIX subsystems; all subsystems rest on the Executive services (kernel mode), which run on the hardware.]
Windows NT Threads
 Win32 Subsystem
– multiple threads of execution
» Ex.: A word processor that spell-checks your document as you type
– threads are scheduled preemptively
» A thread receives a specific amount of time for execution, and when
that time is up, Windows NT stops executing that thread and begins
executing another thread
» preemptive multitasking with other applications and subsystems
 VDM (Virtual DOS Machine)
– a DOS program executes as a single thread in a separate VDM
 WOW (Windows on Windows)
– cooperative multitasking
» each application performs a sequence of operations and then returns
control to the OS
– non-preemptive scheduling
POSIX Threads
 POSIX (Portable Operating System Interface)
 POSIX threads are similar to Solaris 2 threads
– Difference: POSIX threads use attribute objects to specify the
properties of the threads
» Stack Size, Stack Address
» Schedule Policy and Parameters
• FCFS, RR, Preemptive Priority Policy
 Thread package (pthread.h)
– thread creation, destruction
– thread mutual exclusion
– thread synchronization (condition variables)
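To make the attribute mechanism concrete, here is a sketch of typical usage (an assumption, not taken from the course examples) that sets a larger stack and a round-robin scheduling policy before creating a thread; start_routine is a hypothetical thread function.

#include <pthread.h>
#include <sched.h>

extern void *start_routine(void *);                 /* hypothetical thread function */

int create_rr_thread(pthread_t *tid)
{
    pthread_attr_t attr;
    struct sched_param sp;
    int status;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1024 * 1024);               /* 1 MB stack */
    pthread_attr_setschedpolicy(&attr, SCHED_RR);                /* round-robin policy */
    sp.sched_priority = sched_get_priority_min(SCHED_RR);
    pthread_attr_setschedparam(&attr, &sp);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED); /* use attr, not inherited policy */

    status = pthread_create(tid, &attr, start_routine, NULL);    /* 0 on success, error number on failure */
    pthread_attr_destroy(&attr);                                 /* safe once the thread is created */
    return status;
}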
Posix Threads
 We will focus on Posix Threads - the most widely
supported threads programming API.
 You need to link with “-lpthread”
POSIX Threads API
 thread creation and termination
pthread_create(&tid, NULL, func, arg)
pthread_exit(status)
 thread join
pthread_join(tid, &status)
 mutual exclusion
pthread_mutex_lock(&lock)
pthread_mutex_unlock(&lock)
 condition variable
pthread_cond_wait(&c,&lock)
pthread_cond_signal(&c)
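A minimal sketch (not from the lecture examples) tying the creation and join calls together; compile with cc prog.c -lpthread as noted on the previous slide.

#include <pthread.h>
#include <stdio.h>

static void *hello(void *arg)
{
    printf("hello from thread %d\n", *(int *) arg);
    return NULL;                        /* return value could be collected by pthread_join */
}

int main(void)
{
    pthread_t tid;
    int id = 1;

    if (pthread_create(&tid, NULL, hello, &id) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_join(tid, NULL);            /* wait for the thread to finish */
    return 0;
}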
Thread Creation
int pthread_create(pthread_t *tid,
                   const pthread_attr_t *attr,
                   void *(*func)(void *),
                   void *arg);
 func is the function to be called; when func() returns the
thread is terminated.
 The return value is 0 for OK and >0 (an error number) on error.
 Does not set errno
 Thread ID is returned in tid
 Thread attributes can be set using attr, including detached
state and scheduling policy. You can specify NULL and get the
system defaults.
Thread IDs
 Each thread has a unique ID; a thread can find out its
ID by calling pthread_self().
 Thread IDs are of type pthread_t which is usually an
unsigned int. When debugging it's often useful to do
something like this:
printf("Thread %u:\n",pthread_self());
Thread Arguments
 When func() is called the value arg specified in the
call to pthread_create() is passed as a parameter.
 func can have only 1 parameter, and it can't be larger
than the size of a void *.
 Complex parameters can be passed by creating a
structure and passing the address of the structure.
 The structure can't be a short-lived local variable of the function
calling pthread_create(): the new thread runs on its own stack, and
the caller's local may go out of scope before the thread reads it.
Thread args example
struct pair { int x, y; };

void *myfunc(void *arg) {
    struct pair *foo = (struct pair *) arg;
    printf("%u: sum of %d and %d is %d\n",
           (unsigned) pthread_self(), foo->x, foo->y,
           foo->x + foo->y);
    return NULL;
}
Thread Lifespan
 Once a thread is created, it starts executing the
function func() specified in the call to
pthread_create().
 If func() returns, the thread is terminated.
 A thread can also be terminated by calling
pthread_exit().
 If main() returns or any thread calls exit(), all
threads are terminated.
Detached State
 Each thread can be either joinable or detached.
 Detached: on termination all thread resources are
released by the OS. A detached thread cannot be
joined.
 There is no way to get at the return value of the thread (a
pointer to something: void *).
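A sketch of creating a thread that starts out detached (assumed typical usage; worker is a hypothetical thread function). A running thread can also detach itself with pthread_detach(pthread_self()).

#include <pthread.h>

extern void *worker(void *);                       /* hypothetical thread function */

int spawn_detached(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    int status;

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    status = pthread_create(&tid, &attr, worker, NULL);   /* cannot be joined later */
    pthread_attr_destroy(&attr);
    return status;
}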
Joinable Thread
• Joinable: on thread termination the thread ID and
exit status are saved by the OS.
• One thread can "join" another by calling
pthread_join - which waits (blocks) until a
specified thread exits.
int pthread_join(pthread_t tid, void **status);
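A sketch (assumed example, not course code) of collecting a thread's exit value through the status argument: the thread returns a pointer to heap storage, and the joiner reads and frees it.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *compute(void *arg)
{
    int *result = malloc(sizeof(int));
    *result = 2 * *(int *) arg;            /* some computed value */
    return result;                         /* becomes the join status */
}

int main(void)
{
    pthread_t tid;
    int input = 21;
    void *status;

    pthread_create(&tid, NULL, compute, &input);
    pthread_join(tid, &status);            /* blocks until compute() returns */
    printf("result = %d\n", *(int *) status);
    free(status);
    return 0;
}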
Thread Operations
 Spawn: create a new thread within a
process and place it in ready list
– PC, register set, stack space
 Block: a thread waits for an event to occur
– PC, register set and SP are saved
– the next ready thread is dispatched
 Unblock: the thread is moved to ready
when the event is delivered
 Finish: the thread completes and its register
context and stacks are de-allocated
Threads vs. Processes
 Advantages
– Operations on threads are cheaper than the
corresponding operations on processes
– inter-thread communication is achieved without
kernel intervention, through shared memory
 Disadvantages
– synchronization is critical
– easy to induce race conditions
Example threads
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void *work(void *arg) {
    fprintf(stderr, "Thread ID: %u\n", (unsigned) pthread_self());
    return NULL;
}

void monitor_ths(int num_ths) {
    pthread_t *tid;
    int i, error;

    if ((tid = (pthread_t *) calloc(num_ths, sizeof(pthread_t))) == NULL)
        return;
Example threads
    /* create a thread for each separate activity */
    for (i = 0; i < num_ths; i++)
        if ((error = pthread_create(tid + i, NULL, work,
                                    (void *) &num_ths)) != 0)
            fprintf(stderr, "Could not create thread %d: %s\n",
                    i, strerror(error));
    for (i = 0; i < num_ths; i++)
        pthread_join(*(tid + i), NULL);
    free(tid);
}
Example Concurrent Processes
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <sys/wait.h>
void work(void) {
    fprintf(stderr, "Process ID: %d\n", (int) getpid());
}

int main(void) {
    pid_t pid;
    int i, status;

    fprintf(stderr, "Process ID: %d\n", (int) getpid());
    for (i = 1; i <= 3; i++)
        if (fork() == 0) {
            work();
            return 0;               /* child exits after its work */
        }
    while ((pid = wait(&status)) != -1)
        printf("Child %d is done\n", (int) pid);
    return 0;
}
Shared Global Variables
int counter = 0;

void *myfunc(void *arg) {
    counter++;                          /* unsynchronized update: race condition */
    printf("Thread %u is number %d\n",
           (unsigned) pthread_self(), counter);
    return NULL;
}

int main(void) {
    int i;
    pthread_t tid;

    for (i = 0; i < 10; i++)
        pthread_create(&tid, NULL, myfunc, NULL);
    pthread_exit(NULL);                 /* let the remaining threads finish */
}
 Sharing global variables introduces the problem of mutual
exclusion
Solving Concurrency Problems
 pthreads includes support for Mutual Exclusion
primitives that can be used to protect against this
problem.
 The general idea is to lock something before
accessing global variables and to unlock as soon
as you are done.
 Shared socket descriptors should be treated as
global variables
pthread_join() doesn’t help
 pthread_join (which is similar to wait()) requires that
we specify a thread id.
 We can wait for a specific thread, but we can't wait for
"the next thread to exit".
Use a global variable
 When each thread starts up:
– acquires a lock on the variable (using a mutex)
– increments the variable
– releases the lock.
 When each thread shuts down:
– acquires a lock on the variable (using a mutex)
– decrements the variable
– releases the lock.
MUTEX - Mutual Exclusion solutions
 Software solution:
a thread must register its intent to enter CS and then wait until
no other thread has registered a similar intention before
proceeding
 SPIN LOCKS (busy waiting)
e.g., test-and-set (TAS) instruction
 OS based mechanisms:
semaphores, monitors
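For illustration only (C11 atomics, not part of the pthreads API used in this course): a busy-waiting spin lock built on an atomic test-and-set flag.

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

static void spin_lock(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;                               /* busy wait until the flag was clear */
}

static void spin_unlock(void)
{
    atomic_flag_clear(&lock);
}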
pthread_mutex
 A global variable of type pthread_mutex_t is
required:
 pthread_mutex_t counter_mtx=
PTHREAD_MUTEX_INITIALIZER;
 Initialization to PTHREAD_MUTEX_INITIALIZER is
required for a static variable
Locking and Unlocking
 To lock use:
int pthread_mutex_lock(pthread_mutex_t *mutex);
 To unlock use:
int pthread_mutex_unlock(pthread_mutex_t *mutex);
 pthread_mutex_lock() blocks until the mutex becomes available;
pthread_mutex_unlock() does not block.
MUTEX solutions
#include <pthread.h>
pthread_mutex_t my_lock =
PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_lock(&my_lock);
/* critical section */
pthread_mutex_unlock(&my_lock);
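Applied to the shared counter from the earlier "Shared Global Variables" slide, the update becomes (a sketch):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t counter_mtx = PTHREAD_MUTEX_INITIALIZER;
int counter = 0;

void *myfunc(void *arg)
{
    pthread_mutex_lock(&counter_mtx);
    counter++;                              /* only one thread updates counter at a time */
    printf("Thread %u is number %d\n",
           (unsigned) pthread_self(), counter);
    pthread_mutex_unlock(&counter_mtx);
    return NULL;
}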
Producer Function
#include <pthread.h>
#define BUFSIZE 8
static int buffer[BUFSIZE];
static int bufin = 0;
static int bufout = 0;
static pthread_mutex_t buffer_lock =
PTHREAD_MUTEX_INITIALIZER;
/* Put item into buffer at position bufin and update bufin. */
void put_item(int item)
{
    pthread_mutex_lock(&buffer_lock);
    buffer[bufin] = item;
    bufin = (bufin + 1) % BUFSIZE;
    pthread_mutex_unlock(&buffer_lock);
}
Consumer Function
#include <pthread.h>
#define BUFSIZE 8
static int buffer[BUFSIZE];
static int bufin = 0;
static int bufout = 0;
static pthread_mutex_t buffer_lock =
PTHREAD_MUTEX_INITIALIZER;
/* Get the next item from buffer and put it in *itemp. */
void get_item(int *itemp)
{
    pthread_mutex_lock(&buffer_lock);
    *itemp = buffer[bufout];
    bufout = (bufout + 1) % BUFSIZE;
    pthread_mutex_unlock(&buffer_lock);
}
Example Problem
 A server creates a thread for each client. No
more than n threads (and therefore n clients) can
be active at once.
 How can we have the main thread know when a
child thread has terminated and it can now
service a new client?
Condition Variables
 Thread 1
pthread_mutex_lock(&lock);            // acquire the mutex
while (!my_condition)                 // test the predicate
    pthread_cond_wait(&c, &lock);     // sleep while the predicate is false
do_critical_section();                // do some work
pthread_mutex_unlock(&lock);          // release the mutex
 Thread 2
pthread_mutex_lock(&lock);
my_condition = true;
pthread_mutex_unlock(&lock);
pthread_cond_signal(&c);
Condition Variables
 pthreads support condition variables, which allow one
thread to wait (sleep) for an event generated by any
other thread.
 This allows us to avoid the busy waiting problem.
pthread_cond_t foo =
PTHREAD_COND_INITIALIZER;
Condition Variables (cont.)
 A condition variable is always used together with a mutex.
pthread_cond_wait(pthread_cond_t *cptr,
pthread_mutex_t *mptr);
pthread_cond_signal(pthread_cond_t *cptr);
Condition Variables (cont.)
#include <pthread.h>
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t v = PTHREAD_COND_INITIALIZER;
pthread_mutex_lock(&m);
while (!test_condition())
    pthread_cond_wait(&v, &m);
/* get resource (make test_condition return false) */
pthread_mutex_unlock(&m);

/* do stuff */

pthread_mutex_lock(&m);
/* release resource (make test_condition return true) */
pthread_cond_signal(&v);
pthread_mutex_unlock(&m);
Global Variables Example
// global variable the number of active
// threads (clients)
int active_threads=0;
// mutex used to lock active_threads
pthread_mutex_t at_mutex =
PTHREAD_MUTEX_INITIALIZER;
// condition var. used to signal changes
pthread_cond_t at_cond =
PTHREAD_COND_INITIALIZER;
Child Thread Code
void *cld_func(void *arg) {
    . . .
    // handle the client
    . . .
    pthread_mutex_lock(&at_mutex);
    active_threads--;
    pthread_cond_signal(&at_cond);
    pthread_mutex_unlock(&at_mutex);
    return NULL;
}
Main thread
// no need to lock yet
active_threads = 0;
while (1) {
    pthread_mutex_lock(&at_mutex);
    while (active_threads < n) {
        active_threads++;
        pthread_create(…);
    }
    pthread_cond_wait(&at_cond, &at_mutex);
    pthread_mutex_unlock(&at_mutex);
}
Posix pthread functions
 Sometimes a function needs to have thread
specific data (for example, a function that uses a
static local).
 Functions that support thread specific data:
pthread_key_create()
pthread_once()
pthread_getspecific()
pthread_setspecific()
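A sketch of how these calls typically work together (assumed usage): pthread_once() creates the key exactly once, and each thread then stores and retrieves its own buffer instead of sharing a static one.

#include <pthread.h>
#include <stdlib.h>

static pthread_key_t buf_key;
static pthread_once_t buf_once = PTHREAD_ONCE_INIT;

static void make_key(void)
{
    pthread_key_create(&buf_key, free);     /* free() runs for each thread's value at thread exit */
}

static char *get_thread_buffer(void)
{
    char *buf;

    pthread_once(&buf_once, make_key);      /* first caller creates the key */
    buf = pthread_getspecific(buf_key);
    if (buf == NULL) {                      /* first use in this thread */
        buf = malloc(256);
        pthread_setspecific(buf_key, buf);
    }
    return buf;                             /* per-thread storage, no shared static state */
}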
Producer Function
void *producer(void *arg1)
{
    int i;

    for (i = 1; i <= SUMSIZE; i++) {
        pthread_mutex_lock(&slot_lock);      /* acquire right to a slot */
        while (nslots <= 0)
            pthread_cond_wait(&slots, &slot_lock);
        nslots--;
        pthread_mutex_unlock(&slot_lock);
        put_item(i * i);
        pthread_mutex_lock(&item_lock);      /* release right to an item */
        nitems++;
        pthread_cond_signal(&items);
        pthread_mutex_unlock(&item_lock);
    }
    pthread_mutex_lock(&item_lock);          /* NOTIFIES GLOBAL TERMINATION */
    producer_done = 1;
    pthread_cond_broadcast(&items);
    pthread_mutex_unlock(&item_lock);
    return NULL;
}
Consumer Function
void *consumer(void *arg2)
{
    int myitem;

    for ( ; ; ) {
        pthread_mutex_lock(&item_lock);       /* acquire right to an item */
        while ((nitems <= 0) && !producer_done)
            pthread_cond_wait(&items, &item_lock);
        if ((nitems <= 0) && producer_done) { /* DISTRIBUTED GLOBAL TERMINATION */
            pthread_mutex_unlock(&item_lock);
            break;
        }
        nitems--;
        pthread_mutex_unlock(&item_lock);
        get_item(&myitem);
        sum += myitem;
        pthread_mutex_lock(&slot_lock);       /* release right to a slot */
        nslots++;
        pthread_cond_signal(&slots);
        pthread_mutex_unlock(&slot_lock);
    }
    return NULL;
}
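The producer and consumer above use shared variables that are not shown on these slides; a minimal sketch of the declarations they assume (names match the code above, SUMSIZE and the initial values are assumptions), alongside the buffer from the put_item()/get_item() slides:

#include <pthread.h>

#define SUMSIZE 100                     /* assumed number of items produced */
#define BUFSIZE 8

static int nslots = BUFSIZE;            /* empty slots available */
static int nitems = 0;                  /* items waiting to be consumed */
static int producer_done = 0;
static int sum = 0;
static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t item_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t slots = PTHREAD_COND_INITIALIZER;
static pthread_cond_t items = PTHREAD_COND_INITIALIZER;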
Thread Safe library functions
 You have to be careful with libraries.
 If a function uses any static variables (or global memory)
it’s not safe to use with threads!
 Check man pages for Posix thread-safe functions
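For example, strtok() keeps its parsing position in a hidden static variable and is therefore not thread-safe; POSIX provides strtok_r(), which keeps that state in a caller-supplied pointer. A sketch:

#include <stdio.h>
#include <string.h>

void print_words(char *line)
{
    char *saveptr;                               /* per-call state instead of a hidden static */
    char *word = strtok_r(line, " ", &saveptr);

    while (word != NULL) {
        printf("%s\n", word);
        word = strtok_r(NULL, " ", &saveptr);
    }
}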
Thread Summary
 Threads are awesome, but challenging. You have to
pay attention to details or it's easy to end up with
code that is incorrect (doesn't always work, or
hangs in deadlock).
 Posix threads provides support for mutual
exclusion, condition variables and thread-specific
data.
Appendix
Concurrency
 Communication among processes
 Sharing resources; Allocation of processor time
 Synchronization of multiple processes
» asynchronous - non-blocking communication
» synchronous - blocking communication
 Naming conventions
» symmetric naming
» asymmetric naming
 Direction of information flow
» unidirectional
» bidirectional
Difficulties with Concurrency
 Sharing global resources
 Management of allocation of resources
 Programming errors difficult to locate
Operating System Concerns
 Keep track of active processes
 Allocate and deallocate resources
• processor time
• memory
• files
• I/O devices
 Protect data and resources
 The result of a process must be independent of its speed of
execution, since other processes share the processor
time
Competition Among Processes for Resources
 Execution of one process may affect the behavior of
competing processes
 If two processes wish to access a single resource, one
process will be allocated the resource and the other will
have to wait
 It is possible that the blocked process never gets access to the
resource and never terminates
Competition Among Processes (cont.)
 Atomic Actions - indivisible state transformations
– fine-grain atomicity
» implemented directly in HW
» Examples
– coarse grain atomicity
» sequence of single atomic actions
» need synchronization mechanisms
» <S> an atomic sequence
 Concurrent Program - interleaving of atomic actions of
components
– cobegin P1; P2; …; Pn coend
– not every interleaving is acceptable
– synchronization (conditional synchronization or mutual
exclusion)
Control Problems
 Mutual Exclusion
– critical sections
• only one program at a time is allowed in its critical section
• example: only one process at a time is allowed to send
commands to the printer
 Deadlock
– a process is in deadlock if it is blocked waiting for a condition
that will never become true
– conditions for deadlock
• mutual exclusion
• hold and wait
• non-preemptive resources
• circular waiting
Control Problems (cont’)
 Starvation
– a process waits an unbounded interval of time to
gain access to a resource
 Busy-Waiting
– a process is busy-waiting if it is executing a loop,
waiting for a condition to become true
» e.g., while (not B) ;  (guarded-command form: do ¬B -> skip od)
 Polling
– a process continuously and repeatedly
checks (polls) some specific variables to determine whether
some event has occurred
» e.g., in a real-time system, a process polls to check whether
input parameters have changed their values
Control Problems (cont’)
 Livelock
– a process is in livelock if it is spinning while waiting for a
condition that will never become true
Cooperation Among Processes by Sharing
 Processes use and update shared data such as
shared variables, files, and databases
 Writing must be mutually exclusive
 Critical sections are used to provide data integrity
Cooperation Among Processes by Communication
 Communication provides a way to synchronize,
or coordinate, the various activities
 Possible to have deadlock
– each process waiting for a message from the other
process
 Possible to have starvation
– two processes sending messages to each other while
another process waits for a message
Requirements for Mutual Exclusion
 Only one process at a time is allowed in the
critical section for a resource
 If a process halts in its critical section, it must not
interfere with other processes
 A process requiring the critical section must not
be delayed indefinitely; no deadlock or starvation
Requirements for Mutual Exclusion (cont’)
 A process must not be delayed access to a
critical section when there is no other process
using it
 No assumptions are made about relative process
speeds or number of processes
 A process remains inside its critical section for a
finite time only
Critical Section Problem
 N processes all competing to use some shared data
 Each process has a code segment called critical
section, in which the shared data is accessed
 Problem - ensure that when one process is executing in
its critical section, no other process is allowed to
execute in its critical section
 Process Pi::
repeat
    entry_section
    critical section
    exit_section
    remainder section
until false
Solution- Critical Section Problem
 Mutual Exclusion
– If a process Pi is executing in its critical section, then no other
process can be executing in its critical section.
 Progress
– If no process is executing in its critical section and there exists
some processes that wish to enter their critical section, the
selection of the processes that will enter the critical section
next cannot be delayed indefinitely
 Bounded Waiting
– A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made
a request to enter its critical section and before that request is
granted.
Semaphores
 Synchronization tool that does not require busy waiting
 Semaphore S - integer variable
 Can only be accessed via atomic (indivisible) operations
wait(S)::    while S <= 0 do skip;
             S = S - 1;
signal(S)::  S = S + 1;
 Queue is used to hold processes waiting on the
semaphore
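POSIX exposes counting semaphores through <semaphore.h>; sem_wait() corresponds to wait(S) and sem_post() to signal(S). A minimal sketch, using the semaphore as a mutual-exclusion lock:

#include <semaphore.h>

sem_t mutex;                       /* counting semaphore used as a lock */

int main(void)
{
    sem_init(&mutex, 0, 1);        /* 0: shared among threads of this process; initial value 1 */

    sem_wait(&mutex);              /* wait(S): decrement, block if the value would drop below 0 */
    /* critical section */
    sem_post(&mutex);              /* signal(S): increment, wake a waiter */

    sem_destroy(&mutex);
    return 0;
}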
Critical Section for n Processes
 Shared variables:
var mutex: semaphore
initially mutex = 1
 Process Pi::
repeat
    wait(mutex);
    critical section
    signal(mutex);
    remainder section
until false
Semaphore - synchronization tool
 Execute B in Pj only after A has been executed in Pi
 Semaphore flag initialized to 0
 Code:
Pi:              Pj:
  .                .
  A                wait(flag)
  signal(flag)     B
Deadlock and Starvation
 S and Q are two semaphores initialized to 1
P0:              P1:
  wait(S);         wait(Q);
  wait(Q);         wait(S);
  ...              ...
  signal(S);       signal(Q);
  signal(Q);       signal(S);
Producer Function
producer:
repeat
    produce item;
    wait(not_full);
    wait(mutex);
    add next item to buffer;
    signal(mutex);
    signal(not_empty);
forever;
Consumer Function
consumer:
repeat
    wait(not_empty);
    wait(mutex);
    remove an item from buffer;
    signal(mutex);
    signal(not_full);
    consume item;
forever;
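A sketch of the same scheme in C with POSIX semaphores (buffer size and names are assumptions that follow the pseudocode above; not_full starts at BUFSIZE, not_empty at 0, and mutex at 1):

#include <semaphore.h>

#define BUFSIZE 8

static int buffer[BUFSIZE];
static int in = 0, out = 0;
static sem_t not_full, not_empty, mutex;

void buffer_init(void)
{
    sem_init(&not_full, 0, BUFSIZE);       /* BUFSIZE empty slots initially */
    sem_init(&not_empty, 0, 0);            /* no items yet */
    sem_init(&mutex, 0, 1);                /* binary semaphore protecting the buffer */
}

void put_buffer(int item)                  /* producer side */
{
    sem_wait(&not_full);                   /* wait for a free slot */
    sem_wait(&mutex);
    buffer[in] = item;
    in = (in + 1) % BUFSIZE;
    sem_post(&mutex);
    sem_post(&not_empty);                  /* announce the new item */
}

int get_buffer(void)                       /* consumer side */
{
    int item;

    sem_wait(&not_empty);                  /* wait for an item */
    sem_wait(&mutex);
    item = buffer[out];
    out = (out + 1) % BUFSIZE;
    sem_post(&mutex);
    sem_post(&not_full);                   /* announce the free slot */
    return item;
}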