Chapter 16: Distributed System Structures
Chapter 17: Distributed-File Systems
Chapter 18: Distributed Coordination
National Taiwan University
Department of Computer Science and Information Engineering
Chapter 16:
Distributed System Structures
National Taiwan University
Department of Computer Science and Information Engineering
Chapter Objectives
To provide a high-level overview of
distributed systems and the networks
that interconnect them
To discuss the general structure of
distributed operating systems
Chapter 16: Distributed System Structures
Motivation
Types of Network-Based Operating Systems
Network Structure
Network Topology
Communication Structure
Communication Protocols
Robustness
Design Issues
An Example: Networking
Motivation
A distributed system is a collection of loosely coupled processors
interconnected by a communications network
Processors variously called nodes, computers, machines, hosts
Site is location of the processor
Reasons for distributed systems
Resource sharing
sharing and printing files at remote sites
processing information in a distributed database
using remote specialized hardware devices
Computation speedup – load sharing
Reliability – detect and recover from site failure, function transfer,
reintegrate failed site
Communication – message passing
A Distributed System
Types of Distributed Operating Systems
Network Operating Systems
Distributed Operating Systems
Network-Operating Systems
Users are aware of multiplicity of machines.
Access to resources of various machines is
done explicitly by:
Remote logging into the appropriate remote machine
(telnet, ssh)
Remote Desktop (Microsoft Windows)
Transferring data from remote machines to local
machines, via the File Transfer Protocol (FTP)
mechanism
Distributed-Operating Systems
Users not aware of multiplicity of machines
Access to remote resources similar to access to local resources
Data Migration – transfer data by transferring entire file,
or transferring only those portions of the file necessary
for the immediate task
Computation Migration – transfer the computation,
rather than the data, across the system
Distributed-Operating Systems (Cont.)
Process Migration – execute an entire process, or parts
of it, at different sites
Load balancing – distribute processes across network to even
the workload
Computation speedup – subprocesses can run concurrently on
different sites
Hardware preference – process execution may require
specialized processor
Software preference – required software may be available at
only a particular site
Data access – run process remotely, rather than transfer all
data locally
Network Structure
Local-Area Network (LAN) – designed to cover
small geographical area.
Multiaccess bus, ring, or star network
Speed ≈ 10 – 1000 megabits/second
Broadcast is fast and cheap
Nodes:
usually workstations and/or personal computers
a few (usually one or two) mainframes
Depiction of typical LAN
Network Types (Cont.)
Wide-Area Network (WAN) – links geographically
separated sites
Point-to-point connections over long-haul lines (often leased
from a phone company)
Speed ≈ 1.544 – 45 megabits/second
Broadcast usually requires multiple messages
Nodes:
usually a high percentage of mainframes
TCP, IP, UDP, UUCP, PPP, …
Communication Processors in a Wide-Area Network
Network Topology
Sites in the system can be physically connected in a variety of
ways; they are compared with respect to the following criteria:
Installation cost - How expensive is it to link the various sites in the
system?
Communication cost - How long does it take to send a message
from site A to site B?
Reliability - If a link or a site in the system fails, can the remaining
sites still communicate with each other?
The various topologies are depicted as graphs whose nodes
correspond to sites
An edge from node A to node B corresponds to a direct connection
between the two sites
The following six items depict various network topologies
Network Topology
Communication Structure
The design of a communication network must address four basic issues:
Naming and name resolution - How do two
processes locate each other to communicate?
Routing strategies - How are messages sent
through the network?
Connection strategies - How do two processes
send a sequence of messages?
Contention - The network is a shared resource, so
how do we resolve conflicting demands for its use?
Naming and Name Resolution
Name systems in the network
Address messages with the process-id
Identify processes on remote systems by
<host-name, identifier> pair
Domain name service (DNS) – specifies the
naming structure of the hosts, as well as name
to address resolution (Internet)
Routing Strategies
Fixed routing - A path from A to B is specified in advance;
path changes only if a hardware failure disables it
Since the shortest path is usually chosen, communication costs
are minimized
Fixed routing cannot adapt to load changes
Ensures that messages will be delivered in the order in which they
were sent
Virtual circuit - A path from A to B is fixed for the
duration of one session. Different sessions involving
messages from A to B may have different paths
Partial remedy to adapting to load changes
Ensures that messages will be delivered in the order in which they
were sent
Routing Strategies (Cont.)
Dynamic routing - The path used to send a
message from site A to site B is chosen only
when a message is sent
Usually a site sends a message to another site on the
link least used at that particular time
Adapts to load changes by avoiding routing
messages on heavily used path
Messages may arrive out of order
This problem can be remedied by appending a sequence
number to each message
Connection Strategies
Circuit switching - A permanent physical link is established for the
duration of the communication (i.e., telephone system)
Message switching - A temporary link is established for the duration
of one message transfer (i.e., post-office mailing system)
Packet switching - Messages of variable length are divided into
fixed-length packets which are sent to the destination
Each packet may take a different path through the network
The packets must be reassembled into messages as they arrive
Circuit switching requires setup time, but incurs less overhead for
shipping each message, and may waste network bandwidth
Message and packet switching require less setup time, but incur more
overhead per message
Contention
Several sites may want to transmit information over a link
simultaneously. Techniques to avoid repeated collisions include:
CSMA/CD - Carrier sense with multiple access
(CSMA); collision detection (CD)
A site determines whether another message is currently
being transmitted over that link. If two or more sites begin
transmitting at exactly the same time, then they will
register a CD and will stop transmitting
When the system is very busy, many collisions may occur,
and thus performance may be degraded
CSMA/CD is used successfully in the Ethernet
system, the most common network system
Contention (Cont.)
Token passing - A unique message type, known as a token, continuously
circulates in the system (usually a ring structure)
A site that wants to transmit information must wait until the token arrives
When the site completes its round of message passing, it retransmits the token
A token-passing scheme is used by some IBM and HP/Apollo systems
Message slots - A number of fixed-length message slots continuously
circulate in the system (usually a ring structure)
Since a slot can contain only fixed-sized messages, a single logical message
may have to be broken down into a number of smaller packets, each of which
is sent in a separate slot
This scheme has been adopted in the experimental Cambridge Digital
Communication Ring
Communication Protocol
The communication network is partitioned into the following
multiple layers:
Physical layer – handles the mechanical and electrical
details of the physical transmission of a bit stream
Data-link layer – handles the frames, or fixed-length parts of
packets, including any error detection and recovery that
occurred in the physical layer
Network layer – provides connections and routes packets in
the communication network, including handling the address
of outgoing packets, decoding the address of incoming
packets, and maintaining routing information for proper
response to changing load levels
Communication Protocol (Cont.)
Transport layer – responsible for low-level network access and
for message transfer between clients, including partitioning
messages into packets, maintaining packet order, controlling
flow, and generating physical addresses
Session layer – implements sessions, or process-to-process
communications protocols
Presentation layer – resolves the differences in formats among
the various sites in the network, including character conversions,
and half duplex/full duplex (echoing)
Application layer – interacts directly with users; deals with
file transfer, remote-login protocols and electronic mail, as well as
schemas for distributed databases
Communication Via ISO Network Model
The ISO Protocol Layer
The ISO Network Message
The TCP/IP Protocol Layers
Robustness
Failure detection
Reconfiguration
Failure Detection
Detecting hardware failure is difficult
To detect a link failure, a handshaking protocol can be used
Assume Site A and Site B have established a link
At fixed intervals, each site will exchange an I-am-up message
indicating that they are up and running
If Site A does not receive a message within the fixed interval,
it assumes either (a) the other site is not up or (b) the
message was lost
Site A can now send an Are-you-up? message to Site B
If Site A does not receive a reply, it can repeat the message
or try an alternate route to Site B
Failure Detection (Cont.)
If Site A does not ultimately receive a reply from Site
B, it concludes some type of failure has occurred
Types of failures:
- Site B is down
- The direct link between A and B is down
- The alternate link from A to B is down
- The message has been lost
However, Site A cannot determine exactly why the
failure has occurred
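A minimal sketch of Site A's probing sequence, under the assumption that the unreliable link is simply simulated; the message names and helper function are illustrative only and not part of any real protocol stack.

```python
import random
import time

def send_are_you_up(route: str) -> bool:
    """Stand-in for sending 'Are-you-up?' over the given route and waiting for
    an 'I-am-up' reply; the outcome is simulated with an unreliable link."""
    time.sleep(0.01)                      # pretend to wait for the reply
    return random.random() < 0.9          # ~90% of probes get a timely reply

def probe_site_b() -> str:
    """Site A's view after the periodic I-am-up message failed to arrive."""
    if send_are_you_up("direct link A-B"):
        return "B is up; the earlier message was lost or delayed"
    if send_are_you_up("alternate route A-B"):
        return "B is up, but the direct link A-B appears to be down"
    # B down, both links down, or all messages lost: A cannot tell which
    return "failure detected, exact cause unknown"

print(probe_site_b())
```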
Reconfiguration
When Site A determines a failure has occurred, it must
reconfigure the system:
1. If the link from A to B has failed, this must be broadcast
to every site in the system
2. If a site has failed, every other site must also be notified
indicating that the services offered by the failed site are no
longer available
When the link or the site becomes available again, this
information must again be broadcast to all other sites
Design Issues
Transparency – the distributed system should appear
as a conventional, centralized system to the user
Fault tolerance – the distributed system should
continue to function in the face of failure
Scalability – as demands increase, the system should
easily accept the addition of new resources to
accommodate the increased demand
Clusters – a collection of semi-autonomous machines
that acts as a single system
Example: Networking
The transmission of a network packet between hosts on an
Ethernet network
Every host has a unique IP address and a corresponding
Ethernet (MAC) address
Communication requires both addresses
Domain Name Service (DNS) can be used to acquire IP
addresses (a minimal lookup sketch follows this list)
Address Resolution Protocol (ARP) is used to map IP
addresses to Ethernet (MAC) addresses
If the hosts are on the same network, ARP can be used
If the hosts are on different networks, the sending host will send the
packet to a router which routes the packet to the destination network
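As a small illustration of the name-resolution step, Python's standard socket module can ask the local resolver for a host's IP address; "www.example.com" is only a placeholder name. The IP-to-MAC mapping done by ARP happens inside the operating system's network stack and is not visible at this level.

```python
import socket

# Resolve a host name to an IPv4 address using the resolver configured on this host
ip = socket.gethostbyname("www.example.com")   # placeholder host name
print("IP address:", ip)

# ARP (IP address -> MAC address) is performed by the OS network stack when the
# packet is actually transmitted, so application code never invokes it directly.
```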
An Ethernet Packet
End of Chapter 16
Chapter 17:
Distributed-File Systems
National Taiwan University
Department of Computer Science and Information Engineering
Chapter Objectives
To explain the naming mechanism that provides
location transparency and independence
To describe the various methods for accessing
distributed files
To contrast stateful and stateless distributed file
servers
To show how replication of files on different
machines in a distributed file system is a useful
redundancy for improving availability
To introduce the Andrew file system (AFS) as an
example of a distributed file system
Chapter 17: Distributed-File Systems
Background
Naming and Transparency
Remote File Access
Stateful versus Stateless Service
File Replication
An Example: AFS
Background
Distributed file system (DFS) – a distributed implementation
of the classical time-sharing model of a file system, where
multiple users share files and storage resources
A DFS manages set of dispersed storage devices
Overall storage space managed by a DFS is composed of
different, remotely located, smaller storage spaces
There is usually a correspondence between constituent
storage spaces and sets of files
DFS Structure
Service – software entity running on one or more
machines and providing a particular type of function to
a priori unknown clients
Server – service software running on a single machine
Client – process that can invoke a service using a set
of operations that forms its client interface
A client interface for a file service is formed by a set of
primitive file operations (create, delete, read, write)
Client interface of a DFS should be transparent, i.e.,
not distinguish between local and remote files
Naming and Transparency
Naming – mapping between logical and physical objects
Multilevel mapping – abstraction of a file that hides the details
of how and where on the disk the file is actually stored
A transparent DFS hides the location where in the network the
file is stored
For a file being replicated in several sites, the mapping returns
a set of the locations of this file’s replicas; both the existence of
multiple copies and their location are hidden
Naming Structures
Location transparency – file name does not
reveal the file’s physical storage location
Location independence – file name does not
need to be changed when the file’s physical
storage location changes
Naming Schemes — Three Main Approaches
Files named by combination of their host name and
local name; guarantees a unique system wide name
Attach remote directories to local directories, giving the
appearance of a coherent directory tree; only previously
mounted remote directories can be accessed
transparently
Total integration of the component file systems
A single global name structure spans all the files in the system
If a server is unavailable, some arbitrary set of directories on
different machines also becomes unavailable
Remote File Access
Remote-service mechanism is one transfer approach
Reduce network traffic by retaining recently accessed disk
blocks in a cache, so that repeated accesses to the same
information can be handled locally
If needed data not already cached, a copy of data is brought from
the server to the user
Accesses are performed on the cached copy
Files identified with one master copy residing at the server machine,
but copies of (parts of) the file are scattered in different caches
Cache-consistency problem – keeping the cached copies consistent
with the master file
Could be called network virtual memory
Cache Location – Disk vs. Main Memory
Advantages of disk caches
More reliable
Cached data kept on disk are still there during recovery
and don’t need to be fetched again
Advantages of main-memory caches:
Permit workstations to be diskless
Data can be accessed more quickly
Performance speedup in bigger memories
Server caches (used to speed up disk I/O) are in main
memory regardless of where user caches are located;
using main-memory caches on the user machine permits
a single caching mechanism for servers and users
Cache Update Policy
Write-through – write data through to disk as soon as they are placed
on any cache
Reliable, but poor performance
Delayed-write – modifications written to the cache and then written
through to the server later
Write accesses complete quickly; some data may be overwritten before they
are written back, and so need never be written at all
Poor reliability; unwritten data will be lost whenever a user machine crashes
Variation – scan cache at regular intervals and flush blocks that have been
modified since the last scan
Variation – write-on-close: writes data back to the server when the
file is closed
Best for files that are open for long periods and frequently modified
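A minimal sketch contrasting the two policies; `server` is assumed to be any object with a write(block_no, data) method, and the class and method names are illustrative rather than any real DFS API.

```python
class WriteThroughCache:
    """Write-through: every write goes to the cache and to the server immediately."""
    def __init__(self, server):
        self.server = server
        self.blocks = {}

    def write(self, block_no, data):
        self.blocks[block_no] = data
        self.server.write(block_no, data)   # reliable, but every write pays a round trip


class DelayedWriteCache:
    """Delayed-write: writes stay in the cache and are flushed to the server later."""
    def __init__(self, server):
        self.server = server
        self.blocks = {}
        self.dirty = set()

    def write(self, block_no, data):
        self.blocks[block_no] = data
        self.dirty.add(block_no)            # fast, but unflushed data dies with a crash

    def flush(self):
        """Call periodically (scan variation) or once at close (write-on-close)."""
        for block_no in sorted(self.dirty):
            self.server.write(block_no, self.blocks[block_no])
        self.dirty.clear()
```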
Cachefs and its Use of Caching
Consistency
Is locally cached copy of the data consistent with the
master copy?
Client-initiated approach
Client initiates a validity check
Server checks whether the local data are consistent with the
master copy
Server-initiated approach
Server records, for each client, the (parts of) files it caches
When server detects a potential inconsistency, it must react
Comparing Caching and Remote Service
In caching, many remote accesses handled efficiently
by the local cache; most remote accesses will be served
as fast as local ones
Servers are contacted only occasionally in caching
(rather than for each access)
Reduces server load and network traffic
Enhances potential for scalability
Remote server method handles every remote access
across the network; penalty in network traffic, server
load, and performance
Total network overhead in transmitting big chunks of
data (caching) is lower than a series of responses to
specific requests (remote-service)
Caching and Remote Service (Cont.)
Caching is superior in access patterns with infrequent writes
With frequent writes, substantial overhead incurred to overcome
cache-consistency problem
Benefit from caching when execution carried out on
machines with either local disks or large main memories
Remote access on diskless, small-memory-capacity
machines should be done through remote-service method
In caching, the lower intermachine interface is different from
the upper user interface
In remote-service, the intermachine interface mirrors the
local user-file-system interface
Stateful File Service
Mechanism
Client opens a file
Server fetches information about the file from its disk, stores it in
its memory, and gives the client a connection identifier unique to
the client and the open file
Identifier is used for subsequent accesses until the session ends
Server must reclaim the main-memory space used by clients who
are no longer active
Increased performance
Fewer disk accesses
Stateful server knows if a file was opened for sequential access
and can thus read ahead the next blocks
Stateless File Server
Avoids state information by making each request self-contained
Each request identifies the file and position in the file
No need to establish and terminate a connection by
open and close operations
Distinctions Between Stateful & Stateless Service
Failure Recovery
A stateful server loses all its volatile state in a crash
Restore state by recovery protocol based on a dialog with
clients, or abort operations that were underway when the
crash occurred
Server needs to be aware of client failures in order to
reclaim space allocated to record the state of crashed client
processes (orphan detection and elimination)
With stateless server, the effects of server failure
and recovery are almost unnoticeable
A newly reincarnated server can respond to a self-contained
request without any difficulty
Distinctions (Cont.)
Penalties for using the robust stateless service:
longer request messages
slower request processing
additional constraints imposed on DFS design
Some environments require stateful service
A server employing server-initiated cache validation cannot
provide stateless service, since it maintains a record of which
files are cached by which clients
UNIX use of file descriptors and implicit offsets is inherently
stateful; servers must maintain tables to map the file
descriptors to inodes, and store the current offset within a file
File Replication
Replicas of the same file reside on failure-independent machines
Improves availability and can shorten service time
Naming scheme maps a replicated file name to a particular replica
Existence of replicas should be invisible to higher levels
Replicas must be distinguished from one another by different lower-level names
Updates – replicas of a file denote the same logical entity, and thus an
update to any replica must be reflected on all other replicas
Demand replication – reading a nonlocal replica causes it to be cached
locally, thereby generating a new nonprimary replica
An Example: AFS
A distributed computing environment (Andrew) under
development since 1983 at Carnegie-Mellon University,
purchased by IBM and released as Transarc DFS, now
open sourced as OpenAFS
AFS tries to solve complex issues such as uniform
name space, location-independent file sharing, client-side caching
(with cache consistency), and secure authentication (via Kerberos)
Also includes server-side caching (via replicas), high availability
Can span 5,000 workstations
ANDREW (Cont.)
Clients are presented with a partitioned space of file
names: a local name space and a shared name space
Dedicated servers, called Vice, present the shared name
space to the clients as a homogeneous, identical, and
location-transparent file hierarchy
The local name space is the root file system of a
workstation, from which the shared name space descends
Workstations run the Virtue protocol to communicate with
Vice, and are required to have local disks where they
store their local name space
Servers collectively are responsible for the storage and
management of the shared name space
ANDREW (Cont.)
Clients and servers are structured in clusters
interconnected by a backbone LAN
A cluster consists of a collection of workstations
and a cluster server and is connected to the
backbone by a router
A key mechanism selected for remote file
operations is whole file caching
Opening a file causes it to be cached, in its entirety,
on the local disk
ANDREW Shared Name Space
Andrew’s volumes are small component units associated
with the files of a single client
A fid identifies a Vice file or directory - A fid is 96 bits long
and has three equal-length components:
volume number
vnode number – index into an array containing the inodes of files in
a single volume
uniquifier – allows reuse of vnode numbers, thereby keeping certain
data structures compact
Fids are location transparent; therefore, file movements from
server to server do not invalidate cached directory contents
Location information is kept on a volume basis, and the
information is replicated on each server
ANDREW File Operations
Andrew caches entire files from servers
A client workstation interacts with Vice servers only during
opening and closing of files
Venus – caches files from Vice when they are
opened, and stores modified copies of files back when
they are closed
Reading and writing bytes of a file are done by the
kernel without Venus intervention on the cached copy
Venus caches contents of directories and symbolic
links, for path-name translation
Exceptions to the caching policy are modifications to
directories that are made directly on the server
responsible for that directory
ANDREW Implementation
Client processes are interfaced to a UNIX kernel with
the usual set of system calls
Venus carries out path-name translation component
by component
The UNIX file system is used as a low-level storage
system for both servers and clients
The client cache is a local directory on the workstation’s disk
Both Venus and server processes access UNIX files
directly by their inodes to avoid the expensive path
name-to-inode translation routine
ANDREW Implementation (Cont.)
Venus manages two separate caches:
one for status
one for data
LRU algorithm used to keep each of them bounded in size
The status cache is kept in virtual memory to allow rapid
servicing of stat() (file status returning) system calls
The data cache is resident on the local disk, but the UNIX
I/O buffering mechanism does some caching of the disk
blocks in memory that is transparent to Venus
End of Chapter 17
Chapter 18:
Distributed Coordination
National Taiwan University
Department of Computer Science and Information Engineering
Chapter Objectives
To describe various methods for achieving
mutual exclusion in a distributed system
To explain how atomic transactions can be
implemented in a distributed system
To show how some of the concurrency-control
schemes discussed in Chapter 6 can be
modified for use in a distributed environment
To present schemes for handling deadlock
prevention, deadlock avoidance, and deadlock
detection in a distributed system
Chapter 18: Distributed Coordination
Event Ordering
Mutual Exclusion
Atomicity
Concurrency Control
Deadlock Handling
Election Algorithms
Reaching Agreement
Event Ordering
Happened-before relation (denoted by →)
If A and B are events in the same process, and A
was executed before B, then A → B
If A is the event of sending a message by one
process and B is the event of receiving that
message by another process, then A → B
If A → B and B → C, then A → C
Relative Time for Three Concurrent Processes
Implementation of →
Associate a timestamp with each system event
Require that for every pair of events A and B, if A → B, then the
timestamp of A is less than the timestamp of B
Within each process Pi a logical clock, LCi is associated
The logical clock can be implemented as a simple counter that is
incremented between any two successive events executed within a
process
Logical clock is monotonically increasing
A process advances its logical clock when it receives a message whose
timestamp is greater than the current value of its logical clock
If the timestamps of two events A and B are the same, then the events
are concurrent
We may use the process identity numbers to break ties and to create
a total ordering
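A minimal sketch of such a logical clock (one instance per process); as noted above, breaking ties with the process identity number yields a total ordering. The class and method names are illustrative.

```python
class LogicalClock:
    """Per-process logical clock implementing happened-before timestamps."""
    def __init__(self, process_id: int):
        self.process_id = process_id
        self.time = 0

    def tick(self) -> int:
        """Advance the counter between two successive local events."""
        self.time += 1
        return self.time

    def on_send(self) -> int:
        """Timestamp to attach to an outgoing message."""
        return self.tick()

    def on_receive(self, message_time: int) -> int:
        """Advance past the sender's timestamp if it is ahead of this clock."""
        self.time = max(self.time, message_time) + 1
        return self.time

    def order_key(self, event_time: int):
        """Tie-break equal timestamps by process id to obtain a total order."""
        return (event_time, self.process_id)
```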
Global Ordering
(figure: three panels (a)–(c) comparing the timelines of processes P and Q; P's events P1–P3 carry timestamps 20, 50, 80, while Q's timestamps for Q2 and Q3 rise from 40, 60 in (a) to 51, 71 in (b) and 51, 81 in (c) as Q's logical clock is advanced on receipt of messages with larger timestamps, yielding a consistent global ordering)
Distributed Mutual Exclusion (DME)
Assumptions
The system consists of n processes; each process Pi resides at a
different processor
Each process has a critical section that requires mutual exclusion
Requirement
If Pi is executing in its critical section, then no other process Pj is
executing in its critical section
We present two algorithms to ensure the mutual
exclusion execution of processes in their critical sections
DME: Centralized Approach
One of the processes in the system is chosen to coordinate the entry
to the critical section
A process that wants to enter its critical section sends a request
message to the coordinator
The coordinator decides which process can enter the critical section
next, and it sends that process a reply message
When the process receives a reply message from the coordinator, it
enters its critical section
After exiting its critical section, the process sends a release message
to the coordinator and proceeds with its execution
This scheme requires three messages per critical-section entry:
request
reply
release
DME: Fully Distributed Approach
When process Pi wants to enter its critical section, it
generates a new timestamp, TS, and sends the message
request (Pi, TS) to all other processes in the system
When process Pj receives a request message, it may reply
immediately or it may defer sending a reply back
When process Pi receives a reply message from all other
processes in the system, it can enter its critical section
After exiting its critical section, the process sends reply
messages to all its deferred requests
DME: Fully Distributed Approach (Cont)
The decision whether process Pj replies immediately to a
request(Pi, TS) message or defers its reply is based on
three factors:
If Pj is in its critical section, then it defers its reply to Pi
If Pj does not want to enter its critical section, then it sends a reply
immediately to Pi
If Pj wants to enter its critical section but has not yet entered it, then
it compares its own request timestamp with the timestamp TS
If its own request timestamp is greater than TS, then it sends a reply
immediately to Pi (Pi asked first)
Otherwise, the reply is deferred
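A sketch of Pj's reply-or-defer decision under the three factors above; the state fields and the reply() helper are illustrative stand-ins for real bookkeeping and message transport.

```python
def reply(pid):
    print(f"reply sent to P{pid}")          # stand-in for a real message send

def on_request(state, requester_id, requester_ts):
    """Pj decides whether to answer request(Pi, TS) now or defer it.

    state.in_cs    -- Pj is inside its critical section
    state.wants_cs -- Pj has an outstanding request of its own
    state.my_ts    -- timestamp of Pj's own request
    state.my_id    -- Pj's id, used to break timestamp ties
    state.deferred -- requests to answer after Pj leaves its critical section
    """
    if state.in_cs:
        state.deferred.append(requester_id)                       # factor 1: defer
    elif not state.wants_cs:
        reply(requester_id)                                       # factor 2: reply now
    elif (requester_ts, requester_id) < (state.my_ts, state.my_id):
        reply(requester_id)                                       # factor 3: Pi asked first
    else:
        state.deferred.append(requester_id)                       # Pj asked first: defer

def on_exit_critical_section(state):
    state.in_cs = False
    for pid in state.deferred:
        reply(pid)                          # answer every request deferred meanwhile
    state.deferred.clear()
```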
Desirable Behavior of Fully Distributed Approach
Freedom from Deadlock is ensured
Freedom from starvation is ensured, since entry to the
critical section is scheduled according to the timestamp
ordering
The timestamp ordering ensures that processes are served in a
first-come, first-served order
The number of messages per critical-section entry is
2 x (n – 1)
This is the minimum number of required messages per
critical-section entry when processes act independently
and concurrently
Three Undesirable Consequences
The processes need to know the identity of all other
processes in the system, which makes the dynamic addition
and removal of processes more complex
If one of the processes fails, then the entire scheme collapses
This can be dealt with by continuously monitoring the state of all the
processes in the system
Processes that have not entered their critical section must
pause frequently to assure other processes that they intend to
enter the critical section
This protocol is therefore suited for small, stable sets of cooperating
processes
Token-Passing Approach
Circulate a token among processes in system
Token is special type of message
Possession of token entitles holder to enter critical section
Processes logically organized in a ring structure
Unidirectional ring guarantees freedom from
starvation
Two types of failures
Lost token – election must be called
Failed processes – new logical ring established
Atomicity
Either all the operations associated with a program unit are
executed to completion, or none are performed
Ensuring atomicity in a distributed system requires a
transaction coordinator, which is responsible for the
following:
Starting the execution of the transaction
Breaking the transaction into a number of subtransactions, and
distributing these subtransactions to the appropriate sites for
execution
Coordinating the termination of the transaction, which may result in
the transaction being committed at all sites or aborted at all sites
Two-Phase Commit Protocol (2PC)
Assumes fail-stop model
Execution of the protocol is initiated by the coordinator
after the last step of the transaction has been reached
When the protocol is initiated, the transaction may still
be executing at some of the local sites
The protocol involves all the local sites at which the
transaction executed
Example: Let T be a transaction initiated at site Si and
let the transaction coordinator at Si be Ci
Phase 1: Obtaining a Decision
Ci adds <prepare T> record to the log
Ci sends <prepare T> message to all sites
When a site receives a <prepare T> message, the transaction manager
determines if it can commit the transaction
If no: add <no T> record to the log and respond to Ci with <abort T>
If yes:
add <ready T> record to the log
force all log records for T onto stable storage
send <ready T> message to Ci
Phase 1 (Cont)
Coordinator collects responses
All respond “ready”,
decision is commit
At least one response is “abort”,
decision is abort
At least one participant fails to respond within the timeout period,
decision is abort
Phase 2: Recording Decision in the Database
Coordinator adds a decision record
<abort T> or <commit T>
to its log and forces record onto stable storage
Once that record reaches stable storage it is
irrevocable (even if failures occur)
Coordinator sends a message to each participant
informing it of the decision (commit or abort)
Participants take appropriate action locally
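A minimal sketch of the coordinator's side of 2PC; log(), send(), and collect_votes() are stand-ins for stable-storage logging, messaging, and vote collection (the stubs below only print and always vote ready), so this shows the decision logic rather than a real implementation.

```python
def log(record):                             # stand-in for a forced write to stable storage
    print("LOG:", record)

def send(site, message):                     # stand-in for a network send
    print(f"-> {site}: {message}")

def collect_votes(sites, timeout):           # stand-in: wait up to `timeout` for each vote
    return {site: "ready" for site in sites} # here every site happens to vote ready

def two_phase_commit(t, sites, timeout=5.0):
    """Coordinator Ci running 2PC for transaction t."""
    # Phase 1: obtain a decision
    log(f"<prepare {t}>")
    for site in sites:
        send(site, f"prepare {t}")
    votes = collect_votes(sites, timeout)    # values: "ready", "abort", or None (timeout)

    decision = "commit" if all(v == "ready" for v in votes.values()) else "abort"

    # Phase 2: record the decision, then tell every participant
    log(f"<{decision} {t}>")                 # once on stable storage, this is irrevocable
    for site in sites:
        send(site, f"{decision} {t}")
    return decision

print(two_phase_commit("T", ["Site A", "Site B"]))
```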
Two-Phase Commit Protocol
(figure: message flow of 2PC between the coordinator and sites A and B — phase one: the coordinator logs <PREPARE> and sends it to each site, each site logs <READY> and sends it back, and the coordinator collects all ready messages; phase two: the coordinator logs <COMMIT> and sends it to each site, and each site logs <COMMIT>)
Failure Handling in 2PC – Site Failure
The log contains a <commit T> record
In this case, the site executes redo(T)
The log contains an <abort T> record
In this case, the site executes undo(T)
The log contains a <ready T> record; consult Ci
If Ci is down, site sends query-status T message to the other sites
The log contains no control records concerning T
In this case, the site executes undo(T)
Failure Handling in 2PC – Coordinator Ci Failure
If an active site contains a <commit T> record in its log,
then T must be committed
If an active site contains an <abort T> record in its log,
then T must be aborted
If some active site does not contain the record <ready T>
in its log then the failed coordinator Ci cannot have
decided to commit T
Rather than wait for Ci to recover, it is preferable to abort T
All active sites have a <ready T> record in their logs, but
no additional control records
In this case we must wait for the coordinator to recover
Blocking problem – T is blocked pending the recovery of site Si
Concurrency Control
Modify the centralized concurrency schemes to
accommodate the distribution of transactions
Transaction manager coordinates execution of
transactions (or subtransactions) that access data at
local sites
Local transaction only executes at that site
Global transaction executes at several sites
Locking Protocols
Can use the two-phase locking protocol in a distributed
environment by changing how the lock manager is
implemented
Nonreplicated scheme – each site maintains a local
lock manager which administers lock and unlock
requests for those data items that are stored in that site
Simple implementation involves two message transfers for
handling lock requests, and one message transfer for handling
unlock requests
Deadlock handling is more complex
Single-Coordinator Approach
(with replicated data)
A single lock manager resides in a single chosen site; all lock and
unlock requests are made at that site
Simple implementation
Simple deadlock handling
Possibility of bottleneck
Vulnerable to loss of concurrency controller if single site fails
Multiple-coordinator approach distributes lock-manager function
over several sites
Majority Protocol
Avoids drawbacks of central control by dealing
with replicated data in a decentralized manner
More complicated to implement
Deadlock-handling algorithms must be modified;
possible for deadlock to occur in locking only one
data item
Biased Protocol
Similar to majority protocol, but requests for shared
locks prioritized over requests for exclusive locks
Request for locking at one site for shared locks
Request for locking at all sites for exclusive locks
Less overhead on read operations than in majority
protocol; but has additional overhead on writes
Like majority protocol, deadlock handling is complex
Primary Copy
One of the sites at which a replica resides is
designated as the primary site
Request to lock a data item is made at the primary site of that
data item
Concurrency control for replicated data handled in a
manner similar to that of unreplicated data
Simple implementation, but if primary site fails, the
data item is unavailable, even though other sites may
have a replica
Timestamping
Generate unique timestamps in distributed scheme:
Each site generates a unique local timestamp
The global unique timestamp is obtained by concatenation of
the unique local timestamp with the unique site identifier
Use a logical clock defined within each site to ensure the fair
generation of timestamps
Timestamp-ordering scheme – combine the
centralized concurrency control timestamp scheme
with the 2PC protocol to obtain a protocol that ensures
serializability with no cascading rollbacks
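A minimal sketch of the global-timestamp scheme above: each site pairs a local logical clock with its site identifier, and advances the clock when it sees a larger remote value so that timestamps are generated fairly. The class and method names are illustrative.

```python
class GlobalTimestampGenerator:
    """Globally unique timestamps: (local logical clock, site id), compared lexicographically."""
    def __init__(self, site_id: int):
        self.site_id = site_id
        self.local_clock = 0

    def next_timestamp(self) -> tuple[int, int]:
        self.local_clock += 1
        return (self.local_clock, self.site_id)   # unique because site ids differ

    def observe(self, remote_clock: int) -> None:
        """Advance on seeing a larger remote clock, so no site lags arbitrarily behind."""
        self.local_clock = max(self.local_clock, remote_clock)
```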
Generation of Unique Timestamps
Deadlock Prevention
Resource-ordering deadlock-prevention – define a
global ordering among the system resources
Assign a unique number to all system resources
A process may request a resource with unique number i only if it
is not holding a resource with a unique number greater than i
Simple to implement; requires little overhead
Banker’s algorithm – designate one of the processes in
the system as the process that maintains the information
necessary to carry out the Banker’s algorithm
Also implemented easily, but may require too much overhead
Timestamped Deadlock-Prevention Scheme
Each process Pi is assigned a unique priority number
Priority numbers are used to decide whether a
process Pi should wait for a process Pj; otherwise Pi is
rolled back
The scheme prevents deadlocks
For every edge Pi → Pj in the wait-for graph, Pi has a higher
priority than Pj
Thus a cycle cannot exist
Wait-Die Scheme
Based on a nonpreemptive technique
If Pi requests a resource currently held by Pj, Pi is allowed to wait
only if it has a smaller timestamp than does Pj (Pi is older than Pj)
Otherwise, Pi is rolled back (dies)
Example: Suppose that processes P1, P2, and P3 have timestamps
10, 20, and 30 respectively
(a) If P1 requests a resource R held by P2, then P1 will wait
(b) If P3 requests a resource held by P2, then P3 will be rolled back
(figure: (a) the older P1 (timestamp 10) requests R owned by P2 (timestamp 20); P1 waits and P2 continues — (b) the younger P3 (timestamp 30) requests R owned by P2; P3 is rolled back and P2 continues)
Wound-Wait Scheme
Based on a preemptive technique; counterpart to the wait-die system
If Pi requests a resource currently held by Pj, Pi is allowed to wait only
if it has a larger timestamp than does Pj (Pi is younger than Pj).
Otherwise Pj is rolled back (Pj is wounded by Pi)
Example: Suppose that processes P1, P2, and P3 have timestamps
10, 20, and 30 respectively
(a) If P1 requests a resource held by P2, then the resource will be preempted
from P2 and P2 will be rolled back
(b) If P3 requests a resource held by P2, then P3 will wait
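A combined sketch of both decisions (reproducing the examples above); timestamps act as priorities, with smaller meaning older, and waiting or rollback is only reported, not implemented.

```python
def wait_die(requester_ts: int, holder_ts: int) -> str:
    """Nonpreemptive: an older requester waits, a younger requester dies."""
    if requester_ts < holder_ts:
        return "requester waits"            # requester is older than the holder
    return "requester rolled back"          # requester is younger: it dies

def wound_wait(requester_ts: int, holder_ts: int) -> str:
    """Preemptive: an older requester wounds the holder, a younger requester waits."""
    if requester_ts < holder_ts:
        return "holder rolled back"         # requester is older: the holder is wounded
    return "requester waits"                # requester is younger: it waits

# The slides' example: P1, P2, P3 with timestamps 10, 20, 30
print(wait_die(10, 20))     # P1 requests from P2 -> requester waits
print(wait_die(30, 20))     # P3 requests from P2 -> requester rolled back
print(wound_wait(10, 20))   # P1 requests from P2 -> holder (P2) rolled back
print(wound_wait(30, 20))   # P3 requests from P2 -> requester waits
```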
(figure: (a) the older P1 (timestamp 10) requests R owned by P2 (timestamp 20); R is preempted and P2 is rolled back — (b) the younger P3 (timestamp 30) requests R owned by P2; P3 waits and P2 continues)
Deadlock Detection
Use wait-for graphs
Local wait-for graphs at each local site. The nodes of the graph
correspond to all the processes that are currently either holding
or requesting any of the resources local to that site
May also use a global wait-for graph. This graph is the union of
all local wait-for graphs.
Two Local Wait-For Graphs
Global Wait-For Graph
Deadlock Detection – Centralized Approach
Each site keeps a local wait-for graph
A global wait-for graph is maintained in a single
coordination process
There are three different options (points in time)
when the wait-for graph may be constructed:
1. Whenever a new edge is inserted or removed in one of the local wait-for graphs
2. Periodically, when a number of changes have occurred in a wait-for graph
3. Whenever the coordinator needs to invoke the cycle-detection algorithm
Unnecessary rollbacks may occur as a result of
false cycles
Detection Algorithm Based on Option 3
Append unique identifiers (timestamps) to requests from different sites
When process Pi, at site A, requests a resource from process Pj, at
site B, a request message with timestamp TS is sent
The edge Pi → Pj with the label TS is inserted in the local wait-for graph
of A. The edge is inserted in the local wait-for graph of B only if B has
received the request message and cannot immediately grant the
requested resource
The Algorithm
1. The controller sends an initiating message to each site in the system
2. On receiving this message, a site sends its local wait-for graph to the
coordinator
3. When the controller has received a reply from each site, it constructs a
graph as follows:
(a) The constructed graph contains a vertex for every process in the system
(b) The graph has an edge Pi → Pj if and only if
(1) there is an edge Pi → Pj in one of the wait-for graphs, or
(2) an edge Pi → Pj with some label TS appears in more than one wait-for graph
If the constructed graph contains a cycle ⇒ deadlock
Local and Global Wait-For Graphs
Fully Distributed Approach
All controllers share equally the responsibility for detecting deadlock
Every site constructs a wait-for graph that represents a part of the
total graph
We add one additional node Pex to each local wait-for graph
If a local wait-for graph contains a cycle that does not involve node
Pex, then the system is in a deadlock state
A cycle involving Pex implies the possibility of a deadlock
To ascertain whether a deadlock does exist, a distributed deadlock-detection algorithm must be invoked
Augmented Local Wait-For Graphs
Election Algorithms
Determine where a new copy of the coordinator should be restarted
Assume that a unique priority number is associated with each active
process in the system, and assume that the priority number of
process Pi is i
Assume a one-to-one correspondence between processes and sites
The coordinator is always the process with the largest priority
number. When a coordinator fails, the algorithm must elect that
active process with the largest priority number
Two algorithms, the bully algorithm and a ring algorithm, can be
used to elect a new coordinator in case of failures
Bully Algorithm (1/3)
Applicable to systems where every process can send a message
to every other process in the system
If process Pi sends a request that is not answered by the
coordinator within a time interval T, assume that the coordinator
has failed; Pi tries to elect itself as the new coordinator
Pi sends an election message to every process with a higher
priority number, Pi then waits for any of these processes to
answer within T
If no response within T, assume that all processes with numbers
greater than i have failed; Pi elects itself the new coordinator
If answer is received, Pi begins time interval T’, waiting to receive
a message that a process with a higher priority number has been
elected
Bully Algorithm (2/3)
If no message is sent within T’, assume the process with a higher
number has failed; Pi should restart the algorithm
If Pi is not the coordinator, then, at any time during execution, Pi
may receive one of the following two messages from process Pj
Pj is the new coordinator (j > i). Pi, in turn, records this information
Pj started an election (j > i). Pi sends a response to Pj and begins its
own election algorithm, provided that Pi has not already initiated
such an election
After a failed process recovers, it immediately begins execution of
the same algorithm
If there are no active processes with higher numbers, the
recovered process forces all processes with lower number to let it
become the coordinator process, even if there is a currently
active coordinator with a lower number
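A sketch of one election round at process Pi under the rules above; send(), wait_for_answer(), and wait_for_coordinator() are stand-in stubs for real messaging and timers (here nobody higher ever answers), so this shows the control flow rather than a working protocol.

```python
def send(pid, kind, arg):                       # stand-in for a real message send
    print(f"-> P{pid}: {kind}({arg})")

def wait_for_answer(timeout):                   # stand-in: did any higher process answer within T?
    return False

def wait_for_coordinator(timeout):              # stand-in: id announced within T', else None
    return None

def run_election(my_id, all_ids, T=5.0, T2=10.0):
    """One bully-algorithm election started by process my_id."""
    higher = [p for p in all_ids if p > my_id]
    for p in higher:
        send(p, "election", my_id)              # challenge every higher-numbered process

    if not wait_for_answer(T):                  # nobody higher answered within T
        for p in all_ids:
            if p != my_id:
                send(p, "coordinator", my_id)
        return my_id                            # this process is the new coordinator

    winner = wait_for_coordinator(T2)           # someone higher is alive; wait for its announcement
    if winner is None:                          # it failed in the meantime: restart the algorithm
        return run_election(my_id, all_ids, T, T2)
    return winner

print("new coordinator:", run_election(3, [1, 2, 3, 4, 5]))
```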
Bully Algorithm (3/3)
(figure: election among processes P1–P5 after the coordinator fails — (a) P2 elects itself, (b) P3 and P4 reject P2, (c) P3 and P4 elect themselves, (d) P4 rejects P3, (e) P4 is elected)
Ring Algorithm (1/3)
Applicable to systems organized as a ring (logically or physically)
Assumes that the links are unidirectional, and that processes
send their messages to their right neighbors
Each process maintains an active list, consisting of all the priority
numbers of all active processes in the system when the
algorithm ends
If process Pi detects a coordinator failure, it creates a new active
list that is initially empty. It then sends a message elect(i) to its
right neighbor, and adds the number i to its active list
Ring Algorithm (2/3)
If Pi receives a message elect(j) from the process on the left, it must respond in
one of three ways:
1. If this is the first elect message it has seen or sent, Pi creates a new active
list with the numbers i and j
It then sends the message elect(i), followed by the message elect(j)
2. If i ≠ j, then Pi adds j to its active list and forwards the message elect(j)
to its right neighbor
3. If i = j, then Pi is receiving back its own message elect(i)
The active list for Pi now contains the numbers of all the active processes
in the system
Pi can now determine the largest number in the active list to identify the
new coordinator process
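A sketch of how Pi might handle an incoming elect(j) message under the three cases above; active_list is Pi's list so far (empty unless Pi has already seen or sent an elect message), and send_right is a stand-in for sending to the right neighbor.

```python
def handle_elect(i, active_list, j, send_right):
    """Process Pi reacts to elect(j) arriving from its left neighbor.
    Returns the new coordinator's id once it is known, else None."""
    if not active_list:                       # case 1: first elect message seen or sent
        active_list.extend([i, j])
        send_right(("elect", i))
        send_right(("elect", j))
        return None
    if i != j:                                # case 2: another process's election message
        active_list.append(j)
        send_right(("elect", j))
        return None
    # case 3: i == j, Pi's own message has travelled all the way around the ring,
    # so active_list now holds every active process
    return max(active_list)

# Tiny illustration: P2's own elect message comes back after the ring is traversed
sent = []
coordinator = handle_elect(2, [2, 3, 4, 5, 6], 2, sent.append)
print("new coordinator:", coordinator)        # -> 6
```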
Ring Structure
(figure: processes P1–P7 shown in (a) their physical connection to the network and (b) organized as a logical token-passing ring)
Ring Algorithm (3/3)
(figure: P2 detects the failure of coordinator P7 and starts an election; as the elect message travels around the ring the active list grows from {2} through {2,3}, {2,3,4}, {2,3,4,5}, {2,3,4,5,6} to {2,3,4,5,6,1})
Reaching Agreement
There are applications where a set of processes wish
to agree on a common “value”
Such agreement may not take place due to:
Faulty communication medium
Faulty processes
Processes may send garbled or incorrect messages to other
processes
A subset of the processes may collaborate with each other in an
attempt to defeat the scheme
Faulty Communications
Process Pi at site A, has sent a message to process Pj at site B;
to proceed, Pi needs to know if Pj has received the message
Detect failures using a time-out scheme
When Pi sends out a message, it also specifies a time interval during
which it is willing to wait for an acknowledgment message from Pj
When Pj receives the message, it immediately sends an
acknowledgment to Pi
If Pi receives the acknowledgment message within the specified time
interval, it concludes that Pj has received its message
If a time-out occurs, Pi needs to retransmit its message and wait for an
acknowledgment
Continue until Pi either receives an acknowledgment, or is notified by
the system that B is down
Faulty Communications (Cont)
Suppose that Pj also needs to know that Pi has received
its acknowledgment message, in order to decide on how
to proceed
In the presence of failure, it is not possible to accomplish this task
It is not possible in a distributed environment for processes Pi and
Pj to agree completely on their respective states
Faulty Processes (Byzantine Generals Problem)
Communication medium is reliable, but processes can fail
in unpredictable ways
Consider a system of n processes, of which no more than
m are faulty
Suppose that each process Pi has some private value of Vi
Devise an algorithm that allows each nonfaulty Pi to
construct a vector Xi = (Ai,1, Ai,2, …, Ai,n) such that:
If Pj is a nonfaulty process, then Ai,j = Vj.
If Pi and Pj are both nonfaulty processes, then Xi = Xj.
Solutions share the following properties
A correct algorithm can be devised only if n ≥ 3 × m + 1
The worst-case delay for reaching agreement is proportional to m
+ 1 message-passing delays
Faulty Processes (Cont)
An algorithm for the case where m = 1 and n = 4 requires two
rounds of information exchange:
Each process sends its private value to the other 3 processes
Each process sends the information it has obtained in the first
round to all other processes
If a faulty process refuses to send messages, a nonfaulty
process can choose an arbitrary value and pretend that that
value was sent by that process
After the two rounds are completed, a nonfaulty process Pi can
construct its vector Xi = (Ai,1, Ai,2, Ai,3, Ai,4) as follows:
Ai,i = Vi
For j ≠ i, if at least two of the three values reported for process Pj
agree, then the majority value is used to set the value of Ai,j
Otherwise, a default value (nil) is used
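A sketch of the vector construction for n = 4, m = 1, assuming the two rounds have already produced a reports table; reports[k][j] is the value process Pk claims Pj sent (with reports[i][j] being what Pi itself received in round one), and the data layout is illustrative only.

```python
def build_vector(i, my_value, reports, n=4, default=None):
    """Nonfaulty Pi constructs Xi = (A[i,1], ..., A[i,n]) after both rounds."""
    x = {i: my_value}                               # A[i,i] = Vi
    for j in range(1, n + 1):
        if j == i:
            continue
        # Three values reported for Pj: Pi's own copy plus the two relayed copies
        values = [reports[k][j] for k in range(1, n + 1) if k != j]
        agreed = [v for v in values if values.count(v) >= 2]
        x[j] = agreed[0] if agreed else default     # majority if it exists, else nil
    return x

# Example: P2 is faulty and relays a wrong value for P3 to P1
reports = {
    1: {2: "b", 3: "c", 4: "d"},     # what P1 received directly in round one
    2: {1: "a", 3: "z", 4: "d"},     # what P2 claims it received (it lies about P3)
    3: {1: "a", 2: "b", 4: "d"},
    4: {1: "a", 2: "b", 3: "c"},
}
print(build_vector(1, "a", reports))  # -> {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
```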
End of Chapter 18