2D1353, Algorithms, Data structures and Complexity, 1998
Homework 4: Solutions
1. (a) The first-in first-out (FIFO) queue is described in CLR 11. There
   are two operations defined on a FIFO queue:
     push(Q, x) puts the element x at the end of the queue Q.
     pop(Q) removes the first element from the queue Q and returns
       it. If the queue is empty, returns null.
   The priority queue is described in CLR 7.5. It supports the
   following operations:
     insert(P, x) inserts the element x in the priority queue P.
     minimum(P) returns the element with the smallest key in P.
     extract-min(P) removes and returns the element in P with the
       smallest key. If the queue is empty, returns null.
   Our job is to emulate a FIFO queue using a priority queue. The
   solution is quite simple: assign to each element a key equal to
   the order in which it is put in the queue. The very first element
   is assigned 1, the second 2, etc. We keep track of the next
   available key for push operations, and note that extract-min
   corresponds exactly to pop, since the first element in the queue
   has the lowest key. The following describes our implementation:
   new-queue()
     Let Q be an empty priority queue
     Let Q.nextkey ← 1
     Return Q
   push(Q, x)
     Let element.entry ← x
     Let element.key ← Q.nextkey
     Call insert(Q, element)
     Let Q.nextkey ← Q.nextkey + 1
   pop(Q)
     Return extract-min(Q)
   (This implementation is rather wasteful, since it makes the FIFO
   queue operations run in O(log n) time as compared to the O(1)
   time required for a straightforward implementation. Another
   practical problem is that nextkey will grow without bound.)
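   The implementation above can be sketched in Python, using the
   standard-library heapq module in place of the CLR priority queue
   (an assumption for the sketch; any min-priority queue works):

```python
import heapq

class FifoViaPriorityQueue:
    """FIFO queue emulated with a min-priority queue (heapq)."""
    def __init__(self):
        self._heap = []      # the priority queue
        self._nextkey = 1    # next key to assign on push

    def push(self, x):
        # Key = insertion order, so the oldest element has the smallest key.
        heapq.heappush(self._heap, (self._nextkey, x))
        self._nextkey += 1

    def pop(self):
        # extract-min corresponds exactly to FIFO pop.
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[1]

q = FifoViaPriorityQueue()
q.push("a"); q.push("b"); q.push("c")
print(q.pop(), q.pop(), q.pop())  # a b c
```

   Each operation costs O(log n) heap work, matching the remark above.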
(b) The second part of the problem is to implement a FIFO queue
   using two stacks. (The problem only asks us to implement a
   queue, but we assume that this is a FIFO queue. Other
   assumptions are valid as long as the implementation is correct.)
   A stack supports operations with the same names as the FIFO
   queue, but with last-in first-out semantics:
     push(S, x) puts the element x at the top of the stack S.
     pop(S) removes the element at the top of the stack and returns
       it. If the stack is empty, returns null.
   The key to this problem is to use the two stacks alternately. The
   first stack is used to extract data. The element at the top of
   this stack is the first element in the queue. The second stack is
   used to insert data, and the elements are stored with the first
   element in the queue at the bottom of this stack. If we insert an
   element in this second stack, it will end up at the end of the
   queue.
   For our implementation we use two stacks labeled top-up and
   bottom-up. Furthermore, we will need a variable mode which has
   one of the values push-mode and pop-mode. During operation, the
   whole FIFO queue will be stored in one of the two stacks: if
   mode = push-mode it is stored in the bottom-up stack, and
   otherwise in the top-up stack. Furthermore, we will need a flip
   operation which moves the entire queue from one stack to the
   other. The implementation looks as follows:
   new-queue()
     Let Q.top-up and Q.bottom-up be two empty stacks
     Let Q.mode ← push-mode
     Return Q
   flip(S1, S2)   // Moves all elements from S1 to S2
     While ((x ← pop(S1)) ≠ null) push(S2, x)
   pop(Q)
     If (Q.mode = push-mode) then call flip(Q.bottom-up, Q.top-up)
       and set Q.mode ← pop-mode
     Return pop(Q.top-up)
   push(Q, x)
     If (Q.mode = pop-mode) then call flip(Q.top-up, Q.bottom-up)
       and set Q.mode ← push-mode
     Call push(Q.bottom-up, x)
   Insertion is always performed at the top of the bottom-up stack,
   which is the end of the queue. Extraction is performed at the top
   of the top-up stack, which is the front of the queue. Hence, the
   algorithm works as intended.
   The worst-case running time of both operations is O(n), where n
   is the number of elements in the structure. This worst case
   occurs when we have to change the mode and call the flip routine.
   (Note: Another solution is to keep the queue in one stack the
   whole time and only use the other stack as temporary storage.
   However, the implementation above will be better in practice
   since it only flips the stacks when forced to.)
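   The two-stack scheme can be sketched in Python, with plain lists
   playing the role of the stacks (a direct transcription of the
   pseudocode above):

```python
class FifoViaTwoStacks:
    """FIFO queue built from two Python lists used as stacks."""
    PUSH, POP = "push-mode", "pop-mode"

    def __init__(self):
        self.bottom_up = []   # holds the queue in push-mode (front at bottom)
        self.top_up = []      # holds the queue in pop-mode (front on top)
        self.mode = self.PUSH

    def _flip(self, src, dst):
        # Move every element from src to dst, reversing their order.
        while src:
            dst.append(src.pop())

    def push(self, x):
        if self.mode == self.POP:
            self._flip(self.top_up, self.bottom_up)
            self.mode = self.PUSH
        self.bottom_up.append(x)

    def pop(self):
        if self.mode == self.PUSH:
            self._flip(self.bottom_up, self.top_up)
            self.mode = self.POP
        return self.top_up.pop() if self.top_up else None

q = FifoViaTwoStacks()
q.push(1); q.push(2)
print(q.pop())            # 1
q.push(3)
print(q.pop(), q.pop())   # 2 3
```

   Note that the flip only happens when the mode changes, exactly as
   the note above recommends.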
2. (a) First consider a doubly linked list defined as follows. Each
   node x has a data field, and pointers next(x) and previous(x) to
   the next and previous nodes in the list, respectively. A special
   node h is called the head of the list; for this node the data
   field is void and only the next(h) and previous(h) fields are
   used to point out the first and last elements of the list,
   respectively. The last node in the list has next(x) = h, and the
   first node in the list has previous(x) = h. In an empty list,
   both of the head's pointers point to h itself.
   For all nodes (except the head), we define the value np(x) as the
   bitwise exclusive or of the usual next and previous pointers,
   i.e., np(x) = previous(x) ⊕ next(x). For the head, we still keep
   pointers to the first and last elements in the list (they are
   called next(h) and previous(h), respectively). It is now possible
   to traverse the list using only the np values.
   Suppose that x comes directly before y in the list (y = next(x))
   and that we know the addresses of both x and y. We can then
   compute next(y) as np(y) ⊕ x, since from the definition of np(y)
   this is equal to previous(y) ⊕ next(y) ⊕ x, which in turn is
   equal to next(y), since previous(y) = x so that previous(y) ⊕ x = 0.
   To start the traversal, we must know the addresses of two nodes
   in direct succession. Starting at the head, which still has the
   previous and next pointers, we can traverse the entire list
   forwards by using h and next(h) in place of x and y in the
   calculation above. Starting with h and previous(h) instead
   traverses the list backwards. (Of course, we know that the end of
   the list is reached when we get back to the head.)
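   The stepping rule next(y) = np(y) ⊕ x can be checked in Python.
   Since Python has no raw pointers, small integer "addresses" stand
   in for pointer values (a simulation for illustration only):

```python
# Simulate addresses with small integers: the head lives at address 0,
# and three list nodes at addresses 1, 2, 3, linked 1 <-> 2 <-> 3.
HEAD = 0
nxt = {0: 1, 1: 2, 2: 3, 3: 0}   # ordinary next pointers (used only to build np)
prv = {0: 3, 1: 0, 2: 1, 3: 2}   # ordinary previous pointers

# np(x) = previous(x) XOR next(x) for every non-head node.
np = {x: prv[x] ^ nxt[x] for x in (1, 2, 3)}

# Forward traversal using only np and the head's two real pointers.
order = []
x, y = HEAD, nxt[HEAD]
while y != HEAD:
    order.append(y)
    x, y = y, (np[y] ^ x)   # next(y) = np(y) XOR x, where x = previous(y)

print(order)  # [1, 2, 3]
```

   Replacing the starting pair (HEAD, nxt[HEAD]) with (HEAD, prv[HEAD])
   yields the same walk backwards.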
(b) The following algorithm shows in detail how to traverse the list
   forwards to look for a node with key k. (Assume that the key of a
   node x can be retrieved by key(x).)
   Input: A pointer h to the head of a doubly linked list, and
     a key k to search for.
   Output: A pointer to the node with the key k, or NULL if
     no node with key k is in the list.
   Search(h, k)
     if next(h) ≠ h
       last ← h
       curr ← next(h)
       while curr ≠ h
         if key(curr) = k
           return curr
         temp ← curr
         curr ← last ⊕ np(curr)
         last ← temp
     return NULL
   The following algorithm inserts a new node, initially with no
   contents, at the end of the list.
   Input: A pointer h to the head of a doubly linked list.
   Output: A pointer to the newly created node.
   Insert(h)
     Allocate memory for a new node.
     p ← pointer to allocated memory
     last ← previous(h)
     np(p) ← previous(h) ⊕ h
     previous(h) ← p
     if last = h then next(h) ← p   // the list was empty; p is also the first node
     else np(last) ← np(last) ⊕ h ⊕ p
     return p
   The following algorithm deletes the last node in the list.
   Input: A pointer h to the head of a doubly linked list.
   Output: None.
   Delete(h)
     newlast ← np(previous(h)) ⊕ h
     if newlast = h then next(h) ← h   // the list becomes empty
     else np(newlast) ← np(newlast) ⊕ previous(h) ⊕ h
     previous(h) ← newlast
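   The routines above can be sketched together in Python. Node
   "addresses" are simulated as indices into a node table, with the
   head at index 0 (a simulation for illustration; Python has no raw
   pointers, so this is not the in-place memory trick itself):

```python
class XorList:
    """XOR-linked list; node 'addresses' are indices into self.nodes.
    Slot 0 is the head, which keeps real next/previous pointers."""
    HEAD = 0

    def __init__(self):
        self.nodes = [{"key": None, "next": 0, "prev": 0}]  # head only

    def insert(self, key=None):
        """Insert a new node at the end of the list; return its address."""
        p = len(self.nodes)
        head = self.nodes[self.HEAD]
        last = head["prev"]
        # np(p) = previous(p) XOR next(p) = old-last XOR head-address (0)
        self.nodes.append({"key": key, "np": last ^ self.HEAD})
        if last == self.HEAD:                 # list was empty
            head["next"] = p
        else:                                 # old last now has next = p, not head
            self.nodes[last]["np"] ^= self.HEAD ^ p
        head["prev"] = p
        return p

    def search(self, k):
        """Forward traversal using only np values; return address or None."""
        last, curr = self.HEAD, self.nodes[self.HEAD]["next"]
        while curr != self.HEAD:
            if self.nodes[curr]["key"] == k:
                return curr
            last, curr = curr, last ^ self.nodes[curr]["np"]
        return None

lst = XorList()
for key in "abc":
    lst.insert(key)
print(lst.search("b"))  # address 2, the node holding "b"
```

   Delete would follow the same pattern in reverse: recover the new
   last node as np(previous(h)) ⊕ h, then patch its np value.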
(c) Since starting by following previous(h) or next(h) is the only
   difference between traversing the list forwards and backwards, it
   follows that reversing the list is only a matter of switching the
   values of these two pointers.
   The following neat trick can be used to switch the pointer values
   without the use of a temporary variable:
   Input: A pointer h to the head of a doubly linked list.
   Output: None.
   Reverse(h)
     previous(h) ← previous(h) ⊕ next(h)
     next(h) ← previous(h) ⊕ next(h)
     previous(h) ← previous(h) ⊕ next(h)
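   The XOR swap is easy to check directly on two integer "pointer"
   values in Python:

```python
# XOR swap of two integer "pointers" with no temporary variable.
a, b = 0x1234, 0x5678   # stand-ins for previous(h) and next(h)
a = a ^ b   # a now holds previous XOR next
b = a ^ b   # b = (previous XOR next) XOR next = previous
a = a ^ b   # a = (previous XOR next) XOR previous = next
print(hex(a), hex(b))  # 0x5678 0x1234
```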
3. Suppose that we have a string S = s_{n-1} s_{n-2} ... s_1 s_0. By the
   definition of the hash function, we have

     h(S) = ( sum_{i=0}^{n-1} s_i 2^{pi} ) mod (2^p - 1)        (1)

   where p is a suitable prime number. This expression can be
   simplified using elementary number theory:

     h(S) = ( sum_{i=0}^{n-1} s_i 2^{pi} ) mod (2^p - 1)
       { (a + b) mod c = ((a mod c) + (b mod c)) mod c  for all a, b, c : c > 0 }
          = ( sum_{i=0}^{n-1} ( s_i 2^{pi} mod (2^p - 1) ) ) mod (2^p - 1)
       { ab mod c = (a (b mod c)) mod c  for all a, b, c : c > 0,
         a^b mod c = ((a mod c)^b) mod c  for all a, b, c : b >= 0, c > 0 }
          = ( sum_{i=0}^{n-1} ( s_i (2^p mod (2^p - 1))^i mod (2^p - 1) ) ) mod (2^p - 1)
          = ( sum_{i=0}^{n-1} ( s_i 1^i mod (2^p - 1) ) ) mod (2^p - 1)
          = ( sum_{i=0}^{n-1} s_i ) mod (2^p - 1)
   But this implies that the hash value of a string only depends on
   the characters occurring in the string (and, of course, their
   frequencies). If the string X is a permutation of the string Y,
   all characters occur with the same frequencies in both strings;
   the strings therefore hash to the same value.
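   The collapse of (1) to a plain character sum is easy to verify
   numerically; a small Python check, with p = 31 chosen here purely
   for illustration:

```python
p = 31           # an illustrative prime; any suitable prime behaves the same
M = 2**p - 1

def h(s):
    # h(S) = sum_i s_i * 2^(p*i) mod (2^p - 1), with s_0 the last character.
    return sum(ord(c) * pow(2, p * i, M) for i, c in enumerate(reversed(s))) % M

print(h("och"), h("coh"))                     # equal: permutations collide
print(h("och") == sum(map(ord, "och")) % M)   # True: h reduces to the char sum
```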
   This property of the hash function (1) is undesirable in many (in
   fact, almost all) applications; here are some examples:
   Spell-checking:
   Most spell-checkers depend on hash functions (often several
   different ones) for finding misspelled words. A spell-checker
   which does not detect common transpositions (e.g. coh instead of
   och in Swedish) is not very useful.
   String storage:
   A common use for string hashing is in programs (e.g. compilers)
   that need to store all strings occurring in the input. A hash
   function with the above property would degrade performance, as
   the number of collisions becomes much larger when words which are
   transpositions of each other hash to the same value. In extreme
   cases, the time to search for a string in the hash table can
   become O(n), where n is the number of strings in the table,
   instead of the typical O(1). This can of course happen for any
   hash function, but it is much more likely to happen with the hash
   function (1).
Gunnar Andersson, Henrik Ståhl, Staffan Ulfberg, February 12