Probabilistic Data Structures for Priority Queues (Extended Abstract)

R. Sridhar, K. Rajasekar, and C. Pandu Rangan
Department of Computer Science and Engineering, IIT Madras 600036, India. [email protected]

Appeared in: S. Arnborg, L. Ivansson (Eds.): Algorithm Theory - SWAT'98, LNCS 1432, pp. 143-154, Springer-Verlag Berlin Heidelberg, 1998.

Abstract. We present several simple probabilistic data structures for implementing priority queues. We present a data structure called simple bottom-up sampled heap (SBSH), supporting insert in O(1) expected time and delete, delete minimum, decrease key and meld in O(log n) time with high probability. An extension of SBSH called BSH1, supporting insert and meld in O(1) worst-case time, is presented. This data structure uses a novel "buffering technique" to improve the expected bounds to worst-case bounds. Another extension of SBSH called BSH2, performing insert, decrease key and meld in O(1) amortized expected time and delete and delete minimum in O(log n) time with high probability, is also presented. The amortized performance of this data structure is comparable to that of Fibonacci heaps (in probabilistic terms). Moreover, unlike Fibonacci heaps, each operation takes O(log n) time with high probability, making the data structure suitable for real-time applications.

Keywords: priority queue, probabilistic data structures, decrease key, meld, skip list, bottom-up sampling, amortization, buffering technique

1 Introduction

The implementation of priority queues is a classical problem in data structures [2-6, 8]. Priority queues are extensively used in many algorithms for applications such as network optimization and task scheduling [8]. Deterministic data structures achieving the best performance in the amortized and in the worst-case sense are reported in [8] and [3] respectively. These data structures support delete and delete minimum in O(log n) time and insert, decrease key and meld in O(1) time. However, the latter data structure is extremely complicated and may not be of much practical importance, as noted in [3].

Probabilistic alternatives to deterministic data structures are better in terms of simplicity and the constant factors involved in actual running time. Skip lists [10] and randomized treaps [1] are examples of data structures supporting dictionary maintenance with performance comparable to more complicated deterministic data structures such as AVL trees [7] and red-black trees [13]. Although skip lists were proposed to implement the dictionary operations, it is not hard to see that delete minimum can be executed in O(1) expected time, as the minimum is stored as the first item in the skip list. However, no probabilistic data structure implementing all the operations supported by (say) Fibonacci heaps in comparable (probabilistic) time has been reported.

We first propose a data structure called the simple bottom-up sampled heap (SBSH) that uses bottom-up sampling, as in skip lists. We then propose two extensions of SBSH called BSH1 and BSH2. The performance of these data structures is compared with existing deterministic counterparts in Table 1.
Table 1. Comparison of various data structures for priority queues

Data structure                         delete         delete minimum  insert      decrease key   meld
Binomial heaps [6] (worst case)        O(log n)       O(log n)        O(log n)    O(log n)       O(log n)
Fibonacci heaps [8] (amortized)        O(log n)       O(log n)        O(1)        O(1)           O(1)
Fast meldable heaps [2] (worst case)   O(log n)       O(log n)        O(1)        O(log n)       O(1)
Brodal [3] (worst case)                O(log n)       O(log n)        O(1)        O(1)           O(1)
SBSH (this paper)                      O(log n) whp   O(log n) whp    O(1) exp    O(log n) whp   O(log n) whp
BSH1 (this paper)                      O(log n) whp   O(log n) whp    O(1)        O(log n) whp   O(1)
BSH2 (this paper)                      O(log n) whp   O(log n) whp    O(1) amor   O(1) amor      O(1) amor

whp: with high probability.  exp: expected.  amor: amortized expected.

From Table 1 we can conclude that, if high probability bounds are considered to be as good as worst-case bounds, SBSH and BSH1 have performance comparable to binomial heaps [6] and fast meldable heaps [2] respectively. The amortized performance of BSH2 is comparable to that of Fibonacci heaps [8]. Moreover, BSH2 performs all operations in O(log n) time with high probability, while a Fibonacci heap may take Θ(n) time for a delete minimum operation. This makes BSH2 useful in real-time applications.

2 Simple Bottom-up Sampled Heaps

In this section, we describe a simple data structure called the simple bottom-up sampled heap (SBSH) that performs priority queue operations in time comparable to binomial heaps [6] (see rows 1 and 5 of Table 1).

2.1 Properties

Let S = {x_1, x_2, ..., x_n} be a set of distinct real numbers (the distinctness assumption is made only to simplify the presentation of our results; in general, our data structures can be extended to maintain a multi-set S), where x_1 = min(S). Consider a sequence of sets S = S_1, S_2, ... obtained as follows. The minimum element x_1 is included in all these sets. An element x_i, i > 1, belonging to the set S_j is included in S_{j+1} if the toss of a p-biased coin is a head. Let h be the smallest index such that S_h = {x_1}. Then the sequence S_1, S_2, ..., S_h is said to be a bottom-up sample of S. An equivalent way to obtain a bottom-up sample of S is to include each x_i, for i ≥ 2, in the sets S_1, ..., S_{X_i}, where X_i is a geometric random variable with Pr[X_i = k] = p^{k-1}(1 - p) for all k ≥ 1. The element x_1 is included in the sets S_1, ..., S_h, where h = 1 + max{X_i | 2 ≤ i ≤ n}.

We construct a simple bottom-up sampled heap (SBSH) H from a bottom-up sample as follows. For each element x_i in S_j we create a node labeled x_i^j. These nodes are arranged as a tree. All nodes of the form x_i^1 are called leaf nodes; the leaf node x_i^1 contains x_i, and all other nodes (non-leaf nodes) do not contain any value. We call the nodes of the form x_1^j base nodes.

Fig. 1. A simple bottom-up sampled heap. (Figure omitted in this transcript: it shows an SBSH over the elements x_1, ..., x_14 with values 5, 50, 10, 15, 45, 35, 40, 25, 30, 55, 43, 20, 60, 21, arranged on levels 1 through 6.)

A pointer to the node x_1^1, called the bottom pointer, is stored in H.bottom; it is used to access the heap H. Let ht(H) = h denote the height of the data structure.
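For concreteness, the per-node record and the coin-tossing step behind the bottom-up sample can be sketched in Python as follows. This sketch is an illustration added to this transcript, not code from the paper; all names are illustrative, and the remaining pointer fields (sibling, Bdescendant) are described next.

import random

class Node:
    # One node x_i^j of an SBSH; the sibling and Bdescendant pointers are
    # introduced in the text below and are attached to instances later.
    def __init__(self, value=None, level=1):
        self.value = value     # set only for leaf nodes (level 1)
        self.level = level     # the level j
        self.down = None       # x_i^{j-1}, or Nil (None) for leaf nodes
        self.up = None         # x_i^{j+1} if x_i is in S_{j+1}, else Nil (None)

def draw_height(p=0.5):
    # The geometric variable X_i of the bottom-up sample: an element with
    # draw_height() == k is included in S_1, ..., S_k, so Pr[k] = p^(k-1)(1-p).
    k = 1
    while random.random() < p:   # "head" with probability p
        k += 1
    return k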
xim+1 , if m < k and xim+1 is not present in Sj+1 . xil otherwise, where l is the maximum index ≤ i such that xil is present in Sj+1 . The sibling set of a node x, sib(x) is defined as the set of nodes other than x that can be reached by traversing sibling pointers only. For example, sib(x35 ) = {x38 } and sib(x111 ) = {x112 , x113 }. Bdescendant : We define the set of descendants of a non-leaf node x, DescSet(x), as the set of leaf nodes that can be reached from x.down by a sequence of down and sibling pointers. The pointer Bdescendant of a non-leaf node, x points to the leaf node with minimum value among all the nodes in DescSet(x). The Bdescendant pointer of any leaf node is Nil. In Fig.1, x45 .Bdescendant = x18 and x511 .Bdescendant = x112 . sibling 1. 2. 3. Using this best descendant pointer we can define the weight of any node xji as w(Bdescendant(xji )) if j > 1 w(xji ) = xi if j = 1 In Fig.1, w(x38 ) = 25 and w(x22 ) = 10. Definition 1. The parent function of a node x in a SBSH is defined as if x.up = Nil and x.sibling = x Nil if x.up = Nil parent(x) = x.up parent(x.sibling ) otherwise Notice that parent(xh1 ) = Nil. Since all nodes have exactly one sibling with non-Nil up pointer, the parent function can be evaluated on all nodes. From the definition of sibling set, we observe that (y ∈ sib(x)) ≡ (parent(x) = parent(y)). Let us define parentj as the parent function composed with itself j times, i.e. parent(parentj−1 (x)) if j > 0 j parent (x) = x if j = 0 Observation 2 A leaf node x belongs to DescSet(xji ) iff parentj−1 (x) is xji . From the above observation, we can give a recursive definition for DescSet as DescSet(x) = y∈sib(x.down)∪{x.down} x DescSet(y) if x.down = Nil if x.down = Nil (1) Thus, the best descendant of a node x can be computed as the minimum of the best descendants of nodes in the sibling linked list of x.down. Probabilistic Data Structures for Priority Queues 2.2 147 Operations We shall now consider the various priority queue operations on simple bottomup sampled heaps. The decrease key operation is performed as a delete and an insert operation. Insert. When an element z is to be inserted in the SBSH, we repeatedly toss a coin till a failure is encountered. If we obtain k consecutive successes, then we create a leaf node z 1 and non-leaf nodes z 2 , . . . , z k+1 and connect z j with z j+1 by both up and down pointers for all 1 ≤ j ≤ k. Then, we insert this column of nodes next to the column of base nodes. For all 1 ≤ j ≤ k, the base node xj1 , is detached from its siblings and z j takes the place of xj1 in the list. The node z k+1 is inserted next to xk+1 in the sibling list of xk+1 . The Bdescendant pointers of 1 1 j the nodes z for all 2 ≤ ≤ k + 1 are evaluated using (1). While inserting an element, care must be taken in the following special cases : 1. If z < w(H.bottom), then swap z and w(H.bottom). 2. If k + 1 > ht(H) − 1, then create new levels. Delete. The delete operation removes an element xi given the pointer to the leaf node x1i . Notice that all nodes of the form xji , where j ≥ 1 have to be removed. Deletion consists of the following steps : 1. If x1i = H.bottom, Then Call DeleteMin(H) and exit 2. FindLeft : This operation finds out the node x1i−1 . This is done by traversing the up pointer till Nil is encountered. Let the last node encountered be xki . The right-most descendant of the left sibling 2 of xki is x1i−1 . 3. Remove : This operation removes all nodes of the form xji , where j ≥ 1. 
Delete. The delete operation removes an element x_i, given a pointer to the leaf node x_i^1. Notice that all nodes of the form x_i^j, where j ≥ 1, have to be removed. Deletion consists of the following steps:

1. If x_i^1 = H.bottom, then call DeleteMin(H) and exit.
2. FindLeft: This step finds the node x_{i-1}^1. Starting from x_i^1, the up pointers are traversed until Nil is encountered; let the last node encountered be x_i^k. The right-most descendant of the left sibling of x_i^k (the node y such that y.sibling = x_i^k) is x_{i-1}^1.
3. Remove: This step removes all nodes of the form x_i^j, where j ≥ 1. The sibling linked list of x_i^j is merged with the sibling linked list of parent^{j-1}(x_{i-1}^1) for all 1 ≤ j ≤ k. The Bdescendant pointers of all nodes of the form parent^{j-1}(x_{i-1}^1) are updated during the traversal.
4. Cascade: Nodes of the form parent^j(x_{i-1}^1), where j > k, may have incorrect Bdescendant pointers. The cascade operation updates the Bdescendant pointers of these nodes, until a node with no change in its Bdescendant pointer is encountered.

DeleteMin. The delete minimum operation involves the following steps:

1. Finding the second minimum: The second minimum is found by locating the node with minimum weight among all the siblings of the base nodes; the correctness of this step follows from (1).
2. Removing the minimum element: Let the leaf node x contain the second minimum element. We set w(H.bottom) to w(x) and call Delete(H, x).

Meld. The meld operation combines two heaps destructively into one heap containing the elements of both. The minimum of one heap (say H1) is inserted into the other heap H2. The base nodes of H1 are removed from bottom to top, and at each level the siblings of the removed base node are attached to the rightmost node at the corresponding level of H2, creating new levels in H2 if necessary. Changes in Bdescendant pointers may be cascaded upwards. H2.bottom is returned as the new bottom of the heap.

2.3 Analysis

As the data structure presented is probabilistic, we analyze the random variables using the notions of expected value, high probability and very high probability. An event is said to occur with high probability if it occurs with probability at least 1 - n^{-k}, for all 0 < k = O(1). An event is said to occur with very high probability if it occurs with probability at least 1 - k^{-n}, for all 1 < k = O(1). The height of every element in an SBSH is a geometric random variable. Moreover, every sibling linked list contains a geometric random number of nodes. Using Chernoff bounds [11] to analyze the sum of independent geometric random variables, we can prove the following theorem.

Theorem 3. SBSH is a data structure on pointer machines that
1. has height O(log n) with high probability,
2. performs the insert operation in O(1) expected time,
3. performs all operations in O(log n) time with high probability,
4. has O(n) space complexity with very high probability.

3 BSH1

In this section, we present an extension of SBSH called BSH1. First, we present a lazy meld routine. Then, we improve the O(1) expected bounds on running time to constant worst-case bounds.

3.1 Performing the meld operation in a lazy manner

We now present an extension of SBSH that performs the meld operation in O(1) expected time. We define an extra pointer, top, for the data structure; it points to the node x_1^h. We maintain many simple bottom-up sampled heaps, called sub-heaps. The bottom and top pointers of one of these sub-heaps, called the primary sub-heap, are used to access the data structure. The other sub-heaps (secondary sub-heaps) are accessible through representative nodes. The following property holds for every sub-heap in the data structure.

Property 4. The leaf node storing the minimum in a sub-heap cannot be a representative node.

Any leaf node other than those corresponding to the minimum element of a sub-heap in the data structure can be a representative node.
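As a minimal sketch of the extra bookkeeping this lazy variant adds on top of the SBSH nodes sketched earlier (a top pointer per sub-heap and a representative flag on leaf nodes), one might write the following; the field names are hypothetical and this is not the paper's code.

class LazySBSH:
    # Primary sub-heap of Sect. 3.1; secondary sub-heaps hang off it through
    # representative leaf nodes (see the meld description below).
    def __init__(self):
        self.bottom = None   # leaf node x_1^1 of the primary sub-heap
        self.top = None      # topmost base node x_1^h of the primary sub-heap

def is_representative(node):
    # Leaf nodes that stand for a melded-in secondary sub-heap carry a flag.
    return getattr(node, "representative", False)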
When a meld operation is performed on heaps H1 and H2, the two heaps are not actually melded. Instead, the heap with the larger minimum (say H2, w.l.o.g.) is represented by a node referred to as a "representative" node, and this node is inserted into H1. The down pointer of this node is set to H2.top. Since representative nodes are always leaf nodes, the condition x.down = Nil ∨ x.representative = True specifies that node x is a leaf node. Similarly, H2.top.up is set to the corresponding representative node. It can be seen that the condition for a node x to be a topmost base node in some sub-heap is x.up = Nil ∨ x.up.representative = True.

Since we perform the meld operation in a lazy manner, the other operations have to be modified appropriately. The insert operation is performed as in SBSH. Thus we perform the insert and meld operations in O(1) expected time. The following steps are performed by Delete(H, x):

1. If x = H.bottom, then call DeleteMin(H) and exit.
2. Delete x as in SBSH. The cascade operation is performed until we encounter a node y whose Bdescendant pointer is not changed or which is a topmost base node in a sub-heap.
3. If parent(y).representative = True, then the delete operation was performed on the minimum element of a secondary sub-heap. In this case, we remove the representative node y.up and meld the sub-heap containing y with the primary sub-heap as in SBSH.

During the operation DeleteMin(H), the following steps are performed and the initial value of w(H.bottom) is reported:

1. Find the leaf node (say x) corresponding to the second minimum and replace w(H.bottom) by w(x).
2. If x.down = Nil, then x is a non-representative node; in this case Delete(H, x) is performed. Otherwise, Delete(H, x.down.Bdescendant) is executed. The correctness of this step follows from Property 4.

Lemma 5. After any sequence of operations, Property 4 is satisfied by the data structure.

Lemma 6. The number of representative nodes in the data structure is less than the number of non-representative nodes.

From the above lemma, we can infer that the asymptotic space complexity of the data structure is the same as that of SBSH.

3.2 Buffered update

In this section we improve the O(1) expected bounds on the insert and meld operations to O(1) worst-case bounds, using "buffered update".

Maintaining the minsibling pointer. BSH1 is based on the heap described in Sect. 3.1. An additional pointer, minsibling, is defined for every base node. The pointer x.minsibling points to the minimum node in ⋃_{y ∈ sib(x)} DescSet(y). This causes slight modifications in the Delete and DeleteMin operations: every time a node is removed from the sibling linked list of a base node, the minsibling pointer of the base node is updated using the Bdescendant pointers of the siblings (a sketch is given below). We also maintain a pointer lastsibling in each base node x, which points to the node y having y.sibling = x.

During an insert operation, either a base node loses all its siblings, or the inserted node is added to the base node's sibling linked list. In the first case, the minsibling pointer is used to compute the Bdescendant of the inserted node, and the minsibling pointer is set to Nil. In the second case, the minsibling pointer is set to the node with lesser weight among itself and the Bdescendant of the inserted node. So at each level, the insert operation takes constant time (say one unit of work). Therefore the complexity of insert is O(X), where X is a geometric random variable.
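The recomputation of a minsibling pointer mentioned above can be sketched as follows, continuing the earlier illustrative Python fragments (siblings stands for the current siblings of the base node, and non-leaf nodes are assumed to carry a Bdescendant field already set; this is a sketch, not the paper's code).

def recompute_minsibling(base, siblings):
    # base.minsibling = the lightest leaf in the union of DescSet(y) over the
    # siblings y of this base node; recomputed after a sibling is removed.
    best = None
    for y in siblings:
        leaf = y if y.level == 1 else y.Bdescendant   # the minimum of DescSet(y)
        if best is None or leaf.value < best.value:
            best = leaf
    base.minsibling = best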
Performing Insert and Meld in constant time. Each sub-heap is associated with an auxiliary double-ended queue, accessible as x.queue, where x is the topmost base node of the sub-heap. In particular, H.top.queue is the queue associated with the primary sub-heap. These queues, or buffers, contain unfinished insertion tasks. Every entry e in the buffer corresponds to the insertion of an element into some number of levels (say l) of the heap; the entry e requires l units of work for its completion. We now present the constant-time insert operation.

Function Insert(z)
1. Create a leaf node z^1 and insert it next to H.bottom.
2. Add an entry to the queue corresponding to the insertion of the non-leaf nodes.
3. Perform t units of work by removing tasks from the queue.
4. If the last task is not fully completed,
5.     place the remaining task at the front of the queue.
6. Return a pointer to z^1.

The meld operation involves the insertion of a node into the heap; again, this can be done as above. Therefore meld can be done in constant time. We now state a tail estimate on the work held in a buffer after any sequence of operations.

Lemma 7. The work held in the buffer associated with any sub-heap is O(1) expected and O(log n) with high probability, provided pt > 1.

Delete and DeleteMin. When an element is deleted (due to Delete or DeleteMin) from a sub-heap, all operations in the buffer associated with that sub-heap are performed first. Once the buffer is emptied, the element is deleted as in Sect. 3.1. The extra overhead is proportional to the work held in the buffer, which is O(log n) with high probability (from Lemma 7). The space occupied by the buffers is proportional to the number of nodes yet to be created in the data structure; therefore the asymptotic bound on the space complexity remains the same.

Theorem 8. BSH1 is a data structure on pointer machines that
1. performs insert and meld operations in O(1) worst-case time,
2. performs delete, delete minimum and decrease key operations in O(log n) time with high probability,
3. has O(n) space complexity with very high probability.

4 BSH2

In this section we present an extension of SBSH, called BSH2, that performs Insert and DecreaseKey operations in O(1) expected time and Delete, DeleteMin and Meld in O(log n) time with high probability. In Section 4.3, we prove that the amortized cost of Insert, DecreaseKey and Meld is O(1) expected.

In addition to the up, down, sibling and Bdescendant pointers, we define an Hancestor pointer for all leaf nodes. This pointer points to the highest ancestor of the node whose Bdescendant pointer points to this leaf node. All non-leaf nodes have their Hancestor pointer equal to Nil. Formally, a leaf node x has y as its highest ancestor if
1. y.Bdescendant = x, and
2. parent(y) = Nil or parent(y).Bdescendant ≠ x.

The number of elements in a heap H is stored in H.N. Every node x stores its level number in x.level. An extendible array H.Display stores a pointer to the base node at level j in H.Display[j].

4.1 Randomizing the order of siblings

The following property is always satisfied by the data structure:

Property 9. The probability that a node x in a sibling linked list is to the left of another node y in the same linked list (x is said to be to the left of y if a node with a non-Nil up pointer is encountered while traversing the sibling linked list from y to x) is 1/2, and it is independent of the weights of the nodes and of the order in which nodes appear in other sibling linked lists.

We define a Shuffle operation that restores Property 9 after the insertion of a node x in the first position of a sibling linked list, i.e., when x.up is non-Nil.
This places the first node of a sibling linked list in a randomly chosen position. The algorithm for Shuffle is given below.

Procedure Shuffle(x)
1. k ← number of siblings of node x.
2. Choose uniformly at random an integer i ∈ [0, k].
3. If i ≠ 0, then
4.     parent(x).down ← x.sibling.
5. Insert x after the ith sibling in the circular linked list.

4.2 Operations

The operations Delete and DeleteMin are performed as in SBSH.

Insert. Let z be the element to be inserted. The insertion is performed as in SBSH. Let the nodes z^1, z^2, ..., z^{k+1} correspond to the inserted element. The Shuffle operation is then performed on each inserted node z^1, z^2, ..., z^{k+1}; therefore Property 9 is satisfied.

DecreaseKey. We now describe the DecreaseKey(H, x, b) operation, which reduces the weight of a node x to b in a heap H. Let the node y be the highest ancestor of x, i.e., y = x.Hancestor, and let z be the parent of y. Notice that w(y) is equal to w(x). Let the weight of z be c, and let y′ be the sibling of y with weight c. We now consider all cases that can arise while performing a DecreaseKey operation and present the algorithm for each of these cases.

Case I: b > c. We reduce w(x) to b. Notice that no change is required in any of the Bdescendant and Hancestor pointers.

Case II: b < c and y′ is to the left of y in their sibling linked list. We reduce w(x) to c and call DecreaseKey(H, z.Bdescendant, b).

Case III: b < c and y′ is to the right of y in their sibling linked list. We delete all nodes above y, changing sibling, Bdescendant and Hancestor pointers appropriately. We remove the sub-tree rooted at y and insert it next to H.Display[y.level], as in the insert operation. In other words, the new non-leaf nodes are placed in levels y.level + 1 to y.level + X next to the corresponding base nodes, where Pr[X = k] = p^k(1 - p) for all k ≥ 0.

Care should be taken to handle the following special cases:
1. If x = H.bottom, then we reduce w(x) to b and exit.
2. If b < w(H.bottom), then we swap b and w(H.bottom) and continue the decrease key operation.

Meld. The meld operation in BSH2 is an extension of the meld operation in SBSH. Care is taken to maintain Property 9. Consider the operation Meld(H1, H2) on two heaps H1 and H2. W.l.o.g. we assume that H1 is the heap with the smaller number of elements, i.e., H1.N < H2.N. The following steps are performed (a sketch of the randomized concatenation in step 2 is given at the end of this subsection):

1. An element with value w(H2.bottom) is inserted into H1.
2. For each level l from 1 to ht(H1) - 1, we merge the rightmost sibling linked list at level l in H1 (say L1) with the sibling linked list of the base node of H2 at level l (say L2). The order in which L1 and L2 are concatenated is random, i.e., the resulting list is L1 L2 or L2 L1 with equal probability.
3. If ht(H1) < ht(H2), then we replace H1.bottom.Hancestor by H2's base node at level ht(H1). Then the first ht(H1) entries of the array H2.Display are updated and the array H1.Display is destroyed. We return H2 as the resulting heap after setting H2.bottom to H1.bottom.
4. If ht(H1) ≥ ht(H2), then the array H2.Display is destroyed and H1 is returned as the resulting heap.

Lemma 10. Property 9 is satisfied after any sequence of Insert, Delete, DeleteMin, DecreaseKey and Meld operations.

From the above lemma, we can show that all operations defined for BSH2 are correct.
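The randomized concatenation used in step 2 of Meld can be illustrated as follows, with plain Python lists standing in for the circular sibling lists; this is an illustration only, not the paper's code.

import random

def random_concat(L1, L2):
    # Concatenate the two sibling lists in a random order, so that the relative
    # order of nodes coming from different heaps stays unbiased (Property 9).
    return L1 + L2 if random.random() < 0.5 else L2 + L1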
4.3 Analysis

The running time analysis of BSH2 is similar to that of SBSH. The overhead involved in "shuffling" the first node of a sibling linked list does not change the asymptotic bound on the insert operation, because of the following lemma.

Lemma 11. The Shuffle operation takes O(1) expected time.

The following result on the complexity of the DecreaseKey operation can be inferred from Property 9.

Lemma 12. During the DecreaseKey operation, the number of times Case II arises is bounded by a geometric random variable.

From the above lemma, we can show that the decrease key operation takes O(1) expected time. The complexity of the meld operation in BSH2 is O(ht(H1)) expected. It can be noted that ht(H1) < ht(H2) with probability at least 1/2. Using amortized analysis, we can exploit this property to obtain a constant expected amortized bound on the insert, decrease key and meld operations.

Theorem 13. BSH2 is a data structure implementable on a RAM that
1. performs insert and decrease key operations in O(1) expected time,
2. performs all operations in O(log n) time with high probability,
3. performs insert, decrease key and meld operations in amortized O(1) expected time,
4. has O(n) space complexity with very high probability.

5 Conclusion and Open Problems

In this paper, we have presented three simple and efficient data structures to implement single-ended priority queues. In practice, these data structures may be better than their deterministic counterparts due to the smaller constants in the asymptotic bounds. Improving the amortized constant expected bounds of BSH2 to expected bounds or worst-case bounds may result in a data structure theoretically comparable to, but practically better than, Brodal's data structure [3].

References

1. C. R. Aragon and R. G. Seidel. Randomized search trees. Proc. 30th Ann. IEEE Symposium on Foundations of Computer Science, 540-545 (1989)
2. Gerth Stølting Brodal. Fast meldable priority queues. Proc. 4th International Workshop, WADS, 282-290 (1995)
3. Gerth Stølting Brodal. Worst-case efficient priority queues. Proc. 7th Ann. ACM-SIAM Symposium on Discrete Algorithms, 52-58 (1996)
4. Giorgio Gambosi, Enrico Nardelli, Maurizio Talamo. A pointer-free data structure for merging heaps and min-max heaps. Theoretical Computer Science 84(1), 107-126 (1991)
5. James R. Driscoll, Harold N. Gabow, Ruth Shrairman and Robert E. Tarjan. Relaxed heaps: an alternative approach to Fibonacci heaps with applications to parallel computing. Comm. ACM 31(11), 1343-1354 (1988)
6. Jean Vuillemin. A data structure for manipulating priority queues. Comm. ACM 21(4), 309-315 (1978)
7. Knuth, D. The Art of Computer Programming, Volume 3: Sorting and Searching. Addison-Wesley, Reading, Mass., 1973
8. Michael L. Fredman and Robert E. Tarjan. Fibonacci heaps and their uses in improved network optimization algorithms. Proc. 25th Annual Symposium on Foundations of Computer Science, 338-346 (1984)
9. Michiel Smid. Lecture Notes: Selected Topics in Data Structures. Max-Planck Institute for Informatics, Germany.
10. W. Pugh. Skip lists: a probabilistic alternative to balanced trees. Comm. ACM 33, 668-676 (1990)
11. P. Raghavan. Lecture Notes on Randomized Algorithms. Technical Report RC 15340, IBM T. J. Watson Research Center (1989)
12. Rolf Fagerberg. A note on worst case efficient meldable priority queues. Technical Report, Odense University Computer Science Department Preprint 1996-12
13. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest. Introduction to Algorithms. The MIT Press, Cambridge, Massachusetts (1989)