Efficient Density Clustering For Large Spatial Data Using Hawaiian Metrics

Fei Pan, Baoying Wang, Yi Zhang, Dongmei Ren, Xin Hu, William Perrizo
Computer Science Department
North Dakota State University
Fargo, ND 58105

Patents are pending on the P-tree technology. This work was partially supported by GSA Grant ACT# K96130308.
Abstract
Data mining for spatial data has become increasingly important as more and more organizations are
exposed to spatial data from such sources as remote sensing, geographical information systems (GIS),
astronomy, computer cartography, environmental assessment and planning, bioinformatics, etc.
Recently, density-based clustering methods, such as DENCLUE, DBSCAN, and OPTICS, have been
published and recognized as powerful clustering methods for data mining. These approaches have
run-time complexity O(n log n) when using spatial index techniques such as the R+-tree and grid cells.
However, these methods are known to lack scalability with respect to dimensionality. In this paper,
we develop a new efficient density-based clustering algorithm using Hawaiian metrics and P-trees.
The fast P-tree ANDing operation facilitates the calculation of the density function within Hawaiian
rings. The average run-time complexity of our algorithm for spatial data in d dimensions is O(dn√n).
Our proposed method has cardinality scalability comparable to other density methods for small and
medium-sized data, but superior dimensional scalability.
Keywords: Density Clustering, Hawaiian Metrics, Peano Count Trees, Spatial Data
1. Introduction
With the rapid growth of large quantities of spatial data collected in various application areas, such as
remote sensing, geographical information systems (GIS), astronomy, computer cartography,
environmental assessment and planning, efficient spatial data mining methods are in great demand.
Density-based clustering algorithms have been recognized as a powerful clustering approach capable of
discovering clusters of arbitrary shape as well as dealing with noise and outliers, and they are widely
used in the mining of large spatial data.
There are two major approaches for density-based methods. The first approach is represented by
DENCLUE [3]. It exploits a density function, e.g., a step function or Gaussian function, to measure the
density in the attribute metric space. Clusters are identified by determining density attractors, and thus
clusters of arbitrary shape can be easily determined from the overall density function. This algorithm
scales well, with run-time complexity O(n log n), by means of grid-cell techniques. However, it requires
careful selection of the density parameter σ and noise threshold ξ, which may significantly influence
the quality of the clustering results [10].
The second approach calculates the density of all data points and groups them based on density
connectivity. Typical algorithms in this approach include DBSCAN [6] and OPTICS [8]. DBSCAN first
defines a core object as a point whose neighborhood contains at least a specified number of data
points. All the data points reachable through a chain of overlapping core objects define a cluster. The run
time complexity of DBSCAN is O(n log n) for spatial data when a spatial index is used; otherwise, it is
O(n²) [10]. OPTICS can be considered an extension of DBSCAN that does not rely on a single global
density. It assumes each cluster has its own density parameter and uses a random variable to learn its
probability distribution. It has the same run-time complexity as DBSCAN, that is, O(n log n) if a
spatial index is used and O(n²) otherwise.
However, the spatial index techniques, such as R tree, R+ tree, and grid cell, are known to be suitable
for low dimensional data sets. They perform well in 2-3 dimensions. In high dimensional spaces they
exhibit poor behavior in the worst case and in typical cases as well [13]. The reason is that the data
space becomes sparse at high dimensionalities causing the bounding regions to become large.
Recently, a new distance metric, the Hawaiian Metric, has been proposed for data mining [2]. It
exploits a new lossless data structure, called the Peano Count Tree (P-tree) [1]. The performance of
Hawaiian metric data mining using P-trees is shown to be fast and accurate [2].
In this paper, we propose an efficient density clustering algorithm using Hawaiian metrics and show
that the method scales well with respect to dimension. The basic idea is to make use of P-trees and
Hawaiian metrics to calculate the density function of a point in O(√n) time on average. The fast P-tree
ANDing operation is used to obtain the density function within any specified Hawaiian ring neighborhood.
Furthermore, we adopt a look-around pruning method to combine the density calculation with a hill-climbing
technique. The overall run-time complexity is O(dn√n) for a d-dimensional data set, on average.
Experimental results show that the algorithm works efficiently on large-scale, high-dimensional spatial
data, outperforming other density methods significantly.
This paper is organized as follows. In section 2, the Hawaiian metrics and P-tree techniques are briefly
reviewed. In section 3, we introduce the new efficient density clustering method using the Hawaiian metric,
and then analyze its efficiency in terms of time complexity. Finally, we compare our method with other
density methods experimentally in section 4 and conclude the paper in section 5.
The symbols used in this paper are given in Table 1.
Table 1. Symbols and Notations

Symbol   Definition
X        Spatial pixel, X = {x1, x2, ..., xn}, where n is the number of attributes
m        Maximal bit length of attributes
r        Radius of a Hawaiian ring
Pi,j     Basic P-tree for bit j of attribute i
Pi,j'    Complement of Pi,j
bi,j     The jth bit of the ith attribute of x
Pxi,j    Operator P-tree of the jth bit of the ith attribute of x
Pvi,r    Value P-tree within ring r
Px,r     Tuple P-tree within ring r
Qid      Quadrant identification
2. Review of Hawaiian Metrics and P-trees
Distance metrics (or similarity functions) are key elements of clustering algorithms and therefore play
an important role in data mining. A number of distance metrics have been utilized so far, of which some
of the common ones are the Euclidean distance, Manhattan distance, Max distance, and Lq-distance. In this
section, we first briefly review the Hawaiian metrics, and then the Peano Count Tree (P-tree) [1] data
structure and the related P-tree algebra.
2.1. Hawaiian Metrics
Many distance metrics have been used in clustering algorithms. Representative metrics include the
Euclidean distance, Manhattan distance, and Max distance (Minkowski or Lq-metrics with q = 2, 1, and
∞, respectively). For two data points X = (x1, x2, x3, ..., xn-1) and Y = (y1, y2, y3, ..., yn-1), the Euclidean
distance function is defined as $d_2(X, Y) = \sqrt{\sum_{i=1}^{n-1} (x_i - y_i)^2}$. It can be generalized to the Lq or
Minkowski distance function $d_q(X, Y) = \sqrt[q]{\sum_{i=1}^{n-1} w_i \, |x_i - y_i|^q}$, where q is a natural number. If q = 2,
this gives the Euclidean function. If q = 1, $d_1(X, Y) = \sum_{i=1}^{n-1} |x_i - y_i|$ gives the Manhattan distance.
The Max function is defined as $d_\infty(X, Y) = \max_{i=1}^{n-1} |x_i - y_i|$.
The Hawaiian metric, also called the HOBBit metric [2], is a bitwise distance function. It measures
distance based on the most significant consecutive matching bit positions starting from the left
(Position Of Inequality, or POI – leading to the Hawaiian terminology). Hawaiian metric difference
measurements are based on the following observation: when comparing two values bitwise from left
to right, once a difference is found, the position of that first difference reveals much about the
magnitude of the difference between the two values. Let Ai be a non-negative fixed-point attribute in
tabular data sets R(A1, A2, ..., An). For each attribute Ai, the values are represented as fixed-point binary
numbers x, i.e., x = x(m)x(m-1)...x(1)x(0).x(-1)...x(-n). Let X and Y be two values of Ai. The position
of inequality (POI), or Hawaiian similarity, between X and Y is defined by

$m(X, Y) = \max \{\, i \mid x_i \oplus y_i = 1 \,\}$

where $x_i$ and $y_i$ are the ith bits of X and Y respectively, and $\oplus$ denotes the XOR (exclusive OR)
operation. In other words, m is the left-most position at which X and Y differ. The Hawaiian distance
between two tuples X and Y is defined by

$d(X, Y) = 2^{m(X, Y)}$

For two values X and Y of a signed fixed-point binary attribute Ai, the Hawaiian distance between X and Y
is the same as above if X and Y are of the same sign. If X and Y are of opposite sign, then the distance is
$d(X, Y) = d(X, 0) + d(Y, 0)$. Hawaiian metric data mining uses a data structure, called a Peano
Count Tree (P-tree), to facilitate this computation for spatial data. Details about P-trees are
given in Section 2.2.
2.2. Peano Count Trees (P-trees)
The Peano Count Tree (P-tree) is a tree structure organizing any tabular data set with fixed point
numerical values (categorical attributes can be coded to fixed point numeric and floating point
attributes can be intervalized using their exponents). Each attribute is split into separate files, one for
each bit position. A basic P-tree, Pi,j, is then the P-tree for the jth bit of the ith attribute. Given a fixed-point
attribute of m bits, there are m basic P-trees, one for each bit position. The complement of a basic
P-tree Pi,j is the P-tree associated with the column of bit complements, and is denoted Pi,j'. Figure 1
shows an example of a basic P-tree construction and its complement P-tree.
[Figure 1. An example of P-tree construction: a) an 8x8 bSQ file; b) the basic P-tree, with root count 36 and second-level quadrant counts 16, 7, 13, 0; c) the complement P-tree, with root count 28 and second-level quadrant counts 0, 9, 3, 16.]
In Figure 1, we are assuming the original table is one in which each row is a pixel in an image and
each attribute is a reflectance value (e.g., of Red, Green, Blue, etc.) ranging from 0 to 255. The 8x8
bit array on the left of Figure 1 is one bit of one attribute (a bit-Sequential or bSQ file). We have
arranged the bits spatially (rather than as a single column of bits) so that their pixel of origin is clear.
The corresponding Peano Count Tree (P-tree), showing the hierarchy of quadrant 1-bit counts, is given in
the middle. The root count is 36, and the counts at the next level, 16, 7, 13, and 0, are the 1-bit counts for
the four major quadrants (in Peano or Z order). Since the first and last quadrants are made up entirely of
1-bits and 0-bits respectively, we do not need sub-trees for these two quadrants. The complement is
shown on the right; it provides the 0-bit counts for each quadrant. We note here that we identify
quadrants using a quadrant identifier, Qid: the string of successive sub-quadrant numbers (0, 1, 2, or 3 in
Z or Peano order), separated by "." (as in IP addresses). Thus, the Qid of the bolded and underlined
quadrant in Figure 1 is 2.2.
P-tree ANDing is one of the most important and frequently used algebraic operations on P-trees. The
ANDing operation is executed using Peano Mask Trees (PM-trees), a lossless, compressed variant of
Peano Count Trees which uses simple masks instead of root counts at each internal node. In PM-trees,
three-valued logic, i.e., 0, 1, and m (for mixed), is used to represent pure-0, pure-1, and non-pure (or mixed)
quadrants, respectively. The bitwise AND of two bit columns can be done efficiently using PM-trees, as
illustrated in Figure 2.
[Figure 2. P-tree ANDing operation: a) PM-tree 1; b) PM-tree 2; c) the ANDing result.]
Figure 2 shows the PM-tree result (on the right) of the ANDing of PM-tree1 (on the left) and PM-tree2
(in the middle). There are several ways to perform P-tree ANDing. The basic way is to perform
ANDing level-by-level starting from the root level. The rules are summarized in Table 2.
Table 2. P-tree AND rules

Operand 1   Operand 2   Result
0           0           0
0           1           0
0           m           0
1           1           1
1           m           m
m           m           0 if all four sub-quadrant results are 0; otherwise m
In Table 2, operand 1 and operand 2 are two P-trees (or sub-trees). ANDing a pure-0 tree with any
P-tree results in a pure-0 tree. ANDing a pure-1 tree with any P-tree, P2, results in P2. ANDing two
mixed trees results in either a mixed tree or a pure-0 tree.
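A minimal sketch of these AND rules (ours, not the authors' implementation) over a three-valued PM-tree node type is given below: pure-0 and pure-1 operands are resolved without recursion, and a mixed-by-mixed AND recurses on the four sub-quadrants, collapsing to pure-0 when every result is pure-0.

#include <array>
#include <cstdio>
#include <memory>

// Three-valued PM-tree node: pure-0, pure-1, or mixed with four sub-quadrants.
enum class Kind { Pure0, Pure1, Mixed };

struct PMNode {
    Kind kind = Kind::Pure0;
    std::array<std::shared_ptr<PMNode>, 4> child{};  // used only when Mixed
};
using PMPtr = std::shared_ptr<PMNode>;

PMPtr pure(Kind k) { auto p = std::make_shared<PMNode>(); p->kind = k; return p; }

// Level-by-level AND: 0 AND anything = 0; 1 AND P = P; m AND m recurses on
// the four sub-quadrants and collapses to pure-0 when every result is pure-0.
PMPtr pmAnd(const PMPtr& a, const PMPtr& b) {
    if (a->kind == Kind::Pure0 || b->kind == Kind::Pure0) return pure(Kind::Pure0);
    if (a->kind == Kind::Pure1) return b;
    if (b->kind == Kind::Pure1) return a;
    auto out = pure(Kind::Mixed);
    bool allZero = true;
    for (int q = 0; q < 4; ++q) {
        out->child[q] = pmAnd(a->child[q], b->child[q]);
        if (out->child[q]->kind != Kind::Pure0) allZero = false;
    }
    return allZero ? pure(Kind::Pure0) : out;
}

int main() {
    // Two tiny mixed trees whose sub-quadrants are pure nodes.
    auto a = pure(Kind::Mixed);
    a->child[0] = pure(Kind::Pure1); a->child[1] = pure(Kind::Pure0);
    a->child[2] = pure(Kind::Pure1); a->child[3] = pure(Kind::Pure1);
    auto b = pure(Kind::Mixed);
    b->child[0] = pure(Kind::Pure0); b->child[1] = pure(Kind::Pure1);
    b->child[2] = pure(Kind::Pure1); b->child[3] = pure(Kind::Pure0);
    auto r = pmAnd(a, b);
    std::printf("result kind: %d (0 = pure-0, 1 = pure-1, 2 = mixed)\n",
                static_cast<int>(r->kind));
    return 0;
}

In the example in main, only one pair of corresponding sub-quadrants is pure-1 in both operands, so the result is a mixed tree.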
By using the P-tree logical AND and complement operations, the Hawaiian distance can be computed very
quickly. The detailed algorithm for the Hawaiian ring based P-tree operations is discussed in the
following section.
3. The P-tree Hawaiian Density Based Clustering Algorithm
Generally speaking, density-based clustering algorithms group the data objects into a set of connected
dense components separated by regions of low density. A cluster is regarded as a connected dense
region of objects, which grows in any direction that density leads. Therefore, density-based methods are
capable of discovering arbitrarily shaped clusters and deal well with noise and outliers.
The main drawback of existing density-based algorithms is slowness and lack of scalability. Typical
density-based algorithms, such as DBSCAN, OPTICS, and DENCLUE, exploit different approaches to
improve speed and scalability. In this paper, we propose a P-tree Hawaiian ring based density
clustering algorithm, which we will refer to as PHD-Clustering (P-tree, Hawaiian, Density Clustering).
The basic idea is to exploit Hawaiian rings (all points lying between an inner Hawaiian distance
threshold and an outer Hawaiian distance threshold from a center point) and P-trees to get the density
function in one step. The fast P-tree ANDing operation is used to get the density function within any
specified Hawaiian ring neighborhood. We also adopt a look-around pruning method to combine the density
calculation and simple hill climbing. The detailed algorithm is given in sections 3.1 and 3.2.
In section 3.1, we describe calculation of the density function using P-trees and Hawaiian rings. In
section 3.2, the algorithm for finding density attractors is discussed. Finally, the efficiency of our
algorithm is analyzed in terms of time complexity.
3.1. Calculation of the Density Function Using P-trees and Hawaiian Rings
Definition 3.1.1. (Hawaiian Ring) The Hawaiian ring of radii r1 and r2, centered at c, is defined as
R(c, r1, r2) = {x ∈ X | r2 ≥ d(c, x) ≥ r1}, where d(c, x) is the Hawaiian distance. Figure 3 shows a diagram of
a Hawaiian ring R(c, r1, r2) in a spatial data set X.
[Figure 3. Diagram of a Hawaiian Ring]
The calculation of densities within a Hawaiian ring is accomplished by P-tree ANDing. For any data
point x, let x = b1,1 b1,2 ... bn,m, where bi,j is the jth bit of the ith attribute of x. The bit P-trees for x,
Pxi,j, are then defined by

Pxi,j = Pi,j    if bi,j = 1
Pxi,j = Pi,j'   otherwise

The attribute (value) P-trees for x within the Hawaiian ring R(x, 0, r) are then defined by

Pvi,r = Pxi,1 & Pxi,2 & ... & Pxi,r    (i = 1, 2, 3, ..., n)

The tuple P-tree for x within the Hawaiian ring R(x, 0, r) is then defined by

Px,r = Pv1,r & Pv2,r & Pv3,r & ... & Pvn,r
We define the Hawaiian density of x to be the weighted summation of the tuple P-tree root counts of
the Hawaiian rings R(x, r-1, r), which is calculated as follows:

$D_x = \sum_{r=1}^{m} w_r \, \big( \mathrm{RootCount}(P_{x,r}) - \mathrm{RootCount}(P_{x,r-1}) \big)$    (eq. 1)

where $D_x$ denotes the Hawaiian density of data point x, with respect to the weights $w_r$.
There are many weighting functions which can be used to adjust this density, e.g., Gaussian weighting,
Kriging and radial basis function weighting, Podium weighting [12], etc. In this paper, we use
$w_r = 2^{r \cdot d}$, where d is the dimension of X. The selection of this weight is based on the rationale that
the density of each point should be the same for uniformly distributed data.
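The weighted ring summation of eq. 1 can be mimicked without P-trees by scanning the data directly: each point y falls into the ring determined by the smallest number of leading bits it shares with x over all attributes, and contributes the corresponding weight. The sketch below is ours; it assumes unsigned m-bit attributes, uses the weight wr = 2^(r*d) as reconstructed above, and replaces the paper's P-tree root counts with a brute-force scan, so it shows the definition rather than the fast algorithm.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Number of leading bits (out of m) in which two m-bit attribute values agree.
int leadingMatch(uint32_t a, uint32_t b, int m) {
    for (int r = 0; r < m; ++r) {
        int bit = m - 1 - r;                       // scan from the most significant bit
        if (((a >> bit) & 1u) != ((b >> bit) & 1u)) return r;
    }
    return m;                                      // all m bits agree
}

// Hawaiian-ring density of point x (eq. 1), computed by direct scanning:
// each other point falls into the ring given by the smallest per-attribute
// leading-bit match r and contributes the weight w_r = 2^(r*d).
double hawaiianDensity(const std::vector<std::vector<uint32_t>>& data, int x, int m) {
    const int d = static_cast<int>(data[x].size());
    const int n = static_cast<int>(data.size());
    double density = 0.0;
    for (int y = 0; y < n; ++y) {
        if (y == x) continue;
        int r = m;
        for (int i = 0; i < d; ++i)
            r = std::min(r, leadingMatch(data[x][i], data[y][i], m));
        density += std::pow(2.0, static_cast<double>(r) * d);   // w_r = 2^(r*d)
    }
    return density;
}

int main() {
    // Four hypothetical 2-attribute points with 4-bit attribute values.
    std::vector<std::vector<uint32_t>> data = {
        {12, 10}, {13, 10}, {12, 11}, {3, 4}};
    for (int i = 0; i < static_cast<int>(data.size()); ++i)
        std::printf("D[%d] = %g\n", i, hawaiianDensity(data, i, 4));
    return 0;
}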
3.2. Finding Density Attractors Using the Look Around Pruning Technique
Once the density of each data point is defined, the next step is to identify density attractors, i.e., local
maxima of the overall density function. Having a high density does not necessarily make a point a
density attractor – it must have the highest density among its neighbors. Instead of using formal hill
climbing as is done in DENCLUE [3], we adopt a simpler heuristic look-around technique. We first
define a neighborhood as a ball of some chosen radius; this radius can range from 0 to the
maximal bit length of the attributes. After finding the density function Dx of a point x, we compare that
density with that of any density attractor within its neighborhood. If it is greater than the density of all
its neighbors, the point is labeled as a new density attractor, and any old density attractor in that
neighborhood is de-labeled.
After all the data points have gone through the process above, we have a set of intermediate density
attractors. We go over these and compare each attractor's density with that of its nearest neighbor. If the
former is less than the latter, the attractor is de-labeled; otherwise, it is a final density attractor. The
look-around pruning algorithm is robust, which means the clustering results are independent of the data
point order.
The algorithm is summarized in Figure 4.
INPUT: P-tree set Pi,j for bit j of attribute i; the neighborhood radius
OUTPUT: Density attractors
// Pi,j – P-tree for attribute i and bit j; Pi – neighborhood P-tree; Pv[h] – value P-tree within radius h
// N – number of data points; n – number of attributes; P – neighborhood threshold P-tree
// m – maximal bit length of attributes; flag[i] – label array of cluster center of data point i
// wi[h] – density weight array of Hawaiian ring (i, h, h+1); DENS[i] – density array
BEGIN
FOR i = 1 TO N DO
  flag[i] ← 0
  Pi ← pure-1 P-tree, DENS[i] ← 0, PrevRC ← 0
  FOR h = 1 TO m - 1 DO
Figure 4. The PHDClustering Algorithm
For example, suppose the Qid of data point x is 0.3.2 and Dx = 250. The tuple P-tree for x within its
neighborhood is shown in Figure 5. We need to compare Dx with its neighbors' densities. From this
P-tree, x has four neighbors, with Qids 0.0.2, 0.3.1, 2.3.0, and 2.3.3. If the densities of these points are
respectively 300, 0, 220, and 0, then 0.0.2 and 2.3.0 are labeled as density attractors. Comparing Dx
with the maximal density of 0.0.2 and 2.3.0, we have 250 < max(300, 220), therefore x is
not a density attractor. If instead Dx = 350, then 350 > max(300, 220), and x is labeled as the new density
attractor; the old density attractors 0.0.2 and 2.3.0 are de-labeled and will not be considered later.
[Figure 5. Tuple P-tree for x within its neighborhood Hawaiian ring]
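A simplified sketch of the look-around pass is given below. It is ours, not the paper's algorithm: neighborhoods are represented as explicit adjacency lists instead of neighborhood P-trees, and the final pass only checks attractor neighbors. Run on the densities of the example above (250 for x and 300, 0, 220, 0 for its four neighbors), the points with densities 300 and 220 remain labeled as attractors and x does not.

#include <cstdio>
#include <vector>

// Look-around pruning sketch: a point is labeled an attractor when its density
// exceeds that of every already-labeled attractor in its neighborhood; those
// dominated attractors are then de-labeled.
std::vector<bool> lookAroundPruning(const std::vector<double>& density,
                                    const std::vector<std::vector<int>>& neighbors) {
    const int n = static_cast<int>(density.size());
    std::vector<bool> attractor(n, false);
    for (int x = 0; x < n; ++x) {
        bool isMax = true;
        for (int y : neighbors[x])
            if (attractor[y] && density[y] >= density[x]) { isMax = false; break; }
        if (!isMax) continue;
        for (int y : neighbors[x])
            if (attractor[y]) attractor[y] = false;   // de-label dominated attractors
        attractor[x] = true;
    }
    // Final pass: de-label an intermediate attractor if a denser labeled
    // neighbor exists (a simplified stand-in for the paper's final check).
    for (int x = 0; x < n; ++x) {
        if (!attractor[x]) continue;
        for (int y : neighbors[x])
            if (attractor[y] && density[y] > density[x]) { attractor[x] = false; break; }
    }
    return attractor;
}

int main() {
    // Point 0 plays the role of x (density 250); points 1-4 are its neighbors
    // with Qids 0.0.2, 0.3.1, 2.3.0, 2.3.3 and densities 300, 0, 220, 0.
    std::vector<double> density = {250, 300, 0, 220, 0};
    std::vector<std::vector<int>> neighbors = {
        {1, 2, 3, 4}, {0, 2}, {0, 1}, {0, 4}, {0, 3}};
    std::vector<bool> label = lookAroundPruning(density, neighbors);
    for (int i = 0; i < static_cast<int>(label.size()); ++i)
        std::printf("point %d attractor: %s\n", i, label[i] ? "yes" : "no");
    return 0;
}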
3.3. Time Complexity Analysis
Let $f$ denote the fan-out of a P-tree, $\nu$ the total number of nodes of a P-tree, and let $n$ be the number of
data points it represents. We first present some lemmas on P-trees, and then derive the average run-time
complexity O(n√n).
Lemma 3.3.1. The number of levels of a P-tree is $k = \log_f n$.
Proof sketch: The numbers of nodes at the successive levels of a P-tree are $1, f, f^2, f^3, \ldots, f^k$. The
leaf level $k$ is $n$ bits long, i.e., $f^k = n$. Thus $k = \log_f n$.
Lemma 3.3.2. The maximum number of nodes in a P-tree, in the worst case, is $\nu = (n - 1)/(f - 1)$.
Proof sketch: Without compression, the total number of nodes is
$\nu = 1 + f + f^2 + f^3 + \cdots + f^{k-1} = (f^k - 1)/(f - 1)$. By Lemma 3.3.1, $f^k = n$, so
$\nu = (n - 1)/(f - 1)$.
Lemma 3.3.3. The total number of nodes in a P-tree with compression ratio $\rho$ ($\rho < 1$) is
$\nu = 1 + (\rho^k n - \rho)/(\rho f - 1)$, where $k$ is the number of levels of the P-tree.
Proof sketch: The number of nodes at level $i$ of a P-tree with compression ratio $\rho$ is $\rho^i f^{i-1}$, where $i$
ranges from 1 to $k$. For example, at level 2 there are $(\rho f)\rho = \rho^2 f$ nodes. The total number of nodes
with compression ratio $\rho$ is therefore
$\nu = 1 + \rho + \rho^2 f + \rho^3 f^2 + \cdots + \rho^{k-1} f^{k-2}$
$\;\; = 1 + \rho \, (\rho^{k-1} f^{k-1} - 1)/(\rho f - 1)$
$\;\; = 1 + (\rho^k f^k - \rho)/(\rho f - 1)$
$\;\; = 1 + (\rho^k n - \rho)/(\rho f - 1)$
Corollary 3.3.1. When $\rho = 0$, the total number of nodes in the P-tree is 1; when $\rho = 1$, the total number
of nodes in the P-tree is $(n - f)/(f - 1) + 1$. We also notice that when $\rho = 0.5$ and $f = 4$, the total number of
nodes in the P-tree with compression ratio $\rho$ is
$\nu = 1 + (4^k/2^k - 2 \cdot 4)/(4 - 2)$
$\;\; = 1 + (4^k/2^k - 8)/2$
$\;\; = 1 + (\sqrt{n} - 8)/2$
Theorem 3.3.1. The average run-time complexity of PHDClustering with compression ratio 0.5 and
fan-out 4 is $O(d \cdot n \cdot \sqrt{n})$, where $d$ is the number of dimensions.
Proof sketch: The P-tree ANDing operation is executed node by node to calculate the density. Each node
ANDing is counted as one operation. For $n$ data points in $d$ dimensions, there are $d \cdot m$ basic P-trees,
where $m$ is the maximal bit size of each dimension. The total run time to obtain the density P-trees is
$d \cdot m \cdot n \cdot \nu$, where $\nu$ is the total number of nodes of a P-tree.
For data sets with fan-out $f = 4$ and average compression ratio $\rho = 0.5$, according to Corollary 3.3.1 the
total number of nodes of a P-tree is $\nu = 1 + (\sqrt{n} - 8)/2$. Therefore, the total time to obtain the density
for $n$ data points in $d$ dimensions is $d \cdot m \cdot n \cdot (1 + (\sqrt{n} - 8)/2)$.
Since we adopt the look-around pruning technique and only compare against previously found density
attractors, the run-time cost of finding the density attractors can be neglected. Thus, the average time
complexity of density-based clustering using P-trees with compression ratio 0.5 and fan-out 4 is
$O(d \cdot n \cdot \sqrt{n})$.
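As a quick sanity check on these bounds (our own arithmetic, using a hypothetical data size of $n = 2^{20}$ points), substituting into Corollary 3.3.1 and the proof above gives

$\nu = 1 + \frac{\sqrt{n} - 8}{2} = 1 + \frac{1024 - 8}{2} = 509, \qquad d \cdot m \cdot n \cdot \nu = d \cdot m \cdot 2^{20} \cdot 509,$

so the per-P-tree node count grows only with $\sqrt{n}$, which is what yields the overall $O(dn\sqrt{n})$ behavior.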
4. Experiment Evaluation
Our experiments were implemented in C++ on a 1 GHz Pentium PC with 1 GB of main memory, running
Debian Linux 4.0. The test data includes an aerial TIFF image (with Red, Green and Blue band
reflectance values), and the moisture and nitrate maps of the Oaks area in North Dakota. The data is
prepared in five sizes: 128x128, 256x256, 512x512, 1024x1024, and 2048x2048. The data sets are
available at [4]. We evaluate our proposed P-tree Hawaiian ring based density clustering algorithm,
PHDClustering, with respect to scalability, quality, and parameter sensitivity.
In this experiment, we compare our proposed PHDClustering with several density function based
clustering (DFC) methods using different distance metrics, including Manhattan distance based DFC
(DFC-Manhattan), Euclidean distance based DFC (DFC-Euclidean), and Max distance based DFC
(DFC-Max). The experiment was performed on the five different sizes of data sets. The average CPU
run time over 30 runs is shown in Figure 6.
[Figure 6. Running Time Comparison of PHDCluster with Other Density Clustering Methods Using Different Metrics. Series: DFC-Manhattan, DFC-Max, DFC-Euclidean, PHDCluster; x-axis: data size in tuples (128x128 to 2048x2048); y-axis: average run time in seconds (0 to 1800).]
From Figure 6, we see that DFC-Manhattan is faster than DFC-Max and DFC-Euclidean, but the
PHDCluster method is much faster than all of them on these five data sets. Especially as the data set size
increases, the run time of the PHDCluster method increases at a much lower rate than that of the other
methods. The experimental results show that the PHDCluster method is more scalable for large spatial
data sets.
5. Conclusion
In this paper, we propose an efficient P-tree Hawaiian density based clustering algorithm (PHDCluster),
with average time complexity O(dn√n), for spatial data sets. PHDCluster exploits a new
distance metric, the Hawaiian metric, to calculate density functions using Peano Count Trees (P-trees). The
Hawaiian metric is natural for spatial data, and the calculation of Hawaiian metrics using P-trees is
extremely fast. Our proposed method has cardinality scalability comparable to other density methods
for small and medium-sized data, but is shown to be superior regarding dimensional scalability.
Our method is particularly useful for data streams. In data streams, such as large sets of transactions,
remotely sensed images, multimedia video, etc., new data arrives continually. Therefore, both
speed and accuracy are critical issues. Achieving high speed using P-trees and high accuracy using the
weighted Hawaiian metrics provides a density based clustering method that is well suited to the
clustering of stream data. Besides spatial data, our method also has potential applications in other areas,
such as DNA microarray and medical image analysis.
Acknowledgement
This paper would not have been possible without the help of all the DataSurgies at NDSU and friends.
Special thanks to GSA Grant ACT# K96130308 for the financial support.
References
1. William Perrizo, Peano Count Tree Technology, Technical Report NDSU-CSOR-TR-01-1, 2001.
2. Maleq Khan, Qin Ding, William Perrizo, k-Nearest Neighbor Classification on Spatial Data Streams Using P-Trees, PAKDD 2002, Springer-Verlag, LNAI 2336, pp. 517-528, 2002.
3. Alexander Hinneburg, Daniel A. Keim, An Efficient Approach to Clustering in Large Multimedia Databases with Noise, Proc. 4th Int. Conf. on Knowledge Discovery and Data Mining, AAAI Press, 1998.
4. TIFF image data sets. Available at http://midas-10cs.ndsu.nodak.edu/data/images/.
5. M. Ester, H.-P. Kriegel, J. Sander, X. Xu, Density-Connected Sets and their Application for Trend Detection in Spatial Databases, Proc. 3rd Int. Conf. on Knowledge Discovery and Data Mining, AAAI Press, 1997.
6. M. Ester, H.-P. Kriegel, J. Sander, X. Xu, A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise, Proc. 2nd ACM SIGKDD, pp. 226-231, Portland, Oregon, 1996.
7. J. Sander, M. Ester, H.-P. Kriegel, X. Xu, Density-Based Clustering in Spatial Databases: The Algorithm GDBSCAN and Its Applications, Data Mining and Knowledge Discovery, 2(2), pp. 169-194, 1998.
8. M. Ankerst, M. Breunig, H.-P. Kriegel, J. Sander, OPTICS: Ordering Points To Identify the Clustering Structure, Proc. ACM SIGMOD Conference, pp. 49-60, Philadelphia, PA, 1999.
9. X. Xu, M. Ester, H.-P. Kriegel, J. Sander, A Distribution-Based Clustering Algorithm for Mining in Large Spatial Databases, Proc. 14th ICDE, pp. 324-331, Orlando, FL, 1998.
10. J. Han, M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann Publishers, 2001.
11. J. Han, M. Kamber, A. K. H. Tung, Spatial Clustering Methods in Data Mining: A Survey, in H. Miller and J. Han (Eds.), Geographic Data Mining and Knowledge Discovery, Taylor and Francis, 2001.
12. W. Perrizo, Q. Ding, A. Denton, K. Scott, Q. Ding, M. Khan, PINE – Podium Incremental Neighbor Evaluator for Classifying Spatial Data, SAC 2003, Melbourne, Florida, USA, 2003.
13. S. Arya, D. M. Mount, O. Narayan, Accounting for Boundary Effects in Nearest-Neighbor Searching, Discrete and Computational Geometry, pp. 155-176, 1996.
14. H. Samet, The Design and Analysis of Spatial Data Structures, Addison-Wesley, Reading, MA, 1989.
15. T. Sellis, N. Roussopoulos, C. Faloutsos, Multidimensional Access Methods: Trees Have Grown Everywhere, Proc. 23rd International Conference on Very Large Data Bases (VLDB), pp. 13-15, 1997.