
Trajectory Clustering: A Partition-and-Group Framework
... In our trajectory partitioning problem, a hypothesis corresponds to a specific set of trajectory partitions. This formulation is quite natural because we want to find the optimal partitioning of a trajectory. As a result, finding the optimal partitioning translates to finding the best hypothesis usi ...
... also has a stabilizing effect on the variation of the recognition ratio; and as for recognition time, even when multiple KNN classifiers are run in parallel, a carefully devised distance calculation keeps the time from increasing much compared with a single KNN. Alizadeh et al. in [20] proposed a ne ...
Agglomerative Independent Variable Group Analysis
... in understanding the structure of the data set as well as in focusing further modelling efforts on smaller and more meaningful subproblems. Grouping or clustering variables based on their mutual dependence was the objective of the Independent Variable Group Analysis (IVGA) method [13,1]. In this paper ...
Multiple Features Subset Selection using Meta
... this chemical. The pheromone decays over time, leaving much less pheromone on less popular paths. Given that over time the shortest route will have the highest rate of ant traversal, this path will be reinforced and the others will diminish until all ants follow the same, shortest path. The overall ...
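The pheromone dynamics described in that snippet can be illustrated with a toy simulation. This is a hypothetical sketch, not code from the cited work: two routes of different length, roulette-wheel route choice weighted by pheromone, per-step evaporation, and a deposit inversely proportional to route length.

```python
import random

# Toy illustration (hypothetical): two routes between nest and food.
# Ants pick a route with probability proportional to its pheromone;
# shorter routes receive a larger deposit per ant, and all pheromone
# evaporates each step, so the short route eventually dominates.
random.seed(0)

lengths = {"short": 1.0, "long": 3.0}
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.1   # fraction of pheromone lost per iteration
N_ANTS = 20

for _ in range(100):
    deposits = {r: 0.0 for r in lengths}
    for _ in range(N_ANTS):
        # roulette-wheel choice weighted by current pheromone levels
        route = random.choices(list(lengths),
                               weights=[pheromone[r] for r in lengths])[0]
        deposits[route] += 1.0 / lengths[route]   # shorter route => larger deposit
    for r in lengths:
        pheromone[r] = (1 - EVAPORATION) * pheromone[r] + deposits[r]

# After many iterations, nearly all pheromone lies on the short route.
share = pheromone["short"] / sum(pheromone.values())
print(round(share, 2))
```

The positive feedback loop is visible in the update rule: more pheromone attracts more ants, which deposit more pheromone, while evaporation erases the trail on the route the ants abandon.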
Approximation Algorithms for Clustering Uncertain Data
... instead more involved solutions are necessary, generating new approximation schemes for uncertain data. Clustering Uncertain Data and Soft Clustering. ‘Soft clustering’ (sometimes also known as probabilistic clustering) is a relaxation of clustering which asks for a set of cluster centers and a frac ...
Scale-free Clustering - UEF Electronic Publications
... even more difficult task than selecting the similarity measure. Many methods are available, each with different characteristics. The clustering method can be either hard or fuzzy, depending on whether a data point is allowed to belong to more than one cluster with a definite degree of memb ...
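The hard-versus-fuzzy distinction can be made concrete with a small sketch. This is hypothetical code, using the standard fuzzy c-means membership formula (fuzzifier m = 2) rather than anything from the cited publication: hard clustering returns a single cluster index, while fuzzy clustering returns a degree of membership in every cluster.

```python
# Hypothetical sketch of hard vs. fuzzy assignment for 1-D points.

def hard_assign(point, centers):
    """Index of the single nearest center (hard clustering)."""
    return min(range(len(centers)), key=lambda i: abs(point - centers[i]))

def fuzzy_memberships(point, centers, m=2.0):
    """Degrees of membership in each cluster (fuzzy clustering); sums to 1.
    Standard fuzzy c-means formula: membership is proportional to
    d ** (-2 / (m - 1)) for distance d to each center."""
    dists = [abs(point - c) for c in centers]
    if any(d == 0 for d in dists):               # point coincides with a center
        return [1.0 if d == 0 else 0.0 for d in dists]
    inv = [d ** (-2.0 / (m - 1.0)) for d in dists]
    total = sum(inv)
    return [x / total for x in inv]

centers = [0.0, 10.0]
print(hard_assign(3.0, centers))        # nearest center wins outright
print(fuzzy_memberships(3.0, centers))  # partial membership in both clusters
```

With m = 2 the membership in a cluster is inversely proportional to the squared distance, so the point at 3.0 belongs mostly, but not exclusively, to the cluster centered at 0.0.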
Nearest-neighbor chain algorithm

In the theory of cluster analysis, the nearest-neighbor chain algorithm is a method that can be used to perform several types of agglomerative hierarchical clustering, using an amount of memory that is linear in the number of points to be clustered and an amount of time linear in the number of distinct distances between pairs of points. The main idea of the algorithm is to find pairs of clusters to merge by following paths in the nearest neighbor graph of the clusters until the paths terminate in pairs of mutual nearest neighbors. The algorithm was developed and implemented in 1982 by J. P. Benzécri and J. Juan, based on earlier methods that constructed hierarchical clusterings using mutual nearest neighbor pairs without taking advantage of nearest neighbor chains.
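As a rough illustration of the idea, here is a minimal sketch, assuming 1-D points and complete linkage (one of the reducible linkages the algorithm supports); it is not an optimized implementation. The chain is grown by repeatedly stepping to the nearest neighbor of its tip until two clusters are each other's nearest neighbor, at which point they are merged.

```python
# Minimal nearest-neighbor chain sketch for 1-D points, complete linkage.

def complete_linkage(a, b, pts):
    # largest pairwise distance between the two clusters
    return max(abs(pts[i] - pts[j]) for i in a for j in b)

def nn_chain_clustering(pts):
    active = {frozenset([i]) for i in range(len(pts))}  # current clusters
    chain, merges = [], []
    while len(active) > 1:
        if not chain:
            chain.append(next(iter(active)))   # start a new chain arbitrarily
        tip = chain[-1]
        # nearest active neighbor of the chain's tip
        nn = min((c for c in active if c != tip),
                 key=lambda c: complete_linkage(tip, c, pts))
        if len(chain) >= 2 and nn == chain[-2]:
            # tip and nn are mutual nearest neighbors: merge them
            chain.pop(); chain.pop()
            active.discard(tip); active.discard(nn)
            active.add(tip | nn)
            merges.append((sorted(tip), sorted(nn)))
        else:
            chain.append(nn)                   # extend the chain
    return merges

merges = nn_chain_clustering([0.0, 1.0, 5.0, 6.0, 20.0])
print(merges)
```

Because complete linkage is reducible, distances along the chain strictly decrease, so every chain ends at a pair of mutual nearest neighbors, and merging that pair never invalidates the rest of the chain; this is what lets the algorithm reuse the chain across merges and keep its memory linear.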