
Nearest-neighbor chain algorithm

In the theory of cluster analysis, the nearest-neighbor chain algorithm is a method that can be used to perform several types of agglomerative hierarchical clustering, using an amount of memory that is linear in the number of points to be clustered and an amount of time linear in the number of distinct distances between pairs of points. The main idea of the algorithm is to find pairs of clusters to merge by following paths in the nearest neighbor graph of the clusters until the paths terminate in pairs of mutual nearest neighbors. The algorithm was developed and implemented in 1982 by J. P. Benzécri and J. Juan, based on earlier methods that constructed hierarchical clusterings using mutual nearest neighbor pairs without taking advantage of nearest neighbor chains.
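The path-following idea above can be sketched in code. The following is a minimal illustration, not any published implementation: it uses single linkage (one of the "reducible" linkages the chain method requires for correctness) with a plain distance matrix, and the function name `nn_chain` and the merge-record format are choices made for this example.

```python
import math

def nn_chain(points):
    """Hierarchical clustering via the nearest-neighbor chain algorithm.

    A minimal sketch using single linkage, which is reducible and so
    safe for the chain method. Returns merges as (id_a, id_b, distance)
    triples; the merged cluster keeps id_a.
    """
    n = len(points)
    # pairwise distances between current clusters; inf on the diagonal
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d[i][j] = d[j][i] = math.dist(points[i], points[j])
    active = set(range(n))
    merges, chain = [], []
    while len(active) > 1:
        if not chain:
            chain = [next(iter(active))]
        # follow nearest neighbors until the chain doubles back, meaning
        # its last two clusters are mutual nearest neighbors
        while True:
            a = chain[-1]
            b = min((c for c in active if c != a), key=lambda c: d[a][c])
            if len(chain) >= 2 and b == chain[-2]:
                break
            chain.append(b)
        a, b = chain.pop(), chain.pop()   # mutual nearest-neighbor pair
        merges.append((b, a, d[a][b]))
        # Lance-Williams update for single linkage: the merged cluster's
        # distance to any other cluster is the minimum over the two parts
        for c in active:
            if c != a and c != b:
                d[b][c] = d[c][b] = min(d[a][c], d[b][c])
        active.remove(a)
        # reducibility guarantees the remaining chain is still valid,
        # so the next iteration continues from its tail
    return merges
```

On two well-separated pairs, e.g. `nn_chain([[0.0, 0.0], [0.0, 1.0], [5.0, 0.0], [5.0, 1.0]])`, the first two merges join each close pair at distance 1.0 and the final merge joins the two resulting clusters. The memory use is dominated by the distance matrix here; the linear-memory bound described above is achieved by variants that recompute inter-cluster distances on demand instead of storing the matrix.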