
Nearest-neighbor chain algorithm



In the theory of cluster analysis, the nearest-neighbor chain algorithm is a method for performing several types of agglomerative hierarchical clustering, using an amount of memory that is linear in the number of points to be clustered and an amount of time linear in the number of distinct distances between pairs of points. The main idea of the algorithm is to find pairs of clusters to merge by following paths in the nearest-neighbor graph of the clusters until the paths terminate in pairs of mutual nearest neighbors. The algorithm was developed and implemented in 1982 by J. P. Benzécri and J. Juan, based on earlier methods that constructed hierarchical clusterings from mutual nearest-neighbor pairs without taking advantage of nearest-neighbor chains.
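The chain-following idea can be sketched in a few dozen lines of Python. This is an illustrative implementation, not the Benzécri–Juan original: it uses complete linkage (a reducible linkage, which the algorithm requires for correctness) with the Lance–Williams update, and for simplicity it stores all pairwise distances in a dictionary, trading away the linear-memory property described above. The function name `nn_chain_cluster` and the merge-list output format are choices made for this example.

```python
from math import dist

def nn_chain_cluster(points):
    """Agglomerative hierarchical clustering via the nearest-neighbor
    chain algorithm, using complete linkage.

    Returns a list of merges (a, b, distance). The input points are
    clusters 0..n-1; each merge creates a new cluster labelled n, n+1, ...
    """
    n = len(points)
    # Pairwise distances between currently active clusters,
    # keyed by the ordered pair (small label, large label).
    d = {}
    def get(a, b):
        return d[(a, b) if a < b else (b, a)]
    def put(a, b, v):
        d[(a, b) if a < b else (b, a)] = v
    for i in range(n):
        for j in range(i + 1, n):
            put(i, j, dist(points[i], points[j]))

    active = set(range(n))
    chain = []            # stack of clusters; each one's nearest neighbor follows it
    merges = []
    next_label = n
    while len(active) > 1:
        if not chain:
            chain.append(min(active))   # start a fresh chain anywhere
        a = chain[-1]
        # Nearest active neighbor of the chain tip, breaking ties in favor
        # of the previous chain element so the chain must terminate.
        b = min((c for c in active if c != a), key=lambda c: get(a, c))
        if len(chain) >= 2 and get(a, chain[-2]) <= get(a, b):
            b = chain[-2]
        if len(chain) >= 2 and b == chain[-2]:
            # a and b are mutual nearest neighbors: merge them.
            chain.pop(); chain.pop()
            merges.append((a, b, get(a, b)))
            active -= {a, b}
            # Complete linkage (Lance-Williams): d(a∪b, k) = max(d(a,k), d(b,k)).
            for k in active:
                put(next_label, k, max(get(a, k), get(b, k)))
            active.add(next_label)
            next_label += 1
        else:
            chain.append(b)
    return merges
```

Because complete linkage is reducible, merging a mutual nearest-neighbor pair never invalidates the rest of the chain, so the partial chain can be kept across merges rather than rebuilt; this is what keeps the total work bounded by the number of pairwise distances examined.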