Finding density-based subspace clusters in graphs with feature

... proposed model, we present a detailed discussion of our model’s parameters, and we show how our approach generalizes well-known clustering principles. Furthermore, we prove the correctness of our fixed-point iteration technique, its convergence, and its runtime complexity. 2 Related work Different cl ...
Consensus Guided Unsupervised Feature Selection

... widely discussed in the machine learning and data mining communities (Guyon and Elisseeff 2003; Li and Fu 2015). Clearly, features after selection are easily interpreted, need shorter training time, and, most importantly, overcome the overfitting problem. A straightforward way is to enumerate all different ...
Different Cube Computation Approaches: Survey Paper

Enhance Rule Based Detection for Software Fault Prone

... prediction such as size and complexity metrics, multivariate analysis, and multicollinearity using Bayesian belief networks [8, 9]. Naïve Bayes is widely used for building classifiers due to its simplicity and the optimal accuracy it delivers based on Bayes’ theorem. When developing a defect predicto ...
Automated Semantic Knowledge Acquisition from Sensor Data

... requires new methods to structure and represent the information and to make the data accessible and processable for the applications and services that use these data. Semantic technologies have been used in recent years as one of the key solutions to provide formalised representations of the ...
Software Defect Prediction Using Regression via Classification

... the approaches on the Pekka dataset. We first notice that RvC actually manages to achieve a better regression error than the standard regression approaches. Indeed, within the top three performers we find two RvC approaches (SMO, RIPPER) and only one regression approach (SMOreg). The best average perform ...
On Clustering Validation Techniques

Lecture Notes - Computer Science Department

... Parametric methods assume a specific probability distribution for the attributes, use the data to estimate its parameters, and then compute the probability of the values. Those that have a probability lower than a specified threshold are marked as outliers. Another possibility is to use the ...
Machine Learning in Materials Science: Recent Progress and

... data. However, the nature of the training data, and hence what can be accomplished with the data, differs between the two. In supervised learning, the training data consists of a set of input values (e.g., the structures of different materials) as well as a corresponding set of output values (e.g., m ...
Privacy Preserving Distributed DBSCAN Clustering

GR2411971203

cst new slicing techniques to improve classification accuracy

... λ = {Cs | Cs is a set of sliced cases}, or λ = {all cases that contain one or more important features}; I = {if1, if2, …, ifn}, where n is the number of important features in I; I ⊆ Ci ⊆ S and I ⊆ Cs ⊆ λ. ...
Multivariate discretization by recursive supervised

... explanatory attributes and fails to discover conjointly defined patterns. This fact is usually illustrated by the XOR problem (cf. Figure 1): the contributions of the axes have to be considered conjointly. Many authors have thus introduced a fourth category in the preceding taxonomy: multivariate v ...
Discovering Characteristic Actions from On

Multivariate Discretization by Recursive Supervised Bipartition of

dm_clustering1

A New Intrusion Detection System using Support Vector Machines and Hierarchical Clustering

... attacks make it difficult for legitimate users to access various network services by purposely occupying or sabotaging network resources and services. This can be done by sending large amounts of network traffic, exploiting well-known faults in networking services, overloading network hosts, etc. Net ...
mahout-intro

... • “Machine Learning is programming computers to optimize a performance criterion using example data or past experience” – Intro. to Machine Learning by E. Alpaydin ...
Mining Partial Periodicity in Large Time Series Databases using

... a week to be on a Monday, the first work day, whereas others might divide a calendar between Sunday and Saturday, with the work days in between. In this paper, we implement an Apriori-based approach to mining segment-wise partial periodicity in a discretized time series, using a novel data architect ...
Applying Data Mining to Demand Forecasting and Product Allocations

Introduction to Similarity Assessment and Clustering

... We assume that the k-means initialization assigns the green, blue, and brown points to a single cluster; after centroids are computed and objects are reassigned, it can easily be seen that the brown cluster becomes empty. Han, Kamber, Eick: Introduction to Clustering and Similarity Assessment ...
View PDF - International Journal of Computer Science and Mobile

... while the data clusters are kept distinct from each other. There are a number of techniques developed for optimization, inspired by the behavior of natural systems (Pham & Karaboga, 2000). Experimental results showed that swarm intelligence can be employed as a natural optimization technique for o ...
a comprehensive study of major techniques of multi level frequent

... collection. However, searching for useful and interesting patterns and rules was still an open problem [8]. Some of the basic mining techniques: Apriori, FP-Growth, etc. ...
CANCER MICROARRAY DATA FEATURE SELECTION USING

... Cancer investigations in microarray data play a major role in cancer analysis and treatment. Cancer microarray data consist of complex gene expression patterns of cancer. In this article, a Multi-Objective Binary Particle Swarm Optimization (MOBPSO) algorithm is proposed for analyzing cancer gen ...
A Survey on Association Rule Mining

... support count of each individual item is accumulated during the first pass. Suppose the minimal support threshold is 30%; the large 1-items are generated as shown in Table 1(c). Based on that, items I4 and I6 are removed. From the frequent 1-items, candidate 2-items are generated as mentioned in Table 1(d) ...

K-means clustering

k-means clustering is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining. k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, which serves as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells.

The problem is computationally difficult (NP-hard); however, efficient heuristic algorithms are commonly employed and converge quickly to a local optimum. These are usually similar to the expectation-maximization algorithm for mixtures of Gaussian distributions, in that both use an iterative refinement approach. Additionally, both use cluster centers to model the data; however, k-means clustering tends to find clusters of comparable spatial extent, while the expectation-maximization mechanism allows clusters to have different shapes.

The algorithm has a loose relationship to the k-nearest neighbor classifier, a popular machine learning technique for classification that is often confused with k-means because of the k in the name. One can apply the 1-nearest neighbor classifier to the cluster centers obtained by k-means to classify new data into the existing clusters; this is known as the nearest centroid classifier or Rocchio algorithm.
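To make the iterative refinement concrete, here is a minimal sketch of the standard heuristic (Lloyd's algorithm) in Python with NumPy, followed by the nearest-centroid classification of a new point mentioned above. The function names, the synthetic two-blob data, the seed, and the iteration cap are illustrative assumptions, not taken from any of the documents listed above.

import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Partition the rows of X into k clusters by iterative refinement."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct observations at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point joins the cluster with the nearest mean.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its members; an
        # empty cluster keeps its old centroid rather than producing NaN.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # converged to a local optimum
        centroids = new_centroids
    # Final assignment against the converged centroids.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return centroids, dists.argmin(axis=1)

def nearest_centroid(x, centroids):
    """Classify a new point into an existing cluster (nearest centroid / Rocchio)."""
    return int(np.linalg.norm(centroids - x, axis=1).argmin())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two synthetic Gaussian blobs in the plane (assumed toy data).
    X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                   rng.normal(3.0, 0.5, (50, 2))])
    centroids, labels = kmeans(X, k=2)
    print("centroids:", centroids)
    print("new point joins cluster", nearest_centroid(np.array([2.9, 3.1]), centroids))

Because the iteration only reaches a local optimum, practical implementations typically run several random restarts or use careful seeding; the empty-cluster guard in the update step addresses the failure mode described in the clustering lecture excerpt higher up on this page.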